Cassava Leaf Disease Classification
Road from 1058th public to 158th private solution. 1st medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! Yes, PyTorch. I started to learn Computer Vision and neural networks only 3 months ago. So, I had to choose one framework first. And it was PyTorch 😄</p> <p>First of all, I congratulate all the winners and thank the community for the many ideas and tricks that I've learnt from you during this competition. 😊</p> <p>That was my 1st Kaggle medal, and I am glad that I received it in my 1st Computer Vision competition.</p> <p>I didn't try some ideas, because I had only the last 3 weeks for participation and limited hardware 😅</p> <h1>Final Solution</h1> <h2>Dataset</h2> <p>Competition data only, just 2020.<br> I also tried to use the 2019 dataset, but it didn't improve my CV.</p> <h2>CV strategy</h2> <p>5-fold stratified.</p> <h2>Normalization</h2> <p>I used ImageNet mean and std. I also tried the mean and std computed from the dataset, but that didn't improve my CV either. It seems the dataset is too small for this to matter when we do transfer learning.</p> <h2>Models (pretrained):</h2> <ol> <li>2*resnext50_32x4d (one with a custom 95th-percentile denoise, one without)</li> <li>efficientnet_b3_ns</li> </ol> <p>For ensembling I just averaged the predictions.</p> <p>I also replaced the fully connected layer of the ResNeXt with a small sequential head.</p> <p>I used mixed precision to save memory; even though it decreased my CV, it allowed me to train with batch size 32 at image size 512.</p> <p>Class weights and upsampling didn't give me anything.</p> <h2>Train Setting:</h2> <p>Image Size: 512<br> Epochs: 10<br> Early Stopping: No (saved the best epoch by loss, not accuracy)<br> Loss Function: Taylor CE Loss with Label Smoothing<br> Optimizer: Adam (didn't have time to try others)<br> LR Scheduler: Cosine Annealing with Warm Restarts (didn't have time to try others)<br> Batch Size: 32<br> Augmentations: Variants of standard (no CutMix, FMix or SnapMix)<br> (A sketch of this setup appears at the end of this post.)</p> <h3>Augmentations 1:</h3> <p><code>RandomResizedCrop, Transpose, HorizontalFlip, VerticalFlip, ShiftScaleRotate, HueSaturationValue, RandomBrightnessContrast, CoarseDropout, Cutout</code></p> <h3>Augmentations 2:</h3> <p><code>RandomResizedCrop, Transpose, HorizontalFlip, VerticalFlip, IAAAdditiveGaussianNoise, GaussNoise, ShiftScaleRotate, CLAHE, IAASharpen, IAAEmboss, RandomBrightnessContrast, HueSaturationValue</code></p> <p>No TTA for the final submissions.</p> <h2>Denoising:</h2> <p>For one of my ensembled ResNets I used denoising, based on the geometric mean of the false confidence of my 2 other ResNets and EffB3.</p> <p>For my final submissions I also chose only models that I trained directly in a Kaggle notebook, because I didn't know whether it was allowed to upload pretrained models with custom weights for inference. So, not being a risky person, I used only models from my notebook output.</p> <p>Single-model CVs were below 0.9 (except the denoised model, but that's not comparable).<br> The public score of my final ensemble was 0.9, and in private I got 0.899.</p> <p>So, I moved from 1094th place to 170th place and got my 1st medal.</p>
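<p>To make the training setup above concrete, here is a minimal sketch in PyTorch, assuming a tiny stand-in backbone instead of the actual pretrained ResNeXt, and standard cross entropy with label smoothing standing in for the Taylor CE loss:</p> <pre><code>import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Tiny stand-in backbone + sequential head (the real model was a pretrained
# resnext50_32x4d with its fully connected layer replaced).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 5),  # 5 cassava classes
)

# Dummy data standing in for the 512x512 crops; batch size 32 as in the write-up.
loader = DataLoader(
    TensorDataset(torch.randn(64, 3, 64, 64), torch.randint(0, 5, (64,))),
    batch_size=32,
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10)
scaler = torch.cuda.amp.GradScaler()  # mixed precision, as in the write-up

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=torch.cuda.is_available()):
            # label_smoothing here is a stand-in for the Taylor CE loss
            loss = nn.functional.cross_entropy(model(images), labels, label_smoothing=0.1)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    scheduler.step()
</code></pre>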
Cassava Leaf Disease Classification
Silver medal solution (Public 10th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Always trust CV, but what about a competition where CV doesn't matter :) <br> pb10-&gt;pvt165 <br> Here is a brief summary of our efforts over the past 2 months:</p> <ol> <li><p>Data Set: <br> All our models were trained on the <a href="https://www.kaggle.com/kingofarmy/cassavapreprocessed" target="_blank">2019+20 merged dataset</a>.</p></li> <li><p>Augmentations: <br> We experimented with FMix, CutMix and SnapMix. FMix and CutMix didn't perform well in our pipeline, so we dropped them. We plugged SnapMix with linearly increasing probabilities into almost all our models except ViT and one variation of EfficientNetB3. It enhanced both our CV and public LB, especially in the ResNeXt variants. (A sketch of this schedule appears at the end of this post.)</p></li> <li><p>Loss Functions: <br> Since batches were already created using Stratified K-Fold, class imbalance was already being dealt with. And since the test set is itself noisy, noise-robust loss functions didn't help us either. So we decided to stick with standard Cross Entropy Loss.</p></li> <li><p>Models: <br> These are the ones that we included in our final ensemble:<br> efficientnet_b3_ns (with and without SnapMix), resnext50_32x4d, vit_base_patch16_384, resnest50d.</p></li> <li><p>Ensembling:<br> <code>As long as greed is stronger than compassion, there will always be suffering.</code></p></li> <li><p>Post Processing: <br> We tried scaling down each class's probabilities by a factor of 0.975. The original idea was to scale the 4th class, since it has the highest representation in the train set and in the 31% test split, but that didn't work. Surprisingly, applying this operation to the 5th class gave an improvement in score, which we later concluded was probably a fluke, so we dropped the idea. We didn't try stacking or anything else, since discussions weren't too positive about them.</p></li> <li><p>What Failed:</p> <ul> <li>Cleaning the dataset.</li> <li>Filtering out the lowest 5th or 10th percentile of images based on the percentage representation of green color.</li> <li>SEResNeXt 50&amp;101 models gave good solo scores but deteriorated ensemble performance miserably.</li> <li>Training on 100% of the data / training only the first 5 out of 10 folds.</li></ul></li> <li><p>Acknowledgements:</p> <ul> <li><a href="https://www.kaggle.com/khyeh0719/pytorch-efficientnet-baseline-train-amp-aug" target="_blank">Pytorch Efficientnet Baseline [Train] AMP+Aug</a></li> <li><a href="https://www.kaggle.com/yasufuminakama/cassava-resnext50-32x4d-starter-training" target="_blank">Cassava / resnext50_32x4d starter [training]</a></li> <li><a href="https://github.com/Shaoli-Huang/SnapMix" target="_blank">SnapMix</a></li> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/212347" target="_blank">Epoch thresholding</a></li></ul></li> <li><p>What I learnt:<br> A lot. This was my first computer vision competition. There's a long, long way to go and a lot to learn. I didn't have my hopes high based on the public LB, but I'm glad we finished in silver. Congratulations to the winners! Keep learning.</p></li> </ol> <p>Thank you :) Congrats to you as well!</p>
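<p>As promised, a hedged sketch of the "linearly increasing probs" schedule for SnapMix; the <code>snapmix</code> routine itself comes from the SnapMix repository linked above and is not reproduced here:</p> <pre><code>import random

def snapmix_prob(epoch, total_epochs, max_prob=0.5):
    """Mixing probability ramps linearly from 0 to max_prob over training."""
    return max_prob * epoch / max(1, total_epochs - 1)

for epoch in range(10):
    p = snapmix_prob(epoch, total_epochs=10)
    use_snapmix = random.random() &lt; p  # decided per batch in the real loop
    # if use_snapmix: images, targets = snapmix(images, targets, model)  # from the repo
</code></pre>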
Cassava Leaf Disease Classification
Solution Architecture Pipeline for Bronze Medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi <a href="https://www.kaggle.com/amiiiney" target="_blank">@amiiiney</a>. For denoising I eliminated cases where the probability for the ground truth was &lt; 0.2 (random chance). For that model run solo, the results were indeed lower than on the original data.<br> My team tried both soft labels and hard labels; for the selected method we used hard labels.</p> <p>Unfortunately, submission selection was a huge drawback for me; having CV not aligned with the leaderboard made me choose the wrong final submissions.</p> <p>It was a very interesting competition filled with a lot of challenges, starting with the noise in the dataset, the possibility of using images from the previous competition's dataset, and the diversity of possible solutions.<br> I would like to share with the community one of the architectures that led me and my team to the bronze medal.<br> <img src="https://i.ibb.co/M8QLDP8/general-Schematic.jpg" alt=""></p> <p>Presented above is the general schema of the architecture, which was designed in 4 steps.</p> <p><strong>First step: Finding the best 3 models for the original provided data</strong></p> <p>We tuned:</p> <ul> <li>architecture type: EfficientNet B2, EfficientNet B3, EfficientNet B4</li> <li>input image size: 480x480, 512x512, 520x520, 600x600</li> <li>number of training epochs: 30, 40</li> <li>learning rate: 0.0001, 0.00015, 0.0002, 0.0003</li> <li>learning rate scheduler: Cosine Annealing w/wo warmup, One Cycle, Reduce on Plateau</li> <li>additional layers on top of the network</li> <li>random resized crop parameters</li> <li>cutout probability, cutout size, cutout number, random brightness, random contrast parameters, hue &amp; saturation</li> <li>label smoothing w/wo</li> <li>cutmix w/wo and the proportions of cutting ([0.3, 0.4, 0.5, 0.6, 0.7])</li> <li>ImageNet vs Noisy Student initial weights</li> <li>optimizer: Adam, RangerLars, AdamW</li> </ul> <p>The best 3 model configurations were:</p> <p>Model 1:</p> <ul> <li>Model architecture: EfficientNet B2 (ImageNet weights)</li> <li>Image size: 600x600</li> <li>Augmentations: Cutout(nr: 32, size: 32, prob: 0.5), Random Brightness, Random Contrast, Transpose + Flip, Cutmix</li> <li>Adam + One Cycle (max value: 0.0003)</li> </ul> <p>Model 2:</p> <ul> <li>Model architecture: EfficientNet B4 (Noisy Student weights)</li> <li>Image size: 512x512</li> <li>Augmentations: Cutout(nr: 32, size: 32, prob: 0.5), Random Brightness, Random Contrast, Transpose + Flip, Cutmix</li> <li>Adam + One Cycle (max value: 0.0002) + Label Smoothing</li> </ul> <p>Model 3:</p> <ul> <li>Model architecture: EfficientNet B3 (ImageNet weights)</li> <li>Image size: 512x512</li> <li>Augmentations: Cutout(nr: 32, size: 32, prob: 0.5), Random Brightness, Random Contrast, Transpose + Flip, Cutmix</li> <li>Adam + One Cycle (max value: 0.0002) + Label Smoothing</li> </ul> <p><strong>Second step: Denoising the original data as much as possible</strong><br> <img src="https://i.ibb.co/nCqjXT4/cleanImg.jpg" alt=""><br> The OOF predictions of the 3 models above, each with 8 TTA for extra robustness, combined with a weighted mean (weighted by OOF score), were used to eliminate the samples that were most likely wrongly labeled. We eliminated samples where the predicted probability for the ground truth was lower than 0.20 (random chance).<br> Using the new clean dataset, we tuned the parameters and found the best possible model on this data.</p> <p><strong>Third step: Adding useful data from last year's similar competition</strong><br> <img src="https://i.ibb.co/gtWBnxv/addData.jpg" alt=""><br> The OOF predictions of the original 3 models, each with 8 TTA for extra robustness, combined with a weighted mean (weighted by OOF score), were used to determine which samples from last year's dataset were most likely correctly labeled. We added samples where the predicted probability for the ground truth was higher than 0.90.<br> Using the new extended dataset, we tuned the parameters and found the best possible model on this data.</p> <p><strong>Fourth step: Designing a meta classifier on top of the 3 models (original, cleaned and extended dataset)</strong><br> <img src="https://i.ibb.co/R7f093v/stacking.jpg" alt=""></p> <p>The input for the final step of the pipeline is the predictions of the 3 models (original, cleaned and extended dataset), each with 4 TTA, so 12 predictions in total.<br> We tested bagging, boosting and classical stacking, and the best result is the one presented below.<br> <img src="https://i.ibb.co/n7NJF2w/meta.jpg" alt=""></p> <p>So, we have a bagging of 100 meta classifiers, where each classifier is a two-layer system with Random Forest, SVC, LGBM and Ridge at the bottom and SVC on the second layer. The main parameters of the bagging system are max_features = 0.3 and max_samples = 0.1.</p>
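<p>A minimal sketch of such a bagged two-layer meta classifier with scikit-learn (the 12-column input of 3 models x 4 TTA is simulated with random data; this assumes LightGBM's sklearn API is available and mirrors the parameters above rather than reproducing our exact code):</p> <pre><code>import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import SVC
from lightgbm import LGBMClassifier

# Bottom layer: RF, SVC, LGBM, Ridge; second layer: SVC.
two_layer = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50)),
        ("svc", SVC()),
        ("lgbm", LGBMClassifier()),
        ("ridge", RidgeClassifier()),
    ],
    final_estimator=SVC(),
)

# Bagging of 100 such meta classifiers, as described above.
meta = BaggingClassifier(two_layer, n_estimators=100, max_features=0.3, max_samples=0.1)

rng = np.random.default_rng(0)
X = rng.random((2000, 12))       # 3 models x 4 TTA predictions per sample
y = rng.integers(0, 5, 2000)     # 5 cassava classes
meta.fit(X, y)
</code></pre>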
<p>The important parameters were tuned with a Bayesian approach.<br> An interesting comparison was made between the bagging and boosting methodologies. As you know, boosting is an algorithm that tends to give better results but is prone to overfitting, while bagging is more robust and generalizes better but can give lower accuracy. Our results confirm this hypothesis: on the validation set, boosting provided much better accuracy, but on the leaderboard, because validation and leaderboard were not well aligned, bagging won due to its extra robustness.</p> <p>In the end I want to congratulate the winners and all the participants!</p>
Cassava Leaf Disease Classification
Top 8% solution (with unselected top 4%)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone,</p> <p>First of all, congrats to all the winners. It is a fantastic achievement. Also thanks to Kaggle and the competition host. This was quite an interesting challenge with multiple aspects. The noise and the class imbalance were hard to tackle.</p> <p>Our team consists of two people (me and my great teammate <a href="https://www.kaggle.com/emirkocak" target="_blank">@emirkocak</a>). We finished the competition in the top 8%. But like many other teams, we had a higher private LB submission which we did not select :) Here, we will summarize both of our solutions:</p> <p><strong>Our final submission (top 8% - private: 0.8984):</strong></p> <p>We started this competition by visualizing and exploring the data. At first glance, it was somewhat confusing because of the wrong labels. With this in mind, we passed to the modeling stage.</p> <p>To set up a baseline score, we used this <a href="https://www.kaggle.com/yasufuminakama/cassava-resnext50-32x4d-starter-training/" target="_blank">notebook</a> (thanks a lot <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> 🙏) and used the following augmentations. Here <code>HueSaturationValue</code> and <code>RandomBrightnessContrast</code> were optional; we experimented with different versions of them.</p> <pre><code>import albumentations
from albumentations.pytorch import ToTensorV2

def get_train_transforms():
    return albumentations.Compose([
        # Optionally only resize
        # albumentations.Resize(512, 512),
        albumentations.RandomResizedCrop(
            512, 512, scale=(0.6, 1.0), ratio=(3/5, 5/3)),
        albumentations.Transpose(),
        albumentations.HorizontalFlip(),
        albumentations.VerticalFlip(),
        albumentations.ShiftScaleRotate(),
        albumentations.HueSaturationValue(
            hue_shift_limit=0.2,
            sat_shift_limit=0.2,
            val_shift_limit=0.2,
            p=0.5
        ),
        albumentations.RandomBrightnessContrast(
            brightness_limit=(-0.1, 0.1),
            contrast_limit=(-0.1, 0.1),
            p=0.5
        ),
        albumentations.Normalize(
            mean=[0.485, 0.456, 0.406],
            std=[0.229, 0.224, 0.225]),
        albumentations.Cutout(num_holes=8, max_h_size=32, max_w_size=32,
                              fill_value=0, p=0.5),
        ToTensorV2(),
    ])
</code></pre> <p><strong>Model Details:</strong></p> <ul> <li>Backbone: resnext50, efficient_net_b3, ViT</li> <li>2019 data</li> <li>Image size: <ul> <li>(resnext50, efficient_net_b3): 512</li> <li>(ViT): 384</li></ul></li> <li>Folds: 5</li> <li>Epochs: 10 (for each fold)</li> <li>Optimizer: Adam(lr=0.001)</li> <li>Scheduler: Cosine + Warmup (1 warmup + 9 cosine annealing)</li> <li>TTA: 5 for resnext50 and ViT, 8 for efficient_net_b3<ul> <li>Here we applied both light and heavy TTA; the heavy one performed better.</li></ul></li> </ul> <p><strong>Top 4% submission (private: 0.8993):</strong></p> <p>This submission looks pretty much the same as above but a little bit lighter. I guess the "less is more" philosophy was correct here :)</p> <ul> <li>Backbone: resnext50, efficient_net_b1</li> <li>Image size: 512</li> <li>Folds: 5</li> <li>Other parameters are the same</li> <li>TTA: 5<ul> <li>In contrast to the model above, here light TTA worked much better (only transpose and flips; see the sketch at the end of this post).</li></ul></li> </ul> <p><strong>Things that did not work:</strong></p> <ul> <li>SnapMix and CutMix</li> <li>Pseudo Labelling</li> </ul> <p>It was a fun and challenging competition for both of us. We learned a lot. Congratulations to the top teams again. ✌️</p> <p>Sinan &amp; Emir</p>
Cassava Leaf Disease Classification
Yippeee!!! First silver (tentatively)!!!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/cswwp347724" target="_blank">@cswwp347724</a></p> <p>Ha ha!</p> <p>Not selecting the best file happens a lot… we all get better over time! I wish you a silver soon!!</p> <p>Thanks so much <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>!! And I often rely on your notebooks and suggestions to improve!!<br> Kaggle is a great place thanks to the wonderful people who share their knowledge and help everyone grow!!</p> <p>Thanks a lot <a href="https://www.kaggle.com/wonjunpark" target="_blank">@wonjunpark</a>!!</p> <p>Thanks so much <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a>!! Fingers crossed!! And hoping to keep doing better!</p> <p>Wooow!!! Amazing feeling!! I may have just won my first silver!!</p> <p>Thank you!! It's been such a helpful community for feedback and learning!! Feels awesome!!</p> <p>I was really worried since I could not get my 4-model ensemble to work in the last 4 days in spite of all my attempts to fix it… and given the noisy dataset I was totally unsure how it would go…<br> With shake-ups of 400-500 ranks on the private LB, I was very nervous!!</p> <p>I even ended up having a gold-level submission that I did not select!! But who cares!! Happy to be on the right side of the shakeup and win my first silver!!</p> <p>Thank you for all the inputs and knowledge sharing throughout the competition!! Great to be part of such an amazing community!!</p>
Cassava Leaf Disease Classification
[0.901 PB solution] From gold to abyss, or the story of the UnLucky leaf
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The intuition is the following: ensembling techniques with complex decision boundaries (like the CatBoost we used) fit the train noise, but not the test noise, which probably has a different source (e.g. collected from a different location and annotated by other annotators). On the other hand, simple ensembling techniques (like a weighted average) are more robust to such differences in noise distributions, because their boundaries are simple.</p> <p>Hi everyone,<br> I want to share with you a lesson that I learned from this competition:<br> <strong>Use only simple ensembling techniques when your data is noisy</strong></p> <p>This simple lesson cost the Lucky leaf team a lot of places on the private LB.<br> <img src="https://i.postimg.cc/8PYGnTvv/Screenshot-2021-02-19-at-2-33-01-AM.png" alt=""></p> <p>In short, our 0.901 PB solution is actually the same as the selected 0.889 one (<a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220588" target="_blank">here I described it in detail</a>), but we used only a linear combination of ResNeXt, EfficientNet, and ViT (sketched below) instead of CatBoost on the second layer. So, I suppose, we have a good example of the KISS principle in DS.</p>
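<p>A minimal sketch of that simple second layer, with illustrative weights rather than our actual values:</p> <pre><code>import numpy as np

def linear_blend(p_resnext, p_effnet, p_vit, w=(0.4, 0.35, 0.25)):
    """Each p_* is an (n_samples, n_classes) probability array."""
    blended = w[0] * p_resnext + w[1] * p_effnet + w[2] * p_vit
    return blended.argmax(axis=1)
</code></pre>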
Cassava Leaf Disease Classification
[2/21 Update] Private 13th, Public 52nd Place Solution. My first medal competition.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Each model is trained with 5 folds and I used all folds for the CV score and OOF.<br> The split was different in each model as I used a different seed.</p> <p>I am not sure of the precise time, but it probably took more than 7 hours.</p> <p>Hi, everyone!<br> First of all, thank you to all the kagglers who contributed to this competition and congratulations to the winners.</p> <p>I was so amazed that I got 13th place on the private LB. I wouldn't have dreamed of it! Unbelievable!</p> <p>In this discussion, I would like to share my solution.</p> <h1>Summary</h1> <p>My best public submission is<br> CV: 0.9071<br> Public LB: 0.905 <br> Private LB: 0.901<br> <img src="https://user-images.githubusercontent.com/66665933/108469766-37ccde00-72cc-11eb-886d-0d9b497a120d.png" alt=""></p> <table> <thead> <tr> <th>model arch</th> <th>loss</th> <th>CV strategy</th> <th>CV score</th> <th>LB score</th> <th>Private score</th> <th>TTA</th> </tr> </thead> <tbody> <tr> <td>EfficientNet-B0</td> <td>CrossEntropy</td> <td>CV1</td> <td>0.87867458</td> <td>0.8828</td> <td>0.8902</td> <td>None</td> </tr> <tr> <td>EfficientNet-B2</td> <td>CrossEntropy</td> <td>CV2</td> <td>0.890498668</td> <td>0.8982</td> <td>0.8902</td> <td>None</td> </tr> <tr> <td>EfficientNet-B3</td> <td>BiTemperedLoss(t1=0.2, t2=1.0)</td> <td>CV1</td> <td>0.8904519</td> <td>0.8998</td> <td>0.8957</td> <td>None</td> </tr> <tr> <td>EfficientNet-B4</td> <td>CrossEntropy</td> <td>CV2</td> <td>0.89816329</td> <td>0.9054</td> <td>0.897</td> <td>Resize, CenterCrop</td> </tr> <tr> <td>EfficientNet-B5</td> <td>DistillationLoss</td> <td>CV2</td> <td>0.892788708</td> <td>0.8984</td> <td>0.8936</td> <td>None</td> </tr> <tr> <td>DenseNet121</td> <td>BCE(smoothing=0.01)</td> <td>CV1</td> <td>0.891059494</td> <td>0.8969</td> <td>0.8906</td> <td>None</td> </tr> <tr> <td>DenseNet121</td> <td>BCE(smoothing=0.1)</td> <td>CV1</td> <td>0.89171379</td> <td>0.8944</td> <td>0.8896</td> <td>None</td> </tr> <tr> <td>InceptionV4</td> <td>BCE(no smoothing)</td> <td>CV1</td> <td>0.89087255</td> <td>0.8914</td> <td>0.893</td> <td>None</td> </tr> <tr> <td>ViT (1)</td> <td>CrossEntropy</td> <td>CV2</td> <td>0.8871337</td> <td>0.8887</td> <td>0.8853</td> <td>None</td> </tr> <tr> <td>ViT (2)</td> <td>BiTemperedLoss(t1=0.8, t2=1.4, smoothing=0.06)</td> <td>CV1</td> <td>0.89348974</td> <td>0.8896</td> <td>0.8856</td> <td>Resize, CenterCrop</td> </tr> <tr> <td>ResNext50-32x4d</td> <td>BCE with weight</td> <td>CV2</td> <td>0.895405898</td> <td>0.8961</td> <td>0.8966</td> <td>None</td> </tr> </tbody> </table> <h1>CV strategy</h1> <p>I used 2 CV strategies:<br> Stratified K-Fold (CV1)<br> Stratified K-Fold with clustering data (CV2)<br> CV1 is just a normal stratified k-fold. (Probably many people use it.)<br> CV2 is inspired by <a href="https://www.kaggle.com/bjoernholzhauer/cassava-leaf-disease-classif-eda-cv-strategy" target="_blank">this</a> notebook. The notebook describes that there are just a few root images in the dataset.
So I made clustering data and used it when splitting into folds, like <code>skf.split(X=train["clustering"], y=train["label"])</code>.</p> <p>In my case, the correlations between CV and public LB were high,<br> so I trusted my CV.</p> <p>[2/20 Update!]</p> <h1>2019 Dataset</h1> <p>I used all the 2019 images (train+test+extra) except duplicates.<br> I rarely used the labels, because they seem to be noisier than the 2020 ones.<br> I predicted the 2019 train images using some models and observed that the accuracy (around 0.85) was considerably lower than the CV score (around 0.89).</p> <h1>Models for final submission</h1> <p>All models are trained with 5 folds.</p> <h2>EfficientNet-B0</h2> <p>Actually, this model was made just to check whether <code>cleanlab</code> works or not.<br> This model is the version that does not use <code>cleanlab</code>.<br> That doesn't mean <code>cleanlab</code> doesn't work: the <code>cleanlab</code> model scored higher than the non-<code>cleanlab</code> one on CV, LB and private LB. Moreover, the version using <code>cleanlab</code> had less correlation with the other models.</p> <p>As I didn't expect to use this model for the final submission, there might be room to improve it.</p> <table> <thead> <tr> <th>optimizer</th> <th>scheduler</th> <th>epochs</th> <th>weight decay</th> <th>loss fn</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Adam</td> <td>None</td> <td>5</td> <td>1e-6</td> <td>CrossEntropy</td> <td>0.8786</td> <td>0.8828</td> <td>0.8902</td> </tr> </tbody> </table> <p>I applied light augmentation, which is <code>RandomResizedCrop, HorizontalFlip</code> for training and <code>Resize</code> for prediction.</p> <h2>EfficientNet-B2</h2> <p>I trained this model using Mixup Without Hesitation (MWH), inspired by some discussions.<br> MWH was applied like this:</p> <pre><code>mask = random.random()
if epoch &gt;= 36:
    threshold = (40 - epoch) / (40 - 36)
    if mask &lt; threshold:
        x, y_a, y_b, lam = mixup_data(x, y, 0.5, use_cuda=True, device=self.device)
    else:
        y_a, y_b = y, y
        lam = 1.0
elif epoch &gt;= 24:
    if epoch % 2 == 0:
        x, y_a, y_b, lam = mixup_data(x, y, 0.5, use_cuda=True, device=self.device)
    else:
        y_a, y_b = y, y
        lam = 1.0
else:
    x, y_a, y_b, lam = mixup_data(x, y, 0.5, use_cuda=True, device=self.device)
</code></pre> <p>I slightly changed <code>mixup_data</code> from <a href="https://github.com/yuhao318/mwh" target="_blank">github</a> because I wanted to use <code>cuda:n</code>.</p> <table> <thead> <tr> <th>optimizer</th> <th>scheduler</th> <th>epochs</th> <th>weight decay</th> <th>loss fn</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Adam</td> <td>OneCycleLR</td> <td>40</td> <td>1e-6</td> <td>CrossEntropy</td> <td>0.8905</td> <td>0.8982</td> <td>0.8902</td> </tr> </tbody> </table> <p>The augmentation for training was heavy:</p> <pre><code>A.RandomResizedCrop(params["height"], params["width"]),
A.OneOf([A.Transpose(p=0.5),
         A.HorizontalFlip(p=0.5),
         A.VerticalFlip(p=0.5),
         A.ShiftScaleRotate(p=0.5)], p=1.0),
A.OneOf([A.HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2,
                              val_shift_limit=0.2, p=0.5),
         A.RandomBrightnessContrast(brightness_limit=(-0.1, 0.1),
                                    contrast_limit=(-0.1, 0.1), p=0.5)], p=0.5),
A.Normalize(mean=[0.4303, 0.4967, 0.3134], std=[0.2142, 0.2191, 0.1954]),
A.OneOf([A.CoarseDropout(p=0.5),
         A.Cutout(p=0.5)], p=0.5),
ToTensorV2(p=1.0)
</code></pre> <p>The reason I use <code>A.OneOf</code> is to speed up the training process.</p>
<h2>EfficientNet-B3</h2> <table> <thead> <tr> <th>optimizer</th> <th>scheduler</th> <th>epochs</th> <th>weight decay</th> <th>loss fn</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Adam</td> <td>MultiStepLR</td> <td>20</td> <td>1e-6</td> <td>BiTemperedLoss</td> <td>0.8904</td> <td>0.8998</td> <td>0.8957</td> </tr> </tbody> </table> <p>The t1 and t2 for the loss function are t1=0.2 and t2=1.0.<br> In the first epoch, I trained only the classifier by freezing the rest of the layers.<br> From the second epoch, I set all layers trainable.<br> I applied middle augmentation for training, which includes <code>RandomResizedCrop, HorizontalFlip, ShiftScaleRotate</code>. I used <code>CenterCrop</code> for prediction.</p> <h2>EfficientNet-B4</h2> <table> <thead> <tr> <th>optimizer</th> <th>scheduler</th> <th>epochs</th> <th>weight decay</th> <th>loss fn</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Adam+SAM</td> <td>CosineAnnealingWarmRestarts</td> <td>15</td> <td>1e-6</td> <td>CrossEntropy</td> <td>0.8981</td> <td>0.9054</td> <td>0.8970</td> </tr> </tbody> </table> <p>This model is trained in two stages:</p> <ol> <li>training with the 2020 dataset;</li> <li>making soft labels for the 2019 dataset using the model from the first step, then training with the 2019+2020 dataset.</li> </ol> <p>This idea is inspired by the Noisy Student method.<br> In the first step, the augmentation for training is middle, which includes <code>RandomResizedCrop, HorizontalFlip, ShiftScaleRotate</code>. I used <code>Resize</code> for soft label prediction.<br> In the second step, I added dropout after the last linear layer. The augmentation for training is heavy, the same as for EfficientNet-B2. The labels are clipped to [0.1, 0.9] with probability 0.5 (this is instead of label smoothing). The optimizer is Adam+SAM, which boosted the CV and LB scores.<br> The CV without TTA (only resizing) is 0.896, LB is 0.899.<br> The CV with TTA (Resize, CenterCrop) is 0.898, LB is 0.905. (I felt this was overfitting.)</p> <h2>EfficientNet-B5</h2> <table> <thead> <tr> <th>optimizer</th> <th>scheduler</th> <th>epochs</th> <th>weight decay</th> <th>loss fn</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Adam+SAM</td> <td>CosineAnnealingWarmRestarts</td> <td>15</td> <td>1e-6</td> <td>DistillationLoss</td> <td>0.8927</td> <td>0.8984</td> <td>0.8936</td> </tr> </tbody> </table> <p>This model is trained with knowledge distillation on the 2019+2020 dataset.</p> <p>First, I loaded the probabilities for the 2019 and 2020 datasets and created hard labels for the 2019 images that don't have labels (test images, extra images); for the rest I used the labels in the dataset. This way I had soft and hard labels for both datasets.</p> <p>The loss function is the sum of a hard-label cross-entropy loss and a soft-label Kullback-Leibler divergence. This is inspired by the <code>DeiT</code> repository.
The <code>alpha</code> is set to 0.5.</p> <pre><code>import torch.nn.functional as F

# forward() of the distillation loss module (self.alpha = 0.5)
def forward(self, pred, y, soft_label):
    dist_loss = F.kl_div(F.log_softmax(pred, dim=1), soft_label.log())
    ce_loss = F.cross_entropy(pred, y.argmax(dim=1))
    return dist_loss * self.alpha + ce_loss * (1 - self.alpha)
</code></pre> <p>The augmentation is heavy, the same as for EfficientNet-B2.</p> <h2>DenseNet121, InceptionV4</h2> <table> <thead> <tr> <th>model arch</th> <th>optimizer</th> <th>scheduler</th> <th>epochs</th> <th>weight decay</th> <th>loss fn</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>DenseNet121</td> <td>Adam</td> <td>MultiStepLR</td> <td>15</td> <td>1e-6</td> <td>BCE, smoothing=0.01</td> <td>0.8910</td> <td>0.8969</td> <td>0.8906</td> </tr> <tr> <td>DenseNet121</td> <td>Adam</td> <td>MultiStepLR</td> <td>15</td> <td>1e-6</td> <td>BCE, smoothing=0.1</td> <td>0.8917</td> <td>0.8944</td> <td>0.8896</td> </tr> <tr> <td>InceptionV4</td> <td>Adam</td> <td>MultiStepLR</td> <td>15</td> <td>1e-6</td> <td>BCE, no smoothing</td> <td>0.8908</td> <td>0.8914</td> <td>0.8930</td> </tr> </tbody> </table> <p>I trained these models using binary cross-entropy (BCE) loss.<br> The idea comes from the fact that there are duplicate images in the dataset with different labels; there might be multiple diseases in the same image. So I tried BCE.</p> <p>The training details are almost the same as for EfficientNet-B3; the difference is the loss function.<br> I applied BCE with smoothing for the DenseNet121 models, one with smoothing=0.01 and the other with 0.1, and no smoothing for InceptionV4.<br> You may wonder why I put almost the same models into the final prediction.<br> The reason is simple: Optuna suggested these models to maximize the CV score!<br> The detailed process is described in the next section.</p> <h2>ViT (1)</h2> <table> <thead> <tr> <th>optimizer</th> <th>scheduler</th> <th>epochs</th> <th>weight decay</th> <th>loss fn</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>MomentumSGD</td> <td>OneCycleLR</td> <td>50</td> <td>1e-6</td> <td>CrossEntropy</td> <td>0.8871</td> <td>0.8887</td> <td>0.8853</td> </tr> </tbody> </table> <p>The training method for this model is almost the same as for EfficientNet-B2 except for using MomentumSGD as the optimizer, the number of epochs, and the input size.<br> That means Mixup Without Hesitation was also applied when training this model.<br> The epochs at which MWH switches phases are the same as for EfficientNet-B2, although the total number of epochs is different; this is because I forgot to change them. However, it worked well, and this is the best ViT model that I trained myself.</p> <h2>ViT (2)</h2> <table> <thead> <tr> <th>optimizer</th> <th>scheduler</th> <th>epochs</th> <th>weight decay</th> <th>loss fn</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Adam</td> <td>CosineAnnealingWarmRestarts</td> <td>10</td> <td>0</td> <td>BiTemperedLoss</td> <td>0.8897</td> <td>0.8896</td> <td>0.8856</td> </tr> </tbody> </table> <p>This model is a fork of <a href="https://www.kaggle.com/mobassir/vit-pytorch-xla-tpu-for-leaf-disease" target="_blank">this notebook</a>.<br> I tried to improve the score, only to find my changes worsened it.
So I used the pretrained weights as they were.<br> The scores are 0.8897 for CV and 0.889 for LB without TTA.<br> When predicting, I applied 2x TTA (Resize, CenterCrop), which scored 0.8934 for CV and 0.889 for LB (no LB improvement).</p> <h2>ResNext50-32x4d</h2> <table> <thead> <tr> <th>optimizer</th> <th>scheduler</th> <th>epochs</th> <th>weight decay</th> <th>loss fn</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Adam+SAM</td> <td>CosineAnnealingWarmRestarts</td> <td>15</td> <td>1e-6</td> <td>BCE with weights</td> <td>0.8954</td> <td>0.896</td> <td>0.8966</td> </tr> </tbody> </table> <p>The training method for this model is almost the same as for EfficientNet-B4 except for the loss function, which is BCE. Since the classes are highly imbalanced, I applied class weights <code>[1.5, 0.7, 0.7, 0.6, 1.5]</code>. I also applied <code>drop_path_rate=0.0001</code>.</p> <h1>How to choose the models for final submission</h1> <p>Since I relied on the CV, the models were chosen to maximize the CV score. Here are the steps:</p> <ol> <li>find the best model combination</li> <li>find the optimal weights for the models</li> </ol> <p>In the first step, I used <code>optuna</code> and <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175614" target="_blank">this method</a> for choosing the combination.<br> Although <code>optuna</code> doesn't officially support finding combinations, I used 6~12x <code>trial.suggest_categorical</code> to find combinations, allowing duplicates (which I can remove by hand).</p> <p>In the second step, I used <code>optuna</code> to find the optimal weights. (A sketch of this weight search appears at the end of this post.)<br> Since the result is different every time the code runs, I reran it again and again.</p> <p>I prepared 2 kinds of weights: 1D weights, one per model, and 2D weights, one per model per class probability.</p> <p>The weights for the final submission are</p> <table> <thead> <tr> <th></th> <th>cbb</th> <th>cbsd</th> <th>cgm</th> <th>cmd</th> <th>healthy</th> </tr> </thead> <tbody> <tr> <td>EfficientNet-B0</td> <td>0.12</td> <td>0.06</td> <td>0.1</td> <td>0.34</td> <td>0.27</td> </tr> <tr> <td>EfficientNet-B2</td> <td>0.77</td> <td>0.47</td> <td>0.88</td> <td>0.21</td> <td>0.84</td> </tr> <tr> <td>EfficientNet-B3</td> <td>1</td> <td>0.96</td> <td>0.46</td> <td>0.84</td> <td>0.74</td> </tr> <tr> <td>EfficientNet-B4</td> <td>0.87</td> <td>0.45</td> <td>0.54</td> <td>0.81</td> <td>0.28</td> </tr> <tr> <td>EfficientNet-B5</td> <td>0.88</td> <td>0.06</td> <td>0</td> <td>0.46</td> <td>0.29</td> </tr> <tr> <td>DenseNet121_1</td> <td>0.36</td> <td>0.29</td> <td>0.77</td> <td>0.27</td> <td>0.23</td> </tr> <tr> <td>DenseNet121_2</td> <td>0.06</td> <td>0.62</td> <td>0.2</td> <td>0.56</td> <td>0.03</td> </tr> <tr> <td>InceptionV4</td> <td>0.02</td> <td>0.87</td> <td>0.08</td> <td>0.72</td> <td>0.88</td> </tr> <tr> <td>ViT (1)</td> <td>0.88</td> <td>0.45</td> <td>0.64</td> <td>0.43</td> <td>0.38</td> </tr> <tr> <td>ViT (2)</td> <td>0.48</td> <td>0.66</td> <td>0.76</td> <td>0.73</td> <td>0.05</td> </tr> <tr> <td>ResNext50-32x4d</td> <td>0.72</td> <td>1</td> <td>0.15</td> <td>0.8</td> <td>0.84</td> </tr> </tbody> </table> <p>Actually, the models with 2D weights achieved a higher CV but a slightly lower public LB than with 1D weights. I was a bit concerned that the 2D weights were overfitting to the CV. However, I selected the best CV submission and the best LB submission for the final score, and the best CV submission scored the best on the private LB.
"Trust your CV" is true!</p> <p>[2/21 update!]</p> <ul> <li>changed the table</li> </ul> <h1>Correlations</h1> <p>The correlations are calculated from the saved OOF files (probabilities for each class), reshaped to 1D.<br> <img src="https://user-images.githubusercontent.com/66665933/108615417-37a51d80-7447-11eb-8371-dd462531dec5.png" alt=""></p> <h1>Confusion matrix of the final submission's OOF</h1> <p><img src="https://user-images.githubusercontent.com/66665933/108615422-4095ef00-7447-11eb-82ae-95fa593baba5.png" alt=""></p> <h1>What worked and what didn't</h1> <h3>worked</h3> <ul> <li>changing seeds</li> <li>bi-tempered loss</li> <li>BCE loss</li> <li>Taylor cross entropy with smoothing = 0.2</li> <li>SAM optimizer</li> <li>momentum SGD (slow but higher performance)</li> <li>knowledge distillation</li> <li>2019 dataset</li> <li>CenterCrop for prediction</li> <li>Mixup Without Hesitation</li> <li>calculating optimal weights for the OOFs</li> </ul> <h3>didn't work</h3> <ul> <li>AdaBelief optimizer</li> <li><code>HorizontalFlip</code> for TTA</li> <li>LightGBM on intermediate features of pretrained models (higher CV and LB but lower private score)</li> <li>using TabNet as a classifier</li> </ul> <p>Let me know if you have any questions. Thank you again. See you in the next competition :)</p>
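<p>Here is the promised sketch of the weight search, a minimal, hedged version (the names <code>oof_probs</code> and <code>labels</code> are assumptions for illustration, not my exact code):</p> <pre><code>import numpy as np
import optuna

def make_objective(oof_probs, labels):
    """oof_probs: dict mapping model name to an (n_samples, 5) OOF probability array."""
    names = list(oof_probs)
    def objective(trial):
        w = [trial.suggest_float(n, 0.0, 1.0) for n in names]
        blend = sum(wi * oof_probs[n] for wi, n in zip(w, names))
        return (blend.argmax(axis=1) == labels).mean()  # OOF accuracy to maximize
    return objective

# study = optuna.create_study(direction="maximize")
# study.optimize(make_objective(oof_probs, labels), n_trials=500)
</code></pre>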
Cassava Leaf Disease Classification
[?th] The lost solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>The lost solution</h1> <p>Hello! Time to share our solution for this competition. Why did we call it lost? Because it sits at 900+ place on the noisy public LB and… we don't know what to expect from the private part. We didn't know our private place while writing this report. Our team just wants to share an example of an advanced solution based on training with denoised data and a huge ensemble. So, if we landed low on the private LB when you're reading this, it means that we've got a noisy private set and our ideas haven't worked. Otherwise, if we're high enough on the LB, it means that the private part is clean and we handled the noise successfully. Since we're going all-in and taking two denoised solutions as the final submissions, we want to believe that the private data won't contain as much noise as the public set does. Let's review our solution step by step.</p> <p>Let's start with the most interesting part.</p> <h2>Knowledge distillation</h2> <p>We did a really soft knowledge distillation. We didn't have time to do a lot of experiments, so we had only one take at relabeling the dataset. We explored the forum and found out that there are at least 500 diseased images which are labeled as healthy. Other types of mistakes weren't frequently mentioned. So we decided to make knowledge-distillation soft targets only for those ~500 samples. We trained 5 folds of SWSL ResNeXt 101 8D and predicted labels for all images in each validation part. Then we saved the predictions and confidences for each image and chose a confidence threshold, which was used to find the ~500 samples where extremely confident predictions didn't match the ground-truth labels. Then we trained an EfficientNet B4 with the obtained labels and got a CV boost from 0.899 to 0.915. The relabeled data was used in the validation part as well.</p> <h2>External data</h2> <h5>2019 data</h5> <p>We used the <a href="https://www.kaggle.com/tahsin/cassava-leaf-disease-merged" target="_blank">2019 data with removed duplicates</a> for training. We used it only for training and didn't put it in the validation part. We also applied soft relabeling to it.</p> <h5>Mendeley Leaves</h5> <p>We also used <a href="https://www.kaggle.com/nroman/mendeley-leaves" target="_blank">Mendeley Leaves</a> for training. This dataset contains images of other plants and is not fully labeled: the first part of the images is labeled just as healthy, and the rest just as diseased. So the second part had to be pseudo-labeled to be usable for training. We used 5 folds of SWSL ResNeXt 101 8D for this task. We predicted labels for each diseased image and checked the model confidence for each one. I used a 0.55 confidence threshold, the same one calculated for relabeling the ~500 samples in the source train set. (It is NOT in the 0-to-1 range; it has a much wider amplitude, including negative values.) Finally, we kept 2767 / 4334 samples, including the healthy ones.
The external data images weren't used in validation; they were placed only in the training part.</p> <h2>TTA</h2> <p>We used a simple technique for our TTA: the average of the default image and the horizontally flipped image. There are two reasons for selecting such a small number of transforms: the time limit (a well-structured ensemble usually performs better than TTA), and the fact that double TTA was the most stable variant in this competition according to our experiments; averaged and weighted 4x or 16x TTA weren't good enough to be considered for our solution, as LB scores with them could swing within [-0.007, +0.007].</p> <h2>Augmentations</h2> <p>Successful:</p> <ul> <li>Horizontal Flip</li> <li>Vertical Flip</li> <li>Blur</li> <li>360 rotates</li> <li>RandomBrightnessContrast</li> <li>ShiftScaleRotate</li> <li>HueSaturationValue</li> <li>CutMix</li> <li>MixUp</li> <li>FMix</li> </ul> <p>25% probability for each of the last three augmentations (CutMix, MixUp, FMix).</p> <p>Unsuccessful:</p> <ul> <li>ElasticTransform</li> <li>Grid Distortion</li> <li>RandomSunFlare</li> <li>GaussNoise</li> <li>Coarse dropout</li> <li>Optical Distortion</li> </ul> <p>During the warm-up stage only horizontal flip was used.</p> <h2>Training</h2> <ul> <li>optimizer - Ranger</li> <li>learning rate - 0.003</li> <li>epochs - 30</li> <li>warm-up epochs - 3</li> <li>early stopping - 8</li> <li>loss function - Cross Entropy loss with label smoothing (0.2 smoothing before using knowledge distillation and 0.1 after it)</li> <li>progressive image size, start_size=256, final_size=512, size_step=32. The size starts increasing after the warm-up stage finishes.</li> <li>pretrained - True</li> <li>frozen BatchNorm layers</li> <li>scheduler - CosineBatchDecayScheduler with gradual warm up (custom implementation)</li> </ul> <p>Initially, we tried to implement a Cosine Decay Scheduler that steps every batch and has a wide range of customization options, but finally we created the custom scheduler we used here. Here is the code and a plot of the learning rate:</p> <pre><code>import math
from torch.optim.lr_scheduler import _LRScheduler


class CosineBatchDecayScheduler(_LRScheduler):
    """
    Custom scheduler calculating the learning rate according to batch size,
    based on the Cosine Decay scheduler.
    Designed to call scheduler.step() every batch.
    """
    def __init__(self, optimizer, steps, epochs, batchsize=128, decay=128,
                 startepoch=1, minlr=1e-8, last_epoch=-1):
        """
        Args:
            optimizer (torch.optim.Optimizer): PyTorch optimizer
            steps (int): total number of steps
            epochs (int): total number of epochs
            batchsize (int): current training batch size. Default: 128
            decay (int): batch size on which the learning rate calculation is based. Default: 128
            startepoch (int): epoch at which the scheduler turns on. Default: 1
            minlr (float): lower threshold of the learning rate. Default: 1e-8
            last_epoch (int): the index of the last epoch. Default: -1
        """
        decay = decay * math.sqrt(batchsize)
        self.stepsize = batchsize / decay
        self.startstep = steps / epochs * (startepoch - 1) * self.stepsize
        self.minlr = minlr
        self.steps = steps
        self.stepnum = 0
        super(CosineBatchDecayScheduler, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        """Formula for calculating the learning rate."""
        self.stepnum += self.stepsize
        if self.stepnum &lt; self.startstep:
            return [baselr for baselr in self.base_lrs]
        return [max(self.minlr,
                    1 / 2 * (1 + math.cos(self.stepnum * math.pi / self.steps))
                    * self.optimizer.param_groups[0]['lr'])
                for t in range(len(self.base_lrs))]
</code></pre> <p><img src="https://i.ibb.co/bBCtNtB/image.png" alt=""></p> <p>Mixed precision training with gradient accumulation (iters_to_accumulate=8) ==&gt; big boost on CV</p> <h2>Models</h2> <ul> <li>SWSL ResNeXt 101 8D</li> <li>SWSL ResNeXt 50</li> <li>Efficient Net B4</li> <li>Efficient Net B4 NS</li> <li>Inception V4</li> <li>DenseNet 161</li> </ul> <p>Here is a table representing the performance of these and some other models:<br> <img src="https://i.ibb.co/XpFT5VP/table.jpg" alt=""></p> <h2>Final ensembles</h2> <p>We submitted two different ensemble approaches with the same 6 models (5 folds each).</p> <p>1) Simple Averaging Ensemble<br> <strong>2019 Private LB: 0.92712; 2020 Public LB: 0.898; 2020 Private LB: 0.896</strong></p> <p>2) MaxProb Ensemble<br> The Max Probability (or Max Confidence) ensemble allows us to choose the dominating model in the prediction process: if several models predict different labels, we take the prediction from the model with the highest confidence (a sketch of this rule appears at the end of this post).<br> <strong>2019 Private LB: 0.93197; 2020 Public LB: 0.893; 2020 Private LB: 0.897</strong></p> <p>Here is the scheme of our ensembles:<br> <img src="https://i.ibb.co/SBZjRyW/image.png" alt=""></p> <p>You can find our full report (33 pages) about this competition <a href="https://docs.google.com/document/d/1TNTfrDrhYSAAgL_L6gIX5stw1Lfm76XJ-QY7yS4UfTc/edit?usp=sharing" target="_blank">here</a>. Each step, score and submission is described there.</p> <p>Thank you to my team, the organizers and all the participants! We had a great time and learned a lot. In a few days, we will publish our code, but in the meantime, you can check the base version of <a href="https://www.kaggle.com/vadimtimakin/fast-automated-clean-pytorch-pipeline-train" target="_blank">my pipeline</a>. See you in the next competitions!</p> <p><em>Don't deal with the noise…</em></p>
Cassava Leaf Disease Classification
[Public 272nd / Private 37th] 1 week solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congrats to all the participants who got medals, and thanks to the host and the Kaggle team.</p> <p>There is little difference in the scores of most participants. I'm really looking forward to the 1st place solution :)</p> <h3>Brief Summary</h3> <ul> <li>training 10 image classification models and ensembling them by 1D-CNN, 2D-CNN, and weight optimization (a toy sketch of the 1D-CNN stacker appears at the end of this post)</li> <li>using the merged dataset (2019 + 2020) for training but only 2020 for validation</li> <li>using Cross Entropy Loss with Label Smoothing (alpha=0.3)</li> <li>in the end, simple averaging (which I didn't select as a final submission) was better than stacking 😇<ul> <li>stacking (Public: 0.9014, Private: 0.9008): <a href="https://www.kaggle.com/ttahara/infer-cassava-ens10" target="_blank">https://www.kaggle.com/ttahara/infer-cassava-ens10</a></li> <li>averaging (Public: 0.9018, Private: 0.9014): <a href="https://www.kaggle.com/ttahara/infer-cassava-ens09" target="_blank">https://www.kaggle.com/ttahara/infer-cassava-ens09</a></li></ul></li> </ul> <h3>Details</h3> <h4>Data</h4> <p>I used the <a href="https://www.kaggle.com/tahsin/cassava-leaf-disease-merged" target="_blank">publicly shared merged training dataset (2019 + 2020)</a>. Thank you <a href="https://www.kaggle.com/tahsin" target="_blank">@tahsin</a> for sharing it.</p> <h4>Image Classification Models</h4> <p>I used the following 10 models provided in <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a>:</p> <ul> <li><p>small models: ResNet50D, ResNeXt50_32x4d, ECA-ResNet50D, ResNeSt50_fast_1s4x24d, RegNetY032</p></li> <li><p>big models: ResNet101D, ResNeXt101_32x4d, ECA-ResNet101D, ResNeSt101, RegNetY080</p></li> </ul> <h4>Training</h4> <h5>cross validation</h5> <ul> <li>splitting the merged training dataset in a Stratified K-Fold manner (K=5)</li> <li>using 2019 and 2020 data for training but only 2020 for validation</li> </ul> <h5>data augmentation</h5> <p>Transpose -&gt; HorizontalFlip -&gt; VerticalFlip -&gt; <br> ShiftScaleRotate -&gt; RandomResizedCrop -&gt;<br> HueSaturationValue -&gt; RandomBrightnessContrast -&gt; <br> OneOf(RandomErase, CoarseDropout, Cutout) -&gt;<br> Normalize -&gt; ToTensorV2</p> <h5>other settings</h5> <ul> <li>image size: 512x512x3</li> <li>max epochs: 10</li> <li>batch size: 64 (small models), 32 (big models)</li> <li>loss: CrossEntropy with LabelSmoothing<ul> <li>smoothing alpha: <strong>0.3</strong></li></ul></li> <li>optimizer: AdamW<ul> <li>weight decay: 1.0e-02</li> <li>learning rate: 5e-04 (small models), 2.5e-04 (big models)</li></ul></li> <li>scheduler: CosineAnnealingWarmRestarts<ul> <li>T_0: 10</li> <li>T_mult: 1</li></ul></li> </ul> <h5>Ensemble</h5> <ul> <li>training a 1D-CNN and a 2D-CNN using the image classification models' outputs as inputs<ul> <li>for more details, see my past topic in the MoA competition: <br> <a href="https://www.kaggle.com/c/lish-moa/discussion/204685" target="_blank">https://www.kaggle.com/c/lish-moa/discussion/204685</a></li></ul></li> <li>applying weight optimization to the image classification models' outputs</li> <li>finally averaging these three models' outputs</li> </ul> <p>Unfortunately, however, simple averaging had a better private score (0.9014) than this stacking (0.9008). I missed a gold medal 😇</p> <p>That's all. Thank you for reading 😉</p>
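<p>For intuition, here is a toy sketch of the 1D-CNN stacking idea above, with shapes chosen for illustration (see the linked MoA topic for the real design):</p> <pre><code>import torch
import torch.nn as nn

n_models, n_classes = 10, 5

class CNNStacker1D(nn.Module):
    """Reads the concatenated base-model probabilities as a 1-D signal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # one convolution step per base model (kernel == stride == n_classes)
            nn.Conv1d(1, 16, kernel_size=n_classes, stride=n_classes),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * n_models, n_classes),
        )

    def forward(self, x):  # x: (batch, n_models * n_classes)
        return self.net(x.unsqueeze(1))

stacker = CNNStacker1D()
print(stacker(torch.rand(8, n_models * n_classes)).shape)  # torch.Size([8, 5])
</code></pre>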
Cassava Leaf Disease Classification
public 87th -> private 668th: a very bad solution, shared to remember a sad memory...
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, congratulations to all competitors!</p> <p>This is my post-mortem of a large shake-down.</p> <p>I used 3 types of models:</p> <ol> <li>SEResNeXt50_32x4d public: 0.898 -&gt; private: 0.895</li> <li>EfficientNetB4 public: 0.899 -&gt; private: 0.890</li> <li>ViT_base_384 public: 0.899 -&gt; private: 0.890</li> </ol> <p>In the final submission, I weighted the models 1 : 1.2 : 1.2 for the ensemble.</p> <ul> <li>Public LB: 0.905</li> <li>Private LB: 0.896</li> </ul> <p>I regret overfitting to the public LB…</p> <p>I thought this competition would be my first silver, but the result reflects my current ability.</p> <p>I think the causes of my weak models were:</p> <ul> <li>using CutMix</li> <li>not using bi-tempered loss</li> <li>a fixed random seed for CV across all models</li> </ul> <p>I suspect a seed ensemble could have helped a little, but the bird has flown :(</p> <p>The ensemble SEResNeXt50_32x4d + SEResNet152d_320:</p> <ul> <li>Public LB: 0.900</li> <li>Private LB: 0.899</li> </ul> <p>The ensemble SEResNeXt50_32x4d + EfficientNetB4 + ViT_base_384 (public: 0.895, private: 0.891):</p> <ul> <li>Public LB: 0.902</li> <li>Private LB: 0.899</li> </ul> <p>These models could have reached a medal-zone private score, but I don't know why the badly scoring ViT was essential for a good score. Did this competition simply need early stopping to avoid fitting the training data too perfectly?</p> <p>Please tell me what you think about CutMix, bi-tempered loss, seeds and so on, and how you prevented the shake-up.</p>
Cassava Leaf Disease Classification
simple (public 3rd and private 29th) place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>hi <a href="https://www.kaggle.com/suryajrrafl" target="_blank">@suryajrrafl</a> <br> we haven't got gold unfortunately but regarding your question on hyperparameter tuning,i think you can try something like optuna : <a href="https://www.analyticsvidhya.com/blog/2020/11/hyperparameter-tuning-using-optuna/" target="_blank">https://www.analyticsvidhya.com/blog/2020/11/hyperparameter-tuning-using-optuna/</a></p> <p>i am not a good hyperparameter tuner,but as a beginner i used to do a lot of hyperparameter tuning and waste a lot of time+gpu hours,later i realized that it's a skill where i should put less focus and more focus on novelty and designing good solutions.i think most of the kaggle beginners start with hyperparameter tuning like i did :)</p> <p>for tracking your experiments and models performance you can fill up a sheet like this : <a href="https://docs.google.com/spreadsheets/d/1HSMuTrMmwB5-8GXeJ-h68xeOIEeffcRo2W6HcmfXCzs/edit#gid=0" target="_blank">https://docs.google.com/spreadsheets/d/1HSMuTrMmwB5-8GXeJ-h68xeOIEeffcRo2W6HcmfXCzs/edit#gid=0</a></p> <p>it really helps a lot and i learned it from my old good team mate <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a></p> <p>my suggestion(based on my past mistakes) : spend less time on tuning and more time on trying to implement something that looks promising,,because with tuning you can get very smallllll amount of boost most of the times and most of the times they can overfit too,but if you can come up with an different idea+implementation that very few people will try then you can stand out from the crowd with ease,thank you :)</p> <p>sorry, we didn't use any ML experiment tracking tool,just tracked and saved weights based on best validation multi-class accuracy</p> <p>thank you for your all great discussion posts and kernels of this competition <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a> </p> <p><strong>Acknowledgements</strong><br> Thanks to Kaggle for organizing this competition.<br> I learned a tremendous amount of tricks in this competition and it would not have been able if it wasn’t for all this generous sharing of top solutions and extremely talented teammates ( <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/hanson0910" target="_blank">@hanson0910</a> and <a href="https://www.kaggle.com/zlanan" target="_blank">@zlanan</a> ) i was lucky to have along the way. thanks to my well collaborative team mates. <strong>Together we learned a lot</strong> 💪</p> <p><strong>We did not have any magic there, only diverse models with highest CV</strong></p> <p>our aim was to design an ensemble using top pytorch + tensorflow models with different image size,different TTA and diverse models (it worked well on public lb and also on private)</p> <p>our 32nd place submission uses,</p> <p><strong>model 1 -&gt; nf-resnet50:</strong><br> [0.897, 0.903, 0.899. 0.8932, 0.8946]</p> <p><strong>model 2 -&gt;ResNext50(image size 512):</strong><br> [0.891, 0.897, 0.892, 0.886, 0.892]</p> <p><strong>model 3 -&gt; 0.8913 vit. + 0.886 +0.884 +0.883 noisy tf B4. 
We intentionally trained three relatively low-CV noisy tf B4 models, but it works; quite strange.</strong></p> <p><strong>model 4 and 5 -&gt;</strong><br> (tf_efficientnet_b4_ns + Vit-B16) trained with the 2019 data as well, so the CV has no reference value, but it's around 0.9</p> <p><strong>model 6 -&gt;</strong><br> not sure what the CV is for the 512-image-size efficientnet-b0 <strong>(tensorflow model)</strong>, but the public LB was 0.895 </p> <p><strong>model 7 -&gt;</strong></p> <p>tf_efficientnet_b4_ns_fold_0_8: cv 0.8914<br> tf_efficientnet_b4_ns_fold_0_5: cv 0.8932</p> <p>only 1 fold of ViT base from this notebook: <a href="https://www.kaggle.com/mobassir/vit-pytorch-xla-tpu-for-leaf-disease" target="_blank">ViT - Pytorch xla (TPU) for leaf disease</a></p> <p>vit_base_patch16_384_fold_4: cv 0.89175</p> <p>only 2 folds of ViT large from this notebook: <a href="https://www.kaggle.com/mobassir/faster-pytorch-tpu-baseline-for-cld-cv-0-9" target="_blank">Faster Pytorch TPU baseline for CLD(cv 0.9)</a></p> <p>vit_large_patch16_384_fold_2: cv 0.89568<br> vit_large_patch16_384_fold_1: cv 0.89826</p> <p>We used the following TTA (3 steps):</p> <pre><code>def get_inference_transforms(image_size=image_size):
    return Compose([
        RandomResizedCrop(image_size, image_size),
        Transpose(p=0.5),
        HorizontalFlip(p=0.5),
        VerticalFlip(p=0.5),
        HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.5),
        RandomBrightnessContrast(brightness_limit=(-0.1, 0.1), contrast_limit=(-0.1, 0.1), p=0.5),
        Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
        ToTensorV2(p=1.0),
    ], p=1.)
</code></pre> <p>and for ViT large:</p> <pre><code>inference_transforms = albumentations.Compose([
    # albumentations.RandomResizedCrop(image_size, image_size),
    albumentations.Resize(image_size, image_size),
    albumentations.Transpose(p=0.5),
    albumentations.HorizontalFlip(p=0.5),
    albumentations.VerticalFlip(p=0.5),
    albumentations.HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.5),
    albumentations.RandomBrightnessContrast(brightness_limit=(-0.1, 0.1), contrast_limit=(-0.1, 0.1), p=0.5),
])
</code></pre> <p><a href="https://www.kaggle.com/zlanan" target="_blank">@zlanan</a> found these to be the best augmentations:</p> <pre><code>RandomResizedCrop(CFG['img_size'], CFG['img_size']),
Transpose(p=0.4),
HorizontalFlip(p=0.4),
VerticalFlip(p=0.4),
ShiftScaleRotate(p=0.3),
MedianBlur(blur_limit=7, always_apply=False, p=0.3),
IAAAdditiveGaussianNoise(scale=(0, 0.15 * 255), p=0.5),
HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.4),
RandomBrightnessContrast(brightness_limit=(-0.1, 0.1), contrast_limit=(-0.1, 0.1), p=0.4),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
CoarseDropout(p=1.0),
Cutout(p=0.4),
</code></pre> <p>We blended all models that had a CV close to 0.9 or above; with that we got public LB 0.8978 and <strong>private LB 0.9019</strong> <strong>(but couldn't select it because of the low public LB score)</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F974295%2F0c878fc4e2b2156904d2ff3ff3cbb05c%2FLocalCV_publicLB.jpeg?generation=1569486918078902&amp;alt=media" alt=""></p> <p><strong>Final note</strong></p> <p>We observed rises on both LBs with lighter TTA, but also drops. We observed improvements with different image-size models, but also drops.
We cannot say anything is really useful, except <strong>avg ensemble.</strong></p> <p>Some of our early experiment results can be found here: <a href="https://docs.google.com/spreadsheets/d/1HSMuTrMmwB5-8GXeJ-h68xeOIEeffcRo2W6HcmfXCzs/edit#gid=0" target="_blank">https://docs.google.com/spreadsheets/d/1HSMuTrMmwB5-8GXeJ-h68xeOIEeffcRo2W6HcmfXCzs/edit#gid=0</a></p> <p>Thank you for reading.</p> <p>We always tracked validation multi-class accuracy.</p> <p>Sorry, I do not understand your question. We monitor validation multi-class accuracy, which is a metric; I don't know where the word "tools" fits here. You can check this notebook for understanding how to monitor validation accuracy: <a href="https://www.kaggle.com/mobassir/faster-pytorch-tpu-baseline-for-cld-cv-0-9" target="_blank">https://www.kaggle.com/mobassir/faster-pytorch-tpu-baseline-for-cld-cv-0-9</a></p> <p>For keeping track of the best validation accuracy we use simple code like this:</p> <pre><code>if current_accuracy &gt; best_accuracy:
    best_accuracy = current_accuracy
    torch.save(model.state_dict(), 'best_model.pth')  # save weights
</code></pre>
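<p>To make the "avg ensemble" above concrete, here is a minimal sketch (not the team's actual code) of plain probability averaging across diverse models; the function name and the optional per-model weights are my own illustration, and the write-up tuned such weights against a shared validation set:</p> <pre><code>import numpy as np

def average_ensemble(prob_list, weights=None):
    """Blend a list of (n_samples, n_classes) softmax outputs from several models."""
    probs = np.stack(prob_list, axis=0)        # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                   # normalize model weights
    blended = np.tensordot(weights, probs, axes=1)  # weighted mean over models
    return blended.argmax(axis=1)              # predicted label per sample
</code></pre>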
Cause-effect pairs
Sharing Methods
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cause-effect pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|>I already did, but as someone who learned so much from reading other threads like this in other competitions, I thought it best to give back now that I can. (: <p>Since this is a research competition, I thought it would be awesome if we could share our methods so that hopefully the next person could do even better. I felt that an AUC of 0.85+ would be possible, especially after reading the forums. I'm not going to bother mentioning the tricks that I figure everyone else did.</p> <p>As a side note, my motivation for entering the competition was to do more research into some automatic feature creation algorithms and publish an open source library, so pretty much all of my work focused on features.</p> <p>What I think I did well:</p> <p>1. Created lots of features. My final model, before feature selection, had almost 9000.</p> <p>2. Compared Apples to Apples. I made sure that comparisons between input types (numerical, categorical, binary) were consistent.</p> <p>3. Handmade features. Made a couple of features to proxy a &quot;vertical line test&quot; (essentially local variance).</p> <p>4. Feature Selection. I made a genetic algorithm to do feature selection. This improved performance, though it wasn't strictly necessary, since my submission without feature selection still scored 0.817.</p> <p>What I didn't do (that I probably should have):</p> <p>1. Use any of the previous research. I read in the forum about all of these models in another thread (ANM, PNL, LINGAM, IGCI, etc.) when the competition was almost over, and I didn't want to bother including them, though they probably could have helped a lot.</p> <p>2. Use more of the public info. I didn't use the four-way division at all, though I could probably have extracted more features out of it.</p> <p>3. Create more features and ensemble. I was confident that doing this could have improved my score, but I was too distracted working on the previously mentioned library to do so. This almost cost me the competition, hence my score plateauing in the end.</p> <p>4. Test the impact of the features that I added and make more that are similar. I'm unsure if this would be optimal. I feel like this should be done automatically, but since I don't have the ability to do so (yet), it probably could have helped tune the features more.</p> <p>For all the datasets together, it took about a day to create the features. I cached them as well, so these features only had to be computed once.</p> <p>Sure thing, kinnskogr; I didn't actually use any feature combinations. Here's a trick that I assumed everyone used, hence I didn't mention it. The problem was essentially to find whether f(x, noise) = y is a better fit than f(y, noise) = x. I spent a bit of time thinking about optimal ways to fit curves, but I realized that every machine learning predictor is essentially a curve-fitting algorithm, and the famous ones would be much better than standard polynomial curves or rolling averages.
I'll skip a lot of details and just give a bit of pseudocode that gives a high-level idea:</p> <pre><code>for each predictor in {large set of predictors}:
    for each metric in {large set of metrics}:
        feature1 = metric(y, predictor.fit(x, y).predict(x))
        feature2 = metric(x, predictor.fit(y, x).predict(y))
        feature3 = feature1 - feature2
</code></pre> <p>Almost all of my features were of that form. A lot of people mentioned converting the numerical variables to categorical through binning/k-means. I did that, but I also converted the categorical variables to numerical.</p>
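<p>For readers who want to try the pseudocode above, here is a small runnable sketch using scikit-learn. The particular regressors, metrics and toy data are my own choices for illustration, not the author's exact set (he used a much larger collection of predictors and metrics):</p> <pre><code>import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

def direction_features(x, y):
    """Fit y=f(x) and x=g(y); the difference in fit quality is a directional feature."""
    predictors = [RandomForestRegressor(n_estimators=50, random_state=0),
                  KNeighborsRegressor(n_neighbors=5)]
    metrics = [mean_squared_error, mean_absolute_error]
    x2, y2 = x.reshape(-1, 1), y.reshape(-1, 1)
    feats = []
    for predictor in predictors:
        for metric in metrics:
            f1 = metric(y, predictor.fit(x2, y).predict(x2))  # quality of the x-to-y fit
            f2 = metric(x, predictor.fit(y2, x).predict(y2))  # quality of the y-to-x fit
            feats += [f1, f2, f1 - f2]
    return np.array(feats)

# Toy example: y is a noisy function of x, so the x-to-y fit should be better.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 500)
y = np.sin(x) + 0.1 * rng.normal(size=500)
print(direction_features(x, y))
</code></pre>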
Cdiscount’s Image Classification Challenge
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cdiscount’s Image Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ol> <li><p>The leak is not really useful here for improving accuracy, but it speeds up training and inference. The 0.74 at the LB that I reported in one of the threads was achieved before I figured out that this leak exists. </p></li> <li><p>Training Resnet 101 with dropped duplicates =&gt; 0.75 at the LB. Training Resnet without dropped duplicates =&gt; the same 0.75 at the LB.</p></li> <li><p>If the admins were to clean up the test set and remove the images that can also be found in train, everyone's LB score would drop by 10%.</p></li> </ol> <p><img src="https://habrastorage.org/webt/0f/q7/q9/0fq7q9l_r53hhzvjgoxtknwsm2g.jpeg" alt="enter image description here" title=""></p> <p>(each bar represents an image)</p> <p>There are duplicate images within train and test, and a set of images is shared between them (a data leak). The easiest way to find them is to calculate an md5 hash for each image. </p> <p><a href="https://github.com/ternaus/kaggle_cdiscount/blob/master/find_md5_train.py">https://github.com/ternaus/kaggle_cdiscount/blob/master/find_md5_train.py</a> <a href="https://github.com/ternaus/kaggle_cdiscount/blob/master/find_md5_test.py">https://github.com/ternaus/kaggle_cdiscount/blob/master/find_md5_test.py</a></p> <p>This allows you to:</p> <ol> <li>Significantly decrease the size of train and test, speeding up training and inference: 12m =&gt; 7.5m, 3m =&gt; 2m</li> <li>Get class labels from train for images in test</li> </ol> <p>After this, I trained Resnet 50, 101 and 152 on 160x160 crops, dropping the learning rate on plateau. In the first epoch, all layers except the last are frozen.</p> <p>For each model, test-time augmentation + geometric mean =&gt; 0.75 on LB</p> <p>Geometric mean of the previous step =&gt; 0.77 on LB</p> <p>I did not really invest time into this problem (first months in a new job, conferences, wrapping up the Carvana problem (releasing code at github and writing a blog post for the Kaggle blog), etc.), so my approach to this problem was very straightforward. If some image is present in classes </p> <ul> <li>A - 10 times </li> <li>B - 20 times </li> <li>C - 30 times</li> </ul> <p>I just created a sparse vector with</p> <ul> <li>10 / (10 + 20 + 30)</li> <li>20 / (10 + 20 + 30) </li> <li>30 / (10 + 20 + 30) </li> </ul> <p>in the corresponding spots.</p> <p>I believe that a better solution would be either to remove images that are present in different classes in train, or to predict them using the networks in the hope that the networks generate better embeddings.</p> <p>I was thinking about dropping placeholders, but there were several classes that contained only placeholder images.</p> <p>Typically, when I work with probabilities, the geometric mean gives a better score. For one of the models I compared arithmetic and geometric means; the results were very similar, but geometric was still a bit better, so I stuck with it.</p>
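<p>A minimal sketch of the md5-hash deduplication idea (the author's actual scripts are linked above; the folder layout below is hypothetical, since the competition data actually ships as BSON):</p> <pre><code>import hashlib
from collections import defaultdict
from pathlib import Path

def md5_of_file(path, chunk_size=2 ** 20):
    """md5 hash of an image file, read in chunks to keep memory low."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Group images by hash: groups with more than one path are exact duplicates;
# hashes seen in both train and test form the leak described above.
hash_to_paths = defaultdict(list)
for path in Path('train').rglob('*.jpg'):  # hypothetical extracted layout
    hash_to_paths[md5_of_file(path)].append(path)
dupes = {h: p for h, p in hash_to_paths.items() if len(p) &gt; 1}
</code></pre>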
Cdiscount’s Image Classification Challenge
GTX 1050 Ti solution [43rd place]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cdiscount’s Image Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was crazy enough to enter this competition with my single GTX 1050 Ti and 8 GB of RAM. I soon realized that it was not realistic to train even one epoch of a full CNN with this setup. Instead, I took ImageNet-pretrained ResNet50 and VGG16 models and saved the features from the layer before the FC to disk: 6 days of pure running time just to do that. From that point I was training models of FC layers on top of these features, and I trained a lot of them, experimenting with different FC setups, batch sizes, LRs etc. Overall they were all quite weak; the best of them should be around 0.68. </p> <p>Some findings:</p> <ol> <li>Adding the number of images in a product and the image index to the features gave me a 1% boost. </li> <li>Combining single-image predictions by multiplying probabilities is 0.0025 better than averaging (see the sketch after this post).</li> <li>Although the majority of my models predict a single image, my best model (judging by the ensemble weights) predicts a product by max-pooling the CNN features of each image (inspired by <a href="https://arxiv.org/abs/1505.00880">Multi-view Convolutional Neural Networks for 3D Shape Recognition</a>)</li> </ol> <p>For each such model I predicted the top ten categories and then linearly ensembled them. Overall, my final submission consists of 44 such weak models; together they should score around 0.725 on the LB (I haven't checked it). Then I also took the available <a href="https://www.kaggle.com/c/cdiscount-image-classification-challenge/discussion/41021">Heng CherKeng models</a> (Thanks a lot, Heng!); my ensemble of them gives 0.71054 on the private LB. And again I mixed the top-10 predictions from both ensembles with weights 0.6 and 0.4.</p> <p>I had to struggle a lot with the size of this dataset, lack of RAM, and lack of SSD space. It made me extra careful with how I organized my pipeline and how I could restore anything I had computed so far (in case of an unlucky rm). I have a huge make file with targets for every submission. Because every change is under git and I avoided manual tweaks, it should be possible (in theory) to reproduce everything I did. That is one of the most valuable things I took away from this competition. Code uploaded to git: <a href="https://github.com/NighTurs/kaggle-cdiscount-image-classification">https://github.com/NighTurs/kaggle-cdiscount-image-classification</a></p>
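<p>Finding 2 above (multiplying per-image probabilities) amounts to summing log-probabilities per product. A minimal sketch with a hypothetical function name; the small epsilon guards against log(0):</p> <pre><code>import numpy as np

def combine_product_predictions(image_probs):
    """image_probs: (n_images, n_classes) per-image class probabilities.
    Multiplying probabilities across a product's images is done as a
    log-sum for numerical stability; returns the predicted class id."""
    image_probs = np.asarray(image_probs)
    log_joint = np.log(image_probs + 1e-12).sum(axis=0)
    return int(log_joint.argmax())
</code></pre>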
Cdiscount’s Image Classification Challenge
My brief summary of this competition[0.795]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cdiscount’s Image Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In this competition the data size is huge, and the prediction vectors for each image are huge too: more than 5000 probabilities. So the easiest and most feasible way to ensemble is just a weighted average, guided by feedback from the local validation set.<br>If you want to try a more complex method, please have a look at: <a href="https://www.kaggle.com/c/cdiscount-image-classification-challenge/discussion/45733">https://www.kaggle.com/c/cdiscount-image-classification-challenge/discussion/45733</a></p> <p>Since our models have the same validation set, we can adjust the weights and watch the validation score. I found it very stable in this competition, with such a large dataset.</p> <p>As said in the solution: I extracted the features of the boxes from CRNN and fed them into my multi-input CNN; the first input is the Resnet50 FC features and the second is the CRNN features.<br> If you read the CRNN paper, you will find that you can extract features after the LSTM; the dimension is 512x26. If an image has 5 boxes containing text, I averaged the extracted features to get a 512x26 array, converted it to a 1-D vector, and fed it into the network before the last 2 FC layers, merged with the CNN features.<br> I hope I explained it clearly; if you have questions, please let me know.</p> <p>I permuted the order of the images; I guess it was a good option to add some more randomness.<br> When I trained my models for products with 3 images, I randomly selected 3 images from the products with 4 images.</p> <p>I did so for two reasons: 1. Time was limited. 2. Some NLP papers use the average of the word vectors of a sentence to classify sentences. In this kind of competition we must evaluate the ROI of every method, so I chose this simple and efficient way.</p> <p>It is an interesting competition, and also a tough one.<br><br> I faced the following challenges:<br> 1. A large dataset with 15+ million images and 5000+ categories.<br> 2. Some products have only 1-4 images.<br> 3. CDs/BOOKs are very hard to classify.<br> 4. I estimated the best overall accuracy would be around 0.8; there were also a lot of methods to choose from and a large space to improve, so it was very hard to win.</p> <h2>My solution summary:</h2> <p>1. Making preparations for such a big dataset.<br> 2. Finetuning pretrained models (0.759/0.757 inception-resnet-v2, 0.757/0.756 resnet50).<br> 3. Making full use of the multiple images of a product (0.772/0.771 inception-resnet-v2 and 0.769/0.768+ Resnet50 models).<br> 4. Using OCR to add semantics to the models.
</p> <h2>Preparing:</h2> <p>At an early stage of this competition, I spent 4 days trying to feed the images to PyTorch efficiently; I found the best way was to buy 512G SSDs.<br> I also found that if I wanted to store prediction results in a file, I had to use a sparse matrix; I spent 1.5 days writing code to make sure I could keep the prediction probabilities in a small file (~40M).</p> <h2>Finetuning pretrained models.</h2> <p>We cannot win such a competition only by downloading pretrained models and finetuning them; we must understand the principles behind every model.<br> After getting some statistics of this dataset, I searched for and read all kinds of related papers: network structures, optimizers, training plans...<br> Then I started my experiments on Resnet34 and had the following results:<br> 1. The network structure of almost all pretrained models is designed for ImageNet with 1000 labels, but our dataset has 5270 labels. If we use them directly, there will be a representation bottleneck.<br> 2. I found that training with SGD needs more epochs to converge than with Adam.<br></p> <p>So I modified the head of resnet34: I added a convolution layer with a 1x1 kernel after layer4 of the resnet, which increases the channels from 512 to 5270, and then an FC of 5270*5270.<br></p> <p>And I used Adam with optim.Adam(net.parameters(), lr=lr); the lr plan was:<br></p> <pre> lr = 0.0003
 if epoch &gt; 7:
     lr = 0.0001
 if epoch &gt; 9:
     lr = 0.00005
 if epoch &gt; 11:
     lr = 0.00001
</pre> <p>When I trained for 11.5 epochs with 160x160 patches randomly cropped from the 180*180 images, then predicted and averaged the predictions over the images of each product, I could get a score &gt;0.72 on the public LB.<br><br> Then I thought: why not add one more FC layer? So I added another 5270x5270 FC, and I found that it improved the score by more than 0.5%.<br></p> <p>Since I could train 1 epoch in 2.5 hours on a 4x 1080Ti machine, I could do a lot of experiments. I found that if the result was not good in the first 1-2 epochs, the result would not be good at the end either; this sped up my experiments.<br> Things I tried but did not use in the next stage:<br> 1. Multi-level categories as multi-task.<br> 2. Hard examples, Focal Loss.<br> 3. Dilation.<br> 4. Dropout.<br> <br></p> <p>After I reached these conclusions, I turned to resnet50, and I found resnet50 is better than resnet34: 0.756 on public.<br><br></p> <p>Because I was busy with my everyday job, I just let my GPUs keep running and fine-tuned Resnet101, resnet152, inceptionresnetv2 and inceptionV4. InceptionV4 was a little weak at this stage; the others could easily get a score &gt;0.755 on the public LB.<br></p> <h2>Make full use of multi-images of a product</h2> <p>In the last weeks of this competition I had more time, and I started analyzing the dataset.<br> I had trained my models using the images of a product separately, but when we look at a product, we treat its images as a whole. So I concatenated the images into one image and fine-tuned the models.<br><br></p> <p>That is to say, we can split the trainset into 4 parts: products with 1, 2, 3 and 4 images. If we finetune on products with 4 images, we concatenate the 4 images into one image; if we finetune on products with 3 images, we concatenate the 3 images into one image.
I also finetuned the model on products with only 1 image.<br> I found that among the images of multi-image products there are some images that are invalid or meaningless for classification, and these would be noise for products with only one image. It was easy to finetune: the accuracy improved a lot after 2-3 epochs, and it did not take much time.<br><br></p> <p>After this step I had a cluster of models for resnet50: a model trained with all images, plus 4 models for products with 1, 2, 3 and 4 images. I predicted each product accordingly and ensembled them, and I got a big improvement. Now almost all the models could get an accuracy near 0.77.<br><br></p> <p>I made several submissions just now, and I found that if I ensemble the resnet50 models with the inception-resnet-v2 models, we can get 0.782/0.781 on the private and public LB.<br><br></p> <h2>OCR</h2> <p>I found that CDs and BOOKs are very hard to classify, because we need to know the meaning of the cover!<br> I also wanted to evaluate the performance of all kinds of OCR-related algorithms for my next project, so I spent more than a week on it. I used CTPN to extract the boxes containing text, and the results were quite good. Then I used CRNN to extract the text from the boxes. But I found the extracted text was not good, because the images are small and CRNN was trained for English, and I had no French dataset to retrain it. I could generate one, but I could not find suitable books free of copyright limitations from which to extract French words. Without good words extracted from the CD and BOOK covers I could not use Word2Vec, so I could not provide semantic meaning to the model. As a last resort, I extracted the features of the boxes from CRNN and fed them into my multi-input CNN: the first input is the Resnet50 FC features and the second is the CRNN features.<br> After getting the predictions from this OCR network, I ensembled them with the predictions of Resnet50; I made a lot of submissions in the last week to confirm I did not overfit.<br> I found that the OCR solution contributed ~0.14% to my final submissions.<br> [Edit] I found that if I apply OCR to the 0.782 model, the result improves by 0.35%; this means that part of the improvement from OCR is covered by the multi-model ensemble.</p> <h2>Other models</h2> <p>I also tried densenet161, densenet169 and dpn92; they are much worse than resnet50, but since I had already trained them, I just ensembled them in to add some diversity.<br><br> After ensembling the models, the public LB could reach 0.79.<br><br> To reach my final score, I fine-tuned the 1-image product models with 224/299-sized images accordingly; to add more diversity, I also changed the head to two 4096*4096 FC layers following VGG and ensembled them.</p> <hr> <p>If you want to reproduce my solution and you have 4x 1080Ti, you can get 0.782/0.781 on the private/public LB in 7-8 days with resnet50 and inception-resnet-v2.</p> <p><br> <br></p> <p>Thank you!
I think the most important factors are the Kaggle rank system, and that I have been lucky in several competitions, including this one.</p> <p>I noticed that the order of the images should be varied, so when I trained my model I randomized the order of the images of a product.</p> <p>You can refer to this; please use Adam and the scheduler in this post:<br> <a href="https://github.com/bestfitting/kaggle/blob/master/cdiscount/resnet.py">resnet for cdicount</a></p> <pre><code>if self.img_cnt == 4 and self.mmv == 1:
    img_out = np.zeros([self.img_size * 2, self.img_size * 2, 3])
else:
    img_out = np.zeros([self.img_size, self.img_size * self.img_cnt, 3])
</code></pre> <p>.....</p> <pre><code># Read the images of a product, then:
h, w = img.shape[0:2]
if self.img_size != h or self.img_size != w:
    img = cv2.resize(img, (self.img_size, self.img_size))
img_size = self.img_size
if self.img_cnt == 4 and self.mmv == 1:
    if array_idx &lt; 2:
        img_out[:img_size, array_idx * img_size:(array_idx + 1) * img_size, :] = img
    else:
        img_out[img_size:, (array_idx - 2) * img_size:((array_idx - 2) + 1) * img_size, :] = img
else:
    img_out[:, array_idx * img_size:(array_idx + 1) * img_size, :] = img
</code></pre> <p>Please refer to these two images; you can see that if we concatenate the images into one image, our models get the information of the product as a whole. <br> <img src="https://github.com/bestfitting/kaggle/blob/master/cdiscount/602.png?raw=true" alt="enter image description here" title=""> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/cdiscount/284.png" alt="enter image description here" title=""></p> <p>I cropped 5 160x160 patches from the 180x180 images (left-top, right-top, middle, left-bottom, right-bottom); the score could improve by 0.5% as well.</p> <p>Yes, when I read your post after the competition I had a similar feeling: we tried to solve this problem in different ways, but the methods are similar.</p> <p>I found that hard examples and focal loss could not improve the score or convergence. I guess this is because the eval metric is accuracy, and because of the distribution of the dataset we cannot over-emphasize the hard examples. I just fed all the train images into the network, and I found the distribution of the test-set predictions is almost the same as the trainset.</p> <p>@Ravi Teja Gutta, thanks!<br> Yes, I finetuned all layers at once. If the dataset is large enough, we don't need to finetune layer by layer; in this competition we have so many images that we could even train our nets from scratch.<br>When we start learning a framework, we may find examples finetuning a pretrained network on a small dataset; that's reasonable. But when we have a lot of images, I think it's not necessary to do so. In the Planet Amazon competition I finetuned all layers at once too.<br> You may ask how many images are enough? I think it's determined by the number of images and the images per label: if we have more than 20000 images and plenty of images per label, we may finetune all layers directly.<br> All the results above are based on my experiments. I tried a lot of methods on different datasets, such as different LRs on different layers, or fixing BatchNorm after some epochs in the Carvana competition, but I found that with a good LR scheduler we can get good results without being too complex.<br> But I cannot say for sure that these will still be valid on other datasets; all of this can only be a reference.</p> <p>Kapok, sorry for the late reply. I guess you stored your probabilities as float64; I suggest you multiply them by 255 and store them as uint8. And I use <a
href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html#scipy.sparse.csr_matrix">https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html#scipy.sparse.csr_matrix</a> and save to .npz; you can have a try.</p> <p>Please refer to this file: <a href="https://github.com/bestfitting/kaggle/blob/master/cdiscount/resnet.py">https://github.com/bestfitting/kaggle/blob/master/cdiscount/resnet.py</a></p> <p>I did not try Xception or Inception V3; I guessed Inception V4 and Inception-Resnet-V2 would be better.<br> Accuracy on ImageNet does not always carry over to other datasets. I also tried DPN92, SE-Resnet and SE-Inception, which have all been reported to have better accuracy on ImageNet, but they could only reach the accuracy of resnet34 while taking much, much more time to train.<br> I prefer to use resnet in my everyday job: simple and beautiful.<br></p> <p>Taking a mosaic of 4 images as input is also an option, but it needs more memory and time to train and predict, so I trained a model with all images and finetuned it using products with different numbers of images.<br></p> <p>Yes, I think a sequence model would not bring an improvement, since there are no sequence patterns in these images.</p>
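<p>A minimal PyTorch sketch of the modified head described in this write-up (the 1x1 convolution raising channels from 512 to 5270, plus the extra 5270x5270 FC layer). The class name and the pooling choice are my assumptions; see the linked resnet.py for the author's exact code:</p> <pre><code>import torch
import torch.nn as nn
import torchvision.models as models

class ModifiedResNet34(nn.Module):
    def __init__(self, num_classes=5270):
        super().__init__()
        backbone = models.resnet34(pretrained=True)
        # everything up to and including layer4 (drop avgpool and fc)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # 1x1 conv widens 512 channels to 5270 to ease the representation bottleneck
        self.expand = nn.Conv2d(512, num_classes, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Linear(num_classes, num_classes)  # 5270 x 5270 FC
        self.fc2 = nn.Linear(num_classes, num_classes)  # the extra FC layer

    def forward(self, x):
        x = self.expand(self.features(x))
        x = self.pool(x).flatten(1)
        x = torch.relu(self.fc1(x))
        return self.fc2(x)
</code></pre>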
Cdiscount’s Image Classification Challenge
My solution [9th place private]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cdiscount’s Image Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had a standard scheme:</p> <ol> <li>Train the FC layer with Adam (LR 0.001 -&gt; 0.0001)</li> <li>Polishing, FC only, with SGD (LR 0.001 -&gt; 0.0001)</li> <li>Train the full net with SGD (LR 0.001 -&gt; 0.0001)</li> <li>Manual annealing with SGD (LR 0.01 -&gt; 0.001 -&gt; 0.0001), 1-2 epochs at each stage, repeated twice</li> <li>Train without augs, SGD, LR 0.0001</li> </ol> <p>Tried: Resnext-101, Resnext-50, SE-Resnext-50, Resnet-101, Resnet-152. The best of all was Resnext-101. All nets used a 160 input size and TTA10 for predictions.</p> <p>I tried freezing groups of conv layers. This gave faster convergence in the initial stages, but the final accuracy was not good enough.</p> <p>I tried sampling straight through, sampling randomly, and randomly taking one sample per item, all of this with and without hard negatives. Maybe I messed up something in the logs, but in the end it was best to sample randomly one sample per item while using hard negatives; without them, accuracy was lower.</p> <p>I tried making a collage of all the photos of an item and training the network with a 320x320 input, but the speed was very low, so I skipped this approach.</p> <p>I did not implement updating the weights every n iterations to effectively increase the batch size, so I polished every net on a 4x 1080 Ti machine to increase the batch.</p> <p>I did not experiment with weighted averaging of the photos inside an item. I just hard-coded the answers for clean image hashes for which there was only one class in train, and the remaining predictions were combined with an unweighted geometric mean.</p> <p>Hardware:</p> <ol> <li><p>i7 6700k, 32gb, 2x Titan X (Maxwell)</p></li> <li><p>i7 7700k, 64gb, 2x 1080</p></li> <li><p>i7 5930K, 32gb, 3x 1080Ti</p></li> <li><p>Xeon 2696v3, 64gb, 4x 1080Ti</p></li> </ol>
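<p>A sketch of stages 1 and 3 of the scheme above in PyTorch, under the assumption that torchvision's resnext101_32x8d stands in for the exact Resnext-101 used (the write-up does not name a framework or variant):</p> <pre><code>import torch
import torchvision.models as models

model = models.resnext101_32x8d(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 5270)

# Stage 1: train only the new FC layer with Adam (LR 0.001 down to 0.0001).
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Stage 3: unfreeze everything and train the full net with SGD.
for p in model.parameters():
    p.requires_grad = True
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
</code></pre>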
Cdiscount’s Image Classification Challenge
Our 7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cdiscount’s Image Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, many thanks to Kaggle and CDiscount for hosting this very interesting competition. </p> <p>Congratulations to the prize winners; your performances are impressive and I look forward to reading your solutions. </p> <p>A special kudos to Heng CherKeng for all your sharing and your comments. You are definitely #1 in this competition in style!</p> <p>Thanks to my team: Kyle, alup and voglinio; it was a pleasure to work and share together. A special kudo to Kyle, whose ensembling speed was very impressive.</p> <p>Here is a quick summary of our solution. It is a combination of 3 things:</p> <ul> <li><p>Baseline CNN models: InceptionResnetv2, Resnet101, SE-InceptionV3, Xception. These models got performances from .69+ to .72+.</p></li> <li><p>We extracted the bottleneck features from these models, grouped them by product (+ padding) to shape (bs, 4, dim_features), and trained different models to combine the images and classify the object (see the sketch after this post). I shared my top-level models here: <a href="https://www.kaggle.com/lamdang/models-to-combine-images-and-predict-items">https://www.kaggle.com/lamdang/models-to-combine-images-and-predict-items</a></p> <ul><li>RNN: LSTM and GRU</li> <li>flat model: just flatten the inputs and put dense layers on top</li> <li>NetVlad: implementation based on <a href="https://arxiv.org/abs/1706.06905">https://arxiv.org/abs/1706.06905</a> and <a href="https://arxiv.org/abs/1511.07247">https://arxiv.org/abs/1511.07247</a>, and also on <a href="https://github.com/antoine77340/LOUPE">https://github.com/antoine77340/LOUPE</a>. Kudos to Antoine Miech and the Inria researchers for this great work. These models individually get around .73-.74; when averaged they get around .75</li> <li>Doing the previous steps and ensembling across baseline models gets around .77</li></ul></li> <li><p>We extracted text boxes with EAST <a href="https://github.com/argman/EAST">https://github.com/argman/EAST</a> and did OCR with <a href="https://github.com/meijieru/crnn.pytorch">https://github.com/meijieru/crnn.pytorch</a>. We did not do language-model correction on the output. This boosted the score further by ~0.01. We only took the detection and OCR models out of the box and did not retrain them; further tuning of these models on this dataset may improve their performance and give an extra boost to the global model. Also, we could only manage to use the text in a bag-of-char-ngrams fashion; other ways of exploiting it could also improve the result.</p></li> </ul> <p>So that's it; globally it is quite simple. I am really happy to have learned about some really nice aggregation techniques like NetVlad and about text detection/OCR models.</p>
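<p>A minimal sketch of one of the top-level models referenced above: per-image bottleneck features grouped to shape (bs, 4, dim_features) with zero-padding and fed to a GRU. Class and parameter names are mine; the author's actual models are in the linked kernel:</p> <pre><code>import torch
import torch.nn as nn

class ProductGRU(nn.Module):
    def __init__(self, dim_features=2048, hidden=512, num_classes=5270):
        super().__init__()
        self.gru = nn.GRU(dim_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, feats, lengths):
        # feats: (bs, 4, dim_features) zero-padded CNN bottleneck features;
        # lengths: actual number of images per product (1 to 4)
        packed = nn.utils.rnn.pack_padded_sequence(
            feats, lengths.cpu(), batch_first=True, enforce_sorted=False)
        _, h_n = self.gru(packed)       # h_n: (1, bs, hidden) final state
        return self.fc(h_n.squeeze(0))  # product-level class logits
</code></pre>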
Cdiscount’s Image Classification Challenge
Our solution [5th place]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cdiscount’s Image Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Since the output of each CNN is a 5270-dimensional vector whose elements sum to 1, the i-th element of this vector is interpreted as the probability that the image passed to the CNN has class i.</p> <p>The key idea was to train several different convolutional neural networks and then build an ensemble.</p> <p>We trained many different CNNs: Resnet50, Resnet101, Resnet152, InceptionResnetV2, DenseNet161. Each one gives a 74-75 LB score, except DenseNet161, which gives about 73. Using simple averaging we got 77.5. However, it was not enough, and we decided to train a second-layer model.</p> <p>We had only one CNN which was trained using a KFolds split: Resnet-50. For each product and for each image of this product we extracted the TOP-5 probabilities and the 5 corresponding classes, so there were 5 categorical and 5 float features for each image. We also noticed that one image could be present in the data multiple times; thus, we computed an MD5 hash for each picture and matched it with the most common class IDs. So we built another 10 features for each image: the TOP-5 most common class IDs with such a hash, and the number of images with such a hash.</p> <p>After this procedure, we had 80 raw features (40 numerical and 40 categorical). We preprocessed the categorical features using different approaches. Since it is impossible to train xgboost on 5270 classes, we added a new feature "possible_class_id" and changed the multiclass problem to binary classification. Next, we trained a model which predicts 'is it true that this product has a class_id equal to "possible_class_id"'. Therefore, the new dataset consisted of samples (product_features, possible_class_id, binary_target). We only used samples with a possible_class_id which was one of the TOP-5 predictions of the CNN or one of the possible classes based on the hash trick.</p> <p>Using the new dataset it was possible to train a second-layer model. We tried different models: XGBoost, LightGBM, CatBoost, Random Forest, Extra Trees. They gave similar results, and averaging their predictions improved our score. Since we didn't want to spend too much time finding the best coefficient for each model in our ensemble, we decided to train a third-layer model.</p> <p>For the third layer we used:</p> <ol> <li>The features which were used to train the second-layer models</li> <li>Predictions of the second-layer models. We also computed several statistics of the predictions over the set of possible class ids.</li> <li>Priors for the categorical features. For each class_id we computed the frequency of this class in the whole dataset and its frequency among the closest products based on their IDs (the 50k closest products). This was helpful because the dataset has a slight leak.</li> </ol> <p>Our third-layer models are three different LightGBM models and an XGBoost one. Then we used a weighted arithmetic mean of these models to compute predictions for the test set.
It seems that we could improve our score by building a fourth-layer model :)</p> <p>For the test predictions we used two different approaches:</p> <ol> <li>First, average the probabilities extracted from the CNNs, then build features and use them in the second- and third-layer models.</li> <li>Extract the predictions of each CNN, build features, use them for predictions on the second and third layers, and only then merge the predictions of the different CNNs.</li> </ol> <p>These two approaches gave almost the same score. We averaged their predictions and got a slight improvement.</p> <p>Some notes:</p> <ul> <li><p>I also tried to use predictions on the test set for training a CNN. It improved the score of my single Resnet101 from 0.749 to 0.754. I stopped training after 1.5 epochs because such a model showed worse results when I tried to feed its predictions to XGBoost. However, I think it was possible to get 76+ from this single model using this approach. I also tried to apply this trick to the second-layer models; the score also improved, but it decreased the score of the third-layer model. I do believe that we could have gotten better results by applying this trick to our third-layer models; however, we didn't have extra time for doing it :(</p></li> <li><p>For training the CNNs we (Alexey Kharlamov and me) used this approach: first, initialize a model using pretrained ImageNet weights, with no frozen layers and augmentation disabled. Then, as soon as the validation score stopped growing, we added augmentation and doubled the batch size. After this procedure, the score began to grow sharply. But after a while the growth stopped, so we repeated this procedure again and again.</p></li> </ul> <p>Congratulations to the winners and thanks to all the participants!</p> <p>We didn't modify the CNNs much: we only changed the pooling type to global average pooling and the size of the last dense layer from 1000 to 5270. We used 160x160 central crops, random shifts, rotations and scaling. We used different data augmentation for each CNN because of their different convergence; as I mentioned above, we kept adding augmentation until the network converged. For example, the final rotation angle for Resnet101 is 80 degrees, but only 25 degrees for DenseNet161.</p>
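<p>A sketch of the binary reformulation used for the second layer, with hypothetical helper and argument names. Each candidate class (from the CNN TOP-5 or the hash trick) becomes one binary row:</p> <pre><code>import numpy as np

def make_binary_rows(product_features, top5_classes, hash_classes, true_class=None):
    """Turn one product into several (features + possible_class_id, target) rows."""
    candidates = sorted(set(top5_classes) | set(hash_classes))
    rows, targets = [], []
    for possible_class_id in candidates:
        rows.append(np.concatenate([product_features, [possible_class_id]]))
        if true_class is not None:  # training mode: binary target per candidate
            targets.append(int(possible_class_id == true_class))
    return np.array(rows), np.array(targets)
</code></pre>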
Cdiscount’s Image Classification Challenge
[0.776] Single model solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cdiscount’s Image Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I managed to get 0.77603 on Private Set with single model (my final result is 0.77883 - it's an ensemble of two models). Here's what I did:</p> <p>1) First I finetuned the pre-trained SE-ResNet-50 for about 7 epochs: <br> - I took the weights from here: <a href="https://github.com/hujie-frank/SENet">https://github.com/hujie-frank/SENet</a> (I've used Caffe) <br> - Finetuned the network for 7 epochs using Nesterov Momemtum SGD (momentum = 0.9, batch_size = 256) with the following training schedule: <br> - epochs 1..5: lr = 0.001 <br> - epoch 6: lr = 0.0001 <br> - epoch 7: lr = 0.00001 <br> - The input image size was 161x161. I've used random crops + random horizontal flips for augmentation. <br> - I've used horizontal flips for test time augmentation. After averaging the pictures of the products I've got 0.7133 <br> - I tried to finetune network with the exact same protocol, but without random crops (resized images to 161x161 and used only random horizontal flips). It turned out almost 0.01 better than with crops: 0.721. So I decided that I would not use augmentation, except random horizontal flip. <br> - I decided to try KNN on top of the features of SE-ResNet-50: <br> - I've generated features for the train, test and validation (formula for averaging flip with noflip: features = l2_normalize(l2_normalize(flip_features) + l2_normalize(noflip_features))). The matrix for the training set weighed about 96 GB and I used numpy.memmap, because all this did not fit in RAM. I saved the Matrix with features on the SSD. <br> - For each picture from the test, I found 5 closest (by cosine) pictures from the train for each class. I stored the search result on the HDD via memmap (Matrix size 5270 x N_TEST_IMAGES x 5 - weighed about 300 GB for the test.). <br> - Next, I calculated the score of each class for each picture: scores[picture, label] = numpy.power((1 + search_scores[label, picture, :]) / 2, 35).sum() (the sum of the scores from 5 closest pictures). The formula was tuned on validation. <br> - For each image, I stored scores from the top10 classes (ie equated the scores of other classes to 0). Score of the product was the sum of scores of pictures. After these manipulations I've got 0.752 on LB. Thus KNN worked at 0.03 better than softmax. <br> 2) I decided to use SE-ResNet-101 for the final model (I had to convert weights to PyTorch, because Caffe consumes too much memory). <br> - Finetuned the network for 15 epochs using Nesterov Momemtum (momentum = 0.9, batch_size = 512) with the following training schedule: <br> - epochs 1..9: lr = 0.01 <br> - epochs 10..15: lr = 0.001 <br> - The size of the input is 161x161. Of the augmentations, only Random Horizontal Flip was used. In addition, I added dropout with p = 0.2 after global average pooling. <br> - I've used horizontal flips for test time augmentation. After averaging the pictures of the products I've got 0.746 <br> - KNN (according to the scheme described above) gave 0.765 <br> - I tried to average the classifier predictions with KNN. 
To do this, I multiplied the KNN scores by 300, took the softmax over the top-10 classes, and summed it with the classifier predictions. This gave 0.776 on the Private Set. <br> 3) For the final ensemble, I recalculated the KNN combining the training set with the validation set, and averaged the classifier predictions with those of SE-ResNet-50. This gave 0.77883. <br> P.S. My hardware: Intel Core i7 5930k, 2x1080, 64 GB of RAM, 2x512GB SSD, 3TB HDD</p>
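<p>A numpy sketch of the KNN class-scoring step described above, assuming the (n_classes, n_images, 5) array of cosine similarities has already been computed; the array orientation and helper name are my assumptions, but the formula follows the write-up:</p> <pre><code>import numpy as np

def knn_class_scores(search_scores, top_k_classes=10, power=35):
    """search_scores[label, picture, :]: cosines of the 5 closest train
    images of each class. Class score = sum of ((1 + cos) / 2) ** 35."""
    scores = np.power((1.0 + search_scores) / 2.0, power).sum(axis=-1)
    scores = scores.T                          # (n_images, n_classes)
    # keep only each image's top-10 classes, zero out the rest
    drop = np.argsort(scores, axis=1)[:, :-top_k_classes]
    np.put_along_axis(scores, drop, 0.0, axis=1)
    return scores  # a product's score is the sum over its images' rows
</code></pre>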
Challenges in Representation Learning: The Black Box Learning Challenge
1st place entry
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Challenges in Representation Learning: The Black Box Learning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Earlier, I briefly described the first-place entry in the Blackbox challenge. This is the full description.</p> <p><strong>My Approach</strong></p> <p>My first 5 entries into this contest used sparse filtering (see below), feature selection and an svm (or a random forest in one instance). The highest scoring of those got 0.6384 on the public board, and ended at 0.6300 on the private leaderboard. That entry used the best 60 features from a set of size 160. Throughout the competition, I used between a third and a half of the features developed, most often around 40%. The svm used throughout was libsvm. Most of my entries used a c-svc with a linear kernel. I also had some entries with an rbf kernel or nu-svc/linear kernel. For the most part, the linear kernel was fine, and the rbf kernel was not clearly better. That isn't surprising given that with most feature sets, I had a couple of hundred features for the 1000 labeled examples. For the random forest submission, I used R's randomForest package.</p> <p>The randomForest package in R can generate variable importance scores. It makes the scores by first training an RF classifier and then taking the out-of-bag data, permuting each field in turn, and reclassifying it. The decrease in OOB accuracy with the field permuted is the measure of the variable importance. I used this throughout to rank features by their importance, and select the more important ones. I used the variable importance rankings in a one-pass greedy manner. I experimented with panel-wise selection (cutting the first k% and then k% of the rest, etc.), but it didn't do any better, so I dropped it. Initially, I used randomForest just because it was already written and I wanted a quick test of the concept. Later, I tried a couple of wrapper methods, but couldn't find anything that worked better. Unfortunately this means that my system has to bounce data back and forth between Octave and R.</p> <p>My best early feature set gets 0.60/0.59 when used whole on the public/private leaderboards (I didn't submit this during the competition), with a linear kernel svm. It got 0.6384 on the public board during competition after being cut to its best 60 features. I just re-submitted it for a 0.6480 using my current method, which trims rows out of the weight matrix rather than columns out of the features, resulting in a little bit different (often better) scaling. In any case, on sparse filter features, on this problem, using randomForest for feature selection was worth about 4-5%.</p> <p>It is possible to get better than 60 with just the basic SF features + svm system (no feature selection). The best result that I can find for a whole feature set is 62.4%. That set is size 320.
There is a positive relationship between the number of features used and accuracy, but it is not huge.</p> <p>After the fifth entry, most of what I turned in involved selecting features across different sets and combining them, then training a single classifier on the selected, aggregated features. Except for a few experiments, I did feature selection only within sets, and then combined the separately selected features into an aggregate set. The few experiments that I did involving selection across sets were very clearly worse (-2%).</p> <p>I found that surprising. It seems to mean that having possibly redundant, but high-scoring features is good. It may have to do with regularization. If an identical feature is included twice in a classifier that uses regularization, the solver will give them equal weight, and their combined weight will be higher than if only one were present. That could improve performance if the redundant features were good. At this point, this is just speculation, though; I do not know how similar the top features out of different sets really are.</p> <p>Some of these aggregate feature sets got pretty large. The largest set that I turned in had size 400. My other scored sets were size 274, 300 and 366. All of the highest scoring feature sets involved at least a little cherry-picking of 'good' feature sets, that is, they were aggregated from sets that I knew to be good when used whole. Also, for a few of them, I chose numbers of components that I thought would work well, rather than just taking, say, 40%. In the challenge code on bitbucket, there is a script for generating a bunch of weights and features and other scripts for training a model directly from such sets, selecting to a particular number of features. I have run those scripts a couple of times, and the best result I've found was 69.16, so the hand-picking of good sets was worth around 1%.</p> <p><br> <strong>The high-scoring entry</strong></p> <p>Although almost all of my submissions were single classifiers, the actual winning entry was a small ensemble of three previous submissions. It got 0.7022 on the private leaderboard. It wasn't my highest scoring entry on the public leaderboard. Each of its components was an aggregate of selected features as described above. I combined them with simple voting, using mode(), which returns the lowest value in the case of a tie. That seemed reasonable, since the classes are skewed toward low values. I was never committed to using an ensemble at all in this. Two other scored entries using single classifiers got 0.7018 and 0.7016. My highest scoring public leaderboard entries also used single svms for classifiers.</p> <p><br> <strong>Sparse Filtering.</strong></p> <p>Sparse filtering is a relatively new method for unsupervised feature learning, introduced by Ngiam et al. in 2011. The paper on it is here:</p> <p><a id="x_SparseFilterPaper" href="http://cs.stanford.edu/~jngiam/papers/NgiamKohChenBhaskarNg2011.pdf">Sparse Filtering Paper</a></p> <p>It computes a set of features in unsupervised learning using an objective that is not reconstruction-based. Instead, it optimizes for characteristics of good features directly.
Specifically, it seeks to ensure that features are about equally active overall, and that each example has a few, and only a few, significantly active features. (A minimal code sketch of this objective appears at the end of this post.)</p> <p>This was the first time that I have ever used, or even heard of, sparse filtering. Thanks to Foxtrot for pointing the method out. Obviously, the method performed well. Also, I found it to be pretty easy to get up and running. In the future, I will be pretty quick to reach for this method on semi-supervised tasks. Having said that, it has a few quirks:</p> <ol> <li>Different runs of the algorithm return different feature sets that, in general, have different classification performance. This difference can be significant. For instance, I ran 3, 100-feature sets together on the same data, stopping every few hundred iterations to train a linear kernel svm with the resulting features. The best one got 54.1% after 800 iterations, the worst, 49.4%. </li><li>The sparse filtering objective function is not a good guide to classification performance. Feature sets that have higher classification performance can also have higher loss under the sparse filtering objective. </li><li>The classification performance of the same feature set can change for the better or for the worse during unsupervised sparse filter training. For instance, a feature set that I used in my first competition entries led to classifiers with an initial maximum of near 56% in accuracy (local 10-fold cv) at 800 iterations, then it went down to 55% and stayed there for hundreds of iterations, before ending at 57.6% at 2000 iterations. The next day, I tried training it longer and it headed back down. </li><li>On the positive side, feature sets that are good for classification are usually relatively good at any iteration count. That is, if you run a few sets at the same time, one that starts out as a good set will probably be good at higher iterations, too. In the test described in (1), the bad set produced a classifier that had 48% at 200 iterations; the good set was at 51% at the same point. </li></ol> <p>Some of these are just the usual unsupervised learning caveats. For instance, k-means will return different centroids in different runs of the algorithm. If the centroids are used as the basis for supervised learning features, the resulting classifiers will, in general, have different performance. Likewise, unsupervised learning objectives are generally not good guides to downstream performance. The bad point, out of those listed above, is the third. It means that we really don't know when to stop.</p> <p><br> <strong>What else I tried.</strong></p> <p>Initially, I tried running svms and random forests on the raw data, the raw data mean-normalized and sigma-scaled, PCA with whitening, reduced to various numbers of components, and ICA. I used the fastICA package in R for the ICA. I used the training and test data to get the PCA basis, the whitening matrix, and to do ICA, but I didn't use any of the bulk data for that. Once when I did use some of the bulk data in a PCA basis, accuracy fell back a bit, which I attributed to a difference in distribution between the test and bulk data. With an rbf kernel, the svm with ICA or PCA whitened features topped out at around 47% in local cross-validation, which is probably around 50% on the board, but I didn't submit any of that. The random forest was a little lower.
The best number of components that I found was around 35 for both. That is surprisingly low, but this data seems to have a lot of pretty strongly correlated features.</p> <p>Later in the competition, after I had sparse filtering + feature selection + svm working pretty well, I spent most of my time optimizing that. I did try k-means features using some of the Coates, Lee and Ng code, using triangle activation. They were better than PCA whitened or ICA features, but only by a few percent, and by that time, my current approach was in the mid-to-high 60's, so I abandoned it fairly quickly. I tried selecting and aggregating k-means features, and that helped a bit, but not as much as with sparse filter features. I was unable to combine sparse filter features and k-means features to any good effect. I also tried a voted ensemble of sparse filter features without feature selection, each with its own svm (that is, one SF feature set used whole, one svm, repeat). I only worked on that for a bit. It got to a bit over 64% on the board. At one point, I tried stacking sparse filter features, but I didn't get that to work well. Foxtrot says on his blog that he tried two layers and that he didn't find it to be any better, either. I also tried a couple of other activation functions. ReLU was bad, quadratic was better, but not as good as the soft absolute value function that is already used, so I didn't use either. Since initialization matters, I tried a few other initial distributions: Laplacian, randomly selected datapoints... there may have been one other. None of that helped.</p> <p><strong>Links</strong></p> <p>Ngiam's paper code, which I adapted in this challenge, is here:</p> <p><a id="x_SparseFilteringCode" href="https://github.com/jngiam/sparseFiltering">Ngiam's Sparse Filtering Code</a></p> <p>My challenge code is on my Bitbucket account here:</p> <p><a id="x_bitbucket" href="https://bitbucket.org/dthal/blackbox-challenge-code">Challenge Code</a></p> <p>Foxtrot's blog has some other results about sparse filtering:</p> <p><a id="x_fastml" href="http://fastml.com/more-on-sparse-filtering-and-the-black-box-competition/">fastml</a></p> <p>Finally, I'd like to thank Ian and the organizers for putting on a good challenge, Jiquan Ngiam and his coauthors for coming up with this interesting new method, and Foxtrot for bringing it to my attention.</p> <p>Happy Hacking,</p> <p>doubleshot</p>
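<p>As promised above, a minimal numpy sketch of the sparse filtering objective (following Ngiam et al.: soft-absolute features, L2 normalization across examples and then across features, L1 penalty). This is my own condensed rendering, not the challenge code linked above:</p> <pre><code>import numpy as np

def sparse_filtering_loss(W, X, eps=1e-8):
    """W: (n_features, n_dims) weights; X: (n_dims, n_examples) data."""
    F = np.sqrt((W @ X) ** 2 + eps)  # soft absolute value activation
    # normalize each feature (row) across examples, then each example (column)
    F = F / np.sqrt((F ** 2).sum(axis=1, keepdims=True) + eps)
    F = F / np.sqrt((F ** 2).sum(axis=0, keepdims=True) + eps)
    return F.sum()  # entries are nonnegative, so this is the L1 penalty

# Minimized with an off-the-shelf optimizer (e.g. L-BFGS) over W.ravel().
</code></pre>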
Challenges in Representation Learning: The Black Box Learning Challenge
2nd place entry (code+paper)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Challenges in Representation Learning: The Black Box Learning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I attach my submission code and the workshop paper I will submit. Because the paper is a draft version, please do not link this version of the paper on a blog or website. And though I really used this submission code, the code used in the paper is a little different from it. I will post the well-organized code with the final version of my paper later.</p> <hr> <p><strong>The full description of my approach</strong></p> <p>At first, I used a 1-hidden-layer neural network with rectified hidden units + a softmax output unit. The private score of pure supervised learning with the 1000 labeled examples is 0.53~0.57. At that time I used simple mini-batch SGD with a fixed learning rate + momentum.</p> <p>For the unlabeled data, I was searching for a simpler way of semi-supervised learning without pre-training. So I tried the <strong>pseudo-label</strong> method and ranked 1st at that time. For performance, I used a <strong>sigmoid output unit</strong> (inspired by CAE) and <strong>simultaneous training with labeled and unlabeled data.</strong></p> <p>The important thing is that <strong>pseudo-labels are re-calculated at every weight update</strong>. If we calculate pseudo-labels once, after training with only labeled data, the pseudo-labels might be less accurate because the network is overfitted. After training several initial epochs with only labeled data, the network should be trained with labeled and unlabeled data using continuously re-calculated pseudo-labels. This scheme really improves generalization performance. Private score is ~0.65.</p> <p>In training with pseudo-labels, <strong>the balance between labeled data and unlabeled data is also important.</strong> Because there is far more unlabeled data than labeled data, the mini-batch size should also be larger, and the balancing coefficient in the loss function should increase linearly with the epoch. (The details of this scheme are in the paper; a sketch follows below.)</p> <p>The next key ingredient is dropout. <strong>Dropout</strong> is an amazing technique for supervised learning of deep neural networks. Although superficially my method looks like supervised learning, the dropout technique can boost it significantly. Private score is ~0.6844.</p> <p>To reach the winning entries, I also used <strong>polarity splitting</strong> (not included in the paper). Some nice papers on image recognition use polarity splitting (inspired by "The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization", Adam Coates and Andrew Y. Ng, ICML 28, 2011). In the case of rectifier units, the negative part of the net activation is not used, so additional features using -W are almost always helpful. For this, I trained the network once more with W and -W, so the number of features was doubled. Private score with this technique is ~0.6958.</p> <hr> <p><strong>* My code requires Jacket, a GPU Matlab toolbox</strong>. It is easy to convert to a CPU version: grand -> rand, gsingle -> single, grandn -> randn, gzeros -> zeros.</p>
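<p>For readers who want to try the pseudo-label scheme described above, here is a minimal PyTorch sketch of one training step. The ramp constants (t1, t2, alpha_max) are illustrative placeholders, not the paper's exact values, and the model is assumed to exist:</p>
<pre><code>import torch
import torch.nn.functional as F

def alpha(epoch, t1=10, t2=60, alpha_max=3.0):
    # Balancing coefficient: zero during the initial labeled-only epochs,
    # then increased linearly with the epoch (constants are illustrative).
    if epoch < t1:
        return 0.0
    if epoch < t2:
        return alpha_max * (epoch - t1) / (t2 - t1)
    return alpha_max

def pseudo_label_step(model, x_labeled, y_labeled, x_unlabeled, epoch):
    loss_labeled = F.cross_entropy(model(x_labeled), y_labeled)
    logits_unlabeled = model(x_unlabeled)
    # Pseudo-labels are re-computed from the *current* weights at every update.
    pseudo = logits_unlabeled.detach().argmax(dim=1)
    loss_unlabeled = F.cross_entropy(logits_unlabeled, pseudo)
    return loss_labeled + alpha(epoch) * loss_unlabeled
</code></pre>
<p>The returned loss is then backpropagated as usual; the key point the write-up stresses is that the argmax is taken inside the step, so the pseudo-labels track the network as it trains.</p>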
Challenges in Representation Learning: The Black Box Learning Challenge
models
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Challenges in Representation Learning: The Black Box Learning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used a 10-layer network as a base, ensembled 400 such networks, plus other SVMs and random forests - but it overfit.</p> <p>The structure is like:</p> <p>INPUT - Autoencoder - Autoencoder - Autoencoder - Autoencoder - Maxout - Rectified Linear - Maxout - Softmax - Argmax</p> <p>Unsupervised learning helped the Maxout-Rectified Linear-Maxout-Softmax-Argmax network a lot, but contributed to overfitting.</p> <p>My classmate AuroraXie didn't make a complex ensemble and independently used just one deep network, and got a similar score on both the public and private boards. I ensembled 1000+ different complex models, and dropped from 3rd in public to 7th in private.</p> <p>Congratulations to the winners! And does anyone else, e.g. 3rd or 4th, use deep networks? It seems deep networks failed this time.</p> <p>That's interesting. I also used 1000+ random forests for blending, but it contributed to extreme overfitting.</p> <p>Your blending is awesome!</p> <p>[quote=Sergey Yurgenson;24813]</p> <p>My model was a blend of multiple 800x100 NNs with dropout and RFs trained on Sparse Filtering features. The most interesting and significant gain (from ~0.6 to ~0.66) I got when I used RF as a blending method, using each class's probabilities from different models as inputs.</p> <p>[/quote]</p> <p>Using all data to train or just the 1000 labeled examples? 98% is awesome... if so, we have a great gap to it.</p> <p>[quote=Ian Goodfellow;24795]</p> <p>All methods that have been applied to it before depended heavily on knowledge of what the data was. The best result I know of is just over 98% accuracy.</p> <p>[/quote]</p> <p>You can modify train.py and add what you want in the main_loop. The other method is using serial.load to load a pkl file, then getting its layers and using the fprop function to get the result of each layer.</p> <p>Indeed, I have done both, but it is very dirty. So I think it's better to keep it private :(</p> <p>Here are two pictures I posted on my Chinese weibo, and I think they may be helpful to you. The first is using high-level features to train an SVM. The other picture is two autoencoders, independently mapping the training data to 2 dimensions. The first can achieve 0.65+ with a random forest; the second can only achieve 0.30 with the same config of random forest.</p> <p>[quote=José A. Guerrero;25110]</p> <p>I'm interested in how to do feature extraction with pylearn2. I have a DAE trained with the labeled/unlabeled data and I want to extract the features for the labeled training set and then use other tools like RF. I think TransformerDataset should be useful but I don't know how to use it exactly. Any thoughts?</p> <p>[/quote]</p>
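<p>Referring back to Sergey Yurgenson's blending note quoted above: here is a minimal scikit-learn sketch of using a random forest as the blender over per-model class probabilities. The function and variable names, and the tree count, are mine, for illustration only:</p>
<pre><code>import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_probability_blender(probas_list, y_holdout, n_trees=500):
    # probas_list: one (n_samples, n_classes) probability matrix per base model,
    # all computed on the same held-out examples with true labels y_holdout.
    X_meta = np.hstack(probas_list)  # concatenate class probabilities side by side
    blender = RandomForestClassifier(n_estimators=n_trees)
    blender.fit(X_meta, y_holdout)
    return blender

# At prediction time, stack the base models' test-set probabilities the same
# way and call blender.predict(np.hstack(test_probas_list)).
</code></pre>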
Chess ratings - Elo versus the Rest of the World
2nd place: TrueSkill Through Time
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Chess ratings - Elo versus the Rest of the World <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here's a little pic of my submission history: [image: http://jhoward.fastmail.fm/images/forum/SubmissionPath.PNG]</p> <p>The points show the public leaderboard score, and the final score, for each submission. As you can see, they are very closely related (r^2 = 0.97). Kaggle has previously shown that a pool of other submissions from the leaderboard do not have such a close relationship - I think this is because I only had a short time, so my submissions really needed to be based on a solid theoretical basis, rather than parameter tuning. Although parameter tuning can easily overfit, you can see from the above picture that carefully targeted changes can allow you to get a good idea of how you're doing from the leaderboard. (I also found a similar level of relationship between my validation set results and the leaderboard results.)</p> <p>The lines connecting the points show the progress of my scores over time. You can clearly see how, after the first few days, going from standard TrueSkill over to TrueSkill Through Time, the results make a sudden jump. You can see how up until that time, I was mainly going backwards!</p> <p>You can also see that after I switched to TrueSkill Through Time, I had a few days of going nowhere - up until the last few days, when I forced myself to take a more thoughtful approach, due to the lack of time.</p> <p>One takeaway from this chart, I think, is that a few days of failure is no reason to give up - improvements tend to be sudden, as new insights are made, rather than gradual. So doing well in these competitions requires a certain amount of resilience - even when things are going badly on the leaderboard, you're still learning, and still have the chance to make improvements as long as you're still trying.</p> <p>Another takeaway is that it's better to try lots of completely different things, rather than trying to precisely tune whatever you have so far. At least in this competition, for my results, I saw big improvements from adding substantial pieces to the algorithm, and not much from fine-tuning the details.</p> <p>I would like to make my code available; however, I can't find any license information for Microsoft Research's TrueSkill Through Time code. It is shown as being (c) Ralf Herbrich - perhaps if one of you guys knows how to get in contact with him, you could ask him to provide a distribution license for the code (e.g. Microsoft Permissive License).</p> <p>Wow, this is a surprise! I looked at this competition for the first time 15 days ago, and set myself the target to break into the top 100. So coming 2nd is a much better result than I had hoped for!... I'm slightly embarrassed too, because all I really did was to combine the clever techniques that others had already developed - I didn't really invent anything new, I'm afraid. Anyhoo, for those who are interested, I'll describe here a bit about how I went about things.</p>
<p>I suspect in many ways the process is more interesting than the result, since the lessons I learnt will perhaps be useful to others in future competitions.</p> <p>I realised that, by starting when there were only 2 weeks to go, I was already a long way behind. So my best bet was to leverage existing work as much as possible - use stuff which has already been shown to work! Also, I would have to stick to stuff I'm already familiar with, as much as possible. Therefore, I decided initially to look at Microsoft's TrueSkill algorithm: there is already a C# implementation available (a language which I'm very familiar with), and it's been well tested (both in practice, on Xbox Live, and theoretically, in various papers).</p> <p>So, step one: import the data. The excellent <a href="http://www.filehelpers.com/">FileHelpers</a> library meant that this was done in 5 minutes.</p> <p>Step two: try to understand the algorithm. Jeff Moser has a <a href="http://www.moserware.com/2010/03/computing-your-skill.html">brilliant exposition</a> of how TrueSkill works, along with full source code, which he most generously provides. I spent a few hours reading and re-reading this, and can't say I ever got to the point where I fully understood it, but at least I got enough of the gist to make a start. I also watched the very interesting <a href="http://tv.theiet.org/technology/infopro/turing-2010.cfm">Turing Lecture</a> by Chris Bishop (whose book on pattern recognition is among the most influential books I've read over the years), which discusses the modern Bayesian graphical model approach more generally, and briefly touches on the TrueSkill application.</p> <p>Step three: make sure I have a way to track my progress other than through leaderboard results (since we only get 2 submissions per day). Luckily, the competition provides a validation set, so I tried to use that where possible. I only ever did my modelling (other than final submissions) using the first 95 months of data - there's no point drawing conclusions based on months that overlap with the validation set!</p> <p>I also figured I should try to submit twice every day, just to see how things looked on the leaderboard. My day-one submission was just to throw the data at Moser's class using the default settings. I noticed that if I reran the algorithm a few times, feeding in the previous scores as the starting points, I got better results. So I ran it twice, and submitted that. Result: 0.696 (1st place was about 0.640 - a long way away!). (For the predictions based on the scores, assuming the scores for [white, black] are [s1, s2], I simply used (s1+100)/(s1+s2). The 100 on top gives white a little advantage, and was selected to get the 54% score that white gets on average.)</p> <p>For the next few days, I went backwards. Rather than looking at graphs of score difference vs win%, I assumed that I should switch to a logistic function, which I did, and I optimised its parameters using a simple hill-climbing algorithm. This sent my score back to 0.724. I also tried optimising the individual player scores directly. This sent my score back to 0.701. This wasted effort reminded me that I should look at pictures before I jump into algorithms.</p>
<p>A graph of win% against white score (with separate lines for each quartile of black score) clearly showed that a logistic function was inappropriate, and also showed that there were interactions that I needed to think about.</p> <p>So, after 5 days, I still hadn't made much improvement (minor tweaks to TrueSkill params had got me to 0.691, barely any improvement from day one). So I figured I needed a whole different approach. And now I had only 10 days to go...</p> <p>It concerned me that TrueSkill took each individual match and updated the scores after every one - it never fed the later results back to re-score the earlier matches. It turns out that (of course!) I wasn't the first person to think about this problem, and that it had been thoroughly tackled in the "<a href="http://blogs.technet.com/b/apg/archive/2008/04/05/trueskill-through-time.aspx">TrueSkill Through Time</a>" paper from MS Research's Applied Games Group. This uses Bayesian inference to calculate a theoretically optimal set of scores (both mean and standard deviation, by player).</p> <p>Unfortunately the code was written for an old version of F#, so it no longer works with the current version. And it's been a while since I've used F# (actually, all I've done with it is some Project Euler problems, back when Don Syme was first developing it; I've never actually done any real work with it). It took a few hours of hacking to get it to compile. I also had to make some changes to make it more convenient to use as a class from C# (since it was originally designed to be consumed from an F# console app). I also changed my formula for calculating predictions from scores to use a cumulative gaussian - since that is what is suggested in the TrueSkill Through Time paper. My score now jumped to 0.669.</p> <p>The paper used annual results, but it seemed to me that this was throwing away valuable information. I switched to monthly results, which meant I had to find a new set of parameters appropriate for this very different situation. Through simple trial and error I found which params were the most sensitive, and then used hill-climbing to find the optimum values. This took my score to 0.663.</p> <p>Then I added something suggested in the Chessmetrics write-up on the forum - I calculated the average score of the players that each person played against. I then calculated a weighted average of each player's actual score and the average of their opponents'. I used a hill-climbing algorithm to find the weighting, and also weighted it by the standard deviation of their rating (as output by TrueSkill/Time). This got me to 0.660 - 20th position, although later someone else jumped above me to push me to 21st.</p> <p>The next 5 days I went backwards again! I tried an ensemble approach (a weighted average of TrueSkill, TrueSkill/Time, and Elo), which didn't help - I think because TrueSkill/Time was so much better, and also because the approaches aren't different enough (ensemble approaches are best when combining approaches which are very different). I tried optimising some parameters in both the rating algorithm and in the gaussian which turns ratings into probabilities for each result. I also tried directly estimating and using draw probabilities separate from win probabilities.</p> <p>I realised that one problem was that my results on the validation set weren't necessarily showing me what would happen on the final leaderboard.
I tried doing some resampling of the validation set, and realised that different samples gave very different results. So the validation set did generally show the impact when I made a change based on a solid theoretical basis, but it was also easy to get meaningless increases from thoughtless parameter optimisation.</p> <p>On Nov 15 I finally made an improvement - previously, in the gaussian predictor function, I had made the standard deviation a linear function of the overall match level [i.e. (s1+s2)/2]. But I realised from looking at graphs that it's really that a stronger black player is better at forcing a draw - it's driven by that, not by the combined skill. So I made the standard deviation a linear function of black's skill only. Result: 0.659.</p> <p>So, it was now Nov 16 - two days to go, and not yet even in the top 20! I finally decided to actually carefully measure which things were most sensitive, so that I could carefully manage my last 4 submissions. If I had been this thorough a week ago, I wouldn't have wasted so much valuable time! I discovered that the following had the biggest impact on the validation set:</p> <ul><li>Removing the first few months from the training data; removing the first 34 months was optimal for the validation set, so I figured removing the first 40 months would be best for the full set</li><li>Adjusting the constant in the calculation of the gaussian's standard deviation - if too high, the predictions varied too much; if too low, the predictions were all too close to 0.5</li><li>And a little trick: I don't know much (anything!) about chess, but I figured that there must be some knockout comps, so people who play more are perhaps doing so because they're not getting knocked out! So I tried using the count of a player's matches in the test set as a predictor. It didn't make a huge difference to the results, but every little bit counts...</li></ul> <p>Based on this, my next 3 submissions were:</p> <ul><li>Remove the first 40 months: 0.658</li><li>Include the count of matches as a predictor: 0.654</li><li>Increase the constant in the stdev formula by 5%: 0.653</li></ul> <p>(My final submission was a little worse - I tried removing players who hadn't played at least 2 matches, and I also increased the weight of the count of matches: back to 0.654.)</p> <p>For me, the main lesson from this process has been that I should more often step back and think about the fundamentals. It's easy to get lost in optimising the minor details and to focus on the solution you already have. But when I stepped away from the PC for a while, did some reading, and got back to basics with pen and paper, is when I had little breakthroughs.</p> <p>I also learnt a lot about how to use validation sets and the leaderboard. In particular, I realised that when you're missing a fundamental piece of the solution, little parameter adjustments that you think are improvements are actually only acting as factors that happen to correlate with some other more important predictor. So when I came across small improvements in the validation set, I actually didn't include them in my next submitted answer - I only included things that made a big difference. Later in the competition, when I had already included the most important things, I re-tested the little improvements I had earlier put aside.</p> <p>Please let me know if you have any questions. I would say that, overall, TrueSkill would be a great way to handle chess leaderboards in the future.
Not because it did well in this competition (which is better at finding historical ratings), but because, as shown in Chris Bishop's talk, it is amazingly fast at rating people's "true skill". Just 3 matches or so is enough for it to give excellent results.</p> <p>Yes, that's right, it's a use of future data, which doesn't help real-world ratings - it's only useful for this particular competition (although it wasn't so useful in the final result, with only a 0.001 impact; it was more useful on the 20% leaderboard, where it showed a 0.004 impact).</p>
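<p>To make the score-to-probability step concrete, here is a minimal Python sketch of the final predictor described above: a cumulative gaussian over the advantage-adjusted skill difference, with the standard deviation a linear function of black's skill only. The constants are illustrative placeholders, not the values actually tuned in the competition:</p>
<pre><code>from math import erf, sqrt

def normal_cdf(z):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_white_wins(s_white, s_black, white_adv=0.5, a=1.0, b=0.1):
    # sigma depends on black's skill only (the Nov 15 improvement);
    # white_adv, a and b are illustrative, not the tuned values
    sigma = a + b * s_black
    return normal_cdf(((s_white + white_adv) - s_black) / sigma)
</code></pre>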
Chess ratings - Elo versus the Rest of the World
3rd place: Chessmetrics - Variant
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Chess ratings - Elo versus the Rest of the World <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Dear all,</p> <p>it was a great competition, thanks a lot. Here are some notes regarding my approach.</p> <p>Methodology:</p> <ul> <li>all coding was done using Perl</li> <li>I tested some established basics like Elo and Chessmetrics using the training data</li> <li>I found Chessmetrics a very promising approach regarding the parameters performance, opposition quality, activity, weighting, and a self-consistent rating over a time period</li> <li>I varied the rating formula depending on the parameters above</li> </ul> <p>My best submission materialized in using the complete training data from week 1 to 100 (not a subset of these data) and the iterative rating formula:</p> <p>rating = (weighed_performance + weighed_opponent_rating + weighed_tie_to_defined_rating_level + extra_points) / (sum(game_weight) + weight_opponent_rating + weight_tie_to_defined_rating_level)</p> <p>with the constants and terms:</p> <p>weight_tie_to_defined_rating_level = 2.5<br> game_weight = (1 / (1 + (100 - month) / 48))**2<br> white_advantage = 27.5<br> extention = 850<br> weighed_performance = sum((opponent_rating +- white_advantage + extention * (result - 0.5)) * game_weight)<br> weight_opponent_rating = 12.5<br> weighed_opponent_rating = weight_opponent_rating * sum((opponent_rating +- white_advantage) * game_weight) / sum(game_weight)<br> defined_rating_level = 2300<br> weighed_tie_to_defined_rating_level = weight_tie_to_defined_rating_level * defined_rating_level<br> extra_points = 24.5</p> <p>and the prediction formula:</p> <p>probability(win_player_white) = ((rating_player_white + white_advantage) - (rating_player_black - white_advantage)) / extention + 0.5, limited to {0..1}</p> <p><a href="http://www.ratatoek.de/elo_comp_23_test_pl.txt">Perl source of submission 23.</a></p> <p>Meanwhile I have also updated my profile on Kaggle ...</p> <p>Cheers,<br> Martin</p>
Chess ratings - Elo versus the Rest of the World
6th place(UriB) by Uri Blass
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Chess ratings - Elo versus the Rest of the World <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Only one correction of the code: the original code had <code>for (i=0;i<10;i++) // i is the number of iterations</code> and I changed it to 100 iterations in the code that I posted. 100 iterations probably give a better result than the result that I got, but not good enough for first place (my tests suggest a small improvement in prediction of less than 0.1%).</p> <p>I decided to post all the code that I have for my best prediction in this competition directly, so everybody can run the code and verify the results:</p>

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <memory.h>
#include <math.h>

FILE *stream_training, *stream_test;
// stream_test: step 1 test data; stream_training: step 2 training data
const int games_training=65053;
const int games_test=7809;
const int total_games=73062;
const int aboveidnumber=8632;
int isplayer2[aboveidnumber];               // constant after step 1
int month_test[games_test];                 // constant after step 1
int month[total_games];                     // constant after step 1
int whiteplayer[total_games];               // constant after step 1
int blackplayer[total_games];               // constant after step 1
int whiteplayer_test[games_test];           // constant after step 1
int blackplayer_test[games_test];           // constant after step 1
int isplayer[aboveidnumber];                // constant after step 2
double result[total_games];                 // constant after step 2
float read_result[total_games];             // constant after step 2
int num_opp[aboveidnumber][106][40];        // third field starts with the number of players played; constant after step 3
int result_player[aboveidnumber];           // constant after step 3
int num_color[aboveidnumber][106][40];      // constant after step 3
int num_result[aboveidnumber][106][40];     // constant after step 3
int numgames[aboveidnumber][106];           // constant after step 3
int connected[aboveidnumber];               // constant after step 3
double observed_result[aboveidnumber][106]; // constant after step 4
double observed_result_small[aboveidnumber][106][5]; // constant after step 4
int experienced_players[total_games][5];    // constant after step 4; used to calculate the expected result,
// and is 1 only if the relevant game is by 2 players with at least 48 games to be considered for RMSE
double expected_result_small[aboveidnumber][106][5]; // initialized in step 3; the expected result for RMSE
// (calculated at step 6)
double strength[aboveidnumber][106]; // strength is based on all the data; constant after step 5, which repeats itself
double expect_result_game[total_games][5]; // constant after step 6;
// used to calculate later the expected result and the expected_result_small tables
double predicted[games_test]; // used only in the final step of the program to build the submission
double expected_result[aboveidnumber][106]; // the expected result that includes unexperienced players;
// initialized in step 6 but I did not use it
const int whitecolor=0;
const int blackcolor=1;

void read_test()
{ // step 1 of the program: read the test data;
  // for every game we have month, white and black, and it is also saved in the full data that includes the training data
    int i;
    char c;
    if ((stream_test = fopen("test_data.csv", "r+b")) == NULL) {
        printf("The file 'test_data.csv' was not opened\n");
        return;
    } else
        printf("The file 'test_data.csv' was opened\n");
    if (stream_test) {
        /* start reading from stream: skip the first 52 characters (header) */
        for (i = 1; i < 53; i++) {
            fscanf(stream_test, "%c", &c);
        }
        for (i = 0; i < aboveidnumber; i++)
            isplayer2[i] = 0;
        for (i = 0; i < games_test; i++) {
            fscanf(stream_test, "%d", &month_test[i]);
            month[games_training+i]=month_test[i];
            fscanf(stream_test, "%c", &c);
            fscanf(stream_test, "%d", &whiteplayer_test[i]);
            whiteplayer[games_training+i]=whiteplayer_test[i];
            isplayer2[whiteplayer_test[i]] = 1;
            fscanf(stream_test, "%c", &c);
            fscanf(stream_test, "%d", &blackplayer_test[i]);
            blackplayer[games_training+i]=blackplayer_test[i];
            isplayer2[blackplayer_test[i]] = 1;
            fscanf(stream_test, "%c", &c); /* , */
            fscanf(stream_test, "%c", &c); /* newline */
        }
        fclose(stream_test);
    }
}

void read_training_data()
{
    // step 2: read the training data; for every game we have month, white, black and result
    int i;
    char c;
    double sum_result=0;
    /* start reading from stream: skip the header */
    for (i = 1; i < 53; i++)
    {
        fscanf(stream_training, "%c", &c);
        printf("%c", c);
    }
    memset(isplayer, 0, sizeof(isplayer));
    for (i = 0; i < games_training; i++)
    {
        fscanf(stream_training, "%d", &month[i]);
        fscanf(stream_training, "%c", &c);
        fscanf(stream_training, "%d", &whiteplayer[i]);
        isplayer[whiteplayer[i]] = 1;
        fscanf(stream_training, "%c", &c);
        fscanf(stream_training, "%d", &blackplayer[i]);
        isplayer[blackplayer[i]] = 1;
        fscanf(stream_training, "%c", &c);
        fscanf(stream_training, "%f", &read_result[i]); /* 0.5 */
        result[i] = read_result[i];
        sum_result=sum_result+result[i];
        fscanf(stream_training, "%c", &c); /* newline */
    }
    printf("sum is %f ",sum_result);
}

void fill_num_opp_array()
{ // step 3: building the basic arrays; the arrays are:
  // num_opp[num_player][num_month][0] is the number of opponents of the player with id num_player in num_month;
  // num_opp[num_player][num_month][j] for j>0 is the id of opponent number j in the same month;
  // num_color[num_player][num_month][j] for the same j>0 gives the color that num_player played in game number j
  // of month num_month;
  // num_result with the same parameters is the result that the player got in the relevant game, where a win is 2,
  // a draw is 1 and a loss is 0;
  // result_player[i] is the total half points that i got in the training games
    int i,j,k,num,sum;
    int white,black;
    int month1;
    for (i=0;i<aboveidnumber;i++)
        for (j=1;j<=100;j++)
            num_opp[i][j][0]=0;
    for (i=0;i<aboveidnumber;i++)
        result_player[i]=0;
    for (i=0;i<games_training+games_test;i++)
    {
        month1=month[i];
        white=whiteplayer[i];
        black=blackplayer[i];
        if (i<games_training)
        {
            if (result[i] == 1)
                result_player[white] += 2;
            else if (result[i] == 0)
                result_player[black] += 2;
            else
            {
                result_player[white]++;
                result_player[black]++;
            }
        }
        num_opp[white][month1][0]++;
        num_opp[black][month1][0]++;
        num_opp[white][month1][num_opp[white][month1][0]]=black;
        num_color[white][month1][num_opp[white][month1][0]]=whitecolor;
        num_opp[black][month1][num_opp[black][month1][0]]=white;
        num_color[black][month1][num_opp[black][month1][0]]=blackcolor;
        num_result[white][month1][num_opp[white][month1][0]]=1;
        num_result[black][month1][num_opp[black][month1][0]]=1;
        if (result[i] == 1)
        {
            num_result[white][month1][num_opp[white][month1][0]]=2;
            num_result[black][month1][num_opp[black][month1][0]]=0;
        }
        else if (result[i]==0)
        {
            num_result[white][month1][num_opp[white][month1][0]]=0;
            num_result[black][month1][num_opp[black][month1][0]]=2;
        }
    }
    // initialization of arrays: numgames is the number of games of player i after j months,
    // expected_result is the expected result of player i after j months,
    // and observed_result is the observed result of player i after j months;
    // connected means that the player is connected to all the players; the default value 0 means not connected
    for (i = 0; i < aboveidnumber; i++)
    {
        for (j = 0; j < 106; j++)
        {
            numgames[i][j] = 0;
            expected_result[i][j] = 0;
            observed_result[i][j] = 0;
            for (k=0;k<=4;k++)
            {
                observed_result_small[i][j][k]=0;
                expected_result_small[i][j][k]=0;
            }
        }
        connected[i] = 0;
    }
    // player 1 is connected, and every player who played with player 1 or with a connected player is connected
    connected[1] = 1;
    for (j = 0; j < 5; j++)
    {
        for (i = 0; i < games_training; i++)
        {
            if (connected[whiteplayer[i]] == 1)
                connected[blackplayer[i]] = 1;
            if (connected[blackplayer[i]] == 1)
                connected[whiteplayer[i]] = 1;
        }
    }
    sum = 0;
    for (i = 0; i < aboveidnumber; i++)
        sum = sum + connected[i];
    printf("connected=%d\n", sum);
    // updating the numgames data to be correct;
    // we need also to update the expected and observed data
    for (i = 0; i < games_training+games_test; i++)
    {
        j = month[i];
        while (j <= 105)
        {
            numgames[whiteplayer[i]][j]++;
            numgames[blackplayer[i]][j]++;
            j++;
        }
    }
    // counting players in the test data
    sum = 0;
    for (i = 0; i < aboveidnumber; i++)
    {
        if (isplayer2[i] == 1)
            sum++;
    }
    printf("players in test are %d ", sum);
}

void calc_observed()
{
    // step 4: calculating the observed result for every player in every month;
    // do it only one time; we already initialized the observed array to 0; do the same for
    // observed_result_small
    int i,white,black,order,delay,start_check;
    double observed;
    i=0;
    while (i < games_training)
    {
        white = whiteplayer[i];
        black = blackplayer[i];
        order = month[i];
        observed = result[i];
        observed_result[white][order] += observed;
        observed_result[black][order] += (1 - observed);
        for (delay=1;delay<=5;delay++)
        { // we are at month number order; we have the results of month order-delay and not the results of the next 5 months
            experienced_players[i][delay-1]=0;
            if (order>delay)
            {
                start_check=order-delay-48;
                if (start_check<0)
                    start_check=0;
                if ((numgames[white][order-delay]-numgames[white][start_check]>=12)&&
                    (numgames[black][order-delay]-numgames[black][start_check]>=12))
                { // indices: player, month of playing, delay in predicting minus 1
                    observed_result_small[white][order][delay-1] += observed; // the observed results at month number order
                    observed_result_small[black][order][delay-1] += (1 - observed);
                    experienced_players[i][delay-1]=1;
                }
            }
        }
        i++;
    }
}

int is_candidate_chess_strength(int month,int missing,double candidate)
{
    int i;
    int sum=0;
    int monthmin=month-48;
    int monthmax=month;
    if (monthmin<0)
    {
        monthmin=0;
        monthmax=48;
    }
    for (i=0;i<aboveidnumber;i++)
        if ((isplayer[i]==1)&&(numgames[i][monthmax]-numgames[i][monthmin]>=10))
            if (strength[i][month]>=2625-candidate)
                sum++;
    if ((sum<50)&&(candidate>=0))
    { // increasing by candidate is needed and done because player number 50 is not more than
      // 2625-candidate<2625
        for (i=0;i<aboveidnumber;i++)
            if (isplayer[i]==1)
                strength[i][month]+=candidate;
        return 1;
    }
    else if ((sum>=50)&&(candidate<0))
    {
        // reducing by -candidate is needed and done because
        // there are 50 players with at least 2625-candidate>2625
        for (i=0;i<aboveidnumber;i++)
            if (isplayer[i]==1)
                strength[i][month]+=candidate;
    }
    return 0;
}

void fixplace50_chess_strength(int month,int missing)
{
    // this function changes the rating of player number 50 to be 2625,
    // by successively halving the shift applied to all ratings
    double candidate;
    if (is_candidate_chess_strength(month,missing,0)==1)
        candidate=1600;
    else
        candidate=-1600;
    while ((candidate>0.001)||(candidate<-0.001))
    {
        is_candidate_chess_strength(month,missing,candidate);
        candidate=candidate/2.0;
    }
}

double calc_strength_chess_metric_specific(int month,int missing,int max_months,
    double big_dif,double num_avg,double num_weak,
    double value_weak,double unrated,double adding,
    double minimal_game_finished,double reduction_per_game,int i)
{ // used for step 5;
  // important function for the prediction because it is used to calculate the rating
    int k;
    int diff;
    int num;
    double result,opponents,weight,opp_rating,avg_rating,performance_rating,return_rating;
    result=0;
    opponents=0;
    opp_rating=0;
    // results of month+1,month+2,month+3,month+4,month+5 are missing from the data
    for (k=1;k<=100;k++)
    {
        if ((k<=missing)||(k>=missing+6))
            // we consider only the results of these months to calculate the rating at month number month
            if ((k>month-max_months)&&(k<month+max_months))
            {
                if (num_opp[i][k][0]>0)
                {
                    diff=month-k;
                    if (diff<0)
                        diff=k-month;
                    weight=1-(1.0/(max_months))*diff;
                    for (num=1;num<=num_opp[i][k][0];num++)
                    {
                        opponents+=weight;
                        result+=(num_result[i][k][num]*weight);
                        opp_rating+=(strength[num_opp[i][k][num]][k]*weight);
                        if (num_color[i][k][num]==whitecolor)
                            result-=0.03*weight;
                        else
                            result+=0.03*weight;
                    }
                }
            }
    }
    if (opponents>0)
    {
        avg_rating=opp_rating/opponents;
        if (opponents<minimal_game_finished)
            value_weak-=(minimal_game_finished-opponents)*reduction_per_game;
        performance_rating=avg_rating+((result-opponents)/opponents)*big_dif;
        return_rating=(performance_rating*opponents)+(avg_rating*num_avg)+(value_weak*num_weak);
        return_rating=return_rating/(opponents+num_avg+num_weak);
        return_rating+=adding;
    }
    else
        return_rating=unrated;
    if ((month>missing)&&(month<=missing+5))
        return_rating+=((numgames[i][month]-numgames[i][missing])*1.2);
    return return_rating;
}

double add_opp(int i,int missing,int month)
{
    int k;
    int num_oppon=0;
    int num_oppon_past=0;
    double opp_rating_past=0;
    int num;
    double opp_rating=0;
    double past_rating=0;
    for (k=missing-12;k<=missing;k++)
    {
        if (k>=0)
        {
            if (num_opp[i][k][0]>0)
                for (num=1;num<=num_opp[i][k][0];num++)
                {
                    opp_rating_past+=strength[num_opp[i][k][num]][k];
                    num_oppon_past+=1;
                }
        }
    }
<P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (k=missing+1;k&lt;=missing+5;k++)</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (num_opp[i][k][0]&gt;0)</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (num=1;num&lt;=num_opp[i][k][0];num++)</P> <P>{</P> <P>opp_rating+=strength[num_opp[i][k][num]][k];</P> <P>num_oppon+=1;</P> <P>}</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (num_oppon&gt;0)</P> <P>opp_rating=opp_rating/num_oppon;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>else</P></FONT></FONT><FONT size=2> <P>opp_rating=strength[i][month];</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (num_oppon_past&gt;0)</P> <P>opp_rating_past=opp_rating_past/num_oppon_past;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>else</P></FONT></FONT><FONT size=2> <P>opp_rating=strength[i][month];</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (opp_rating&gt;opp_rating_past+60)</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>return</FONT></FONT><FONT size=2> 12;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (opp_rating&lt;opp_rating_past-60)</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>return</FONT></FONT><FONT size=2> -12;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>return</FONT></FONT><FONT size=2> (opp_rating-opp_rating_past)*0.2;</P> <P>}</P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2> <P>void</FONT></FONT><FONT size=2> calc_strength_chess_metric(</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> month,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> missing,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> max_months,</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> big_dif,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> num_avg, </FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> num_weak,</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> value_weak,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> unrated,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> adding,</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> minimal_game_finished,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> reduction_per_game)</P> <P>{</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//used for step 5</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> temp_rating[aboveidnumber];</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> i=0;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i=1;i&lt;aboveidnumber;i++)</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff 
size=2>if</FONT></FONT><FONT size=2> (isplayer[i]==1)</P> <P>temp_rating[i]=calc_strength_chess_metric_specific(month,missing,max_months,big_dif,num_avg,num_weak,value_weak,</P> <P>unrated,adding,minimal_game_finished,reduction_per_game,i);</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i=1;i&lt;aboveidnumber;i++)</P> <P>{</P> <P>strength[i][month]=temp_rating[i];</P> <P>}</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i=1;i&lt;aboveidnumber;i++)</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> ((month&gt;missing)&amp;&amp;(month&lt;=missing+5))</P> <P>strength[i][month]+=add_opp(i,missing,month);</P> <P>}</P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2> <P>void</FONT></FONT><FONT size=2> repeat_strength_estimate(</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> k,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> max_months,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> big_dif,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> num_avg,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> num_weak,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> value_weak,</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> unrated,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> minimal_game_finished,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> reduction_per_game,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> adding)</P> <P>{</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//step 5 calculating the rating estimate when months k+1,...k+5 are missing k can be 0-100</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> i,j;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i=0;i&lt;aboveidnumber;i++)</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (j=0;j&lt;=105;j++)</P> <P>strength[i][j]=2000;</P> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//k is the last month when result is not missing for the estimate</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i=0;i&lt;100;i++)</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//i is the number of iteration</P></FONT></FONT><FONT size=2> <P>{</P> <P>printf(</FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>" %d "</FONT></FONT><FONT size=2>,i);</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (j=0;j&lt;=105;j++)</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//j is the month that we calculate rating for it.</P></FONT></FONT><FONT size=2> <P>{</P> <P>calc_strength_chess_metric(j,k,max_months,big_dif,num_avg,num_weak,value_weak,unrated,adding,</P> <P>minimal_game_finished,reduction_per_game);</P> <P>fixplace50_chess_strength(j,k);</P> <P>}</P> <P>}</P> <P>}</P></FONT><FONT color=#0000ff 
size=2><FONT color=#0000ff size=2> <P>double</FONT></FONT><FONT size=2> expected_number2(</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> diff,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> max_diff)</P> <P>{</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> num=0.5+(diff/(2*max_diff));</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//25/850=1/34</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (num&gt;0.970588)</P> <P>num=0.970588;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (num&lt;0.1)</P> <P>num=0.1;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>return</FONT></FONT><FONT size=2> num;</P> <P>}</P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2> <P>double</FONT></FONT><FONT size=2> expectedwhitechess_result(</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> white,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> black,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> month)</P> <P>{</P> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//this function predicts the result of white and black in specific month</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//I currently use the rating but it is going to be changed</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> whiterating=strength[white][month];</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//first missing month is 101 </P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//in case of predicting and something else in other cases</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> blackrating=strength[black][month];</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> bonus=(whiterating+blackrating-3100)/40.0;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (bonus&gt;50)</P> <P>bonus=50;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>return</FONT></FONT><FONT size=2> expected_number2(whiterating+bonus-blackrating,425);</P> <P>}</P> <P>&nbsp;</P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2> <P>void</FONT></FONT><FONT size=2> calc_expected(</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> monthx)</P> <P>{</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//monthx+1,...monthx+5 are missing and we calculate expected result </P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//for every player based on this information</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> white,black,order,i,delay,start_check,j;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> expected;</P> <P>i=0;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i = 
0; i &lt; aboveidnumber; i++) </P> <P>{</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (j = monthx; j &lt; monthx+5; j++) </P> <P>expected_result[i][j] = 0;</P> <P>}</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>while</FONT></FONT><FONT size=2> (month[i]&lt;=monthx)</P> <P>i++;</P> <P>&nbsp;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>while</FONT></FONT><FONT size=2> ((i&lt;games_training)&amp;&amp;(month[i]&gt;monthx)&amp;&amp;(month[i]&lt;=monthx+5))</P> <P>{</P> <P>white = whiteplayer[i];</P> <P>black = blackplayer[i];</P> <P>order = month[i];</P> <P>expected=expect_result_game[i][order-monthx-1];</P> <P>expected_result[white][order]+=expected;</P> <P>expected_result[black][order]+=(1-expected);</P> <P>delay=order-monthx;</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//1-5</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (experienced_players[i][delay-1]==1)</P> <P>{</P> <P>expected_result_small[white][order][delay-1] += expected;</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//the expected results at month number order.</P></FONT></FONT><FONT size=2> <P>expected_result_small[black][order][delay-1] += (1 - expected);</P> <P>}</P> <P>i++;</P> <P>}</P> <P>}</P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2> <P>double</FONT></FONT><FONT size=2> calc_error_result_small(</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> monthx,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> delay)</P> <P>{</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> i;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> order=monthx+delay;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> err;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> RMSE=0;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i=1;i&lt;aboveidnumber;i++)</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (isplayer[i]==1)</P> <P>{</P> <P>err=observed_result_small[i][order][delay-1]-expected_result_small[i][order][delay-1];</P> <P>RMSE+=err*err;</P> <P>}</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>return</FONT></FONT><FONT size=2> RMSE;</P> <P>}</P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2> <P>void</FONT></FONT><FONT size=2> build_expected_result(</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> monthx)</P> <P>{</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//step 6</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//we build expected result table for monthx+1,...monthx+5 based on the rating list</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//first step is to build the expected_result table for month+1,...month+5 </P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//based on strength in the relevant month and maybe more parameters that I will add later</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT 
color=#0000ff size=2>int</FONT></FONT><FONT size=2> white,black,i;</P> <P>i=0;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>while</FONT></FONT><FONT size=2> (i&lt;games_training)</P> <P>{</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> ((month[i]&gt;=monthx+1)&amp;&amp;(month[i]&lt;=monthx+5))</P> <P>{</P> <P>white=whiteplayer[i];</P> <P>black=blackplayer[i];</P> <P>expect_result_game[i][month[i]-monthx-1]=expectedwhitechess_result(white,black,month[i]);</P> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//should be improved but this is an initial version </P></FONT></FONT><FONT size=2> <P>}</P> <P>i++;</P> <P>}</P> <P>calc_expected(monthx);</P> <P>}</P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2> <P>void</FONT></FONT><FONT size=2> build_stupid_csv_prediction()</P> <P>{</P> <P></FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>/* submission.csv */</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> i;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>char</FONT></FONT><FONT size=2> c;</P> <P>FILE *stream_prediction = fopen(</FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"submission119.csv"</FONT></FONT><FONT size=2>, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"w+b"</FONT></FONT><FONT size=2>);</P> <P>stream_test = fopen(</FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"test_data.csv"</FONT></FONT><FONT size=2>, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"r+b"</FONT></FONT><FONT size=2>);</P> <P>assert(stream_prediction);</P> <P>assert(stream_test);</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> (stream_test == NULL || stream_prediction == NULL)</P> <P>{</P> <P>puts(</FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"Usage error:\nfiles submission.cs and test_data.csv are expected to be available in the current working directory.\n"</FONT></FONT><FONT size=2>);</P> <P>exit(EXIT_FAILURE);</P> <P>}</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i = 1; i &lt; 53; i++) {</P> <P>fscanf(stream_test, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"%c"</FONT></FONT><FONT size=2>, &amp;c);</P> <P>fprintf(stream_prediction, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"%c"</FONT></FONT><FONT size=2>, c);</P> <P>}</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i = 0; i &lt; games_test; i++) {</P> <P>fprintf(stream_prediction, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"%d"</FONT></FONT><FONT size=2>, month_test[i]);</P> <P>fprintf(stream_prediction, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"%c"</FONT></FONT><FONT size=2>, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>','</FONT></FONT><FONT size=2>);</P> <P>fprintf(stream_prediction, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"%d"</FONT></FONT><FONT size=2>, whiteplayer_test[i]);</P> <P>assert(whiteplayer[games_training+i]==whiteplayer_test[i]);</P> <P>fprintf(stream_prediction, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"%c"</FONT></FONT><FONT size=2>, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>','</FONT></FONT><FONT size=2>);</P> <P>fprintf(stream_prediction, </FONT><FONT color=#a31515 
size=2><FONT color=#a31515 size=2>"%d"</FONT></FONT><FONT size=2>, blackplayer_test[i]);</P> <P>assert(blackplayer[games_training+i]==blackplayer_test[i]);</P> <P>fprintf(stream_prediction, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"%c"</FONT></FONT><FONT size=2>, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>','</FONT></FONT><FONT size=2>);</P> <P>fprintf(stream_prediction, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"%f"</FONT></FONT><FONT size=2>, predicted[i]);</P> <P>fprintf(stream_prediction, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"%c"</FONT></FONT><FONT size=2>, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>'\n'</FONT></FONT><FONT size=2>);</P> <P>} </P> <P>fclose(stream_test);</P> <P>fclose(stream_prediction);</P> <P>}</P> <P>&nbsp;</P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2> <P>void</FONT></FONT><FONT size=2> predicting_chess_metrics(</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> white_bonus,</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> max_diff)</P> <P>{</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//425</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> i;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> white,black;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>for</FONT></FONT><FONT size=2> (i = 0; i &lt; games_test; i++) </P> <P>{</P> <P>white = whiteplayer_test[i];</P> <P>black = blackplayer_test[i];</P> <P>predicted[i]=expectedwhitechess_result(white,black,month_test[i]);</P> <P>}</P> <P>build_stupid_csv_prediction();</P> <P>&nbsp;</P> <P>}</P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2> <P>int</FONT></FONT><FONT size=2> main(</FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>void</FONT></FONT><FONT size=2>)</P> <P>{</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> i,j;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>int</FONT></FONT><FONT size=2> sum = 0;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> sum_result=0;</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>double</FONT></FONT><FONT size=2> error=0;</P> <P>read_test();</FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>//step 1</P></FONT></FONT><FONT size=2> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> ((stream_training = fopen(</FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"training_data.csv"</FONT></FONT><FONT size=2>, </FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"r+b"</FONT></FONT><FONT size=2>)) == NULL) </FONT><FONT color=#008000 size=2><FONT color=#008000 size=2>/* C4996 */</P></FONT></FONT><FONT size=2> <P>printf(</FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"The file 'training_data.csv' was not opened\n"</FONT></FONT><FONT size=2>);</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>else</P></FONT></FONT><FONT size=2> <P>printf(</FONT><FONT color=#a31515 size=2><FONT color=#a31515 size=2>"The file 'training_data.csv' was opened\n"</FONT></FONT><FONT size=2>);</P> <P></FONT><FONT color=#0000ff size=2><FONT color=#0000ff size=2>if</FONT></FONT><FONT size=2> 
if (stream_training)
{
read_training_data();//step 2
fill_num_opp_array();//step 3
calc_observed();//step 4
for (i=90;i&lt;=100;i++)
{//i should be a parameter
repeat_strength_estimate(i,81,310,5.9,2.2,2210,2285,15,12,39);//step 5
//months i+1,i+2,i+3,i+4,i+5 are missing and we need to predict them
build_expected_result(i);//step 6: this function builds the
//expected result table based on the strength estimate (months i+1,...,i+5 missing)
for (j=1;j&lt;=5;j++)
if ((i+j)&lt;=100)
error+=calc_error_result_small(i,j);
printf(" error=%f ",error);
}
predicting_chess_metrics(45,425);//bonus for white and maximal difference
fclose(stream_training);
}
}
</code></pre>
<p>For some reason I do not find the attachment in the post I did, and I will try another time.</p>
<p>Edit: I think that the second time I made a mistake and tried to attach the exe file, but I see no attachment even in my first post, where I tried to attach only the code, so I will attach the code of the missing function directly (I practically used the last 13 months and not the last 12 months):</p>
<pre><code>double add_opp(int i,int missing,int month)
{
int k;
int num_oppon=0;
int num_oppon_past=0;
double opp_rating_past=0;
int num;
double opp_rating=0;
double past_rating=0;
for (k=missing-12;k&lt;=missing;k++)
{
if (k&gt;=0)
{
if (num_opp[i][k][0]&gt;0)
for (num=1;num&lt;=num_opp[i][k][0];num++)
{
opp_rating_past+=strength[num_opp[i][k][num]][k];
num_oppon_past+=1;
}
}
}
for (k=missing+1;k&lt;=missing+5;k++)
"><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">if</SPAN></SPAN><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> (num_opp[i][k][0]&gt;0)<FONT size=2> <P></FONT></P></SPAN></P> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">for</SPAN></SPAN><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> (num=1;num&lt;=num_opp[i][k][0];num++) <P>{</P> <P>opp_rating+=strength[num_opp[i][k][num]][k];</P> <P>num_oppon+=1;</P> <P>}</P><FONT size=2> <P></FONT></P></SPAN></P> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">if</SPAN></SPAN><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> (num_oppon&gt;0) <P>opp_rating=opp_rating/num_oppon;</P><FONT size=2> <P></FONT></P></SPAN></P> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">else</SPAN></SPAN></P><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> <P>opp_rating=strength[i][month];</P><FONT size=2> <P></FONT></P></SPAN> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">if</SPAN></SPAN><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> (num_oppon_past&gt;0) <P>opp_rating_past=opp_rating_past/num_oppon_past;</P><FONT size=2> <P></FONT></P></SPAN></P> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">else</SPAN></SPAN></P><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> <P>opp_rating=strength[i][month];</P><FONT size=2> <P></FONT></P></SPAN> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">if</SPAN></SPAN><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> (opp_rating&gt;opp_rating_past+60)<FONT size=2> <P></FONT></P></SPAN></P> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">return</SPAN></SPAN><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> 12;<FONT size=2> <P></FONT></P></SPAN></P> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">if</SPAN></SPAN><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> (opp_rating<OPP_RATING_PAST-60)< P> <FONT size=2> <P></FONT></P></SPAN> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">return</SPAN></SPAN><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> -12;<FONT size=2> <P></FONT></P></SPAN></P> <P><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: "><SPAN class="" style="FONT-SIZE: x-small; COLOR: #0000ff; FONT-FAMILY: ">return</SPAN></SPAN><SPAN class="" style="FONT-SIZE: x-small; FONT-FAMILY: "> (opp_rating-opp_rating_past)*0.2;</P> <P>}<BR><BR>Edit:last comment: It is possible that I could get the first place only by doing more iterations(considering the small distance to the first place)&nbsp;and it may be interesting to check it.<BR><BR>Unfortunately<BR>I did not think about doing more iterations in the last 
<p>The first thing that I am going to explain is the first step in my calc_strength_chess_metric function.</p>
<p>It was a loop in which I calculated the strength for every player. So how did I calculate the strength?</p>
<p>I used a special function that got 12 parameters: the same 11 parameters that calc_strength_chess_metric got, plus the id of the player whose strength is calculated (named i).</p>
<p>This function simply looped over all months from 1-100 that are at a distance of at most 80 months from the month we calculate strength for, except the 5 missing months.</p>
<p>For every possible month number k in which player i played, I had a weight calculated as 1-|month-k|/81, where month is the month that I need to calculate strength for and k is the month in which player i played.</p>
<p>Every game in this month is counted as 1-|month-k|/81 games, and I also calculated a weighted result by multiplying the result by that number (the result before multiplying is 2 for a win of i, 1 for a draw and 0 for a loss).</p>
<p>For every game I calculated the weighted result, the weighted opponent rating and the weighted number of opponents, which is 1-|month-k|/81 per game.</p>
<p>I subtracted 0.03*weight from the weighted result if the player played with the white pieces, and added the same number if the player played with the black pieces. The opponent rating is simply the strength of the opponent (this strength is 2000 in the first iteration, when we have no data about it).</p>
<p>After calculating opponents, opp_rating and result, I simply use the Chessmetrics formula.</p>
<p>After doing all of this, at the end of the function I also add, for the missing months, an activity bonus based on the number of games: 1.2*numgames, where numgames is the number of games played in the missing months so far. For example, to calculate strength at month 103, assuming months 101-105 are missing, I add 1.2 times the number of games in months 101-103.</p>
<p>This explains the first step, but there is also a second step that updates the rating based on the opponents in the missing months, and I use the function add_opp for that.</p>
<p>The idea of add_opp is that I compare the rating of the opponents in the missing 5 months with the rating of the opponents in the last 12 months that we have, and increase the rating if I find that the opponents in the future are stronger.</p>
<p>People who are interested in the exact formula may read the function add_opp that I attach in this post (I attach all the code; I commented out the part that was about the cross data, which is not relevant to the prediction, and pushed it to the end).</p>
<p>Note that I did not do much optimization of parameters, and I suspect that different parameters could have given me first place.</p>
<p>Note that in order to run the program successfully you need to have the test data and training data files in the same folder as the exe file built from the code that I attach.</p>
<p>I calculated a rating for every player in months 101-105, and after having the rating I have a simple formula that calculates the expected result based only on the rating and the color. The tricks that I used were mainly in calculating the rating, but I will start by explaining the simple part.</p>
<p>The first part was calculating the bonus for white. I had the following 
formula for this part:</p>
<p>bonus = min((white_rating + black_rating - 3100)/40.0, 50)</p>
<p>Diff = white_rating + bonus - black_rating</p>
<p>Expected_result = 0.5 + Diff/850</p>
<p>I then clipped this to be not more than 0.970588 and not less than 0.1 (practically this had a very small effect, because the result was always bigger than 0.1, and there was only one case when I needed to reduce it to 0.970588).</p>
<p>Now we get to the hard part, which is how to calculate the rating for every player. For this purpose I admit that I used the future to predict the past (but I also have a prediction in the top 10 based on a different model, in which I did not use the future to predict the past).</p>
<p>I used a function that I called repeat_strength_estimate. The function gets the following parameters:</p>
<p>1) k, the last month that is not missing. For the prediction of months 101-105, k=100, but for testing my parameters I used k=90,91,92,...,99.</p>
<p>2) max_months (practically it gets the value 81, and I admit that it is not a good name). max_months=81 practically means that I do not use the first 20 months to predict month 101, that I do not use the first 21 months to predict month 102, and generally that I do not use the first m-81 months to predict month number m.</p>
<p>3) big_dif=310. big_dif was used to calculate the performance rating, and for some reason I found in my tests that small values give better results, so I used this small value.</p>
<p>My formula for the performance rating was:</p>
<p>performance_rating = avg_rating + ((result - opponents)/opponents)*big_dif</p>
<p>The value of the division can be at most 1 and at least -1, because result is practically weighted half-points and is something between 0 and twice the weighted number of opponents. opponents in this formula means the weighted number of opponents (where the weight is based on the distance in months from the month to predict). This formula means that even if a player lost all his games, he still gets a performance rating that is only 310 Elo weaker than the average of his opponents, because the result of the division is always between -1 for losing all games and 1 for winning all games.</p>
<p>I guess this was good because not all games are included, so a person who played against strong opponents probably performed better in practice than his recorded score shows; it would not be good for the real world, where games are not missing.</p>
<p>4) num_avg=5.9, similar to Chessmetrics (I added 5.9 fake opponents with the average rating).</p>
<p>5) num_weak=2.2 (I added 2.2 fake weak opponents).</p>
<p>6) value_weak=2210 (the rating of the weak opponents, like Chessmetrics).</p>
<p>7) unrated=2285 (I think this practically had no effect, because players always have games in the last 80 months).</p>
<p>8) minimal_game_finished=15 (I reduce the rating of players with fewer than 15 weighted games, similar to Chessmetrics).</p>
<p>9) reduction_per_game=12 (the number that I subtract per missing weighted game, for lack of experience).</p>
<p>10) adding=39 (the number that I add to the rating of players after every iteration).</p>
<p>repeat_strength_estimate basically did 10 iterations of evaluating the strength of every player in every month. The evaluation of the strength was based on 2 steps: step 1 was the function that calculates strength, which is similar to Chessmetrics but with 
important differences, and step 2 was fixing the rating of place 50 in the rating list to be exactly 2625, the same as Chessmetrics.</p>
<p>calc_strength_chess_metric is the missing function needed to understand the algorithm. It basically got 11 parameters: all 10 parameters that repeat_strength_estimate got, plus one more parameter, the month that we calculate the estimate for.</p>
<p>Note that the estimate for month 50 of player 1, when months 101-105 are missing, is important, because if player 2 played against player 1 at month 50, then it is going to influence the rating of player 2 at months 101-105, which is used to calculate the expected result.</p>
<p>I use the word estimate and not rating, because rating by definition assumes that we do not have future results.</p>
<p>I had basically 2 steps in calc_strength_chess_metric.</p>
<p>The first step was a loop that calculated the strength estimate for every player in the relevant month. The second step is a step that I used only when I needed to predict the strength in the missing months. It is practically an unfair trick, but not something that is forbidden in the competition, because I used the information about the games, and not about the results, in the supposedly missing months to calculate changes in the rating estimate in these months.</p>
<p>I have not finished explaining my algorithm, and I also plan to post code later, but for now I only need to explain the 2 steps of calc_strength_chess_metric, and I will do that in another post (this part of the program is only slightly more than 100 lines of C code).</p>
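<p>To make the expected-result formula above concrete, here is a worked example with invented ratings (the numbers are mine, purely for illustration): suppose white_rating = 2700 and black_rating = 2600. Then bonus = (2700 + 2600 - 3100)/40 = 5 (below the cap of 50), Diff = 2700 + 5 - 2600 = 105, and Expected_result = 0.5 + 105/850, approximately 0.624, so white is expected to score about 62% against this opponent.</p>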
Chess ratings - Elo versus the Rest of the World
7th place - littlefish
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Chess ratings - Elo versus the Rest of the World <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|>I'm a little surprised I ended up in the top-10 since I didn't spend that much time on the contest and basically just made some relatively simple modifications to the Chessmetrics system. Also, it's interesting that my best entry is an early one that did pretty badly in the public score and not exceptionally well in cross-validation either. Anyway these are my modifications to Chessmetrics:<br><br>-Weighting: I calculate an individual "time constant" Tc for each player as 550 divided by the square root of the number of games by that player (for my best submission I simply used the number of games in all 100 months, although it may be slightly better to use only the number of games in the previous 48 months or so). The weighting factor for a game that has been played n months ago is then [Tc/(Tc+n)]^2. <br><br>-Padding: I first calculate a weighted, but not padded performance rating. Then I use the weighted average between that performance rating (weighted with the square root of the number of weighted games) and the average opposition rating minus 50 (weighted with 3). That doesn't look very elegant because it will result in some players with very few games having high ratings, but I guess it doesn't matter much since those players aren't part of the connected pool anyway.<br><br>-Monthly ratings: I'm calculating monthly ratings for all 100 months of the training data in one iterative process. That is, I first set all ratings to an initial value (e.g. 2450), then calculate ratings for months 1 to 100 using those initial values for opposition ratings in the first iteration, then use the monthly ratings calculated in the first iteration as opposition ratings in the second iteration and so on. I also include "future" results in the ratings for months earlier than 100 - so for the rating in month 50 a game played in month 55 will be included with the same weight as one played in month 45.
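<p>Here is a minimal Python sketch of the weighting and padding modifications described above (the function and variable names are mine, and the original implementation was not necessarily in Python):</p>
<pre><code>import math

def game_weight(total_games, months_ago):
    # Individual "time constant": 550 divided by the square root of the
    # player's number of games.
    tc = 550 / math.sqrt(total_games)
    # Weighting factor for a game played `months_ago` months away
    # (past or future).
    return (tc / (tc + months_ago)) ** 2

def padded_rating(performance_rating, weighted_games, avg_opposition):
    # Weighted average of the unpadded performance rating (weighted by the
    # square root of the number of weighted games) and the average
    # opposition rating minus 50 (weighted by 3).
    w = math.sqrt(weighted_games)
    return (w * performance_rating + 3 * (avg_opposition - 50)) / (w + 3)
</code></pre>
<p>For example, under this scheme a game played 10 months ago by a player with 100 career games has time constant 550/10 = 55 and weight (55/65)^2, roughly 0.72.</p>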
Chess ratings - Elo versus the Rest of the World
Released: my Source Code and Analysis
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Chess ratings - Elo versus the Rest of the World <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had a lot of fun with this competition and learned a lot about ratings systems.</p> <div>Sadly, I only came 18th :)</div> <div>If you're interested, you can download all of my code and analysis from my github repo: https://github.com/jbrownlee/ChessML</div> <div>There are implementations of a few rating systems (elo, glicko, chessmetrics, etc) and many attempts at improving them (a nice little experimentation framework).</div> <div>Thanks all. Looking forward to the next big comp!</div> <div>jasonb</div>
Child Mind Institute - Detect Sleep States
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is because hour features are highly redundant and sufficient information has been extracted, so we thought that they should not be treated the same as enmo and anglez.</p> <p>First and foremost, I would like to express my gratitude to all participants and organizers.<br> It was a very tough competition as I did not have many good scores on the public LB, but I was satisfied with the final results.</p> <h2>Feature engineering</h2> <ul> <li>enmo<ul> <li>Utilized as is</li></ul></li> <li>anglez<ul> <li>Used <code>anglez.diff().abs()</code></li></ul></li> <li>Time Encoding<ul> <li>Normalized the hours into t in [0, 1) and employed <code>sin(2*pi*t)</code> and <code>cos(2*pi*t)</code> as features</li> <li>Experimented with finer periodicities (e.g., <code>sin(4*pi*t)</code>, <code>sin(8*pi*t)</code>, <code>cos(4*pi*t)</code>, <code>cos(8*pi*t)</code>), but performance worsened</li> <li>Considered features like week and month, but they were rejected due to performance degradation.</li></ul></li> <li>Duplicate Data<ul> <li>Identified instances of identical sensor data occurring in the same series at the same time.</li> <li>Assigned a flag of 1 for times with duplicates and 0 for times without duplicates</li></ul></li> <li>Step<ul> <li>Flags set for within 1 hour and within 8 hours from the start of measurement</li> <li>Added because the training data contained no onsets within 1 hour of the start of measurement and no wakeups within 8 hours of it</li></ul></li> </ul> <h2>Model</h2> <p>Adopted a 1D-UNet GRU model as illustrated. Features, excluding enmo and anglez, are added just before the GRU. Trained using Binary Cross Entropy.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1630583%2F5e93b7c1e25dd2fbcc18d346c320ec93%2Fmodel.png?generation=1701906351601543&amp;alt=media" alt=""></p> <h2>Target Design</h2> <p>Utilized a heatmap based on a Gaussian distribution. For stability in training and improved AP at a large tolerance, a Gaussian distribution with a large sigma was advantageous. Conversely, a Gaussian distribution with a small sigma led to instability in training but improved AP at a small tolerance. 
As a compromise, a weighted sum of Gaussian distributions with large and small sigmas was used.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1630583%2Ff2d28c6cc637e3f55d3ca2383a845d25%2Ftarget.png?generation=1701906225296881&amp;alt=media" alt=""></p> <pre><code>gauss_small_sigma_minute = ...  # value lost in the original formatting
gauss_large_sigma_minute = ...  # value lost in the original formatting
gauss_mix_ratio = ...           # value lost in the original formatting
gauss_small_sigma_size = minute2step(gauss_small_sigma_minute)
gauss_large_sigma_size = minute2step(gauss_large_sigma_minute)
gauss_large = np.exp(-((x - gauss_center)**2) / (2 * (gauss_large_sigma_size)**2))
gauss_small = np.exp(-((x - gauss_center)**2) / (2 * (gauss_small_sigma_size)**2))
gauss = (gauss_large + gauss_mix_ratio * gauss_small)
gauss /= gauss.max()  # normalization; the exact call was lost in the original formatting
</code></pre> <h2>Training</h2> <ul> <li>Randomly sample 12 hours</li> <li>150 epochs, AdamW</li> <li>Apply SWA after 50 epochs</li> </ul> <h2>Inference</h2> <ul> <li>In contrast to training, infer an entire series at a time</li> </ul> <h2>Ensemble</h2> <p>The processing time for the entire test dataset, including data loading, pre-processing, and post-processing, was approximately 18 minutes.<br> Of that time, model inference took 45 seconds.</p> <p>Public LB is calculated on 25% of the test data, so some shake was expected.<br> Therefore, the decision was made to ensemble a large number of models. The final ensemble consists of 120 models, and the processing is completed within 120 minutes.</p> <h2>Post-processing</h2> <p>It is crucial to detect multiple candidates in one night; a rough code sketch of this procedure is shown after the list.</p> <ul> <li>Smooth the output values with a width of 11</li> <li>Detect all local maxima with a threshold of 0.01 and add them as candidate points</li> <li>For all remaining timestamps, perform the following in descending order of output values<ul> <li>If the timestamp is more than 21 minutes away from any previously detected timestamp, add the product of 0.1 and the output value as a candidate point</li></ul></li> </ul>
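<p>Below is a rough Python sketch of this post-processing as I understand it (the function name, the 12-steps-per-minute conversion used for the 21-minute gap, and the use of scipy's peak finder are my assumptions, not details taken from the write-up):</p>
<pre><code>import numpy as np
from scipy.signal import find_peaks

def extract_candidates(prob, smooth_width=11, peak_thresh=0.01, min_gap=21 * 12):
    # min_gap: 21 minutes at 12 five-second steps per minute (assumed).
    # 1) Smooth the per-step output values with a moving average of width 11.
    kernel = np.ones(smooth_width) / smooth_width
    smoothed = np.convolve(prob, kernel, mode="same")

    # 2) All local maxima above the 0.01 threshold become candidate points.
    peaks, _ = find_peaks(smoothed, height=peak_thresh)
    candidates = [(int(p), float(smoothed[p])) for p in peaks]

    # 3) Visit the remaining steps in descending order of output value; any
    #    step far enough from every detected step is added as a candidate
    #    with 0.1 times its output value, then counted as detected itself.
    detected = [int(p) for p in peaks]
    for step in np.argsort(-smoothed):
        step = int(step)
        if all(abs(step - d) &gt; min_gap for d in detected):
            candidates.append((step, 0.1 * float(smoothed[step])))
            detected.append(step)
    return candidates
</code></pre>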
Child Mind Institute - Detect Sleep States
11th Place - GRU, CNN, Transformer - GPU Deep Learning!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>What a fun competition! I enjoyed this competition! Thank you Kaggle and host.</p> <h1>Building Blocks of GPU Deep Learning!</h1> <p>Below are 3 building blocks of deep learning. By creating different sequences of these building blocks, we can build powerful models! Each block takes an input shape of <code>(batch_size, sequence_length, features)</code> and outputs the same. The RNN "block" uses <strong>bidirectional</strong> (example code <a href="https://www.kaggle.com/cdeotte/11th-place-gold-cv-835-public-lb-788" target="_blank">here</a>). The CNN "block" uses sums and products of multiple <strong>dilated convolutions</strong> (from WaveNet paper <a href="https://arxiv.org/abs/1609.03499" target="_blank">here</a> and example code <a href="https://www.kaggle.com/cdeotte/11th-place-gold-cv-835-public-lb-788" target="_blank">here</a>). And the Transformer blocks uses <strong>self attention</strong> (example code <a href="https://www.kaggle.com/cdeotte/11th-place-gold-cv-835-public-lb-788" target="_blank">here</a>). These 3 "blocks" can create features from signals without needing feature engineering!</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/blocks.png" alt=""></p> <h1>Data Loader - Cut Data into Daily Chunks</h1> <p>Our data loader will randomly give us 24 hour windows that <strong>always contain both one onset and one wakeup</strong>. During each epoch, we will get different cuts. This serves as crop data augmentation.<br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/dataloader.png" alt=""></p> <h1>Model Head is NLP QA Softmax (not NLP NER)</h1> <p>For the head of our model, we use an architecture that is <strong>not</strong> in any public discussion <strong>nor</strong> public notebook. We use a technique from NLP called QuestionAnswering (i.e. <code>BertForQuestionAnswering</code>). This is analogous to Computer Vision's <code>object detection</code>. All public discussions and notebooks use NLP NER (i.e. <code>BertForTokenClassification</code>) which is analogous to Computer Vision <code>segmentation</code>. </p> <p>The way to implement QA is for our data loader to always include one onset and one wakeup in every train sequence. Then we apply 1440 multiclass softmax loss to pick the correct minute of target. 
Later we will use a separately trained classification model to remove false positives (since our QA model was trained on 100% positives and doesn't locate negatives well).<br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/qa-head2.png" alt=""></p> <h1>My CNN-Transformer-GRU, 80 features</h1> <p><strong>Approximately CV 0.800 LB 0.750</strong><br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/model2b.png" alt=""></p> <h1>My Deep-GRU, 2 features</h1> <p><strong>Approximately CV 0.805 LB 0.755</strong><br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/model1c.png" alt=""></p> <h1>Extra Predictions Trick</h1> <p><strong>Approximately CV +0.050 LB +0.050 Wow!</strong><br> A huge trick in this competition is to make multiple onset and wakeup predictions per night. First we make a first choice for every user every night. With 200 test users and approximately 30 nights per user and 2 targets, this is about 12000 predictions. Next we make a second guess (i.e. the second max probability) for each night and divide it by <code>2^1</code>. This is 12000 more guesses. Next we made a 3rd guess (i.e. 3rd max prob) and divide it by <code>2^2</code>. We continue this for 30 guesses per night where the last guesses are divided by <code>2^29</code>. This boosts CV by about +0.050 wow 😀! (A minimal sketch of this trick appears at the end of this write-up.)</p> <p>This works because the Average Precision metric is <strong>always improved</strong> when adding more predictions if the new predictions have a lower score than all previous predictions. (More explanation <a href="https://www.kaggle.com/competitions/vinbigdata-chest-xray-abnormalities-detection/discussion/229637" target="_blank">here</a>)</p> <h1>Rerank Scores</h1> <p><strong>Approximately CV +0.020 LB +0.020</strong><br> For the metric Average Precision, the score we assign to each prediction is very important. First we use the probability from our above models as the starting score. Next we use CatBoost with handmade convolution features (<strong>CatBoost achieves LB 0.705</strong> by itself wow!), 1D-Unet, 2D-Unet, Mel Spectrograms, ResNet34, EfficientNetB5, and Audio WaveNet to rerank the scores. This is accomplished by predicting a new score for each prediction and then averaging all the scores.</p> <h1>Ensemble with NMS and WBF</h1> <p><strong>Approximately CV +0.010 LB +0.010</strong><br> My CNN-Transformer-GRU and Deep-GRU are very diverse. First I apply NMS (non-maximum suppression) to each model individually. Whenever two predictions are within 5 minutes of each other, we keep only the prediction with the largest probability. Afterward we ensemble the two single models with WBF (weighted box fusion). Whenever two predictions are within 5 minutes of each other (and are the same level 1 thru 30 explained above in the "extra predictions trick"), then we replace the two with the average step position.</p> <h1>Solution Code</h1> <p>I published the code for these two TensorFlow models <a href="https://www.kaggle.com/cdeotte/11th-place-gold-cv-835-public-lb-788" target="_blank">here</a></p> <p>Wow that's an amazing LGBM model. I'm looking forward to your writeup.</p> <p>I was always watching your team and trying to stay ahead of you haha 😀 . You pushed me to work harder 😀 We were tied with the same public LB score on many days. Congratulations on building a strong generalizing model and jumping upward on private LB!</p> <p>Thanks Sinan. The Average Precision metric has lots of ways to exploit. 
The main one is that "making more predictions which have a lower score than every other previous prediction" will always increase the Average Precision metric. You can read about it <a href="https://www.kaggle.com/competitions/vinbigdata-chest-xray-abnormalities-detection/discussion/229637" target="_blank">here</a>. That is the idea behind the 30 levels of predictions each night where each new set gets divided by 2. The second way to exploit AP is to rerank/modify the scores. We want all false positives to have a lower score than true positives for a boost in the metric.</p> <p>Another PP trick that I used (and did not write about above) was to add or subtract 1 from any prediction which was exactly a multiple of 12. This gave a nice boost. (Because of the metric, we do not want to predict perfect multiples of 12). </p> <p>It absolutely was fun! I knew your team was working hard every day and that encouraged me to work hard every day!</p> <p>Congratulations on an awesome 3rd place cash gold finish!</p> <blockquote> <p>You are dividing the second best prediction's probability by 2^2, the third best predictions's by 2^3, etc. How did you realise that this is the right way to 'adjust' the probabilities?</p> </blockquote> <p>The first thing to realize is that we cannot start making second predictions for one night before we make first predictions for every user every night. (Because when we make second predictions then <strong>certainly</strong> either the first or second prediction will be a false positive and hurt the metric). The Average Precision metric will rank every user every night <strong>all together</strong> and we want all false positives to have a lower score than true positives to optimize Average Precision. So the majority of our second predictions need to have a lower score than the majority of our first predictions.</p> <p>The exact division value is a hyperparameter that I determined from CV. First I used 10 because we want the second set of predictions to be "an order of magnitude" lower than the first set of predictions. The hyperparameter was <code>exponent</code> in <code>DIVIDE_CONSTANT = 10**exponent</code>. It turned out that <code>exponent = 1/5</code> was best which makes <code>DIVIDE_CONSTANT</code> approximately 2.</p> <p>Awesome job! Congratulations <a href="https://www.kaggle.com/chauyh" target="_blank">@chauyh</a> on a solo gold finish.</p> <p>I'm curious: did you build features and/or post-process information from the fact that sometimes the host makes fake anglez/enmo? For example, you could make a new boolean feature "has this anglez value at this daily timestep been seen before". Or you could post-process to remove predictions from time windows that had fake data?</p> <p>I'm curious if there was helpful information here. I think my models naturally removed this stuff.</p> <p>Yes it is impressive how well GRU can do with only the two features <code>anglez</code> and <code>enmo</code> as input. Note I did post-process the prediction logits for all <code>minute % 15</code> which equaled one of the values that did not occur in the train data. (There were 4 rare values for <code>onset</code> and 4 rare values for <code>wakeup</code>). For these rare logits, I divided their logit prediction by 2 before performing softmax. This boosted CV by approximately <code>+0.010</code> or <code>+0.020</code> (if I remember correctly)</p> <p>I also tried adding <code>minute % 15</code> as a feature for the GRU. 
It boosted the CV score, but it was better to leave this feature out and post-process instead. This is probably because the model overfits when the feature <code>minute % 15</code> is present.</p> <p>Thanks <a href="https://www.kaggle.com/fritzcremer" target="_blank">@fritzcremer</a> . Congratulations on amazing cash solo gold !!</p> <p>Thanks Ertugrul </p> <p>Thank you</p> <blockquote> <p>One is that you set crops for 24 hours which is 17280 steps but your input for Transformer Model is 1440</p> </blockquote> <p>My <code>CNN-Transformer-GRU</code> uses an input of 1440. The data loader output just takes every 12th step using the following Python indexing <code>X_train[:,::12,:]</code>.</p> <blockquote> <p>May I also asked about the 80 features you used?</p> </blockquote> <p>I used the 79 features from this popular LGBM notebook <a href="https://www.kaggle.com/code/renatoreggiani/lightgbm-starter-cmi" target="_blank">here</a>. And I added the additional feature <code>minute%15</code> to learn the weird labeler bias. (I also built my own CatBoost with my own 200 features and my CatBoost scored LB 0.705 but my Transformer used those public notebook features). </p> <blockquote> <p>So did you use the same solution or just use original "0" and "1" labels?</p> </blockquote> <p>I used the original "0" and "1". Each of my two models here has output sequence length 1440 which is the number of minutes per day. The target for each day crop was 1439 zeros with 1 one where the actual target is. My loss is softmax multi-class crossentropy, so the model had no trouble learning the unique target "1" out of "1440" different classes. (I.e. the classes are the 1440 different minutes each day and the model finds the correct minute each day)</p> <p>For reference, here are the 3 popular ways to create a target in this competition. All three performed well. From all three, we can compute a <code>step</code> and <code>score</code> for <code>submission.csv</code>:<br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/targets2.png" alt=""></p> <p>For reference, here are two Mel Spectrograms with 1D prediction and 2D prediction and ground truth:<br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/mel1.png" alt=""><br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/mel2.png" alt=""></p> <p>Do you mean that you get vanishing gradients during training? If so, does your data loader output the correct train data? Every train sample must have <code>X_train</code> with shape <code>(batch, 17280, 2)</code> where the 2 features are anglez and enmo standardized (i.e minus mean divide by std using the mean and std of the entire train data). And the target must be a list of length two <code>[y_train_1, y_train_2]</code> where each has shape <code>(batch, 1440)</code> and exactly 1439 zeros and 1 one? (i.e. there is exactly 1 target in every sample. We do not use 24 hour windows that are missing a target).</p> <p>Yes this was feasible. My final solution used this model to predict a <code>score</code> for each <code>onset</code> and <code>wakeup</code> and I added the <code>score</code> to the <code>score</code> from my Transformer and GRU to improve CV and LB. Also by itself, this model achieved good results.</p> <p>An easy way to avoid memory errors is to process data in chunks and save intermediate data to disk. 
We can do this both for train and infer.</p> <p>First we read Kaggle's train parquet in chunks, iterate through every user, and write each user to disk as a NumPy file. So for each user we have data shape <code>(days, 17280, 4)</code> where <code>days</code> is the number of 24 hour periods (per user), and 17280 is the number of steps in 24 hours. And <code>4</code> is anglez, enmo, and the two targets. (Also we begin each user at 3pm and end at 3pm and add padding to beginning and ending if necessary. That makes all the 24 hour windows the same).</p> <p>Next we iterate through each of these NumPy files. We load from disk then convert to image. Then save images to disk. The code below creates 2 channel images of size 256x1536 for each day for each user (the keyword argument names and file paths were lost in extraction and are reconstructed here):</p> <pre><code>import librosa
CROP = 17_280; SRR = 17_280; HL = 12; HH = 256

for j, u in enumerate(USERS):
    data = np.nan_to_num( np.load(f'{LOAD_PATH}{u}.npy') )  # path is a placeholder
    jj = []
    for r in range(len(data)):
        ii = []
        mel_spec = librosa.feature.melspectrogram(y=data[r,:,0], sr=SRR, n_fft=2048,
            hop_length=HL, n_mels=HH, fmin=0, fmax=SRR//2, win_length=2048//4)
        mel_spec_db = librosa.power_to_db(mel_spec, ref=np.max).astype(np.float32)
        ii.append(mel_spec_db)
        mel_spec = librosa.feature.melspectrogram(y=data[r,:,1], sr=SRR, n_fft=2048,
            hop_length=HL, n_mels=HH, fmin=0, fmax=SRR//2, win_length=2048//4)
        mel_spec_db = librosa.power_to_db(mel_spec, ref=np.max).astype(np.float32)
        ii.append(mel_spec_db)
        img2 = np.transpose( np.stack(ii), [1,2,0])
        jj.append(img2)
    jj = np.stack(jj)
    np.save(f'{SAVE_PATH}{u}.npy', jj)  # path is a placeholder
</code></pre> <p>Also note that the highest scoring public notebook uses Mel Spectrograms (created by PyTorch) and provides code on GitHub. Discussion <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940" target="_blank">here</a></p> <p>This is a powerful technique to achieve top results in object detection and object segmentation Kaggle competitions! </p> <p>We want the model to <strong>locate</strong> where <code>onset</code> and <code>wakeup</code> are. So every train sample contains an <code>onset</code> and <code>wakeup</code>. (We cannot learn location if there is no onset wakeup). Therefore our dataloader provides random 24 hour crops and <strong>every crop contains</strong> both <code>onset</code> and <code>wakeup</code>.</p> <p>This model will beat other models and techniques for <strong>locating</strong> <code>onset</code> and <code>wakeup</code>. However this model cannot determine existence. This model will think that every day contains an onset and wakeup. So we also need to build a second model (not described in my solution post). The second model's job is a classifier that outputs 0 or 1 predicting whether a 24 hour window has an onset and wakeup.</p> <p>We train the second model with a different data loader. The data loader for the second model provides <strong>all types of random 24 hour crops</strong>. Some crops have onset wakeup, and some crops do not have onset wakeup. The second model is a binary classifier that learns when onset wakeup is present or not.</p> <p>Then our inference pipeline uses both models: our first model predicts onset wakeup for <strong>every night, i.e. 24 hours</strong>. And our second model removes false positives. Whenever our second model outputs <code>prob &lt; 0.2</code> for a night (i.e. 24 hour window), then we <strong>remove</strong> the first model prediction from <code>submission.csv</code>.</p> <p>Thanks NikhilMishra. Congratulations to you and penguin46. 
Great job achieving cash gold !!</p> <p><strong>UPDATE</strong> I published the code for these two TensorFlow models <a href="https://www.kaggle.com/cdeotte/11th-place-gold-cv-835-public-lb-788" target="_blank">here</a></p> <p>Hi <a href="https://www.kaggle.com/sayoulala" target="_blank">@sayoulala</a> I think both methods work well. To me the task seemed like QA so I went QA from the beginning. Below are predictions from each technique. I think we need different post-processing for NER vs. QA. For my QA pipeline, I could simply find the top30 maxes to be my top30 predictions per night since QA makes peaks on single minute values. With NER, I think we would need to use a different way to find additional predictions.</p> <p>For diversity, I also trained NER models and used them to improve the <code>score</code> value for each QA prediction.</p> <h1>NLP QA Predictions</h1> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/qa.png" alt=""></p> <h1>NLP NER Predictions</h1> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2023/ner.png" alt=""></p> <p>The RNN, CNN, and Transformer blocks generate features for us. Code is shown <a href="https://www.kaggle.com/code/cdeotte/11th-place-gold-cv-835-public-lb-788" target="_blank">here</a>. Then the model head makes the prediction using features from the preceding blocks.</p>
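<p>Here is a minimal sketch of the extra-predictions trick described above, assuming <code>night_probs</code> is one night's per-minute probability vector from the QA head; the function name is mine, and for simplicity it omits the NMS spacing between guesses.</p> <pre><code>import numpy as np

def extra_predictions(night_probs, n_guesses=30):
    # the k-th best guess (k = 0..29) keeps its probability divided by 2**k,
    # so every later round of guesses scores below every earlier round
    order = np.argsort(-night_probs)[:n_guesses]
    return [(int(step), float(night_probs[step]) / 2**k)
            for k, step in enumerate(order)]
</code></pre>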
Child Mind Institute - Detect Sleep States
13th place solution [team summary]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank everyone involved in organizing this competition, all the participants, and my teammates <a href="https://www.kaggle.com/ktakita" target="_blank">@ktakita</a> <a href="https://www.kaggle.com/stgkrtua" target="_blank">@stgkrtua</a> <a href="https://www.kaggle.com/copasta" target="_blank">@copasta</a> <a href="https://www.kaggle.com/masatomatsui" target="_blank">@masatomatsui</a> <br> In this topic, I write up a summary of our team's solution. The details of these models will be written by each member.<br> <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/460499" target="_blank">13th place solution [K.T. part]</a></p> <h2>Summary</h2> <ul> <li>multi task learning (sleep state binary prediction &amp; onset/wakeup event prediction)</li> <li>ensemble (9 models)</li> <li>duplicate flag feature</li> <li>multi post processing<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2603247%2F61656908611bdc0b3b2f299e930f12c7%2FSleep%20Solution%20(2).png?generation=1701862054954560&amp;alt=media" alt=""></li> </ul> <h2>Model</h2> <p>Simple average ensemble of the following 9 models.<br> duration and downsampling_rate are tuned for each model.<br> Our models are based on tubo's public code. Many thanks!</p> <ul> <li><p>kuto</p> <ul> <li>2D UNet (cv:0.763)<ul> <li>backbone: efficientnet-b3</li> <li>duration: 5760</li> <li>down sample: 2</li></ul></li> <li>1D LSTM (cv: 0.786)<ul> <li>feature_extractor: 1DCNN+Wavelet transform</li> <li>decoder: 1DCNN+LSTM</li> <li>duration: 17280</li> <li>down sample: 4</li></ul></li></ul></li> <li><p>copasta</p> <ul> <li>2D UNet (cv: 0.786)<ul> <li>duration: 17280</li> <li>down sample: 6</li></ul></li> <li>2D UNet (cv: 0.780)<ul> <li>backbone: EfficientNetV2-S</li> <li>duration: 17280</li> <li>down sample: 6</li></ul></li> <li>CenterNet (cv:0.788)<ul> <li>duration: 17280</li> <li>down sample: 6</li></ul></li></ul></li> <li><p>K.T</p> <ul> <li>1D UNet (cv: 0.778)<ul> <li>Network: 4 Encoder and 4 Decoder with SE Block</li> <li>duration: 11440</li> <li>down sample 2</li></ul></li> <li>1D UNet (cv: 0.801)<ul> <li>Network: 4 Encoder and 4 Decoder with SE Block</li> <li>duration: 17280</li> <li>down sample 4</li></ul></li></ul></li> <li><p>toppo</p> <ul> <li>1D UNet (cv: 0.774)<ul> <li>feature_extractor:Wavenet</li> <li>decoder:LSTM</li> <li>duration: 11440</li> <li>down sample 4</li></ul></li> <li>1D UNet (cv: 0.765)<ul> <li>feature_extractor:Wavenet</li> <li>decoder:LSTM</li> <li>duration: 5760</li> <li>down sample 2</li></ul></li></ul></li> </ul> <p>The following techniques were effective in some models.</p> <ul> <li>add L1Loss for sleep state prediction diff</li> <li>warmup</li> <li>negative sampling (bg_sampling_rate greater than 0.5)</li> </ul> <h2>Dataset</h2> <p>features used by all models</p> <ul> <li>anglez, enmo</li> <li>hour (sin, cos transform)</li> <li>duplicate flag feature (<strong>important</strong>)</li> </ul> <p>features used by some models</p> <ul> <li>anglez, enmo diff</li> <li>anglez, enmo lead</li> </ul> <h3>duplicate flag feature</h3> 
<p>Some non-wear cases were filled with artificial data. These were created by duplicating the wave in 15-minute steps.<br> Therefore, we added a flag as a feature to determine if each step is a duplicated wave. This was effective for all models and improved CV and LB by about +0.005~+0.01.<br> Adding it to the model as a feature tended to work better than removing the artificial data in post-processing.</p> <h2>Post processing(pp)</h2> <p>We apply the following post-processing after scipy.signal.find_peaks.<br> PP 1 and 2 are the most important.</p> <ol> <li>12-step (1 min) unit based pp</li> <li>tolerance based pp</li> <li>remove wakeup event at the beginning of each series</li> <li>remove non pair event</li> <li>score decay at the ending of each series</li> </ol> <h3>pp details</h3> <ul> <li><p>12-step (1 min) unit based pp (cv +0.003)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2603247%2F17cf56ec24eb0167915ee5d9c1e80df5%2F2023-12-06%2018.29.18.png?generation=1701855024213569&amp;alt=media" alt=""><br> The metric is evaluated in units of 1 min (12 steps), so it is wasteful to use a step that is a multiple of 12 as the predicted value. Therefore, if the predicted step is a multiple of 12, the step is shifted by +-1. (A small sketch follows at the end of this post.)</p></li> <li><p>tolerance based pp (cv +0.005)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2603247%2F77465796a3083dd7284821d051133291%2F2023-12-06%2018.29.35.png?generation=1701855050816438&amp;alt=media" alt=""><br> Post-processing to bring predicted events in the 12-36 tolerance band within tolerance 12.<br> Place a score-decayed prediction +-23 steps away from each high-peak prediction (score &gt; 0.2).</p></li> </ul>
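<p>A minimal sketch of pp 1, assuming <code>steps</code> is the array of predicted step positions; the direction of the one-step nudge is my assumption.</p> <pre><code>import numpy as np

def shift_multiples_of_12(steps):
    # the metric is scored in 1-minute (12-step) units, so a prediction
    # landing exactly on a multiple of 12 wastes tolerance; nudge it by 1
    steps = steps.copy()
    steps[steps % 12 == 0] += 1  # +1 vs -1 is assumed here
    return steps
</code></pre>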
Child Mind Institute - Detect Sleep States
146th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'd like to extend my gratitude to the organizers and congratulate all participants.</p> <p>My solution builds upon the code by <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>. I gained insights into developing deep learning code from his program and am thankful for his willingness to share such excellent code!</p> <p><a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940</a></p> <h2>My solution</h2> <p>Here are my strategies for improving the score (although most of my ideas did not yield the expected results):<br> <a href="https://www.kaggle.com/code/ystsuji/cmi-fold-ensembliing-inference/" target="_blank">https://www.kaggle.com/code/ystsuji/cmi-fold-ensembliing-inference/</a></p> <h3>Ensemble Folds</h3> <p>I calculated the average score for each fold and identified peaks using find_peaks.</p> <h3>Remove Predictions in Dummy Signals</h3> <p>After obtaining predictions, those in periods with repeating signals were removed. The mean and variance of each signal (enmo, anglez) per day were analyzed to determine if they were repeating signals. (A small sketch of this check follows at the end of this post.)</p> <h2>Tips That Did Not Work</h2> <p>These turned out to be ineffective, but I welcome any feedback:</p> <ul> <li>Feature Engineering<ul> <li>Taking the absolute value of angle differences.</li> <li>Clipping the enmo value to the 95th percentile.</li> <li>Creating features through unsupervised time series segmentation (<a href="https://github.com/ermshaua/claspy" target="_blank">claspy</a>) -&gt; insufficient time in the Kaggle notebook to develop features.</li> <li>Flagging features of repeating signals.</li> <li>Entropy of enmo (inspired by <a href="https://www.nature.com/articles/s41598-020-79217-x" target="_blank">this article</a>).</li></ul></li> <li>Hyperparameter Tuning<ul> <li>Adjusting the duration.</li> <li>Modifying parameters in find_peaks.</li></ul></li> <li>Post-processing<ul> <li>Removing onset (wakeup) predictions sandwiched between other wakeup predictions.</li></ul></li> <li>Others<ul> <li>Correcting the timezone to local time.</li></ul></li> </ul>
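<p>Here is a minimal sketch of the repeating-signal check described above, assuming a pandas DataFrame with <code>step</code>, <code>enmo</code>, and <code>anglez</code> columns for one series_id; the exact duplicate-statistics criterion is my reading of the write-up.</p> <pre><code>import pandas as pd

def flag_repeating_days(df):
    # 17280 steps = 24 hours at the 5-second cadence
    day = df["step"] // 17280
    # days whose per-day (mean, var) of both signals duplicate another
    # day's are treated as repeating (i.e. dummy) signal
    stats = df.groupby(day)[["enmo", "anglez"]].agg(["mean", "var"])
    repeated = stats[stats.duplicated(keep=False)].index
    return day.isin(repeated)
</code></pre>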
Child Mind Institute - Detect Sleep States
14th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition was a great learning experience for me. I would like to thank the organizers!</p> <h2>Overview</h2> <ul> <li>The neural network was trained to directly predict the timing of events.</li> <li>The training was performed as a regression task, with 'wakeup' set to 1 and 'onset' set to -1, decayed before and after the occurrence of those events, as the target.</li> <li>The neural network structure is a modified version of U-Net with 1D-CNN.</li> <li>The submission file was created by extracting peaks from the average of the model outputs over the 10-fold CV.</li> <li>Post-processing before peak extraction improved the CV score and Public LB Score, but had little effect on the Private LB Score.</li> <li>The public/private scores of my final submission are 0.755/0.821 without post-processing, and 0.770/0.824 with post-processing.</li> </ul> <p>The notebooks of my solution are as follows:</p> <ul> <li><a href="https://www.kaggle.com/code/rsakata/cmi-14th-place-solution-train" target="_blank">https://www.kaggle.com/code/rsakata/cmi-14th-place-solution-train</a></li> <li><a href="https://www.kaggle.com/code/rsakata/cmi-14th-place-solution-inference" target="_blank">https://www.kaggle.com/code/rsakata/cmi-14th-place-solution-inference</a></li> </ul> <p>The public/private scores of the above notebook are 0.748/0.822 without post-processing, and 0.768/0.822 with post-processing. The differences appear to be due to minor changes made in the process of refactoring the code or simply due to randomness. (I consider these differences not essential.)</p> <h2>Input</h2> <p>The data for each series_id was divided into daily segments and input to the neural network. To be precise, however, in order to avoid the influence of padding in the CNN, the data of the previous and following days were partially combined and input.</p> <p>The number of input channels to the neural network is three, as follows:</p> <ul> <li>logarithm of the 2-minute moving standard deviation of 'anglez'</li> <li>logarithm of 'enmo'</li> <li>flag to identify dummy data</li> </ul> <p>Like the others, dummy data was detected by finding duplicated (anglez, enmo, time) rows within the same series_id. Predictions for dummy data were set to 0 before calculating the loss, so as not to affect backpropagation. (Therefore, the third input is not that important.)</p> <p>Features related to time were not added to the input to the neural network, and all temporal trends were considered in post-processing.</p> <h2>Target</h2> <p>In the evaluation metric of this competition, scores can be obtained even if the timing of event detection is slightly off, so the target should reflect not only the moment of event occurrence but also its surroundings when training.<br> On the other hand, insufficient target decay tended to make the timing of peaks only roughly predictable, leading to worse scores at smaller thresholds. 
After searching for the method that would yield the best validation score, I finally adopted an exponentially decaying form, as shown in the figure below. (A small sketch of such a target follows at the end of this post.)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F169364%2F40d85921f006a2b2c7c9522d8138d4f5%2F__results___5_0.png?generation=1702044292018443&amp;alt=media" alt=""></p> <h2>Model Architecture</h2> <p>The architecture of my neural network is based on the following notebook from another competition, written by K_mat (2nd place in this competition).<br> <a href="https://www.kaggle.com/code/kmat2019/u-net-1d-cnn-with-keras" target="_blank">https://www.kaggle.com/code/kmat2019/u-net-1d-cnn-with-keras</a></p> <p>The input granularity is every 5 seconds, but the output granularity is every minute, which differs from regular U-Net. I tried various structures in terms of number of layers, number of channels, kernel size, etc., but the search was limited and I believe that better configurations exist.<br> For details on the structure of the model, please see the notebook.</p> <h2>Post-processing</h2> <p>No time information was input to the neural network, so temporal trends were taken into account in post-processing. The minute-by-minute scores output by the neural network were multiplied by weights according to the trend by time of day.<br> Specifically, they are as follows:</p> <ul> <li>probability of each event by time of day across all series_ids</li> <li>probability of each event by 'minute mod 15'</li> <li>average of scores by time of day for each series_id (to capture periodicity at the series_id level)</li> </ul> <p>In addition, as a post-processing step unrelated to time, the scores were multiplied by a coefficient based on the percentage of dummy data. This is because series_ids with more dummy data tended to be somewhat more difficult to guess the peak accurately for, and I aimed to improve PR-AUC by placing more confident predictions at the top of the list.</p> <p>Peak extraction using the <code>scipy.signal.find_peaks</code> function was performed on the adjusted scores. A two-step strategy was used, first extracting the larger peaks and then the smaller peaks. The former extracts approximately one peak per day, while the latter extracts countless smaller ones. By amplifying the former scores, I aimed to improve the PR-AUC.</p> <p>This post-processing improved the CV score by about 0.005 to 0.01, but unfortunately, as mentioned earlier, its impact on the Private LB Score was limited.</p> <p>To accelerate the study, in the early stages I explored an approach that took minute-by-minute aggregates as input. In this process, I found that for anglez, variance worked better than mean. Finally, adjusting the window width, 2 minutes turned out to be best. By checking the distribution of the variables, I determined that a log transformation was desirable because the distribution was quite skewed.</p>
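<p>A minimal sketch of such an exponentially decaying regression target, assuming a per-minute grid with 'wakeup' at +1 and 'onset' at -1; the decay rate here is a placeholder, not the author's tuned value.</p> <pre><code>import numpy as np

def make_decaying_target(n_minutes, onset_minutes, wakeup_minutes, decay=0.8):
    # +1 at each wakeup, -1 at each onset, decaying exponentially
    # with distance from the event
    grid = np.arange(n_minutes)
    target = np.zeros(n_minutes, dtype=np.float32)
    for t in wakeup_minutes:
        target += decay ** np.abs(grid - t)  # decay rate is a placeholder
    for t in onset_minutes:
        target -= decay ** np.abs(grid - t)
    return np.clip(target, -1.0, 1.0)
</code></pre>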
Child Mind Institute - Detect Sleep States
152th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Overview</h1> <p>First, I would like to express my sincere thanks to the organizers of the Child Mind Institute - Detect Sleep States Competition.<br> I tried this competition based on <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>'s baseline. I extend my gratitude.</p> <h1>My Solution</h1> <ul> <li>Changed the way of validation<ul> <li>Divided into groups using GroupKFold, ensuring an equal distribution of the number of events.</li> <li>0.727</li></ul></li> <li>Clipped enmo<ul> <li>Clipped at 1 as outliers were significant</li> <li>0.724</li></ul></li> <li>Seed averaging with three different values<ul> <li>0: 0.725</li> <li>21: 0.724</li> <li>42: 0.723</li></ul></li> <li>Ensemble models<ul> <li>Averaged the predictions of each model, and then applied post-processing using find_peaks (see the sketch at the end of this post).</li> <li>0.73~0.735</li></ul></li> </ul> <h1>Did Not Work</h1> <ul> <li>Expanding the input window from 8 hours to 24 hours<ul> <li>I wanted to include information from the previous day and other times in the models.</li></ul></li> <li>Changing parameters of the Gaussian label<ul> <li>Attempted to pass smoother labels to the model, but did not see an improvement in accuracy.</li></ul></li> <li>Changing parameters of find_peaks</li> <li>Deleting low-score predictions in post-processing<ul> <li>Set a threshold for scores, but it was not successful.</li></ul></li> </ul>
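<p>A minimal sketch of the ensemble step above: average the per-fold (and per-seed) prediction curves, then run find_peaks; the parameter values here are placeholders since the write-up does not state them.</p> <pre><code>import numpy as np
from scipy.signal import find_peaks

def ensemble_and_detect(fold_preds, height=0.01, distance=100):
    # average the prediction curves of all folds/seeds/models ...
    mean_pred = np.mean(fold_preds, axis=0)
    # ... then extract event candidates as peaks of the averaged curve
    peaks, props = find_peaks(mean_pred, height=height, distance=distance)
    return peaks, props["peak_heights"]
</code></pre>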
Child Mind Institute - Detect Sleep States
155th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers and congrats to all participants.</p> <p>I started working after I found <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>'s public notebook &amp; his github repo.<br> <a href="https://www.kaggle.com/code/tubotubo/cmi-submit" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-submit</a><br> <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a><br> This was my first time tackling a deep learning technique instead of GBDT, and I learned so many things.</p> <h2>My solution</h2> <p>Basically my solution comes from tubotubo's public notebook.<br> Differences are the following:</p> <ul> <li>Use the top-5 models (by validation score across epochs) instead of only the best one.</li> <li>Ensembling over a variety of K-Folds</li> </ul> <p>I found the validation score fluctuates during training, and since private data is 75% of the test dataset, I decided to ensemble as much as I could.</p> <h2>A Tip I Tried That Did Not Work</h2> <p>Here is a tip I couldn't find in the discussions. It was not effective, but I'd appreciate any feedback. (A small sketch follows at the end of this post.)</p> <ul> <li>Mean/std standardization on each series_id, instead of the whole dataset<br> I wanted my model to learn the difference between awake/sleep instead of each series_id's difference.<br> So I applied mean/std standardization to anglez/enmo on each series_id, but it didn't work as I expected.</li> </ul>
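<p>A minimal sketch of the per-series standardization tip above (it did not help the author, but the idea is simple); column names follow the competition data.</p> <pre><code>import pandas as pd

def standardize_per_series(df):
    # z-score anglez/enmo within each series_id instead of globally,
    # hoping the model focuses on awake/sleep differences
    for col in ["anglez", "enmo"]:
        grp = df.groupby("series_id")[col]
        df[col] = (df[col] - grp.transform("mean")) / (grp.transform("std") + 1e-8)
    return df
</code></pre>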
Child Mind Institute - Detect Sleep States
159th solution 🥉
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I started working on this competition just after <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a> published his solution, to learn how to solve this kind of time series problem.<br> I learned a lot from his approach and appreciate him sharing great ideas!</p> <p><a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940</a></p> <p>My solution is basically the same as the referenced one.<br> Here are my ideas to improve the score.</p> <h1>Continuous time features</h1> <p>The original code uses hour_sin/hour_cos features to teach the model what time of day it is looking at.<br> I changed this feature to seconds in a day so the feature changes continuously rather than jumping at the border of hours. This boosted my cv score by about 0.01. (A small sketch follows at the end of this post.)</p> <h1>Smaller threshold and larger distance of the peaks</h1> <p>Considering the competition metric, I changed the post-processing parameters.</p> <ul> <li>peak distance: 100 steps</li> <li>peak threshold: 0.005</li> </ul> <p>The larger peak distance decreases false positives that are too close to each other.<br> The smaller peak threshold increases true positives without adding false positives that affect the mAP metric (additional peaks with relatively small scores are only considered after the higher peaks have been counted as TP/FP when calculating the mAP score, so setting a smaller threshold is a reasonable strategy).</p> <h1>Excluding predictions in periodic times</h1> <p>There are periodic times without any label events in the training data.<br> Predictions in these periods should be removed so that the number of FPs decreases.<br> I compared the series features around each predicted event with those of the preceding and following days, and excluded the event if it was identical to at least one of them.<br> This had little effect on my cv score but boosted the public score by about +0.007 (private: +0.016)!</p> <h1>etc.</h1> <p>Actually I had no idea how to improve the model architecture, which is already sophisticated, so I worked on minor changes like post-processing. I will catch up on the high rankers' solutions and learn how to construct models!</p>
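<p>A minimal sketch of the continuous time feature above, assuming a pandas datetime column; encoding seconds-in-day on a sin/cos circle changes smoothly instead of jumping at hour borders.</p> <pre><code>import numpy as np
import pandas as pd

def seconds_in_day_features(timestamp):
    # seconds elapsed since midnight, mapped onto the unit circle
    sec = (timestamp.dt.hour * 3600 + timestamp.dt.minute * 60
           + timestamp.dt.second)
    angle = 2 * np.pi * sec / 86400
    return pd.DataFrame({"time_sin": np.sin(angle), "time_cos": np.cos(angle)})
</code></pre>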
Child Mind Institute - Detect Sleep States
15th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I'd like to thank the Kaggle team and the host for hosting this competition, and thank the many other participants for sharing code, ideas, and datasets. Special thanks to my teammates (<a href="https://www.kaggle.com/kunihikofurugori" target="_blank">furu</a>, <a href="https://www.kaggle.com/yukio0201" target="_blank">mizoo</a>, <a href="https://www.kaggle.com/yamsam" target="_blank">isamu</a>, <a href="https://www.kaggle.com/tereka" target="_blank">tereka</a>) who worked together on the competition, and <a href="https://www.kaggle.com/tubotubo" target="_blank">213tubo</a> for sharing such a great <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">pipeline</a>.</p> <h3>Overview</h3> <p>Our final submission is an ensemble of 55 models in total, including eleven 5-fold models.</p> <p>Our solution has four main parts. <br> They are the furu/mizoo/kaerururu part, the isamu part, the tereka part, and the post-process part.</p> <h3>furu, mizoo, kaerururu part</h3> <p>Our pipeline is based on 213tubo's.<br> The architecture is various Feature Extractor -&gt; Unet Encoder -&gt; Unet1d Decoder.<br> Onset cv best model and wakeup cv best model are <code>saved separately</code>.</p> <ul> <li><p>Feature Engineering</p> <ul> <li>normalize anglez, enmo with mean, std</li> <li>anglez, enmo, hour feature (sin, cos)</li></ul></li> <li><p>Feature Extractor</p> <ul> <li>LSTMFeatureExtractor</li> <li>CNNSpectrogram</li> <li>GRUFeatureExtractor</li></ul></li> <li><p>Unet Encoder</p> <ul> <li>various pretrained weights<ul> <li>resnet18</li> <li>resnet34</li> <li>inceptionv4</li></ul></li></ul></li> <li><p>Unet1d Decoder</p></li> <li><p>various durations</p> <ul> <li>17280 (24h)</li> <li>5760 (8h)</li></ul></li> </ul> <h3>isamu part</h3> <p>Isamu has two model architectures. One is a MultiResidualBiGRU model based on the <a href="https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/416410" target="_blank">4th place solution of the Parkinson competition</a> and the other is Wavenet-based.</p> <ul> <li>Feature Engineering<ul> <li>normalize anglez, enmo with mean, std</li> <li>stack sequences downsampled in various ways (min, max, mean, std, median) </li></ul></li> </ul> <h3>tereka part</h3> <p>Tereka has two model architectures. 
One is Unet-based and the other is Transformer-based.</p> <ul> <li><p>Feature Engineering</p> <ul> <li>normalize anglez, enmo with mean, std</li> <li>anglez, enmo, diff (anglez, enmo), hour feature (sin, cos)</li></ul></li> <li><p>Use a moving average (window_size=12) of the model output.</p></li> </ul> <h3>post processing</h3> <ul> <li>Unwear detection<ul> <li>Both a rule-based approach (focusing on the daily cycle) and a random forest model worked.</li> <li>One of the final submissions used the rule base and the other used the random forest model.</li></ul></li> <li>When the predicted step = 0, set the step value to +1</li> <li>Boost candidates within 24 hours<ul> <li>Considering that each event occurs once within 24 hours, the candidate with the greatest probability within each 24-hour window was boosted (see the sketch after the citations below)</li></ul></li> </ul> <h3>What Worked</h3> <ul> <li>Remove 8 noisy samples in the train and valid phases</li> <li>Add fulltrain</li> <li>The lower the threshold, the better the score<ul> <li>The lower the threshold, the higher the probability of a submission scoring error :(</li></ul></li> <li>Exponential Moving Average</li> <li>Label Smoothing</li> <li>Combine BCE loss with dice loss (applied only to peaks)</li> </ul> <h3>What Didn’t Work</h3> <ul> <li>Larger model backbone</li> <li>positional encoding</li> <li>date embeddings</li> <li>2nd stage NN model</li> <li>layer-wise lr tuning</li> <li>Transformer decoder</li> <li>Setting the first 3000 steps of the onset prediction array and the last 3000 steps of the wakeup prediction array to 0.</li> </ul> <h3>Comparison of CV, LB, and Private for the final submissions</h3> <table> <thead> <tr> <th>Name</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Submission 1 (cv best)</td> <td>0.8125</td> <td>0.773</td> <td>0.823</td> </tr> <tr> <td>Submission 2 (lb best)</td> <td>-</td> <td>0.778</td> <td>0.822</td> </tr> </tbody> </table> <h3>Comparison of CV, LB, and Private with and without post-processing</h3> <table> <thead> <tr> <th>Name</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>With post-processing</td> <td>0.8125</td> <td>0.773</td> <td>0.823(+0.011)</td> </tr> <tr> <td>Without post-processing</td> <td>-</td> <td>0.767</td> <td>0.812</td> </tr> </tbody> </table> <h3>Best scores for each single model</h3> <table> <thead> <tr> <th>Name</th> <th>Model</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>furu model</td> <td>GRUINITFeatureExtractor-UNet1DDecoder</td> <td>0.7689665</td> <td>0.746</td> <td>0.8</td> </tr> <tr> <td>kaerururu model1</td> <td>GRUFeatureExtractor-UNet1DDecoder</td> <td>0.743</td> <td>-</td> <td>-</td> </tr> <tr> <td>kaerururu model2</td> <td>GRUFeatureExtractor-UNet1DDecoder</td> <td>0.733</td> <td>-</td> <td>-</td> </tr> <tr> <td>mizoo model</td> <td>LSTMFeatureExtractor-UNet1DDecoder</td> <td>0.7508</td> <td>0.736</td> <td>0.784</td> </tr> <tr> <td>Isamu model1</td> <td>wavenet_lstm</td> <td>0.773</td> <td>-</td> <td>-</td> </tr> <tr> <td>Isamu model2</td> <td>1d-gru</td> <td>0.775</td> <td>-</td> <td>-</td> </tr> <tr> <td>tereka model1</td> <td>unet base</td> <td>0.764</td> <td>0.752</td> <td>0.803</td> </tr> <tr> <td>tereka model2</td> <td>transformer base</td> <td>0.773</td> <td>0.729</td> <td>0.797</td> </tr> </tbody> </table> <h3>Important Citations</h3> <ul> <li>213 tubo's pipeline<ul> <li><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a></li></ul></li> 
<li>Parkinson competition 4th place solution<ul> <li><a href="https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/416410" target="_blank">https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/416410</a></li></ul></li> </ul>
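<p>A minimal sketch of the "boost candidates within 24 hours" post-processing described above, assuming <code>steps</code>/<code>scores</code> are the candidate event positions and probabilities; the boost factor is a placeholder.</p> <pre><code>import numpy as np

def boost_daily_best(steps, scores, day_steps=17280, boost=1.5):
    # each event occurs roughly once per 24h, so amplify the
    # highest-scoring candidate inside every 24-hour window
    scores = scores.copy()
    for day in np.unique(steps // day_steps):
        mask = steps // day_steps == day
        best = np.flatnonzero(mask)[np.argmax(scores[mask])]
        scores[best] *= boost  # boost factor is a placeholder
    return scores
</code></pre>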
Child Mind Institute - Detect Sleep States
16th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to thank the organizers for such a great competition that provided us with the opportunity to delve into the very interesting domain of detecting sleep events using time series data from an accelerometer. I hope the participants' solutions provided meaningful and valuable insights to Child Mind Institute that will speed up and enhance their research. <br> This was my first competition where I participated seriously, and I achieved 13th place with a gold medal on the Public data and 16th place with a silver medal on the Private data (unfortunately, the shake-up on the private LB had other plans for my gold medal).</p> <p><strong>Overview</strong></p> <p>Data Processing/Feature engineering:</p> <ul> <li>Remove gaps without labels from the training data.</li> <li>Apply the log1p transformation to enmo and anglez/90.</li> <li>A combination of spectrograms with different window lengths and rolling features.</li> <li>Gaussian distributions around events as the target labels.</li> </ul> <p>Models:</p> <ul> <li>5-fold GRU + CNN model for detecting events.</li> <li>5-fold LSTM + CNN model for detecting events.</li> <li>5-fold TCNN model for detecting events.</li> <li>5-fold LSTM + CNN model for segmenting states (sleep or awake).</li> </ul> <p>Post-processing:</p> <ul> <li>Find mini peaks (7 minutes distance).</li> <li>Find macro peaks (400 minutes distance) to capture multiple micro peaks (1 minute distance) around them.</li> <li>Apply a segmentation probability map using a custom equation to rescore detections.</li> <li>Detect and remove dummy segments.</li> <li>Maximally increased overlap between input windows at inference.</li> </ul> <p><strong>Data processing</strong></p> <p>I noticed that models perform better in detecting events outside of dummy segments when trained on filtered data. The filtration process involved identifying gaps without any labels and removing them from the time series, while retaining 300 minutes after the last wakeup before the gap and 300 minutes before the first available onset after the gap. The 300-minute margin was kept to preserve context for the models around events. Then, the filtered time series were split into chunks of 700 minutes with an overlap of 350 minutes and shuffled into 5 group folds for training and validation. Predictions on overlapping segments were averaged. </p> <p><strong>Feature engineering</strong></p> <p>Feature engineering initially consisted of taking log1p of enmo and anglez/90, then building spectrograms of them with window lengths 12 and 60, as well as rolling features such as absolute-difference median, std, and median with window lengths of 30, 60, and 120. The sin of minutes was included as an additional feature. </p> <p>Labels for the event detector task were made by placing a Gaussian over 20 minutes around events with sigma=0.75. Labels for segmentation were made by taking segments between pairs of onset and wakeup. </p> <p><strong>Event detection models and training</strong></p> <p>I achieved the best CV using an LSTM with an initial dense layer of 32 dims (increasing the dimensionality gave worse results); after the LSTM, a CNN head was added. 
Then, I found out that incorporating a TCNN with 7 residual layers (kernel size 3, dilation 3^(layer-1)) into the ensemble boosted the LB score by ~0.004. </p> <p>To add a little diversity to the ensemble, I reasoned that because GRU has fewer gates and information flows differently than in LSTM, I could include it using the first architecture with the LSTM replaced, and got a ~0.002 boost on LB. </p> <p>Training models with CyclicLR, with cycle length = the number of batches, max lr = 0.005 and min lr = 0.0001, gave the best results. I suppose that it may be related to the cyclic nature of time series data. </p> <p><strong>Segmentation models and training</strong></p> <p>After visualizing predictions, I realized that event detection models sometimes overlook long-term dependencies. For instance, they may assign high probabilities to steps preceding a gap, even when a long real sleep-state segment appears shortly after.</p> <p>Wishing to include more context, I decided to train a segmentation model to segment the sleep/awake state and apply its probability map via custom equations, which are explained in the post-processing part. </p> <p>After some experiments with 1D U-Net, CNN, and RNN, I found that an LSTM with a CNN head (the same architecture as the event detection model) showed the best IoU. CyclicLR was used as well, with the same parameters as for the event detection models. </p> <p>Below are predictions of the event detection and state segmentation models on a random segment. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6343664%2Fb7b4a6f8c2d8ba619d51f2d7b6bc7bfc%2FScreenshot%20from%202023-12-09%2001-44-49.png?generation=1702079152087124&amp;alt=media" alt=""></p> <p><strong>Postprocessing</strong> </p> <p>I found that detecting peaks with a 7-minute distance between them gave the best score. </p> <p>But because of the strange nature of the ground truth with its offsets, I found it beneficial to additionally find micro peaks (1-minute distance) around dominant macro peaks (400-minute distance); doing this increased the probability that I did not skip peaks closer to the ground truth. It gave about a 0.003 LB boost. </p> <p>Then, as you can see in the visualization of predictions, models can predict peaks before or after some gaps, and before or after segments with fluctuations during sleep (short wakeups during sleep periods). If peaks close to the ground truth have a higher score, these small peaks should not impact the overall score. However, if peaks close to the ground truth have a score similar to those further away, it can significantly decrease the score. Therefore, I needed to rescore the peaks using the larger context provided by the segmentation model (e.g., low probabilities over short non-sleep segments and higher over real sleep segments). For this rescoring I came up with the following equations:</p> <p><strong><em><em>To rescore onsets I used the following equation:</em></em></strong></p> <p>new onset score = onset peak score from the event detection model * <br> (1 - MeanSegm(-240 minutes) + MeanSegm(+240 minutes)) / 2</p> <p>Where 1 - MeanSegm(-240 minutes) is 1 minus the mean of the segmentation probability map over the 240-minute segment before the onset peak. The logic is that the segmentation model should give lower probability of being in the sleep state before real onsets. 
The same logic applies in reverse for the time series after onset peaks, denoted as MeanSegm(+240 minutes).</p> <p><strong><em><em>To rescore wakeups I used the same equation but with inverted signs:</em></em></strong></p> <p>new wakeup score = wakeup peak score from the event detection model * <br> (1 - MeanSegm(+240 minutes) + MeanSegm(-240 minutes)) / 2</p> <p>Where 1 - MeanSegm(+240 minutes) is 1 minus the mean of the segmentation probability map over the 240-minute segment after the wakeup peak. The logic is that the segmentation model should give lower probability of being in the sleep state after real wakeups. The same logic applies in reverse for the time series before wakeups, MeanSegm(-240 minutes). </p> <p>The rescoring provided me with a 0.006 LB boost. (A small sketch of this rescoring follows at the end of this post.)</p> <p><strong><em><em>To filter dummy data</em></em></strong></p> <p>To filter dummy data, I decreased the frequency of timestamps to every ten minutes, took windows of length 40 for each step, and searched for similar windows by Euclidean distance with threshold &lt; 0.75. Then I marked 400 minutes before the start of the detected dummy segments as dummy data for onsets, and did the same after dummy segments for wakeups. It gave about a 0.05 LB boost! </p> <p><strong><em><em>Maximum overlap</em></em></strong></p> <p>I also noticed that inference with a large overlap boosted the score (it allowed the model to look at events with slightly different context). So, with a window of 700 minutes, I selected an overlap of 650 minutes to fit into the 9-hour limit.</p> <p><strong>What did not work</strong></p> <ul> <li>any augmentations, like random noise; empty gaps with a random constant anglez value between -90 and 90 and zero enmo, to simulate a common pattern in the data where multiple false positives appeared; short segments with high variation, to simulate a common pattern during sleep; amplitude scaling.</li> <li>More complicated models like 1D U-Net with an RNN after each downsample.</li> <li>Rescoring of close peaks with the XGBoost rank method. </li> <li>A second-stage RNN run on short segments around peaks.</li> <li>A hierarchical RNN from this paper - <a href="https://arxiv.org/pdf/1809.10932v3.pdf" target="_blank">https://arxiv.org/pdf/1809.10932v3.pdf</a> </li> </ul> <p>Thanks. I hope that I will regain my luck in the next competitions 😀</p>
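<p>A minimal sketch of the onset rescoring equation above, assuming <code>segm</code> is the per-minute sleep-probability map and the peak is not within 240 minutes of the series boundary; for wakeups the before/after terms swap roles.</p> <pre><code>import numpy as np

def rescore_onset(peak_minute, peak_score, segm, window=240):
    # sleep probability should be low before a real onset
    # and high after it
    before = segm[peak_minute - window:peak_minute].mean()
    after = segm[peak_minute:peak_minute + window].mean()
    return peak_score * (1 - before + after) / 2
</code></pre>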
Child Mind Institute - Detect Sleep States
1790th Place Solution - code details, attempts and insights
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello fellow Kagglers,</p> <p>Firstly, I would like to congratulate all the winners and participants of the recently concluded Child Mind Institute - Detect Sleep States Competition. Although my result was in the "top 96%", which isn't quite where I had hoped, I wanted to share my solution and the insights I gained throughout the competition. My aim is to contribute to the community and hopefully offer some insights for other newcomers.</p> <p><strong>Context</strong></p> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data</a></li> </ul> <p><strong>Overview of the approach</strong></p> <ul> <li>Target<ul> <li>classification of 4-class events<ul> <li>Awake, Sleep (negative when submitted)</li> <li>Onset, Wakeup (positive when submitted)</li></ul></li></ul></li> <li>Features<ul> <li>enmo, anglez: as default</li> <li>shift_anglez: lagged feature derived from anglez to capture time-series characteristics (step=1)</li> <li>hour_sin, hour_cos: cyclical features derived from timestamps</li></ul></li> <li>Training method: LightGBM</li> <li>Validation: Hold-out<ul> <li>train, valid = 221, 56</li></ul></li> <li>Evaluation<ul> <li>AP (LB score: 0.004)</li></ul></li> </ul> <p><strong>Details of the submission</strong><br> All parts of the code are open in the following notebooks so feel free to refer to them:<br> * Dataset: <a href="https://www.kaggle.com/datasets/coyamax/cmi-anglezlag" target="_blank">https://www.kaggle.com/datasets/coyamax/cmi-anglezlag</a><br> * Training model and validation: <a href="https://www.kaggle.com/code/coyamax/fork-of-cmi-train2-11?kernelSessionId=153365353" target="_blank">https://www.kaggle.com/code/coyamax/fork-of-cmi-train2-11?kernelSessionId=153365353</a><br> * Inference: <a href="https://www.kaggle.com/code/coyamax/cmi-infer-final?kernelSessionId=153746953" target="_blank">https://www.kaggle.com/code/coyamax/cmi-infer-final?kernelSessionId=153746953</a></p> <p><strong>Attempts and insights</strong><br> Lagged features: </p> <ul> <li>lagged features derived from anglez and enmo (shift_anglez and shift_enmo, respectively) have different impacts on scores; shift_anglez leads to better results than shift_enmo or than both used together.<ul> <li>Only shift_anglez used: 0.004</li> <li>Only shift_enmo used: 0.003</li> <li>Both shift_anglez and shift_enmo used: 0.002</li></ul></li> <li>The result may suggest that using more features derived from anglez is more effective than using those from enmo if computational resources are limited.<br> [Feature importance of features at the same time]<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11605610%2F04fc8780b328d777d9575cb0dac2543e%2Ff_imp.jpg?generation=1701835769332359&amp;alt=media" alt=""></li> </ul> 
<p>Post-process:</p> <ul> <li>The model predicts four labels: awake (the state of not sleeping), sleep (the state of sleeping), onset (when the state of sleeping begins), and wakeup (when the state of sleeping ends).</li> <li>The final submission only uses the predictions of onset and wakeup directly, by extracting the peaks of the prediction scores for each series id (a small sketch follows after the sources below).</li> <li>In the development process, I also hypothesized that the predicted results of awake and sleep might be the key to predicting onset and wakeup; valleys of the prediction scores (near 0, in other words) of awake and sleep correspond to the steps of onset and wakeup, respectively. However, it turned out that detecting score valleys of awake and sleep did not have a significant effect on predicting the positive labels compared to detecting the score peaks of wakeup and onset, and ended up with an LB score of 0.</li> <li>I conclude that simple peak detection on the onset/wakeup scores performs better in my case.<br> [Prediction scores of awake for 3 samples of series ids]<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11605610%2F5be8559d565383751d601c4eec60dd66%2Fawake.png?generation=1701835788873540&amp;alt=media" alt=""><br> [Prediction scores of sleep for 3 samples of series ids]<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11605610%2Fee640d2040c9f10ea95736f2757496e0%2Fsleep.png?generation=1701835824555093&amp;alt=media" alt=""></li> </ul> <p><strong>Sources</strong></p> <ul> <li><a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940</a></li> </ul>
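<p>A minimal sketch of that final peak-extraction step, assuming per-step onset/wakeup class probabilities from the LightGBM model; the find_peaks parameters are placeholders, not the author's exact settings.</p> <pre><code>import numpy as np
from scipy.signal import find_peaks

def extract_events(onset_scores, wakeup_scores):
    events = []
    for label, scores in [("onset", onset_scores), ("wakeup", wakeup_scores)]:
        # take local maxima of each event's probability curve
        peaks, props = find_peaks(scores, height=0.5, distance=12 * 60)
        events += [(label, int(p), float(h))
                   for p, h in zip(peaks, props["peak_heights"])]
    return events
</code></pre>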
Child Mind Institute - Detect Sleep States
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to express gratitude to all participants and the competition host. It was a challenging competition, but I am pleased with the positive outcome and feel relieved.</p> <p>Here is a brief summary of our solution.<br> You can check our code <a href="https://github.com/sakami0000/child-mind-institute-detect-sleep-states-1st-place" target="_blank">here</a>.</p> <h2>Single model</h2> <p>A log of how the CV score was improved comes after the summary. The final single-model scores were: CV: 0.8206, public LB: 0.768, private LB: 0.829 (equivalent to 9th place).</p> <h3>Summary</h3> <h4>Model structure</h4> <p>The model structure is primarily based on <a href="https://www.kaggle.com/code/danielphalen/cmss-grunet-train" target="_blank">this amazing notebook</a>, with a structure comprising:<br> CNN (down sample) → Residual GRU → CNN (up sample)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2Fcc6a10ba62b3a3d9361b36e8b162ca5d%2Fmodel_structure.jpeg?generation=1701857831021267&amp;alt=media" alt=""></p> <ul> <li>SEScale</li> </ul> <p>For input scaling, SEModule was utilized. (<a href="https://arxiv.org/abs/1709.01507" target="_blank">https://arxiv.org/abs/1709.01507</a>)</p> <pre><code># reconstructed from the write-up: the class name and some identifiers were
# lost in extraction; "SEScale" is taken from the bullet above
class SEScale(nn.Module):
    def __init__(self, ch: int, r: int) -&gt; None:
        super().__init__()
        self.fc1 = nn.Linear(ch, r)
        self.fc2 = nn.Linear(r, ch)

    def forward(self, x: torch.FloatTensor) -&gt; torch.FloatTensor:
        h = self.fc1(x)
        h = F.relu(h)
        h = self.fc2(h).sigmoid()
        return h * x
</code></pre> <ul> <li>Minute connection</li> </ul> <p>As noted in several discussions and notebooks, there was a bias in the minute when ground truth events occurred. 
To account for this, features related to minutes were concatenated separately in the final layer.</p> <pre><code># reconstructed forward pass: tensor dims and the minute-column index were
# lost in extraction and are assumptions
def forward(self, num_x, cat_x) -&gt; torch.FloatTensor:
    cat_embeddings = [embedding(cat_x[:, :, i]) for i, embedding in enumerate(self.category_embeddings)]
    num_x = self.numerical_linear(num_x)
    x = torch.cat([num_x] + cat_embeddings, dim=-1)
    x = self.input_linear(x)
    x = self.conv(x.transpose(-1, -2)).transpose(-1, -2)
    for gru in self.gru_layers:
        x, _ = gru(x)
    x = self.dconv(x.transpose(-1, -2)).transpose(-1, -2)
    minute_embedding = self.minute_embedding(cat_x[:, :, 1])  # minute column index assumed
    x = self.output_linear(torch.cat([x, minute_embedding], dim=-1))
    return x
</code></pre> <h4>Data Preparation</h4> <p>Each series of data is divided into daily chunks, offset by 0.35 days.</p> <pre><code># reconstructed chunking code: string literals were lost in extraction,
# so the "row_id" column name and tqdm description are assumptions
train_df = train_df.with_columns(pl.arange(0, pl.count()).alias("row_id"))
series_row_ids = dict(train_df.group_by("series_id").agg("row_id").rows())
series_chunk_ids = []
series_chunk_row_ids = []
for series_id, row_ids in tqdm(series_row_ids.items(), desc="chunking"):
    for start_idx in range(0, len(row_ids), int(config.stride_size / config.epoch_sample_rate)):
        if start_idx + config.chunk_size &lt;= len(row_ids):
            chunk_row_ids = row_ids[start_idx : start_idx + config.chunk_size]
            series_chunk_ids.append(series_id)
            series_chunk_row_ids.append(np.array(chunk_row_ids))
        else:
            chunk_row_ids = row_ids[-config.chunk_size :]
            series_chunk_ids.append(series_id)
            series_chunk_row_ids.append(np.array(chunk_row_ids))
</code></pre> <p>During training, half of each chunk is used in every epoch.</p> <pre><code>sampled_train_idx = train_idx[epoch % config.epoch_sample_rate :: config.epoch_sample_rate]
</code></pre> <p>For evaluation, overlapping sections are averaged, and the ends of each chunk are trimmed by 30 minutes.</p> <h4>Target</h4> <p>A decaying target is created based on the distance from the ground truth event, with diminishing values as the distance increases.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2Fc6d809492da742bbcb9252615375e76d%2Ftarget.png?generation=1701858069231326&amp;alt=media" alt=""></p> <pre><code># reconstructed target creation: literal values were lost in extraction.
# tolerance_steps are the competition metric's tolerances (in 5-second steps);
# the event column names are assumptions
tolerance_steps = [12, 36, 60, 90, 120, 150, 180, 240, 300, 360]
target_columns = ["event_onset", "event_wakeup"]
train_df = (
    train_df.join(train_events_df.select(["series_id", "step", "event"]), on=["series_id", "step"], how="left")
    .to_dummies(columns=["event"])
    .with_columns(
        pl.max_horizontal(
            pl.col(target_columns)
            .rolling_max(window_size * 2 - 1, min_periods=1, center=True)
            .over("series_id")
            * (1 - i / len(tolerance_steps))
            for i, window_size in enumerate(tolerance_steps)
        )
    )
)
</code></pre> <p>The target is updated each epoch to decay further.</p> <pre><code># keep exact event points at 1.0 and decay everything else each epoch
# (numeric constants reconstructed)
targets = np.where(targets == 1.0, 1.0, (targets - (1.0 / config.n_epochs)).clip(min=0.0))
</code></pre> <p>By attenuating the target, the range of predicted values narrows, allowing for the detection of finer peaks.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F002ebac2c4da25cdde7a81b27f1ec492%2F2023-12-09%200.15.31.png?generation=1702048562641170&amp;alt=media" alt=""></p> <h4>Periodicity Filter</h4> <p>As discussed <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/456177" target="_blank">here</a>, there is a daily periodicity in the data when the measuring device is removed. 
This is leveraged to predict these periods with a rule-based method, which is then used as a filter for both the inputs and the predictions.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2Fbd12584427bc9e2e628e5a095419ba2f%2F2023-12-06%2015.11.06.png?generation=1701858205750939&amp;alt=media" alt=""></p> <h4>Input Features</h4> <ul> <li>categorical features<ul> <li>hour</li> <li>minute</li> <li>weekday</li> <li>periodicity flag</li></ul></li> <li>numerical features<ul> <li>anglez / 45</li> <li>enmo.log1p().clip_max(1.0) / 0.1</li> <li>anglez, enmo 12 steps rolling_mean, rolling_std, rolling_max</li> <li>anglez_diff_abs 5 min rolling median</li></ul></li> </ul> <h3>Change logs</h3> <ul> <li>baseline model (cv: 0.7510) - public: 0.728</li> <li>Add a process to decay the target every epoch (cv: 0.7699, +19pt)</li> <li>Add a periodicity filter to the output (cv: 0.7807, +11pt)</li> <li>Add a periodicity flag to the input as well (cv: 0.7870, +6pt) - public: 0.739</li> <li>batch_size: 16 → 4, hidden_size: 128 → 64, num_layers: 2 → 8 (cv: 0.7985, +11pt) - public: 0.755</li> <li>Normalize the score in the submission file by the daily score sum (cv: 0.8044, +6pt)</li> <li>Remove month and day from the input (cv: 0.8117, +7pt)</li> <li>Trim the edges of the chunk by 30 minutes on both sides (cv: 0.8142, +4pt) - public: 0.765</li> <li>Modify to concatenate the minute features to the final layer (cv: 0.8206, +6pt) - public: 0.768</li> </ul> <hr> <h2>Post Processing</h2> <p>This post-processing creates a submission DataFrame that optimizes the evaluation metric. With this post-processing method, we significantly improved our scores (public: 0.768 → <strong>0.790</strong>, private: 0.829 → <strong>0.852</strong> !!!).</p> <p>This was a complex procedure, which I will explain step by step.</p> <ol> <li><p><strong>Data Characteristics</strong></p> <p>First, let's discuss the characteristics of the data. As noted in several discussions and notebooks, the seconds field of the target events was always set to zero.<br> The competition's evaluation metric doesn't differentiate predictions within a 30-second range from the ground truth event. So, whether the submission timestamp's seconds are 5, 10, 15, 20, … 25, the same score is returned.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F704ce98b414aa9a6d22853ecf77d7322%2F2023-12-06%2015.07.34.png?generation=1701858658892729&amp;alt=media"></p></li> <li><p><strong>Creation of the 2nd Level Model</strong></p> <p>The 1st level model's predictions were trained to recognize events within a certain range from the ground truth as positive. However, the 2nd level model transforms these into probabilities of a ground truth event existing for each minute.<br> The output of the 1st level model was at seconds 0, 5, 10, …, but the 2nd level model aggregates these to always be at second 0. Specifically, it inputs aggregated features around hh:mm:00 and learns to predict 1 only at the exact time of an event, otherwise 0. Details of the 2nd level model will be described later.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F2dd6dc2cd5c57b5fb5329844da31badd%2F2023-12-06%2014.58.37.png?generation=1701858758483725&amp;alt=media"></p></li> <li><p><strong>Score Calculation for Each Point</strong></p> <p>As explained earlier, submitting any second within the same minute yields the same score. 
Therefore, we estimate the score at the 15- and 45-second points of each minute and submit the one with the highest value, which is equivalent to optimizing over all candidate points. The method of score estimation is as follows (a code sketch appears at the end of this post):</p> <p>For instance, let's estimate the score at 10:00:15.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2Fa80b03c0b8528bcd1c16781f5ff63b55%2F2023-12-06%2015.43.19.png?generation=1701858900034640&amp;alt=media"></p> <p>First, we create a window of 12 steps from the point of interest and sum the predictions of the 2nd level model within this window to calculate the <code>tolerance_12_score</code>.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F69c109dce35517f931ad7e0283518f8a%2F2023-12-06%2017.12.43.png?generation=1701858916775111&amp;alt=media"></p> <p>Similarly, we calculate <code>tolerance_36_score</code>, <code>tolerance_60_score</code>, …, for the respective tolerances used in the evaluation, and the sum of these scores is considered the score for the point of interest.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2Fad1f0aa666ff215b53f70def28d3077b%2F2023-12-09%200.24.03.png?generation=1702049078123306&amp;alt=media"></p> <p>We perform this calculation for all points, and for each series, we adopt the point with the highest score and add it to the submission DataFrame.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F4134da05af054e316ec7b2fe3ae8465a%2F2023-12-06%2017.52.48.png?generation=1701858973499437&amp;alt=media"></p></li> <li><p><strong>Score Recalculation</strong></p> <p>Next, we recalculate the score to determine the next point to be adopted. For example, suppose the point 09:59:15 was chosen.</p> <p>First, consider updating the <code>tolerance_12_score</code>. 
Events within tolerance 12 of the adopted point cannot also be matched by the next point to be submitted.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F499fa8273a05e66ba423d31938ba947f%2F2023-12-09%200.26.24.png?generation=1702049229824883&amp;alt=media"></p> <p>Therefore, when calculating the <code>tolerance_12_score</code> for the next point to be adopted, it's necessary to discount the prediction values within 12 steps of the currently adopted point.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F7e3bdf523856fd5740555f601aab6397%2F2023-12-09%200.27.40.png?generation=1702049278756714&amp;alt=media"></p> <p>Likewise, for <code>tolerance_36_score</code>, <code>tolerance_60_score</code>, …, we recalculate by discounting the prediction values within 36, 60, …, steps of the adopted point.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F6100837ce3d832f3def36f8e3fe33c8a%2F2023-12-09%200.28.25.png?generation=1702049320289322&amp;alt=media"></p> <p>With the updated scores calculated, we again adopt the highest scoring point for each series and add it to the submission DataFrame.</p> <p><img alt="" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F6d6d7df8c0ab7a45fa497fc292ea9ed9%2F2024-01-02%2011.20.40.png?generation=1704162066911536&amp;alt=media"></p></li> <li><p><strong>Creating Submissions</strong></p> <p>We repeat the above step 4 to extract a sufficient number of submission points, then compile these into a DataFrame to create the submission file.</p></li> </ol> <h3>Additional Techniques</h3> <p>Several other techniques were employed to make the post-processing work effectively:</p> <ul> <li>Normalize the predictions of the 2nd level model daily.</li> <li>When recalculating the score, calculate the difference from the previous score to reduce the computation.</li> <li>Speed up the above calculations using JIT compilation.</li> </ul> <h3>Details of the 2nd Level Model</h3> <ul> <li>The 2nd level model starts by averaging the 1st level model's predictions on a per-minute basis and then detecting peaks in these averages using <code>find_peaks</code> with a height of 0.001 and a distance of 8.</li> <li>Based on the detected peaks, chunks are created from the original time series, capturing 8 minutes before and after each peak. (Recall: 0.9845)<ul> <li>This window size (in steps) was crucial because the ratio of positive to negative examples changes depending on how many steps are included, affecting the accuracy of subsequent stages. Therefore, we tuned the number of steps for optimal performance.</li> <li>If chunks are connected, they are treated as a single chunk.</li></ul></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F907332d70498f46059a2935461f69941%2F2023-12-06%2019.42.55.png?generation=1701859390836314&amp;alt=media" alt=""></p> <ul> <li>For each chunk, we aggregated features from the 1st model's predictions and other features like anglez and enmo. These aggregated features were then used to train models such as LightGBM and CatBoost.</li> <li>Additionally, we treated each chunk as a sequence for training CNN-RNN, CNN, and Transformer models. 
As a result, we developed a model that could account for minute biases not fully addressed by the 1st level model.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F4b8ec9d1aecb6172e700a8ce485d226f%2F2023-12-06%2019.43.55.png?generation=1701859451118326&amp;alt=media" alt=""></p> <ul> <li>The predictions of the 2nd level model were sufficiently calibrated, so there was no need for further transformation.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2Fad0665bfff9828ebc6516a54e4e20276%2F2024-01-06%2016.46.51.png?generation=1704527223913793&amp;alt=media" alt=""></p> <p>Please refer to <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/444374" target="_blank">this discussion</a>.</p> <blockquote> <p>Did you adjust the model to add the skip connections?</p> </blockquote> <p>Yes. Additionally, I have added several more layers, but the underlying architecture remains unchanged.</p> <blockquote> <p>Also, did you keep the KLDivLoss, or did you use a different loss function?</p> </blockquote> <p>In my case, using BCEWithLogitsLoss worked better.</p> <p>I found your notebook to be very helpful. Thank you!!</p> <p>In our case, performance did not improve with post-processing alone. (w/o daily normalization of the score)</p> <p>The 2nd level model and daily normalization of the score were crucial elements in improving performance.</p> <p>Yes, what you're saying is correct.</p> <p>The reason we are only calculating for two points within a minute is to reduce computational complexity. Calculating for all candidate points (5, 10, …, 55 seconds) or using our method should yield the same score.</p> <p>For example, if the tolerance for calculating evaluation metrics is only 1 minute, then submitting any second within the same minute, such as 5, 10, …, 55 seconds, would result in the same matching ground truth, leading to the same score. Therefore, considering candidate points that represent, for instance, hh:mm:30, would be equivalent to optimizing for all points.</p> <p>However, in this calculation, tolerances like 7.5 minutes or 12.5 minutes, in 30-second intervals, are included. As a result, using hh:mm:15 and hh:mm:45 as representative points will yield different scores. Thus, adopting points at 15 seconds and 45 seconds as representative will be equivalent to optimizing for all points.</p> <p>Thank you!!</p> <ol> <li><p>This is determined by the evaluation metrics of this competition. The score is calculated based on whether events submitted fall within a specified tolerance of the Ground Truth events, irrespective of their proximity.</p> <ul> <li>All Ground Truth events must be at 0 seconds.</li> <li>The smallest tolerance width is in 30-second intervals.</li></ul> <p>Therefore, regardless of whether the submitted events are at 5, 10, …, 25 seconds, the same score is returned. Similarly, events at 35, 40, …, 55 seconds return the same score. 
To cover all candidate points, representative points are extracted at the 15-second and 45-second marks, and the score is computed for them.</p></li> <li><p>The target of the 2nd level model is a binary flag indicating whether a ground truth event exists at the point.</p></li> </ol> <blockquote> <p>Specifically, it inputs aggregated features around hh:mm:00 and learns to predict 1 only at the exact time of an event, otherwise 0.</p> </blockquote> <p>You can check our code <a href="https://github.com/sakami0000/child-mind-institute-detect-sleep-states-1st-place" target="_blank">here</a>.</p>
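<p>To make the tolerance-window score estimation from step 3 concrete, here is a minimal sketch (not the authors' code; it assumes an array of 2nd-level event probabilities on the 5-second step grid, nonzero only at minute boundaries):</p> <pre><code>import numpy as np

# Tolerances of the competition metric, in 5-second steps.
TOLERANCES = [12, 36, 60, 90, 120, 150, 180, 240, 300, 360]

def point_score(probs: np.ndarray, idx: int) -&gt; float:
    """Estimated gain of submitting position `idx`: for each tolerance, sum the
    2nd-level probabilities inside the window, then add up over tolerances."""
    total = 0.0
    for tol in TOLERANCES:
        lo, hi = max(0, idx - tol), min(len(probs), idx + tol)
        total += probs[lo:hi].sum()
    return total
</code></pre>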
Child Mind Institute - Detect Sleep States
203 Place Solution for the Detect Sleep States Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Overview of the Approach</h1> <p>It was an interesting time-series competition. I tried many approaches; some worked well, some did not.</p> <h1>Details of the submission</h1> <p>I want to highlight the great notebooks and code by <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>:</p> <ul> <li><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a></li> <li><a href="https://www.kaggle.com/code/tubotubo/cmi-code" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-code</a></li> </ul> <p>My solution is a neural network with peak-based post-processing. I had a silver-zone submission, but did not select it ((</p> <h1>Not working</h1> <p>Boosting models<br> TSFresh<br> Blending with public kernels</p>
Child Mind Institute - Detect Sleep States
26th solution - 1D Unet with 1D CNN, Hybrid CNN-Transformer and WBF
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to the competition host and Kaggle for another interesting challenge. A big congrats to all the winners!<br> As usual, I had a good time learning during this competition. In this post, I will briefly describe my solution.<br> All of my models learn to predict 3 targets:</p> <ul> <li>Lower-resolution heatmap of eventness (transition/critical points)</li> <li>Offset to event (from coarser grid coordinates to pixel-level event step)</li> <li>Binary segmentation mask for the sleep period</li> </ul> <p>Input is a window of 2 or 4 days, where the first and last 4 hours (near the border) are still included in the input sequence but ignored in the loss computation. The intuition is to avoid ambiguous near-border regions, since we need more surrounding context to judge them.<br> The longer the crop window, the higher the chance of overfitting, but the better the context for accurate predictions. Especially for the Hybrid CNN-Transformer with its much larger model capacity, the model quickly overfits when using a large window. I consider crop_len an important hyperparameter as well.<br> Segmentation mask prediction was used as an auxiliary target and was not utilized in the inference pipeline.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2F6e48e25ef889bac5268817b5557210a2%2Ftarget.png?generation=1701850406918104&amp;alt=media" alt="target visualization"><br> The offset prediction is then used as an "error correction" to provide sub-pixel-level accuracy despite large target strides (4, 8, 16, ..), inspired by Human Pose Estimation and Object Detection models. It looks nearly the same as CCRF (Combined Classification and Regression Form) used in <a href="https://github.com/HuangJunJie2017/UDP-Pose" target="_blank">UDP-Pose</a>.</p> <h2>Model</h2> <p>1D Unet with a hierarchical 1D encoder. Two types of encoders: pure 1D-CNN or Hybrid CNN-Transformer. Output predictions are at 1/4 stride of the original resolution (that is, one prediction per 3 steps, or 15 seconds).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2F4751841db413e7df537c31b08a23d845%2Farchitecture.png?generation=1701850445592192&amp;alt=media" alt="architecture"></p> <ul> <li>Tried various stem types; the Squeezeformer stem gives slightly better/more consistent CV</li> </ul> <h3>1D-CNN</h3> <p>Simply transform 2D-CNN image models into a 1D version, replacing all BatchNorm -&gt; LayerNorm</p> <ul> <li>Crop_len = 4 days</li> <li>Basic Block: MobilenetV3, expand_ratio = 2, constant dim = 96 across stages</li> <li>Depth = 8 with stage computation ratios [3, 3, 3, 3, 3, 3, 3, 3] (VGG style ?) 
</li> <li>ECA/SE attention</li> <li>backbone_dropout = 0.4, head_dropout = 0.4</li> </ul> <h3>Hybrid CNN-Transformer</h3> <ul> <li>Crop_len = 2 days</li> <li>SqueezeformerBlock with RoPE from top solutions in the <a href="https://www.kaggle.com/competitions/asl-fingerspelling/discussion/434485" target="_blank">previous ASL Fingerspelling competition</a></li> <li>Two architectures included in the final submission: hierarchical/non-hierarchical Transformer</li> </ul> <h2>Loss</h2> <p>Simple weighted combination of:</p> <ul> <li>BCE for heatmap</li> <li>MSE or L1Loss for offset</li> <li>BCE for segmentation</li> </ul> <p>Best weights: total_loss = 1.0 * heatmap + 0.2 * offset + 0.2 * segment</p> <h2>Training</h2> <ul> <li>FP16, Model EMA, batch size 16</li> <li>Max updates: 500 * 40 for CNN, 200 * 60 for Hybrid Encoder</li> <li>Optimizer: Lookahead AdamW + Cosine LR</li> </ul> <h2>Post processing</h2> <ul> <li>During training, track the validation score produced by simple point-based NMS.</li> <li>Submissions use a point-based version of <strong>Weighted Boxes Fusion (WBF)</strong> to ensemble multiple models (after NMS).</li> </ul> <h2>Results</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2Fb4310b112b80db10d35a33ae37e533c3%2Fresults.png?generation=1701865443994509&amp;alt=media" alt="local cv results"></p> <h1>What worked</h1> <ul> <li>Tuning sigma for the Gaussian-like heatmap target: very important and gives a significant boost. Best sigma = 5 * 12 = 60 steps (5 minutes); Gaussian width = 2 * 3 * sigma = 30 minutes. A minimal sketch of this target appears at the end of this post.</li> <li>Auxiliary segmentation loss: + ~0.01 AP</li> <li>Tuning loss weights</li> <li>Trying various architecture designs: depth, dim, norm, activation, stage computation ratios,..</li> <li>Simply downsampling the sequence length by reshaping: worked better than convolutional downsampling on the raw sequence. Perhaps it helps avoid overfitting, as mentioned in <a href="https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/416026" target="_blank">this solution</a></li> <li>Model EMA: significantly stabilizes training and allows training for more epochs; crucial for the Hybrid encoder</li> <li>Weighted Boxes Fusion: with a 2-model ensemble, +1.3 OOF AP from 77.0 -&gt; 78.3</li> </ul> <h1>What not worked</h1> <ul> <li>Feature engineering: tried various features (on a single fold) but no boost. One surprise: the single feature 'anglez_sign' (+1 or -1) alone could achieve CV=60.89</li> <li>Augmentation: only implemented and tried some simple methods (Flip, Noise, Resampling, TimeStretch, ..) on the last day; they also showed no boost and would need more tuning.</li> <li>Decoupled head: a decoupled head for objectness &amp; offset usually boosts Object Detection models, but in this case a simple single linear head with output_dim=5 worked best.</li> <li>Utilize segmentation prediction in post-processing: no clear boost</li> </ul> <p>Thanks bro! 😀</p>
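<p>As a minimal sketch of the Gaussian-like heatmap target described above (sigma = 60 steps = 5 minutes; the exact target shape is an assumption based on the write-up, not the author's code):</p> <pre><code>import numpy as np

def gaussian_heatmap(seq_len: int, event_steps, sigma: float = 60.0) -&gt; np.ndarray:
    """Place a Gaussian bump (peak 1.0) at each ground-truth event step."""
    xs = np.arange(seq_len, dtype=np.float32)
    target = np.zeros(seq_len, dtype=np.float32)
    for ev in event_steps:
        target = np.maximum(target, np.exp(-0.5 * ((xs - ev) / sigma) ** 2))
    return target
</code></pre>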
Child Mind Institute - Detect Sleep States
27th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I implemented this solution based on <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>'s source code. Thank you for sharing!</p> <h2>Input</h2> <ul> <li>anglez (normalized)</li> <li>enmo (normalized)</li> <li>hour_sin</li> <li>hour_cos</li> <li>minute_sin</li> <li>minute_cos</li> </ul> <h2>Model</h2> <ul> <li><p>Label</p> <ol> <li>awake<ul> <li>BCEWithLogitsLoss</li></ul></li> <li>onset, wakeup<ul> <li>BCEWithLogitsLoss</li></ul></li></ol> <ul> <li>loss = (awake loss) * (1 - 0.88) + (onset, wakeup loss) * 0.88</li></ul></li> <li><p>Data Augmentation</p> <ul> <li>GaussianNoise<ul> <li>min_amplitude: 0.01</li> <li>max_amplitude: 0.1</li></ul></li> <li><a href="https://www.kaggle.com/competitions/g2net-gravitational-wave-detection/discussion/275335" target="_blank">Zebra Mixup</a></li></ul></li> <li><p>encoder: UNet</p> <table> <thead> <tr> <th>feature extractor</th> <th>decoder</th> <th>downsample</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>LSTM</td> <td>UNet1D</td> <td>2</td> <td>0.7611</td> </tr> <tr> <td>CNN</td> <td>UNet1D</td> <td>2</td> <td>0.7628</td> </tr> <tr> <td>Spectrogram</td> <td>UNet1D</td> <td>2</td> <td>0.7463</td> </tr> <tr> <td>CNN</td> <td>Transformer</td> <td>6</td> <td>0.7480</td> </tr> <tr> <td>Spectrogram</td> <td>Transformer</td> <td>6</td> <td>0.7725</td> </tr> <tr> <td>2CNN+ time feature embedding</td> <td>UNet1D</td> <td>2</td> <td>0.7705</td> </tr> </tbody> </table></li> <li><p>2CNN + time feature embedding</p> <ul> <li>anglez and enmo are input separately into CNN feature extractors</li> <li>In this model, month, hour, and minute are input as categorical variables.</li> <li>month, hour, and minute are converted into embeddings and concatenated with the encoder output before being input to the decoder.</li></ul></li> </ul> <pre><code># reconstructed forward sketch of the 2CNN + time-feature-embedding model
# (the slicing indices and concatenation dims were lost in extraction
# and are assumptions)
month = self.month_embedding(month)
hour = self.hour_embedding(hour)
minute = self.minute_embedding(minute)
x1 = self.feature_extractor1(x[:, :x.shape[1] // 2, :])  # anglez channels
x2 = self.feature_extractor2(x[:, x.shape[1] // 2:, :])  # enmo channels
x = torch.cat([x1, x2], dim=1)
x = self.encoder(x).squeeze(1)
x = torch.cat([x, month, hour, minute], dim=2)
logits = self.decoder(x)
</code></pre> <h2>PostProcess</h2> <ul> <li>scipy.signal.find_peaks (after ensemble)<ul> <li>Chose the smallest possible value for height (0.003)</li> <li>distance optimized with optuna</li></ul></li> <li>overlap inference</li> </ul> <h2>Ensemble</h2> <ul> <li>4 folds * 6 models</li> <li>Ensemble with weighted output heatmaps</li> <li>Weights optimized with optuna (a tuning sketch follows at the end of this post)<ul> <li>ensemble (mean) -&gt; cv: 0.795</li> <li>ensemble (weighted mean) -&gt; cv: 0.7981 (public: 0.753 private: 0.809)</li></ul></li> </ul>
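<p>A minimal sketch (not the author's code; <code>heatmaps</code> and <code>score_fn</code> are assumed stand-ins for the per-model OOF heatmaps and the competition AP computed from a blended heatmap) of tuning the ensemble weights with optuna:</p> <pre><code>import numpy as np
import optuna

def tune_weights(heatmaps, score_fn, n_trials: int = 100):
    """Search blend weights for the output heatmaps that maximize the OOF score."""
    def objective(trial: optuna.Trial) -&gt; float:
        w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(heatmaps))])
        w = w / w.sum()  # normalize so the blend stays on the same scale
        blended = sum(wi * h for wi, h in zip(w, heatmaps))
        return score_fn(blended)

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)
    return study.best_params
</code></pre>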
Child Mind Institute - Detect Sleep States
29th Place Solution and Initial Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>29th Solution Overview:</h2> <h3>Model Architecture (from tubo's GitHub Repository):</h3> <ul> <li>2DCNN Feature Extractor -&gt; Unet -&gt; Unet Decoder</li> <li>LSTM Feature Extractor -&gt; 1DCNN -&gt; LSTM Decoder</li> </ul> <h3>Input Features:</h3> <ul> <li>enmo, anglez, anglez_diff, hour, weekday, and a noise flag.<ul> <li>Noise is flagged if identical (enmo, anglez) pairs occur within a 5-minute span in the same series_id.</li> <li>EDA indicated that the day before a holiday typically shows later onset times, prompting the inclusion of weekday and holiday features.</li></ul></li> <li>Input lengths of 5760 and 11520 steps were used, giving four models from the pairs of architecture and input length.</li> </ul> <h3>Target Size:</h3> <ul> <li>The target size is identical to the input size.</li> </ul> <h3>Training Targets:</h3> <ul> <li>Events with Gaussian soft labels (2 channels)</li> <li>Sleep flag (1 channel)</li> </ul> <h3>Inference Process:</h3> <ul> <li>Chunks are created by sliding by input_step/4.<ul> <li>For example, in the 5760 model, chunk steps begin from 0, 1440, 2880, etc.</li></ul></li> <li>All chunks are predicted and averaged by step.</li> <li>In CNN-based models, edge predictions are trimmed by 12.5%.</li> </ul> <h3>Postprocessing:</h3> <ul> <li>Use only the events prediction (the sleep flag is not used).</li> <li>Utilizes find_peaks from scipy.signal, and applies a prediction-score-weighted average to the steps near each detected peak (a minimal sketch appears at the end of this post).<ul> <li>For instance, if the candidate step is 5000, the weighted average of steps 4998-5002 is calculated.</li></ul></li> </ul> <hr> <p>The final solution is just a small modification of tubo's approach, so let me share my initial solution. (I spent most of my competition time on this approach…)</p> <h2>Initial Solution (LB 0.677):</h2> <h3>Based on a two-stage model:</h3> <ul> <li>Stage 1: Chunk Detection by binary classification <ul> <li>Segmenting the dataset into 1-hour chunks with weak labels to identify useful segments. Label 1 is added if an event occurs within a chunk.</li></ul></li> <li>Stage 2: L1 Regression <ul> <li>Pinpointing the event position within the candidate chunks from Stage 1, followed by post-processing.</li></ul></li> </ul> <h3>Challenges and Learning Points:</h3> <p>As a newcomer to neural networks, my approach lacked certain elements:</p> <ul> <li>Random Dataloader: I had no idea how to implement a random chunk dataloader. Each epoch started from the same position, leading to overfitting and a lack of robustness.</li> <li>Model Exploration: I used a very basic model (a single MLP, attention layer, LSTM, and head). Exploring more sophisticated models, as seen in tubo's work, would have been beneficial.</li> </ul> <hr> <h2>Concluding Thoughts:</h2> <p>This competition was a tremendous learning experience. My thanks to Kaggle for this opportunity, and to all the participants for their contributions and shared insights.</p>
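<p>A minimal sketch (function name assumed; radius 2 matches the 4998-5002 example above) of the prediction-score-weighted peak refinement:</p> <pre><code>import numpy as np

def refine_peak(scores: np.ndarray, peak: int, radius: int = 2) -&gt; float:
    """Refine a candidate step by a score-weighted average of its neighborhood."""
    lo, hi = max(0, peak - radius), min(len(scores), peak + radius + 1)
    idx = np.arange(lo, hi)
    w = scores[lo:hi]
    return float((idx * w).sum() / w.sum())
</code></pre>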
Child Mind Institute - Detect Sleep States
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this interesting challenge, and congrats to everyone who enjoyed it! It was a fun competition even though the data is very simple 1D data. I look forward to seeing the various solutions.</p> <h1>Overview of my solution</h1> <p>My pipeline consists of:</p> <ul> <li><strong>First Stage</strong><ul> <li>Preprocessing and feature engineering</li> <li>Event detection and sleep/awake classification by a 1D CNN (U-Net)</li> <li>Post-processing of the 1D CNN output (such as peak detection)</li></ul></li> <li><strong>Second Stage</strong><ul> <li>Use ML instead of post-processing that is difficult to adjust</li> <li>Rescoring by LGBM, which can consider the limitation of 2 events/day</li></ul></li> <li><strong>Third Stage</strong><ul> <li>Predict as many events as possible just by shifting the step of the base predictions above.</li> <li>Scoring the generated predictions by LGBM</li></ul></li> <li><strong>Ensemble</strong><ul> <li>combination of averaging and WBF ensemble</li></ul></li> </ul> <p>I think the 2nd stage is the most unique. <br> This model predicts the daily accuracy curve of the candidates received from the 1st stage. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2Fe307a878f4ca758835ff897f401a7008%2Foverview.png?generation=1701831890750504&amp;alt=media" alt=""></p> <p>Details are below.</p> <h1>1st Stage: Event Detection and Sleep/Awake Classification</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F08d268f6a008646b39b272594226df16%2F1ststage.png?generation=1701832322797003&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F94bdf0ce5785ba775f2a818c769d3a9d%2F1ststagepp.png?generation=1701832338347452&amp;alt=media" alt=""></p> <h1>2nd Stage: Rescoring the confidence considering the limitation of 2 events/day</h1> <h2>Concept</h2> <p>I applied the 2nd stage for the following 2 reasons.</p> <h3>1. Less than 2 events/day</h3> <p>The first-stage NN cannot consider the very important limitation of this competition that each event occurs at most twice a day. Which do you think is more important?</p> <blockquote> <p>A) The third candidate in a day with a confidence score of 0.20.<br> B) The first candidate in a day with a confidence score of 0.19.</p> </blockquote> <p>In many cases, the latter is more important than the former even though the score is lower.</p> <h3>2. Long-term features</h3> <p>NNs are really strong. However, they are not good at handling meta features and long-term cyclic features. For example:<br> “He may have been sleeping longer than usual. 
This might not be an actual wakeup.”<br> “This onset is relatively unclear compared to the other days.”</p> <h2>How to train the 2nd stage model</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2Ff6b0889e818ed24add13fd5d74580884%2F2ndstage_r.png?generation=1701861341160280&amp;alt=media" alt=""></p> <h1>3rd Stage: Add as many events as possible</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F049000b0c98a2e5f8bd5f2692a1c2986%2F3rdstage.png?generation=1701832658530598&amp;alt=media" alt=""></p> <h1>Ensemble</h1> <ul> <li>prepare 2 CNN models</li> <li>Average 10 (5 folds x 2 seeds) predictions at the 1st stage for each model</li> <li>Run the 2nd stage for each model</li> <li>WBF-like ensemble of the 2 models</li> <li>Run the 3rd stage<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F8f6a3cf08dfcf72067db9d7a3db5d849%2FEnsemble.png?generation=1701832240218774&amp;alt=media" alt=""></li> </ul> <h1>Ablation Study</h1> <ul> <li>CV Scores<ul> <li>1st stage: 0.826 (-0.019)</li> <li>add 2nd stage: 0.832 (-0.012)</li> <li>add third stage: 0.842 (-0.002)</li> <li>add model ensemble (2 models): 0.844 (baseline, final submit)</li></ul></li> </ul> <h3>Inference</h3> <p><a href="https://www.kaggle.com/code/kmat2019/cmi-sleep-2ndplace" target="_blank">https://www.kaggle.com/code/kmat2019/cmi-sleep-2ndplace</a></p> <h3>Training</h3> <p><a href="https://www.kaggle.com/code/kmat2019/cmisleep-training-sample-2ndplace-kmat" target="_blank">https://www.kaggle.com/code/kmat2019/cmisleep-training-sample-2ndplace-kmat</a></p> <p>Thank you Chris for mentioning the other results as well :) Congratulations on your solo gold too <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>! </p> <blockquote> <p>Do you have any idea how much this boost CV and LB?</p> </blockquote> <p>I tested your post-processing, but I couldn't make it work for my model. So I compared it with my former post-processing, which decreases the scores of the other candidates if the top-1 score is high enough. The CV score increased by 0.006 with the 2nd stage.</p> <blockquote> <p>When you say "Top-K Accuracy", are you referring to the 10 thresholds? And "Accuracy = number of true positive thresholds satisfied / 10"?</p> </blockquote> <p>Really clever! I also thought the expected value (the average over the 10 predictions) would be the best in theory. Yet it proved that it didn't affect the results much. A threshold of 0.5 is good enough; averaging 0.3, 0.5, and 0.7 is slightly better, and averaging 10 predictions is almost the same as the 3-prediction average.</p> <p>I used all events, as removing them didn't affect the score much. And I finally added 60-minute and 1-day offset predictions after the 3rd stage with very low scores to cope with the annotation errors, though the increase in CV score was very small.</p> <p>Thank you. Congratulations to you too!</p> <p>I could not account for the 15-minute pattern as explicitly as the 1st place solution. I just added predictions offset from the base prediction by 2, 4, 8, 16 minutes, believing that they would cover the small error due to the 15-minute pattern.<br> In the 3rd stage, I labeled an added prediction positive if it improved the score. 
As the inputs to the model include features of the score distribution and the step values, it might be slightly effective for the 15-minute pattern.</p> <p>Thank you for the questions but this is not "few" questions for me. 😂</p> <ul> <li><p>How did you handle labeling errors?<br> Are you talking about NaNs in the ground truth? I added a binary mask to the inputs of the 1st stage: if the part is repeated on another day, the value is 1, else 0.</p></li> <li><p>Is it a cyclical representation of time of day or of step % (24 * 60 * 12)?<br> Exactly.</p></li> <li><p>Does it work better than 5 targets for each event?<br> I've not tested 10 targets. But I prefer the current model, as the sleep/awake mask is easier to learn than the sparse event label. Learning the former as a sub-task can help learn the latter.</p></li> <li><p>apply any weights for the event or state (sleep/awake) targets?<br> Yes. The loss weights of the 5 event predictions are a little different (but this did not affect the score much).</p></li> <li><p>L1 postprocessing, do you derive step candidates separately for each event target or do you consider them all in some way to derive each step candidate?<br> I can't understand your question. I applied the peak detection to all predictions to find the candidates, then separated them into onset/wakeup.</p></li> <li><p>What criteria did you use to break data into nights for L2? Did you break data at 2pm, 12pm, or some other time of day? Did you use a different criteria for the two types of events (onset / wakeup)?<br> approx. 2pm.</p></li> <li><p>In addition to L1 predictions and series aggregated features, what type of features did you use in L2?<br> Also the inputs to the 1st stage. The sum of scores over the candidate itself is the most important feature.</p></li> <li><p>I understood the target for the L2 model is either 0 or 1 based on whether the TopN accuracy is &gt; 0.5. Is that right?<br> Right. In fact, I tried other cases of 0.3 or 0.7, and the final submission was the average of them. (A minimal sketch of this target construction follows below.)</p></li> <li><p>How many offsets on L2 predictions did you use in L3? just -2 and +2?<br> -16, -8, -4, -2, 2, 4, 8, 16 minutes </p></li> <li><p>What target did you use for the L3 model?<br> In the 3rd stage, I labeled an added prediction positive if it improved the score. </p></li> </ul>
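<p>Based on the Top-K accuracy Q&amp;A above, here is a minimal sketch (not the author's code; names are illustrative) of how such a 2nd-stage binary target could be built from the metric's 10 tolerances:</p> <pre><code>import numpy as np

# Tolerances of the competition metric, in 5-second steps.
TOLERANCES = [12, 36, 60, 90, 120, 150, 180, 240, 300, 360]

def candidate_target(cand_step: int, gt_steps: np.ndarray, thr: float = 0.5) -&gt; int:
    """Accuracy = fraction of tolerances within which the candidate matches the
    nearest ground-truth event; the target is that accuracy binarized at `thr`."""
    if len(gt_steps) == 0:
        return 0
    err = np.abs(gt_steps - cand_step).min()
    accuracy = np.mean([err &lt;= tol for tol in TOLERANCES])
    return int(accuracy &gt; thr)
</code></pre>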
Child Mind Institute - Detect Sleep States
33th Place Solution - 1D-Unet with GRU, Detect repeating signals
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we express our gratitude to the hosting team for organizing this intriguing competition.</p> <p>This was my first chance to tackle the challenge of event detection, and I gained valuable insights from participating in this competition.</p> <h1>Team Member</h1> <p><a href="https://www.kaggle.com/sugupoko" target="_blank">@sugupoko</a></p> <h1>Overview and Summary</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8448924%2F15564725296900e1f17b99106f400e06%2Fcmi_overview.png?generation=1701834611847219&amp;alt=media" alt=""><br> We utilized <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>'s excellent notebook and code, implementing the following improvements. </p> <ul> <li>Ensemble (Multi Duration, Multi Encoder Model)</li> <li>Insert BiGRU into 1D-Unet</li> <li>Parameter tuning for scipy.signal.find_peaks</li> <li>Detect repeating signals</li> </ul> <p>I will explain these approaches in the order in which we tackled them in this competition.</p> <h1>Ensemble</h1> <p>In this event detection task, we speculated that accurate predictions could be achieved by ensemble averaging the outputs of multiple models. Therefore, we applied the following ensemble approach:</p> <h3>Multi Duration</h3> <p>We trained models with input durations of 8 hours (5760 steps), 16 hours (11520 steps), and 24 hours (17280 steps). During inference, we took the average of the outputs from these models.</p> <h3>Multi Encoder Model</h3> <p>For the encoder model, we adopted timm-gernet-l, resnext101, and se-resnet50, and averaged the outputs of these three models, similar to the duration ensemble.</p> <h1>Insert BiGRU into skip connection of 1D-Unet</h1> <p>With the idea from <a href="https://www.kaggle.com/sugupoko" target="_blank">@sugupoko</a>, we inserted BiGRU into the skip connections of the 1D-Unet decoder. We believe that this improved the prediction accuracy in the temporal dimension.</p> <h1>Parameter tuning for scipy.signal.find_peaks</h1> <p>While conducting several experiments, we noticed that the parameters of the <code>scipy.signal.find_peaks</code> function, used in post-processing, have an impact on accuracy. <br> Initially, we lowered the <code>height</code> (score_th) to 0.001, as setting a lower threshold is advantageous in terms of the evaluation metric for this competition. Subsequently, we adjusted the <code>distance</code> to 65. This parameter determines the minimum distance between peaks, and we are confident that setting an appropriate value helped reduce false positives.</p> <h1>Detect repeating signals</h1> <p>While analyzing the data, we noticed that some input data contained unnatural periodic signals. In many cases, these signals lacked event labels and could potentially be a cause of false positives.<br> We didn't understand the reason for the presence of such signals, but we found that they occurred approximately every 24 hours (17280 steps). 
Therefore, we attempted to remove these repeating signals using a rule-based approach.<br> We created a function to detect repeating signals as follows. We segmented the anglez data at intervals of 17280 steps, calculating the correlation coefficient between the previous and current segments. If this correlation coefficient exceeded a certain threshold, the segment was determined to be a repeating signal. By invalidating the model output for the detected repeating segments, we were able to significantly reduce false positives (a usage sketch appears at the end of this post).</p> <pre><code># reconstruction: the function name, file path, and cor_th default were lost
# in extraction and are assumptions; cycle_length=17280 comes from the text
def detect_repeating_signals(series_id, cycle_length=17280, cor_th=0.9, phase='test'):
    anglez = np.load(f"{phase}/{series_id}_anglez.npy")  # assumed path
    valid_area = np.ones(anglez.shape[0])
    x1_list = list(range(0, anglez.shape[0], cycle_length))
    pre_anglez = np.zeros(cycle_length)
    for x1 in x1_list:
        x2 = x1 + cycle_length
        cur_anglez = anglez[x1:x2]
        if len(cur_anglez) &lt; cycle_length:
            cur_anglez = np.pad(cur_anglez, (0, cycle_length - len(cur_anglez)), constant_values=0)
        correlation_coefficient_anglez = np.corrcoef(pre_anglez, cur_anglez)[0, 1]
        if correlation_coefficient_anglez &gt; cor_th:
            valid_area[x1 - cycle_length:x2] = 0
        pre_anglez = cur_anglez
    return valid_area
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8448924%2F901ddaca907928fafc411dd08c7163e3%2Fdetect_repeating_signals.png?generation=1701834185197721&amp;alt=media" alt=""></p> <h1>Transition of LB Scores:</h1> <ul> <li>Baseline: 0.707</li> <li>(5-fold ensemble: 0.730)</li> <li>Multi duration ensemble: 0.745</li> <li>Multi encoder model ensemble: 0.746</li> <li>Insert BiGRU into 1D-Unet: 0.756</li> <li>Parameter tuning for scipy.signal.find_peaks: 0.759</li> <li>Detect repeating signals: 0.764</li> </ul> <h1>Things that didn't go well:</h1> <ul> <li>Utilize sleep state prediction</li> <li>Ensemble longer and shorter durations</li> </ul>
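<p>As a usage sketch (variable names assumed), the returned mask can simply zero out the event predictions inside the detected repeating segments:</p> <pre><code># invalidate model outputs inside detected repeating segments
valid_area = detect_repeating_signals(series_id)
onset_pred = onset_pred * valid_area
wakeup_pred = wakeup_pred * valid_area
</code></pre>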
Child Mind Institute - Detect Sleep States
34th Place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the organizers and Kaggle staff for organizing the competition. I learned a lot through the competition.</p> <h1>Summary</h1> <ul> <li>based on tubo213's repository</li> <li>Data cleansing</li> <li>Ensemble models</li> <li>Reduce FP by post-processing</li> </ul> <h1>Data cleansing</h1> <p>When I visualized <code>train_series.parquet</code> and <code>train_events.csv</code> by series_id as shown below, I noticed that the label data was dirty.</p> <p>Labels were not assigned where they should have been, or were assigned where they should not have been (where the organizer may have filled in values for missing data).</p> <p>There were many areas of concern, such as labels that were assigned at times that were probably tens of minutes or more off from the subjects' actual onset/wakeup times.</p> <p>We thought that such dirty label data would be an obstacle in training the model, so we examined the data for every series_id and cleansed the label data by shifting label positions or adding missing labels.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4472591%2F54084323b61ae128f11ae023af23f952%2F1.png?generation=1701821199709357&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4472591%2F094a8b23da8b1a11b44d8518081a552d%2F2.png?generation=1701821213143683&amp;alt=media" alt=""></p> <h1>Model</h1> <ul> <li>Unet</li> <li>encoder<ul> <li>maxvit_tiny</li> <li>eca_nfnet_l1</li> <li>dm_nfnet_f0</li></ul></li> <li>Augment<ul> <li>Mixup</li> <li>Spec_augment</li></ul></li> <li>Label_smoothing (sleep)</li> <li>Sliding inference</li> <li>Not using both ends of the inference</li> </ul> <h1>Post-processing</h1> <p>When I visualized the data, I noticed that some of the data showed unnatural periodic patterns, such as <code>1e6717d93c1d</code>.</p> <p>I considered these unnatural periodic patterns to be invalid because the organizers had filled in common values for each time of the day for periods when no data were measured.</p> <p>The model may output onset/wakeup predictions within these periodic patterns, but they are considered false positives.</p> <p>We post-processed the model output to remove predictions within these periodic patterns.</p> <p>The criteria for periodic patterns are as follows (a minimal sketch appears at the end of this post):</p> <ul> <li>When anglez and enmo are shifted back and forth by 17280*n steps, if both anglez and enmo values are consistent, the prediction is considered invalid.</li> <li>However, if they do not match for two or more consecutive steps, they are not judged invalid (to account for coincidental matches)</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4472591%2F2f9154d75cc09c12501a037c8fe0b2d6%2F3.png?generation=1701821229161164&amp;alt=media" alt=""></p> <p>PL 0.707→0.713</p>
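<p>A minimal sketch (not the authors' code; names assumed, and the exception for two or more consecutive mismatches is omitted for brevity) of flagging steps whose anglez and enmo exactly match their values one day away:</p> <pre><code>import numpy as np

DAY = 17280  # steps per 24 hours at 5-second resolution

def periodic_mask(anglez: np.ndarray, enmo: np.ndarray) -&gt; np.ndarray:
    """True where both signals exactly match themselves one day earlier or later."""
    n = len(anglez)
    mask = np.zeros(n, dtype=bool)
    for shift in (-DAY, DAY):
        src = np.arange(max(0, -shift), min(n, n - shift))
        dst = src + shift
        same = (anglez[src] == anglez[dst]) &amp; (enmo[src] == enmo[dst])
        mask[src] |= same
    return mask
</code></pre>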
Child Mind Institute - Detect Sleep States
39th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>excellent teammates</p> <h1>Here is the 39th Place Detailed Solution for Child Mind Institute - Detect Sleep States</h1> <p>Thanks to Kaggle for hosting this meaningful competition. Thanks to all the Kagglers for your discussions and shared perspectives. This was our first time participating in a formal tabular competition, and we've learned a lot from the experience.</p> <p><strong>Team Avengers will always continue the journey on Kaggle.</strong></p> <p>Main GitHub Repo: <a href="https://github.com/lullabies777/kaggle-detect-sleep" target="_blank">Here</a></p> <p>PrecTime GitHub Repo: <a href="https://github.com/Lizhecheng02/Kaggle-Detect_Sleep_States" target="_blank">Here</a></p> <h2>Here is the Detailed Solution</h2> <h3>Baseline Code</h3> <p>Here, we need to thank <a href="https://www.kaggle.com/tubotubo" target="_blank">tubotubo</a> for providing the baseline code. We didn't join the competition from the very beginning, so this baseline code provided us with some ideas and basic model structures.</p> <h3>Dataset Preparation</h3> <ul> <li><p>We didn't use any methods to handle the dirty data, which might be one reason why we couldn't improve our scores further.</p></li> <li><p>On the evening before the competition ended, my teammate found this <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/456177" target="_blank">discussion</a>. Consequently, we attempted to clean the data by removing the data for the days where the event was empty. However, due to the time limitation, we didn't make significant progress. </p></li> <li><p>We believe data cleaning should be helpful, because the model using this method showed a smaller difference between scores on the public and private leaderboards.</p></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10671485%2Fec42c865fae37d8815cc81fff8521184%2F_20231205195307.png?generation=1701903784520525&amp;alt=media" alt="Comparison"></p> <h3>Feature Engineering</h3> <p>We generated new features using shift, difference, and rolling-window functions. </p> <p>In the final set we utilized 24 rolling features in addition to the original 4, for a total of 28 features. The use of new features did not significantly improve the model's score, which was somewhat unexpected for us.</p> <p>Code: <a href="https://github.com/lullabies777/kaggle-detect-sleep/blob/main/run/prepare_data.py" target="_blank">Here</a></p> <pre><code># reconstructed feature-generation expressions: the source column name and
# alias strings were lost in extraction; "anglez" is used for illustration
*[pl.col("anglez").diff(i).alias(f"anglez_diff_{i}") for i in range(diff_start, diff_end, diff_step)],
*[pl.col("anglez").shift(i).alias(f"anglez_shift_{i}") for i in range(shift_start, shift_end, shift_step) if i != 0],
*[pl.col("anglez").rolling_mean(window_size).alias(f"anglez_rolling_mean_{window_size}") for window_size in window_steps],
*[pl.col("anglez").rolling_min(window_size).alias(f"anglez_rolling_min_{window_size}") for window_size in window_steps],
*[pl.col("anglez").rolling_max(window_size).alias(f"anglez_rolling_max_{window_size}") for window_size in window_steps],
*[pl.col("anglez").rolling_std(window_size).alias(f"anglez_rolling_std_{window_size}") for window_size in window_steps]
</code></pre> <h3>Wandb sweep</h3> <p>Wandb sweep is a hyperparameter optimization tool provided by the Wandb machine learning platform. 
It allows automatic exploration of different hyperparameter combinations to enhance a model's performance.</p> <p>Implementation Code: <a href="https://github.com/lullabies777/kaggle-detect-sleep/tree/main/run/sweep" target="_blank">Here</a></p> <h3>Models</h3> <ul> <li>Used overlap - To enhance accuracy in predicting sequence edges, we utilized overlap by using a 10000-length sequence to predict an 8000-length sequence.</li> <li>Implementation of the PrecTime Model - You can find details in this <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/459616" target="_blank">discussion</a>. We also made modifications to its structure, including the addition of transformer architecture and residual connection structures. The experiments showed that these modifications contributed a certain improvement to the model's performance.</li> </ul> <h3>Post-processing Trick</h3> <p>We used a dynamic programming algorithm to deal with the overlap problem. </p> <p>Principle behind this method: To achieve a high AP (Average Precision), three main criteria need to be met: Firstly, the predicted label should be sufficiently close to the actual label. Secondly, within a positive or negative tolerance range around the actual label, there should only be one predicted point. Thirdly, the score of other predicted points outside the actual label range should be lower than those within the range.</p> <pre><code># best-effort reconstruction: control-flow keywords and one list name were
# lost in extraction; `last` (name assumed) tracks the latest kept peak index
# per position, and `stack` holds the currently selected peaks
def get_results_slide_window(pred, gap):
    scores = list(pred)
    stack = [0]  # seed with the first index (initialization reconstructed)
    last = [-1] * len(scores)
    last[0] = 0
    for i in range(1, len(scores)):
        if i - stack[-1] &lt; gap:
            if scores[i] &gt;= scores[stack[-1]]:
                stack.pop()
                if i - gap &gt;= 0:
                    if stack:
                        if last[i - gap] != stack[-1]:
                            if last[i - gap] - stack[-1] &lt; gap:
                                stack.pop()
                                stack.append(last[i - gap])
                            else:
                                stack.append(last[i - gap])
                    else:
                        stack.append(last[i - gap])
                stack.append(i)
        else:
            stack.append(i)
        last[i] = stack[-1]
    return stack
</code></pre> <h3>Ensemble</h3> <p>Our final ensemble method essentially involved averaging different outputs. With post-processing and this ensemble method combined, our results generally follow the pattern that the more models we use, or the greater the variety of models, the higher the score.</p> <p>Submission Code1: <a href="https://www.kaggle.com/code/lizhecheng/detect-sleep-states-ensemble-lb-0-761-pb-0-804" target="_blank">Here</a><br> Submission Code2: <a href="https://www.kaggle.com/code/lizhecheng/detect-sleep-states-ensemble-lb-0-75-pb-0-8" target="_blank">Here</a></p> <p>Our submissions:</p> <table> <thead> <tr> <th>Models</th> <th>LB Score</th> <th>PB Score</th> <th>Selected</th> </tr> </thead> <tbody> <tr> <td><code>2 * 5 folds PrecTime + 1 * 5 folds LSTM-Unet</code></td> <td><code>0.75</code></td> <td><code>0.8</code></td> <td><strong>Yes</strong></td> </tr> <tr> <td><code>2 * 5 folds PrecTime + 2 * 5 folds LSTM-Unet + 10 single models</code></td> <td><code>0.759</code></td> <td><code>0.803</code></td> <td><strong>Yes</strong></td> </tr> <tr> <td><code>1 * 5 folds PrecTime + 1 fold LSTM-Unet + 10 single models</code></td> <td><code>0.761</code></td> <td><code>0.804</code></td> <td><strong>No</strong></td> </tr> <tr> <td><code>1 * 5 folds PrecTime + 1 * 5 folds LSTM-Unet + 10 single models</code></td> <td><code>0.759</code></td> <td><code>0.803</code></td> <td><strong>No</strong></td> </tr> </tbody> </table> <h3>Other</h3> <p>Everyone is welcome to check our GitHub code; we look forward to any discussions.</p> <h3>Conclusion</h3> <ul> <li>Data Cleaning.</li> <li>Generate New Features. 
</li> <li>Use Architectures Like Conv1d, RNN, GRU, LSTM or Transformer.</li> <li>Write Post-processing for the Specific Metric.</li> </ul> <h2>Thanks to all of my teammates for working together to gain this Silver Medal.</h2>
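<p>To make the overlap idea above concrete, here is a minimal sketch of predicting a central 8000-step chunk from a 10000-step window so that chunk edges always get context on both sides. The names (<code>model</code>, <code>series</code>) and the edge-padding choice are illustrative assumptions, not the exact code from the repo.</p> <pre><code>import numpy as np

def predict_with_overlap(model, series, win=10000, chunk=8000):
    # Pad so every predicted chunk has (win - chunk) / 2 steps of context per side.
    pad = (win - chunk) // 2
    padded = np.pad(series, (pad, pad), mode="edge")
    outputs = []
    for start in range(0, len(series), chunk):
        window = padded[start:start + win]
        preds = model(window)              # assumed: one prediction per input step
        outputs.append(preds[pad:pad + chunk])
    return np.concatenate(outputs)[:len(series)]
</code></pre>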
Child Mind Institute - Detect Sleep States
3rd place solution - GRU, UNET and LGB!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>3rd place solution - GRU, UNET and LGB!</h1> <p>Congratulations to Kaggle and the host for this competition; it has been a very interesting and fun competition. Before we start, special thanks to <a href="https://www.kaggle.com/kyakovlev" target="_blank">@kyakovlev</a>; it has been exceptional to be able to work with you and learn from you.</p> <h2>Structure of our approach</h2> <p><strong>1. Preprocessing</strong><br> <strong>2. Training (GRU + UNET + LGB)</strong><br> <strong>3. Inference and postprocess</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2F970876bd40cd74ade6b68a7ea415c349%2FCaptura%20desde%202023-12-06%2001-35-18.png?generation=1701823419282889&amp;alt=media" alt=""></p> <h3>1. Preprocessing</h3> <p>For the final submission, our GRU + UNET models work with just 7 features. We tried to add more features, but on local validation we did not find any that would work.</p> <h4>1.1. <strong>How do we structure the series before introducing them into our model?</strong></h4> <p>We decided to divide the series into one-day sequences and reduce the granularity from 5 secs to 30 secs. Therefore we had sequences of length 2880 in which there was normally one onset and one wakeup. Here is a standard input for our model.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2F4d5bb92e188eb1a4e751d18505ffa7ff%2Fstandard_series.png?generation=1701879825870419&amp;alt=media" alt=""></p> <h4>1.2. <strong>Key points and features in preprocessing</strong></h4> <ul> <li><p>Make anglez absolute; this was giving +0.002 on local validation.</p></li> <li><p>For the only two variables we had (anglez and enmo), we tried to find useful aggregations (diff, mean, median, skew, etc…), but the only thing that seemed to work was the standard deviation (<strong>anglez_abs_std</strong> and <strong>enmo_std</strong>).</p></li> <li><p>Detecting noise was another of the key points of our solution. We realized that when exactly the same value is repeated in the same series at the same hour, minute and second, this was basically noise. Here our detected noise is the red line.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2F4926d54727c25100d547d5656cbc3667%2F04f547b8017d.png?generation=1701879870683731&amp;alt=media" alt=""></p></li> <li><p>To incorporate temporal information into the model, we decided to add 2 frequency encoding variables (one for onsets and one for wakeups) at the hour-minute level. </p></li> <li><p>A good augmentation trick was to reverse all the series during training; this allowed us to have more sequences and increased our local validation by 0.01.</p></li> </ul> <h4>1.3. <strong>Small ablation study of a single model in the middle of the competition</strong></h4> <p>The public leaderboard really kept making us dizzy throughout the competition; we thought there was a bug in our code. 
So halfway through the competition we did a study of how each variable was performing on the leaderboard; these are the results:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2F43c76444fe8e5fc6a312bced2d320ddb%2FCaptura%20desde%202023-12-06%2017-26-09.png?generation=1701880005278625&amp;alt=media" alt=""></p> <h3>2. Training</h3> <p>Our best model was GRU, which alone gave us a local validation of <strong>0.835</strong></p> <h4>2.1. <strong>GRU and UNET -- training details</strong> (we trained both similarly)</h4> <ul> <li><strong>Augmentation</strong>: just invert series</li> <li><strong>Number of features</strong>: 7 (anglez_std, enmo_std, noise_day_before, noise_day_after, noise, hour_min_onset, hour_min_wakeup)</li> <li><strong>Target</strong>: 2 outputs (one for onsets and other for wakeups)</li> <li><strong>Target transformation</strong>: Add two steps back and one forward. (0,0,0,0,1,0,0,0 -&gt; 0,0,1,1,1,1,0,0) — see the sketch at the end of this post.</li> <li><strong>loss</strong> : cross-entropy</li> </ul> <h4>2.2. <strong>Leaderboard variability</strong></h4> <p>Now that the competition is over, we can observe that the private LB is more correlated with our local validation than the public one. The public LB had an instability that was consuming us, because we really thought we had a bug somewhere in our code.<br> Our best private submission is this ensemble: (GRU*0.68 + UNET * 0.2 + LGB*0.12)<br> CV: ~<strong>0.840</strong> / public LB <strong>0.784</strong> / private LB <strong>0.848</strong></p> <h4>2.3. <strong>LGB part</strong></h4> <p>We had been trying to make LGB competitive since we formed a team. We finally managed to get it to contribute something to the ensemble the day before the competition ended, so I think in this part we were just a few days short of further refining our LGB model.</p> <p>Anyway, in our last submission we can see how it was giving us <strong>+0.002</strong> on the private LB.<br> The maximum we achieved with a single LGB was <strong>0.757</strong> public / <strong>0.82</strong> private.</p> <h3>3. Inference and postprocess</h3> <p>For the best submission that we finally chose, we made an ensemble of several models (<strong>8</strong> GRU and <strong>2</strong> UNET).<br> Running time for inference was ~<strong>1.5</strong> hours.</p> <p>For post-processing we tried several things, but basically what we did was keep the peaks at a certain minimum distance, optimized for our predictions.</p> <h3>4. Conclusions</h3> <p>Surely there are things we are forgetting, so don't hesitate to ask anything and I'll update the post.</p> <p>It has been a type of problem that could be approached in many ways, as we are seeing in the different solutions. We would like to mention that it has been a really challenging competition where it was a constant battle with other teams that pushed us to the limit (everything was very tight). Apart from that, I hope we can help your mission and be useful to the host.</p> <p>Github code is here: <a href="https://github.com/FNoaGut/child-mind-institute-detect-sleep-states-3rd-place-solution" target="_blank">https://github.com/FNoaGut/child-mind-institute-detect-sleep-states-3rd-place-solution</a></p> <p>Many thanks <a href="https://www.kaggle.com/snnclsr" target="_blank">@snnclsr</a>! </p> <p>Thanks Javier!!!</p> <p>Thanks Alejandro!</p> <p>But in reality we only did the rolling_mean for the score, not for the steps. 
So it's a little different.</p> <pre><code>### Detecting noise through same values in same st_hour and st_step for same series
##############################################################
# repeat: number of times that the value appears in the series
# noise: binary feature is 1 when repeat &gt; 1
# group_len: number of consecutive values with noise
# noise_removal: binary feat is 1 when noise == 1 and group_len &gt; 360
# (column names below follow the description above)

temp_df = (series.group_by(["series_id", "st_hour", "st_step", "anglez"])
                 .agg(pl.col("anglez").count().alias("repeat")))
series = series.join(temp_df, on=["series_id", "st_hour", "st_step", "anglez"], how="left")
series = series.with_columns([
    pl.when(pl.col("repeat") &gt; 1).then(1).otherwise(0).cast(pl.Int32).alias("noise"),
])
print("noise feature created")
</code></pre> <p>Thank you Chris! Very nice to read this from you.</p> <p>Our way of ensembling results is very simple, since it is normally a pure average of the raw predictions of each model. Only at the end of the competition did we start to make some submissions with a weighted ensemble.<br> Our submission that gives 0.846 private is a pure average of 8 GRU models and 2 UNET.</p> <p>After making this ensemble we carry out post-processing, where we do:</p> <ol> <li>Rolling_mean(center=True) to smooth the predictions </li> <li>Take the highest predictions every certain distance (this allows us to eliminate false positives). This distance is very optimizable depending on the model; for example, for LGB a greater distance was better than for NN models. So basically the post-processing parameters were something that we were constantly optimizing depending on the model.</li> <li>For each series_id, event and night we take the 20 largest predictions after these 2 steps.</li> </ol> <p>I attach an image where you can see some raw predictions for one series_id and a specific night. You can also see the result of our post-processing (the red dots).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2Fa2725411bf163205fbe316d4f98b5750%2FCaptura%20desde%202023-12-11%2009-37-14.png?generation=1702284315791504&amp;alt=media" alt=""></p> <p>We decided to create sequences of days starting at 17:00 local time. If a day was not complete at the beginning or the end, we added padding.</p> <p>Thank you! Yes, basically once we have made the predictions with GRU, UNET and LGB, we make an ensemble where we multiply each of them by a weight. We adjust these weights manually based on local validation.</p> <p>Thank you! Same as short series. We cut them into sequences of days.</p>
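<p>To illustrate the target transformation described above ("add two steps back and one forward"), here is a minimal NumPy sketch; the function name is ours, not from the team's repo.</p> <pre><code>import numpy as np

def dilate_targets(y, back=2, forward=1):
    # Turn a single event spike into a small plateau:
    # 0,0,0,0,1,0,0,0 -&gt; 0,0,1,1,1,1,0,0
    out = y.copy()
    for idx in np.flatnonzero(y == 1):
        out[max(idx - back, 0): idx + forward + 1] = 1
    return out

print(dilate_targets(np.array([0, 0, 0, 0, 1, 0, 0, 0])))
</code></pre>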
Child Mind Institute - Detect Sleep States
40th Place Solution - Improving prediction with FFT-based data cleaning
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a> for sharing your high-quality code. I am going to share my solution.<br> This is my first post on a solution, so I apologize if I'm being rude in any way. Although the content is poor, I hope it will be of help to you.</p> <h3>1. Data preprocessing</h3> <ul> <li>Null Value Removal: Removed rows with null timestamp values in the dataset to prevent misinterpretation of the analysis and reduce uncertainty in model training.</li> <li>Event count consistency check: Filtered out unmatched data at the beginning ('onset') and end ('wakeup') of events, increasing data integrity and analysis reliability.</li> </ul> <h3>2. Feature Engineering</h3> <p>Cleaning features using FFT: Referring to <a href="https://www.kaggle.com/code/jjinho/cmi-sleep-detection-fast-fourier-transformation" target="_blank">cmi-sleep-detection-fast-fourier-transformation</a>, data was cleaned using different thresholds (98.75, 99.0, 99.5, 99.75, 99.9) for enmo and anglez. This generated FFT-based features (e.g. fft_9875); a sketch of this idea appears at the end of this post.</p> <h3>3. Model</h3> <p>For cross-validation, I use a 5-fold GroupKFold.<br> I built seven different models based on <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940" target="_blank">@tubotubo's code</a>, each using a different feature set and network architecture (LSTM, GRU, UNet, Transformer, etc.).</p> <table> <thead> <tr> <th>model</th> <th>CV</th> <th>Public (5-fold)</th> <th>Private (5-fold)</th> </tr> </thead> <tbody> <tr> <td>FeatureExtractor(LSTM(dim=64)+GRU(dim=64)) + UNet + UNet1DDecoder with no fft feature</td> <td>0.74373</td> <td>0.746</td> <td>0.791</td> </tr> <tr> <td>FeatureExtractor(LSTM(dim=64)+GRU(dim=64)) + UNet + UNet1DDecoder with fft_9900, fft_9950, fft_9975, fft_9990</td> <td><strong>0.75250</strong></td> <td>0.727</td> <td>0.791</td> </tr> <tr> <td>FeatureExtractor(LSTM(dim=64)+GRU(dim=64)) + UNet + UNet1DDecoder with fft_9875</td> <td>0.76012</td> <td>0.747</td> <td>0.795</td> </tr> <tr> <td>FeatureExtractor(TransformerFeatureExtractor(dim=64)) + UNet + UNet1DDecoder with fft_9875</td> <td>0.74572</td> <td></td> <td></td> </tr> <tr> <td>FeatureExtractor(LSTM(dim=128)+GRU(dim=128)) + UNet + UNet1DDecoder with fft_9875</td> <td><strong>0.76400</strong></td> <td>0.739</td> <td>0.801</td> </tr> <tr> <td>FeatureExtractor(LSTM(dim=128)+GRU(dim=128)) + UNet + UNet1DDecoder with fft_9900</td> <td>0.76208</td> <td>0.735</td> <td>0.795</td> </tr> <tr> <td>FeatureExtractor(CNNSpectrogram(dim=128)) + UNet + UNet1DDecoder with fft_9875</td> <td>0.74933</td> <td></td> <td></td> </tr> </tbody> </table> <p>Cleaning features using FFT had a big effect on CV, but as it turned out, it didn't have a big effect on the private score.<br> By increasing the number of dimensions during feature extraction from 64 to 128, both the CV and private scores improved significantly.</p> <h3>4. 
Model Ensemble</h3> <p>Two-step ensemble approach: <br> In the first step, predictions from the seven models were ensembled using three different methods: Optuna, Nelder-Mead, and Hill Climbing.<br> In the second step, these three ensemble results were further ensembled using Optuna to obtain the final result.<br> This approach yielded good results in both cross-validation and private scores. The first and second stages both scored 0.802 on Private; the second stage was slightly better on CV, though it may not have been necessary to go this far.</p> <h3>first stage</h3> <table> <thead> <tr> <th>Ensemble method</th> <th>CV</th> <th>Public (5-fold)</th> <th>Private (5-fold)</th> </tr> </thead> <tbody> <tr> <td>optuna</td> <td>0.79038</td> <td>0.746</td> <td>0.802</td> </tr> <tr> <td>nelder-mead</td> <td>0.78951</td> <td></td> <td></td> </tr> <tr> <td>hill climbing</td> <td>0.78801</td> <td></td> <td></td> </tr> </tbody> </table> <h3>second stage</h3> <table> <thead> <tr> <th>Ensemble method</th> <th>CV</th> <th>Public (5-fold)</th> <th>Private (5-fold)</th> </tr> </thead> <tbody> <tr> <td>optuna</td> <td>0.79049</td> <td>0.746</td> <td>0.802</td> </tr> </tbody> </table> <h3>5. Approaches that didn't work</h3> <ul> <li>Applying Focal Loss: The cross-validation score improved when using Focal Loss, but an error occurred during submission.</li> <li>Applying asymmetric Gaussian and exponentially decaying distributions: I applied asymmetric Gaussian and exponentially decaying distributions to the labels, but these approaches did not contribute to improved performance.</li> <li>Additional feature engineering: I tried additional features such as moving average, standard deviation, difference signal, cumulative sum, and autocorrelation, but these also did not improve performance.</li> <li>Using DeBERTa-v3-small: I used DeBERTa-v3-small for feature extraction, but training did not proceed well and I did not get the expected results.</li> </ul> <p>Finally, thank you for organizing the competition. It was a very challenging competition and I learned a lot.</p> <p>Moreover, this is my second silver medal. I want to continue working hard to become a Kaggle master. Thank you everyone for letting me learn so much. I will continue to do my best.</p>
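<p>For readers who want to see what "cleaning with an FFT percentile threshold" can look like in practice, here is a hedged sketch: keep only the strongest frequency components and transform back. This is our reading of the general idea, not the exact code of the referenced notebook.</p> <pre><code>import numpy as np

def fft_clean(signal, percentile=98.75):
    # Zero out all frequency bins whose magnitude falls below the given
    # percentile, then go back to the time domain. Higher percentiles
    # keep fewer components, i.e. smooth more aggressively.
    spec = np.fft.rfft(signal)
    cutoff = np.percentile(np.abs(spec), percentile)
    spec[np.abs(spec) &lt; cutoff] = 0
    return np.fft.irfft(spec, n=len(signal))

# e.g. one feature column per threshold: fft_9875 = fft_clean(anglez, 98.75)
</code></pre>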
Child Mind Institute - Detect Sleep States
41st Place Solution for the Child Mind Institute - Detect Sleep States Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I would like to express my sincere thanks to the organizers of the Child Mind Institute - Detect Sleep States Competition. This competition provided an invaluable opportunity to engage with long one-dimensional sequences, offering a distinct and educational experience in an important area of data science.</p> <h2><strong>Overview of the Inference Pipeline</strong></h2> <p>Our training pipeline and models are based on the excellent code by <a href="https://www.kaggle.com/tubotubo" target="_blank">@213tubo</a>, available at <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940?rvi=1" target="_blank">this page</a>. </p> <h4><strong>Our solution is as follows</strong>:</h4> <ul> <li><p><strong>Data Segmentation:</strong></p> <ul> <li>Segmented input features using overlapping time windows for inference</li></ul></li> <li><p><strong>Models:</strong></p> <ul> <li>Utilized 10 UNet-based models: five unique architectures, each trained on two distinct training-validation splits.</li> <li>Each of the 5 architectures has a different backbone.</li></ul></li> <li><p><strong>Post-Processing (Single Model):</strong></p> <ul> <li>Applied Non-Maximum Suppression (NMS) to retain only the highest-scoring prediction when detected peaks were within a specified proximity.</li></ul></li> <li><p><strong>Ensemble Predictions:</strong></p> <ul> <li>Adopted an original ensemble method inspired by Weighted Boxes Fusion (WBF)</li> <li>In the first stage, models within the same fold were ensembled.</li> <li>In the second stage, these results from different folds were combined.</li></ul></li> <li><p><strong>Post-Processing (Ensembled Results):</strong></p> <ul> <li>Removal of predictions in periodic padding intervals.</li></ul></li> </ul> <p>Below is an overview of our solution, as illustrated in the following diagram.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7135891%2F6087b564bf8572d8309d45d1586fdd6e%2Foverview_zzz.svg?generation=1702429424591245&amp;alt=media" alt=""></p> <h2>Details of the Submission</h2> <h3>Models Used:</h3> <p>We trained UNet-based models featuring five different CNN backbones: ResNet101, MiT-B3, MiT-B5, ResNeXt101, and ResNet152.<br> Our models were trained using two different combinations of training-validation data, with approximately 20% of the training set allocated as validation data in each combination.</p> <p>For each training session, we varied the seed values to increase model diversity. This approach was employed to enhance the reliability of cross-validation (CV) results and to enrich the ensemble for submission.</p> <h3>Data Segmentation for Inference:</h3> <p>For the input data segmentation, we used a specific window size and set the hop size to half of this window size, creating overlapping segments for inference. 
This method allowed us to effectively utilize the central part of each window in the final analysis, thereby reducing false detections at the edges of the segments.</p> <h3>Ensemble Method</h3> <p>We used an original ensemble method inspired by Weighted Boxes Fusion (WBF). The algorithm steps include:</p> <ul> <li>Grouping the input dataframe by series_id and event, and sorting each group by score in descending order.</li> <li>Extracting other events within a certain range from the different dataframes.</li> <li>Calculating the weighted step as sum(step * score / score_sum) and the new score as score_sum / (number of dataframes).</li> <li>Removing events within the specified range but with lower scores.</li> </ul> <h4>Our Code:</h4> <pre><code>def ensemble_events(df_list, series_ids, step_threshold) -&gt; pd.DataFrame:
    # df_list: one predictions dataframe per model, each with columns
    # series_id / event / step / score.
    grouped_list = [df.groupby(["event", "series_id"]) for df in df_list]
    events = ["onset", "wakeup"]
    ensemble_predictions = []
    for series_id in series_ids:
        for event in events:
            sorted_group_list = []
            for grouped in grouped_list:
                if (event, series_id) in grouped.groups:
                    sorted_group_list.append(grouped.get_group((event, series_id)).sort_values(by="score", ascending=False))
            while not all([sorted_group.empty for sorted_group in sorted_group_list]):
                specific_index = next((index for index, sorted_group in enumerate(sorted_group_list) if not sorted_group.empty), None)
                highest = sorted_group_list[specific_index].iloc[0]
                ensemble_steps = [highest["step"]]
                ensemble_probas = [highest["score"]]
                probas_sum = highest["score"]
                highest_step = highest["step"]
                for i, sorted_group in enumerate(sorted_group_list):
                    compared_prediction = sorted_group[sorted_group["step"].sub(highest_step).abs() &lt;= step_threshold]
                    condition = sorted_group["step"].sub(highest_step).abs() &gt; step_threshold
                    filtered_group = sorted_group[condition]
                    sorted_group_list[i] = filtered_group
                    if i != specific_index:
                        if not compared_prediction["step"].empty:
                            first_score = compared_prediction["score"].iloc[0]
                            first_step = compared_prediction["step"].iloc[0]
                        else:
                            first_score = 0
                            first_step = 0
                        ensemble_steps.append(first_step)
                        ensemble_probas.append(first_score)
                        probas_sum += first_score
                ensemble_step = 0
                if probas_sum != 0:
                    for s, p in zip(ensemble_steps, ensemble_probas):
                        ensemble_step += s * (p / probas_sum)
                ensemble_predictions.append({"series_id": series_id, "step": int(ensemble_step), "event": event, "score": probas_sum / len(grouped_list)})
    ensemble_df = pd.DataFrame(ensemble_predictions)
    ensemble_df["row_id"] = ensemble_df.index
    return ensemble_df
</code></pre> <h3>Post Processing</h3> <p>Post-processing steps were implemented for both single models and ensembled results.</p> <ul> <li><p><strong>For single models:</strong></p> <ul> <li>Applied Non-Maximum Suppression (NMS) using the excellent code from <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/458822" target="_blank">this discussion</a> to retain only the highest-scoring prediction when detected peaks were within a specified proximity (a minimal 1-D NMS sketch appears at the end of this post).</li></ul></li> <li><p><strong>For ensembled models:</strong></p> <ul> <li>Removal of needless predictions in periodic padding intervals was inspired by <a href="https://www.kaggle.com/code/takanashihumbert/a-simple-way-trying-to-find-the-dark-zones" target="_blank">this notebook</a>.</li></ul></li> </ul> <h2>Ablation Study</h2> <p>The details of the scores are as follows.</p> <table> <thead> <tr> <th>Number</th> <th>Description</th> <th>CV (Fold1)</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>(1)</td> <td>ResNet101 (No overlap)</td> <td>0.756</td> <td>0.722</td> <td>0.748</td> </tr> <tr> <td>(2)</td> <td>ResNet101 (overlap)</td> <td>0.77</td> <td>0.732</td> <td>0.778</td> </tr> <tr> <td>(3)</td> <td>ResNet101 (overlap) + NMS</td> <td>0.773</td> <td>Not Submitted</td> <td>Not Submitted</td> </tr> <tr> 
<td>(4)</td> <td>Ensemble of 5 models (overlap, same fold) + NMS</td> <td>0.7844</td> <td>0.753</td> <td>0.790</td> </tr> <tr> <td>(5)</td> <td>Ensemble of 5 models (overlap, same fold) + NMS + Remove Predictions in periodic padding</td> <td><strong>0.7847</strong></td> <td><strong>0.756</strong></td> <td>0.801</td> </tr> <tr> <td>(6)</td> <td>Ensemble of 5 models (overlap) across 2 folds + NMS</td> <td>-</td> <td>0.749</td> <td>0.792</td> </tr> <tr> <td>(7)</td> <td>Ensemble of 5 models (overlap) across 2 folds + NMS + Remove Predictions in periodic padding</td> <td>-</td> <td>0.753</td> <td><strong>0.802</strong></td> </tr> </tbody> </table>
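<p>For reference, here is a minimal 1-D non-maximum suppression over (step, score) pairs, in the spirit of the NMS step referenced above; the function and parameter names are illustrative, not from the linked discussion.</p> <pre><code>import numpy as np

def nms_1d(steps, scores, radius):
    # Greedily keep the highest-scoring prediction, suppress everything
    # within `radius` steps of it, then repeat on what remains.
    steps, scores = np.asarray(steps), np.asarray(scores)
    suppressed = np.zeros(len(steps), dtype=bool)
    keep = []
    for i in np.argsort(scores)[::-1]:
        if suppressed[i]:
            continue
        keep.append(i)
        suppressed |= np.abs(steps - steps[i]) &lt;= radius
    return keep  # indices of the retained predictions
</code></pre>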
Child Mind Institute - Detect Sleep States
4th Place Solution - Nikhil's Part (Modified Unet + Transformer and Weighted Box Fusion)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, thank you to my teammate <a href="https://www.kaggle.com/ryotayoshinobu" target="_blank">@ryotayoshinobu</a> for the amazing performance. Please read his part of the <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/459597" target="_blank">solution</a>.<br> I am really glad I teamed up with him and learnt a lot.</p> <p>A big thanks to the organizers as well for such an interesting competition. A really tough one.</p> <p>You can find the solution code here:</p> <p><a href="https://github.com/nikhilmishradevelop/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/nikhilmishradevelop/kaggle-child-mind-institute-detect-sleep-states</a></p> <h1>Solution Summary</h1> <p><strong>Validation Strategy</strong>: GroupKFold on Series Ids</p> <p><strong>Model Inputs</strong>: 17280 x n_features length sequences as input (17280 = 12 steps_per_minute x 60_minutes x 24_hours)<br> <strong>Model Outputs</strong>: 17280 x 2 (one for onset and the other for wakeup)<br> <strong>Model Type</strong>: Regression Model<br> <strong>Loss Type</strong>: Cubic Loss, i.e. abs(y_true-y_pred)**3</p> <p>The last remaining part of each sequence, with length &lt; 17280, was padded to make it equal to 17280.</p> <p>My solution is a modified UNET (averaged over 4 models, 2 LSTMs and 2 GRUs) using normalized Gaussian targets similar to what <a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a> described (a small sketch of such targets and of the cubic loss appears near the end of this post). 
A big thanks to him for sharing a good method early on in this competition.</p> <h2>Features:</h2> <ol> <li>Original Sequence Features: Enmo, Anglez</li> <li>TimeStamp Features: Hour and Weekday</li> <li>Derived Sequence Features: Anglez difference, Enmo difference, HDCZA features, etc.</li> </ol> <p>Good features help in faster convergence and much better scores, so adding good features was important (read Penguin's solution for some other good features).</p> <p><strong>My final 4 model ensemble results</strong>: </p> <p><strong>CV</strong>: 0.828<br> <strong>Public LB</strong>: 0.789<br> <strong>Private LB</strong>: 0.841</p> <h1><strong>Model Architecture</strong></h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1265131%2Fad78d36028ed6dc7ab1d3e79631b0172%2FFireShot%20Capture%20019%20-%20Online%20FlowChart%20%20Diagrams%20Editor%20-%20Mermaid%20Live%20Editor%20-%20mermaid.live%20(1).png?generation=1701841137052567&amp;alt=media" alt="Model Architecture"></p> <h2>Patching to reduce sequence length</h2> <p>Since 17280 is a very long sequence length (it makes training very slow and harder to train), we reduce the sequence length by patching it.</p> <p><strong>Input:</strong> 17280 * n_features (34)<br> <strong>Patch Size:</strong> (Used different patch sizes 3, 4, 5, or 6 in different models).</p> <p><strong>Modified Sequence Length:</strong> from 17280 to 17280 // Patch_Size</p> <p><strong>Modified Feature Size:</strong> k * 4 * num_features (where k is a Dense layer output dim)</p> <h2>Modified Unet Part</h2> <p><strong><em>UNET Encoder -&gt; Bottleneck -&gt; Transformer -&gt; GRU or LSTM -&gt; Unet Decoder</em></strong></p> <p>Each layer of the UNET encoder had concatenated connections to the 1st layer.</p> <p><strong>Initial Output size:</strong> 17280//Patch_Size, 2 * Patch_size</p> <p><strong>Reshaped Output Size:</strong> (17280, 2)</p> <h1>Post Processing</h1> <p>Initially I was just doing simple peak detection when I was competing solo.</p> <p>I wanted to, but could not, formulate a good way to rerank model predictions using lightgbm; <a href="https://www.kaggle.com/kmat2019" target="_blank">@kmat2019</a>'s solution is a great read for that.</p> <p>Thanks to <a href="https://www.kaggle.com/ryotayoshinobu" target="_blank">@ryotayoshinobu</a>, I started applying NMS, and finally experimented with a kind of WBF algorithm and made it work. This WBF algorithm gave a score of 0.79 to 0.793 on the last day.</p> <p>But it was worse by 0.001 in our private LB lol, so WBF hurt.</p> <p>WBF working:</p> <ol> <li><p><strong>Initialization and Convolution</strong>: The function starts by applying convolution with the specified <code>convolution_kernel</code> to smooth the data.</p></li> <li><p><strong>Peak Detection Loop</strong>: It iteratively searches for peaks in the data. 
The loop runs until the maximum count (<code>max_count</code>) is reached or the peak value falls below a threshold (<code>max_thresh</code>).</p></li> <li><p><strong>Adaptive Window and Weight Calculation</strong>:</p> <ul> <li>A dynamic window size around each peak is determined based on the current maximum value's power (<code>curr_max_power</code>) and a distance parameter (<code>k_dist</code>).</li> <li>Depending on the <code>section_weight_method</code> (logarithmic or linear), weights are calculated for each section of the data around the peak.</li></ul></li> <li><p><strong>Weighted Average and Score Calculation</strong>: For each detected peak, a weighted average is calculated to determine the score of the peak. <br> This is done by considering the peak and k of its neighbours. This score is influenced by the method of weight calculation and other hyperparameters like <code>log_base</code>, <code>log_scale</code>, and <code>weight_coeff</code>.</p></li> <li><p><strong>Suppression and Updating Predictions</strong>:</p> <ul> <li>After each peak is detected, the function suppresses the neighboring values to avoid detecting the same peak multiple times. This is controlled by <code>overlap_coeff</code> and <code>preds_reduction_power</code>.</li> <li>The indices and scores of detected peaks are stored and returned after the loop completes.</li></ul></li> </ol> <p>I will share the implementation of the WBF part code; you can see if tuning these hyperparameters helps in your models too. It accepts predictions per series_id.</p> <p>(I found the hyperparameters by a mix of manual and automated hyperparameter tuning.)</p> <pre><code>import numpy as np

# (function and string names restored to match the description above)
def get_peaks_wbf(preds_orig, hyperparams, max_count, max_thresh):
    k_dist = hyperparams['k_dist']
    log_base = hyperparams['log_base']
    log_scale = hyperparams['log_scale']
    curr_max_power = hyperparams['curr_max_power']
    weight_coeff = hyperparams['weight_coeff']
    convolution_kernel = hyperparams['convolution_kernel']
    section_weight_method = hyperparams['section_weight_method']
    preds_reduction_power = hyperparams['preds_reduction_power']
    overlap_coeff = hyperparams['overlap_coeff']
    min_distance = hyperparams['min_distance']

    preds = preds_orig.copy()
    preds = np.convolve(preds, convolution_kernel, mode='same')
    count = 0
    indices = []
    scores = []
    while count &lt; max_count:
        curr_max_idx = np.argmax(preds)
        curr_max = preds[curr_max_idx]
        if curr_max &lt; max_thresh:
            break
        # Adaptive half-width of the window around the current peak.
        k = int(k_dist - min(min_distance, int(curr_max ** curr_max_power)))
        start_idx = max(curr_max_idx - k, 0)
        end_idx = min(curr_max_idx + k + 1, len(preds))
        section = preds[start_idx:end_idx]
        distances = np.abs(np.arange(len(section)) - k)
        if section_weight_method == 'log':
            weights = 1 / (log_base ** (distances / (k * log_scale)))
        else:  # 'linear'
            weights = 1 - (distances / k) * weight_coeff
        weighted_avg = np.sum(section * weights) / np.sum(weights)
        scores.append(weighted_avg)
        indices.append(curr_max_idx)
        # Suppress the neighbourhood so the same peak is not found again.
        preds[start_idx:end_idx] *= (1 - weights * overlap_coeff) ** preds_reduction_power
        count += 1
    return indices, scores
</code></pre> <h2>Final Ensemble</h2> <p>The final ensemble was a weighted blend of my and Penguin's regression predictions, post-processed using WBF.</p> <p>Final_Sub = WBF(Penguin's_Predictions * 0.25 + Nikhil's_Predictions * 0.75)</p> <p>Since our predictions had slightly different scales of regression targets, Penguin's predictions were first scaled using a simple power transform, i.e. Penguin's_Predictions ** 0.7</p> <p><strong>Ensemble CV:</strong> 0.835<br> <strong>Ensemble Public LB:</strong> 0.793<br> <strong>Ensemble Private LB:</strong> 0.845</p> <p>P.S: This competition is the first time I did not use gradient boosting, even after I knew it worked so well, me being a tabular guy. A mistake, I guess. 
I love lightgbm and will definitely use it in the next competition.</p> <p>Congratulations to all the top performers, it was a good fight while it lasted :)</p> <p>Thank you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, I would definitely want to team up with a legend like you in some competition.</p> <p>More of a combination of standard practices, reading papers to see what works best, and a bit of intuition with trial and error. Usually you would want to start with a standard practice, like UNet for segmentation-like tasks, and build upon it.</p>
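<p>As a small illustration of the regression setup described above (normalized Gaussian targets trained with a cubic loss), here is a hedged sketch; the sigma value and function names are illustrative assumptions, while the cubic loss formula is taken directly from the write-up.</p> <pre><code>import numpy as np

def gaussian_targets(length, event_steps, sigma=12.0):
    # One smooth peak (max 1.0) centred on each labelled event step.
    x = np.arange(length, dtype=np.float32)
    y = np.zeros(length, dtype=np.float32)
    for s in event_steps:
        y = np.maximum(y, np.exp(-0.5 * ((x - s) / sigma) ** 2))
    return y

def cubic_loss(y_true, y_pred):
    # Loss from the write-up: abs(y_true - y_pred) ** 3, averaged.
    return np.mean(np.abs(y_true - y_pred) ** 3)
</code></pre>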
Child Mind Institute - Detect Sleep States
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the competition host for this interesting competition, which could be approached with a variety of methods; it was an enjoyable journey to explore this dataset.<br> Also, great thanks to the people who shared insights and ideas during this competition; I learnt a lot from you all.</p> <p><strong>overview</strong><br> I approach this problem in four stages: </p> <ol> <li>candidate step generation by heuristic rules </li> <li>step correct model to improve step quality </li> <li>score model to generate scores for submission </li> <li>post-processing </li> </ol> <p><strong>candidate generation</strong><br> The basic ideas of the heuristic rules are below; you can check the code for more details and tricks.</p> <ul> <li>region with |diff(anglez)|&lt;5: start -&gt; onset, end -&gt; wakeup. <br> Onset/wakeup is not likely to happen in the middle of an inactive region. Regions with suspicious fake data (rules to detect such regions are described later) are removed.</li> <li>fake region extended by an inactive region: start-&gt;wakeup, end-&gt;onset. <br> One explanation is that the behavior of removing the watch can only happen when one is awake.</li> <li>current candidates shifted by +/-720 when there are no other candidates in a range of 720*2.</li> </ul> <p>The three rules generate 254653 candidate steps in total for all series, with a best possible score of 0.9006.</p> <p><strong>step correct model</strong></p> <ul> <li>Lightgbm L1 regression with (nearest_target_step-step) as target</li> <li>since we are just interested in the quality of steps matched with a target, the weight of data points with |target|&gt;=360 is set to zero</li> <li>data points are weighted by threshold_class_width^(-6) to focus on the ones near the target, as I don’t think accurately predicting the gap is possible when the true target is far away (threshold_class_width e.g.: 120&lt;y=127&lt;150, then the resp. class_width is (150-120)=30). There are also many other choices of weights that can reach similar performance, based on my experiments.</li> <li>As minute%15 and second are the most important features, I think this stage mainly corrects for the bias in the label generation process rather than actually improving the step quality. 
And most of the gains of this stage actually come from shifting the step by ~-11, which is not reflected in the feature importance list.</li> <li>To validate the next stage without leakage, CV is done in an inner fold with 5-fold group-k-fold by series_id, so 2*5*5 models in total.</li> </ul> <p>Best possible score after this stage is 0.932.</p> <p><strong>score model</strong></p> <ul> <li>Lightgbm with cross-entropy objective</li> <li>target=max(0,1-|nearest_target_step-step|/360) for data points nearest to each target; otherwise the target is set to zero.</li> <li>negative data points (target=0) are down-weighted by *0.7/0.4 for those with |nearest_target_step-step| &gt;/&lt;360.</li> <li>the target is created on the corrected step given by the out-of-fold predictions from the last stage.</li> <li>I create different models for onset and wakeup candidates and use 5-fold group-k-fold by series_id for validation, so 2*5 models in total.</li> </ul> <p>I compute the competition metric after post-processing, so no score is reported here, but it is easy to add one.</p> <p><strong>post-processing</strong><br> This consists of three parts:</p> <ul> <li>if step%12==0 then step +/- 1 to match more target steps.</li> <li>for two candidates with gap &lt; 720, only keep both when: <br> <code>min(|gap|/720,1)*(exp_score1*exp_score2/(exp_score1**2+exp_score2**2))**0.5&gt;0.083</code><br> here <br> <code>exp_score=np.exp(np.arctanh(2*score-1))</code><br> else remove the one with the smaller score (this rule is written out as code at the end of this post).<br> This avoids too-dense candidates and only works when the model's preference between the candidates is clearly biased.</li> <li>If the score sum in a day for some series is larger than 1, then all scores in that day are divided by the score sum.</li> </ul> <p>After post-processing: CV score 0.825, public LB score 0.783, private LB score 0.844.</p> <p><strong>more details</strong></p> <ul> <li><p><strong>fake region</strong></p> <ol> <li>anglez 5min mean (6 decimal accuracy) appears more than once in the same series</li> <li>anglez 5min std&gt;0.5 to avoid marking some inactive regions as fake</li> <li>mark regions surrounded by long fake regions as fake, as described on the GGIR website.</li></ol></li> <li><p><strong>data cleaning</strong></p> <ol> <li>remove some series with many wrong labels by manual inspection</li> <li>data points covered by a long unlabeled tail region are set to weight zero</li> <li>data points related to targets with an unexpected anglez distribution are set to weight zero (onset followed by an active region, wakeup followed by an inactive region)</li></ol> <p>Those cases are only removed from the training process; I still include them when computing the CV score.</p></li> <li><p><strong>features</strong></p> <p>2040 in total; I only describe the most important ones, as the remaining ones may be duplicated, unimportant or useless. 
You can check the code for more details.</p> <ol> <li>window based features: <strong>source_column X stats X window_size X window_operations</strong>, not the full combination.<br> <strong>source_column:</strong> <em>anglez_abs_diff, enmo_abs_diff, anglez_abs_diff_quantile, smoothed_anglez_abs_diff, anglez_abs_diff_in_given_range, fake_mark, sleep_mark by GGIR heuristic rules, …</em><br> <strong>stats</strong>: <em>mean/50 quantile/95 quantile/max/min</em><br> <strong>window_size</strong>: <em>1min,3min,5min,10min,15min,30min,1h,2h,4h,8h,12h</em><br> <strong>window_operations</strong>: <em>left_side_window(lw), right_side_window(rw), (lw-rw), [(lw-rw)/(lw+rw)], concat(lw,rw), [(lw-rw)/ concat(lw,rw))]</em></li> <li>time features<br> weekday, hour, second, minute, second_in_day, step/max_step<br> minute%15 (as discussed in <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/444374" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/444374</a>)</li> <li>feature stats aggregated by day<br> mainly fake_mark and sleep_mark</li> <li>step gap to the mean event time in the series given by heuristic methods</li> <li>stats values at the step time</li> <li>step gap to nearby candidates, and feature value gap for some features that ranked high on feature importance, manually picked.</li></ol></li> </ul> <p><strong>Reproduce</strong><br> re-run the following notebooks in order<br> <a href="https://www.kaggle.com/code/w5833946/cmi-lgb-v9-train-reproduce" target="_blank">https://www.kaggle.com/code/w5833946/cmi-lgb-v9-train-reproduce</a><br> <a href="https://www.kaggle.com/code/w5833946/cmi-lgb-v9-predict-reproduce" target="_blank">https://www.kaggle.com/code/w5833946/cmi-lgb-v9-predict-reproduce</a></p>
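<p>The candidate-pair rule from the post-processing section above can be written out directly as a small function (note that arctanh requires scores strictly between 0 and 1); the function name is ours.</p> <pre><code>import numpy as np

def keep_both_candidates(score1, score2, gap):
    # Rule from the write-up: for two candidates closer than 720 steps,
    # keep both only when this combined strength clears 0.083;
    # otherwise drop the one with the smaller score.
    def exp_score(s):
        return np.exp(np.arctanh(2 * s - 1))
    e1, e2 = exp_score(score1), exp_score(score2)
    strength = min(abs(gap) / 720, 1) * (e1 * e2 / (e1 ** 2 + e2 ** 2)) ** 0.5
    return strength &gt; 0.083
</code></pre>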
Child Mind Institute - Detect Sleep States
685th Place Solution for the Child Mind Institute - Detect Sleep States Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to express gratitude to Kaggle for hosting the competition and to the Child Mind Institute for providing the dataset. </p> <p>I'm so grateful to 213TUBO for sharing the public notebook <a href="https://www.kaggle.com/code/tubotubo/cmi-submit?scriptVersionId=149255724" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-submit?scriptVersionId=149255724</a>.<br> I gained a lot of knowledge from the GitHub repo <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a>.</p> <h1>Context</h1> <p>• Business context: <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/overview" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/overview</a><br> • Data context: <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data</a></p> <h1>Overview of the approach</h1> <h2>The models</h2> <p>The Torch framework with UNet- and AutoModel-based models was used.</p> <table> <thead> <tr> <th>Owner</th> <th>Github</th> </tr> </thead> <tbody> <tr> <td>213TUBO</td> <td><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/tree/main/src/models" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/tree/main/src/models</a></td> </tr> </tbody> </table> <pre><code>from src.models.detr2D import DETR2DCNN
from src.models.spec1D import Spec1D
from src.models.spec2Dcnn import Spec2DCNN
from src.models.transformerautomodel import TransformerAutoModel
from src.models.centernet import CenterNet
</code></pre> <h2>Data preprocessing, feature engineering:</h2> <table> <thead> <tr> <th>Name</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Score th</td> <td>0.02</td> </tr> <tr> <td>Distance</td> <td>10</td> </tr> </tbody> </table> <p>Added features:</p> <table> <thead> <tr> <th>Field</th> <th>Function</th> </tr> </thead> <tbody> <tr> <td>Anglez, enmo</td> <td>diff</td> </tr> <tr> <td>Anglez, enmo</td> <td>mean, avg</td> </tr> <tr> <td>Hour</td> <td>sin, cos</td> </tr> </tbody> </table> <h2>Augmentations:</h2> <table> <thead> <tr> <th>Type</th> <th>Name</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Mixup</td> <td>prob</td> <td>0.0</td> </tr> <tr> <td>Mixup</td> <td>alpha</td> <td>0.4</td> </tr> <tr> <td>Cutmix</td> <td>prob</td> <td>0.0</td> </tr> <tr> <td>Cutmix</td> <td>alpha</td> <td>0.4</td> </tr> </tbody> </table> <h2>Validation Strategy</h2> <p>We used fixed train_series_ids and valid_series_ids lists.<br> The downsample rate is 2.</p> <h1>Details of the submission</h1> <p><a href="https://www.kaggle.com/code/emmawilsonev/cmi-submit/edit/run/153313645" target="_blank">https://www.kaggle.com/code/emmawilsonev/cmi-submit/edit/run/153313645</a></p> <table> <thead> <tr> <th>Private Score</th> <th>Public Score</th> </tr> </thead> <tbody> <tr> <td>0.728</td> <td>0.707</td> </tr> </tbody> </table> <h2>Code samples model training:</h2> <table> <thead> <tr> <th>Training/Configuration</th> <th>Code</th> </tr> </thead> 
<tbody> <tr> <td>Training</td> <td><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/src/modelmodule.py" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/src/modelmodule.py</a></td> </tr> <tr> <td>Configuration</td> <td><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/run/conf/train.yaml#L41" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/run/conf/train.yaml#L41</a></td> </tr> </tbody> </table> <p>Configuration:</p> <table> <thead> <tr> <th>Parameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>epochs</td> <td>50</td> </tr> <tr> <td>gradient_clip_val</td> <td>1.0</td> </tr> <tr> <td>lr</td> <td>0.0005</td> </tr> </tbody> </table> <h2>Code samples feature engineering:</h2> <pre><code>num_frames = upsampled_num_frames // downsample_rate
</code></pre> <pre><code>for onset, wakeup in this_event_df[["onset", "wakeup"]].to_numpy():
    onset_pos = int((onset - start) / duration * num_frames)
    onset_offset = (onset - start) / duration * num_frames - onset_pos
    wakeup_pos = int((wakeup - start) / duration * num_frames)
    wakeup_offset = (wakeup - start) / duration * num_frames - wakeup_pos
    bbox_size = (min(wakeup_pos, num_frames) - max(onset_pos, 0)) / num_frames
</code></pre> <h2>Code samples model inference:</h2> <pre><code>feature_extractor = get_feature_extractor(
    cfg.feature_extractor, feature_dim, num_timesteps
)
decoder = get_decoder(cfg.decoder, feature_extractor.height, n_classes, num_timesteps)
model = Spec2DCNN(
    feature_extractor=feature_extractor,
    decoder=decoder,
    in_channels=feature_extractor.out_chans,
    mixup_alpha=cfg.aug.mixup_alpha,
    cutmix_alpha=cfg.aug.cutmix_alpha,
    **cfg.model.params,
)
</code></pre> <h2>What was impactful about the submission.</h2> <p>A great concept from 213TUBO concerning sin and cos (a small sketch appears at the end of this post):<br> <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/run/prepare_data.py" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/run/prepare_data.py</a></p> <h2>What was tried and didn’t work.</h2> <p>Scikit-learn gradient boosting.</p> <h2>Helpful notebooks:</h2> <table> <thead> <tr> <th>Author</th> <th>notebook/github</th> <th>link</th> </tr> </thead> <tbody> <tr> <td>213TUBO</td> <td>notebook</td> <td><a href="https://www.kaggle.com/code/tubotubo/cmi-submit?scriptVersionId=149255724" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-submit?scriptVersionId=149255724</a></td> </tr> <tr> <td>213TUBO</td> <td>github</td> <td><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a></td> </tr> </tbody> </table> <p>Thank you for reading.</p>
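<p>The sin/cos encoding from the feature table above can be sketched in a couple of lines; the cyclic encoding keeps 23:00 and 00:00 close together in feature space (the function name is ours).</p> <pre><code>import numpy as np

def encode_hour(hour):
    # Map hour-of-day onto the unit circle so the feature wraps at midnight.
    rad = 2 * np.pi * np.asarray(hour) / 24
    return np.sin(rad), np.cos(rad)

hour_sin, hour_cos = encode_hour([0, 6, 12, 23])
</code></pre>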
Child Mind Institute - Detect Sleep States
6th Place Solution - BiLSTM-UNet
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, that's true :) Ah, I don't know why I didn't think about using some library for that.<br> I am not sure what you mean, as I didn't predict only one event per day. I enforced a one-event-per-21-minutes rule by repeatedly selecting the maximum and then nulling out the predictions 10 minutes before and after it. This way there can't be another prediction in this range.</p> <p>Thanks a lot to the competition host and Kaggle for hosting this fun competition!</p> <h1>Data Processing</h1> <p>I split the data into separate days with a padding of 60 minutes. The days are shifted by 500 minutes (so that one day starts and ends at 15:40). I did this to avoid wakeup/onset events near the end or start of a day, which I hypothesized to be beneficial to the model (since there is only limited padding in both directions). I discarded 8 series ids which were mostly filled with nan values, and hand picked a first-n-day cutoff for a few series (removing the end of the series whenever there were no events but still regular activity). Since events only ever happened on full minutes, I constructed features for every one-minute bucket of data and generated an array of size (1440, n_feats). The features were mostly basic: anglez, enmo, anglez*enmo, hdcza were selectively paired with min, max, mean, std aggregations to generate 10 features, where hdcza was inspired by this discussion post:<br> <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/453267" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/453267</a></p> <p><a href="https://www.kaggle.com/ymatioun" target="_blank">@ymatioun</a> made the great discovery that events had different frequencies at different steps in the 15-minute cycle. Therefore, one additional feature is (step // 12) % 15.<br> Since there were large repeated periods in the data, I used an algorithm that utilized hashing and hashmaps to look for such patterns in the data:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6735218%2F5afb15a36f6816d1a90766c8983d30fb%2Frepeat1.PNG?generation=1701822960376424&amp;alt=media" alt=""></p> <p>Such a feature proved incredibly useful since there were many non-obvious repeating patterns in the data.<br> Here is a plot of enmo (blue), awake (red), and this new feature, which indicates whether the data occurs somewhere else in the series or not (black):<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6735218%2F83b03dd0cfbd6bb590934c9a3ad8a71b%2Frepeat2.PNG?generation=1701823230374898&amp;alt=media" alt=""></p> <p>As you can see, there were many parts of the data which were copy-pasted from somewhere else. I am sure some models will have learned this anyway from the sometimes apparent cuts in the signal, but this of course works much better. The important thing about this feature is that the patient is almost always sleeping when it is 1 (&gt;99%). 
I didn't do too much in-depth testing on how this affected the score, but local cv indicated that it improved the score by roughly ~0.015. Since the data was repeated and likely not valid anyway, I null out all anglez and enmo values where this feature is 1 to prevent overfitting.</p> <h1>Model Architecture</h1> <p>I used 4 model architectures which were all very similar and mostly based on bidirectional LSTM layers. I used a 6-fold split. </p> <p>My main model used a UNet LSTM/Transformer architecture with a fully connected dimension of 1200 and an LSTM/Transformer dim of 300. In contrast to the typical UNet implementation, I added the layers in the skip connection rather than concatenating.</p> <h2>Main model:</h2> <p>This is the model pipeline used by the best single models. The other 3 architectures I used are very similar (with varying layer counts and dimensions), except 1 architecture which doesn't use the UNet.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6735218%2Fdf7fde6fec78f361a99cad3a934031b6%2Fmodel1.PNG?generation=1702143119651974&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6735218%2F40cbb13011a969d0f9f1a6969af1bb07%2Fmodel2.PNG?generation=1702142987749483&amp;alt=media" alt=""></p> <p>1 day of data in 1-minute intervals (padded 60 minutes in both directions), shape: (batch_size, 1440 + 60*2, n_feats) -&gt; fully connected layers -&gt; BiLSTM -&gt; max pooling (pool size 15) -&gt; 2 lstm layers -&gt; max pooling (pool size 2) -&gt; transformer block -&gt; upsampling (factor 2) -&gt; skip -&gt; 2 lstm layers -&gt; upsampling (factor 15) -&gt; skip -&gt; lstm layer -&gt; fully connected layers -&gt; output layer</p> <p>The 'skip' layers indicate that the output from the same stage of the downsampling path is added onto the current sequence in the upsampling stage.</p> <p>The minute-mod-15 feature was used to calculate an embedding which was added to the sequence before and after the UNet. Also, I used the mish activation most of the time.</p> <p>The other models had slight variations, e.g. leaving out the UNet, changing dimensions, increasing layer count.<br> Some variations also include a sub-lstm layer, which applies an LSTM layer on all 180-minute intervals within a day, with 120-minute overlaps between the intervals.</p> <h2>Loss function and output</h2> <p>The model predicted if an onset or wakeup event occurred within 0, 1, 3, 5, 7 minutes of the current step. I use a weighted BCE loss which weights the loss of the different minute thresholds by 1/(threshold+1). The prediction is the sum of all the probabilities that an event is in the 0, 1, 3, 5, 7 threshold. This is done for wakeup and onset separately, so the model had 10 outputs per step.</p> <h1>Postprocessing</h1> <p>The model output itself had dimension (minutes_in_series, 10). This is the process of retrieving predictions from the model output (a sketch of step 3 appears at the end of this post):</p> <ol> <li>Concatenate the model predictions from each day in a series.</li> <li>Take the sum of the prediction that an event is in the 0, 1, 3, 5, 7 minutes range at each step, for onset and wakeup respectively.</li> <li>Until there are no predictions larger than 5e-5 left:<br> a. Select the highest prediction.<br> b. 
Null out all predictions within a range of 10 minutes of the prediction.</li> <li>Since we are still in minute space, multiply by 12 to obtain the correct steps.</li> <li>As was pointed out in the discussion forums, predictions for full minutes are punished since the metric doesn’t count a prediction as within a threshold if it is directly on a threshold. Therefore, to maximize the number of possible thresholds we may be in, it is beneficial to add or subtract 1. This is decided by looking at the neighboring prediction.</li> <li>Use the summed predictions from step 2 as scores for the retrieved steps.</li> </ol> <p>Thanks to <a href="https://www.kaggle.com/maruichi01" target="_blank">@maruichi01</a> for pointing that out! The discussion related to this is here:<br> <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/446919#2487136" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/446919#2487136</a></p> <h1>Training</h1> <p>I used the AdamW optimizer with a weight decay of 1e-7 and an lr of 1e-3 that decayed exponentially. It helped training stability to clip the norm of the gradient to 1e-2. Regular dropout was used, plus a kind of dropout that randomly masked 15-minute segments with a probability of 15-25%. I trained on 4 seeds and picked the best seed of each architecture within each fold. I also refitted the best 2 architectures using the entire data.</p> <h1>Other stuff</h1> <p>Like Chris mentioned in his write-up, the discovery that we can make a lot of predictions per event was also very important. I first had a decoder model, which used the previous predictions to generate the next one using a probability distribution over all future steps. This worked quite well, but only if just the maximum prediction would have been counted. So I discarded this idea after discovering that we can make a lot of predictions per event without getting punished too much.<br> My local cv is ~0.833.</p> <h1>Things that didn't work</h1> <p>Like I said earlier, I first worked on a decoder model to predict the next change event, which didn't work so well since it was beneficial to make many predictions per night. I also worked on stacking the model with an LGBM, but that only gave tiny improvements, so I dropped the idea to avoid adding too much complexity, though if I had more time I would have probably reconsidered it.<br> Since the ranking of the predictions was important, I tried using other losses, e.g. a Pearson correlation loss, but that did not work better.</p> <p>I hope I covered the most important parts and didn't forget anything. I tried a lot of things which I may add later. I am very happy I managed to get my second solo Gold today, thus becoming a Kaggle master 😃</p> <p>Edit:<br> Link to git repo:<br> <a href="https://gitlab.com/fritz_cremer/cmidss_final" target="_blank">https://gitlab.com/fritz_cremer/cmidss_final</a></p> <p>Thanks, yes I figured it was important to capture long- and short-term dependencies. A pure Transformer didn't work out for me, possibly because there wasn't enough data to accurately learn the position embeddings.</p> <p>I thought about a few things like that, but in the end I just let the model predict probabilities between 0 and 1 for each step without constraint. And yes, post-processing was one of the most important things, I believe.</p> <p>Thank you! I tried using convolutional layers but they didn't improve my model, though I am sure that with more effort they could have improved my ensemble. 
I padded because otherwise the model has no context about the minutes right before and after the day, which might be helpful for making predictions at these steps.</p> <p>Thanks a lot :)</p>
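<p>To make the minute-space postprocessing steps above concrete, here is a minimal sketch under stated assumptions: the per-minute probabilities have already been summed, one array covers one event type, and the function name, <code>max_events</code>, and the exact neighbor rule are hypothetical rather than taken from the original code.</p> <pre><code>import numpy as np

def minute_postprocess(minute_probs, max_events=100):
    # minute_probs: per-minute summed event probabilities for one series/event type
    probs = minute_probs.copy()
    steps, scores = [], []
    for _ in range(max_events):
        m = int(np.argmax(probs))
        if probs[m] &lt;= 0:
            break
        score = probs[m]
        # 1) null out all predictions within 10 minutes of this one
        probs[max(0, m - 10): m + 11] = 0.0
        # 2) minute space -&gt; step space (12 steps of 5 s per minute)
        step = m * 12
        # 3) shift one step towards the higher-scoring neighbor so the
        #    prediction never lands exactly on a tolerance threshold
        left = minute_probs[m - 1] if m &gt; 0 else -np.inf
        right = minute_probs[m + 1] if m + 1 &lt; len(minute_probs) else -np.inf
        step += 1 if right &gt;= left else -1
        # 4) use the summed prediction as the score
        steps.append(step)
        scores.append(score)
    return np.array(steps), np.array(scores)
</code></pre>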
Child Mind Institute - Detect Sleep States
75th Place Detailed Solution - Spec2DCNN + CenterNet + Transformer + NMS
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Same feeling as yours; we did a lot of feature work, but it didn't help at all.</p> <p>First of all, I would like to thank <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a> for sharing your high-quality code, and also thank my teammates <a href="https://www.kaggle.com/liruiqi577" target="_blank">@liruiqi577</a> <a href="https://www.kaggle.com/brickcoder" target="_blank">@brickcoder</a> <a href="https://www.kaggle.com/xtzhou" target="_blank">@xtzhou</a> for their contributions in the competition. Here, I am going to share our team’s “<strong>snore like thunder</strong>” solution from the following aspects:</p> <ol> <li>Data preprocessing</li> <li>Feature Engineering</li> <li>Model</li> <li>Post Processing</li> <li>Model Ensemble</li> </ol> <h2>1. Data preprocessing</h2> <p>We did EDA, read the open discussions, and found that there are 4 types of data anomalies:</p> <ul> <li>Some series have a high missing rate and some of them do not even have any event labels;</li> <li>In some series, there are no event annotations in the middle and tail (possibly because the collection activity has stopped);</li> <li>The sleep record is incomplete (a period of sleep is only marked with onset or wakeup).</li> <li>There are outliers in the enmo value.</li> </ul> <p>To this end, we made some attempts, such as:</p> <ul> <li>Eliminate series with high missing rates;</li> <li>Cut the tail of the series without event labels;</li> <li>Clip enmo to an upper bound of 1.</li> </ul> <p>But the above methods didn't completely work. In the end, our preprocessing method was:</p> <p>We split the dataset, grouped by series, into 5 folds. For each fold, we eliminate series with a label missing rate of 100% from the training dataset while performing no data preprocessing on the validation set. This avoids introducing noise into the training set and ensures that the evaluation results on the validation set are biased towards the real data distribution, which improved our LB score by +0.006.</p> <p>Part of our experiments are shown below:</p> <table> <thead> <tr> <th>Experiment</th> <th>Fold0</th> <th>Public (single fold)</th> <th>Private (5-fold)</th> </tr> </thead> <tbody> <tr> <td>No preprocess missing data</td> <td>0.751</td> <td>0.718</td> <td>0.744</td> </tr> <tr> <td>Eliminate unlabeled data at the end of train_series &amp; series with missing rate &gt;80%</td> <td>0.739</td> <td>0.709</td> <td>0.741</td> </tr> <tr> <td>Drop train series which don’t have any event labels</td> <td>0.752</td> <td>0.724</td> <td>0.749</td> </tr> </tbody> </table> <h2>2. Feature Engineering</h2> <ul> <li>Sensor features: we take the first-order difference of the enmo and anglez features and then its absolute value. 
These are then smoothed with a rolling mean and used to replace the original enmo and anglez features, which improved our LB score by +0.01.</li> </ul> <pre><code># Reconstruction of the garbled snippet: the column names and rolling-window
# parameters were lost in extraction; window=5 here is an assumed value.
train_series["anglez_abs_diff"] = train_series["anglez"].diff().abs()
train_series["anglez"] = train_series["anglez_abs_diff"].rolling(window=5, center=True, min_periods=1).mean()
train_series["enmo_abs_diff"] = train_series["enmo"].diff().abs()
train_series["enmo"] = train_series["enmo_abs_diff"].rolling(window=5, center=True, min_periods=1).mean()
</code></pre> <ul> <li>Time features: sin and cos of the hour.</li> </ul> <p>In addition, we also made the following features based on open notebooks and our EDA, such as: differential features of different orders, rolling-window statistical features, interaction features of enmo and anglez (such as anglez's differential abs * enmo, etc.), anglez_rad_sin/cos, dayofweek/is_weekend (I found that children have different sleeping habits on weekdays and weekends). But strangely enough, too much feature engineering didn’t bring us much benefit.</p> <table> <thead> <tr> <th>Experiment</th> <th>Fold0</th> <th>Public (5-fold)</th> <th>Private (5-fold)</th> </tr> </thead> <tbody> <tr> <td>anglez + enmo + hour_sin + hour_cos</td> <td>0.763</td> <td>0.731</td> <td>0.768</td> </tr> <tr> <td>anglez_abs_diff + enmo_abs_diff + hour_sin + hour_cos</td> <td>0.771</td> <td>0.741</td> <td>0.781</td> </tr> </tbody> </table> <h2>3. Model</h2> <p>We used 4 models:</p> <ul> <li>CNNSpectrogram + Spec2DCNN + UNet1DDecoder;</li> <li>PANNsFeatureExtractor + Spec2DCNN + UNet1DDecoder.</li> <li>PANNsFeatureExtractor + CenterNet + UNet1DDecoder.</li> <li>TransformerAutoModel (xsmall, downsample_rate=8).</li> </ul> <p><strong>Parameter Tuning</strong>: Adding one more kernel_size of 8 to CNNSpectrogram gained +0.002 online.</p> <p><strong>Multi-Task Learning Objectives</strong>: sleep status, onset, wake.</p> <p><strong>Loss Function</strong>: For Spec2DCNN and TransformerAutoModel, we use BCE, but with multi-task target weighting, sleep:onset:wake = 0.5:1:1. The purpose of this is to allow the model to focus on learning the last two columns. We tried to train only on the onset and wake columns, but the score was not good. We speculate that <strong>the positive samples in these two columns are sparse, and MTL is needed to transfer the information from positive samples in the sleep status to the prediction of sleep activity events.</strong> I also tried KL loss but it didn't work that well. </p> <pre><code># the weights follow the sleep:onset:wake = 0.5:1:1 ratio described above
self.loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([0.5, 1.0, 1.0]))
</code></pre> <p>At the same time, we increased the number of epochs to 70 and added early stopping with patience=15. The early stopping criterion is the AP on the validation dataset, not the validation loss. batch_size=32.</p> <table> <thead> <tr> <th>Experiment</th> <th>Fold0</th> <th>Public (single fold)</th> <th>Private (5-fold)</th> </tr> </thead> <tbody> <tr> <td>earlystop by val_loss</td> <td>0.750</td> <td>0.697</td> <td>0.742</td> </tr> <tr> <td>earlystop by val_score</td> <td>0.751</td> <td>0.718</td> <td>0.744</td> </tr> <tr> <td>loss_wgt = 1:1:1</td> <td>0.752</td> <td>0.724</td> <td>0.749</td> </tr> <tr> <td>loss_wgt = 0.5:1:1</td> <td>0.755</td> <td>0.723</td> <td>0.753</td> </tr> </tbody> </table> <p>Note: we used the model_weight.pth with the best offline val_score to submit to the LB, instead of using the best_model.pth with the best offline val_loss.</p> <h2>4. Post Processing</h2> <p>Our post-processing mainly includes:</p> <ul> <li><strong>find_peaks()</strong>: scipy.signal.find_peaks;</li> <li><strong>NMS</strong>: This task can be treated as object detection. 
Each [onset, wakeup] pair is regarded as a bounding box, and the score is the confidence of the box. Therefore, I used a time-series NMS. Using NMS eliminates redundant boxes with high IoU, which increases our AP.</li> </ul> <pre><code># Reconstruction of the garbled snippet: the function signature, column
# indices, and literals were lost in extraction and are assumptions here.
def nms_1d(dets_arr, thresh):
    x1 = dets_arr[:, 0]      # onset step
    x2 = dets_arr[:, 1]      # wakeup step
    scores = dets_arr[:, 2]

    areas = x2 - x1
    order = scores.argsort()[::-1]

    keep = []
    while order.size &gt; 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1 + 1)
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr &lt;= thresh)[0]
        order = order[inds + 1]

    dets_nms_arr = dets_arr[keep, :]
    onset_steps = dets_nms_arr[:, 0].tolist()
    wakeup_steps = dets_nms_arr[:, 1].tolist()
    nms_save_steps = np.unique(onset_steps + wakeup_steps).tolist()
    return nms_save_steps
</code></pre> <p>In addition, we set score_th=0.005 (if it is set too low, a large number of events will be detected and cause online scoring errors, so it is fixed at 0.005 here), and use optuna to simultaneously search the parameter distance in find_peaks and the parameter iou_threshold of NMS. Finally, when distance=72 and iou_threshold=0.995, the best performance is achieved.</p> <pre><code># Reconstruction: the search ranges and n_trials were lost in extraction and
# are assumptions; only score_th=0.005 and the best values found
# (distance=72, thresh=0.995) are stated in the text.
import optuna

def objective(trial):
    score_th = 0.005
    distance = trial.suggest_int("distance", 10, 100)
    thresh = trial.suggest_float("thresh", 0.9, 1.0)
    val_pred_df = post_process_for_seg(
        keys=keys,
        preds=preds[:, :, [1, 2]],
        score_th=score_th,
        distance=distance,
    )
    val_pred_df = val_pred_df.to_pandas()
    nms_pred_dfs = NMS_prediction(val_pred_df, thresh, verbose=False)
    score = event_detection_ap(valid_event_df.to_pandas(), nms_pred_dfs)
    return -score

study = optuna.create_study()
study.optimize(objective, n_trials=50)
print("best_params:", study.best_params)
print("best_value:", study.best_value)
</code></pre> <table> <thead> <tr> <th>Experiment</th> <th>Fold0</th> <th>Public (5-fold)</th> <th>Private (5-fold)</th> </tr> </thead> <tbody> <tr> <td>find_peak</td> <td>-</td> <td>0.745</td> <td>0.787</td> </tr> <tr> <td>find_peak+NMS+optuna</td> <td>-</td> <td>0.746</td> <td>0.789</td> </tr> </tbody> </table> <h2>5. Model Ensemble</h2> <p>Finally, we average the output probabilities of the following models and then feed them into the post-processing methods to detect events. By the way, I tried post-processing the detection events for each model and then concatenating them, but this resulted in too many detections. 
Even with NMS, I didn't get a better score.</p> <p>The number of ensemble models: 4 (types of models) * 5 (folds) = 20.</p> <table> <thead> <tr> <th>Experiment</th> <th>Fold0</th> <th>Public (5-fold)</th> <th>Private (5-fold)</th> </tr> </thead> <tbody> <tr> <td>model1: CNNSpectrogram + Spec2DCNN + UNet1DDecoder</td> <td>0.77209</td> <td>0.743</td> <td>0.784</td> </tr> <tr> <td>model2: PANNsFeatureExtractor + Spec2DCNN + UNet1DDecoder</td> <td>0.777</td> <td>0.743</td> <td>0.782</td> </tr> <tr> <td>model3: PANNsFeatureExtractor + CenterNet + UNet1DDecoder</td> <td>0.75968</td> <td>0.634</td> <td>0.68</td> </tr> <tr> <td>model4: TransformerAutoModel</td> <td>0.74680</td> <td>-</td> <td>-</td> </tr> <tr> <td>model1 + model2 (1:1)</td> <td>-</td> <td>0.746</td> <td>0.789</td> </tr> <tr> <td>model1 + model2 + model3 (1:1:0.4)</td> <td>-</td> <td>0.75</td> <td>0.786</td> </tr> <tr> <td>model1 + model2 + model3 + model4 (1:1:0.4:0.2)</td> <td></td> <td>0.752</td> <td>0.787</td> </tr> </tbody> </table> <p>Unfortunately, we only considered CenterNet and the Transformer for the model ensemble, with a tentative attitude, on the last day, but surprisingly found that <strong>a model with a low CV score can still improve final performance, as long as it is heterogeneous with respect to your previous models</strong>. But we didn’t have the opportunity to submit more, which was a profound lesson for me.</p> <p>Ideas we didn't get to:</p> <ul> <li><p><strong>Data Augmentation</strong>: Shift the time within the batch to increase time diversity and reduce dependence on hour features.</p></li> <li><p><strong>Model</strong>: Try more models. We tried a transformer, but it didn’t work for us. I am very much looking forward to the solutions from top-ranking players.</p></li> </ul> <p>Thanks again to Kaggle and all Kaggle players. This was a good competition and we learned a lot from it. If you think our solution is useful for you, welcome to upvote and discuss with us.</p> <p>In addition, this is my first 🥈 silver medal. Thank you everyone for letting me learn a lot. I will continue to work hard. :)</p> <p>Nice, I haven't tried smp.DeepLabV3Plus. Thanks for sharing.</p>
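<p>For readers who want to reproduce the blend: a minimal sketch of the weighted probability averaging described above, assuming each model's per-step probabilities have already been averaged over its 5 folds (the helper name and array shapes are hypothetical).</p> <pre><code>import numpy as np

def weighted_ensemble(prob_list, weights):
    # prob_list: list of arrays with identical shape, e.g. (n_steps, 3)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                        # normalize so outputs stay in [0, 1]
    stacked = np.stack(prob_list, axis=0)  # (n_models, ...)
    return np.tensordot(w, stacked, axes=1)

# best blend in the table: model1 : model2 : model3 : model4 = 1 : 1 : 0.4 : 0.2
# ensembled = weighted_ensemble([p1, p2, p3, p4], [1.0, 1.0, 0.4, 0.2])
</code></pre>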
Child Mind Institute - Detect Sleep States
77th solution - UNet1D -> WaveNet, Bi-GRU
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizer and Kaggle staff for this exciting competition. And congratulations to the winners. I only participated in the last three weeks, but I learned a lot and enjoyed this competition.</p> <h1>PreProcess</h1> <ul> <li>standard scaler (for each subject)</li> <li>sequence_size = 17,280 (12 * 60 * 24)</li> <li>sequence overlap = 11,520 (12 * 60 * 16)</li> <li>prepare features to be used for NN inputs<ul> <li>anglez</li> <li>enmo</li> <li>anglez_diff</li> <li>enmo_diff</li> <li>time_unit_sin</li> <li>time_unit_cos</li></ul></li> </ul> <h1>Validation</h1> <ul> <li>GroupKFold (k=5)</li> </ul> <h1>Model</h1> <table> <thead> <tr> <th></th> <th>CV(=5fold)</th> <th>Public</th> <th>Private</th> <th>Weight</th> </tr> </thead> <tbody> <tr> <td>UNet1D - WaveNet</td> <td>0.746</td> <td>0.735</td> <td>0.772</td> <td>0.3375</td> </tr> <tr> <td>UNet1D - BiGRU(num_block=2)</td> <td>0.731</td> <td>not submit</td> <td>not submit</td> <td>0.14875</td> </tr> <tr> <td>UNet1D - BiGRU(num_block=3)</td> <td>0.721</td> <td>0.708</td> <td>0.754</td> <td>0.06375</td> </tr> <tr> <td>UNet1D - ViT - BiGRU(num_block=2)</td> <td>0.728</td> <td>not submit</td> <td>not submit</td> <td>0.0375</td> </tr> <tr> <td>UNet1D - WaveNet - BiGRU(num_block=1)</td> <td>0.751</td> <td>0.730</td> <td>0.770</td> <td>0.4125</td> </tr> <tr> <td>Ensemble</td> <td>0.775</td> <td>0.756</td> <td>0.786</td> <td></td> </tr> </tbody> </table> <h3>Common in all models</h3> <ul> <li>Linear layer at the beginning</li> <li>Downsampling of negatives before the loss calculation (negative = positive * 10)</li> <li>Use hard labels (window=12)</li> <li>Multi Task Learning<ul> <li>segmentation: “asleep”, “awake”, “not_worn”</li> <li>event detection: “onset”, “wakeup”</li></ul></li> <li>I add the output of the segmentation head to the input of the event detection head. After that, I add multiple conv1d layers with various kernel sizes (3, 7, 15) before the head.</li> <li>Skip connections are very important.</li> </ul> <h3>Each Component</h3> <ul> <li><strong>UNet1D</strong><ul> <li>used skip connections (addition) instead of concatenation when merging encoder and decoder features</li></ul></li> <li><strong>WaveNet</strong><ul> <li>block = 2</li> <li>dilation = 4 ** N (N = 6, 4)</li></ul></li> <li><strong>Bi-GRU</strong><ul> <li>Transformer-like architecture ( (Bi-GRU → FeedForward) * num_block, with skip connections and layer normalization)</li></ul></li> <li><strong>ViT</strong><ul> <li>patch_size=4</li> <li>I used ViT for diversity, but attention-based models didn't work well in my case.</li></ul></li> </ul> <h1>Ensemble</h1> <ul> <li>Weighted average of the models' output probabilities</li> </ul> <h1>PostProcess</h1> <ul> <li>Moving Average (window=3)</li> <li>Find Peak (using scipy.signal.find_peaks, score_th=0.01, distance=100)</li> <li>Eliminate time periods when events are unlikely to occur (removed predictions outside the following durations)<ul> <li>onset : 20:00 ~ 12:00</li> <li>wakeup : 3:00 ~ 20:00</li></ul></li> </ul>
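<p>A minimal sketch of the post-processing above (moving average, find_peaks, time-of-day filtering); the thresholds follow the write-up, while the helper itself and the timestamp handling are assumptions.</p> <pre><code>import numpy as np
import pandas as pd
from scipy.signal import find_peaks

def postprocess(probs, timestamps, event):
    # probs: per-step probability for one head ("onset" or "wakeup")
    # timestamps: pandas Series of datetimes aligned with probs
    smoothed = pd.Series(probs).rolling(window=3, center=True, min_periods=1).mean().to_numpy()
    peaks, props = find_peaks(smoothed, height=0.01, distance=100)
    hours = timestamps.dt.hour.to_numpy()[peaks]
    if event == "onset":     # keep 20:00 ~ 12:00
        mask = (hours &gt;= 20) | (hours &lt; 12)
    else:                    # wakeup: keep 3:00 ~ 20:00
        mask = (hours &gt;= 3) &amp; (hours &lt; 20)
    return peaks[mask], props["peak_heights"][mask]
</code></pre>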
Child Mind Institute - Detect Sleep States
79th Place Solution. 🥈
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Dear Kagglers.</p> <p>Thank you to the competition organizers for hosting this interesting competition.<br> Thank you to everyone involved in this competition. I learned a lot from public notebooks and discussions.</p> <h1>My Final Result</h1> <ul> <li><p>Score<br> Public: 0.751 → Private 0.786</p></li> <li><p>LeaderBoard<br> Public: 52nd → Private 79th 🥈</p></li> </ul> <h1>Summary of Solution</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1452109%2F1836c58970cc7d4f7691e8efc8861355%2Fsolution.drawio.png?generation=1701895087837059&amp;alt=media" alt=""></p> <ol> <li><p>common settings</p> <ul> <li>criterion: focal loss</li></ul></li> <li><p>1DUnet<br> Based on these notebooks by <a href="https://www.kaggle.com/werus23" target="_blank">@werus23</a> and <a href="https://www.kaggle.com/danielphalen" target="_blank">@danielphalen</a>, I modified the loss criterion, dataset, features, architecture, etc.</p> <ul> <li><a href="https://www.kaggle.com/code/werus23/sleep-critical-point-train" target="_blank">https://www.kaggle.com/code/werus23/sleep-critical-point-train</a></li> <li><a href="https://www.kaggle.com/code/werus23/sleep-critical-point-infer" target="_blank">https://www.kaggle.com/code/werus23/sleep-critical-point-infer</a></li> <li><a href="https://www.kaggle.com/code/danielphalen/cmss-grunet-train" target="_blank">https://www.kaggle.com/code/danielphalen/cmss-grunet-train</a></li> <li><a href="https://www.kaggle.com/code/danielphalen/cmss-grunet-infer" target="_blank">https://www.kaggle.com/code/danielphalen/cmss-grunet-infer</a></li></ul> <p>architecture<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1452109%2F73f3fbff32b9c5b76a2c2f88377c7bed%2F1dunet.drawio.png?generation=1701893136038754&amp;alt=media" alt=""></p></li> <li><p>Spec2DCNN<br> Based on <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>'s public repository, I added a focal loss criterion.</p> <ul> <li><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a></li> <li><a href="https://www.kaggle.com/code/tubotubo/cmi-submit" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-submit</a></li></ul></li> </ol> <h1>Source (inference notebook)</h1> <p><a href="https://www.kaggle.com/code/hutch1221/cmi-ensemble" target="_blank">https://www.kaggle.com/code/hutch1221/cmi-ensemble</a></p> <h1>Finally</h1> <p>I gained significant insights into event detection techniques from the public notebooks and discussions. In particular, the public code shared by <a href="https://www.kaggle.com/werus23" target="_blank">@werus23</a>, <a href="https://www.kaggle.com/danielphalen" target="_blank">@danielphalen</a>, and <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a> was outstanding and contributed greatly to my learning.</p> <p>Advice is always welcome!<br> Thank you for your attention.</p>
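<p>Both parts of the solution swap in a focal loss as the criterion. The write-up doesn't state the exact formulation, so here is a minimal sketch of a standard binary focal loss (Lin et al., 2017) with assumed alpha/gamma values.</p> <pre><code>import torch
import torch.nn as nn

class BinaryFocalLoss(nn.Module):
    def __init__(self, alpha=0.25, gamma=2.0):
        super().__init__()
        self.alpha = alpha
        self.gamma = gamma
        self.bce = nn.BCEWithLogitsLoss(reduction="none")

    def forward(self, logits, targets):
        bce = self.bce(logits, targets)
        p_t = torch.exp(-bce)  # probability of the true class
        alpha_t = self.alpha * targets + (1 - self.alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** self.gamma * bce).mean()
</code></pre>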
Child Mind Institute - Detect Sleep States
7th Place Solution - Wavenet and Some Tricks
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, congrats to you too!</p> <p>We were using 4 folds locally and we noticed how sensitive the folds are, even though each fold was larger than the Public LB size. The Public LB score was also much lower than our worst fold, which indicates that the labels were probably very noisy. Simple improvements like averaging more models etc. were harming the Public LB from time to time. So we decided that the Public LB was very noisy and mostly ignored it.</p> <p>Why did you use only 1 fold? Were your models running slow?</p> <p>First of all, thanks to my teammate <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>. It was a fun ride. We noticed that the Public LB is not very trustworthy and focused on our CV. We had around 0.826 CV at the end.</p> <p><strong>Our main model is a wavenet with some modifications.</strong> It works on a sequence of minutely aggregated 3 days.</p> <pre><code># NOTE: most literal values (channel sizes, embedding size, constants) were
# lost when this post was converted to text; the numbers below are plausible
# placeholders that keep the structure readable, not the exact original ones.
class SleepModel(nn.Module):
    def __init__(self, inch, kernel_size):
        super(SleepModel, self).__init__()
        emb_size = 8
        self.minute_emb = nn.Embedding(1440, embedding_dim=emb_size)
        self.bn = nn.BatchNorm1d(inch - 1)
        self.wave_block1 = Wave_Block(inch - 1 + emb_size, 64, 8, kernel_size)
        self.wave_block2 = Wave_Block(64, 64, 6, kernel_size, base=4)
        self.wave_block3 = Wave_Block(64, 64, 4, kernel_size)
        self.wave_block4 = nn.Sequential(
            nn.Conv1d(192, 64, kernel_size=3, dilation=DAY_LEN // 2),
            nn.BatchNorm1d(64),
            nn.LeakyReLU(),
            nn.Conv1d(64, 64, kernel_size=3, dilation=DAY_LEN // 2),
            nn.BatchNorm1d(64),
            nn.LeakyReLU(),
        )
        self.top = nn.Conv1d(64, 3, kernel_size=1)   # head 1: categorical target
        self.top2 = nn.Conv1d(64, 2, kernel_size=1)  # head 2: 15-min window BCE
        self.gn1 = nn.GroupNorm(8, 64)
        self.gn2 = nn.GroupNorm(8, 64)
        self.avgpool = nn.AvgPool1d(kernel_size=DAY_LEN + 1, padding=DAY_LEN // 2, stride=1)
        self.maxpool = nn.MaxPool1d(kernel_size=DAY_LEN + 1, padding=DAY_LEN // 2, stride=1)

    def forward(self, x):
        # pad one day of context on each side (the original scale factor on the
        # wrapped-around segments was lost; 0 gives plain zero padding)
        x = torch.cat([x[:, -DAY_LEN:] * 0, x, x[:, :DAY_LEN] * 0], axis=1)
        x = x.permute(0, 2, 1)
        minute, x = x[:, -1, :], x[:, :-1, :]
        x[:, -1, :] += minute / 1440
        x = self.bn(x)
        minute_emb = self.minute_emb(torch.fmod(minute, 1440).long())
        x = torch.cat([x, minute_emb.permute(0, 2, 1)], axis=1)
        x = self.wave_block1(x)
        x = self.gn1(x)
        x = self.wave_block2(x)
        x = self.gn2(x)
        x = self.wave_block3(x)
        x = torch.cat([x, self.avgpool(x), self.maxpool(x)], axis=1)
        x = self.wave_block4(x)
        x, x2 = self.top(x), self.top2(x)
        return x, x2
</code></pre> <p>Its <strong>features</strong> are: <code> ["target", "idx", "anglez_mean", "anglez_std", "enmo_mean", "enmo_std", "same_anglez_prev_min", "same_anglez_next_min"] + volatility_cols + ["hour", "minute"] </code><br> Volatility columns are the median of the absolute anglez difference over 3 different time windows (5, 30, 480 steps).<br> The same-anglez prev/next features are the difference of anglez with the same minute of the previous/next day.</p> <p><strong>Training details:</strong></p> <ul> <li>The target is also set to 1 for the adjacent minute, because we predict with a +3 step offset to cover the next minute too.</li> <li>Ignoring near misses in the loss: the 2nd and 3rd minutes around the target are set to -1, 
so that we don't penalize positive predictions there.</li> <li>2 heads (1 for the actual target -&gt; categorical cross-entropy, 1 for 15-min windows -&gt; BCE)</li> <li>Online Hard Example Mining with 50%</li> <li>6 epochs with diminishing LR. Each epoch takes 18 seconds on an RTX 3090 (and submission was taking 30 minutes in total).</li> <li>2 iterations with different seeds for diversity. The 1st iteration excludes some bad series and the 2nd iteration has very little augmentation.</li> </ul> <p><strong>Prediction details:</strong></p> <ul> <li>1-day sliding predictions with a 3-day span. Only the middle day is used.</li> <li>Postprocessing: Starting from the max prediction, we add its location as a prediction; its score is its probability plus the 2nd-highest probability within a ±4 minute window. We set this window to zero, multiply the surrounding ±20 minute window by 0.5, and continue. Later, we add very-low-probability predictions over a ±18 step span around predictions with a high score (a sketch of this loop follows at the end of this post).</li> <li>Stacking with LightGBM: a LightGBM with a lot of median-absolute-anglez-difference features is used. NN probabilities and time-to/till-onset/wakeup features derived from these probabilities are added. AUC improved significantly, but the metric improved very little (+0.001).</li> <li>Ensembling: We had another model with LSTM+Transformer. We added it to get a 0.001 improvement.</li> </ul>
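<p>A minimal sketch of the iterative peak extraction above, assuming per-minute probabilities (so 1 index = 1 minute); the scoring rule is one reading of "its probability and the 2nd-highest probability within a ±4 minute window", and <code>n_events</code> is hypothetical.</p> <pre><code>import numpy as np

def extract_peaks(probs, n_events=50):
    p = probs.copy()
    out = []
    for _ in range(n_events):
        i = int(np.argmax(p))
        if p[i] &lt;= 0:
            break
        lo, hi = max(0, i - 4), min(len(p), i + 5)
        window = np.sort(p[lo:hi])[::-1]
        score = window[0] + (window[1] if len(window) &gt; 1 else 0.0)
        out.append((i, score))
        p[lo:hi] = 0.0                                 # zero the +-4 min window
        p[max(0, i - 20): min(len(p), i + 21)] *= 0.5  # damp the +-20 min window
    return out
</code></pre>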
Child Mind Institute - Detect Sleep States
858th Place Solution: MLP-BiGRU
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I'd like to thank the organizers and Kaggle team for this interesting competition. Second, huge congratulations to all the winners and everyone who participated in this competition. It was a great competition for learning more about time-series approaches. Finally, many thanks to <a href="https://www.kaggle.com/werus23" target="_blank">werus23</a>, <a href="https://www.kaggle.com/tolgadincer" target="_blank">Tolga</a>, <a href="https://www.kaggle.com/tubotubo" target="_blank">213tubo</a>, and <a href="https://www.kaggle.com/cdeotte" target="_blank">Chris Deotte</a> for sharing their valuable knowledge and discussions throughout the competition. It was super helpful for me in understanding and developing the final solution for this competition. </p> <p><strong>1. Data preparation:</strong></p> <ul> <li><strong>Data</strong>: I truncated the original data at the end based on the valid events. Then, I added some engineered features based on the public notebooks and code from <a href="https://www.kaggle.com/code/werus23/sleep-critical-point-prepare-data" target="_blank">[1]</a> <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/run/prepare_data.py" target="_blank">[2]</a>. The 18 final features I selected for my training/inference phase were: <br> <code>num_features = ['anglez', 'enmo', 'anglez_lag_diff', 'anglez_lead_diff','anglez_mean_60','anglez_max_60','anglez_std_60', 'enmo_lag_diff','enmo_lead_diff','enmo_mean_60', 'enmo_max_60','enmo_std_60', 'hour_sin','hour_cos','month_sin','month_cos','minute_sin','minute_cos' ]</code></li> <li><strong>Label</strong>: There were 2 approaches for preparing the labels in this problem: hard labelling vs soft labelling. Hard labels are created by pre-selecting an offset around the events and marking those steps as positive training labels. In contrast, soft labels are created using a Gaussian distribution around the onset/awake positions with different sigma and offset values <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/src/utils/common.py" target="_blank">[3]</a>. This approach demonstrated much better results compared to hard labelling in this competition. Since I didn't have a good understanding of the soft labelling approach, I focused on training with hard-labelled data. A visualization of these two approaches is as follows (with an offset of 180 on each side): <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2F8618580d7b03ac5fd064a28762da89f0%2Flabel_encoding.png?generation=1701984345645339&amp;alt=media" alt=""></li> </ul> <p><strong>2. Cross-validation:</strong><br> I used 269 series ids with group 10-fold. I initially used K-Fold, but after reading <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/447882#2491744" target="_blank">Chris's discussion</a>, I switched to group K-Fold and it boosted my LB score by ~0.05. However, I only selected folds 4 and 7 for training purposes. 
<br> Valid series ids:</p> <ul> <li><strong>fold_4:</strong> <code>['062dbd4c95e6', '0ef7d94fde99', '154fe824ed87', '280e08693c6d', '31011ade7c0a', '3aceb17ef7bd', '4743bdde25df', '51fdcc8d9fe7', '5f40907ec171', '653622ac8363', '703b5efa9bc1', '7822ee8fe3ec', '87a6cbb7c4ed', '8877a6586606', '89bd631d1769', '91127c2b0e60', '971207c6a525', '9b9cd7b7af8c', 'aed3850f65f0', 'ba8083a2c3b8', 'c38707ef76df', 'cca14d1966c1', 'd2d6b9af0553', 'db5e0ee1c0ab', 'e69aff66e0cb', 'eef041dd50aa', 'f981a0805fd0']</code></li> <li><strong>fold_7:</strong> <code>['04f547b8017d', '0dee4fda51c3', '137b99e936ab', '25e2b3dd9c3b', '2e9ced2c7976', '3664fe9233f9', '3c336d6ba566', '449766346eb1', '5c088d7e916c', '601559e1777d', '6d6b9d22d48a', '752900afe3a6', '83fa182bec3a', '8f6f15b9f598', '99b829cbad2d', 'a2b0a64ec9cf', 'a88088855de5', 'a9e5f5314bcb', 'b750c8c1556c', 'c107b5789660', 'd0f613c700f7', 'd93b0c7de16b', 'dff367373725', 'e0686434d029', 'e34b496b84ce', 'ece2561f07e9', 'f7eb179216c2']</code></li> </ul> <p><strong>3. Model architecture:</strong><br> The model architecture was adapted and modified based on this MLP-Residual BiGRU model <a href="https://www.kaggle.com/code/werus23/sleep-critical-point-train/notebook" target="_blank">[4]</a><a href="https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/416410" target="_blank">[5]</a>. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2F4326451a4c802ae15d99fb35f8e039de%2FChild_mind_model_architecture.png?generation=1701992082562879&amp;alt=media" alt=""><br> <strong>4. Training:</strong></p> <ul> <li><p>Optimizer: Adam</p> <ul> <li>LR=1.0e-3</li></ul></li> <li><p>Scheduler: CosineAnnealing</p> <ul> <li>T_max=10</li></ul></li> <li><p>Epochs=10, batch_size=3</p></li> <li><p>Loss function: <code>BCEWithLogitsLoss()</code></p></li> <li><p>Evaluation metrics: <code>Event Detection AP</code>, <code>precision_score(average='macro')</code><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2Ff326b99e48fb43ea867cad1343ff4705%2Floss.png?generation=1702001203166923&amp;alt=media" alt=""><br> <strong>5. Inference:</strong></p></li> <li><p>Used the same preprocessing steps as the training phase, except there was no truncation of the test series.</p></li> <li><p>Inferred with batch size = 1, and input_size=(17280,18)</p></li> <li><p>Filtered out insignificant predictions</p></li> <li><p>Added as many onset/awake positions as possible within the input time frame of 17280 (24 hours)</p></li> <li><p>Selected only the middle point of onset/awake transition periods<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2F2da3fe65d4c375da4d9282b66fa93c2c%2Fmodel_0.png?generation=1701998529729107&amp;alt=media" alt=""><br> <strong>6. 
Results:</strong></p> <table> <thead> <tr> <th>Fold</th> <th>Model</th> <th>Hidden_size</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>model_0</td> <td>128</td> <td>0.743</td> <td>0.681</td> <td>0.706</td> </tr> <tr> <td>4</td> <td>model_1</td> <td>224</td> <td>0.737</td> <td>0.675</td> <td>0.716</td> </tr> <tr> <td>7</td> <td>model_2</td> <td>128</td> <td>0.748</td> <td>0.680</td> <td>0.71</td> </tr> <tr> <td>7</td> <td>model_3</td> <td>224</td> <td>0.745</td> <td>0.677</td> <td>0.711</td> </tr> </tbody> </table></li> <li><p>Ensemble <strong>public LB</strong>: 0.25*(model_0+model_1+model_2+model_3) = <strong>0.687</strong></p></li> <li><p>Ensemble <strong>private LB</strong>: 0.25*(model_0+model_1+model_2+model_3) = <strong>0.716</strong></p></li> </ul> <p><strong>7. Conclusion:</strong><br> Things I tried that were unsuccessful 🫠:</p> <ul> <li>WaveNet</li> <li>UNet</li> <li>Spectrogram-UNet</li> <li>Transformer</li> <li>Soft-labelling</li> </ul> <p><strong>References:</strong><br> <a href="https://www.kaggle.com/code/werus23/sleep-critical-point-prepare-data" target="_blank"><em>[1] Sleep critical point prepare data-werus23</em></a> </p> <p><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/run/prepare_data.py" target="_blank"><em>[2] Prepare data-tubo213</em></a></p> <p><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/src/utils/common.py" target="_blank"><em>[3] Common utils functions-tubo213</em></a></p> <p><a href="https://www.kaggle.com/code/werus23/sleep-critical-point-train/notebook" target="_blank"><em>[4] Sleep critical point train-werus23</em></a></p> <p><a href="https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/416410" target="_blank"><em>[5] 4th Place Solution: a MultiLayer Bidirectional GRU with Residual Connections-zinxira</em></a></p> <p>Thanks for reading! 😀</p>
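<p>A minimal sketch contrasting the two labelling schemes discussed above; offset=180 matches the visualization in the write-up, while sigma and the helper name are assumptions.</p> <pre><code>import numpy as np

def make_labels(n_steps, event_steps, offset=180, sigma=60, soft=False):
    # returns a per-step target in [0, 1] for one event type
    y = np.zeros(n_steps, dtype=np.float32)
    steps = np.arange(n_steps)
    for e in event_steps:
        if soft:
            # soft labelling: Gaussian bump centered on the event
            y = np.maximum(y, np.exp(-0.5 * ((steps - e) / sigma) ** 2))
        else:
            # hard labelling: all steps within +-offset are positive
            y[max(0, e - offset): e + offset + 1] = 1.0
    return y
</code></pre>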
Child Mind Institute - Detect Sleep States
86th Place Solution for the Child Mind Institute - Detect Sleep States Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, big thanks to the organizers for preparing such an interesting competition. I really enjoyed working on it and learned a lot. I was quite surprised how well the NN models worked in this competition and how easily they surpassed the handcrafted features and decision tree models.</p> <p>Also big thanks to all the people who shared their knowledge in discussions and code. Especially to:</p> <ul> <li><a href="https://www.kaggle.com/tubotubo" target="_blank">213tubo</a> for <a href="https://www.kaggle.com/code/tubotubo/cmi-submit" target="_blank">CMI-submit</a></li> <li><a href="https://www.kaggle.com/danielphalen" target="_blank">Daniel Phalen</a> for <a href="https://www.kaggle.com/code/danielphalen/cmss-grunet-infer" target="_blank">CMSS-GRUNET</a></li> <li><a href="https://www.kaggle.com/werus23" target="_blank">werus23</a> for <a href="https://www.kaggle.com/code/werus23/sleep-critical-point-infer" target="_blank">Sleep - Critical point</a></li> <li><a href="https://www.kaggle.com/lccburk" target="_blank">Lucas Burke</a> for <a href="https://www.kaggle.com/code/lccburk/feature-engineering-and-random-forest-prediction" target="_blank">Feature Engineering and Random Forest Prediction</a></li> </ul> <p>Even though my best solution was based on code published by <a href="https://www.kaggle.com/tubotubo" target="_blank">213tubo</a> in the <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940" target="_blank">Detecting sleep using UNet(2D) post</a>, I decided to publish this write-up to show some simple ways to improve on it, and to show that it's possible to get good results using only limited resources. All training was done on Google Colab using a T4 GPU instance.</p> <h1>Context</h1> <p>Business Context - <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/overview" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/overview</a><br> Data Context - <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data</a></p> <h1>Overview of the Approach</h1> <p>My best scoring pipeline is based on code published by <a href="https://www.kaggle.com/tubotubo" target="_blank">213tubo</a> in the <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940" target="_blank">Detecting sleep using UNet(2D) post</a>.</p> <h2>Model</h2> <p>As described in the original post, the model consists of a feature extractor, an encoder and a decoder.</p> <ul> <li><strong>Feature extractor</strong> takes the raw input and transforms it into a wavegram with an image-like shape. 3 Conv1d layers with different kernel sizes were used as feature extractors.</li> <li><strong>Encoder</strong> takes the wavegram and creates a two-dimensional heatmap. 
Unet and FPN models with resnet34 as backbone from the <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">PyTorch Segmentation models library</a> were used as encoders.</li> <li><strong>Decoder</strong> creates probabilities of sleep, onset and wakeup from the heatmap generated by the encoder. 1d Unet and TransformerEncoder were used as decoders. Only onset and wakeup predictions are used during inference.</li> </ul> <p>Model input: 8-hour window (5760 samples) with 4 features: normalized anglez, normalized enmo, hour sin and hour cos</p> <p>Used around 20% of the data for validation and trained on the remaining 80%.</p> <h2>Additional improvements that boosted the score:</h2> <p>1) Train more models<br> 2) Ensemble using the average output of multiple models<br> 3) Use timeframes with overlap during inference, averaging out predictions from overlapping segments<br> 4) Tune threshold and peak gap - a lower threshold with a 5-minute gap between events produced the best results</p> <h1>Details of the submission</h1> <p>Model published in <a href="https://www.kaggle.com/code/tubotubo/cmi-submit" target="_blank">CMI-submit</a></p> <table> <thead> <tr> <th>Base model</th> <th>CV Score</th> <th>Public Score</th> <th>Private Score</th> </tr> </thead> <tbody> <tr> <td>Base model</td> <td>0.7305</td> <td>0.707</td> <td>0.728</td> </tr> </tbody> </table> <h2>Experiment with different models, architectures, settings</h2> <p>I trained around 30 models with different encoders, feature extractors, downsampling rates, losses, architectures and sizes. Most of the models were trained with downsample rate 4, which sped up the training and did not decrease performance. From those models I selected the best 6 for the final ensemble based on their score.</p> <table> <thead> <tr> <th>Feature extractor base filters</th> <th>Encoder architecture</th> <th>Decoder</th> <th>downsample rate</th> <th>CV Score</th> </tr> </thead> <tbody> <tr> <td>256</td> <td>Unet</td> <td>UNet1DDecoder</td> <td>4</td> <td>0.7545</td> </tr> <tr> <td>64</td> <td>Unet</td> <td>TransformerCNNDecoder</td> <td>4</td> <td>0.7526</td> </tr> <tr> <td>128</td> <td>Unet</td> <td>UNet1DDecoder</td> <td>4</td> <td>0.7525</td> </tr> <tr> <td>128</td> <td>Unet</td> <td>UNet1DDecoder</td> <td>2</td> <td>0.7519</td> </tr> <tr> <td>128</td> <td>Unet</td> <td>UNet1DDecoder</td> <td>3</td> <td>0.7510</td> </tr> <tr> <td>128</td> <td>FPN</td> <td>UNet1DDecoder</td> <td>4</td> <td>0.7486</td> </tr> </tbody> </table> <h2>Ensemble</h2> <p>The average output of the models was used as the prediction. I tested mean, max, min and a simple weighted average based on CV score, but the simple average gave the best results.</p> <table> <thead> <tr> <th>Ensemble</th> <th>Ensemble method</th> <th>CV Score</th> </tr> </thead> <tbody> <tr> <td>6 model ensemble</td> <td>mean</td> <td>0.7717</td> </tr> </tbody> </table> <h2>Timeframe with overlaps during inference</h2> <p>Inference was made on 8-hour windows with a 4-hour step; overlapping segments were averaged out. A bigger overlap gave better results, but unfortunately that run finished a few minutes after the competition ended. Taking the average of overlapping segments was better than taking the minimum prediction; using the maximum gave the worst results.</p> <table> <thead> <tr> <th>Overlap during inference</th> <th>CV Score</th> </tr> </thead> <tbody> <tr> <td>4 hours</td> <td>0.7818</td> </tr> </tbody> </table> <h2>Threshold and peak gap tuning</h2> <p>A lower threshold with a bigger gap between events produced a better score. 
Missing an event seems to be punished more than an extra prediction, but too many predicted events led to a submission scoring error. The best score I was able to get used a 0.001 threshold and a 60-sample distance (5 minutes) between predicted events.</p> <table> <thead> <tr> <th>Threshold</th> <th>Min event distance</th> <th>CV Score</th> <th>Public Score</th> <th>Private Score</th> </tr> </thead> <tbody> <tr> <td>0.001</td> <td>60</td> <td>0.7871</td> <td>0.740</td> <td>0.782</td> </tr> </tbody> </table> <h1>Sources</h1> <ul> <li><a href="https://www.kaggle.com/code/tubotubo/cmi-submit" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-submit</a></li> <li><a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940</a></li> <li><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/tree/main" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/tree/main</a></li> </ul> <h1>Ending note - trust your CV or not</h1> <p>I started with my own pipeline, using handcrafted features and decision tree models, and later moved to NNs. Even though the models were getting a really good CV score (over 0.8), the leaderboard score was always about 0.15 lower. I tried multiple CV strategies (random split, split based on length, number of nights without labels, etc.) and could not find a good one. I checked my code multiple times for leaks and was not able to find out what could be the cause. Discouraged by the low leaderboard score, I abandoned my pipeline and switched to 213tubo's amazing solution about a week before the competition ended. After the private leaderboard was revealed, I found out that my pipeline was working just as well and only the public score was low. Lesson learned: trust your CV, even though it's hard sometimes.</p>
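<p>A minimal sketch of the overlapping-window inference above: 8-hour windows (5760 steps at 5 s) with a 4-hour stride, averaging predictions where windows overlap; <code>predict_fn</code> and the output shape are assumptions.</p> <pre><code>import numpy as np

WINDOW = 5760  # 8 hours of 5 s steps
STRIDE = 2880  # 4 hours

def sliding_inference(series, predict_fn):
    n = len(series)
    acc = np.zeros((n, 3), dtype=np.float64)  # sleep / onset / wakeup probs
    cnt = np.zeros(n, dtype=np.float64)
    for start in range(0, max(1, n - WINDOW + 1), STRIDE):
        end = min(start + WINDOW, n)
        acc[start:end] += predict_fn(series[start:end])  # (end - start, 3)
        cnt[start:end] += 1
    cnt[cnt == 0] = 1
    return acc / cnt[:, None]  # average over overlapping windows
</code></pre>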
Child Mind Institute - Detect Sleep States
8th Place Solution for the Child Mind Institute - Detect Sleep States Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Chris, great job on 9th place too! I got that idea by visualizing the precision-recall curves per tolerance. I guess it sometimes helps to break the metric down into its parts to look for potential improvements 🤔.</p> <h1>8th Place Solution for the Child Mind Institute - Detect Sleep States Competition</h1> <p>I thank CMI and Kaggle for hosting this interesting competition, and the other competitors who worked hard to push up the LB, improving the quality of the event detection methods. I got my first medal 🦾 and I've learnt a lot from it.</p> <h2>Context</h2> <p>Study context: <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/overview" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/overview</a><br> Data: <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data</a></p> <h1>Overview of the approach</h1> <p>The main idea of my solution is to use as little pre/postprocessing as possible, and try to detect sleep/wake events in an end-to-end fashion. This is because I observed that different postprocessing methods influence the mAP inconsistently on different folds (increasing it on some and decreasing it on others), probably due to the inconsistencies in the labels.</p> <p>My pipeline contains two kinds of models: one to detect the accurate position of events (aka "Regressor"), and one to detect the probability density of the event happening inside a day (aka "DensityNet").</p> <h1>Details</h1> <h2>Regressor</h2> <p>This is a simple 1D <a href="https://arxiv.org/pdf/1505.04597.pdf" target="_blank">unet</a> which <strong>only</strong> uses <em>local information</em> and <em>anglez</em> to detect where the event occurs. This is motivated by <a href="https://arxiv.org/pdf/1506.01497.pdf" target="_blank">Faster-RCNN</a> and subsequent bounding box RPN regression methods like <a href="https://arxiv.org/pdf/1506.02640.pdf" target="_blank">YOLO</a>. As we are working with 1D data and the events are well-separated, it is enough to predict two values, (onset, wakeup), per step.</p> <h3>Training</h3> <p>A fixed hyperparameter "width" is chosen for training. The data loader will shuffle and load the onset and wakeup events of the training series_id(s), and the time series interval between $ [\text{event} - \text{width}, \text{event} + \text{width}] $. 
The target to optimize the model against is the following:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6066624%2Fcf5c6f25986330c81b40ac34c076ca23%2Fboxes.png?generation=1702111977884619&amp;alt=media" alt=""></p> <ul> <li>Each box corresponds to a time step (5s)</li> <li>The coloured box is the ground truth location of the event</li> <li>i.e. the model predicts the relative position of the current step vs the location of the event</li> </ul> <p>Since the data is noisy, I found out that Huber loss works the best, just like the smooth L1 regression loss in <a href="https://arxiv.org/pdf/1504.08083.pdf" target="_blank">Fast-RCNN</a>, as it is less sensitive to outliers.</p> <h3>Inference</h3> <p>For inference, the Regressor network is run on the whole time series to predict relative (onset, wakeup) values per step, which gives locations of interest. Gaussian kernels with std=12, centered at the locations of interest, are then accumulated together. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6066624%2Fe0ecf143f5e15170f341a58ad81b474c%2Foutput.gif?generation=1702113013116745&amp;alt=media" alt=""></p> <ul> <li>First row: Relative location predictions by the model</li> <li>Second row: Time step</li> <li>Blue color: Current iteration of accumulation</li> <li>Green color: Location of interest at the current iteration</li> <li>Graph: Accumulated score </li> </ul> <p>There will be two accumulated scores, one for onset and one for wakeup. The peaks of the scores give the possible locations for the onset and wakeup events. I used the simplest peak detection method:<br> <code> locations = np.argwhere((score[1:-1] &gt; score[:-2]) &amp; (score[1:-1] &gt; score[2:])).flatten() + 1 </code></p> <p>An extra NMS postprocessing step ensures that predicted locations are at least 6 mins from each other.</p> <h3>Model architecture</h3> <p>The Regressor is a simple 1D Unet, with 1-channel input and 2-channel output, with a <a href="https://arxiv.org/pdf/1512.03385.pdf" target="_blank">1D ResNet</a> backbone. The hidden channels are 2, 2, 4, 8, 16, 32, 32, with 2 ResNet blocks between each pooling operation. I did not include <a href="https://arxiv.org/pdf/1709.01507.pdf" target="_blank">SE</a> modules and used BatchNorm1D instead of InstanceNorm/GroupNorm/LayerNorm to make the network insensitive to global changes.</p> <h3>Ensembling</h3> <p>I trained 3 models on the whole training dataset, with width 120, 180, 240 (10, 15, 20 mins resp.). The scores of the 3 models are subsequently averaged before obtaining the locations.</p> <h3>Local location prediction quality</h3> <p>To validate the performance of this model, I computed the argmax of the scores of the steps within 120 steps (10 mins) of each event. The CDF (5-fold out of fold) looks as follows:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6066624%2F9fba37237efdbc6768746993ebeda0bb%2Fpercentile.png?generation=1702187284014701&amp;alt=media" alt=""></p> <ul> <li>x-axis: percentile</li> <li>y-axis: distance in steps</li> <li>the model trained with Huber loss performs best</li> </ul> <p>Around 85% of the argmax predictions lie within 3 mins of the actual event. 
As there may be multiple peaks in the 240-step interval around the event, we expect the min distance to have less error:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6066624%2F03bd38590d9224abe6e5360199899b1f%2Fpercentile2.png?generation=1702187925749506&amp;alt=media" alt=""></p> <ul> <li>same x, y-axis</li> <li>min distance between events and all predicted peaks</li> <li>model trained with Huber loss</li> <li>["Huber", "Gaussian", "Laplace"] is the shape of the kernel used to reconstruct the score</li> </ul> <h2>DensityNet</h2> <p>Another network (aka "DensityNet") is needed to assign scores to each event. This network has to determine which bout of wake and sleep is the most likely (pick the longest one within a day), and honor the 30-min length and interruption rules as stipulated by <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/data</a>. Therefore a longer context is necessary. </p> <p>To this end, I added transformer encoder modules at the deepest feature level of the 1D Unet to model global information. Since there is at most one event per day, the DensityNet will predict the probability density of the onset/wakeup event in the window of time. I used symmetric <a href="https://arxiv.org/pdf/2108.12409.pdf" target="_blank">ALiBi</a> encodings to make the transformer encoder blocks translation equivariant.</p> <h3>Training</h3> <p>I found it useful to train the model on a larger interval than that for inference to include more global context. However, my model does not predict onset/wakeup probabilities per step. Motivated by signal processing, the unknown onset/wakeup signal is a random variable within a fixed interval of interest. As such, the DensityNet is fitted against the ground truth onset+wakeup locations in a 2-day interval. </p> <p>This is simply the cross entropy loss with 12 * 60 * 24 * 2 = 34560 classes. Since the labels are noisy (and are clipped to the nearest min), the target probability distribution is smoothed with the Laplace distribution. Pseudocode:</p> <pre><code># Reconstruction: the assignment targets and literals were lost in extraction;
# 34560 matches the 2-day interval / number of classes mentioned above.
target_distribution = get_distribution(interval_min, interval_min + 34560, series_onset_lbls, series_wakeup_lbls)
pred_logits = model(time_series)
loss = cross_entropy_loss(pred_logits.permute(0, 2, 1), target_distribution)
</code></pre> <p>Since the data is imputed with fake intervals when the watch is taken off, the DensityNet also predicts two probabilities - whether there is any onset/wakeup event, respectively, in the whole interval.</p> <ul> <li>Trained 3 kinds of models<ul> <li>With anglez input only</li> <li>With enmo input only</li> <li>With anglez + time input (time randomly shifted with uniform distribution [-30min, 30min] to avoid overfitting)</li></ul></li> <li>Random flipping</li> <li>Random elastic deformation</li> </ul> <h3>Inference</h3> <p>The model now does inference only on the center 1-day subinterval of the 2-day interval. To assign scores to the events predicted by the Regressor, we use the conditional probability</p> <p>$$p(t|\text{actual event in Regressor events}) = \frac{p(t)}{\sum_{t' \in \text{Regressor events}} p(t')}$$</p> <p>where p(t) is the probability density predicted by the DensityNet. 
To account for the fake intervals, the final scores are</p> <p>$$\text{score} = q \cdot p(t|\text{actual event in Regressor events})$$</p> <p>where q is the predicted probability that the interval contains some event (onset, wakeup respectively).</p> <p>The scores for the events in the entire series are computed by shifting the prediction window over the whole series and averaging where prediction windows overlap.</p> <p>The conditional probability can easily be computed by restricting the softmax to the logits at the locations suggested by the Regressor (see the sketch at the end of this post).</p> <h2>Postprocessing</h2> <h3>Shifting predictions</h3> <p>Similar to other teams, I shifted the events to xx:xx:15, xx:xx:45 to increase the mAP. Note that xx:xx:30 is undesirable too, as there is a 7.5 min window in the mAP score.</p> <h3>Augmentations</h3> <p>I am surprised not many top solutions used this trick to increase the mAP. It stands to reason that events cannot be accurately labelled to 3 min / 1 min precision; a slight move of the mouse will perturb the label by 1 min. Below are the local 5-fold out-of-fold mAP scores (computed with bad series and portions with missing labels removed):<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6066624%2Fc9953f4b90940237b76fb9c26aa61fa5%2Fmap.png?generation=1702188844055651&amp;alt=media" alt=""><br> (no augmentation)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6066624%2Fd73e12be795498ae06ae23764f5c487b%2Fmap_augmented.png?generation=1702188937356696&amp;alt=media" alt=""><br> (with augmentation, ~+0.002 mAP)</p> <ul> <li>Similar performance for tolerance &gt;3 min</li> <li>Gains for tolerance 1, 3 mins</li> </ul> <h3>Use matrix profile to remove fake data</h3> <p>Matrix profile can detect exact repetitions. I added that as an extra postprocessing step to remove predictions located in fake data.</p> <h1>Possible things to improve</h1> <h2>Use result from matrix profile as input</h2> <p>Some top solutions added a binary feature (1, 0) to indicate whether the step lies inside the fake data, or trained only on the intervals with clean data. This should make the model more performant as compared to my current approach of letting the model predict if the events are present in the window.</p> <h2>Make use of per-15 min labelling error distribution</h2> <p>Many top solutions made use of the pattern (uneven distribution of events per 15 min). This suggests that including the pattern either as input to the model or as a postprocessing step could improve the performance.</p> <p>Congrats on solo gold too! I hope this will be a great start for us on Kaggle.</p>
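<p>A minimal sketch of the conditional-probability scoring above: a softmax restricted to the DensityNet logits at the Regressor's candidate locations, scaled by q, the predicted probability that the window contains any event at all (the function name and shapes are assumptions).</p> <pre><code>import numpy as np

def score_events(density_logits, q, candidate_steps):
    # density_logits: per-step logits for one event type over the window
    logits = density_logits[candidate_steps]
    logits = logits - logits.max()  # numerical stability
    p = np.exp(logits)
    p = p / p.sum()                 # p(t | actual event in Regressor events)
    return q * p                    # final per-candidate scores
</code></pre>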
Child Mind Institute - Detect Sleep States
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers of the contest for hosting such an interesting competition! We share our 9th place solution.<br> team member: <a href="https://www.kaggle.com/chris62" target="_blank">@chris62</a>, <a href="https://www.kaggle.com/shu421" target="_blank">@shu421</a>, <a href="https://www.kaggle.com/ymatioun" target="_blank">@ymatioun</a>, <a href="https://www.kaggle.com/tomo20180402" target="_blank">@tomo20180402</a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2126876%2F38e25c55ee67973fa0b39146030e5adb%2F2023-12-09%209.18.17.png?generation=1702081125918958&amp;alt=media" alt=""></p> <ul> <li>Result<ul> <li>CV 0.832, Public LB 0.783, Private LB 0.829 (CV best submission)</li> <li>split=StratifiedGroupKFold(k=5, y=sleep, group=series_id)</li></ul></li> </ul> <p>The details are provided in the articles for each specific part.</p> <ul> <li><a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/460276" target="_blank">model1: tomo's part</a></li> <li><a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/459678" target="_blank">model2: shu's part</a></li> <li><a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/459613" target="_blank">model3: chris' part</a></li> <li><a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/459742" target="_blank">post-processing: youri's part</a></li> </ul> <p>Finally, I would like to express my gratitude for having the opportunity to work on this competition with such wonderful team members. Thank you!</p>
Child Mind Institute - Detect Sleep States
Detecting sleep using UNet(2D)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/yukio0201" target="_blank">@yukio0201</a> </p> <p>Even if the "duration" is small, it should still be possible to run.<br> The following command runs inference on the train data.<br> You can use it for debugging.</p> <pre><code># the original command was garbled in extraction; based on the repo's other
# commands it should be of this form:
rye run python run/inference.py phase=train
</code></pre> <p><a href="https://www.kaggle.com/riow1983" target="_blank">@riow1983</a> <br> Thank you for your response.</p> <p>It may not fall under private sharing, but I think many people don't even look through the issues and PRs.<br> I asked you to post the content in the thread as well, as I think it is unfair for people to be disadvantaged just because they did not look at the issues and PRs.</p> <p><a href="https://www.kaggle.com/jonnykimmm" target="_blank">@jonnykimmm</a> </p> <p>Thanks for the compliment on my code. I am very happy.</p> <p>First of all, the answer to your question is to create a prototype and then a GitHub repo.</p> <p>AI/ML projects (especially Kaggle) change a lot. Therefore, I believe you need to prototype first to get a deep understanding of the problem.</p> <p>I use the following process to mature the code.<br> I have not studied software engineering, so please read this as an opinion.</p> <ol> <li>create a prototype</li> <li>add options on an ad-hoc basis (e.g. model_type, task_type, dataset_type, …)</li> <li>abstract the parts that can be common among the options.<br> In Kaggle, too much abstraction can limit the range of strategies. Therefore, it is important to strike a balance between abstraction and concreteness. 
This balance is very difficult, but I believe it is also an interesting point.</li> <li>back to 2.</li> </ol> <p>This repo was created as follows.</p> <p>1st iteration</p> <ol> <li>create a 1D model as a prototype</li> <li>add a 2D model as an option</li> <li>abstract the feature_extractor, encoder and decoder common to the 1D and 2D models<br> This abstraction is based on <a href="https://huggingface.co/docs/transformers/main/en/index" target="_blank">transformers</a>.</li> </ol> <p>2nd iteration</p> <ol> <li>add CenterNet and DETR as options</li> <li>abstract CenterNet, DETR, Spec1D and Spec2D with <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/src/models/base.py" target="_blank">BaseModel</a> so that input_shape=(n_timestamp, n_channels), output_shape=(n_timestamp, 2), output: event occurrence probability per step<br> This abstraction makes it easy to ensemble models that handle different tasks (segmentation, detection).<br> (This may be too abstract…?)</li> </ol> <p>Thanks for reading to the end.<br> I hope you find it useful.</p> <p>If you want to make a contribution to this repository, please write a PR or issue summary and link to it in this thread as well to avoid private sharing.</p> <hr> <p>template</p> <h1>[PR or Issue] title</h1> <ul> <li>summary</li> <li>link</li> </ul> <p>I've implemented a few additional models, so if you're interested, give them a try!</p> <h1>Implemented models</h1> <p>The model is built with two components: feature_extractor and decoder.</p> <p>The feature_extractor and decoder that can be specified are as follows.</p> <h3>Feature Extractor</h3> <ul> <li><a href="https://github.com/analokmaus/kaggle-g2net-public/tree/main/models1d_pytorch" target="_blank">CNNSpectrogram</a></li> <li>LSTMFeatureExtractor</li> <li><a href="https://arxiv.org/abs/1912.10211" target="_blank">PANNsFeatureExtractor</a></li> <li>SpecFeatureExtractor</li> </ul> <h3>Decoder</h3> <ul> <li>MLPDecoder</li> <li>LSTMDecoder</li> <li>TransformerDecoder</li> <li>TransformerCNNDecoder</li> <li><a href="https://github.com/bamps53/kaggle-dfl-3rd-place-solution/blob/master/models/cnn_3d.py" target="_blank">UNet1DDecoder</a></li> </ul> <h3>Model</h3> <ul> <li>Spec2DCNN: Segmentation through UNet.</li> <li>Spec1D: Segmentation without UNet.</li> <li>DETR2DCNN: Uses UNet to detect sleep as in <a href="https://arxiv.org/abs/2005.12872" target="_blank">DETR</a>. This model is still under development.</li> <li>CenterNet: Detects onset and offset, respectively, like <a href="https://arxiv.org/abs/1904.07850" target="_blank">CenterNet</a>, using UNet.</li> <li>TransformerAutoModel: <ul> <li>Segmentation using huggingface's <a href="https://huggingface.co/transformers/v3.0.2/model_doc/auto.html" target="_blank">AutoModel</a>. 
It does not use a feature_extractor or decoder.</li> <li>Since the Internet is not available during inference, it is necessary to create a config dataset and specify the path in model_name.</li></ul></li> </ul> <p>The correspondence table between each model and dataset is as follows.</p> <table> <thead> <tr> <th>model</th> <th>dataset</th> </tr> </thead> <tbody> <tr> <td>Spec1D</td> <td>seg</td> </tr> <tr> <td>Spec2DCNN</td> <td>seg</td> </tr> <tr> <td>DETR2DCNN</td> <td>detr</td> </tr> <tr> <td>CenterNet</td> <td>centernet</td> </tr> <tr> <td>TransformerAutoModel</td> <td>seg</td> </tr> </tbody> </table> <p>The command to train CenterNet with feature_extractor=CNNSpectrogram, decoder=UNet1DDecoder is as follows.</p> <pre><code>rye run python run/train.py model=CenterNet dataset=centernet feature_extractor=CNNSpectrogram decoder=UNet1DDecoder
</code></pre> <blockquote> <p>Just so I'm sure, this is treating the problem as a multi-label problem where for each timestep we predict 0 or more labels?</p> </blockquote> <p>Yes. There is a possibility of predicting multiple labels in one step, although the number may be small.</p> <blockquote> <p>wouldn't this loss function bias towards the background class?</p> </blockquote> <p>Since AP depends only on the order of the scores, it does not matter if the background class scores higher.</p> <ol> <li>image recognition models (2D) have been studied extensively</li> <li>converting to a spectrogram captures frequency information</li> </ol> <p>lr is 0.0005 at step=0.</p> <p>I use pdb for debugging.</p> <p>Because of the learning rate scheduler, lr varies with each step.</p> <p>Thanks for the good refactoring!</p> <blockquote> <p>how many seconds does this correspond to?</p> </blockquote> <p>10 * 5 = 50 [sec]</p> <blockquote> <p>I am interested in Gaussian label width (sigma). 
How wide do you set the value?</p> </blockquote> <p>Good point.<br> I am training the model using a sigma that I set up at the beginning.</p> <p>However, this sigma is very important.<br> By setting the sigma and duration properly, the accuracy (local CV) can be improved by about 0.01-0.02.</p> <p>The following command can be used for grid search, so try it.<br> <code>rye run python run/train.py -m duration=100,1000,10000 sigma=5,10,15,20</code></p> <p>I experimented in a local environment, but this code works in a kaggle environment as well.<br> Try running it with dir=kaggle.</p> <p>Saving the best weights based on the evaluation metric of the competition can be done with the following command.</p> <p><code>rye run python run/train.py monitor=val_score monitor_mode=max</code></p> <p>I share my approach in this discussion.</p> <ul> <li>The code is also available on github<br> <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/tree/main" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/tree/main</a></li> <li>submission notebook<br> <a href="https://www.kaggle.com/code/tubotubo/cmi-submit" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-submit</a></li> </ul> <h1>Model Structure</h1> <p>My model consists of three components: feature extractor, encoder, and decoder.</p> <p>The feature extractor takes the raw waveform (anglez, enmo, hour) and creates a wavegram (like an image).<br> The encoder (UNet) takes the wavegram and creates a two-dimensional heatmap.<br> The decoder takes the heatmap and outputs the probability (sleep, onset, wakeup) of an event occurring at each time step.<br> Finally, post-processing is performed to create a submission file.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6958652%2Ffa7f5ecfa0f290e1f44e3d2e7dbc4f06%2F(1).jpg?generation=1699079676016847&amp;alt=media" alt=""></p> <p>The following are already implemented for each component.</p> <ul> <li>feature extractor<ul> <li>CNN</li> <li>LSTM</li> <li>Spectrogram</li> <li><a href="https://arxiv.org/abs/1912.10211" target="_blank">PANNs</a></li></ul></li> <li>encoder<ul> <li>UNet</li></ul></li> <li>decoder<ul> <li>UNet1D</li> <li>LSTM</li> <li>MLP</li> <li>Transformer</li></ul></li> </ul> <h1>Result</h1> <p>LB: 0.714<br> feature extractor=CNN, encoder=UNet, decoder=UNet1D, downsample_rate=2, seq_len=5760</p> <p>Because the code has changed since the last submission, the experiment may not be accurately reproduced with the above settings.</p> <h1>Tips</h1> <ol> <li>The evaluation function can be sped up using a binary search. When the number of detections is large, the evaluation metric can be calculated tens to hundreds of times faster than with the original implementation.<br> <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/8bbc09aff9c34f12259d536fd496deaade8ec087/src/utils/metrics.py#L179" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/8bbc09aff9c34f12259d536fd496deaade8ec087/src/utils/metrics.py#L179</a></li> <li>Outputting a file other than submission.csv and submitting it will result in an error. 
In the inference notebook, it is better to leave only the final submission.csv.</li> </ol> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6958652%2Fca7e96b19b87610f029c9b42be3b1bea%2FScreenshot%20from%202023-11-04%2016-39-08.png?generation=1699083570125882&amp;alt=media" alt=""></p> <h1>How to contribute</h1> <p>If you want to make a contribution to this repository, please write a PR or issue summary and link to it in <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940#2530545" target="_blank">this thread</a> as well to avoid private sharing.</p> <p>I do not believe that publicizing this approach will spoil the efforts of the other participants.<br> There are two reasons.<br> The first is that my approach does not encompass all other approaches. I believe that a better model can be created by combining my approach with yours.<br> The second is that the code I have published has room for many improvements, and I believe there is ample time to make them.<br> For these two reasons, I thought it was worth releasing the code and model to the public.</p> <p>Temporarily, the LB may be full of people who just forked it, but they will be weeded out by those who are serious about experimenting.</p> <p>My impression from the experiments I have done is that the 2D approach is not that important.<br> It would be more accurate to capture more detailed information in the time direction rather than in frequency.<br> This means that the 1D approach may be better than the 2D approach.</p> <p>Assume that the training code is allowed to be published.</p> <p>If the training code is not publicly available, I do not think it is permissible to publish the model weights. If you don't publish the training code, you are not providing any value to the platform.<br> However, if the training code is made public, the weights of the model may or may not be made public.<br> As long as the training code is publicly available, I think you are providing value to the platform.<br> I also think there is little difference between just forking the training code and submitting, and using the published weights.</p> <p>I think this discussion is a difference of ideas, so I will not discuss it further.</p> <blockquote> <p>Do you use only one method at a time in your pipeline?</p> </blockquote> <p>The pipeline uses one method at a time.</p> <ol> <li>feature extractor: CNN</li> <li>encoder: UNet</li> <li>decoder: UNet1D</li> </ol> <blockquote> <p>Also, any logic or thoughts why Unet encoder was used here? Not the ResNet, for example?</p> </blockquote> <p>This task is very similar to sound event detection (SED). <br> For this reason, I tried the 2D approach often used in SED.<br> <a href="https://arxiv.org/abs/2107.05463" target="_blank">Sound Event Detection: A Tutorial</a></p> <blockquote> <p>Much fewer people actually go out of their way and train the model and then submit it. </p> </blockquote> <p>I am skeptical on this point.<br> It is difficult to maintain a high ranking by simply using the published weights. For a time, more people may use the published weights, but I believe that gradually more people will use weights from models improved with the training code.</p> <blockquote> <p>Also, I wonder why your last submission is one month ago? Couldn't you have published the code back then?</p> </blockquote> <p>The reason is that a month ago my goal was to win this competition. 
However, I got busy and decided to withdraw from this competition. Therefore, I decided to publish the code I wrote a month ago.</p> <p>It is better to evaluate with val_score.</p> <p>Sorry, the Discussion is wrong.</p> <p>My model uses 4 features: anglez, enmo, hour_sin, hour_cos</p> <p>The version of the code may be different.<br> Try git pull to use the latest code.</p> <p><a href="https://www.kaggle.com/atamazian" target="_blank">@atamazian</a> </p> <p>This architecture is called DETR.<br> I have implemented it for study purposes, but have not yet achieved good accuracy.</p> <p><a href="https://arxiv.org/abs/2005.12872" target="_blank">https://arxiv.org/abs/2005.12872</a></p> <p><a href="https://www.kaggle.com/lhchina" target="_blank">@lhchina</a> </p> <p>Please let me know what commands you have executed.</p> <p><a href="https://www.kaggle.com/lhchina" target="_blank">@lhchina</a> </p> <p>The training is going well in my environment.<br> Please try git pull and then run the command again.</p> <p><a href="https://www.kaggle.com/kaggledummie007" target="_blank">@kaggledummie007</a> </p> <p>The default parameters are written under run/conf/.<br> sigma is set in run/conf/dataset/seg.yaml.<br> The config is specified in src/conf.py.</p> <p>In my environment, training has completed successfully.<br> Please check your environment.</p> <blockquote> <p>Do you mind me asking what specifications your computer has?</p> </blockquote> <p>The number of threads on my computer is 24.</p> <blockquote> <p>I would expect with that many workers your RAM would fill up quickly? Or is there a trick I'm not aware of to avoid that problem?</p> </blockquote> <p>num_workers is the degree of parallelism and has nothing to do with RAM.<br> The RAM used depends on batch_size.</p> <p><a href="https://www.kaggle.com/lhchina" target="_blank">@lhchina</a> </p> <p>The find_peaks function of scipy is used to detect events.<br> See the following pages for details:<br> <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html" target="_blank">https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html</a><br> <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/src/utils/post_process.py" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/src/utils/post_process.py</a></p>
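<p>A minimal sketch of the find_peaks-based post-processing described above; the threshold and distance values are illustrative, not the repository's defaults:</p> <pre><code>import numpy as np
from scipy.signal import find_peaks

def extract_events(probs, height=0.1, distance=100):
    """probs: (n_steps, 2) array of per-step onset/wakeup probabilities."""
    events = []
    for col, name in enumerate(["onset", "wakeup"]):
        # peaks above `height`, at least `distance` steps apart
        steps, props = find_peaks(probs[:, col], height=height, distance=distance)
        for step, score in zip(steps, props["peak_heights"]):
            events.append({"event": name, "step": int(step), "score": float(score)})
    return events
</code></pre>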
Child Mind Institute - Detect Sleep States
🥉 146th place – 9 models 1h inference - Segmentation through UNet
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Short description of the solution below:</p> <p>As I only spent 1-3 days on modifying, training and tuning, the solution stayed at 9 models with 1h inference, but it could easily have been modified further, with more models, for a better score.</p> <p>For the problem I used the base code from</p> <ul> <li><a href="https://www.kaggle.com/code/tubotubo/cmi-submit" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-submit</a></li> <li><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a></li> <li><a href="https://www.kaggle.com/code/tubotubo/cmi-code" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-code</a></li> </ul> <p>The original code had resnet34 as the default, and that was also the public pretrained model. Based on that, I trained several different models with Spec2DCNN (segmentation through UNet): resnet18, efficientnet-b0 and resnext101_32x4d.</p> <p>Then I created the validation notebook below to get the validation score for some of the public pretrained models and also my own trained models.</p> <ul> <li><a href="https://www.kaggle.com/kirderf/eval-child-mind-institute-detect-sleep-states" target="_blank">https://www.kaggle.com/kirderf/eval-child-mind-institute-detect-sleep-states</a></li> </ul> <p>The final inference solution was a torch-stacked mean ensemble of 9 models using 1h of test-time inference with the same duration setting; code below.</p> <ul> <li><a href="https://www.kaggle.com/kirderf/cmi-submit-9-models-1h-inference" target="_blank">https://www.kaggle.com/kirderf/cmi-submit-9-models-1h-inference</a>.</li> </ul> <p>With more time, the following would have been tested: different architectures, more models, augmentation, using different durations…</p> <p>That’s it!<br> Happy Kaggling!</p>
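<p>A rough sketch of the "torch stacked mean" ensembling mentioned above; the assumption that each model returns per-step logits of the same shape is an illustration, not taken from the notebook:</p> <pre><code>import torch

@torch.no_grad()
def ensemble_predict(models, batch):
    # each model is assumed to return logits of shape (batch, n_steps, n_classes)
    preds = [torch.sigmoid(model(batch)) for model in models]
    # stack to (n_models, batch, n_steps, n_classes), then average over models
    return torch.stack(preds, dim=0).mean(dim=0)
</code></pre>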
Click-Through Rate Prediction
4 Idiots' Solution & LIBFFM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Click-Through Rate Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Okay I see!</p> <p>1. "v1" and "v2" respectively are x_j1 and x_j2 on page 6 of <a href="http://www.csie.ntu.edu.tw/~r01922136/slides/ffm.pdf">this slide</a>.</p> <p>2. "r" is the normalization constant: when auto-normalization is not enabled, it is always 1; if it is enabled, we automatically normalize each instance. For example, if we have x = "1:3:1 2:7:2 3:9:1.5" and normalization is enabled, then r = 1/sqrt(1*1+2*2+1.5*1.5).</p> <p>3. For the constant "2.0f": sorry, I don't have a good explanation now. Actually FFM still works if this constant is removed, but empirically I find adding this constant makes the convergence faster. One possible reason is that FFM is used to model interactions between features. If we consider polynomial models, for instance, (x1+x2+x3)^2 = x1^2+x2^2+x3^2+2*x1*x2+2*x1*x3+2*x2*x3. The interaction terms have "2" here, so I also have this "2" in FFM.</p> <p>Please feel free to let me know if the explanations above are not clear.</p> <p>[quote=Dmitry Efimov;64943]</p> <p>Great! Thanks!</p> <p>One more question, could you explain a little bit more about the parameter "v" you are using for prediction? If I understood correctly, you need this parameter only to adjust weights (to avoid them being very small). Is that correct? Or do you need this parameter for something else?</p> <p>[/quote]</p> <p>Sorry, I don't quite understand. Do you mean "ffm_float v" in "struct ffm_node"?</p> <p>[quote=Inspector;64869]</p> <p>Awesome - thank you for sharing!</p> <p>One basic question up front: Is a field-aware FM the same as an FM (e.g. LIBFM) trained on a data set that has a distinct name space for every predictor (using 'name space' to mean the role played by this concept in Vowpal Wabbit)? So for example if there were two predictors in a data set:</p> <p>A which takes values 'RED' and 'BLUE'</p> <p>B which also takes values 'RED' and 'BLUE'</p> <p>Instead of having data that hashed RED and BLUE to the same value for both A and B, the data would be A_RED or B_RED, thus properly treating them as two separate things? Or is there more to it? :)</p> <p>[/quote]</p> <p>Sorry, I still don't quite understand what "name space" is. In your example, my understanding is: "A" and "B" are predictors, and "RED" and "BLUE" are values. Assuming my understanding is correct, in your first question you have the term "name space for every predictor." So in your example, what are the name spaces for "A" and "B," respectively? Thanks! :)</p> <p>[quote=rcarson;64872]</p> <p>[quote=Triskelion;64871]</p> <p>From Rendle's <a href="http://www.ismll.uni-hildesheim.de/pub/pdfs/Rendle2010FM.pdf">paper</a>: FMs work well on datasets with huge sparsity, like this clicklog dataset, recommender datasets, and also text (bag-of-words).</p> <p>Thanks to 3 Idiots for releasing LIBFFM. Will play around with it and report back.</p> <p>[/quote]</p> <p>Thank you very much! That's an excellent paper :D</p> <p>[/quote]</p> <p>Thanks! 
Here are my opinions on your questions:</p> <p>1. As Triskelion has mentioned, I agree that section IV in <a href="http://www.ismll.uni-hildesheim.de/pub/pdfs/Rendle2010FM.pdf">Rendle's paper</a> is a good explanation of why FM-based models can be better than SVM-based models on sparse datasets.</p> <p>2. For this question, I have no idea. So far I have only used FFM on CTR datasets. I am also very interested to see if it works on other types of data. That's why we released LIBFFM. :D</p> <p>[quote=Dmitry Efimov;64936]</p> <p>[quote=guestwalk;64863]</p> <p>In this competition and <a href="https://www.kaggle.com/c/criteo-display-ad-challenge">Criteo's CTR competition</a>, we find field-aware factorization machine (FFM) is a powerful model. We hope this model can be more widely used, so we also release an implementation <a href="http://www.csie.ntu.edu.tw/~r01922136/libffm/">LIBFFM</a>. We are very interested to see if it is effective in other kinds of datasets.</p> <p>[/quote]</p> <p>Hi, guestwalk,</p> <p>Thank you very much for sharing, and congratulations! I would like to ask a question about the implementation: in stochastic gradient descent (file ffm.cpp, function wTx) you used the update w = w - lr / sqrt(1 + sum of squares of previous gradients) * gradient. I am wondering why you used this factor 1/sqrt(1 + sum of squares of previous gradients)? Maybe you could give some links for the description of this idea?</p> <p>Dmitry.</p> <p>[/quote]</p> <p>Hi Dmitry, this learning-rate schedule is described in our <a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libmf/mf_adaptive_pakdd.pdf">recent paper</a>. Hope it is useful for you.</p> <p>[quote=Mikhail Trofimov;65188]</p> <p>Thanks for sharing!</p> <p>Can you explain what the difference is between FFM and PITF?</p> <p>[/quote]</p> <p>I don't know what PITF is. Could you provide a reference? Thanks!</p> <p>[quote=Herimanitra;65170]</p> <p>how to seamlessly convert any .csv file into the format of LIBFFM?</p> <p>[/quote]</p> <p>We do not have such a tool. Users have to convert from raw data to the LIBFFM format by themselves.</p> <p>[quote=guestwalk;65260]</p> <p>[quote=Mikhail Trofimov;65188]</p> <p>Thanks for sharing!</p> <p>Can you explain what the difference is between FFM and PITF?</p> <p>[/quote]</p> <p>I don't know what PITF is. Could you provide a reference? Thanks!</p> <p>[/quote]</p> <p>Oh, you are talking about <a href="http://www.wsdm-conference.org/2010/proceedings/docs/p81.pdf">this paper</a>. These two models can be equivalent. This paper is for recommender systems, so there are three fields "user," "item," and "tag" (see equation 12). In FFM we don't have such a restriction.</p> <p>[quote=earthMovingEquipment;65105]</p> <p>[quote=guestwalk;64863]</p> <p>Dear all,</p> <p>We have prepared our <a href="https://github.com/guestwalk/kaggle-avazu">code</a> and <a href="http://www.csie.ntu.edu.tw/~r01922136/slides/kaggle-avazu.pdf">document</a>.</p> <p>In this competition and <a href="https://www.kaggle.com/c/criteo-display-ad-challenge">Criteo's CTR competition</a>, we find field-aware factorization machine (FFM) is a powerful model. We hope this model can be more widely used, so we also release an implementation <a href="http://www.csie.ntu.edu.tw/~r01922136/libffm/">LIBFFM</a>. We are very interested to see if it is effective in other kinds of datasets.</p> <p>Your questions or comments are welcome! 
:)</p> <p>-- 4 Idiots</p> <p>[/quote]</p> <p>what is the purpose of the bag features?</p> <p>[/quote]</p> <p>The concept behind this feature is that if some users have a similar app_id (or something else), then these users may have some similarity.</p> <p>[quote=lynn;65099]</p> <p>Congratulations, and thank you for sharing your solution. But I have some concerns about the FFM model. In practice, especially when used for prediction, this model needs too many parameters for sparse features. In the standard FM model, each sparse feature only has one dense vector of latent factors. But in the FFM model, each sparse feature has N (number of feature fields) - 1 dense vectors to be stored. If both the dimension of the sparse features and the number of feature fields are big, the storage of the FFM model on a single machine will be a challenge in model training and online prediction. Do you have any advice for this problem?</p> <p>Thanks again!</p> <p>[/quote]</p> <p>Yes, you are right. The training / prediction time of FFM is usually greater than FM, especially when the number of fields is large.</p> <p>However, the difference may not be as large as you think. In my experience, the best number of latent factors (k) for FFM is much smaller than that for FM. For example, if I remember correctly, in Criteo's competition, the best k for FFM is 8, and the best k for FM is 100.</p> <p>[quote=Triskelion;65151]</p> <p>I tried to build LibFFM on Windows, but had to give up and switch to Ubuntu. Install and usage was easy. I could not get it to work on text tokens from longer documents. I guess text tokens do not really have a field (or it should be word order).</p> <p>I tried with static fields</p> <p>1 0:token_id:1 0:token_id:1 0:token_id:1 0:token_id:1 (seems to take only the first token_id for learning)</p> <p>and incrementing fields</p> <p>1 0:token_id:1 1:token_id:1 2:token_id:1 3:token_id:1 ... n:token_id:1 (first 38 words of document)</p> <p>However this field characteristic is still very interesting, I'll have to try it on other highly dimensional sparse datasets with multiple columns instead of raw text columns (maybe PCA first?).</p> <p>[/quote]</p> <p>Sorry, we do not support Windows at this stage. Thanks very much for giving me feedback! I have some ideas:</p> <p>1. I don't think your first setting (setting all fields to 0) will work. The reason is that we do not allow interactions within the same field. So basically you are not training anything. If you want to use this setting, you can change</p> <p>if(j2 &gt;= model.n || f2 &gt;= model.m || f1 == f2)</p> <p>to</p> <p>if(j2 &gt;= model.n || f2 &gt;= model.m)</p> <p>You can find this code in ffm.cpp. I think it will work. However, because all features are within the same field, FFM reduces to FM in this case.</p> <p>2. There are no strong field concepts in document datasets. FFM may not work very well on this kind of dataset.</p> <p>3. Did you use "--norm" to enable normalization? If not, I suggest you try it. 
In my experience with normalization, the result can be much better.</p> <p>Dear all,</p> <p>We have prepared our <a href="https://github.com/guestwalk/kaggle-avazu">code</a> and <a href="http://www.csie.ntu.edu.tw/~r01922136/slides/kaggle-avazu.pdf">document</a>.</p> <p>In this competition and <a href="https://www.kaggle.com/c/criteo-display-ad-challenge">Criteo's CTR competition</a>, we find field-aware factorization machine (FFM) is a powerful model. We hope this model can be more widely used, so we also release an implementation <a href="http://www.csie.ntu.edu.tw/~r01922136/libffm/">LIBFFM</a>. We are very interested to see if it is effective in other kinds of datasets.</p> <p>Your questions or comments are welcome! :)</p> <p>-- 4 Idiots</p> <p>[quote=KamilBek;65405]</p> <p>Hi!<br>Congratulations!<br><br>guestwalk, I have a few questions:<br><br>Can you explain why you use exactly these categorical variables, 'C14','C17','C20','C21', and not other anonymized categorical variables?<br><br>And how did you find which variables you should use?<br><br>And how did you choose the condition for subsetting?<br><br>Also, sorry, but I did not find what you do after hashing. Do you then use one-hot encoding of the hashed values to get variables, or are they already variables for FFM?</p> <p>[/quote]</p> <p>Thanks. Here are my answers to your questions.</p> <p>1. In all the following experiments, to measure performance, we use the last day of training data as the validation set and the other days as the training set.</p> <p>2. For variable selection: We select variables by running experiments on a subsampled dataset. The size is 10% of the whole training data. We use all features as a baseline, and if removing a certain feature gives us the same or an even better result, then we remove this feature.</p> <p>3. For subsets:</p> <p>a. We first select the top 50 or so most frequent features. <br> <br> b. We do not consider the anonymized features because we have completely no idea what they are.<br> <br> c. For each feature, we try to run experiments on the subset that contains this feature. If after ensembling there is an improvement, then we add this subset to our model pool.</p> <p>4. For the question of what we do after hashing: After hashing, we feed the hashed data to FFM. You can find the flowchart on page 5 of <a href="http://www.csie.ntu.edu.tw/~r01922136/slides/kaggle-avazu.pdf">these slides</a>.</p> <p>[quote=Jianfeng Zhu;66007]</p> <p>Congratulations, and thanks for your sharing.</p> <p>I have one question: if I get it right, FFM only models the interaction between variables. Will the performance be improved if you include a bias term for each variable in the model?</p> <p>[/quote]</p> <p>Yes, I have tried this. In my experience, adding these terms or not does not affect the loss on the test set a lot. In <a href="https://github.com/infinitezxc/kaggle-avazu">Random Walker's solution</a> they have the linear terms, and the result is slightly better according to their observation.</p> <p>[quote=Julian de Wit;65283]</p> <p>Congratulations..</p> <p>Did you change the code a lot since the Criteo version?<br>I compiled that one to win64 for this competition.<br>Perhaps I could build a Windows version again.<br><br>Also, as a completely non-scientific experiment I also added the term where feature values were multiplied against themselves. 
(something like ⟨w[jn,fm], w[jn,fm]⟩ * xjn * xjn if xjn was present for field fm.)<br><br>I thought it would be an approximation of the normal SUM(xn * wn) first-order terms in the original FM. Halfway through the competition I tried it, and it gave slightly better results, so I kept it in.<br><br>Perhaps you can say something sensible about it. Did you consider adding this term, or is it complete nonsense? :)</p> <p>[/quote]</p> <p>1. For the Windows part: We will make our code compilable on Windows soon. However, we are not planning to release a Windows binary yet.</p> <p>2. For the term you added: Yes, I did exactly the same thing before. :) In my experiments, with or without this term the results made almost no difference, so I do not include this term in LIBFFM.</p> <p>@Foxtrot, ThierryS, rcarson</p> <p>We understand that the concept of fields is not clear in the slides you checked. We have updated them. There is a concrete example in the <a href="http://www.csie.ntu.edu.tw/~r01922136/slides/ffm.pdf">updated version</a>. Please check it again. If you still have questions, please re-post your questions and I will reply. Thanks!</p> <p>@Steffen: Great comments! In my experience, even with k = k'*m, FM is usually faster, probably because of better memory locality.</p> <p>@Dmitry: Very interesting to see the improvement from sorting the data by different features.</p> <p>[quote=LifeMatrix;71672]</p> <p>Congratulations! Thanks for sharing your solution. It is great work!</p> <p>I have two questions:</p> <p>1) <a href="http://www.kaggle.com/c/criteo-display-ad-challenge/forums/t/10555/3-idiots-solution">The 3 Idiots' solution in Criteo</a> includes GBDT to generate additional features which are fed to FFM. But it is omitted in this solution. I guess it is FFM that makes the most contribution, and GBDT helps a little to improve the score. Right?</p> <p>2) Although both solutions adopted FFM, the feature engineering methods differ (one uses GBDT, the other uses bag of words…). What is your basic idea or inspiration for choosing a different approach?</p> <p>Thanks again,</p> <p>Steven</p> <p>[/quote]</p> <p>Hi Steven,</p> <p>For your questions:</p> <p>1. The main reason we don't use GBDT in this competition is that there are no numeric features. (Tree-based models are conceptually more suitable for numerical features.)</p> <p>2. Sorry, I do not have a good answer to this question. In my opinion, sometimes feature engineering is just intuition and trial-and-error.</p> <p>Yu-Chin</p>
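<p>For readers new to FFM, here is an illustrative (and deliberately slow) Python version of the FFM decision function discussed in this thread — the sum over feature pairs of ⟨w[j1][f2], w[j2][f1]⟩ * x1 * x2, as in the linked slides. LIBFFM itself is optimized C++; this sketch only shows the math:</p> <pre><code>import numpy as np

def ffm_phi(w, x):
    """w: dict mapping feature j to a dict mapping field f to a latent vector.
    x: list of (field, feature, value) triples for one instance."""
    total = 0.0
    for i in range(len(x)):
        f1, j1, v1 = x[i]
        for j in range(i + 1, len(x)):
            f2, j2, v2 = x[j]
            # each feature keeps a separate latent vector per opposing field
            total += np.dot(w[j1][f2], w[j2][f1]) * v1 * v2
    return total
</code></pre>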
Coleridge Initiative - Show US the Data
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h4><strong>Most Important Key</strong></h4> <p>For me the most important key to this competition was Khoi Nguyen’s (<a href="https://www.kaggle.com/suicaokhoailang" target="_blank">@suicaokhoailang</a>) <a href="https://www.kaggle.com/c/coleridgeinitiative-show-us-the-data/discussion/232964" target="_blank">discussion topic</a> that encouraged the removal of predictions similar to the provided training labels before submission in order to better assess the ability of a solution to generalize.</p> <h4><strong>General Thoughts on Approach</strong></h4> <p>Since we were provided with noisy, incomplete labels for training, I leaned away from methods that required labels for training. Actually I leaned away from training altogether, except for playing around a bit with <a href="https://github.com/NorskRegnesentral/skweak" target="_blank">skweak</a>, which I probably should have stuck with in retrospect.</p> <h4><strong>Solution</strong></h4> <p><a href="https://www.kaggle.com/trentb/coleridge-dataset-extractor?scriptVersionId=66770265" target="_blank">Notebook</a> (runs in 20 - 25 minutes)</p> <h5><strong>Extractor Functions</strong></h5> <p>Through some exploratory methods, it appeared that words like Study, Survey, Database, Dataset, Archive, Assessment, Catalog, Collection, Registry, and Initiative occurred frequently in presumed dataset mentions. For each of these words, I created a function that examines the surrounding words to extract dataset mentions. While I initially considered doing the extraction with regular expressions, I discovered the versatility of <a href="https://spacy.io/api/sentencizer/" target="_blank">SpaCy’s sentencizer</a> and leveraged it. The sentencizer permitted an easy, intuitive, pythonic way to examine tokens appearing before and after words, including built-in functions for checking tokens for case combinations and sentence positions. I added a good bit of nested, complex conditional statements, but it was debuggable and maintainable because the code was much more readable than regular expressions. And it was still really fast, which allowed for a lot of developmental iterations.</p> <p>I iteratively developed these extractor functions by running them on the training data, ranking the extracted dataset mentions by their document frequency, and modifying the functions to make cleaner extractions, with more focus put on the dataset mentions with higher document frequency.</p> <h5><strong>Frequency Filtering</strong></h5> <p>After combining the outputs of all extractor functions, I ranked them by their document frequency and retained only the top 95 percent. The rationale behind retaining only the top 95 percent was that infrequent dataset mentions were less likely to be important to the sponsor’s objective and, hence, less likely to have been labeled in the testing data.</p> <h5><strong>Final Dataset Search</strong></h5> <p>The last step was to search through the documents again but with a slight twist. The list of high-frequency dataset mentions was sorted by decreasing length. For each document, the sorted list was iterated through, replacing matches with Xs. 
For example, in theory, if “ADNI” and “Alzheimer’s Disease Neuroimaging Initiative (ADNI)” were both in the list and only “Alzheimer’s Disease Neuroimaging Initiative (ADNI)” was in the document, this method prevents predicting both “ADNI” and “Alzheimer’s Disease Neuroimaging Initiative (ADNI)”. To explain further, because it’s longer, “Alzheimer’s Disease Neuroimaging Initiative (ADNI)” would be found first, saved as a prediction, and Xed out. When the shorter “ADNI” is subsequently searched for, it will not be found. Only “Alzheimer’s Disease Neuroimaging Initiative (ADNI)” will be predicted as desired.</p> <p>I've updated my initial post to include a link to my <a href="https://www.kaggle.com/trentb/coleridge-dataset-extractor?scriptVersionId=66770265" target="_blank">cleaned-up notebook</a>.</p>
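<p>A minimal sketch of the longest-first search-and-blank trick described above (case handling and tokenization simplified for illustration):</p> <pre><code>def match_longest_first(text, mentions):
    """Search mentions longest-first, X-ing out each match so that
    shorter substrings (e.g. "ADNI") cannot match inside a longer
    mention that was already found."""
    found = []
    for mention in sorted(mentions, key=len, reverse=True):
        idx = text.find(mention)
        if idx != -1 and mention not in found:
            found.append(mention)
        while idx != -1:
            text = text[:idx] + "X" * len(mention) + text[idx + len(mention):]
            idx = text.find(mention)
    return found
</code></pre>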
Coleridge Initiative - Show US the Data
14th Place Solution (with notebooks)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!</p> <p>One away from gold! I did this competition on Kaggle notebooks only:</p> <ul> <li>Additional hand-labeled dataset titles (from train only): <a href="https://www.kaggle.com/lichena/coleridgehandlabeled" target="_blank">https://www.kaggle.com/lichena/coleridgehandlabeled</a></li> <li>Preprocessing: <a href="https://www.kaggle.com/lichena/coleridge-pre-processing-hand" target="_blank">https://www.kaggle.com/lichena/coleridge-pre-processing-hand</a></li> <li>Training: <a href="https://www.kaggle.com/lichena/coleridge-ner?scriptVersionId=66238738" target="_blank">https://www.kaggle.com/lichena/coleridge-ner?scriptVersionId=66238738</a></li> <li>Inference: <a href="https://www.kaggle.com/lichena/coleridge-ner-inference" target="_blank">https://www.kaggle.com/lichena/coleridge-ner-inference</a></li> </ul> <p>My highest chosen submission scored 0.558/0.437 (public/private LB), while my best scored 0.545/0.444:</p> <ul> <li>I fine-tuned bert-base-cased + CRF on BIO-tagged chunks of ~200-400 words.</li> <li>The key for me was hand-annotating a larger training set. I used regex to find an initial set, then went through the list of my model's false positives that I thought were valid and recategorized them into the positive label set. Ultimately, this larger dataset scored 0.601 on the public LB when naively submitted; a subset scored 0.621.</li> <li>For training, I used 80% chunks which contained positive mentions and 20% chunks which had no mentions. I tried doing a naive CV on 20% of the chunks with positive mentions, but this didn't generalize well to the private LB.</li> <li>Instead of the default clean_text() function provided, I also kept casing and parentheticals during training and inference, which boosted the LB score.</li> </ul> <p>If you're like me and are only doing competitions with Kaggle notebooks, here are some tips I learned:</p> <ul> <li>Use the CPU as much as you can - I would choose 2-3 files as the training set and run all the code with those test files first to catch bugs.</li> <li>When you have your pipeline, start with distilled models, which are faster to train and save you GPU time. Run it on a small sample of data.</li> <li>Start the competition early.</li> <li>Look for competitions where there is a systematic advantage to be had - in this case, hand-annotation provided an "edge" that I knew many others weren't going to attempt to replicate.</li> </ul>
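<p>A hedged sketch of turning a labeled chunk into BIO tags for token classification, as used above; whitespace tokenization is used for clarity (the actual pipeline would use the BERT tokenizer with an offset mapping), and the example mention is hypothetical:</p> <pre><code>def bio_tags(words, mention):
    """Tag every occurrence of `mention` (a list of words) inside `words`."""
    tags = ["O"] * len(words)
    n = len(mention)
    for i in range(len(words) - n + 1):
        if words[i:i + n] == mention:
            tags[i] = "B-DATASET"
            for j in range(i + 1, i + n):
                tags[j] = "I-DATASET"
    return tags

words = "We analyze the National Education Longitudinal Study cohort".split()
print(bio_tags(words, "National Education Longitudinal Study".split()))
# ['O', 'O', 'O', 'B-DATASET', 'I-DATASET', 'I-DATASET', 'I-DATASET', 'O']
</code></pre>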
Coleridge Initiative - Show US the Data
17th Place Solution - SpaCy 3 (EntityRuler) and NER CRF model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Approach</h1> <p>We initially approached the problem of missing labels by trying to label a subset of the data ourselves using prodigy and spacy with active learning. However, we quickly found it intractable because the default spacy NER was overfitting to specific dataset names, and the actual context in which dataset names appeared seemed to be inconsistent with the way that we, as humans, would identify a dataset in the wild.</p> <p>This led us to think that we should focus on contexts where dataset names are identifiable as datasets and not potentially as other types of entities. For instance, the sentence "ADNI was financed by XXX" does not give us any indication that ADNI is a dataset, while the sentence "We used the ADNI dataset to analyse …" gives us the context required to correctly identify ADNI as a dataset.</p> <p>As a result we made the decision early on to focus on sentences which included words like "data" or verbs like "analysed" in order to only consider entities that could be identified from context. We used a small SpaCy model to split the sentences, and this greatly reduced the space of text to comb through.</p> <p>We also probed the training set for names of datasets that were mentioned that we could identify.</p> <h1>Custom Spacy Abbreviation Finder component</h1> <p>In order to deal with abbreviations, we built a custom SpaCy abbreviation finder that also matched abbreviations with MiXeD casing and tried to pair them with spans either to the left or right of the abbreviation.</p> <p>For instance, given the sentence "This is a Fake Name Compendium FaNaCo", we identify "FaNaCo" as an abbreviation and look left and right for spans that match a certain number of the capital letters present in the acronym. We used Jaccard similarity with a threshold here to measure the similarity between all the capital letters in the span and the ones in the acronym.</p> <p>We found this method to be decently robust and able to cover more dataset names than a more naive abbreviation finder.</p> <p>Once a span is found to be a likely meaning for the abbreviation, we add it to a dictionary of abbreviations that is constructed over the entire document.</p> <p>Unfortunately, we decided early on not to consider acronyms as dataset names; since ADNI was the only acronym present in the training set, it was hard to know whether including dataset acronyms would generalise to unseen datasets. This was a mistake, and would likely have given us several points on the LB had we tested the alternative.</p> <h1>Custom Spacy EntityRuler component</h1> <p>Our initial (and most robust) approach was to use the SpaCy EntityRuler with custom rules to find candidate dataset names.</p> <p>Here we build a set of rules to match sequences of tokens against. For instance, if we have a rule "Title Title Study" and a sentence "We use the Great Age Study in our analysis", then we would find the "Great Age Study" span as a candidate dataset.</p> <p>However, this is not generic enough, and so we expand the component by adding look-backward and look-forward functionality. 
</p> <p>So if we have the sentence "We use the Title Title Study of Youth ..", we will match the Title Title Study and then go backward and forward to see if we can expand the span. Here we would allow "of" (as an accepted forward stopword), then see another Title word (Youth) and also include it in the predicted span. Going backwards, "the" would not be an accepted backward stopword, and so we would stop. The final candidate dataset would then be "Title Title Study of Youth".</p> <p>With enough simple rules and a couple of manual use cases, we are able to get a decent pipeline component that can identify candidate datasets.</p> <h1>NER w/ CRF and masking</h1> <p>In the last leg of the competition, having exhausted the rule-based approach, we decided to implement a BERT NER model with a CRF (Conditional Random Field) layer.</p> <p>Since we filter the text by sentences, the BERT model takes in a sentence and tries to identify the dataset(s) within it. For each word it predicts a tag (BIO scheme): whether the word is the Beginning of a dataset, Inside a dataset, or Outside a dataset.</p> <p>The CRF layer keeps the model consistent (for instance, it prevents it from predicting O then I), and helps us in this case because a lot of datasets have long names. In fact, we also don't train the model on dataset names that are shorter than 3 words (hyphenated words are counted as 1).</p> <p>However, as other teams have also realised, an NER model can quickly overfit to known labels.</p> <p><strong>We use a clever trick to help the neural network generalise better to unseen dataset labels.</strong> </p> <p>We realised that for dataset names, the actual meaning of the words doesn't matter as much as the structure/syntax of the name. For instance, National Survey of Youth is just as likely to be a dataset as National Survey of Soda.</p> <p>We therefore hypothesize that the actual word embeddings are making the model overfit to certain types of datasets.</p> <p>To reduce overfitting, we add special mask tokens to our embeddings: <code>[TITLE]</code>, <code>[UPPER]</code>, <code>[MIXED]</code>. For each word (not including the first word) in the sentence, we replace Title words with <code>[TITLE]</code>, UPPER words with <code>[UPPER]</code> and MiXeD words with <code>[MIXED]</code> (a small sketch of this masking is included at the end of this post).</p> <p>Words which are "neutral" in dataset names, for instance stopwords and words like "dataset" and "study", are not masked.</p> <p>For example the sentence: <strong>"We use the Heart Cancer Study of Aging HCA in our analysis."</strong></p> <p>Would be transformed to: <strong>"We use the [TITLE] [TITLE] Study of [TITLE] [UPPER] in our analysis."</strong></p> <p>Through k-fold cross validation, we found that this trick allowed our model to detect datasets more accurately and consistently than when using the non-masked sentence. This was especially true in the low-data regime (with a smaller training set) and for labels that were held out during training.</p> <h1>Classifier</h1> <p>Once we obtained our large set of candidate datasets, we built a simple LGBM classifier to replace a selection regex and decide whether or not to include each dataset. This is used for candidates that we are not sure about (those that cannot be automatically included or excluded based on regex) and which have appeared more than a couple of times across all documents in the dataset.
</p> <p>We use very simple features, such as counts of tags and counts of identified geographical names, as input, to generalise to unseen names as much as possible.</p> <h1>Take Aways</h1> <p>Evaluating the accuracy of our approach was really tricky (as others have already stated). To estimate it as well as possible without overfitting, we did some probing at the beginning and used some string matching in our predictions (for known labels), but we found that metrics such as the score with no training labels (thanks <a href="https://www.kaggle.com/suicaokhoailang" target="_blank">@suicaokhoailang</a>) got worse the more string matching we tried.</p> <p>Our best submission is 0.451 on the private LB and does not include the NER model; unfortunately, because we were not able to accurately predict our model's performance, we selected suboptimal kernels (same as most people in this competition 🙂 ).</p> <p>I still think that the NER model could be just as performant, if not more so, given enough labels and training data.</p> <p>I'll release the code/kernels when I have time to clean them up a little, and will update this post then.</p> <p>Finally, thanks to my teammate <a href="https://www.kaggle.com/eeeedev" target="_blank">@eeeedev</a>!</p>
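<p>A minimal sketch of the casing-mask trick described above; the neutral-word list is an illustrative placeholder, not the exact configuration:</p> <pre><code>NEUTRAL = {"study", "survey", "dataset", "data", "of", "the", "in", "a", "an"}

def mask_casing(words):
    out = words[:1]  # the first word of the sentence is left unmasked
    for w in words[1:]:
        if w.lower() in NEUTRAL:
            out.append(w)
        elif w.isupper():
            out.append("[UPPER]")
        elif w.istitle():
            out.append("[TITLE]")
        elif any(c.isupper() for c in w):
            out.append("[MIXED]")
        else:
            out.append(w)
    return out

sent = "We use the Heart Cancer Study of Aging HCA in our analysis .".split()
print(" ".join(mask_casing(sent)))
# We use the [TITLE] [TITLE] Study of [TITLE] [UPPER] in our analysis .
</code></pre>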
Coleridge Initiative - Show US the Data
1st place solution: Metric learning and GPT
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Causal Language Model. Every language model was causal before BERT was a thing.</p> <p>Thank you for your kind words 😊</p> <p>Apparently I'm kind of a rebel here for sharing my own kernel<br> <img src="https://i.imgur.com/HGt0JfL.png" alt=""></p> <p>The code is available here: <a href="https://github.com/suicao/coleridge-gpt/" target="_blank">https://github.com/suicao/coleridge-gpt/</a><br> My inference kernel is available here: <a href="https://www.kaggle.com/suicaokhoailang/submit-gpt-spacy?scriptVersionId=66488765" target="_blank">https://www.kaggle.com/suicaokhoailang/submit-gpt-spacy?scriptVersionId=66488765</a>. <br> <strong>Overview</strong><br> I'll lay out a few notable features from our solution here for now; kudos to my teammate <a href="https://www.kaggle.com/dathudeptrai" target="_blank">@dathudeptrai</a> for his immense effort in this competition, especially in the second half.<br> His solution is metric learning with the usual MLM backbones; I'll leave it to him to give more details later. We could not find a way to make our ensemble look stable, so we ended up submitting each of our solutions separately; my final sub scored 0.565 while his was 0.576.<br> Mine is a text extraction model with a CLM backbone and beam search, in this case GPT. I also tried to modify Roberta and Scibert with future masking, basically turning them into CLMs, but so far GPT worked the best.</p> <p><strong>Why GPT</strong><br> The reason why GPT worked and Roberta failed so hard is that Roberta is too clever for its own good. When you try to build a text extraction model with a single query of finding dataset names, Roberta doesn't need to care about the context and will try to find the substring that most resembles a dataset name, thus leading to overfitting.</p> <p>In my GPT + beamsearch model, GPT is forced to predict <strong>whether the next token would be the start of a dataset name</strong>, given only the previous context, and then predict <strong>whether the next token would be the end of the mention</strong>, given the starting point and the extracted content so far. To make it more robust, I used a few sources of dataset names and replaced 95% of mentions in the original training set with these random labels. Funnily, <a href="https://www.kaggle.com/datasets" target="_blank">https://www.kaggle.com/datasets</a> was the best source I could find for augmentation, although it has zero overlap with the public (and possibly private) labels.</p> <p><strong>Modeling</strong><br> The architecture is pretty much the same as what I used in the Tweet extraction competition. We reused XLNet’s decoder head for SQuAD: instead of predicting start and end indices independently, we first predict the start index, and then use that information to predict the corresponding end.</p> <p><img src="https://i.imgur.com/hdVe7bV.png" alt=""></p> <p>For training, we predict the start index normally.<br> We take the hidden representation at the target index (ignoring the predicted one) and concat it into the representations at every position. 
The new representation is then fed to an MLP to predict the end index.</p> <p>Inference is a bit more complicated:</p> <ul> <li>We take the top-k hidden states corresponding to the top-k start indices with the highest probabilities; these are normalized with sigmoid, not softmax as in training.</li> <li>Each hidden state is then concatenated into the representations at every position.</li> <li>The new representation is fed to an MLP, similar to training. We then select the top-k end indices for each selected hidden state, resulting in k*k top start-end pairs.</li> <li>We then calculate the joint probabilities of all start-end pairs and take any pair with a score larger than 0.8.<br> <img src="https://i.imgur.com/nyP25it.png" alt=""></li> </ul> <p><strong>Final results and some gotchas</strong><br> My selected submission scored 0.565 on the private LB, but my best one (that I know of) was 0.594. I only took predictions that appeared at least 4 times in the corpus; the rare ones were only selected if they contained words like 'Data', 'Survey', etc. I tried to add more predictions using scispacy's abbreviation detector for my final sub, but this actually lowered my score; I should have trusted my model in the first place.</p>
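<p>A rough sketch of the k*k start-end pairing at inference, following the bullet points above; the function that maps a start hidden state to end logits stands in for the concat-plus-MLP head, and the tensor names are assumptions:</p> <pre><code>import torch

def decode_spans(start_logits, end_logits_fn, hidden, k=5, threshold=0.8):
    """start_logits: (seq_len,); hidden: (seq_len, dim);
    end_logits_fn: maps a start hidden state to (seq_len,) end logits."""
    start_probs = torch.sigmoid(start_logits)      # sigmoid, not softmax
    top_start = torch.topk(start_probs, k)
    spans = []
    for s_prob, s_idx in zip(top_start.values, top_start.indices):
        end_probs = torch.sigmoid(end_logits_fn(hidden[s_idx]))
        top_end = torch.topk(end_probs, k)
        for e_prob, e_idx in zip(top_end.values, top_end.indices):
            joint = (s_prob * e_prob).item()       # joint start-end score
            if e_idx &gt;= s_idx and joint &gt; threshold:
                spans.append((int(s_idx), int(e_idx), joint))
    return spans
</code></pre>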
Coleridge Initiative - Show US the Data
21st Place Solution: Deep Transformers 1400->21 and the String-Matching repulsion
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to everybody, and to the new GMs and Masters as well, and thanks to Kaggle and C.I. for hosting this competition.<br> This was actually my first one, but I found it very interesting and stimulating.</p> <h1><strong>Preprocessing</strong></h1> <p>This stage appeared to be non-trivial, since a huge proportion of training publications came with non-labelled datasets that would have misled the models during the training process. To address this, several labelling-enhancement methods were carried out: a list of all possible labels was created from the occurrences found in train.csv, but it initially contained a very low number of mentions (some hundreds); dataset names (both titles and mentions) from two external dataset sources were included in the list, and then some string-based operations allowed this result to be improved further. Eventually, the list grew from fewer than 200 mentions to several thousand, and the labelling of the training set improved hugely. <br> Moreover, 5000 new publications taken from the same source as the datasets were included in the training set to get a more consistent amount of training data after the train-val split; this was apparently the only feasible strategy, because local CV splits were hard: after the labelling-enhancement process, in fact, there was almost no possibility of splitting publications into completely disjoint sets.<br> Thus, the split was made on thematic areas, with the Education-related mentions prevailing in the validation set. Still, the huge number of found labels and commonly associated acronyms meant this local validation was only a rough measure to detect overfitting (which was what I tried to avoid the most) rather than a true overview of model performance.</p> <h1><strong>Models</strong></h1> <p><img src="https://www.linkpicture.com/q/Architecture.png" alt=""></p> <h2>NER</h2> <p>The following models are the ones that apparently improved the public LB score without overfitting, and this was more or less the rough rationale used for the model selection process itself.<br> All models have 512 as max length; bigger models such as XLNet and SciBERT come with higher regularization (0.5 Dropout), while the smaller ones have proportionally less (0.1 to 0.3); the LR was moderately set at 3e-5. After the validation assessment confirmed each model was not overfitting, the model was trained on the whole set, without any train/val split, and this apparently improved performance on both public and private LBs. Avoiding overfitting is actually what I focused on the most, and to do so I kept a very low number of fine-tuning epochs for the transformers and a dropout rate proportional to transformer size. Without a suitable validation strategy, assessing whether this led to underfitting to some degree is difficult; anyway, I preferred slightly underfitting the whole problem to overfitting the training labels, given that a huge number of scores on the public leaderboard were clearly doing the latter, and this appeared to be a good choice looking at the shakeups. 
<br> Making predictions on the full text was extremely time-consuming and brought a number of avoidable false positives, hence I made the NER models work only on selected sentences. A list of words was created: <br> ['scans','cohort','assessment','Assessment','Estimates','Map','map','project','Project','panel','Panel','taken','sample','Sample','Program','program','obtain','include','including','Index','source','images','using','collected','used','Repository','Archive','data','Data','records','dataset','Survey','Study','Atlas','atlas','Supplement','Indicator','report','Questionnaire','questionnaire','Longitudinal','Cohort']<br> and only sentences which contained at least two of the words in the list were analyzed by the models. This reduced false positives and sped up the inference process by 10 times, allowing several models to be included in the process. <br> All six models are combined in a weighted voting ensemble, with weights based on single-model performance on the public LB over a total voting weight of 7; only the predictions with at least 4 votes were propagated to the sequence classification layer (a sketch of this voting scheme is included at the end of this post). The most important hyperparameters, as well as the weights, are described in the picture above.</p> <ul> <li>Distilbert Base Cased</li> <li>Electra Small</li> <li>Distilroberta Base </li> <li>XLNet Base Cased</li> <li>SciBERT Base Cased</li> <li>SciBERT Base Uncased</li> </ul> <h2>Seq. Classifiers</h2> <p>These models took the predictions of the NER models and classified them as ‘they may be datasets’ or ‘they definitely are not datasets’ based on the structure of the prediction itself, rather than on context. For the training, I randomly took pieces of text from the publications (e.g. ‘the education was thus’, ‘good results achieved’) and removed those containing training labels, then included all the true training mentions from the list created in the preprocessing phase, which numbered almost 20,000. Unlike in the NER task, big models performed better here, with an ultimate selection of DeBERTa, RoBERTa and BERT base. This layer appeared to drastically improve performance with a single NER model, and to slightly improve it when dealing with the ensemble.</p> <h1>Postprocessing</h1> <p>Once a prediction got to this point, it was assumed to be quite likely a true mention, so not much postprocessing was included. Only predictions which were acronyms with length &lt;= 3 (e.g. MHS) were removed blindly: they were likely to be chosen by the NER models if the context suggested it, but not likely to always be correctly labelled as ground truths, and the sequence classifiers were not able to distinguish acronyms that referred to a dataset from acronyms that did not. 
Overall performance increased consistently once again.</p> <h1>Things that did not work</h1> <ul> <li>Stricter voting (5/7 rather than 4/7): it improved the public LB but would have performed much worse on private (fortunately it was not selected…)</li> <li>A larger training corpus for the sequence-classification models</li> <li>Training without acronym mentions in the corpus in the first place (they were probably closely related to contexts that the models would have missed otherwise)</li> <li>Transformers for longer sequences (Longformer and BigBird): the great majority of sequences were at most 512 tokens long, and many were largely padding, so models for longer sequences were unlikely to work well; a Longformer was included in some of the ensemble trials anyway, but it took too much computing time for only small changes in performance</li> <li>Electra base: good on ground truths but produced too many false positives</li> <li>SqueezeBERT, Albert, ConvBERT: low performance overall regardless of the hyperparameters</li> <li>Training without external labels, or without external publications, or without both: I had to check whether using external data improved predictive power, and it did</li> <li>Stricter selection of sentences to run inference on (requiring at least 3 of the keywords rather than 2)</li> </ul> <h1>Conclusions</h1> <p>Starting from a 0.2 public leaderboard score with a weak RoBERTa base, a transition to smaller models led to improvements up to 0.3-0.35. SciBERT was much stronger than them anyway, as it was better able to detect contexts in scientific publications, reaching 0.45 without many problems and 0.49 with all the inference sentence-selection and postprocessing tricks. Eventually, the ensemble models broke 0.5; on the private LB, the ensemble models appeared to be slightly better than SciBERT alone.<br> String matching was avoided like the plague, since I thought the labels in the private LB were completely disjoint from training, in which case relying on it would have led to illusory overconfidence; apparently, though, it would have strengthened the results.</p> <p>There were actually a couple of submissions that would have led to a much higher private LB score, but as this is my first competition overall I was not very experienced with submission selection.</p> <p>Maybe I missed something in the description; in case of ambiguities just let me know.</p>
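<p>As a rough illustration of the sentence-selection step above, here is a minimal sketch in Python. The keyword list is the one given in the write-up; the regex-based sentence splitting is an assumption, since the exact sentence tokenizer is not specified.</p> <pre><code>import re

# Keyword list from the write-up
KEYWORDS = {'scans','cohort','assessment','Assessment','Estimates','Map','map',
            'project','Project','panel','Panel','taken','sample','Sample',
            'Program','program','obtain','include','including','Index','source',
            'images','using','collected','used','Repository','Archive','data',
            'Data','records','dataset','Survey','Study','Atlas','atlas',
            'Supplement','Indicator','report','Questionnaire','questionnaire',
            'Longitudinal','Cohort'}

def select_sentences(text, min_hits=2):
    """Keep only sentences containing at least `min_hits` keywords."""
    sentences = re.split(r'(?&lt;=[.!?])\s+', text)
    return [s for s in sentences
            if len(set(s.split()) &amp; KEYWORDS) &gt;= min_hits]
</code></pre>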
Coleridge Initiative - Show US the Data
23rd private (1437th public, 4 submissions total) - regular expressions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>1) I look for any word from this list ['Survey', 'Data', 'Study', 'Database', 'Statistics'] in the text.<br> 2) Then I look 5 words backward and forward for words starting with an upper-case letter.<br> 3) From all sequences I keep only those with more than 2 words.<br> 4) Then I add the found labels from the train dataset.</p> <p>Link: <a href="https://www.kaggle.com/ilyenkov/coleridgeinitiative-show-us-the-data-v0" target="_blank">https://www.kaggle.com/ilyenkov/coleridgeinitiative-show-us-the-data-v0</a></p>
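<p>A minimal sketch of steps 1-3 above, assuming the capitalized span is grown word by word around each anchor (the write-up does not spell out the exact window logic):</p> <pre><code>ANCHORS = {'Survey', 'Data', 'Study', 'Database', 'Statistics'}

def find_candidates(text, window=5):
    words = text.split()
    candidates = set()
    for i, w in enumerate(words):
        if w not in ANCHORS:
            continue
        lo = i
        # step 2: extend backward over capitalized words, up to 5 positions
        while lo &gt; 0 and i - lo &lt; window and words[lo - 1][:1].isupper():
            lo -= 1
        hi = i
        # ... and forward likewise
        while hi &lt; len(words) - 1 and hi - i &lt; window and words[hi + 1][:1].isupper():
            hi += 1
        span = words[lo:hi + 1]
        if len(span) &gt; 2:  # step 3: keep sequences of more than 2 words
            candidates.add(' '.join(span))
    return candidates
</code></pre>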
Coleridge Initiative - Show US the Data
2nd place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Updated my post to link to the relevant Github repo</p> <p>My observation was that you need negative examples more than you need positive examples, i.e., string candidates that are NOT datasets. It's very easy to run into a plateau just throwing positive examples at a given model/approach.</p> <p>Thanks to Coleridge &amp; Kaggle for organizing this competition, and congrats to the other prize winners and medallists. This was a challenging competition in contrast to the standard Kaggle NLP classification/regression task, and I hope Coleridge will consider sponsoring future competitions in this vein. I suspect a future iteration will have the kinks shaken out :)</p> <h2>TL;DR: Search + Classify</h2> <ol> <li>Search for named entities using the Schwartz-Hearst algorithm</li> <li>Filter candidates using a fine-tuned roberta-base binary classifier</li> <li>Threshold and propagate candidates</li> </ol> <p>CV strategy: It should be clear that there was no way to perform clean local validation. I used the public LB while staying aware of its flaws. </p> <p>I spent the first month of the competition working on an end-to-end Transformer solution with minimal text processing. However, it became clear that a tweaked Transformer approach was problematic - Transformers a) are unsuited for the long document sequences we’re dealing with, even with the specialized variants designed for such texts, and b) ended up requiring heavy pre/post-processing to perform well. My processing heuristics effectively turned into a model parallel to the main Transformer model. I began thinking about how to simplify this - </p> <h4>Insight #1: No context needed</h4> <p>Any attempt to inject semantic context into my models seemed to worsen public LB accuracy. I suspected the annotation process was noisy and decided to focus on IDing datasets purely by title. </p> <h4>Insight #2: Search for LONG-FORM (ACRONYM)</h4> <p>Data sets in academic papers often have a regular structure - mixed-cap words followed by an acronym. An example is “Baltimore Longitudinal Study of Aging (BLSA)”. I needed something lighter than a Transformer to search for them efficiently - the answer was the <a href="https://psb.stanford.edu/psb-online/proceedings/psb03/schwartz.pdf" target="_blank">Schwartz-Hearst algorithm</a>. Schwartz-Hearst is a non-learning string-search algorithm used to extract strings from text in the form of “LONG-FORM (ACRONYM)” - it’s basically a more involved regexp search (a simplified sketch is given at the end of this post). A bonus from using this algorithm is that I could also use the acronym form of a given dataset for additional points. </p> <h4>Insight #3: No scraped data needed</h4> <p>The third insight (hypothesis?) was that scraped data was unnecessary. I did not use any externally derived list of datasets or scraped data - my sense was that 1) positive examples of datasets weren’t as useful as negative examples, and 2) it wasn’t even clear whether these lists hewed to the dataset definition that annotators were using. 
</p> <h4>Putting it all together</h4> <p>Schwartz-Hearst worked well at identifying named entities, but many flagged candidates weren’t data sets (e.g., organization names, scientific paraphernalia), so I needed a binary classifier to filter candidate strings. I hand-annotated (using my own judgment of what a dataset was) a miniature training corpus of positive and negative examples, based on examples Schwartz-Hearst IDed from the training data, and fine-tuned a roberta-base binary classifier on that corpus to filter the named entities. I’ll re-emphasize that this corpus was built wholly from the training data, without any scraped data. </p> <p>The last piece of the puzzle was dealing with the fact that not all papers referred to datasets using the full “LONG-FORM (ACRONYM)” format. Many referenced only the LONG-FORM. A simple solution was to generate a dynamic table of datasets and search for the LONG-FORM in documents. I noticed that thresholding this search on a minimum document frequency improved the public LB. The intuition is that in any given document corpus, a dataset is probably referenced by multiple documents, so the more frequently it’s referenced, the more likely it’s a hit and not a spurious find. </p> <p>This search + classify approach dramatically reduced inference time since I did not have to pass entire documents through a Transformer model. Inference on the test set took ~10 minutes vs &gt;&gt;1hr using end-to-end Transformer QA/NER models. </p> <h2>End-to-end dummy example:</h2> <ol> <li><p>The LONG-FORM (ACRONYM) string of “Singapore Longitudinal Study on Diabetes (SLSD)” is found in 64 documents</p></li> <li><p>“Singapore Longitudinal Study on Diabetes” is classified by the Transformer binary classifier to be a dataset with probability 0.99, above the prediction threshold of 0.9 </p></li> <li><p>The document frequency of 64 exceeds the search threshold frequency of 50 - a search for the LONG-FORM is conducted across all documents and another 200 documents are found to contain the label</p></li> <li><p>For the combined 264 documents, the LONG-FORM is added as a prediction. Additionally, if any of the 264 documents contains the string ‘ SLSD ‘ (checked against the raw document texts), an additional prediction of the acronym is added</p></li> </ol> <p>Note: this is a summary of the model logic; there are other rules applied that I’ve elided.</p> <p>Code <a href="https://github.com/leecming/kaggle_coleridge" target="_blank">here</a></p> <p>It was puzzling to see all the scraping going on… </p>
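<p>For illustration, here is a heavily simplified sketch of the LONG-FORM (ACRONYM) search from Insight #2. The real Schwartz-Hearst algorithm handles more edge cases; the acronym regex and the word-window size here are assumptions.</p> <pre><code>import re

def best_long_form(acronym, words_before):
    """Right-to-left character matching of the acronym against the preceding
    text, in the style of Schwartz &amp; Hearst (2003); heavily simplified."""
    long_form = ' '.join(words_before)
    a_idx, l_idx = len(acronym) - 1, len(long_form) - 1
    while a_idx &gt;= 0:
        ch = acronym[a_idx].lower()
        # the first acronym character must additionally start a word
        while l_idx &gt;= 0 and (long_form[l_idx].lower() != ch or
                (a_idx == 0 and l_idx &gt; 0 and long_form[l_idx - 1].isalnum())):
            l_idx -= 1
        if l_idx &lt; 0:
            return None
        a_idx -= 1
        l_idx -= 1
    start = long_form.rfind(' ', 0, l_idx + 1) + 1
    return long_form[start:]

def extract_pairs(text):
    pairs = []
    for m in re.finditer(r'\(([A-Z][A-Za-z]{1,9})\)', text):
        acronym = m.group(1)
        window = text[:m.start()].split()[-(2 * len(acronym) + 1):]
        long_form = best_long_form(acronym, window)
        if long_form:
            pairs.append((long_form, acronym))
    return pairs

print(extract_pairs("We use the Baltimore Longitudinal Study of Aging (BLSA)."))
# [('Baltimore Longitudinal Study of Aging', 'BLSA')]
</code></pre>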
Coleridge Initiative - Show US the Data
33rd place solution: pre- and postprocessing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I only had a couple of days to look at this problem, so the main insights I got were:<br> 1) Pretty much every dataset string turns into a chain of proper nouns after POS-tagging.<br> 2) There is a limited number of datasets in both the train and test parts.</p> <p>The idea was that even a weak model could show generalization ability if frequency-based filtering is applied after training on 'PROPN' tags instead of the original strings. Overall, my pipeline consists of four parts:<br> 1) replacing every proper-noun chain with a single token: "Entity typing on the Open Entity dataset (Choi et al., 2018)" becomes "Entity typing on the PROPN dataset (PROPN, 2018)";<br> 2) training a simple token-classification model (most transformer-based models will do);<br> 3) collecting top document candidates using an output-probability threshold;<br> 4) decoding all collected 'PROPN' tokens back to the original strings and filtering them by frequency.</p> <p>Depending on the implementation of step 4, this algorithm is capable of achieving a ~0.4 score on the private leaderboard (and frequently finds dataset mentions that look legitimate but are missing from the annotations).</p>
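<p>A minimal sketch of step 1 (collapsing proper-noun chains into a single token), assuming spaCy's POS tagger; the model name is illustrative:</p> <pre><code>import spacy

nlp = spacy.load("en_core_web_sm")

def mask_propn_chains(text):
    tokens = list(nlp(text))
    out, i = [], 0
    while i &lt; len(tokens):
        if tokens[i].pos_ == "PROPN":
            out.append("PROPN")  # collapse the whole chain into one token
            while i &lt; len(tokens) and tokens[i].pos_ == "PROPN":
                i += 1
        else:
            out.append(tokens[i].text)
            i += 1
    return " ".join(out)

# mask_propn_chains("Entity typing on the Open Entity dataset")
# -&gt; "Entity typing on the PROPN dataset"
</code></pre>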
Coleridge Initiative - Show US the Data
34th place solution: Key points
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the Coleridge Initiative and to Kaggle for organizing such an exciting competition. Congratulations to all the winners!</p> <ol> <li>My solution was based on the BERT MLM model published by <a href="https://www.kaggle.com/tungmphung" target="_blank">@tungmphung</a> early on in the competition. It is referenced on the leaderboard.</li> <li>It survived the leaderboard shakeup at the end (it moved up 25 places).</li> <li>All datasets / pre-trained models used were made public before the competition deadline, as stipulated.</li> <li>I was impacted by the 9-hour kernel training time limit. If I could have circumvented this in some way, I would have gained further improvements (more training epochs, a more realistic model).</li> <li>I used the external datasets published by <a href="https://www.kaggle.com/mlconsult" target="_blank">@mlconsult</a>. They improved my public score dramatically and had no adverse impact on my private score. I verified this by choosing one submission with this data included and one without as my 2 final submissions, and the results were almost the same (0.372 and 0.373).</li> <li>I look forward to continuing to be a part of this initiative. Thanks much!! </li> </ol>
Coleridge Initiative - Show US the Data
36th Place Solution (and "could have been" 22nd solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the medal winners and everyone else who learned new things through this competition. Thanks to the organizers and Kaggle for hosting the competition. </p> <p>We secured 36th using the solution we selected, but had a submission which would have given us 22nd had we selected it (no regrets, all part of the game). Here is an overview and some high-level details of our solution. </p> <p><strong>Overview</strong></p> <ol> <li>NER with a Roberta backbone using new and old competition data: ~0.5x / 0.3x. One modification we made: we trained the NER model with the dual objective of predicting the probability that a sentence contains a dataset along with the NER labels (beginning / middle / end); see the sketch at the end of this post. This seemed to help compared to using a single NER objective. </li> <li>Ensemble of similar pipelines using different backbones like Scibert, Conll, Biobert: ~0.56 / ~0.414 (the solution we didn't select)</li> <li>Ensemble + some post-processing that seemed to work locally and on the public LB but not on the private LB: ~0.56x / 0.368 (selected solution 1/2)</li> <li>Ensemble + post-processing + string matching using dataset lists: 0.640 / 0.363 (selected solution 2/2)</li> </ol> <p><strong>Details</strong></p> <ul> <li>Training / scoring was done at the sentence level, with sentences extracted using the Spacy Sentencizer </li> <li>We scored only ~10% of the total sentences in the test publications, selected using regular expressions, to be able to run the ensemble of 4 models in the allotted notebook time. This led to no significant deterioration in accuracy on validation / LB. (thanks to my teammate <a href="https://www.kaggle.com/soloway" target="_blank">@soloway</a> for this)</li> <li>There were some datasets with a ton of publications in the new / old training data, which was making local CV unstable. We used only 5-10 publications per dataset for training / validation to address this.</li> <li>String matching seemed to work on the public LB but hurt us slightly on the private. We knew the risk, so only 1 of our 2 selected solutions had string matching. </li> </ul>
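<p>A minimal sketch of the dual-objective NER model from point 1 of the overview, assuming a Hugging Face backbone; head sizes, label scheme and loss weighting are illustrative, since the post does not specify them.</p> <pre><code>import torch.nn as nn
from transformers import AutoModel

class DualObjectiveNER(nn.Module):
    def __init__(self, backbone="roberta-base", num_tags=4):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        self.tag_head = nn.Linear(hidden, num_tags)  # beginning/middle/end/other
        self.sent_head = nn.Linear(hidden, 1)        # sentence contains a dataset?

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_logits = self.tag_head(out.last_hidden_state)
        # first token acts as a sentence summary
        sent_logit = self.sent_head(out.last_hidden_state[:, 0])
        return token_logits, sent_logit

# Training would combine token-level cross-entropy on token_logits with a
# BCE loss on sent_logit; the relative weighting is not given in the post.
</code></pre>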
Coleridge Initiative - Show US the Data
38th place solution (NER + Heuristics)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am afraid my solution is way too simple compared to some of the cool models I've seen. But here goes:</p> <p>I used Spacy's entity recogniser to find all possible candidate datasets. Then I wrote a simple CFG which determines whether those entities indeed denote a dataset. This part generates candidate datasets that are strict matches, thereby increasing the rate of false negatives. To mitigate this, I did a second pass on all texts with key terms from these candidate datasets to generate the final discovered datasets. That's the model in a nutshell. </p> <p>A few additional notes:</p> <ul> <li>This model is nearly unsupervised. </li> <li>I did not use the train dataset for the actual dataset prediction. I only used it to determine what kind of CFG I needed to write. </li> <li>One very direct observation is that all datasets referred to in these publications are camel-cased. If nothing else, you can start with these as your candidates and even skip the named-entity task (a rough sketch is given at the end of this post). That gave me a private score of 0.29; the final model I submitted with Spacy gave 0.36. </li> <li>These are not free-form texts on which standard large-scale models have been trained. These are publications, and they follow a certain standard. If you analyze the training dataset, you can readily see that most of them are of the form <code>data collected from &lt;some dataset&gt;</code>, <code>Used &lt;some report/study&gt;</code>, and so on. The second pass in my model exists to capture those that don't follow these candidate phrasings. </li> <li>I did not use Roberta for the same reason. See the first-place solution post for a much more in-depth discussion of this. </li> <li>As for the heuristics, you can just start with "data" and look for subtree (in the PCFG) matches that intersect with the matched entities. I used a few more. Again, the input texts are almost free of grammatical errors and follow very similar patterns when quoting references and external datasets. </li> <li><strong>Spacy 3.0's Transformer gave me much better initial candidates, but it kept throwing some submission errors.</strong> So I used Spacy 2.5, since I didn't have much time to compete in this competition.</li> </ul>
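<p>A rough sketch of the "camel-cased" observation: collect runs of title-cased words (optionally joined by connectors) as initial candidates. The regex is an assumption; the actual pipeline used Spacy's entity recogniser plus a hand-written CFG.</p> <pre><code>import re

def titlecase_candidates(text, min_words=2):
    pattern = r'[A-Z][a-z]+(?:\s+(?:of|the|and|for|[A-Z][a-z]+))+'
    out = []
    for m in re.finditer(pattern, text):
        # trim dangling connectors like a trailing " of"
        cand = re.sub(r'\s+(?:of|the|and|for)$', '', m.group(0))
        if len(cand.split()) &gt;= min_words:
            out.append(cand)
    return out

# titlecase_candidates("data from the National Education Longitudinal Study")
# -&gt; ['National Education Longitudinal Study']
</code></pre>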
Coleridge Initiative - Show US the Data
41st place solution, experience sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The race is finally over. From the high spirits at the very beginning, when I climbed up from just outside the medal zone; to the complacency after debugging my way into the front rows; to the roller-coaster feeling of fluctuating back and forth in the second half; down to the small ripples after the race ended: this emotional journey gave me much more than the race itself. In the end the results were modest, at least a silver medal 🥈, and some of my results would have been worth a gold, but for various reasons (which I will discuss later) I could not submit my best ones. Of course, the most important thing about a competition is not the results, so let me talk about my feelings and gains after more than 2 months of competing.</p> <p><strong>1. Difficulties</strong><br> At the beginning, I thought this competition was just NER or QA, a common type of task, and I had previous experience in related competitions. Unlike in school, where time was abundant, I basically had only a few hours after work every day, so in the beginning I spent a lot of weekend time on the competition, because the joy of competing really is attractive. The time problem gradually settled in as I struggled through the first few weeks, and it could only be alleviated by the relatively long time span of over 2 months.<br> But as I gradually added and modified model features, the amount of debugging rose as well, at first mainly to get the entire pre-processing pipeline unblocked, and training became more and more time-consuming. This problem was especially prominent on the free Kaggle machines: the weak CPU stretched each pre-processing run, excluding debugging, to around 30 minutes (and remember I only had a few effective hours after work each day 😞), while the GPU resource quota and the 9-hour limit for each training session gave me real GPU anxiety. After a few days of review and practice, I found that offline multi-threaded processing can significantly reduce the pre-processing time. Also, for training, renting a GPU cloud server is an option, but it is still not cheap, requiring a few dozen RMB per training session.</p> <p>What's worse, I encountered a "memory explosion" problem that few people run into: insufficient memory when making predictions on the private test set. Of course, this problem was caused by the roughness of my data pre-processing and training pipeline. Since I had mostly done domestic competitions before, with plenty of lab resources, I had naturally never paid attention to memory limits, but Kaggle kernels have a 13 GB memory limit, and dealing with this problem cost me 1-2 weeks. 
Finally, I solved it by moving the tokenized inputs (input ids, type ids, and so on) into the dataset class, so that they are generated batch by batch.<br> To summarize: the problem of limited time can be alleviated by competing over a long time span and making good use of weekends; multi-threading can effectively reduce pre-processing time; pipeline optimization can reduce memory usage; and renting a GPU cloud server can solve urgent needs (I am also considering buying my own server, although the current graphics-card premium is too high).</p> <p><strong>2. Data</strong><br> Whether in a competition or a research lab, the model's understanding of the data determines whether it overfits or generalizes well. The amount of quality data can therefore greatly influence the final result, which is why many competitions do not allow manually labeling additional data.<br> In this competition, the first thing to do was data analysis, which determined the goal of the task, what kind of model we needed, and how to perform pre- and post-processing. I personally used a reading-comprehension (QA) model, though the task can of course also be framed as NER. It is especially important to get a full overview of the data: the number of texts, the length distribution of texts, the number and length distribution of labels, the share of different labels, how serious the class imbalance is, and so on. This is a particularly important step.</p> <p>I performed all the operations mentioned above, and the results were really surprising.<br> <strong>Problem 1:</strong><br> Officially, more than 10,000 samples were given, and the samples were much longer than 512 tokens, so a sliding window, a common preprocessing method in reading comprehension, was needed. There are fewer than 200 distinct labels, which means each label is contained in 50 samples on average. It turns out that, due to the long-tail phenomenon, the label adni appears in more than 60% of the samples, which prevents the model from learning other labels and also makes validation meaningless due to leakage, seriously affecting training.<br> <strong>Problem 2:</strong><br> In addition, labels that contain one another can both count as true positives at the same time, which makes it impossible to perform effective post-processing to filter out unsuitable FPs when generating predictions (and in the end there was no good solution to this problem).<br> <strong>Problem 3:</strong><br> Finally, another problem with huge impact: the training set is not completely labeled. That is, some valid labels in the training set are not annotated. Therefore, we cannot judge the model's performance from the validation set; it is only reflected in the score on the public test set.</p> <p>Since problem 1 is a common one and not the pain point of this competition, I did not optimize for it. 
For problems 2 and 3, I tried a series of approaches and identified some strategies that alleviate them.<br> <strong>- Division of training set and validation set</strong><br> Because of the problems above, to validate as effectively as possible, the validation set should contain no labels seen during training. I therefore split the data so that the training set and the validation set share no labels, which makes validation closest to the private score. However, because this gave poor results on the public test set, I did not optimize my strategy on this basis, but instead focused on fusing with the text-matching method, which is the reason my final model did not work well. It might also have been better to perform a similar split at the span level, as this would allow a validation set containing a sufficient number of labels to verify the effect, even though the score is lower.<br> <strong>- Input of text</strong><br> Since there are some unannotated labels in the training set, it is necessary to limit the length of the QA model's input text and to train without negative samples, to minimize the negative effect of hidden positives. I also added some possibly valid labels (from the bigger-govt-dataset-list, without verifying their validity) to expand the training samples.<br> <strong>- Post-processing</strong><br> For post-processing, I directly threshold on the logit values. Also, when generating predictions, the answers are not allowed to overlap, and finally duplicate prediction labels are removed based on containment (a sketch is given at the end of this post).<br> <strong>- Semi-supervised</strong><br> Take the labels with higher logit values among all the model's predictions, with a word count between 2 and 10 and containing special words such as survey, and use them as labels for string matching, to improve recall on the private test set.</p> <p>Finally, the public test set shows that a text length of 128 or 256 can reach a score of about 0.5. On the private test set it reaches 0.42, but I did not choose this result because its score on the public test set was too low, which is also a reflection of problem 2.<br> <img src="https://i.imgur.com/hFsnexq.png" alt="https://i.imgur.com/hFsnexq.png"></p> <p><strong>Finally, enjoy the game!</strong></p>
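<p>A minimal sketch of the post-processing described above: rank candidate spans by logit, greedily keep non-overlapping ones, then drop predictions contained in another prediction. The data layout is an assumption.</p> <pre><code>def select_spans(candidates):
    """candidates: list of (start, end, score, text) for one document."""
    kept = []
    for start, end, score, text in sorted(candidates, key=lambda c: -c[2]):
        # keep the span only if it overlaps none of the already-kept spans
        if all(end &lt;= s or start &gt;= e for s, e, _, _ in kept):
            kept.append((start, end, score, text))
    # remove predictions contained in another kept prediction
    texts = [t for _, _, _, t in kept]
    return [t for t in texts
            if not any(t != u and t in u for u in texts)]
</code></pre>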
Coleridge Initiative - Show US the Data
44th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I didn't use any</p> <p>Here's the implementation I use :</p> <pre><code>import torch import torch.nn as nn import torch.nn.functional as F from torch import Tensor from typing import Optional class DiceLoss(nn.Module): """ Dice coefficient for short, is an F1-oriented statistic used to gauge the similarity of two sets. Given two sets A and B, the vanilla dice coefficient between them is given as follows: Dice(A, B) = 2 * True_Positive / (2 * True_Positive + False_Positive + False_Negative) = 2 * |A and B| / (|A| + |B|) Math Function: U-NET: https://arxiv.org/abs/1505.04597.pdf dice_loss(p, y) = 1 - numerator / denominator numerator = 2 * \sum_{1}^{t} p_i * y_i + smooth denominator = \sum_{1}^{t} p_i + \sum_{1} ^{t} y_i + smooth if square_denominator is True, the denominator is \sum_{1}^{t} (p_i ** 2) + \sum_{1} ^{t} (y_i ** 2) + smooth V-NET: https://arxiv.org/abs/1606.04797.pdf Args: smooth (float, optional): a manual smooth value for numerator and denominator. square_denominator (bool, optional): [True, False], specifies whether to square the denominator in the loss function. with_logits (bool, optional): [True, False], specifies whether the input tensor is normalized by Sigmoid/Softmax funcs. ohem_ratio: max ratio of positive/negative, defautls to 0.0, which means no ohem. alpha: dsc alpha Shape: - input: (*) - target: (*) - mask: (*) 0,1 mask for the input sequence. 
- Output: Scalar loss Examples: &gt;&gt;&gt; loss = DiceLoss(with_logits=True, ohem_ratio=0.1) &gt;&gt;&gt; input = torch.FloatTensor([2, 1, 2, 2, 1]) &gt;&gt;&gt; input.requires_grad=True &gt;&gt;&gt; target = torch.LongTensor([0, 1, 0, 0, 0]) &gt;&gt;&gt; output = loss(input, target) &gt;&gt;&gt; output.backward() """ def __init__( self, smooth: Optional[float] = 1e-4, square_denominator: Optional[bool] = False, with_logits: Optional[bool] = True, ohem_ratio: float = 0.0, alpha: float = 0.0, reduction: Optional[str] = "mean", index_label_position=True, ) -&gt; None: super(DiceLoss, self).__init__() self.reduction = reduction self.with_logits = with_logits self.smooth = smooth self.square_denominator = square_denominator self.ohem_ratio = ohem_ratio self.alpha = alpha self.index_label_position = index_label_position def forward( self, input: Tensor, target: Tensor, mask: Optional[Tensor] = None ) -&gt; Tensor: loss = self._binary_class(input, target, mask=mask) if self.reduction == "mean": return loss.mean() if self.reduction == "sum": return loss.sum() return loss def _compute_dice_loss(self, flat_input, flat_target): flat_input = ((1 - flat_input) ** self.alpha) * flat_input intersection = torch.sum(flat_input * flat_target, -1) if not self.square_denominator: loss = 1 - ( (2 * intersection + self.smooth) / (flat_input.sum() + flat_target.sum() + self.smooth) ) else: loss = 1 - ( (2 * intersection + self.smooth) / ( torch.sum( torch.square( flat_input, ), -1, ) + torch.sum(torch.square(flat_target), -1) + self.smooth ) ) return loss def _binary_class(self, input, target, mask=None): flat_input = input.view(-1) flat_target = target.view(-1).float() flat_input = torch.sigmoid(flat_input) if self.with_logits else flat_input if mask is not None: mask = mask.view(-1).float() flat_input = flat_input * mask flat_target = flat_target * mask else: mask = torch.ones_like(target) if self.ohem_ratio &gt; 0: pos_example = target &gt; 0.5 neg_example = target &lt;= 0.5 mask_neg_num = mask &lt;= 0.5 pos_num = pos_example.sum() - (pos_example &amp; mask_neg_num).sum() neg_num = neg_example.sum() keep_num = min(int(pos_num * self.ohem_ratio), neg_num) neg_scores = torch.masked_select(flat_input, neg_example.bool()) neg_scores_sort, _ = torch.sort( neg_scores, ) threshold = neg_scores_sort[-keep_num + 1] cond = (flat_input &gt; threshold) | pos_example.view(-1) ohem_mask = torch.where(cond, 1, 0) flat_input = flat_input * ohem_mask flat_target = flat_target * ohem_mask return self._compute_dice_loss(flat_input, flat_target) def __str__(self): return f"Dice Loss smooth:{self.smooth}, ohem: {self.ohem_ratio}, alpha: {self.alpha}" def __repr__(self): return str(self) </code></pre> <p><strong>Overview</strong></p> <p>My solution is a straight-forward Roberta NER model, which scored 0.497 public (#1400~) -&gt; 0.347 private (#44). I initially wanted to use it as part of a two step pipeline to extract candidates, and then plug another model in charge of choosing whether to keep the dataset or not. 
<br> However, the nature of the public leaderboard and the impossibility of building a reliable CV made this competition extremely frustrating, so I decided to spend my free time on something else :)</p> <p><strong>Main points</strong></p> <ul> <li>Architecture : Roberta + concatenation of all the output layers + a few convolutions + 0/1 output</li> <li>Loss : Dice loss, from <a href="https://arxiv.org/abs/1911.02855v3" target="_blank">https://arxiv.org/abs/1911.02855v3</a></li> <li>I built clusters using co-occurrences of datasets in the training data; this allowed me to have a (not so reliable) validation scheme with no overlap, but also to sample data more cleverly during training. </li> <li>I trained a first model using crops of 256 tokens containing at least one dataset. This approach biases the model towards predicting more datasets, and since the metric penalizes FPs more than FNs, I figured this wouldn't be enough</li> <li>So I used this first model to generate ~3000 candidates, which I manually reviewed. This was quite boring, and since it's hard to tell what the organizers actually considered to be datasets, I mostly relied on keywords (i.e. survey, study, data, dataset &amp; more) for identification</li> <li>A second model was retrained using positive and negative examples from the extracted candidates, and this is the one I ended up using for my final submission</li> <li>I did a bunch of post-processing in order to remove false positives (sketched at the end of this post). I don't really know how well each of these worked on the LB, but the first three worked on my CV :<ul> <li>Only keep predictions of length &gt; 10</li> <li>Only keep predictions where the maximum token probability is &gt; 0.99</li> <li>Only keep predictions that contain a keyword such as the ones above</li> <li>Try to merge similar predictions in order not to predict the same dataset twice</li></ul></li> <li>A few ideas that didn't work and that I haven't already forgotten about :<ul> <li>Replacing datasets from the training data with datasets from external sources </li> <li>Using the previous Coleridge competition data</li></ul></li> </ul> <p>Feel free to ask any questions !<br> Also, my inference code is available <a href="https://www.kaggle.com/theoviel/coleridge-ner-inference" target="_blank">here</a> if you wish to dig into the details.</p>
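<p>A minimal sketch of the first three post-processing filters above; the keyword list is an assumption based on the examples given in the post.</p> <pre><code>KEYWORDS = ('survey', 'study', 'data', 'dataset')

def keep_prediction(text, max_token_prob):
    return (len(text) &gt; 10
            and max_token_prob &gt; 0.99
            and any(k in text.lower() for k in KEYWORDS))
</code></pre>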
Coleridge Initiative - Show US the Data
47th place solution - no training, no dataset label string matching
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, you read that right. All of the models I tried to train did worse than pre-trained models.</p> <p>Inspired by <a href="https://arxiv.org/pdf/1910.11476.pdf" target="_blank">this publication about MRC for NER</a>, I tested many question-answering models on the <a href="https://huggingface.co/models?pipeline_tag=question-answering" target="_blank">Hugging Face model hub</a> to see which ones were able to extract the dataset name from a sentence. I ended up using an <a href="https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512?context=The+inverse+association+between+bilirubin+and+risk+of+coronary+disease+was+analyzed+in+the+European+study+%22Prospective+Epidemiological+Study+of+Myocardial+Infarction+PRIME%22+30.&amp;question=What+is+the+name+of+study+used%3F" target="_blank">electra model trained on squad v2</a>.</p> <p>I trained distilbert, roberta, and fasttext models to tell whether a sentence contained a dataset, but sadly a regex search for words like "Data" or "Survey" did best. I think the advantage of regex is that it has very high precision compared to the others. I'm sure that if I knew how to fine-tune a q-a model better I could have gone much higher up the leaderboard. Not bad considering it had no training… </p> <p>In brief:</p> <ol> <li>String match to find sentences mentioning a dataset.</li> <li>Use a question-answering model to extract the name if the probability is above a certain threshold, e.g. "What is the data source used?" (a sketch is given below).</li> </ol> <p>Pretty crazy that this helped me go up 1400 positions from public to private! </p> <p><a href="https://www.kaggle.com/nbroad/coleridge-regex-electra" target="_blank">Notebook here</a></p>
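<p>A minimal sketch of the two-step approach, assuming the Hugging Face pipeline API; the confidence threshold and the sentence-splitting regex are assumptions (the model name is the one linked above).</p> <pre><code>import re
from transformers import pipeline

qa = pipeline("question-answering",
              model="ahotrod/electra_large_discriminator_squad2_512")

def extract_datasets(text, threshold=0.5):
    found = set()
    for sent in re.split(r'(?&lt;=[.!?])\s+', text):
        if re.search(r'\b(Data|Survey)\b', sent):  # step 1: keyword filter
            ans = qa(question="What is the data source used?", context=sent)
            if ans["score"] &gt; threshold:           # step 2: confidence filter
                found.add(ans["answer"])
    return found
</code></pre>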
Coleridge Initiative - Show US the Data
4th place solution - LB probing, acronym detection, and NER
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our solution is composed of the 6 parts below.</p> <ul> <li>LB probing</li> <li>Acronym detection</li> <li>Acronym detection version 2</li> <li>String-matching with dataset-names from external data</li> <li>Dataset-name variation detection using NER</li> <li>String-matching with dataset-names from the train data</li> </ul> <h1>1. LB probing</h1> <p>The metric of this competition is the F-score. Under this metric, assuming the current score is <strong><em>F</em></strong>, a newly detected label improves the expected score only when the probability that the label is a true positive is greater than 0.8<strong><em>F</em></strong>. (With the F0.5 metric used here, F = 1.25TP / (1.25TP + FP + 0.25FN); adding a prediction that is a true positive with probability p raises the expected numerator by 1.25p and the denominator by 1, a net gain exactly when p &gt; TP / (1.25TP + FP + 0.25FN) = F / 1.25 = 0.8F. A numeric check is sketched at the end of this post.) Therefore, it is important to estimate the private test score in order to determine the detection threshold. For example, if <strong><em>F</em></strong> is 0.6, the best threshold is 0.48, and if <strong><em>F</em></strong> is 0.4, the best threshold is 0.32. For this reason, it is very important to know the number of training-data labels in the private test data, because it strongly affects the private test score. <br> To tackle this problem, we did LB probing. In this competition, the public test data contains duplicates of the train data. Therefore, we can create a submission with only true-positive labels and no false-positive labels by applying true-positive labels of the train data to their duplicates. Thus, by tying the number of submitted true-positive labels to a quantity of interest in the hidden test data, we can read information about the hidden test data off the submission score. Using this strategy, we got rough estimates of the values below (the actual code is <a href="https://www.kaggle.com/osciiart/lb-probing-3" target="_blank">this</a>, which I shared during the competition, and <a href="https://www.kaggle.com/osciiart/lb-probing-4" target="_blank">this</a>). </p> <ul> <li>The number of the public test data: 923</li> <li>The number of the private test data: 7,695</li> <li>The number of labels in the public test data: 8,546</li> <li>The number of labels in the private test data: 62,671</li> <li>The number of detected labels in the test data by string-matching of train-data labels: 1,717 </li> </ul> <p>From these results, we found that the public test score of train-data label string-matching is very high (0.530), but there are very few train-data labels in the test data (1,717). Therefore, at least 1,600 of the train-data labels in the test data might be in the public test data and very few in the private test data. Consequently, the public score of a submission that discards the train-data labels from its predictions correlates well with the private test score, and the best submission can be obtained by finding the submission with the max score without train-data labels and adding train-data label string-matching on top of it.<br> <strong>By this approach, we succeeded in selecting the best-private-score submission out of our 201 submissions.</strong> We knew we could survive the big shake of the private test LB; this was a very, very big advantage for our team.</p> <h1>2. Acronym detection</h1> <p>Most datasets have acronyms (e.g., National Education Longitudinal Study → NELS). 
So, we did acronym detection to detect dataset-names that are not included in the train-data labels. The following procedure was used to extract them.</p> <ol> <li>Make a list of words by splitting a text by spaces.</li> <li>If a word in the list is surrounded by () and has uppercase characters and no lowercase characters, it is detected as an acronym candidate.</li> <li>If the number of characters in the acronym candidate is less than a threshold, remove it.</li> <li>Extract a few words before the acronym candidate from the text as a dataset-name candidate.</li> <li>If the initial characters of each word in the dataset candidate can form the acronym candidate, detect them as a dataset-name/acronym pair. (The dataset candidate is allowed to have initial characters unrelated to the acronym candidate.)</li> <li>Extract only those dataset-names that contain keywords (study, studies, data, survey, panel, census, cohort, longitudinal, or registry).</li> <li>Exclude dataset-names that contain ban words (system, center, committee, etc.).</li> <li>Apply the clean_text function.</li> <li>Exclude a dataset-name if the Jaccard score between it and any train-data label or acronym-detection label is greater than or equal to 0.5.</li> <li>Perform string-matching against the train and test data with the detected dataset-names and count the number of occurrences of each dataset-name among the texts. Extract only those dataset-names whose count is above a threshold, because a name that appears in a lot of texts is more likely to be a real dataset-name.</li> <li>Finally, perform string-matching using the extracted dataset-names. A dataset-name is detected as a label only when it appears more than a threshold number of times in the text. </li> </ol> <p>The acronym itself is also detected as a dataset-name. String-matching is performed on both the dataset-name and the acronym. The acronym is detected as a label only when both it and its long name appear more than a threshold number of times in the text.<br> With this acronym detection, we got a score of <strong>0.418</strong> on the public LB and <strong>0.436</strong> on the private LB. Each threshold was chosen based on the public LB score.</p> <h1>3. Acronym detection version 2</h1> <p>To obtain more dataset-names, we performed a more aggressive acronym detection. We extract words that contain uppercase characters and no lowercase characters from the texts as acronym candidates, and search the entire text for a chunk of words that is valid as the full name of the acronym candidate (the actual code is <a href="https://www.kaggle.com/osciiart/210615-det-acronym-ver2/notebook?scriptVersionId=66042722" target="_blank">this</a>). This acronym detection does not improve the leaderboard score by itself, because it detects many false-positive labels, but we used the dataset-names it detected to train the NER model described in a later step.</p> <h1>4. String matching with dataset-names from external data</h1> <p>We used the external U.S. government dataset-names obtained from <a href="https://www.kaggle.com/mlconsult/100000-govt-datasets-api-json-to-df/" target="_blank">this notebook</a>. 
To reduce false positives, we applied the processing below.</p> <ol> <li>Extract only those dataset-names that contain keywords (study, studies, etc.).</li> <li>Apply the clean_text function.</li> <li>Exclude a dataset-name if the Jaccard score between it and any train-data label or acronym-detection label is greater than or equal to 0.5.</li> <li>Exclude a dataset-name if the number of words it contains is less than a threshold.</li> <li>Perform string-matching against the train and test data with the extracted dataset-names and count the number of occurrences of each dataset-name among the texts. Extract only those dataset-names whose count is above a threshold.</li> <li>Finally, perform string-matching using the extracted dataset-names. </li> </ol> <p>This approach improved the public LB score from <strong>0.418</strong> to <strong>0.424</strong> and the private LB score from <strong>0.436</strong> to <strong>0.486</strong>.</p> <h1>5. Dataset-name variation detection using named entity recognition (NER)</h1> <p>We attempted to train a NER model as a solution to this competition, using BERT or RoBERTa with the train data or the Rich Context competition data as training data. However, NER models never outperformed the rule-based approaches. We think this is because a large number of true-positive labels were intentionally excluded from the provided train data, so the train data is incomplete as training data for machine learning. However, we found that NER can cover a weakness of string-matching: it is useful for detecting dataset-name variations that string-matching cannot catch. For example, the National Education Longitudinal Study is sometimes quoted as the National Educational Longitudinal Survey. <br> We used the spacy library to train a NER model, using the train-data labels, the acronym-detection labels, the acronym-detection-version-2 labels, and the external U.S. government labels for training.<br> We detected dataset-name candidates in the test data using the trained NER model, calculated Jaccard scores between the candidates and the train-data labels or acronym-detection labels, and selected candidates with a Jaccard score greater than or equal to 0.5 as variations of existing labels. This approach improved the public LB score from <strong>0.424</strong> to <strong>0.440</strong> and the private LB score from <strong>0.486</strong> to <strong>0.504</strong>.</p> <h1>6. String matching with dataset-names from the train data</h1> <p>Finally, we applied basic string-matching using the train-data labels. We also used the acronyms of the train-data labels for string-matching. This approach improved the public LB score from <strong>0.440</strong> to <strong>0.614</strong> and the private LB score from <strong>0.504</strong> to <strong>0.513</strong>.</p> <h1>Codes</h1> <p><a href="https://github.com/OsciiArt/Kaggle_Coleridge_4th_Solution" target="_blank">Kaggle_Coleridge_4th_Solution</a> (main) <br> <a href="https://github.com/usuyama/coleridge_ner" target="_blank">coleridge_ner</a> (sub). <br> Please refer to the README.md of the main repository to reproduce the training and the prediction. </p>
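<p>A numeric sanity check of the 0.8F rule from the LB-probing section; the TP/FP/FN counts are made-up illustrative numbers.</p> <pre><code>def expected_f05(tp, fp, fn, p):
    """Expected F0.5 after adding one prediction that is a TP with prob p."""
    f_if_tp = 1.25 * (tp + 1) / (1.25 * (tp + 1) + fp + 0.25 * (fn - 1))
    f_if_fp = 1.25 * tp / (1.25 * tp + fp + 1 + 0.25 * fn)
    return p * f_if_tp + (1 - p) * f_if_fp

current = 1.25 * 60 / (1.25 * 60 + 20 + 0.25 * 40)  # F ~ 0.714 at TP/FP/FN = 60/20/40
print(expected_f05(60, 20, 40, p=0.60) &gt; current)   # p &gt; 0.8*F ~ 0.571 -&gt; True
print(expected_f05(60, 20, 40, p=0.55) &gt; current)   # below the threshold -&gt; False
</code></pre>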
Coleridge Initiative - Show US the Data
516th solution with the one which is supposed to win Copper medal in private score
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The one with a private score of 0.196 (submitted; finished 516th)<br> originally by Jelal: External_Datasets_Matching + MLMv4</p> <p>The one with a private score of 0.228 (didn't submit)<br> <a href="https://www.kaggle.com/batprem/coleridge-initiative-solution-mlmv2-1da4df?scriptVersionId=66296172" target="_blank">https://www.kaggle.com/batprem/coleridge-initiative-solution-mlmv2-1da4df?scriptVersionId=66296172</a><br> originally from "Coleridge Initiative Solution - MLMv2"</p> <p>I joined this competition at an earlier stage but didn't make any progress. Two days before the deadline I came back to check it again, and the validated dataset seemed to have changed. I decided to modify the public notebooks by "Jelal" and "Coleridge Initiative Solution - MLMv2" by playing around with the datasets.</p> <p>The Jelal modification gave slightly higher public scores of 0.576 and 0.575, while the Coleridge one gave me 0.574, so I chose the top 2 as my final submissions. It turns out that on the private score the Coleridge one performed better.</p>
Coleridge Initiative - Show US the Data
55th place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Coleridge Initiative 55th place Solution</h1> <p>Since I went through the huge shakedown (3rd -&gt; 55th), this may not be an effective approach, but I will share my solution with you. </p> <p>If you give me any feedback on this solution, I would be happy!! </p> <p>I used a two-stage prediction pipeline: text matching and NER. </p> <p>Of the 2nd-stage predictions, I added the ones that are not similar to the 1st-stage predictions to the 1st-stage predictions. <br> This score was lower than the 1st-stage score on public. </p> <p>The following table shows the public and private score of each stage. <br> My inference notebook is <a href="https://www.kaggle.com/spshota/coleridge-text-matching-and-ner-cnt?scriptVersionId=66373085" target="_blank">here</a>.</p> <table> <thead> <tr> <th>Stage</th> <th>Public Score</th> <th>Private Score</th> </tr> </thead> <tbody> <tr> <td>1st</td> <td>0.673</td> <td>0.319</td> </tr> <tr> <td>2nd</td> <td>0.636</td> <td>0.303</td> </tr> <tr> <td>1st+2nd</td> <td>0.672</td> <td>0.322</td> </tr> </tbody> </table> <h2>1st Stage: Text Matching with Additional Labels</h2> <p>I added labels to the labels of the training data by the following method. <br> I applied this only to the train data, but the variant applied to the test data as well was my best submission. (Public: 0.662, Private: 0.427) </p> <h3>1-1. Headline Style Phrases Extraction</h3> <p>Looking at the articles, we can see that most of the labels in the training data are used in the following structure. <br> I used regular expressions to extract the bolded parts (a regex sketch is given at the end of this post). </p> <p>"the" + <strong>[several words written in headline style]</strong> + "(" + [an abbreviation of the headline] + ")" <br> e.g. the Alzheimer's Disease Neuroimaging Initiative (ADNI) </p> <p>After the extraction, I counted the DF values of the headline phrases. </p> <h3>1-2. Additional Labels Selection</h3> <p>I selected phrases with a rule-based approach. <br> Of the phrases extracted in 1-1, those matching the following conditions were added to the labels for text matching. </p> <ul> <li>DF value &gt;= 20 </li> <li>Contains one of the curated topic words </li> <li>Does not contain any of the curated stop words </li> <li>Not similar to any of the labels of the train data (Jaccard similarity &lt; 0.5)</li> </ul> <h2>2nd Stage: Named Entity Recognition (NER)</h2> <p>I spent most of my time on this approach because I joined this competition to learn NER. <br> However, since my approach at this stage was strongly fitted to public and basically did the same thing as text matching, the total score did not change much from the predictions of the first stage. </p> <h3>2-1. Training Data for NER</h3> <p>I generated training data for NER from the text divided into sentences, as follows. </p> <pre><code>Positive samples: text contains the labels of the train and 1-2 Negative samples: text contains the labels not selected in 1-2 </code></pre> <h3>2-2. Model Architecture</h3> <p>ELECTRA-base + Spatial Dropout + Bi-LSTM (using the attention mask) + Dense (activation: Softmax) + CRF (using the attention mask) </p> <h3>2-3. 
Hyperparameters</h3> <table> <thead> <tr> <th>Item</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Optimizer</td> <td>Adam</td> </tr> <tr> <td>Learning rate</td> <td>1.6e-4</td> </tr> <tr> <td>Sequence Length</td> <td>128</td> </tr> <tr> <td>Spatial Dropout ratio</td> <td>0.1</td> </tr> <tr> <td>Bi-LSTM output size</td> <td>256 * 2</td> </tr> <tr> <td>Batch size</td> <td>128</td> </tr> <tr> <td>Folds</td> <td>4</td> </tr> <tr> <td>Epoch</td> <td>100</td> </tr> <tr> <td>Patience of early stopping</td> <td>20</td> </tr> </tbody> </table> <h3>2-4. Label Selection</h3> <p>The labels predicted by 3 of the folds were taken as the predictions of this stage. <br> (I tried to use the mean values of the CRF's potentials, but they didn't work well…) </p> <p><a href="https://www.kaggle.com/leecming" target="_blank">@leecming</a> <br> Thank you for your comment!<br> It has been fun to fight near the top of the public LB for 3 months!!<br> (But my private score was much different from yours lol.)</p> <blockquote> <p>It’s unfortunate it didn’t translate to private LB but tbh, not sure what else you could’ve done given we only had public LB as reference.</p> </blockquote> <p>Yes, I also think that was an important and difficult point.<br> I think the biggest reason for my defeat was that I fought without a guideline for tackling the problem.</p> <p>I would like to make use of this experience in the next competition and aim for gold.</p> <p><a href="https://www.kaggle.com/kucherevskiy" target="_blank">@kucherevskiy</a> <br> Thank you for your comment!</p> <blockquote> <p>One of the approaches that I used to solve this problem was also NER, on manually marked up data I noticed that architectures based on transformers are worse at defining the context on the basis of which one can say whether the described dataset was used in the article or it is just mentioned.</p> </blockquote> <p>I think so too. <br> Using NER without rich context recognition is, I think, almost the same as text matching.</p> <blockquote> <p>It's funny that the best solutions are based simply on regular expressions)) It was a great experience anyway!</p> </blockquote> <p>It's so sad. <br> But I should have focused on that solution. lol</p>
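<p>A minimal regex sketch of the headline-style extraction in step 1-1, e.g. "the Alzheimer's Disease Neuroimaging Initiative (ADNI)". The exact pattern I used is not shown here, so treat this as an assumption:</p> <pre><code>import re

PATTERN = re.compile(
    r"\bthe\s+((?:[A-Z][\w'&amp;-]*\s+){1,8})\(\s*([A-Z][A-Za-z]*)\s*\)"
)

def extract_headline_phrases(text):
    return [(long_form.strip(), abbrev)
            for long_form, abbrev in PATTERN.findall(text)]

print(extract_headline_phrases(
    "data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort"))
# [("Alzheimer's Disease Neuroimaging Initiative", 'ADNI')]
</code></pre>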
Coleridge Initiative - Show US the Data
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Created a <a href="https://github.com/vlomme/Find-dataset-from-text" target="_blank">github</a> repo</p> <p>No. I just didn't want to repeat datasets. I think you can just delete it; the rating shouldn't go down</p> <p>Thanks. I'm sad that you didn't participate</p> <p>Thanks to the organizers for an interesting competition.</p> <h1><a href="https://www.kaggle.com/vlomme/coleridge-initiative-5th-place?scriptVersionId=66408633" target="_blank">My solution</a></h1> <p>I tried two approaches (no dataset-label string matching)</p> <ul> <li>Regular expressions - 0.5/0.56</li> <li>BERT NER - 0.37/0.52</li> </ul> <p>Ensembling, searching among known datasets and other approaches with BERT did not improve the results</p> <h2>Regular expressions</h2> <ul> <li>Look for uppercase letters</li> <li>Look for nearby words containing these letters</li> <li>Also just look for words beginning with uppercase letters</li> <li>Memorize everything found and search for it among the other documents (see the sketch at the end of this post)</li> </ul> <h2>BERT NER</h2> <ul> <li>Split the document into sentences of 200-400 characters</li> <li>Select potential candidates</li> <li>Take 90% of sentences with tags and 10% without tags</li> <li>Use 3 classes - no class, first word and last word of the dataset name</li> <li>Predict and select good candidates</li> </ul>
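<p>A sketch of the cross-document propagation step ("memorize everything found and search among the other documents"); clean_text here mirrors the competition's standard label-cleaning function, and the data layout is an assumption.</p> <pre><code>import re

def clean_text(txt):
    return re.sub('[^A-Za-z0-9]+', ' ', str(txt).lower()).strip()

def propagate(found_per_doc, raw_texts):
    """found_per_doc: dict doc_id -&gt; set of labels; raw_texts: dict doc_id -&gt; text."""
    all_labels = set().union(*found_per_doc.values())
    for doc_id, text in raw_texts.items():
        cleaned = clean_text(text)
        for label in all_labels:
            if clean_text(label) in cleaned:
                found_per_doc[doc_id].add(label)
    return found_per_doc
</code></pre>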
Coleridge Initiative - Show US the Data
6th place solution (lucky novices!)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Together with my teammate, <a href="https://www.kaggle.com/federicomatorra" target="_blank">@federicomatorra</a>, we are a couple of lucky newcomers! But I have to say it was not luck at all: we invested many hours thinking of and implementing a strategy, and finally the results were favorable.</p> <p>We were quite sure that the private test set was a completely different issue, and so it was important to think of a good statistical approach, but with the additional restriction of scarce resources and ignorance of the latest developments in NLP.</p> <p>So, in summary, although we evaluated many ideas, we used spaCy 2 (v2.spacy.io) for text classification in 2 steps: first for separating sentences (i.e., sentences that mention a dataset vs. no dataset at all) and second for distinguishing datasets from ORGs, LOCs, etc., and many other acronyms and names that were only (at least in our view) false positives.</p> <p>To train the text classification models, we used the dataset names provided for training and managed to create a list of additional dataset names together with corresponding acronyms (positives and negatives) using an abbreviation detector (<a href="https://allenai.github.io/scispacy/" target="_blank">https://allenai.github.io/scispacy/</a>) and matching for words such as dataset, databank, survey, etc.</p> <p>We think it was simple enough to be trained and tested fast, and flexible enough to adapt to unknown publications.</p> <p>The notebook that scored 6th place was a hybrid of text matching and the approach mentioned above, but as I said, in our opinion, the public and private sets of documents were two completely different universes, so it was not possible to finish in the first ten using only string matching.</p> <p>We want to thank the organizers and congratulate the immense number of participants and, in particular, those who finished in the medallist positions. Also, we recognize the work done by Coleridge to set up the training and testing sets.</p>
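<p>A minimal sketch of the first spaCy 2 text-classification step (dataset-mentioning sentence vs. not), using the standard spaCy 2 API; the training examples here are made up, whereas the team built theirs from the provided labels plus scispacy abbreviations:</p> <pre><code>import random
import spacy

nlp = spacy.blank("en")
textcat = nlp.create_pipe("textcat", config={"exclusive_classes": True})
textcat.add_label("DATASET")
textcat.add_label("NO_DATASET")
nlp.add_pipe(textcat)

# made-up toy examples for illustration only
train_data = [
    ("We used the National Education Longitudinal Study.",
     {"cats": {"DATASET": 1.0, "NO_DATASET": 0.0}}),
    ("The weather was pleasant that year.",
     {"cats": {"DATASET": 0.0, "NO_DATASET": 1.0}}),
]

optimizer = nlp.begin_training()
for epoch in range(10):
    random.shuffle(train_data)
    losses = {}
    for text, annotations in train_data:
        nlp.update([text], [annotations], sgd=optimizer, losses=losses)

print(nlp("We analyzed the Baccalaureate and Beyond data.").cats)
</code></pre>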
Coleridge Initiative - Show US the Data
73rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our team used an ensemble of Roberta base and Bert base uncased.</p> <ul> <li>Train data - the competition training data</li> <li>Labels - competition labels + extra labels shared by <a href="https://www.kaggle.com/mlconsult" target="_blank">@mlconsult</a> + labels from Paperswithcode.</li> </ul> <p>Approach</p> <ul> <li>Roberta model trained for 2 epochs and Bert base trained for 1 epoch. </li> <li>Get the common predictions from both models (intersect the predictions).</li> <li>Remove predictions which are &lt;= 2 words, other than these ('adni', 'cccsl', 'ibtracs', 'slosh model').</li> <li>Along with the predictions, add the string matches of train_labels and the extra labels shared by <a href="https://www.kaggle.com/mlconsult" target="_blank">@mlconsult</a>. </li> </ul> <p>Apparently the last submission I made for this competition turned out to be our best non-string-matching one on the private LB too. <br> We were sure that string matching alone wouldn't help on the private LB. But at the same time we were sure that if we performed badly on the public LB with string matching, it would affect the private LB too, since the public LB is an indication of the possible datasets present in the private LB, and we did not feel that this could be overfitting the public LB. </p> <p>Our intuitions with string matching:</p> <ul> <li><p>Of course, like everyone else, even my best score on the public LB was a string match; however, I used the string-match submissions to understand the labels.</p></li> <li><p>If we add one extra label and the score goes up, the label could be present in the private LB too, and we can use that label for training. </p></li> <li><p>However, if an extra label sounds like a dataset and appears in the data (by string match) but does not improve the public LB, the label is probably a FP, since it may not have been labelled by the Kaggle team. So there is no point in adding such labels to the model, as they may not be labelled in the private LB either. </p></li> <li><p>If adding one extra label which sounds like a dataset did not change the public LB score, or improved it, we can add it to the model. If it did not change, the label did not match, so there is no chance of a FP on the 18%, but it could still be present in the private LB; if the score improved, the label is in the 18% of the data, and the chance of it being in the private data is also high.</p></li> </ul> <p>Apparently I trusted the public LB even though it's only 18% of the overall data.</p> <p>Thank you so much to my teammate <a href="https://www.kaggle.com/jagadish13" target="_blank">@jagadish13</a> for pairing up with me and choosing the final submission trusting the extra labels :)</p> <p>This competition will be one of the special ones for me, though there were a lot of changes that happened in between, with not enough labels and no proper validation strategy. </p> <p>I got my first medal and I'm happy, though I slipped by some 30 positions :)</p> <p>You are welcome 👍 😊</p>
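<p>A minimal sketch of the post-processing described above (intersect the two NER models' predictions, drop short predictions except the whitelisted ones, then add the string matches); the inputs here are toy examples:</p> <pre><code>def combine(roberta_preds, bert_preds, string_match_preds):
    """Intersect the two NER models, drop short predictions, then add string matches."""
    keep_short = {"adni", "cccsl", "ibtracs", "slosh model"}
    common = set(roberta_preds) &amp; set(bert_preds)
    # keep only predictions longer than 2 words, unless whitelisted
    filtered = {p for p in common if len(p.split()) > 2 or p in keep_short}
    return sorted(filtered | set(string_match_preds))

print(combine(
    ["adni", "sea lake and overland surges", "rural urban continuum codes"],
    ["adni", "rural urban continuum codes", "noaa tide gauge"],
    ["national education longitudinal study"],
))
</code></pre>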
Coleridge Initiative - Show US the Data
How I missed a gold medal in my first kaggle competition (could've got 13th place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your kind words. I hope the shake up won’t be as horrible as coleridge😂</p> <p>I want to start by saying thank you, Coleridge Initiative, for hosting such an interesting competition, and thank you to the community that makes Kaggle such a friendly, cozy place to learn data science. </p> <p>This is my first competition on Kaggle and my first experience in NLP. I came into the competition not knowing what to expect. I came out with a bronze medal but, most importantly, the knowledge I learned from so many people. Special thanks to <a href="https://www.kaggle.com/tungmphung" target="_blank">@tungmphung</a> for sharing great notebooks that I learned so much from at the beginning.</p> <p>As the title says, I could've got a gold medal. I had a notebook with private LB 0.444<br> <img src="https://i.ibb.co/fqQshCc/0-444.png" alt=""> I stopped working on the competition a few weeks ago and I forgot that I had to choose the notebooks for the private LB! </p> <p>In the end, the bronze medal I got was from a LB probing notebook 😂 lol</p> <p>What I did in the would've-been-13th-place notebook was really simple:</p> <ol> <li>divide the train data into 256-token-long chunks</li> <li>use all chunks that have positive labels in them and discard half of the chunks with no labels</li> <li>take XLM-Roberta finetuned on conll03, copy from its dense layer only the weights for the "O" (out of entity) and "MISC" labels, and use "MISC" as the positive class in a binary classification head</li> <li>train for 3 epochs</li> </ol> <p>I was having mixed feelings about losing a gold medal for a few moments, but I had a great time working on this competition and learning from every notebook and discussion; what more can I ask for?</p> <p>I am still very much a beginner myself, so I don't know if I could help you much. But if you want my 2 cents, here it is. </p> <p>Public kernels are very good for beginners. I went through all the popular ones, picked out the ones that made more sense to me and tried to replicate them. </p> <p>The Huggingface source code is a great place to learn from when implementing a training loop and adding custom features. </p> <p>Towards Data Science and Medium are good sources to learn from as well. Posts generally consist of easy-to-understand explanations and illustrations; check them out first when they show up in your google search.</p> <p>Research papers, this is probably the most important one. </p>
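<p>A sketch of step 3 (seeding a binary head from a CoNLL-03 token-classification head). The checkpoint name is an assumption; the author only says "XLM-Roberta finetuned on conll03":</p> <pre><code>import torch
import torch.nn as nn
from transformers import AutoModelForTokenClassification

# assumed checkpoint for illustration
src = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-large-finetuned-conll03-english")

o_idx = next(i for i, l in src.config.id2label.items() if l == "O")
misc_idx = next(i for i, l in src.config.id2label.items() if "MISC" in l)

# binary head: class 0 = outside of entity, class 1 = dataset mention (seeded from MISC)
binary_head = nn.Linear(src.config.hidden_size, 2)
with torch.no_grad():
    binary_head.weight[0].copy_(src.classifier.weight[o_idx])
    binary_head.bias[0].copy_(src.classifier.bias[o_idx])
    binary_head.weight[1].copy_(src.classifier.weight[misc_idx])
    binary_head.bias[1].copy_(src.classifier.bias[misc_idx])

src.classifier = binary_head
src.config.num_labels = 2
</code></pre>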
Coleridge Initiative - Show US the Data
NoML : beginner's luck (28th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Coleridge Initiative - Show US the Data <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Even though there was quite a large drop (-13 places) on the private leaderboard, I got a nice score (maximum 0.410) through public leaderboard probing. I used simple string matching and RE matching. Although I included BERT in my final submission, it didn't give me a significant lift.</p> <h3>1. Simple String Matching</h3> <p>I googled existing datasets and cherry-picked the ones which I thought were likely to be contained in the test data. Although the amount was small (about 2kb), it made a huge lift on the public leaderboard.</p> <h3>2. Regular Expression Matching</h3> <p>At first I mapped the original tokens in the texts into token types (acronym, word starting with a capital letter, etc.) using regular expressions. Then I used RE again to find phrases which are likely to contain dataset names, and filtered the phrases based on indicator words (dataset, database, survey, etc.).</p> <p>This LB probing indeed led to overfitting, however. It's clear that using proper ML models combined with these methods would have made a better result. My notebook will be made public as soon as I check the references and everything else.</p>
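<p>A minimal sketch of the token-type mapping idea: map each token to a one-character type, regex over the type string, and filter by indicator words. The type alphabet and patterns are hypothetical; the write-up doesn't give the exact mapping:</p> <pre><code>import re

def token_type(tok):
    if re.fullmatch(r"[A-Z]{2,}", tok):
        return "A"          # acronym
    if re.fullmatch(r"[A-Z][a-z]+", tok):
        return "C"          # capitalized word
    if tok.lower() in {"of", "for", "and", "the", "on"}:
        return "s"          # small connector word
    return "o"              # anything else

INDICATORS = {"study", "survey", "dataset", "database"}

def candidates(sentence):
    toks = sentence.split()
    types = "".join(token_type(t) for t in toks)
    out = []
    # phrase of capitalized/connector tokens, optionally followed by an acronym
    for m in re.finditer(r"C[Cs]{1,6}(?:A)?", types):
        phrase = " ".join(toks[m.start():m.end()])
        if any(w.lower().rstrip(".,") in INDICATORS for w in phrase.split()):
            out.append(phrase)
    return out

print(candidates("We used the Early Childhood Longitudinal Survey ECLS data ."))
</code></pre>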
CommonLit - Evaluate Student Summaries
108th solution🥉
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Solution</h1> <p>My best submission was almost the same as this public code.<br> (<a href="https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect" target="_blank">https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect</a>)<br> I thank <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> for sharing such sophisticated notebooks!</p> <p>I changed the deberta-v3-base to deberta-v3-large.<br> Training was done in a google colab notebook.</p> <h1>What I tried but didn't work</h1> <ul> <li>MLM training for summary texts</li> <li>summary_text + prompt_question + prompt_text training.</li> <li>feeding the manual features made in the above public notebook to the nlp head.</li> </ul> <p>The fact that the training set and test set have no common prompt_id, and that the training set has only 4 prompt_ids (seemingly not enough for the model to learn the interaction pattern between prompts and summaries), seemed to me the most important factor in this competition.</p> <p>So I thought it might be efficient to teach the model the interaction of prompt and summary using the test set, because we have only 4 prompts in our training set but many prompts in the test set. So I tried deberta-v3-base MLM training on the test set before the regression training, but it didn't work.</p> <p>Thinking about how to tackle this point consumed most of the time I spent on this competition…<br> I eventually selected the simple deberta-v3-large solution, but there was potential for improvement…</p> <h1>What I should have done</h1> <ul> <li>ensemble nlp models using different combinations of input order</li> <li>use longer token length</li> <li>ensemble GBDT stacking</li> <li>pooling techniques (mean/max pooling, convolution, LSTM)</li> </ul> <p>and so on.</p> <p>I hadn't noticed that changing the order or combination of the inputs to the nlp model makes enough of a difference for the models to be used in an ensemble!</p> <h1>Other</h1> <p>I noticed that the deberta sequence classification model implementation uses a 'type_ids' argument instead of a 'token_type_ids' argument.<br> This seems to be one of the causes of including the prompt_text in the deberta input harming model performance when you use the huggingface Trainer and AutoTokenizer combination.<br> Maybe the SEP token is not enough for the model to distinguish where to score the content and wording.<br> You can solve this issue by renaming the key returned by the tokenizer from token_type_ids to type_ids (see the sketch below).</p> <p><a href="https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/apps/models/sequence_classification.py" target="_blank">https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/apps/models/sequence_classification.py</a></p>
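<p>A minimal sketch of the key-renaming workaround described above, assuming a HF AutoTokenizer feeding a model that expects the original Microsoft DeBERTa argument name:</p> <pre><code>from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")

def tokenize(summary, prompt):
    enc = tokenizer(summary, prompt, truncation=True, max_length=512)
    # the MS DeBERTa sequence-classification code expects 'type_ids',
    # not the HF-standard 'token_type_ids'
    enc["type_ids"] = enc.pop("token_type_ids")
    return enc
</code></pre>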
CommonLit - Evaluate Student Summaries
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>back translated samples were pseudo labeled </p> <p>Thank you for your question. This model had all 7165 samples provided in the competition + back-translated pseudo-labeled samples. <br> So every fold had ~9000 training samples (~6000 provided by the competition + ~3000 back-translated samples) </p> <p>Thank you Learning Agency Lab and Kaggle for hosting this competition. We are relieved 😌 to survive the shake up. </p> <p><strong>Context:</strong> <a href="http://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview/description" target="_blank">www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview/description</a></p> <p><strong>Data:</strong> <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></p> <h3>Overview of the approach</h3> <p>Our selected submission had 17 diverse models. We used stacking with LGBM. </p> <p>We used a one-prompt-per-fold CV strategy. All our models had a collate function that speeds up training as well as inference. Lower batch sizes were helpful as well. Most models were evaluated at the end of each epoch; a few models were evaluated multiple times per epoch. Many of the diverse and smaller models were distilled (described below). </p> <p>All of us were able to contribute models to the stack; some of us never checked individual model scores because the public LB seemed unstable. Our stack seemed more stable than the individual models. </p> <p>Apart from model predictions we had the following features in the stack: </p> <pre><code>word_overlap_count
bigram_overlap_count
bigram_overlap_ratio
trigram_overlap_count
trigram_overlap_ratio
- Text stat features
Osman
Gutierrez_polini
Dale_chall_readability_score
flesch_kincaid_grade
</code></pre> <h3>Model Description</h3> <p>Here we describe each of the models: </p> <p><strong>(A)</strong> <strong>Deberta v3 large - full prompt text pseudo labeled model</strong> [ar3]<br> This model had the best CV (0.4721) in the stack. It was trained with max len 1790 and an attention head on top, and also used a back-translated pseudo-labeled dataset: every sample was translated to German and then back to English. We randomly selected 50% of this data for training, so every training fold had a different set of ~3100 pseudo-labeled samples; the validation set had no pseudo-labeled samples. 
<br> The loss function for this model was a combination of SmoothL1Loss, RMSE loss and rank loss - <a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a> described the loss function <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/446554" target="_blank">here</a><br> Input for this model - <code>[ANSWER_START] + summary_text + [ANSWER_END] + [PROMPT_START] + prompt_text [PROMPT_END] + [QUESTION_START] + prompt_question + [QUESTION_END]</code><br> Inference maxlen = 1024</p> <hr> <p><strong>(B)</strong> </p> <ul> <li>Deberta V3 Large [MK 43] - MeanPooling</li> <li>Deberta V3 Large [MK 45] - MeanPooling</li> </ul> <p>These were the same model with different seeds and mean pooling; the input to these was summary_text + prompt_question, RMSE loss, validation at the end of each epoch. </p> <hr> <p><strong>(C)</strong> </p> <ul> <li>Deberta V3 Large [r3] - CLS </li> <li>Deberta V3 Large [r4] - CLS</li> <li>Deberta V3 Large [r5] - CLS </li> </ul> <p>These were the same model with different seeds and the CLS token; the input to these was summary_text + prompt_question, MSE loss, validation at the end of each epoch. I added the extra text “Summarize:” to the prompt question. If we look 👀 at the data 🔎 carefully, we notice that there are prompts with "cite evidence", which have much longer summary texts and don't say "summarize". After reading other solutions, I should have tried some more variations on this. </p> <hr> <p><strong>(D)</strong> </p> <ul> <li>Bart Large - Attention Head (Distilled)</li> <li>Electra Large - Attention Head (Distilled)</li> <li>Funnel Large - Mean pooling (Distilled)</li> </ul> <p>All these models were distilled - 50% oof labels and 50% true labels, with AWP added - only summary_text as input, validated multiple times per epoch. </p> <hr> <p><strong>(E)</strong></p> <ul> <li>Deberta v3 large [ar2] - CLS Token </li> <li>Deberta v3 base [ar4] - CLS Token - Distilled model </li> <li>Deberta v3 XSmall [ar5] - CLS Token - Distilled model </li> <li>Deberta v3 XSmall [ar7] - Distilled model - Bigram Signal </li> </ul> <hr> <p>Deberta v3 large [ar2] - The order of input to this model was different from the remaining models - <code>text + SEP + prompt_question</code> </p> <p>Deberta v3 Base and XSmall models [ar4, ar5, ar7] - These were distilled models. We utilized oof labels from a previous stack. These models were trained using oof labels, whereas the validation set kept the true labels. </p> <p>Bigram Signal - </p> <pre><code>outputs = self.model(input_ids, attention_mask, token_type_ids)
# elementwise product of adjacent token states; the exact slices were
# garbled in the original post, so this is a plausible reconstruction
bigram_signal = outputs.last_hidden_state[:, :-1, :] * outputs.last_hidden_state[:, 1:, :]
feature = bigram_signal[:, 0, :]
return feature
</code></pre> <p><strong>(F)</strong> <strong>Deberta v3 large - Classification - Attention Head, Mean Pooling - (4x)</strong> [ar6]<br> After analyzing the data, we noticed there were 1134 distinct categories of content and wording, so we added a classification model with <code>BCEWithLogitsLoss</code>. We trained 4 different models with different heads, averaged them together and then added them to the stack. </p> <h4>Other</h4> <p>We haven't understood how or why the splits for training/public LB/private LB were created. The distribution of prompts for training and the public LB is very different from the private LB: very long prompt texts in the public and training sets but not in the private LB. The number of summary texts per prompt is also very different for training/public LB and private LB; unfortunately this resulted in a shake-down for many, many teams. 
</p> <p><em>Many other things were tried but were not part of the selected submission</em></p> <ul> <li>MLM based on this dataset shared <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/441202" target="_blank">here</a> </li> <li>RAG (similar to what was shared in LLM )</li> <li>Many different architectures, heads, hyperparameters, loss functions </li> <li>2x T4 Inference </li> <li>Large distilled models </li> <li>Adding more features to LGBM</li> <li>Catboost, XGBoost, NN along with LGBM</li> <li>Generating data from GPT3.5</li> <li>I'm glad we didn't spend much time on the sliding-window approach, because the max len of prompts in the private LB is less than or equal to the public LB. </li> </ul> <p><strong>Acknowledgements</strong><br> Thank you <a href="https://www.kaggle.com/kononenko" target="_blank">@kononenko</a> for <a href="https://www.kaggle.com/code/kononenko/pip-install-nlp-mit?scriptVersionId=142526877" target="_blank">this</a> compliant textstat kernel </p> <p>Thank you <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> for the <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">LGBM features</a> </p> <hr> <p>🙏 Thank you to my amazing teammates, grateful to have worked together on this with you all. </p> <ul> <li><a href="https://www.kaggle.com/aman1391" target="_blank">@aman1391</a> </li> <li><a href="https://www.kaggle.com/phoenix9032" target="_blank">@phoenix9032</a> </li> <li><a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> </li> <li><a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a> </li> </ul>
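<p>A minimal sketch of the LGBM stacking step described in the overview: the stack's inputs are the 17 models' out-of-fold predictions plus the overlap/textstat features, with one regressor per target. The hyperparameters here are illustrative, not the team's:</p> <pre><code>import numpy as np
import lightgbm as lgb

def fit_stacker(X_oof, X_feats, y):
    """X_oof: (n_rows, 17) oof predictions; X_feats: hand-crafted features;
    y: one target column (content or wording) - fit one stacker per target."""
    X = np.hstack([X_oof, X_feats])
    model = lgb.LGBMRegressor(
        n_estimators=1000, learning_rate=0.02,
        num_leaves=15, colsample_bytree=0.7)  # illustrative hyperparameters
    model.fit(X, y)
    return model
</code></pre>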
CommonLit - Evaluate Student Summaries
12th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your comment.</p> <p>CLS pooling using only the final layer certainly did not work. Therefore, I used the averaged CLS pooling of the last 4 layers, and the accuracy increased (call it CLS mean pooling?).</p> <p>But, since the training was not stable and the CV of some seeds was low, I tried several seeds and used the model with the highest CV.</p> <p>Thank you for organizing such an interesting competition!<br> I was unable to sleep due to the fear of a shakedown in the last part of the competition, but I am happy to have remained in the gold medal positions.</p> <p>I would like to briefly summarize my methodology.</p> <h1>Model &amp; Result</h1> <p>My best private solution is an ensemble of a long model with prompt_text and a short model without prompt_text.<br> The former is intended to evaluate whether the text is summarized correctly, and the latter to evaluate whether the sentence structure and word quality are good or bad.</p> <h2>Long Model</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3310894%2F0c8e645a91c267f1452d1356066ca558%2Flong.png?generation=1697338005852669&amp;alt=media"></p> <p>Using prompt_text and a long max_len worked very well for me. But the inference time is long (over 6 hours) and I was not able to ensemble with other long models. So, using the output of the pooling layer, I trained LightGBM to improve robustness.</p> <h2>Short Model</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3310894%2F0626c69c7fba14b4528ea768288fe45c%2Fshort.png?generation=1697338755742782&amp;alt=media"></p> <p>Most of this model is based on <a href="https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect" target="_blank">a very nice public notebook</a>.<br> The changes are as follows:</p> <ul> <li>change deberta-base to deberta-v3-large</li> <li>no text cleaning</li> <li>drop prompt_length</li> <li>change the definition of "overlap"</li> <li>freeze top 12 deberta layers / ensemble with a no-freeze variant</li> </ul> <h2>Ensemble Model</h2> <p>I simply ensemble (Long + Short)/2</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3310894%2F420023ddb770abcdfe7ad91ac5fec276%2Fens.png?generation=1697339278226917&amp;alt=media"></p> <h1>Not working for me</h1> <ul> <li>Text Cleaning</li> <li>Backbones other than deberta-v3-large</li> <li>MLM</li> <li>AWP</li> <li>Augmentation with ChatGPT</li> <li>SVR, xgboost (as alternatives to lightgbm)</li> </ul> <p>Thanks.</p>
CommonLit - Evaluate Student Summaries
14th place Solution (lucky for me)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The tuning was stable. I did not do too many re-runs, but the ones I did were quite ok. The tuning was done in a step-wise manner. After several experiments I found max_length impacted the results the most, so I worked on that one first by trying several candidates for max_length. Then I did a grid search on learning_rate and weight_decay. </p> <p>First, my HUGE thanks to the competition host. What a fantastic competition! I indeed had a fun time kaggling!<br> Also HUGE thanks for the shared notebooks and the valuable discussions, from which I got a lot of inspiration and enlightenment.</p> <p>I'm not actually an NLP expert with all sorts of experience and tuning skills, thus I felt very lucky to have got this far. Also because of the shake, which fortunately turned out to be in my favour this time; that doesn't happen very often, :p.</p> <p>Like many other top solutions, my model is actually a very simple deberta-v3-large, but the way I got there was rather serpentine.</p> <h3>MY SOLUTION</h3> <ul> <li><strong>Inputs</strong>:<br> Paired input into the tokenizer: <em>summary text</em> and <em>prompt title</em> +[SEP]+ <em>prompt text</em> +[SEP] + <em>prompt question</em></li> </ul> <pre><code># column keys and the first two kwargs were garbled in the original post;
# they are reconstructed here from the description above
prompt = sep.join([data['prompt_title'], data['prompt_text'], data['prompt_question']])
tokenized = tokenizer(
    data['text'],
    prompt,
    truncation=True,
    padding='max_length',
    max_length=config.max_seq_length,
)
</code></pre> <ul> <li><p><strong>Model</strong>:<br> deberta-v3-large. Like everybody else :P. I also tried some other models: deberta-v3-large-squad2, deberta-v3-base</p></li> <li><p><strong>Max Length</strong>:<br> I tried different max lengths; the final model used 1600.</p></li> <li><p><strong>Model set up and Hyperparameters</strong>:<br> learning rate: after many rounds of hyperparameter tuning, I found 6e-6 the best<br> num_epochs: 4 <br> weight_decay: after hyperparameter tuning, 1e-4 <br> warmup_ratio: 0<br> batch_size: 2 for training and eval (I basically cannot afford a larger batch_size due to GPU memory)<br> lr_scheduler_type: cosine<br> optimizer: adamw<br> I also tried freezing some layers, but it did not boost local CV, so in the final model I actually did not freeze anything.<br> No dropout was used, no extra pooling was used.<br> Moreover, I found that local CVs and public LBs positively correlated with max length in training. Later the private LB proved this: models with larger max lengths tend to have better private LBs.</p></li> <li><p><strong>CV Results and Ensemble</strong>:<br> the best private LB model achieved CVs of 0.5513 (fold 814d6b), 0.4587 (fold 39c16e), 0.5038 (fold 3b9047) and 0.4316 (fold ebad26)<br> The final model is a simple ensemble of the 4 folds. <br> I actually considered training the model using the setup above on the whole training data set. However, in the end I chose to do an ensemble of the models of different folds. This might not be the best practice, but it gave me a better opportunity to discover the relationship between CV and public LB.<br> Public LB: 0.429<br> Private LB: 0.458</p></li> </ul> <h3>Things did NOT work</h3> <ol> <li><p><strong>MLM did not work</strong><br> This was actually a major mistake I made in this competition. 
I took it for granted that MLM of deberta on previous commonlit datasets would help boost the CV. With that in mind, I actually spent the whole of August and the first half of September only training models pretrained with MLM. The best model I got with MLM had, however, local CVs of 0.5449, 0.4782, 0.5182, 0.4473 and a public LB of 0.445, which could not be improved any further.<br> It was not until the third week of September that I threw everything out and started again without any MLM pretraining, ending up with the model described above.</p></li> <li><p><strong>Manually generated datasets with pseudo scores could help accelerate the training process but can lead to overfitting</strong><br> Almost at the end phase of the competition, I came up with the thought of expanding the training data for the pretraining task. Here is how I did it:</p> <ul> <li>Minor part of the extended training data: I downloaded some texts from the commonlit website as prompt texts and questions as prompt questions. I then generated summary texts using gpt-4 by letting the machine mimic the way students write. The pseudo scores were given by the best model I had created at that time. That gave me around 300 more training rows.</li> <li>Major part of the extended training data: without any prompt text, I directly used the student-written texts from the previous commonlit competitions and let my best model score them. That gave me ~4000 extra rows</li></ul> <p>I hope this explains your concern <a href="https://www.kaggle.com/kononenko" target="_blank">@kononenko</a> :p. </p></li> </ol> <p>I used the extended data for the pretraining task. I got better local CV results and earlier-converging networks. <strong>HOWEVER</strong>, the public LBs were not improving, and in the end the private LB turned out to be even worse :(. It could have led to overfitting (good lesson learnt). So my final model was not trained on this dataset.</p> <ol> <li><strong>Lightgbm did not work</strong><br> I did not dive deeply enough to understand why lgbm did not work. I actually expected it could boost my CV.</li> </ol> <h3>Things that I would like to try if I were given more time</h3> <ol> <li>EMA<br> I tried EMA in the early phase of this competition, with models pretrained by MLM. It did not help boost the local CV, so I didn't even bother submitting them. I would like to try EMA on the model without MLM pretraining if I were given more time.</li> <li>layer-wise discriminative learning rates <br> Same as with EMA: I only applied this in my early models, and it did not boost local CV.</li> <li>play with more max length setups</li> </ol> <h3>Here is a summary of the models I tried</h3> <p>*All LBs are ensembles over 4 folds</p> <table> <thead> <tr> <th>Model name</th> <th>Remarks</th> <th>Local CV</th> <th>LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>Model 2308</td> <td>pretrained with MLM, both pretraining and training are with max length 1536</td> <td>0.5449, 0.4782, 0.5182, 0.4473</td> <td>0.445</td> <td>0.481</td> </tr> <tr> <td>Model 2708</td> <td>pretrained with MLM with max length 2048, model training with max length 1536</td> <td>0.5778, 0.4598, 0.5305, 0.4369</td> <td>0.451</td> <td>0.466</td> </tr> <tr> <td>Model 3008</td> <td>pretrained with MLM, both pretraining and training are with max length 2048</td> <td>0.5532, 0.4706, 0.5226, 0.4383</td> <td>0.447</td> <td>0.471</td> </tr> <tr> <td>Model 1009</td> <td>pretrained with MLM. 
EMA used in training</td> <td>0.5631, 0.4728, 0.5201, 0.4407</td> <td>not submitted</td> <td>not submitted</td> </tr> <tr> <td>Model 2609 (best model)</td> <td>no pretraining, max length 1600</td> <td>0.5513, 0.4587, 0.5038, 0.4316</td> <td>0.429</td> <td>0.458</td> </tr> <tr> <td>Model 0810</td> <td>with incomplete extended dataset to pretrain, max length 1600</td> <td>0.5363, 0.4454, 0.4974, 0.4314</td> <td>0.431</td> <td>0.474</td> </tr> <tr> <td>Model 1010</td> <td>with complete extended data to pretrain, max length 1600</td> <td>0.5199, 0.4451, 0.4958, 0.4356</td> <td>0.458</td> <td>0.469</td> </tr> <tr> <td>Model 0810 + lgbm</td> <td>with complete extended data to pretrain, max length 1600, plus lgbm</td> <td>0.6154, 0.4695, 0.4990, 0.4391</td> <td>0.435</td> <td>0.47</td> </tr> </tbody> </table> <p>As you can tell, with only a few exceptions, my public LBs and private LBs correlate fairly well.</p> <p>congrats to you too!</p> <p>thanks very much !!!</p> <p>The post is finished. I tried to generate some extended training data. It accelerated the convergence and boosted local CVs but unfortunately not the public or private LB :(</p>
CommonLit - Evaluate Student Summaries
167th Place Solution [Deberta Large with LoRA]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>Submission Overview</h3> <table> <thead> <tr> <th>Models</th> <th>CV</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Deberta base + lgb</td> <td>0.508</td> <td>0.447</td> <td>0.485</td> </tr> <tr> <td>Deberta large LoRA + lgb</td> <td>0.501</td> <td>0.441</td> <td>0.495</td> </tr> <tr> <td>Result blend + lgb</td> <td>0.492</td> <td>0.435</td> <td>0.482</td> </tr> </tbody> </table> <p><br></p> <h3>Things which didn't work</h3> <p>Fine-tuning GPT-like models for regression<br> SFT Llama for direct target generation<br> Additional data from ChatGPT, SFT Llama2<br> <br></p> <h3>Example code for fine-tuning with LoRA</h3> <p><a href="https://www.kaggle.com/code/alexcei/fine-tuning-deberta-large-with-lora-adapter" target="_blank">fine-tuning-deberta-large-with-lora-adapter</a></p>
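<p>A minimal sketch of wrapping deberta-large with a LoRA adapter via the peft library; the rank, alpha, and dropout values here are illustrative, not necessarily those used in the linked notebook:</p> <pre><code>from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", num_labels=2)   # content + wording

lora = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8, lora_alpha=16, lora_dropout=0.1,          # illustrative values
    target_modules=["query_proj", "value_proj"],   # DeBERTa-v2/v3 attention projections
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
</code></pre>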
CommonLit - Evaluate Student Summaries
16th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, Kaggle and <a href="https://www.kaggle.com/cookiecutters" target="_blank">@cookiecutters</a>, for this interesting competition. Congratulations to all winners and gold medalists. I would like to give credit to my talented and hard-working teammates <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> and <a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a>.</p> <p>Kudos to <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> for their fantastic work on the LGBM trick, and to all participants of this competition for their sharing and hard work.</p> <h1>First words:</h1> <ul> <li>We think this competition is a perfect example of out-of-distribution forecasting; therefore, we don’t fully trust either CV or LB. Instead, we judge the models by our sense of training stability, partly CV, partly LB, and diversity;</li> <li>Therefore, we try to think of as many model architectures and model training techniques as possible;</li> </ul> <h1>Model 1 - Transformer + Catboost</h1> <h3>Transformer model</h3> <ul> <li>CV/LB/PB: 0.552/0.488/0.556</li> <li>Input: text + [SEP] + prompt_question</li> <li>Loss: MCRMSE </li> <li>Backbone: deberta-v3-large</li> <li>Hyperparams:<ul> <li>max_length: 256</li> <li>pooling: attention</li> <li>random-reinit: 1</li> <li>differential learning rate factor: 2.6</li> <li>lr: 1e-5</li></ul></li> <li>epochs: 4</li> <li>evaluate at the end of each epoch</li> <li>added 2x additional full-fit checkpoints, which helps gain 0.003 in PB</li> </ul> <h3>GBDT - Catboost</h3> <ul> <li>CV/LB/PB: 0.513/0.443/0.484</li> <li>Hyperparams:<ul> <li>learning_rate: 0.05</li> <li>Cat_features = [“summary_n_sentence”]</li></ul></li> <li>Features:<ul> <li>summary_length, summary_n_sentence, mean_edit_distance of summary sentences, …</li> <li>remove many features like spell errors, POS tag features, …</li> <li>removing prompt_length improves CV a bit, but no help in LB/PB</li></ul></li> <li>Total inference time: 32mins</li> </ul> <h1>Model 2 - Span Model</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2176524%2F6170216dc252bbe8f8ef098e626a3b4b%2FSpanModel.png?generation=1697201774978961&amp;alt=media" alt=""></p> <ul> <li>Reformulated the problem as a span-regression task by randomly grouping N summaries associated with a particular prompt and predicting their scores together </li> <li>Predictions were conditioned on a context that consisted of the top 8 sentences from the prompt text as ranked by their importance. 
</li> <li>The importance value of a sentence from the prompt text was determined as the average cosine similarity score between the sentence and all summaries for the prompt</li> <li>Used MSE loss on the content and wording targets</li> <li>Discretized the content and wording scores into 16 bins to create auxiliary targets and used ArcFace loss on the aux targets</li> <li>Used ranking loss among summaries in each example span</li> <li>During inference, the random grouping of N summaries provided a TTA-like impact</li> <li>Private LB by using 2x full-fit checkpoints: 0.473</li> </ul> <h1>Model 3 - Search Approach</h1> <ul> <li>The input text comprises a context and the student summary</li> <li>Context:<ul> <li>Retrieved the top 3 sentences from the prompt text based on cosine similarity scores with the summary. </li> <li>Concatenated prompt title and prompt question </li></ul></li> <li>Trained the model using BCEWithLogitsLoss<ul> <li>Scaled the scores to between 0 and 1 </li> <li>Used BCEWithLogitsLoss to compute the loss between scaled scores and logits</li></ul></li> <li>Private LB by using 2x full-fit checkpoints: 0.476</li> </ul> <h1>Model 4 - Long Context</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2176524%2F9184846a10fbdd62ab5d384d294d81d4%2FLongContext.png?generation=1697201712777096&amp;alt=media" alt=""></p> <ul> <li>Remarks:<ul> <li>max_len: 1024</li> <li>lr: 2e-5</li> <li>weight_decay: 1e-2</li> <li>n_epochs: 3</li> <li>AWP and EMA after 1 epoch</li> <li>evaluate 10 times/epoch</li> <li>Inference time: 205 mins (4 folds)</li> <li>CV/Public LB/Private LB: 0.499/0.438/0.462</li> <li>Backbone: OpenAssistant/reward-model-deberta-v3-large-v2</li> <li>We have another model with this structure, and its CV/Public LB/Private LB was 0.494/0.448/0.459. Unfortunately, we didn’t include this model in the ensemble</li></ul></li> <li>Problems:<ul> <li>Inference time is too long due to the long context → not good for ensembling</li></ul></li> <li>Code:<ul> <li>Training: <a href="https://www.kaggle.com/datasets/shinomoriaoshi/commonlitsummaryv5e" target="_blank">https://www.kaggle.com/datasets/shinomoriaoshi/commonlitsummaryv5e</a></li> <li>Inference: <a href="https://www.kaggle.com/code/shinomoriaoshi/commonlitsummary-v5e-infer" target="_blank">https://www.kaggle.com/code/shinomoriaoshi/commonlitsummary-v5e-infer</a> </li></ul></li> </ul> <h1>Model 5 - Dual Encoder with Transfer Learning</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2176524%2Fde86838dc52ff852ab461f461ab2a4b7%2FDualEncoder.png?generation=1697201727858488&amp;alt=media" alt=""></p> <ul> <li>Model Ideas:<ul> <li>Instead of concatenating the text and prompt info (prompt text, question, title) before passing through the backbone, we can concatenate them after the transformer. One benefit is that many summaries share the same prompt, so we can compute each prompt only once and put it into a hash table; once the embedding of a prompt is computed, we can just query it instead of computing it again.</li> <li>However, I think this way is not good in terms of performance because there is no connection between text and prompt (they’re processed separately in spite of being processed by the same backbone). 
To overcome it, I load the trained backbone from the long-context model (trained with concatenated text and prompt, with a long sequence length), freeze the top 12 layers, and train the second-stage model on top.</li></ul></li> <li>Remarks:<ul> <li>max_len context: 1536</li> <li>max_len text: 512</li> <li>n_epochs: 1</li> <li>AWP and EMA enabled after 50 steps</li> <li>Use LGBM trick</li> <li>Inference time: 39 mins (4 folds)</li> <li>CV/Public LB/Private LB: 0.500/0.445/0.470</li> <li>Backbone: OpenAssistant/reward-model-deberta-v3-large-v2</li></ul></li> <li>Code:<ul> <li>Training: <a href="https://www.kaggle.com/datasets/shinomoriaoshi/commonlitsummaryv6c" target="_blank">https://www.kaggle.com/datasets/shinomoriaoshi/commonlitsummaryv6c</a> </li> <li>Inference: <a href="https://www.kaggle.com/code/shinomoriaoshi/commonlitsummary-v6c-infer" target="_blank">https://www.kaggle.com/code/shinomoriaoshi/commonlitsummary-v6c-infer</a> </li></ul></li> </ul> <h1>Final ensemble</h1> <p>We made ensembles of the above models and the public models by <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> (we actually trained them ourselves rather than taking the models directly). We observed a super messy correlation between CV and public LB here, at the ensemble phase. In the end, we took a weighted average of the component models, with the weights optimized under some constraints to deal with overfitting.</p> <h1>What didn’t work</h1> <ul> <li>We tried to generate synthetic data from LLMs (T5 and Llama), but it didn’t work at all;</li> <li>Training with LoRA for target-wise models worked and got CV 0.53+, but didn’t help in the local ensemble;</li> </ul> <h1>Team member</h1> <ul> <li><a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a></li> <li><a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a></li> <li><a href="https://www.kaggle.com/shinomoriaoshi" target="_blank">@shinomoriaoshi</a><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2176524%2F1ff6a8913b16a36c1474d8418353be2a%2Fteam.PNG?generation=1697202450769081&amp;alt=media" alt=""></li> </ul> <p>Thanks, this model was developed by Raja. <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a>, could you please explain the model to him?</p>
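<p>Since Model 1 above trains directly on MCRMSE (mean columnwise root mean squared error, the competition metric), here is a minimal PyTorch sketch of that loss over the two targets:</p> <pre><code>import torch

def mcrmse(preds, targets):
    """Mean of the per-column RMSEs over the (content, wording) columns."""
    col_rmse = torch.sqrt(torch.mean((preds - targets) ** 2, dim=0))
    return col_rmse.mean()

preds = torch.tensor([[0.1, -0.2], [1.3, 0.4]])
targets = torch.tensor([[0.0, -0.5], [1.0, 0.8]])
print(mcrmse(preds, targets))
</code></pre>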
CommonLit - Evaluate Student Summaries
17th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners, and thanks to the organizers and all participants. <br> This is my first NLP competition and I have learnt a lot about this area.</p> <p><strong>Overview</strong><br> Weighted average of 4 models:</p> <ol> <li>deberta-v3-large, using all of title, question, prompt_text and summary. GroupKFold with prompt_id. (CV:0.488, Public:0.430, Private:0.463)</li> <li>Same as model1, but using mask augmentation and trained on all data.</li> <li>Same as model2, but with a different seed.</li> <li>LGBM trained on the output of deberta-v3-large (prompt_text is not used for this model) and handcrafted features.</li> </ol> <p>Ensemble Score: Public:0.422, Private:0.459</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5568744%2Fbe6e40d6a2a2017d84435e1ab12dce53%2Fcommonlit_solution.png?generation=1697117455399929&amp;alt=media" alt=""></p> <p><strong>Training Settings (model2,3)</strong></p> <ul> <li>Epoch: 3</li> <li>Loss: SmoothL1Loss</li> <li>lr: 1e-5</li> <li>Optimizer: Adam (weight_decay: 5e-4, beta=(0.9, 0.999))</li> <li>Scheduler: cosine scheduler</li> <li>token length: 2024 (training: 2024, inference: 1664)</li> </ul> <p><strong>Did work</strong></p> <ul> <li>Mask augmentation<ul> <li>Mask augmentation didn't improve CV, but the loss curve became more stable. I tuned hyperparameters in GroupKFold training and applied the same parameters for training with all data (model2, model3).</li></ul></li> <li>Freezing Layers</li> </ul> <p><strong>Did not work</strong></p> <ul> <li>Text cleaning</li> <li>AWP</li> <li>Back translation augmentation</li> </ul>
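<p>A minimal sketch of the mask augmentation used in models 2 and 3: randomly replace some non-special input tokens with the mask token during training. The masking ratio here is an assumption; the write-up doesn't state it:</p> <pre><code>import torch

def mask_augment(input_ids, special_tokens_mask, mask_token_id, p=0.15):
    """Randomly replace a fraction p of non-special tokens with [MASK].
    special_tokens_mask comes from tokenizing with return_special_tokens_mask=True."""
    probs = torch.full(input_ids.shape, p)
    masked = torch.bernoulli(probs).bool() &amp; ~special_tokens_mask.bool()
    out = input_ids.clone()
    out[masked] = mask_token_id
    return out
</code></pre>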
CommonLit - Evaluate Student Summaries
191st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Solution Summary</h2> <p>My <a href="https://www.kaggle.com/code/tsunghsuanhung/lgbm-deberta-autocorrect-readability" target="_blank">solution notebook</a> is a fork of the <a href="https://www.kaggle.com/code/siddhvr/commonlit-ess-lgbm-autocorrect-deberta-v3-tuned?scriptVersionId=144476692" target="_blank">tuned-debertav3-lgbm-autocorrect</a> notebook with only one change: </p> <ul> <li>add <code>readability</code> features</li> </ul>
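<p>For illustration, here is what adding readability features can look like. This sketch uses the textstat library (the linked notebook uses the <code>readability</code> package instead, so the exact metric names differ):</p> <pre><code>import textstat

def readability_features(text):
    # a few standard readability scores, fed to LGBM as extra columns
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "gunning_fog": textstat.gunning_fog(text),
        "smog_index": textstat.smog_index(text),
        "dale_chall": textstat.dale_chall_readability_score(text),
    }

print(readability_features("The quick brown fox jumps over the lazy dog."))
</code></pre>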
CommonLit - Evaluate Student Summaries
19th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'd like to start by expressing my gratitude to the organizers for hosting this competition and inspiring engaging discussions.<br> A special congratulations to the top participants for their outstanding achievements.<br> Personally, I was on the cusp of becoming a GM with a gold medal, and it's a bit disappointing to have come so close yet fall short. Nevertheless, your hard work and insights have truly made this a memorable event for everyone. <br> Thank you all for making it such a rewarding experience.</p> <p>And also thank you to my teammate takai380. </p> <h2>Overview</h2> <ul> <li>A two-stage structure with the 1st stage being a Transformer and the 2nd stage being a GBDT (LightGBM).</li> <li>For the 2nd stage GBDT, a model is created for each out-of-fold (oof) of the Transformer, and finally, a Weighted Average is taken.<ul> <li>Some raw prediction values from the Transformers are also included in the final Weighted Average.</li></ul></li> <li>Adjust the weight according to the total token count of summary_text and prompt_text.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1305588%2F9c362ac762756babd02f63e96e36e971%2Foverview.png?generation=1697077612844422&amp;alt=media" alt="overview"></p> <h2>Final Model</h2> <p>Three models with different features and weights were selected for the final submission:</p> <ul> <li>Sub1 (TrustLB) LB:0.422 / CV:0.463-0.465 (Different CV calculation method)</li> <li>Sub2 (TrustCV) LB:0.448 / CV:0.458</li> <li>Sub3 (Balance) LB:0.441 / CV:0.460</li> </ul> <h2>Transformer</h2> <ul> <li>pao<ul> <li>model-p1: deberta-v3-large (question + summary_text)</li> <li>model-p2: deberta-v2-xlarge (question + summary_text)</li></ul></li> <li>takai<ul> <li>model-t1: deberta-v3-large (question + summary_text)</li> <li>model-t2: deberta-v3-large (question + summary_text)</li> <li>model-t3: deberta-v3-large (summary_text + prompt_text)</li> <li>model-t4: deberta-v3-large (summary_text + question + prompt_text)</li></ul></li> </ul> <p>Predictions from model-t3 and t4 are also used in the final ensemble.<br> Models with prompt_text are strong, but inference takes a long time, so we used just 2 such models.</p> <ul> <li>For the pao models:<ul> <li>Freezing layers</li> <li>Concatenate the embedding of deberta-v3-large before training to the final layer (only for model-p1)</li> <li>Prediction using the CLS token</li> <li>Linear scheduler with warmup, 3 epochs</li></ul></li> <li>For the takai models:<ul> <li>Freezing layers</li> <li>concat pooling (concat multiple layers' cls tokens)</li></ul></li> </ul> <h2>LightGBM</h2> <ul> <li>Ultimately, about 80 features.</li> <li>A 10-seed x 4-fold LightGBM average for each Transformer model.</li> <li>To add a feature, we check whether the CV improves using a score from a 50-seed average.</li> </ul> <h3>Features</h3> <ul> <li>From public notebooks: (great thanks to <a href="https://www.kaggle.com/nogawanogawa" target="_blank">@nogawanogawa</a>)<ul> <li><a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering" 
target="_blank">https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering</a></li></ul></li> <li>Jaccard coefficient between question/prompt_text and summary_text.</li> <li>Calculate tfidf for summary_text per prompt, and take the average tfidf per record.<ul> <li>For all of summary_text and only words not in prompt_text.</li> <li>Taking average of all columns and average of non-zero elements.</li></ul></li> <li>Cosine similarity of average tfidf/BoW of summary_text per prompt.</li> <li>Sentence Transformer embedding features.<ul> <li>Cosine similarity between prompt_text and summary_text.</li> <li>Cosine similarity between question and summary_text.</li> <li>Cosine similarity between average embedding of summary_text per prompt.</li> <li>Calculate cosine similarity between embeddings of sentences split from prompt_text and summary_text, and take the standard deviation of similarity for all sentences.</li> <li>Similarly, calculate cosine similarity between embeddings of sentences split from summary_text and prompt_text, and take the standard deviation of similarity for all sentences.</li></ul></li> <li>kNN-based features.<ul> <li>For each prompt, kNN with the embedding of summary_text.</li> <li>Feature extraction of the average and cosine similarity of the top 5% of oof features with high similarity for each record.</li> <li>The average of oof features also includes the difference from the average oof features per prompt.</li></ul></li> <li>Text similarity metrics:<ul> <li>BERT Score</li> <li>ROUGE</li> <li>BLEU</li></ul></li> <li>Learning Word2Vec with prompt_text for each prompt. Cosine similarity of the vectors of prompt_text and summary_text.</li> <li>Include prediction values from the model of Feedback competition 3 as features.<ul> <li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference?scriptVersionId=104649006" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference?scriptVersionId=104649006</a></li></ul></li> <li>Features from pyreadability.</li> <li>Average the general frequency of words present only in summary_text and not in prompt_text for each summary_text.<ul> <li>For bert_score and embedding features, the average embedding was taken from extracting text from the beginning, middle, and end for long prompts.</li></ul></li> </ul> <h2>Ensemble</h2> <ul> <li>Weight optimization using Nelder-Mead.<ul> <li>Ultimately, only use those above 0 and round to a total of 1.0.</li></ul></li> <li>Change ensemble weights according to the token count of summary_text + prompt_text.<ul> <li>For those with a certain number of tokens or more, gradually reduce the weight of raw Transformer predictions and increase the weight of LightGBM (linearly adjust the weight by token count).</li> <li>It is assumed that when the token count is high, the prompt_text does not fit entirely in the Transformer with prompt_text input, reducing accuracy.</li></ul></li> </ul>
CommonLit - Evaluate Student Summaries
1st Place Efficiency Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is a classic distill into deberta-v3-xsmall. Runs in ~15mins. </p> <p>Inference on prompt title, prompt question, summary text.</p> <p>The magic: tokenize prompt text, lookup token/2-token/3-token matches—and include as token_type_ids (0-6, for all windows, is better than 0-3, max pooled by 1/2/3-token)</p> <p>The model knows exactly which text is quoted/copied, and which words and phrases come from the prompt, which is a lot of contextual information at 1/10th the sequence length. </p> <p>—-</p> <p>Base models are deberta-v3-base with attn/cls pooling, and 4/7 token type ids. These four plus lgb are stacked into out of fold predictions, for distillation. </p> <p>—</p> <p>Content is adjusted by ~80% and wording is adjusted by 20-40% by prompt mean. </p> <p>Final ensemble is 68/42 nn/lgb for content; 75/30 for wording. </p> <p>—</p> <p>Distillation includes cosine annealing from pseudo labels to half pseudo half label across the course of three epochs. </p> <p>Many other ideas were tried (cross-attention to prompt; embedding approaches; massive library of generative text; etc); these were the ones that worked. </p> <p>Credit to Psi, Team Turing, and Kaggle—also the gzip paper. May release code after packaging. </p> <p>Update: </p> <p>Code—<a href="https://www.kaggle.com/code/stochoshi/commonlit-d1-cpu-code-4" target="_blank">https://www.kaggle.com/code/stochoshi/commonlit-d1-cpu-code-4</a></p> <p>Real Code—<a href="https://www.kaggle.com/datasets/stochoshi/commonlit-d1-cpu-2" target="_blank">https://www.kaggle.com/datasets/stochoshi/commonlit-d1-cpu-2</a></p>
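<p>A minimal sketch of the token/2-token/3-token prompt-match idea, in the simpler 0-3 "max pooled by 1/2/3-token" form (the 0-6 all-windows variant assigns separate ids per window position); this is an interpretation of the description above, not the author's code:</p> <pre><code>def match_type_ids(summary_ids, prompt_ids):
    """0 = no match; 1/2/3 = token sits inside a 1/2/3-token span copied from the prompt."""
    grams = {n: {tuple(prompt_ids[i:i + n]) for i in range(len(prompt_ids) - n + 1)}
             for n in (1, 2, 3)}
    out = [0] * len(summary_ids)
    for n in (1, 2, 3):
        for s in range(len(summary_ids) - n + 1):
            if tuple(summary_ids[s:s + n]) in grams[n]:
                for j in range(s, s + n):
                    out[j] = max(out[j], n)   # max-pool over window sizes
    return out

print(match_type_ids([5, 9, 9, 7, 2], [1, 9, 9, 7, 4]))  # -> [0, 3, 3, 3, 0]
</code></pre>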
CommonLit - Evaluate Student Summaries
1st: A brief review of the competition experience (Detailed solution is on the way)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your approval, and I cannot agree more: as a student, Kaggle is the fairest and most just platform I have ever seen.</p> <p>We have no advantage but the ability to think and to write code. Enterprise-level pre-trained models, corporate business data, even a compute GPU are luxuries. (As for mainland China, legal use of colab is impossible.)</p> <p>But Kaggle, by limiting external data, model licenses and tool library licenses, and by providing free computing resources, ensures that all participants focus on using their minds and intelligence, rather than things outside themselves, to solve problems.</p> <p>Fairness is always relative, however. The serious imbalance in computing resources and their high cost mean that different participants face completely different experiment and training budgets within the same competition deadline; this is extremely unfair to those lacking training resources, especially in NLP with its high demand for training compute, and on a purely "free" budget it is basically difficult to be competitive.<br> (That said, I can rent an RTX 4090 for about $10 a day, which is acceptable for most contestants.)</p> <p>Therefore, I believe that Kaggle officials have been looking for the relatively fairest rule boundary. Before the popularization of BERT, the cost of computing resources was not very high and did not seriously affect fairness. Now that LLMs are becoming popular, how to define limits on using LLMs that ensure fairness for most people is a thorny issue. However, the world is always dynamically balanced, and the ripples will eventually converge; I believe that as LLM applications gradually mature, this problem will be solved naturally.</p> <p>A single LLM cannot ensure sufficient topic diversity, so we tried as many open-source LLMs as possible, as well as chatgpt, and screened the topics in terms of how much they expanded the knowledge domain.</p> <h2>Some Feelings</h2> <p>I would like to thank the organizers for bringing us this wonderful competition, and thank everyone for the atmosphere of sharing and discussion. This competition was a practical project required by our natural language processing course, and my teammates and I learned more in this competition than in the class itself. Of course, thanks also to the lucky shake; luck is an important factor too.</p> <h2>What we did in the comp</h2> <p>Our final submission only used a 4-fold microsoft/deberta-v3-large. Because of each team member's academic pressure and limited time, we did not plan to study various tricks or training skills from the beginning, but started with the quality and diversity of the data.</p> <p>WHY data? 
Because the provided training data contains only four topics, while from our own experience training a student's writing ability requires many different topics, so in the real test scenario the model's understanding of different topics matters most (other players later proved this by probing the test set). We had also seen that in LLM competitions many people obtained exciting effects from LLM data augmentation, so we focused on improving topic diversity. (We kept following other competitions and competitors' progress with LLMs, but we never saw anyone openly state whether using LLMs was actually "USEFUL", even among those who had tried generated data.)</p> <p><strong>We sincerely hope that our ideas can provide a feasible direction for the use of LLMs in FUTURE competitions, and can bring reference value to other participants.</strong></p> <h3>The key points of our solution:</h3> <ol> <li><p>Meta pseudo labels (3 rounds), the most critical and time-consuming part and the key to connecting unannotated data with annotated data, from a paper by Google (<a href="https://arxiv.org/pdf/2003.10580.pdf" target="_blank">https://arxiv.org/pdf/2003.10580.pdf</a>).</p></li> <li><p>A carefully designed prompt to coax the LLM into spitting out the topics and topic texts it holds in its stomach (we could actually have used the CommonLit website, but we did not realize that at the time).</p></li> <li><p>Another prompt used to generate ten summaries of different quality for each additional topic.</p></li> <li><p>Changing the data preprocessing in the open-source training code to feed the prompt text into the model; most other strong teams shared this key point, and it was commonly mentioned in discussion.</p></li> <li><p>Two-stage training: stage 1 uses pseudo-labeled data only for 2 epochs and validates on train; stage 2 uses train data only for 2-3 epochs (a minimal sketch appears at the end of this post). This way we did not need to pay too much attention to the distribution of the pseudo labels, a point mentioned by <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> in his insightful <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369578" target="_blank">solution</a> to Feedback-ELL.</p></li> <li><p>Sorting inputs by text length to reduce inference time (see the sketch at the end of this post). Full inference takes an estimated 7 hours; without the sorting, it would exceed the 9-hour limit.</p></li> <li><p>The training and inference pipeline <a href="https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect" target="_blank">code</a> shared by <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a>; its well-designed process steps saved us a lot of pipeline-building time.</p></li> </ol> <h2>Other words</h2> <p>We only used the open-source code and did not make any improvement to the model itself.<br> In the past few days we have read the other teams' solutions, which impressed us, especially the "Head Mask" mentioned by <a href="https://www.kaggle.com/ivanaerlic" target="_blank">@ivanaerlic</a>, whose deep understanding of the competition itself and of the nature of the model structure we admire; we hope that one day we can produce such excellent model optimizations.</p> <p>I believe that combining the head mask with the LLM meta-pseudo-label procedure might improve the private score to 0.43+ with ONLY ONE MODEL.</p> <p>Thanks again to all the participants for your selfless sharing. 
I hope our work can contribute to the community!! :)</p> <h2>Prompt used for the LLM to generate prompts:</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F15921268%2F18c17ba5a587a959e9b1c548b063c954%2FCOMMONLIT-1.jpg?generation=1697356510042870&amp;alt=media" alt=""></p> <h2>Prompt used for the LLM to generate summaries:</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F15921268%2F3e612d66341007d176e84c454b284517%2FCOMMONLIT-2.jpg?generation=1697356525760792&amp;alt=media" alt=""></p> <h2>Related sources:</h2> <p>inference code: <br> <a href="https://www.kaggle.com/itsmeanszao/debertav3-lgbm-no-autocorrect" target="_blank">https://www.kaggle.com/itsmeanszao/debertav3-lgbm-no-autocorrect</a><br> llm generated data: <a href="https://docs.google.com/spreadsheets/d/1gKT1jYOVVtcMsIMiqgrxXGxdLSBQPkIM/edit?usp=sharing&amp;ouid=117192899697376325370&amp;rtpof=true&amp;sd=true" target="_blank">https://docs.google.com/spreadsheets/d/1gKT1jYOVVtcMsIMiqgrxXGxdLSBQPkIM/edit?usp=sharing&amp;ouid=117192899697376325370&amp;rtpof=true&amp;sd=true</a></p>
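<p>A minimal sketch of the two-stage schedule from key point 5. Every helper name and loader below is a hypothetical placeholder, not the team's actual code; the only thing taken from the write-up is the schedule itself:</p>
<pre><code># Sketch of the two-stage schedule (point 5). build_model / train_one_epoch /
# evaluate and all loaders are hypothetical placeholders, not the team's code.
model = build_model("microsoft/deberta-v3-large")

# Stage 1: LLM pseudo-labeled data only, validating on the real train set,
# so the pseudo-label distribution never needs to match train exactly.
for epoch in range(2):
    train_one_epoch(model, pseudo_loader)
    evaluate(model, real_train_loader)

# Stage 2: real train data only, with the usual GroupKFold validation.
for epoch in range(3):
    train_one_epoch(model, train_loader)
    evaluate(model, valid_loader)
</code></pre>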
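<p>And a minimal sketch of the length-sorted inference from key point 6 (variable and function names are assumptions). Sorting groups similar-length rows into the same mini-batch, so dynamic padding wastes far fewer pad tokens:</p>
<pre><code>import numpy as np

# Sketch: tokenizer, predict and test_texts are placeholders. Sorting by
# tokenized length groups similar-sized rows into the same mini-batch, so
# dynamic padding wastes far fewer pad tokens and inference gets faster.
lengths = [len(tokenizer(t)["input_ids"]) for t in test_texts]
order = np.argsort(lengths)                      # shortest inputs first
preds_sorted = predict([test_texts[i] for i in order])

preds = np.empty_like(preds_sorted)              # undo the sort afterwards
preds[order] = preds_sorted
</code></pre>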
CommonLit - Evaluate Student Summaries
20th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for the fun competition and everyone who participated. <br> Here is my solution.</p> <h1>Summary</h1> <ul> <li>When I added prompt_text, the local score clearly improved. However, I found experimentally that prompt_text is very long (more than 4,000 tokens) in the test dataset, so I set max_length to 5000. This was the important point for me.</li> <li>weighted ensemble: deberta-v3-large + LightGBM = 7:3</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F164129%2F5c1484cc862d2bcea8e3c359b287799b%2Fimage2.png?generation=1697126808205146&amp;alt=media" alt="solution-image"></p> <h1>1. Validation Strategy</h1> <ul> <li>GroupKFold : 4-fold (groups=prompt_id)</li> </ul> <h1>2. deberta-v3-large model</h1> <ul> <li>input<ul> <li>input_text = answer: summary_text [SEP] title: prompt_title | question: prompt_question | text: prompt_text</li> <li>tokenizer: max_length=5000</li> <li>no text cleaning</li></ul></li> <li>model:<ul> <li>pretrained-model: microsoft/deberta-v3-large</li> <li>freeze: 18 layers</li> <li>head: cls-token</li> <li>max_position_embeddings=5000</li></ul></li> <li>train: <ul> <li>loss: torch.nn.SmoothL1Loss()</li> <li>optimizer: Adam</li> <li>batch_size=2</li> <li>epoch=10</li> <li>scheduler: cosine annealing</li></ul></li> <li>predict:<ul> <li>max_token=5000 </li> <li>batch_size=1</li> <li>only 2 folds (with 4 folds, inference timed out)</li></ul></li> </ul> <h1>3. LightGBM</h1> <ul> <li>features: 141 (count_word, count_paragraph, count_sentence, count_space, etc.)</li> <li>train: 2 models (content, wording)</li> </ul> <h1>4. Ensemble</h1> <ul> <li>weighted average (weight tuned on train OOF predictions; see the sketch at the end of this post)</li> <li>deberta-v3-large : LightGBM = 7:3</li> </ul> <table> <thead> <tr> <th>id</th> <th>model</th> <th>local-cv</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Deberta-v3-large (4fold)</td> <td>0.4964</td> <td>Timeout</td> <td>Timeout</td> </tr> <tr> <td>2</td> <td>Deberta-v3-large (2fold)</td> <td>0.5007</td> <td>0.461</td> <td>0.476</td> </tr> <tr> <td>3</td> <td>LightGBM</td> <td>0.5532</td> <td>0.495</td> <td>0.516</td> </tr> <tr> <td>4</td> <td>ensemble(2+3)</td> <td>0.4822</td> <td>0.455</td> <td>0.461</td> </tr> </tbody> </table> <h1>Others</h1> <ul> <li>pooling-layer: almost the same</li> <li>N-hidden-layer: almost the same</li> <li>LSTM/1dcnn-layer: did not work</li> <li>Setting lr per layer: did not work</li> <li>Longformer: did not work</li> <li>summarizing prompt_text using Llama2: did not work</li> </ul> <p>When I added prompt_text, the local score clearly improved. I also considered it important to have a model that could handle the long token lengths of the test data. For this reason, although this submission was outside the medal range on the public leaderboard, I trusted my local score and selected it. I'm glad I did.<br> However, I only realized how long the tokens were two days before the end of the competition, so I regret not having time to try the 4-fold model.</p>
<p>Thank you for reading.</p> <p>Thank you for your comment.<br> I did the following to avoid CUDA out-of-memory errors.</p> <h1>(1) Automatic Mixed Precision (AMP)</h1> <ul> <li>Reduce memory usage with AMP</li> <li><a href="https://pytorch.org/docs/stable/notes/amp_examples.html" target="_blank">https://pytorch.org/docs/stable/notes/amp_examples.html</a></li> </ul> <pre><code>scaler = torch.cuda.amp.GradScaler()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    preds = model(input_data)
    loss = criterion(preds, targets)
# scale the fp16 loss before backward, then step and update the scaler
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
</code></pre> <h1>(2) Use DataCollatorWithPadding</h1> <ul> <li>Pad each mini-batch only to its longest sequence</li> <li>DataCollatorWithPadding(tokenizer=tokenizer, padding="longest")</li> <li><a href="https://huggingface.co/docs/transformers/main_classes/data_collator" target="_blank">https://huggingface.co/docs/transformers/main_classes/data_collator</a></li> </ul> <pre><code>dataset_train = BertDataset(input_x, input_y, MAX_LEN=MAX_LEN,
                            tokenizer=tokenizer, isTrain=True)
dataloader_train = DataLoader(
    dataset_train,
    batch_size=batch_size,
    shuffle=True,
    num_workers=num_workers,
    pin_memory=True,
    # pad each mini-batch only to its longest sequence
    collate_fn=DataCollatorWithPadding(tokenizer=tokenizer, padding="longest"),
)
</code></pre>
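<p>A minimal sketch of how the 7:3 blend weight can be tuned on the out-of-fold predictions, as mentioned in the Ensemble section above. The array names are placeholders, not from this solution's code:</p>
<pre><code>import numpy as np

def mcrmse(y_true, y_pred):
    # mean columnwise RMSE, the competition metric
    return np.mean(np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0)))

# oof_true, oof_deberta, oof_lgbm: (n_rows, 2) arrays of targets and
# out-of-fold predictions (placeholder names).
weights = np.linspace(0.0, 1.0, 101)
scores = [mcrmse(oof_true, w * oof_deberta + (1 - w) * oof_lgbm)
          for w in weights]
best_w = weights[int(np.argmin(scores))]   # lands near 0.7 for DeBERTa here
</code></pre>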
CommonLit - Evaluate Student Summaries
214th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am a student majoring in computer science and technology. My love of mathematical modeling led me to machine learning algorithms and to Kaggle, but for a long time, given the limits of my ability, I could only take part in notebook-style competitions.<br> This was the first NLP task I have participated in. Although I finished with a very regrettable ranking, I learned a lot along the way.<br> <strong>Model</strong><br> I chose the public notebooks Debertav3+LGBM and LGBM+autocorrect+Debertav3 as base models. In the following days I focused on developing the preprocessing and LGBM parts of the code,
and used Optuna to find the best parameters for LGBM (see the sketch at the end of this post).<br> <strong>Submissions overview</strong></p> <table> <thead> <tr> <th>model</th> <th>private score</th> <th>public score</th> </tr> </thead> <tbody> <tr> <td>CommonLit-ESS lgbm+autocorrect+DeBERTa V3 (Tuned) - Version 5</td> <td>0.484</td> <td>0.447</td> </tr> <tr> <td>Explained Tuned Debertav3+LGBM - Version 3</td> <td>0.485</td> <td>0.445</td> </tr> <tr> <td>Tuned Debertav3+LGBM+autocorrect - Version 2</td> <td>0.487</td> <td>0.450</td> </tr> </tbody> </table> <p>Unfortunately, I did not manage to improve on the original models. My score eventually stabilized at 190th place on the public leaderboard, and after the shake-up I ended up falling out of the medal zone.<br> <strong>Summary</strong><br> Through this competition I came to understand my shortcomings: my machine learning is still at an entry level.<br> As a side note, I had always thought that models with the same score were ranked by submission time. After this competition, I found that even notebooks with identical code receive slightly different scores when submitted: my notebook scored 0.48475 on the first submission and 0.48497 on the second, which is an element of uncertainty.</p>
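<p>A minimal sketch of Optuna tuning for the LGBM part, as mentioned above. The search space, trial count, and data names are assumptions, not the notebook's actual settings:</p>
<pre><code>import lightgbm as lgb
import numpy as np
import optuna
from sklearn.metrics import mean_squared_error

# X_train / y_train / X_valid / y_valid are placeholders for one fold of the
# engineered features and one target column (content or wording).
def objective(trial):
    params = {
        "objective": "regression",
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        "num_leaves": trial.suggest_int("num_leaves", 15, 255),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
        "feature_fraction": trial.suggest_float("feature_fraction", 0.4, 1.0),
        "verbosity": -1,
    }
    model = lgb.train(
        params,
        lgb.Dataset(X_train, y_train),
        valid_sets=[lgb.Dataset(X_valid, y_valid)],
        callbacks=[lgb.early_stopping(50, verbose=False)],
    )
    return np.sqrt(mean_squared_error(y_valid, model.predict(X_valid)))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params)
</code></pre>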