Columns: Title of Competition (stringclasses, 343 values); Title of Writeup (stringlengths, 6-139); User (stringclasses, 1 value); Writeup (stringlengths, 419-80.4k)
Inclusive Images Challenge
9th Place - Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Inclusive Images Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My solution is simple. I downloaded only train_00.zip and train_08.zip due to internet speed restrictions.</p> <p>Merging the downloaded archives gives more than 343,042 images with 1,983,191 associated labels. I then dropped labels with a frequency below 50, which reduced the total label count to around 1.6M and the number of unique labels to 3,862.</p> <p>I used Keras NASNetLarge and trained the model from scratch on 224x224x3 images. During training I monitored a custom f@2 metric after each epoch.</p> <p>Augmentation was done with the albumentations lib:</p> <pre><code>def augment_flips_color(p=.5):
    return Compose([
        CLAHE(),
        HorizontalFlip(.5),
        ShiftScaleRotate(shift_limit=0.075, scale_limit=0.15, rotate_limit=10, p=.75),
        Blur(blur_limit=3, p=.33),
        OpticalDistortion(p=.33),
        GridDistortion(p=.33),
        HueSaturationValue(p=.33)
    ], p=p)
</code></pre> <p>The model trained for several epochs until I got an f@2 of around 0.63 on one small validation set. With 4x TTA my validation score improved only slightly, to around 0.64.</p> <p>Searching for the best threshold to cut the predicted labels, I found that the value 0.049 brings f@2 to 0.68 on the validation set. This means my final solution predicts, for each image, all categories with prob &gt;= 0.049 (a minimal sketch of this search is shown below the post).</p> <p>I tried the same approach with an Xception and a VGG-16 architecture. Both scored lower than the NASNet, but blending the three models in stage 1 improved the LB score by around +0.01. For stage 2 I used only the NASNet in the solution.</p> <p>Giba</p>
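<p>To make the threshold search above concrete, here is a minimal, self-contained sketch (not the author's original code; the array names <code>val_probs</code> and <code>val_true</code> and the grid range are illustrative assumptions):</p> <pre><code># Sketch: pick the single probability threshold that maximizes sample-averaged F2.
import numpy as np
from sklearn.metrics import fbeta_score

def best_global_threshold(val_probs, val_true, grid=np.arange(0.01, 0.51, 0.002)):
    best_thr, best_f2 = 0.5, -1.0
    for thr in grid:
        preds = (val_probs &gt;= thr).astype(int)
        f2 = fbeta_score(val_true, preds, beta=2, average='samples', zero_division=0)
        if f2 &gt; best_f2:
            best_thr, best_f2 = thr, f2
    return best_thr, best_f2

# Example call with random data, just to show the expected shapes:
rng = np.random.default_rng(0)
val_probs = rng.random((100, 20))
val_true = (rng.random((100, 20)) &lt; 0.1).astype(int)
print(best_global_threshold(val_probs, val_true))
</code></pre>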
Inclusive Images Challenge
Solution description (11th place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Inclusive Images Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>My solution - softmax output for multi-label classification</h2> <p>I noticed four key challenges in this competition:</p> <ol> <li>Labels are unbalanced in the training data - there are 800k instances of /m/01g317, and only 1 instance of /m/0266h.</li> <li>The label distributions in the training data and the test data are very different.</li> <li>There is a large number of classes/labels (7,178 classes in classes-trainable.csv, but 6 classes have no training data, so 7,172 classes in total).</li> <li>It is not allowed to use pretrained models.</li> </ol> <p>I tried the following approaches to handle the above challenges:</p> <h2>1. Weighted sampling</h2> <p>To make the training labels more balanced, I developed a sampler inspired by this post (<a href="https://www.sebastiansylvan.com/post/importancesampling/">https://www.sebastiansylvan.com/post/importancesampling/</a>). The original code is very slow when drawing a large number of samples, so I made some changes to let it sample 100k samples in a few seconds.</p> <h2>2. Label weights</h2> <p>At the beginning of the competition I used a sigmoid activation function and binary cross-entropy loss to train models for this multi-label problem, but the loss and F2 score quickly stopped improving, so I tried to put more weight on rare labels and positive labels during training.</p> <ul> <li>Class weights (did not work well for me)</li> </ul> <p>Rare labels should get more weight, so I calculated class weights from the label counts and weighted the loss accordingly, but this did not give me an obvious improvement.</p> <ul> <li>More weight on positive labels (a big boost)</li> </ul> <p>There are 7,172 classes in total and each image normally has only 1-10 labels. The default binary cross-entropy loss treats all labels equally, so I put larger weights on the positive labels to give the model a stronger signal on them. It can be done in PyTorch simply as follows:</p> <pre><code>def weighted_bce(args, outputs, targets):
    w = targets * args.pos_weight + 1  # default value of args.pos_weight is 20
    bce_loss = F.binary_cross_entropy_with_logits(outputs, targets, w)
    return bce_loss
</code></pre> <p>This gave me a big boost in F2 score.</p> <h2>3. Softmax output with fixed threshold 0.04 for all classes</h2> <p>I assumed that the stage 2 test label distribution would be very different from stage 1, and that tuning thresholds on the stage 1 tuning labels would not work for stage 2, so I did not tune thresholds on them. Typically we use a sigmoid activation on the output for multi-label classification, but in this competition I tried a softmax activation on the model outputs and then used a fixed threshold of 0.04 for all classes to generate predictions. For the same model, this makes the predictions generalize better to test data with a different label distribution (see the short sketch at the end of this write-up). 
I also tried sigmoid with thresholds tuned on the stage 1 test labels, which gave me around 0.48 LB at stage 1, but I did not use that approach at the end of stage 1 because I did not think it would work for stage 2.</p> <h2>4. Calculate Open Images mean and std</h2> <p>I calculated the mean and std of the entire training data and used them to normalize the training batches.</p> <h2>5. Ensembling</h2> <p>I used two models for the ensemble. Since no pretrained model is allowed, it is very slow to train a single model: I trained each model for around two weeks on a single P100 GPU, using a reduce-on-plateau learning rate scheduler first and then cosine annealing.</p>
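<p>As an illustration of the softmax-plus-fixed-threshold idea described above, here is a minimal PyTorch-style sketch (illustrative only; the function and tensor names are assumptions, while the 0.04 value comes from the write-up):</p> <pre><code># Sketch: softmax over the outputs, then one fixed threshold for every class.
import torch

def predict_labels(logits, threshold=0.04):
    """logits: (batch, n_classes) raw model outputs."""
    probs = torch.softmax(logits, dim=1)   # probabilities sum to 1 per image
    keep = probs &gt;= threshold              # same threshold for all classes
    return [torch.nonzero(row, as_tuple=False).flatten().tolist() for row in keep]

# Example call with random logits for 2 images and 10 classes:
print(predict_labels(torch.randn(2, 10)))
</code></pre>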
Inclusive Images Challenge
Solution description (3rd place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Inclusive Images Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have one idea why we could have such different results. We used very heavy augmentations which distort and "break" photos very much. Maybe the geo locations from the second stage contain more images of poor quality, and with such augmentations poor quality does not reduce performance as much.</p> <p>I have the following for a single THR of 0.5:</p> <p>ResNet50: 0.27622</p> <p>InceptionResnet_v2: 0.30165</p> <p>Yes. I checked through "late submission".</p> <blockquote> <p>So thresholding using the tuning set gives at least .15+ boost in both stages?</p> </blockquote> <p>Yes, it was critical to get a high score.</p> <h2>Some thoughts</h2> <p>1) After some analysis of the data we noticed that most of the difference between train and test (stage 1) was due to different labelling distributions rather than different geo locations. For example: the average number of labels per image in train was 4.083 (maximum labels: 127), while in test it was only 2.386 (maximum labels: 9). So, assuming stage 1 and stage 2 have similar labeling distributions, we can use very similar, but less risky (less prone to overfitting), thresholds for stage 2 as in stage 1.</p> <p>2) Since we could only choose one model for stage 2, in order to balance risk vs. reward we chose a slightly more conservative approach than our best LB model in stage 1 – our final selected model scored around 10th position on the stage 1 LB and has a slightly more conservative threshold optimization.</p> <h2>Models and scores</h2> <p>Our solution consists of 7 models: <em>ResNet50</em>, <em>Xception</em>, <em>Inception ResNet v2</em> x 5. All neural nets had the same final Dense (FC) layer with ~7K neurons for all available classes and sigmoid activation. However, the five Inception ResNets had slightly different training and image augmentation approaches. All nets had almost the same score, ~0.5 at stage 1 after threshold tuning; the best one was Inception ResNet v2. ResNet50 was trained at 336x336 resolution, the others at 299x299. I believe a single model is enough here to get almost the same score (a late submission for a random single model gives 0.337). 
For example, an ensemble with majority voting on stage 1: 0.505 + 0.501 + 0.511 gives LB 0.516.</p> <h2>Train/validation split</h2> <ul> <li>We used all labels for training: human and computer generated.</li> <li>We also used the probabilities from computer-generated labels rather than a fixed value of 1.0.</li> <li>We didn't use boxes.</li> <li>The split was non-uniform, stratified by classes (see code below):</li> </ul> <pre>if count &lt; 50:
    part = count // 10
elif count &lt; 100:
    part = count // 20
elif count &lt; 1000:
    part = count // 50
elif count &lt; 10000:
    part = count // 100
elif count &lt; 100000:
    part = count // 500
else:
    part = count // 1000
</pre> <ul> <li>Train: 1,728,299 images; Valid: 14,743 images</li> </ul> <h2>Training process</h2> <ul> <li>Nothing really special: random samples from the training images.</li> <li>Training from scratch (not using ImageNet pretrained weights) due to competition rules.</li> <li>We used the default binary_crossentropy loss for this competition.</li> <li>For validation during training we used the tuning labels from stage 1 and calculated the F2 score each epoch with different thresholds (THRs). At the final epoch for Inception ResNet v2:</li> </ul> <pre>loss: 0.0019 - f2beta_loss: -5.6904e-01 - fbeta: 0.7356
val_loss: 0.0027 - val_f2beta_loss: -5.6610e-01 - val_fbeta: 0.7038
F2Beta score tuning labels: 0.373804  Optimal THR: 0.7
</pre> <ul> <li>The F2Beta score on the tuning labels kept improving together with the validation score, so it was mostly used for checking rather than for early stopping.</li> <li>The local score on the tuning labels was almost the same as on the leaderboard.</li> <li>Heavy augmentations. The very useful <a href="https://github.com/albu/albumentations">albumentations</a> library helped us here:</li> </ul> <pre>def strong_aug(p=.5):
    return Compose([
        HorizontalFlip(),
        OneOf([
            IAAAdditiveGaussianNoise(),
            GaussNoise(),
        ], p=0.2),
        OneOf([
            MotionBlur(p=.2),
            MedianBlur(blur_limit=3, p=.1),
            Blur(blur_limit=3, p=.1),
        ], p=0.2),
        ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2, rotate_limit=10, p=0.1),
        OneOf([
            OpticalDistortion(p=0.3),
            GridDistortion(p=0.1),
            IAAPiecewiseAffine(p=0.3),
        ], p=0.2),
        OneOf([
            CLAHE(clip_limit=2),
            IAASharpen(),
            IAAEmboss(),
            RandomContrast(),
            RandomBrightness(),
        ], p=0.3),
        HueSaturationValue(p=0.3),
        ToGray(p=0.05),
        JpegCompression(p=0.2, quality_lower=55, quality_upper=99),
        ElasticTransform(p=0.1),
    ], p=p)
</pre> <ul> <li>One epoch was 40,000 images, and we trained each model for 200-500 epochs. Training further continued to improve the model, but with a very small effect.</li> <li>For Xception and 3 more Inception ResNets we used a different image augmentation strategy. For example, one Inception ResNet had image augmentation like this:</li> </ul> <pre>gen = ImageDataGenerator(horizontal_flip=True,
                         vertical_flip=True,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         channel_shift_range=0.1,
                         shear_range=0.1,
                         zoom_range=0.1,
                         rotation_range=10,
                         preprocessing_function=inception_resnet_preprocess_input)</pre> <h2>Threshold tuning</h2> <p>This was the most important part of the competition. We needed to output a discrete set of classes for each image based on the probabilities from the neural networks. We had a wide range of strategies to find optimal thresholds, and the F2Beta metric is very sensitive to them. For example:</p> <ul> <li>F2Beta on validation images was about ~0.73</li> <li>F2Beta on tuning labels with a single threshold: ~0.35</li> <li>F2Beta with optimized thresholds: ~0.5 LB score</li> </ul> <p>The optimization process was run on the tuning labels. 
It had 4 parameters:</p> <ul> <li>minimum probability for search [MinPS]: 0.01</li> <li>maximum probability for search [MaxPS]: 0.99</li> <li>default probability: 0.99 - we used this probability if a class had no entry in the tuning labels. Actually, out of ~7K classes only ~500 had an entry in the tuning labels. A default probability of 0.99 means the model must be &gt;99% sure to use this class.</li> <li>minimum number of entries in class: 1</li> </ul> <p>For the conservative models in our final submission we used [0.1-0.9] ranges with a minimum of 2-3 entries per class and a lower default probability (0.8 or 0.9), even though this gives a worse (but less overfit) result on the Stage 1 LB. The procedure: start by setting all thresholds to 0.5, then iterate over the classes and find, for each class, the threshold in the range [MinPS; MaxPS] that maximizes the F2Beta score. Do 2-3 overall passes until the F2Beta score stops increasing (a minimal sketch of this coordinate-wise search is given at the end of this write-up).</p> <h2>Ensembles</h2> <p>We used simple majority voting for the ensemble. If at least 4 (out of 7) models predicted a class, that class went into the submission file.</p> <h2>Post processing</h2> <p>For the F2Beta metric it's better to output something even if the model didn't predict any class. So for rows with empty labels we used the UNION across all 7 models to regenerate labels. If all 7 models had no predictions, we output the 3 most common classes found in the training set.</p> <h2>Failed experiments</h2> <ul> <li>We also tried ResNet152 on 448x448 images, but it gave worse results.</li> <li>MobileNet on 128x128 images - just to check if it could give a comparable result, but it didn't.</li> <li>SE-ResNext101 for Keras from this repo: <a href="https://github.com/titu1994/keras-squeeze-excite-network">https://github.com/titu1994/keras-squeeze-excite-network</a> - very, very slow and it barely converged compared to the other models.</li> <li>Label selection/filtering based on "class activation maps" (size of activation field, location, etc.)</li> </ul>
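<p>Here is a minimal sketch of the per-class threshold search described above (illustrative, not the team's code; <code>probs</code> and <code>y_true</code> are assumed to be the predicted probabilities and binary labels on the tuning set, and the full re-scoring per candidate keeps it simple but slow):</p> <pre><code>import numpy as np
from sklearn.metrics import fbeta_score

def tune_class_thresholds(probs, y_true, min_ps=0.01, max_ps=0.99, step=0.01,
                          default_prob=0.99, min_entries=1, n_passes=3):
    counts = y_true.sum(axis=0)
    # classes with too few tuning-label entries keep the conservative default threshold
    thr = np.where(counts &gt;= min_entries, 0.5, default_prob).astype(float)
    grid = np.arange(min_ps, max_ps + 1e-9, step)

    def score(t):
        return fbeta_score(y_true, (probs &gt;= t).astype(int), beta=2,
                           average='samples', zero_division=0)

    best = score(thr)
    for _ in range(n_passes):                 # 2-3 overall passes
        for c in range(probs.shape[1]):
            if counts[c] &lt; min_entries:
                continue
            for cand in grid:                 # search this class only
                trial = thr.copy()
                trial[c] = cand
                s = score(trial)
                if s &gt; best:
                    best, thr = s, trial
    return thr, best
</code></pre>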
Indoor Location & Navigation
10th Place Rapids cuML Solution Quick Writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a long and hard competition! I'd like to congratulate all the teams who stuck with it till the end. I am looking forward to the sharing of the winning solutions.</p> <p>My main goal in this competition was to experiment with <a href="https://rapids.ai/start.html" target="_blank">Rapids.ai</a> tools to accelerate the pre-processing and post-processing of deep learning models.</p> <p>Pre-processing:</p> <ul> <li>use <code>dask</code> to process the raw data, convert it to dataframes and save them as <code>parquets</code>.</li> <li>use <code>dask-cudf</code> to engineer features from many small <code>parquets</code>.</li> <li>use <code>cuml LabelEncoder, TargetEncoder, Nearest Neighbor</code> and <code>Xgboost</code> to create simple models and explore the dataset.</li> </ul> <p>Model:<br> I built two RNNs using PyTorch Lightning:</p> <ul> <li>use wifi features to predict the waypoints directly.</li> <li>use IMU features to predict the shift of waypoints, <code>delta x &amp; y</code>, similar to <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/239884" target="_blank">Olaf's approach</a></li> </ul> <p>Post-processing:<br> The approach is to interleave <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">cost minimization</a>, <a href="https://www.kaggle.com/rafaelcartenet/scaled-floors-geojsons-new-dataset" target="_blank">snap to corridor</a> and <a href="https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing" target="_blank">snap to grid</a> and run them iteratively. Hyperparameters such as <code>alpha</code> in <code>cost minimization</code> and <code>threshold</code> in <code>snap to grid</code> are tuned for each iteration. I rewrote these functions with <code>cupy</code> and <code>cudf</code> and they were very fast. My best submission used 30 iterations (90 post-processing function calls in total), which took less than 10 minutes. I didn't do exact measurements, but that could be 100x faster than the original implementation. The speedup is very important for searching for the best hyperparameters. The post-processing improves the score by 1.1~1.2, which is quite significant.</p> <p>Other notes:<br> I didn't use the start/end points leak or the hand-labeled waypoints. I spent a lot of time creating an end-to-end neural network that incorporates <code>cost minimization</code> and <code>snap to grid</code> in the training process, but it didn't go anywhere in the end. I'm looking forward to the approach of the 1st place team, who successfully built an end-to-end model.</p> <p>Unfortunately, I am way behind in my other workloads, so I apologize that my solution and source code will be shared later when I find the time. I'll update this thread when it's done.</p> <p>Thank you again for posting the great kernel. It's elegant and so insightful.</p> <p>A quick tip is to avoid using python <code>for</code> loops as much as possible. For snap-to-grid we can calculate all-to-all distances using numpy/cupy broadcast in one shot. 
For cost-minimization, we can connect paths into one long sequence and set the <code>beta</code> at the boundaries to zero. I'd like to do some further optimization to remove <code>for</code> loops completely from these two functions (a small broadcasting sketch for snap-to-grid is included at the end of this post).</p> <p>Yes, I will. Please give me some time to clean the code (and I might be able to make it even faster) and measure the speedup. I'll update this thread :)</p>
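<p>For illustration, here is a loop-free snap-to-grid sketch using broadcasting (not the author's code; the names <code>preds</code>, <code>grid</code> and the threshold value are assumptions, and swapping numpy for cupy gives the GPU version unchanged):</p> <pre><code>import numpy as np

def snap_to_grid(preds, grid, threshold=5.0):
    """preds: (n_points, 2) predicted x/y; grid: (n_grid, 2) known waypoints."""
    diff = preds[:, None, :] - grid[None, :, :]      # all-to-all differences
    d2 = (diff ** 2).sum(axis=2)                     # squared distances, (n_points, n_grid)
    nearest = d2.argmin(axis=1)
    dist = np.sqrt(d2[np.arange(len(preds)), nearest])
    snapped = preds.copy()
    mask = dist &lt;= threshold                         # only snap when close enough
    snapped[mask] = grid[nearest[mask]]
    return snapped

# Example:
preds = np.array([[1.2, 0.9], [10.0, 10.0]])
grid = np.array([[1.0, 1.0], [5.0, 5.0]])
print(snap_to_grid(preds, grid, threshold=2.0))
</code></pre>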
Indoor Location & Navigation
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners. Even though I had no experience in the domain, I found this competition very interesting and decided to enter it very early. This allowed me to try different ideas. Let me present the one that I used in my final submission.</p> <p><strong>1 - Pre-processing</strong><br> I used only wifi signals for position prediction. Beacon and magnetic data didn't help.<br> I grouped all wifi rows by block, but I reallocated wifi rows based on "last seen timestamp" to the wifi block that was closest with respect to timestamp.</p> <p><strong>2 - Floor prediction and Wifi-based position prediction</strong><br> In a first step, I used LGBM models for floor prediction (1 model per building) and simple 2-layer feed-forward NNs for (x, y) positions (1 model per floor). The performance was OK.<br> Then I switched to a rather radical approach that provided better results:<br> I computed, for each test point, the cosine of the angle between its wifi fingerprint and that of each training point, i.e. the scalar product divided by the norms of the 2 vectors.<br> Using the cosine instead of the raw scalar product is important because the intensity of the wifi signals varies a lot between positions.<br> As a result, a perfect match returns a cosine of 1, and the value decreases as the closest training point "match" is farther away (actually, there is also a bit of post-processing to discard outliers).<br> For each test point, the best training point "match" is computed FOR EACH FLOOR.<br> The floor prediction is then performed by checking the evolution of the cosine along the path: the floor whose cosine is highest most frequently is selected as the predicted floor (a small sketch of this cosine matching is given at the end of this post).</p> <p>Using this approach (which is a kind of k-NN), there is no model (no parameter to train), and no CV…<br> Actually, following the simple idea presented in the thread <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization</a>, I submitted an edited version of one of my final submissions and was happy to see that my floor prediction was 100% accurate both for public and private data:</p> <table> <thead> <tr> <th>Submission and Description</th> <th>Private Score</th> <th>Public Score</th> </tr> </thead> <tbody> <tr> <td>submission_df_leak_start_end - REFINED_PATHS-FullPath-SE-NoENF-LS_submission-2021-05-17_07-58-29 - score2.692.csv</td> <td>3.25303</td> <td>2.69255</td> </tr> <tr> <td>submission_INCREMENT_2021-05-18_17-42-06.csv</td> <td>18.25303</td> <td>17.69255</td> </tr> </tbody> </table> <p><strong>3 - Introducing accelerometer data</strong><br> With the approach discussed in section 2, the position prediction is still very noisy. It is necessary to exploit the accelerometer data.<br> At this stage, I didn't start from the raw data but instead used the library provided by the organisers ("compute_step_positions").<br> First, I used my own post-processing (using some local averaging along paths, combining initial (x, y) predictions and step positions with a moving window). However, Saito's notebook was much more efficient. 
Also, the weights in the cost minimization expression could use the cosine values to reflect the points for which one is more confident.</p> <p><strong>4 - Improving paths by local search</strong><br> The output of section 3 is a list of (x, y) positions that define paths.<br> I defined another cost function that combined 3 values for each path:</p> <ul> <li>error for (x, y) positions (current positions vs "starting" positions, for each point on the path)</li> <li>error for the length of each segment in the path (ratio between the length based on start/end positions and the associated step_positions length)</li> <li>error for the heading between consecutive segments (again: variation in heading based on (x, y) positions vs variation in heading based on accelerometer data)</li> </ul> <p>Then a population-based search was performed, using some "mutation" operators to update the path and constraints to place points in corridors.<br> Again, no CV framework was used here, but improvements in the local search based on this cost function appeared highly correlated with LB scores.</p> <p><strong>5 - Post-processing</strong><br> I didn't explore that part much. I just lazily invoked the snap-to-grid (<a href="https://www.kaggle.com/dragonzhang/3-3-g6-indoor-navigation-snap-to-grid" target="_blank">https://www.kaggle.com/dragonzhang/3-3-g6-indoor-navigation-snap-to-grid</a>) and leakage (<a href="https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage" target="_blank">https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage</a>) notebooks like many of us :-).<br> These notebooks regularly improved my solutions by around 0.3.</p> <p><strong>Final comments</strong><br> One direction I wanted to investigate was correcting the step_positions returned by the python library provided by the competition host. Indeed, doing some statistics on the training data, one could notice significant discrepancies between segments based on actual training positions and the corresponding step_positions (roughly, a factor in the range 0.5 to 1.5, depending on the path).</p> <p>I think the main benefit of the approach presented above is that it does not rely too much on training points (except for the final snap to grid). I believe this makes it more applicable to real-life applications in which the positions to predict don't match known positions… However, it does not take enough advantage of the setup of this competition, in which a large fraction of test positions matches training positions!</p> <p><strong>References:</strong><br> I want to thank the authors of the following 4 notebooks that helped me a lot:<br> 1) <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization</a><br> Like many of us, I also used Saito's code in my approach. At the beginning, this notebook was not improving my solution that much, but its runtime was much better!<br> 2) <a href="https://www.kaggle.com/rafaelcartenet/scaled-floors-geojsons-new-dataset" target="_blank">https://www.kaggle.com/rafaelcartenet/scaled-floors-geojsons-new-dataset</a><br> I wasn't familiar with the shapely library. This notebook provided me with all the code for supporting the local search approach that I implemented. Many thanks to Rafael Cartenet!<br> 3) <a href="https://www.kaggle.com/dragonzhang/3-3-g6-indoor-navigation-snap-to-grid" target="_blank">https://www.kaggle.com/dragonzhang/3-3-g6-indoor-navigation-snap-to-grid</a><br> 4) <a href="https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage" target="_blank">https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage</a></p>
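<p>Here is a minimal sketch of the cosine-similarity matching between one test fingerprint and the training fingerprints (not the author's code; the vectors are assumed to be indexed by BSSID, with zeros for BSSIDs that were not seen):</p> <pre><code>import numpy as np

def best_match(test_vec, train_matrix):
    """test_vec: (n_bssid,) RSSI-derived values; train_matrix: (n_train, n_bssid).
    Returns (index of the best training point, its cosine similarity)."""
    norms = np.linalg.norm(train_matrix, axis=1) * np.linalg.norm(test_vec)
    norms = np.where(norms == 0, 1e-9, norms)        # avoid division by zero
    cos = train_matrix @ test_vec / norms            # scalar product / norms
    i = int(np.argmax(cos))
    return i, float(cos[i])

# Example: 3 training fingerprints over 4 BSSIDs
train = np.array([[0.9, 0.1, 0.0, 0.0],
                  [0.0, 0.5, 0.5, 0.0],
                  [0.0, 0.0, 0.2, 0.8]])
test = np.array([0.8, 0.2, 0.0, 0.0])
print(best_match(test, train))   # matches the first fingerprint
</code></pre>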
Indoor Location & Navigation
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!</p> <p>First of all, I would like to thank <a href="https://www.kaggle.com/kokitanisaka" target="_blank">@kokitanisaka</a> for the unified wifi dataset, <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> for snap to grid, <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a> for cost minimization and <a href="https://www.kaggle.com/joelqv" target="_blank">@joelqv</a> for the MLP and shapely notebooks. They were really helpful!</p> <p>My pipeline was as follows (it is simplified, but shows the core ideas of my solution):</p> <ol> <li>train a floor LSTM. I used the predicted floor as an input to the following models</li> <li>train a stack of BiLSTMs which predicted the position every 2 seconds. I used smoothed wifi and beacon RSSI values as well as IMU data. The architecture was: <a href="https://imgur.com/9Zqxlex" target="_blank">link</a></li> <li>train a stack of MLPs using the unified wifi dataset</li> <li>ensemble 2. and 3.</li> <li>train an LSTM which predicted the distance covered between 2 waypoints. I used all the data, from over 100 buildings. It achieved 70 cm accuracy</li> <li>train a delta-correction LSTM. Refer to <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/239884" target="_blank">my previous post</a></li> <li>cost minimize 4., 5. and 6.</li> </ol> <p>At this point the path trajectories looked really promising; they were usually just shifted. So I decided to come up with a method which would shift them into corridors. I defined a cost function which, for every predicted coordinate in the path, calculated the squared distance to the closest waypoint and summed these up (I also tried the distance to the corridor computed by BFS; it gave a slightly worse score, but didn't require waypoints). The goal was to minimize this cost (down to a local minimum). The iterative algorithm worked as follows (see the sketch at the end of this post):</p> <ul> <li>for every vector <em>v</em> in [-1, -0.5, 0, 0.5, 1] x [-1, -0.5, 0, 0.5, 1] calculate what the cost would be after shifting the path by this vector</li> <li>choose the shift that gives the smallest cost and shift the path by this vector</li> <li>repeat the procedure until the cost converges</li> </ul> <p>The results can be found <a href="https://imgur.com/a/K7u8izU" target="_blank">here</a> and <a href="https://imgur.com/a/zdbt0qP" target="_blank">here</a> (this technique is <strong>different from what other teams have used</strong>, so I recommend taking a look at the results :))</p> <p>Finally, I applied snap to grid.</p> <p>I used automatically generated waypoints created by the following procedure:</p> <ul> <li>create a floor mask using shapely</li> <li>"flood" the area around every training waypoint</li> <li>in areas that remained unchanged, create a dense grid<br> An example of the resulting mask is <a href="https://imgur.com/a/KP43X58" target="_blank">here</a></li> </ul> <p>Once again thank you all for the competition. It was my first one. I'll definitely take part in another in the future.</p>
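<p>A minimal sketch of the iterative whole-path shift described above (illustrative, not the author's code; <code>path</code>, <code>waypoints</code> and the cost based on the nearest waypoint are assumptions taken from the description):</p> <pre><code>import itertools
import numpy as np

def path_cost(path, waypoints):
    # sum of squared distances from each path point to its closest waypoint
    d2 = ((path[:, None, :] - waypoints[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def shift_path(path, waypoints, steps=(-1.0, -0.5, 0.0, 0.5, 1.0), max_iter=100):
    shifts = np.array(list(itertools.product(steps, steps)))   # 25 candidate vectors
    best = path_cost(path, waypoints)
    for _ in range(max_iter):
        costs = [path_cost(path + s, waypoints) for s in shifts]
        i = int(np.argmin(costs))
        if costs[i] &gt;= best:      # no candidate improves (zero shift included): converged
            break
        best = costs[i]
        path = path + shifts[i]
    return path

# Example:
waypoints = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
path = np.array([[0.7, 1.4], [2.6, 1.5], [4.4, 1.6]])
print(shift_path(path, waypoints))
</code></pre>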
Indoor Location & Navigation
14th place: some findings and solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'd like to thank this great competition's hosts and congratulate the winners and all medalists!</p> <p>I think there are many approaches to this problem, and I'm busy reading and understanding other participants' awesome solutions.</p> <p>I'll also write up part of my solution.</p> <ol> <li><p>Findings and preprocessing<br> There are many more wifi timestamps than waypoints, and it is effective to use all wifi data (interpolating the waypoint data).<br> Some BSSIDs show highly correlated RSSI, and removing them is effective.<br> There are several MAC addresses for one ibeacon (uuid_major_minor), and we need to use the MAC address instead of the uuid.</p></li> <li><p>Models<br> I used LGBM to predict each point independently.<br> In this model, I train on and predict the waypoints where wifi data exists, and predict the original WAYPOINT by adding relative position data.<br> I used both wifi and ibeacon data.<br> I used only BSSIDs and MAC addresses which exist in both train and test.<br> These data are transformed to ranks (not raw RSSI values), as in the sketch at the end of this post.</p></li> <li><p>Postprocessing<br> Cost minimization with leakage and snap to grid.<br> In cost minimization, I added a leakage term to the original cost function.<br> In snap to grid, I considered relative positions (from sensor data) when selecting the proper grid point.<br> The notebook is published <a href="https://www.kaggle.com/iwatatakuya/snap-to-grid-with-relative-position-data" target="_blank">here</a>.<br> In my case, this postprocessing method improved the score a lot.</p></li> <li><p>Ensemble<br> Ensemble of several models and postprocessing variants.</p></li> </ol> <p>Thank you for reading this far.<br> Questions are welcome!</p>
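<p>As a small illustration of the rank transform mentioned in the Models section, here is a sketch (not the author's code; the column names are made up). Each row is one wifi group, columns are BSSIDs, values are RSSI, and the strongest signal in a row gets rank 1:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    "bssid_a": [-40.0, -70.0, np.nan],
    "bssid_b": [-55.0, -45.0, -60.0],
    "bssid_c": [np.nan, -80.0, -50.0],
})

# rank within each row; missing BSSIDs stay NaN
ranks = df.rank(axis=1, ascending=False, method="first")
print(ranks)
</code></pre>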
Indoor Location & Navigation
15th place solution - Dive into compute_f without Grid Generation + Simple 100% KNN Floor
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the host for such an interesting competition, and also to my great teammates for working hard throughout the competition ( <a href="https://www.kaggle.com/ryotayoshinobu" target="_blank">@ryotayoshinobu</a>, <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>, <a href="https://www.kaggle.com/columbia2131" target="_blank">@columbia2131</a> ).<br> The key factor in this competition was post processing based on algorithms rather than modeling. Also, all of us like competitive programming, so there was no other possible team name than “Algorithm is All You Need” :)</p> <p>Here is our solution.</p> <p><strong>Quick summary</strong></p> <p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1357497/2256047/Picture1.jpg" alt=""></p> <p>This competition required a good amount of pre and post processing. Our final prediction was a blend of 3rd stage models trained with iterative pseudo labeling. 
Repeating the post processing was also important for our solution.</p> <p><strong>Simple 100% Accurate Floor model</strong></p> <ul> <li>Consider RSSI as the number of occurrences of a BSSID.</li> <li>TF-IDF vectorization.</li> <li>KNN for each site.</li> </ul> <p>Our KNN model achieved 100% accuracy on both the public and private datasets.</p> <p><strong>Waypoint Model</strong></p> <p><strong>Preprocess</strong></p> <ul> <li>Linear interpolation of waypoints based on WiFi timestamps.</li> <li>Fix malformed txt data with <a href="https://www.kaggle.com/higepon" target="_blank">@higepon</a>'s notebook<br> <a href="https://www.kaggle.com/higepon/how-to-fix-malformed-train-test-data" target="_blank">https://www.kaggle.com/higepon/how-to-fix-malformed-train-test-data</a></li> <li>Remove WiFi information which exists only in the training dataset.</li> <li>Apply a Kalman Filter to the sensor data to get a precise result from compute_f.</li> </ul> <p><strong>Cross Validation</strong></p> <ul> <li>GroupKFold by path</li> <li>5 folds, but n_splits=15, to get many combinations of path groups and enhance random seed averaging.</li> </ul> <p><strong>1st stage training</strong></p> <ul> <li>LSTM based on <a href="https://www.kaggle.com/Kouki" target="_blank">@Kouki</a>'s notebook<br> <a href="https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats" target="_blank">https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats</a>.<br> Hyperparameters were tuned with Optuna.</li> <li>Transformer-like Convolutional Encoder.<br> BSSID and RSSI are embedded, then Positional Encoding is added.<br> No Feed Forward Network.<br> Batch Normalization instead of Layer Normalization.<br> Multiplicative attention instead of additive.<br> The Query of MultiHeadAttention is the BSSID embeddings; Key and Value are the RSSI embeddings.<br> The encoding process is like [Embedding -&gt; Conv1d -&gt; Multiply Attention -&gt; Conv1d -&gt; Multiply Attention -&gt; … -&gt; Dense]</li> <li>10 random seeds to learn multiple combinations of path groups.</li> </ul> <p><strong>2nd stage training</strong></p> <ul> <li>Convolutional stacking to learn the correlation between different models or different random seeds, which is equivalent to learning multiple path combinations.</li> <li>LightGBM to learn time series and relative position information.<br> The additional input features are:</li> </ul> <ol> <li>n predictions before and after</li> <li>Difference between n predictions</li> <li>Rate of change from n predictions</li> <li>Difference between the previous and next n predictions</li> <li>Moving average</li> <li>Moving variance</li> <li>Relative position (compute_f, compute_rel_position)</li> <li>Cumulative sum of relative positions</li> <li>Difference between predicted values and cumulative sum of relative positions</li> <li>Aggregate features for each path (mean, max, min, median, std, sum)</li> </ol> <p><strong>3rd stage training</strong></p> <ul> <li>Hill Climbing to get weights which minimize the loss.</li> <li>Ridge regression to suppress overfitting.</li> </ul> <p><strong>Post process</strong><br> Post processing was very important for pushing up scores significantly, and it is also a fun part for competitive programmers. Here we describe the detailed steps. 
A summary is shown in the figure below.</p> <p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1357497/2256047/Picture2.jpg" alt=""></p> <p><strong>Snap to grid</strong><br> published by <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a><br> <a href="https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing" target="_blank">https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing</a><br> The idea is to replace the predicted coordinates with the nearest waypoint of the training data. There are two positive effects:<br> 1) it pushes the predicted waypoint into the hallway when it is inside an obstacle.<br> 2) it enables us to predict exactly the same points as the training data.<br> As you all know, there are a lot of overlapping waypoints between the training and the test data, so selecting coordinates from the train waypoints is very effective. Since our team applied iterative post processing, we set the threshold to 6 in the first half of the post-processing passes to increase the probability of snapping, and to 1.5 in the latter half to avoid strange snapping of already precise waypoints.</p> <p><strong>Parallel movement by Snap to grid</strong><br> As mentioned above, Snap to grid snaps predictions to the nearest point. However, the snapped path will not keep its original shape, especially when most of the path is predicted to be inside a wall, because the snap direction is unpredictable. Therefore, before doing the snap to grid, we parallel-move the path in the direction it is likely to be snapped, while maintaining its original shape. We calculate the difference between the original coordinates and the coordinates after Snap to Grid, and regard the average differences dx and dy as the direction in which snapping is likely to occur. By adding dx and dy to the original coordinates, we were able to move the path in parallel toward the direction in which it is likely to be snapped, while maintaining the original shape of the path.</p> <p><strong>Cost Minimization</strong><br> published by <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a><br> <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization</a></p> <p>The idea is to minimize the difference between the displacement of the predicted points and the displacement calculated from the sensor data.</p> <p>⊿X^ is calculated by compute_rel_positions in compute_f. Though the host's prediction of relative coordinates with the compute_rel_positions function is reasonably accurate, its accuracy depends on the quality of the raw sensor data. 
If you look carefully at the relative coordinates, you will notice that the path of relative coordinates has been scaled up and that there is a bias in the direction of rotation compared to the original path. Let's check the precision of this function.</p> <p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1357497/2256047/Picture3.jpg" alt=""></p> <p>The blue line indicates the training path, and the red line the relative positions. It is clear that the relative positions are scaled up and rotated compared to the blue line, so we need to calibrate the compute_f function to get good relative positions. compute_f uses sensor data, so we try to clean up the raw sensor data with a Kalman Filter. Using Kalman-filtered sensor data produces nice relative positions with a scale similar to the original path, which leads to a good score from cost minimization.</p> <p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1357497/2256047/Picture4.png" alt=""></p> <p>Next, we tackle the rotation problem. 
Here is an example.</p> <p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1357497/2256047/Picture5.jpg" alt=""></p> <p>It is obvious that the red lines are rotated by some degrees. We try to solve this rotation problem by minimizing the Euclidean distance.</p> <p>Let the predicted coordinates in a certain path after post-processing be A1, A2, …, An in order from the start point, and the predicted coordinates by compute_rel_positions be B1, B2, …, Bn. If there is no rotation, the movement amount delta_a of A1 → A2 and the movement amount delta_b of B1 → B2 should be roughly the same. However, as you can see from the visualization, compute_rel_positions rotates by some degrees, so delta_a and delta_b do not match. Therefore, we tried a brute-force search for the rotation angle θ of delta_b which minimizes the difference between delta_a and delta_b (a minimal sketch of this search is given at the end of this write-up).<br> The loss function is the sum of the squared differences between delta_a and delta_b:</p> <p>loss(θ) = ||(A2 - A1) - (B2 - B1)R(θ)||^2 + ||(A3 - A2) - (B3 - B2)R(θ)||^2 + … + ||(An - An-1) - (Bn - Bn-1)R(θ)||^2</p> <p>where R(θ) is a rotation matrix that rotates the coordinates by θ degrees. For example, (B3 - B2)R(θ) represents the movement B2 → B3 rotated by θ, and ||X - Y||^2 represents the squared difference between the movements X and Y. In other words, the loss function measures the difference between the "current movement amount" and "the movement amount calculated by compute_rel_positions rotated by θ". The smaller the value of the loss function, the closer the angle of B is to A.</p> <p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1357497/2256047/Picture6.jpg" alt=""></p> <p>We corrected the submission path using compute_rel_positions inside the post-process pipeline, and corrected the rotation angle of compute_rel_positions using the submission at the end of the pipeline. 
As we repeated the post-process pipeline, both the submission path and compute_rel_positions enhanced each other.</p> <p>The difference between the original and modified compute_f is very clear.</p> <p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1357497/2256047/Picture7.png" alt=""></p> <p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1357497/2256047/Picture8.png" alt=""></p> <p>In addition, cost minimization has two important parameters, α and β. The higher α is, the more the current prediction is emphasized; the higher β is, the more the sensor data are emphasized. Since we use Cost Minimization many times in our pipeline, β was attenuated with each successive use so that Cost Minimization would not be too influenced by the sensor data.</p> <p><strong>Push to Hallway</strong><br> If predictions are located in an obstacle, we need to push them to the nearest waypoint. But when we just increase the threshold of Snap to Grid, waypoints are changed to strange waypoints, since there are many hallways which don't have waypoints. So, we tried to push the waypoints to the nearest hallway even where there were no waypoints.<br> Each pixel of the floor image was used to determine whether a prediction is located in a hallway or in an obstacle, and we moved waypoints located in obstacles to the nearest coordinates in the hallway. 
This has the great advantage of being able to correct waypoints in obstacles that are far from training waypoints.</p> <p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1357497/2256047/Picture9.png" alt=""></p> <p><strong>Apply Start and End Points leakage</strong><br> published by <a href="https://www.kaggle.com/tomooinubushi" target="_blank">@tomooinubushi</a><br> <a href="https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage" target="_blank">https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage</a></p> <p>This was beyond our analytical capabilities, so we only corrected the start and end waypoints, the same as in the public notebook.</p> <p><strong>What didn't work</strong><br> Time-series RNN.<br> Inputting all WiFi and sensor data to an NN.<br> A separate model for each site.<br> Pre-training on all data and transfer learning.<br> Dijkstra, Floyd-Warshall and breadth-first search as shortest-path solvers.<br> Map Matching<br> Curve to Curve<br> Grid Generation</p> <p>Thank you for reading our solution.<br> Any opinions are welcome!</p>
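<p>To illustrate the brute-force rotation-angle search from the Cost Minimization section, here is a minimal sketch (not the team's code; the angle grid and array names are assumptions, and A/B are the post-processed path and the compute_rel_positions path, respectively):</p> <pre><code>import numpy as np

def best_rotation(A, B, grid_deg=np.arange(-30, 30.1, 0.1)):
    dA = np.diff(A, axis=0)                  # current movement amounts
    dB = np.diff(B, axis=0)                  # movement amounts from relative positions
    best_theta, best_loss = 0.0, np.inf
    for deg in grid_deg:
        t = np.deg2rad(deg)
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        loss = np.sum((dA - dB @ R.T) ** 2)  # loss(theta) = sum ||dA_i - dB_i R(theta)||^2
        if loss &lt; best_loss:
            best_theta, best_loss = deg, loss
    return best_theta

# Example: B is A rotated by 10 degrees around the origin
A = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0], [2.0, 2.0]])
t = np.deg2rad(10)
R10 = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
B = A @ R10.T
print(best_rotation(A, B))   # close to -10: the angle that rotates B's deltas back onto A's
</code></pre>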
Indoor Location & Navigation
16th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners, and thanks so much for hosting such an interesting competition! I'll share our solution.</p> <h2>Summary</h2> <p><img src="https://user-images.githubusercontent.com/43205304/118599610-c62cff00-b7ea-11eb-9040-d49651249bbe.png" alt="Indoor-solution (1)"></p> <hr> <h2>dataset</h2> <p>Our dataset is based on <a href="https://www.kaggle.com/kokitanisaka/indoorunifiedwifids" target="_blank">indoor-unified-wifi-ds</a>. The major differences are as follows.</p> <ul> <li>Remove the data if the difference between the waypoint timestamp and the wifi last seen timestamp is more than 10s. <strong>(This is very important)</strong></li> <li>The test data was also created as a wifi-based dataset in the same way as the train data, in order to average over timestamps during inference. So there are about three times as many rows as in the original.</li> <li>The hidden waypoints are filled in by <a href="https://www.kaggle.com/kuto0633/linear-interpolation-for-waypoint-in-wifi-dataset" target="_blank">Linear Interpolation</a> and a <a href="https://www.kaggle.com/arnaudcapitaine/get-indoor-location-by-means-of-kalman-smoother" target="_blank">Kalman Filter</a>. This can be expected to act as a form of data augmentation (a small interpolation sketch is included at the end of this post).<br> <img alt="Screenshot 2021-05-18 8 41 25" src="https://user-images.githubusercontent.com/43205304/118569421-e2ae4480-b7b4-11eb-9b6f-713566812365.png"></li> </ul> <hr> <h2>model</h2> <h3>model for xy</h3> <p>Our model is based on <a href="https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats" target="_blank">LSTM by Keras with Unified Wi-Fi Feats</a>. Since this LSTM model is not time-series based, we also use an MLP model.<br> The features used are the following.</p> <ul> <li>80 BSSIDs</li> <li>80 RSSI values</li> <li>site id</li> <li>floor (at inference time we use the values predicted by the floor model)</li> </ul> <h3>model for floor</h3> <ul> <li>lightGBM (we also tried a BiLSTM)</li> <li>Trained as a classification task</li> <li>One model per site</li> <li>Dataset built with one row per path file</li> <li>StratifiedGroupKFold</li> <li>features<ul> <li>BSSID</li> <li>Mean RSSI</li> <li>Max RSSI</li> <li>Only BSSIDs that are present in both test and train are used.</li></ul></li> </ul> <hr> <h2>training (2 stages)</h2> <h3>1st stage</h3> <ul> <li>Almost as shown in the figure above.</li> <li>Custom loss<br> Each record in our dataset has a timediff (the time difference between the waypoint timestamp and the wifi group timestamp). The larger this timediff is, the larger the discrepancy between the given target and the ground truth. 
So we apply a weighted loss according to timediff.</li> </ul> <pre><code># timediff -&gt; weight
timediff = df['timediff'].astype(np.float32).abs().values
weight = 1 - (timediff / np.max(timediff))

class WeightedMSELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.loss = nn.MSELoss(reduction='none')

    def forward(self, input, target, weight):
        input = input.float()
        target = target.float()
        weight = torch.stack((weight, weight), 1).float()
        loss = self.loss(input, target) * weight
        return loss.mean()
</code></pre> <h3>2nd stage</h3> <p>Re-learn by adding the following elements.</p> <ul> <li>Add test data by pseudo labeling</li> <li>Remove train data if the oof error is over 40m (we also tried removing over 20m)</li> </ul> <hr> <h2>post processing (pp)</h2> <p>We used three pp steps which were shared in Notebooks &amp; Discussions.<br> We repeated the three pp steps <strong>6 times</strong>. It is very effective.</p> <ul> <li><a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">Cost minimization</a><br> There are differences between the delta from the sensors and the delta from the targets, so we corrected the sensor delta with coefficients obtained by linear regression for each site and floor.</li> </ul> <table> <thead> <tr> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td><img alt="Screenshot 2021-05-18 8 49 51" src="https://user-images.githubusercontent.com/43205304/118569982-16d63500-b7b6-11eb-8f70-35f1c2c10c1c.png"></td> <td><img alt="Screenshot 2021-05-18 8 49 59" src="https://user-images.githubusercontent.com/43205304/118569988-189ff880-b7b6-11eb-9e0e-060ece210127.png"></td> </tr> </tbody> </table> <ul> <li><a href="https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing" target="_blank">Snap to grid</a><br> We automatically generated 3 patterns of extra grids instead of hand-labeling grids, as follows.</li> </ul> <table> <thead> <tr> <th>sparse</th> <th>dense</th> <th>edge</th> </tr> </thead> <tbody> <tr> <td><img alt="Screenshot 2021-05-18 8 13 53" src="https://user-images.githubusercontent.com/43205304/118567646-1dae7900-b7b1-11eb-9425-ffb414c7f923.png"></td> <td><img alt="Screenshot 2021-05-18 8 14 27" src="https://user-images.githubusercontent.com/43205304/118567717-42a2ec00-b7b1-11eb-957f-b18649ea3f0d.png"></td> <td><img alt="Screenshot 2021-05-18 8 14 07" src="https://user-images.githubusercontent.com/43205304/118567652-21420000-b7b1-11eb-92c3-29329d2d1fd5.png"></td> </tr> </tbody> </table> <ul> <li><a href="https://www.kaggle.com/iwatatakuya/use-leakage-considering-device-id-postprocess" target="_blank">device id leak</a></li> </ul> <h2>ensemble</h2> <p>We did stacking and an extra-grid ensemble. We ran the post processing with 4 different patterns and then ensembled them by weighted average.</p> <p>The 4 patterns are:<br> ① snap to grid’s threshold=None / sparse extra grid<br> ② snap to grid’s threshold=None / dense extra grid<br> ③ snap to grid’s threshold=None / edge extra grid<br> ④ snap to grid’s threshold=5 / only train grid</p> <hr> <p>Thank you!</p> <p><a href="https://www.kaggle.com/yutoshibata" target="_blank">@yutoshibata</a> Thanks! I'm glad to get such comments.</p> <blockquote> <p>you created two different models (not time-series lstm and mlp)</p> </blockquote> <p>Yes, another teammate and I created the LSTM, and the other created the MLP.</p> <blockquote> <p>the latter MLP model used time-series information like gru or lstm ?</p> </blockquote> <p>No, I mean that our LSTM is almost the same as an MLP because it doesn't deal with time series. 
So none of our models are time-series based.</p> <p><a href="https://www.kaggle.com/dehokanta" target="_blank">@dehokanta</a> Thanks for your comment.</p> <blockquote> <p>What was your score before post-processing?<br> Also, how effective were the repeats without extra grids?</p> </blockquote> <p>Our single model's CV is shown below. <br> These scores use the extra grid, but the variant without the extra grid improves by almost the same amount.<br> After three repetitions the effect of pp slowed down while the CV still improved, so we settled on 6 repetitions.</p> <table> <thead> <tr> <th></th> <th>CV score</th> </tr> </thead> <tbody> <tr> <td>CV(not pp)</td> <td>6.311</td> </tr> <tr> <td>CV(pp×1)</td> <td>4.672</td> </tr> <tr> <td>CV(pp×2)</td> <td>4.377</td> </tr> </tbody> </table> <blockquote> <p>Personally, I think that if there are no extra grid points, there will be no effect of repeating the process.</p> </blockquote> <p>Repeating also helps without extra grids in our case. Snap-to-grid in iteration N-1 moves the predicted waypoints closer to the ground truth, so the result of cost minimization in iteration N changes as well, and the score improves.</p> <p><a href="https://www.kaggle.com/horsek" target="_blank">@horsek</a> Thanks, and congrats on your silver medal!<br> I understand your approach. It probably has almost the same effect as ours if you set an appropriate timediff threshold (=10s in our case).</p> <pre><code>t_l: wifi last seen timestamp
t_g: wifi group timestamp
t_w: waypoint timestamp

our timediff  = t_w - t_l
your timediff = t_g - t_l
              = (t_g - t_l) + (t_w - t_w)
              = (t_w - t_l) + (t_g - t_w)
              = (our timediff) + (t_g - t_w)
</code></pre> <p>where (t_g - t_w) should be constant if both of us use kouki's dataset.</p> <p>If there are any mistakes, please point them out.</p>
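<p>To make the 10-second wifi filter and the timediff-based sample weight discussed above concrete, here is a minimal pandas sketch. The dataframe layout, the column names <code>waypoint_ts</code> / <code>wifi_last_seen_ts</code> and the toy values are assumptions for illustration, not the team's actual dataset.</p> <pre><code>import numpy as np
import pandas as pd

# hypothetical wifi-based rows: one row per (waypoint, wifi group) pair
df = pd.DataFrame({
    "waypoint_ts":       [1000, 2000, 30000, 4000],
    "wifi_last_seen_ts": [ 900, 1500,  5000, 3990],
})

# 1) drop rows where the wifi scan is more than 10 s older than the waypoint
timediff = (df["waypoint_ts"] - df["wifi_last_seen_ts"]).abs()
df = df[timediff &lt;= 10_000].copy()

# 2) turn the remaining timediff into a per-sample weight in [0, 1]
#    (the larger the gap, the smaller the weight), as in the WeightedMSELoss above
td = (df["waypoint_ts"] - df["wifi_last_seen_ts"]).abs().astype(np.float32).values
df["weight"] = 1.0 - td / np.max(td)
print(df)
</code></pre>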
Indoor Location & Navigation
18th place solution (Moro & taksai)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for the fun competition and everyone who participated. And thank you to my teammate ( taksai <a href="https://www.kaggle.com/tsaito21219" target="_blank">@tsaito21219</a> ).<br> I share our team's solution.</p> <h1>summary</h1> <ul> <li>train xy-model with only wifi-data</li> <li>train delta-model with sensor-data for cost-minimization</li> <li>postprocess repeatly 20times</li> </ul> <h1>1. preprocess</h1> <p>we make 4 dataset for the two model described later.</p> <table> <thead> <tr> <th>dataset</th> <th>describe</th> </tr> </thead> <tbody> <tr> <td>(a)bssid-ranking</td> <td>list of bssids arranged in descending order of rssi. (common to buildings)</td> </tr> <tr> <td>(b)bssid-rssi-matrix</td> <td>bssid as a columns and rssi as data (for each building). Fill missing-value in -120. Drop rssi-value over 10000ms from last-seen-timestamp.</td> </tr> <tr> <td>(c)sensor-data(aggregate)</td> <td>aggregate data between wifi. (1 record for 1 target)</td> </tr> <tr> <td>(d)sensor-data(sampling)</td> <td>sampling raw data between wifi every 100ms (N record for 1 target). Padding to a fixed length.</td> </tr> </tbody> </table> <h1>2. models</h1> <p>we make 2 type of model. First is xy-model which predict position of waypoint, Second is delta-model which predict distance between two waypoints.</p> <h2>(1) xy-model</h2> <ul> <li>GBDT (lightgbm) with dataset-(b): each site</li> <li>MLP (keras) with dataset-(a): 1model</li> <li>MLP (keras) with dataset-(b): each site -&gt; LB=6.5 (MLP no postprocess)</li> </ul> <h2>(2) delta-model</h2> <ul> <li>This model is used for cost-minimization(postprocess). Since the delta calculated using the github function has a little error, the error is reduced by creating a prediction model. <ul> <li>error(mean of sum of squared error): 13.1(use github) -&gt; 4.48(our model)</li> <li>MAE(mean over x and y of absolute error) : 1.69(use github) -&gt; 1.08(our model) </li></ul></li> <li>MLP (keras) with dataset-(c): Dense(128) &gt; Dense(256) &gt; Dense(128) &gt; Dense(64) &gt; Dense(2)</li> <li>1d-cnn (keras) with dataset-(d): Conv1D(filters=32,kernel_size=5,padding=2) &gt; Conv1D(64,5,2) &gt; Conv1D(128,3,2) &gt; Conv1D(256,3,2) &gt; Conv1D(512,2,2) &gt; GlobalMaxPool1D &gt; Dense(64) &gt; Dense(2)</li> </ul> <h1>3. postprocess</h1> <p>We customized postprocessing based on some useful public kernel.</p> <ul> <li>1st step: (LB=6.5 -&gt; 3.0)<br> 1) ensemble: weighted averaged the predicted value of some xy-models. The same applies to delta-model.<br> 2) tune xy for leakage considering device-id<br> 3) cost-minimization: use delta calculated by delta-model instead of github function.<br> 4) snap-to-grid<br> 5) repeat 2)-4) 20times while adjusting the threshold of snap-to-grid. Move to the grid little by little. threshold: 2 (1-5 times) &gt; 3 (6-10 times)&gt; 4 (11-20 times)<br> 6) execute 2) again</li> <li>2nd step: (LB=3.0 -&gt; 2.75)<br> 1) get output(xy) of 1st step about some patterns(ensemble weight etc.)<br> 2) execute 1st step again with 1)</li> </ul> <h1>Not work</h1> <ul> <li>use data of other site(over 200). 
I expected it to help the delta-model, but it didn't work.</li> <li>train one model that predicts both xy and delta (using RNN and transformer)</li> <li>learn multiple waypoints of a path together</li> </ul> <p>We didn't build a floor model. Our team's floor loss was large on the private dataset, which was a big mistake, so our private score is worse than our public score.<br> Although we had many ideas for this competition, most of them were ineffective and it was a very difficult competition. I almost gave up along the way, but the score rose by more than 1m after I came up with the delta-model a week before the end. I'm glad I didn't give up.</p> <p>Thank you to my teammate. <br> I'd like to get a gold medal next time.</p>
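<p>As an illustration of the 1d-cnn delta-model listed in section 2-(2) above, here is a rough Keras sketch. The filter counts and kernel sizes follow the write-up, but the padding, activations, sequence length and optimizer are assumptions (the write-up's "padding=2" reads PyTorch-style, so "same" padding is used here), so treat this as a sketch rather than the team's exact model.</p> <pre><code>from tensorflow.keras import layers, models

def build_delta_cnn(seq_len=128, n_features=9):
    # input: sensor data sampled every 100 ms between two wifi groups,
    # padded to a fixed length (dataset-(d) in the write-up)
    inp = layers.Input(shape=(seq_len, n_features))
    x = layers.Conv1D(32, 5, padding="same", activation="relu")(inp)
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)
    x = layers.Conv1D(128, 3, padding="same", activation="relu")(x)
    x = layers.Conv1D(256, 3, padding="same", activation="relu")(x)
    x = layers.Conv1D(512, 2, padding="same", activation="relu")(x)
    x = layers.GlobalMaxPool1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(2)(x)  # predicted (delta_x, delta_y) between two waypoints
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

build_delta_cnn().summary()
</code></pre>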
Indoor Location & Navigation
1st Place Solution - Track me if you can
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Update: We have made all our code available in a <a href="https://github.com/ttvand/Indoor-Location-Navigation-Public" target="_blank">public GitHub repo</a>. Enjoy!</p> <p>A stack of 1D convolution can be thought of as a MLP which is applied to the overlapping time windows and where all weights are shared along the time dimension. <a href="https://www.quora.com/What-is-the-difference-between-a-convolutional-neural-network-and-a-multilayer-perceptron" target="_blank">Reference</a></p> <p>We would indeed unroll the GRU 500 steps in time in your example.</p> <p>We are excited to share our winning solution. It is extremely unusual to win on Kaggle with such a large margin without relying on obscure leaks. Our final solution was inspired by the tremendous sharing on the forum (especially <a href="https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing" target="_blank">snap to grid</a> and <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">cost minimzation</a> were key insights). We combined the public ideas with unique modeling and optimization insights. The dataset in this challenge was very rich and allowed for impressive progress by many of the top teams until the last day, which made this my favorite competition so far.</p> <p>We would also like to express our admiration for the team of <a href="https://www.kaggle.com/mamasinkgs" target="_blank">@mamasinkgs</a>. Your progress was very impressive throughout the challenge and your generous sharing on the forum inspired many of us to keep going. Your <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/235328#1288198" target="_blank">prophecy</a> ("To be honest, I'm afraid Tom &amp; dott team, who scores 6.2, is hiding a score now and show 0.x on the last day.") sounded like an untenable dream at the time, but after we got all the pieces together, we started to believe in it ourselves and we are proud that we actually lived up to your expectations.</p> <h2>Summary of our approach</h2> <p>Like all top teams, we realized that this is essentially an optimization problem where you need to incorporate the relevant data modalities (predominantly WiFi + sensor data) at the trajectory level to achieve the best possible score. The key challenge was how to combine the predictions of the relevant data modalities with the discreteness of the prediction problem (about 85% of test predictions occur on X-Y locations seen in training).</p> <p>Most top teams took the route of iteratively interleaving continuous optimization with discretization by snapping to the grid. We went for all-out discrete optimization instead, where we only considered the training waypoints, as well as the procedurally generated likely additional waypoints (see “Waypoint generation”) for each prediction. 
Discretizing the optimization allowed us to implement a customized Beam Search procedure which combined the penalties of all relevant predictions and allowed us to efficiently find the best match at the trajectory level.</p> <p>We maintained a fixed holdout set (instead of cross validation), which was achieved by randomly selecting from the longer training trajectories. Our validation score was very well correlated with the private leaderboard score.</p> <p>In what follows, we intend to focus on the insights that were not shared before on the forum.</p> <h2>Base models</h2> <h3>WiFi data</h3> <p>We considered 3 modeling approaches for using WiFi data: NN, LightGBM and K-Nearest Neighbors. But only LightGBM and K-Nearest Neighbours were used in the final pipeline, having the highest accuracy. For both of them we assume that the floor is given.</p> <p>The LightGBM models predict X and Y coordinates at the time of a WiFi observation. There are 100+ models, one per each site * level. A particular boost in performance was observed by taking the maximum signal strength in a window of +- 8s each observation. Additionally magnetic and bluetooth data was used, but didn’t give a significant boost, resulting in a distance error of 7.25.</p> <p>For the kNN model, all we had to do was to find a proper distance function between two WiFi observations, and compare WiFi observations at inference time with all WiFi observations seen during training. We settled on a distance function where you combine the average rssid strength difference for the shared devices with the fraction of shared devices. Less weight is given to device observations that are delayed (time difference between WiFi t1 and t2). A pointwise weighted kNN prediction got us to a distance error of 5.8</p> <p>The main benefit of kNN over LightGBM is that it allows for a highly nonlinear penalty function which can incorporate more information about the layout of the floor without risking overfitting (we only have a handful of parameters in the distance function which are shared between all sites and floors).</p> <h3>Sensor data</h3> <p>Sensor data was clearly the key information to reconstruct the trajectories in this competition. When using sensor data, we focused on predicting segments - pieces of trajectories from one waypoint to the next one. We ended up using 3 different sensor data models with different targets.</p> <p>The first sensor model is targeted at predicting the relative movement of a segment, i.e. changes in absolute coordinates between two consequent waypoints. The model uses 9 inputs (acce, gyro and ahrs of all 3 axes) and has a relatively simple structure: Conv1d + GRU + Conv1d + head. The presence of GRU is unnecessary, similar results can be achieved when GRU is replaced with Conv1d, kernels sizes are set wider and dilations are added. The model has MAE = 1.04 on validation (averaged over the X and Y dimensions).</p> <p>After analyzing the errors of the model, we saw that the predicted angle of the movement direction is far from perfect, though never off by more than pi/2 radians. That can indicate that the device was not properly calibrated.</p> <p><a href="https://www.kaggle.com/c/indoor-location-navigation/leaderboard" target="_blank"><img src="https://i.ibb.co/2hNvxfP/sensor-error.png"></a></p> <p>Even when the direction was not properly calibrated, the sensor data still can describe the movement well, just in slightly rotated coordinate axes. 
To take advantage of that, we fitted the second sensor data model, where the target relative movement was expressed not in the original coordinates, but relative to the previous segment, as illustrated below:</p> <p><a href="https://www.kaggle.com/c/indoor-location-navigation/leaderboard" target="_blank"><img src="https://i.ibb.co/StB7bqL/rotation-invariant.png"></a></p> <p>The architecture of the NN remained the same, but this time we pass 2 consecutive segments (AB and BC) to the model. The NN is applied to both segments, after that the second segment (BC) is rotated so that the prediction of the first segment (AB) is along the first axis. The rotated vector BC is the output of the model.</p> <p>This calibration issue also leads to a bias of the predicted distance between the visited waypoints - the relative movement model underestimates it (6.65 vs 6.91 on average in validation). The third model, fitted to predict the distance between the consecutive waypoints has a much smaller bias and, as the result, is better in predicting the traveled distance between waypoints (MAE of 0.67 vs 0.73). The RMSE of the distance model was 0.92. The architecture of the model remained the same, with only the head of the network changed accordingly.</p> <h2>Data leaks</h2> <p>We are grateful to <a href="https://www.kaggle.com/chris62" target="_blank">@chris62</a> for <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/234543" target="_blank">sharing the data leaks publicly</a>. At the time of the post, we were only aware of the time leak.</p> <p>The device leak was very useful to identify periods in time where the sensor data was unreliable, since the errors are clearly time dependent. This enabled us to make the optimization less reliant on sensor predictions when we predict them to be noisy.</p> <p><a href="https://www.kaggle.com/c/indoor-location-navigation/leaderboard" target="_blank"><img src="https://i.ibb.co/cC6xkc9/sensor-angle-error.png"></a></p> <p>We realized that by combining the time and device leak, one could see that apparently different device ids are likely coming from the same phone. It turns out that the time between sensor observations (after filtering outliers) correlates very well with the device ids, which could enable you to group different device ids. In the end, we never used these fused device ids, but think that there could be more to be learned from our observation.</p> <p><a href="https://www.kaggle.com/c/indoor-location-navigation/leaderboard" target="_blank"><img src="https://i.ibb.co/0Jbh69q/device-leak.png"></a></p> <p>Our test floor predictions combined the time and device leak with WiFi distances. Sadly, we got 102 private floor predictions wrong (about 0.18 total score penalty). Trajectory ”862a4ac32755d252c6948424”, for example, should apparently be F5 instead of F4.</p> <p><a href="https://www.kaggle.com/c/indoor-location-navigation/leaderboard" target="_blank"><img src="https://i.ibb.co/s23wrc5/floor-prediction.png"></a></p> <h2>Leaderboard probing</h2> <p>We quickly realized that probing was a viable strategy, since the test data set (626) was split by entire trajectories. After a couple of days of probing, we identified the 100 public test trajectories (1527 predictions in total). This was mostly useful for understanding the difference in distribution of the test data between the public and private set. The private set is clearly much harder, since it contains some notoriously hard trajectories (e.g. 
“e83a1c294b5d138339149fcb” and “7d401b038d6fdb08f4f8197d”). It also enabled us to speed up our submission pipeline, since we would only have to generate ~15% of the test predictions.</p> <h2>Waypoint generation</h2> <p>There is a lot more structure to the waypoints than merely filling the empty space in corridors. We built our solution to attempt to fill plausible waypoint locations without adding too much noise, especially in sensitive areas close to train waypoints.</p> <p>Our approach consists roughly of: </p> <ul> <li>Get a clean map of the corridors from the floor map GeoJSON</li> <li>Identify waypoints along walls based on the distance to the nearest wall.</li> <li>Gather statistics about nearest neighbor euclidean distance, distance to the wall and distance between points along the wall (project wall points to the wall line)</li> <li>Fill in open spots along the wall line using the distance stats and linear referencing. Corners get special consideration.</li> <li>Fill inner points aligned with known and generated wall points.</li> <li>The room layout may give overlapping generated waypoints. Resolve these to the centroid of small local clusters based on global distance stats or medium local cluster stats.</li> <li>Apply a hierarchy of filtering too close points: Known waypoints &gt; corner points &gt; wall points &gt; inner points</li> </ul> <p>Because it is hard to trust CV in this challenge, the parameters were tuned by hand and we mainly used our eyes to assess progress.</p> <p>Example of our generated waypoints:</p> <p><a href="https://www.kaggle.com/c/indoor-location-navigation/leaderboard" target="_blank"><img src="https://i.ibb.co/8b4NbnM/additional-grid-example.png"></a></p> <h2>Optimization</h2> <p>The discrete optimization is at the heart of our solution. It was obvious that search for the best trajectory would have to scale linearly as a function of the trajectory length, so exhaustive search was out of the question. Beam Search to the rescue! The optimization works as follows:</p> <p>A) Start by considering up to 2000 most likely initial waypoints, and compute a penalty for the starting point (only WiFi in our final submission)<br> B) For the trajectory of length L so far, consider 100 likely candidates for the next waypoint<br> C) Compute a penalty for the most recent segment, for the 2000*100 considered options<br> D) Order the 2000*100 trajectory candidates of length L+1 by their total penalty<br> E) Drop the candidates that don’t belong to the top 2000 and go back to B until the trajectory is completed</p> <p>Our prediction is then simply the trajectory with the lowest overall penalty. During validation, we always keep the best trajectory around, so we can understand where the optimization drops the ball. After weeks of tuning the optimization, we are now confident that we will almost always select the trajectory with the lowest optimization error.<br> Step B discards next step waypoints that are not in the half plane of the direction of the sensor prediction. We also prefer next step waypoints that are at the approximate predicted direction and distance from the previous waypoint.</p> <p>In our final submissions, we consider 7 types of penalties:</p> <ol> <li><strong>WiFi</strong>: prefer waypoints where the inference WiFi signal is close to the 20 nearest train neighbors of that waypoint location.<br> We also generate a small boost for the cosine similarity between the vector of the segment, and the vector of the WiFi best guess. 
This can be interpreted as: does the WiFi movement agree with the proposed segment direction.</li> <li><strong>Relative movement angle</strong>: Based on the angle between the last two segments.</li> <li><strong>Relative movement coordinate</strong>: Based on the independent X and Y differences between the predicted relative movement and the proposed segment. </li> <li><strong>Pairwise integrated relative movement coordinate</strong>. We also penalize predictions that are not consistent at the trajectory level. Every (L+1)th waypoint is assessed for compatibility with waypoints 1 through L by integrating the predicted relative movements, and comparing that integrated prediction with the vector from each waypoint to waypoint L+1.</li> <li><strong>Distance based</strong>: Linearly increasing penalty as you move more or less far between waypoints.</li> <li><strong>Time leak</strong>: Apply a fixed penalty for not agreeing with the edge points of neighboring trajectories, when those trajectories seem to be at the same location and are close in time. Additionally, apply a linearly increasing penalty for moving further away from reliable edge points.</li> <li><strong>Off grid penalty</strong>: In order to bias the optimization towards known grid points, we apply a penalty which increases as a function of the nearest known grid point. We also apply an additional penalty for selecting additional grid points in a region of dense grid points.</li> </ol> <p>On top of that, we disallow selecting the same waypoint in two subsequent steps. We also adjust the weight of the sensor penalties based on the uncertainty of those predictions (achieved through the time and device leak).</p> <p>The hyperparameters were tuned with Bayesian optimization. We spent a lot of time looking at our prime misclassifications and estimate that more than half of the error we make is due to inconsistencies in the data. </p> <p>Below you see a snapshot of the outcome of the optimization for a test trajectory, together with the most relevant predictions that the optimization builds on.</p> <p><a href="https://www.kaggle.com/c/indoor-location-navigation/leaderboard" target="_blank"><img src="https://i.ibb.co/Fbms8nT/optimization.png"></a></p> <h2>Ensembling</h2> <p>During the last two days, we had a hard time agreeing on what additional waypoint grid to choose. If you add too many additional waypoints, the optimization can sometimes pick shifted trajectories. However, if you don’t add enough waypoints, the predictions can be drastically wrong, because of the discrete nature of the optimization.</p> <p>In the end, we realized that we didn’t have to choose a single grid! Our final submissions generate predictions with 3 different grids:</p> <ol> <li>Only additional wall waypoints</li> <li>Additional wall waypoints + sparse inner waypoints</li> <li>Dense additional wall waypoints + dense inner waypoints</li> </ol> <p>We select the prediction with the lowest optimization penalty, where our submissions vary in the priority corrections. Ensembling enabled us to mostly stick with the simple grid, except when it resulted in a significant drop of the optimization penalty. Both final submissions boosted our score by about 10cm, relative to only using the sparse grid. </p> <h2>Final thoughts</h2> <p>We are happy to share all of our code in <a href="https://github.com/ttvand/Indoor-Location-Navigation-Public" target="_blank">this public repository</a>. 
The repository contains all our competition code, both the used and unused bits of our final solution. We also added a main script which should generate our approximate final submissions.</p> <p>About 85% of our optimisation outcomes are waypoints from the train grid. We make the leap here that the optimization is unbiased. This is likely since we observed no bias in validation.</p> <p>Generally speaking, there was no clear separation between who worked on what parts of the project. We very much functioned as a well-oiled team. I have had the pleasure to work with both Dmitry (in Kaggle) and Are (at our Norwegian startup) and we immediately hit the right collaboration dynamic. It can not be stressed enough that everyone worked very hard to be able to do as well as we did. We all worked on data exploration, brainstorming, following up on public discussions and quality checks of our predictions. The list below gives a rough approximation of the major contributors to the components discussed in our writeup:</p> <ul> <li><strong>WiFi models</strong>: Dmitry and Are tried NN modeling. Dmitry built the final LightGBM WiFi model and Tom worked on the kNN WiFi model.</li> <li><strong>Sensor models</strong>: Tom built some low-performant initial sensor models and Dmitry took charge of sensor modeling after that and made massive improvement over the initial models.</li> <li><strong>Data leaks</strong>: Mostly a collaboration between Dmitry and Tom</li> <li><strong>Leaderboard probing</strong>: Tom</li> <li><strong>Waypoint generation</strong>: Are took charge of this component and everything else related to the floor map. This was definitely one of the hardest and most important tasks in this list.</li> <li><strong>Optimization and ensembling</strong>: Tom</li> </ul>
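<p>Because the Beam Search over discrete waypoint candidates is the heart of this solution but no code appears in the write-up itself, here is a minimal generic sketch of the idea. The candidate generator, penalty function and the toy 1-D example are placeholders, not the team's implementation; in the real pipeline the segment penalty combines the WiFi, relative-movement, distance, time-leak and off-grid terms listed above.</p> <pre><code>def beam_search(start_candidates, next_candidates_fn, segment_penalty_fn,
                n_steps, beam_size=2000, n_next=100):
    """Keep the beam_size partial trajectories with the lowest total penalty.

    start_candidates    : list of (waypoint, start_penalty) pairs
    next_candidates_fn  : waypoint -&gt; list of candidate next waypoints
    segment_penalty_fn  : (trajectory_so_far, next_waypoint) -&gt; penalty
    """
    beam = sorted(((p, [w]) for w, p in start_candidates),
                  key=lambda t: t[0])[:beam_size]
    for _ in range(n_steps):
        expanded = []
        for total, traj in beam:
            for nxt in next_candidates_fn(traj[-1])[:n_next]:
                if nxt == traj[-1]:
                    continue  # disallow selecting the same waypoint twice in a row
                expanded.append((total + segment_penalty_fn(traj, nxt), traj + [nxt]))
        beam = sorted(expanded, key=lambda t: t[0])[:beam_size]
    return beam[0]  # (total_penalty, trajectory) with the lowest penalty


# toy usage: 1-D waypoints, penalty = distance to a noisy "wifi" guess per step
wifi_guess = [0.0, 1.2, 2.1, 2.9]
grid = [0.0, 1.0, 2.0, 3.0, 4.0]
best = beam_search(
    start_candidates=[(g, abs(g - wifi_guess[0])) for g in grid],
    next_candidates_fn=lambda w: grid,
    segment_penalty_fn=lambda traj, nxt: abs(nxt - wifi_guess[len(traj)]),
    n_steps=len(wifi_guess) - 1, beam_size=10, n_next=5)
print(best)
</code></pre>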
Indoor Location & Navigation
2nd Place Solution (Overall)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi housuke, <br> for other tasks (e.g. image data), I think adversarial validation or careful extraction of the statistics from an image may work. I think it's also ok to try simple random split, but the optimal parameters of the postprocessing in validation set will be different from that in the test set.</p> <p>I uploaded discrete optimization code that runs within about 15 mins. This is a much simpler version, but achieves 1.49 public/ 2.24 private. <a href="https://www.kaggle.com/mamasinkgs/discrete-optimization-in-2nd-place-solution" target="_blank">https://www.kaggle.com/mamasinkgs/discrete-optimization-in-2nd-place-solution</a></p> <p>Thanks Kouki, your great notebook helped me a lot in the beginning of the competition. <br> I also hope I could see you again on kaggle!</p> <p>Thank you my briliant teammates ( <a href="https://www.kaggle.com/ymatioun" target="_blank">@ymatioun</a>, <a href="https://www.kaggle.com/vaghefi" target="_blank">@vaghefi</a>, <a href="https://www.kaggle.com/demonen" target="_blank">@demonen</a>, <a href="https://www.kaggle.com/rsakata" target="_blank">@rsakata</a> ), all teams who competed with us, and all people who participated in this competition. Congrats <a href="https://www.kaggle.com/tvdwiele" target="_blank">@tvdwiele</a> <a href="https://www.kaggle.com/areehdot" target="_blank">@areehdot</a> <a href="https://www.kaggle.com/dott1718" target="_blank">@dott1718</a>, who won this competition with an incredible performance!</p> <p>I'm happy because I finally became Kaggle Grandmaster with 5 gold medals, and <a href="https://www.kaggle.com/ymatioun" target="_blank">@ymatioun</a> became kaggle Master in this competition. <br> It's my 5th competition and it was fun to compete with my past teammate ( <a href="https://www.kaggle.com/vicensgaitan" target="_blank">@vicensgaitan</a> ) and people who I competed with past competitions ( <a href="https://www.kaggle.com/takoihiraokazu" target="_blank">@takoihiraokazu</a> )!</p> <p>Here, I will explain the overall approach of our team's solution. For more details of (especially) absolute position prediction models and delta prediction models, please check our teammate's solutions. </p> <ul> <li>Youri's part: <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/239880" target="_blank">https://www.kaggle.com/c/indoor-location-navigation/discussion/239880</a></li> <li>Reza's part: <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/240197" target="_blank">https://www.kaggle.com/c/indoor-location-navigation/discussion/240197</a></li> <li>Christoffer's part: <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/240025" target="_blank">https://www.kaggle.com/c/indoor-location-navigation/discussion/240025</a></li> <li>Jack's part: <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/240141" target="_blank">https://www.kaggle.com/c/indoor-location-navigation/discussion/240141</a></li> </ul> <p>Code and the visualization of our prediction are here. 
</p> <ul> <li>Discrete optimization code (simple version): <a href="https://www.kaggle.com/mamasinkgs/discrete-optimization-in-2nd-place-solution" target="_blank">https://www.kaggle.com/mamasinkgs/discrete-optimization-in-2nd-place-solution</a></li> <li>The valid/test predictions of our team (public 1.41, private 2.18) : <a href="https://drive.google.com/drive/folders/1lxC0DYc9K84mKaNZoFdRVk9n8YCIrQbf" target="_blank">https://drive.google.com/drive/folders/1lxC0DYc9K84mKaNZoFdRVk9n8YCIrQbf</a></li> </ul> <p>For our solution, <strong>our team does not use any leakage (e.g. start/end point leakage, raw timestamp leakage, device leakage)</strong>. We tried to use these leakages, but none of them improved the score and sometimes degraded the score, because our floor model and our WIFI model are already very good without leakage. </p> <h1>Validation Set Construction</h1> <p>By EDA, I found the samples of the test set meet these conditions: </p> <ol> <li>No non-standard floor is used, as stated in the evaluation section.</li> <li>Brand is always OPPO.</li> <li>max(timestamp) - min(timestamp) is larger than 60310.</li> <li>TYPE_ACCELEROMETER_acc is not all nan.</li> <li>TYPE_ACCELEROMETER_UNCALIBRATED_acc is not all nan.</li> <li>TYPE_GYROSCOPE_UNCALIBRATED_acc is not all nan.</li> <li>TYPE_WIFI_ssid is not all nan.</li> </ol> <p>I made these 7 masks and mulitiply them and get one mask. I chose the samples from training set using this mask. Then, from those samples, I chose 548 paths and use these 548 paths as validation set. LB and CV was always correlated using this validation set. I guess the host wants to use the good and clean paths for the evaluation of the performance, which is reasonable. </p> <h1>Absolute/Delta Prediction Models</h1> <p>We have 5 absolute position prediction models, and 3 delta prediction models. <br> These are the lists of our absolute position prediction models: </p> <ul> <li>Reza's WKNN model (euclid distance)</li> <li>Reza's WKNN model (correlation distance)</li> <li>Jack's floor-level multiclass classification model</li> <li>Youri's Fingerprinting model </li> <li>Mamas's RNN model </li> </ul> <p>These are the lists of our delta prediction models: </p> <ul> <li>Youri's PDR model</li> <li>Christoffer's LSTM</li> <li>Mamas's MLP</li> </ul> <p>For our ensembled delta prediction, we simply tried weighted average of these 3 models:<br> <code>ensemble_delta_prediction = Mamas_MLP * 0.2 + Youri_PDR * 0.33 + Christoffer_LSTM * 0.47</code><br> Our delta prediction is not so good, MAE = 1.06 in CV.</p> <p>For our ensembled absolute position prediction, we tried weighted average of these 5 models after applying <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">akio's cost minimization</a> using the ensembled delta prediction. When applying akio's cost minimization to Youri's Fingerprinting model and Reza's WKNN models, we consider the uncertainty of the prediction in akio's cost minimization. To be more specific, I modified the code of akio's cost minimization as follows. 
</p> <pre><code>def get_xy_star_weight(delta_xy_hat, xy_hat, a, b, w): N = xy_hat.shape[0] alpha = w * a * np.ones(N) beta = b * np.ones_like(delta_xy_hat[:, 0]) A = scipy.sparse.spdiags(alpha, [0], N, N) B = scipy.sparse.spdiags( beta, [0], N-1, N-1) D = scipy.sparse.spdiags(np.stack([-np.ones(N), np.ones(N)]), [0, 1], N-1, N) Q = A + (D.T @ B @ D) c = (A @ xy_hat) + (D.T @ (B @ delta_xy_hat)) A = scipy.sparse.block_diag((Q, Q)) b = np.r_[c[:,0], c[:,1]] xy_star = scipy.optimize.nnls(A.toarray(), b)[0].reshape(2,-1).T return xy_star </code></pre> <p>where w is given by</p> <pre><code>w = (Reza or Youri's prediction's uncertainty) ** (-2) </code></pre> <p>Finally, the ensembled prediction is given by </p> <pre><code>ensembled_absolute_prediction = akio_costmin(mamas_RNN) * 0.04 + akio_weighted_costmin(youri_fingerprint) * 0.17 + akio_weighted_costmin(reza_WKNN_euclid) * 0.11 + akio_costmin(Jack_multiclass) * 0.34 + akio_weighted_costmin(reza_WKNN_corr) * 0.34 </code></pre> <p>The ensembled absolute prediction scored 4.206 in CV and 3.171 in LB.</p> <h1>Iterative snap to corridor &amp; akio's cost minimization</h1> <p>After the weighted averaging, We tried to move the prediction to the corridor. Because we didn't use additional generated waypoints unlike the 1st team, <strong>the predictions before the discrete optimization must be in the corridor</strong>, because some of these predictions will be used as the final prediction when no grid is selected in the discrete optimization process.<br> First, we fixed the incorrect map according to <a href="https://www.kaggle.com/chris62" target="_blank">@chris62</a>'s <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/230558" target="_blank">great thread</a>. Then, Christoffer automatically eliminated the stores on which the grids are present. This is the example of the problematic cases: The left image shows the map before the Christoffer's fixing map and the right image shows the map after the Christoffer's fixing map. <br> <img src="https://user-images.githubusercontent.com/23468935/118794357-775b9400-b8d4-11eb-9a50-90d4b1df4fbb.PNG"><img src="https://user-images.githubusercontent.com/23468935/118794352-762a6700-b8d4-11eb-8f52-73460a67ae21.PNG"><br> This is the only case we didn't eliminate the store, because we thought it can give a bad effect in snap-to-corridor. The effect should be smaller than 0.005, though. <br> <img src="https://user-images.githubusercontent.com/23468935/118797511-a1628580-b8d7-11eb-823b-50961db36ef5.PNG"></p> <p>After fixing the maps, We applied snap to corridor and the akio's cost minimization iteratively (100 iterations). After the iterative snap to corridor &amp; akio's cost minimization, CV score is 3.882 and LB score is 2.740. </p> <h1>Discrete Optimization</h1> <p>Like the 1st place solution, Discrete optimization using proper penalties is the true key of this competition. We used these 6 penalties and corresponding 6 parameters (alpha, beta_1, beta_2, gamma, kappa, tau). However, kappa and tau is not so important and they didn't improve the score a lot. Gamma is really important and this is the reason our method worked well without additional generated grids. 
</p> <ol> <li>alpha * (euclid distance between grids and the absolute prediction)</li> <li>beta_1 * abs(<strong>the angle of predicted delta</strong> - <strong>the angle of the delta between two grids</strong>.)</li> <li>beta_2 * abs(<strong>l2-norm of the predicted delta</strong> - <strong>l2-norm of the delta between two grids</strong>)</li> <li>gamma * (1 when no grid is chosen, 0 otherwise)</li> <li>kappa * abs(<strong>actual distance estimated by dijkstra algorithm</strong> - <strong>manhattan distance between 2 grids</strong>)</li> <li>tau * (1 when the delta between two grids are zero, 0 otherwise)</li> </ol> <p>The reason we estimated the actual distance using dijkstra algorithm between 2 grids are, the predictions are sometimes strange like this. <br> <img src="https://user-images.githubusercontent.com/23468935/118803259-1d5fcc00-b8de-11eb-9745-1e17358f1ae5.png"></p> <p>After applying a cost that considers actual distance, the prediction became like this. <br> (I'm sorry these predictions are not the one from our best submissions, but the very old predictions. These are just examples)<br> <img src="https://user-images.githubusercontent.com/23468935/118801914-7dee0980-b8dc-11eb-817f-a667dfd594b2.png"></p> <p>This is the example of the actual distance calculated by dijkstra algorithm (by Jack)<br> <img src="https://user-images.githubusercontent.com/23468935/118803499-63b52b00-b8de-11eb-8cee-5e6bc505913c.png"></p> <p>In the optimizaiton process, I used a simple greedy optimization algorithm, while the 1st team used the beam search, which is I guess one of the reason we lost. I have no experience in heuristics contest like Santa and couldn't do better in the optimization process. To be more specific, I split the paths by chunksize = 7, and I searched the 8 closest grids for each prediction. Then, I calculated the cost for all the possible patterns ((8 + 1) ** 7 = 4782969) and chose the grids (or sometimes original predictions) that minimize the cost. For the calculation of the backward delta of the first sample of a specific chunk, mamas's discrete optimization uses the last sample of the previous chunk.<br> After mamas's discrete optimization, CV score is 2.523 and LB score is 1.392, which is our final solution. </p> <h1>Comments</h1> <p>I told I would become the winner in the next competition in the <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/210113" target="_blank">2nd place solution of Riiid competition</a>, which was held 4 months ago, but I couldn't keep my promise. However, I have no regret because I did my best with really great teammates with the clean and completely no leakage solution and I really enjoyed this competition. Moreover, I learned a lot from <a href="https://www.kaggle.com/tvdwiele" target="_blank">@tvdwiele</a>'s team's great solution, which I think is one of the greatest solution I saw in the kaggle. <br> I agree with Reza that hiding LB score should not become a habit in Kaggle competitions, because if we all hide a score the competition will become not enjoyable for participants and probably will give a not good effect on the kaggle community. However, I sincerely respect the 1st team, because I'm sure we couldn't have won if they didn't hide a score, and their great solution is clearly a tremendous contribution to the community!<br> Anyway, I really enjoyed this competition. I will take a rest for a while, and will surely come back to Kaggle again !!! </p> <h1>Thanks all, see you again!</h1>
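<p>To illustrate the chunked exhaustive search described above, here is a deliberately tiny toy (1-D positions, a chunk of three points, and only the alpha and gamma terms of the cost). The real cost also couples consecutive points through the delta and dijkstra terms, which is why all combinations within a chunk are enumerated; this sketch keeps only the pointwise part, so it shows the enumeration idea rather than the team's code.</p> <pre><code>import itertools
import numpy as np

def optimize_chunk(pred, candidates_per_point, alpha=1.0, gamma=2.0):
    """For each predicted point, choose one of its nearby grid candidates or
    keep the prediction itself (encoded as None), minimising the total of
    alpha * |grid - pred|, plus gamma whenever no grid is chosen."""
    options = [cands + [None] for cands in candidates_per_point]
    best_cost, best_choice = np.inf, None
    for choice in itertools.product(*options):  # (n_cands + 1) ** chunk_size combos
        cost = sum(gamma if c is None else alpha * abs(c - p)
                   for p, c in zip(pred, choice))
        if cost &lt; best_cost:
            best_cost, best_choice = cost, choice
    # fall back to the continuous prediction where no grid point was selected
    return [p if c is None else c for p, c in zip(pred, best_choice)]

pred = [0.9, 2.2, 7.5]                        # post-processed predictions
cands = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]  # nearby training grid points
print(optimize_chunk(pred, cands))            # [1.0, 2.0, 7.5]
</code></pre>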
Indoor Location & Navigation
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, I'm glad to join this club :) </p> <p>Thanks! Yes the amount of time spent here was huge :) </p> <p>Thanks a lot! Yes, Axa competition was my first Kaggle competition! It was great. It was track prediction but I don't think I used many of Axa competition ideas in this Indoor competition :) <br> Sure I will share the distance prediction model, I think the features are quite good but the model could be much better!</p> <p>Thanks to the hosts for the great competition, and thanks to everyone who shared insights on the forum!<br> I'm so pleased to become a GM :-) It was worth spending a huge amount of time on this competition!</p> <p>I tried many many things, kept some of them in the process, gave up with others, but it's quite difficult to say which were good or not.</p> <p>First I compute the theoretical path (path shape : deltaX/Y from a waypoint to next waypoint) :</p> <ul> <li>my distance between 2 waypoints was obtained by a glm model, fitted on more than 2000 features, calculated with the accelerometer + gyro data. My rmse on distance prediction between 2 waypoints is around 1.1m. I think it was a competition inside the competition, to get a good distance model!</li> <li>the direction is obtained by using only the magnetic field. I tried to use gyro info, but it didn't improve my prediction. I also tried some filters with no success. The direction is often quite bad, like 30 degrees of deviation, but I didn't find a way to improve this.</li> </ul> <p>Then I have the WIFI model : <br> I mainly used a closest neighbor model. I applied some filters and thresholds on both train and test data (approx. 20 hyperparameters). I used the difference between timestamp and last seen timestamp to give a weight for each WIFI source. Also I have weights for each WIFI source as hyperparameters. This gave me a lot of hyperparameters.<br> I noticed that when I tune the hyperparameters of my WIFI model, the result could change a lot. One model will work for some paths and not for some others. So I optimized the hyperparameters for each path, until the path estimated by WIFI, had the same shape as the theoretical path. I optimized randomly with simulated annealing. This process was quite time consuming… I let it run during several weeks!<br> I also added WIFI information by computing the whole path on the training data and calculating WIFI information at a shorter timestamp, which gave me other models to blend, but was less good than the model with WIFI information located at each waypoint. <br> I then used these models, along with all public notebooks WIFI models, in a quick post processing : for each path, I give a weight to each WIFI model, and optimize these weights in order to have the best fit with the shape of the theoretical path.<br> At the end of this step, my score is around 4m.</p> <p>Improving the theoretical path :<br> I calculated several improved paths : </p> <ul> <li>Starting from the theoretical path, I change the overall path so it starts and finishes where my WIFI model starts and finishes.</li> <li>I try to find similar shapes on a part of the theoretical path, and the training waypoints. 
When 3 or more consecutive waypoints match exactly (or nearly) with existing waypoints, after an X/Y offset, I consider that they are correct. Then I compute the remaining of the path (forward and backward) using the theoretical path. It's a "local" snap to grid, as I just move a few waypoints to the grid.</li> <li>then I did forward / backward iterations : starting from the WIFI position, I move according the the deltaXY and angle computed, and if there is a wall, I find the most logical direction to avoid the wall. This works well inside a corridor, but it's difficult to make it enter a narrow corridor. To avoid getting in the wrong corridor, I average the position at each timestep with the WIFI position. After 2 forward / backward iterations, most of the time my model gets in the right corridor.</li> </ul> <p>Finally there is my snap to grid method :<br> I used an automated waypoint generation.<br> The snap to grid is obtained by calculating a probability for all training waypoints, at each time step. For the time step i, I calculate the probability to change from any training waypoint, to any other training waypoints. That's a bit similar to Markov processes, that I discovered in a previous competition (ion competition).<br> The probability is a product of several probabilities : </p> <ul> <li>probability 0 if the path goes across a wall or a shop, </li> <li>probability linked to the distance to the WIFI model, </li> <li>probability linked to the distance to the any of the improved paths described above, </li> <li>probability linked to the distance from previous timestep (compared to the predicted distance used in the theoretical path)</li> <li>probability linked to the angle from the previous timestep (compared to the magnetic field information)</li> <li>penalty if the path from previous timestep, was already existing in the training set</li> <li>probability linked to the time leakage for first and last step</li> </ul> <p>Multiplying the probability of all waypoints at timestep i-1, by the matrix of probability described above (size n*n, where n is the number of training waypoint), gives a new probability array at timestep i.<br> I do this step forward, backward (giving a new starting point), forward again and backward again.<br> My final position is the weighed average of all waypoints (I take the average of the last 2 steps).</p> <p>This final solution is used as input for a next iteration, by feeding the WIFI model with the positions obtained. <br> I also completed training data with test data, by assuming my test data XY prediction was exact. </p> <p>It was my first solo competition, it removes part of the fun I think… But I needed my solo gold metal and thought it was the good time!</p> <p>Thanks to you too for all the sharing and nice posts during the competition! </p> <p>Thank you Rob! During the competition, I was thinking to Ion competition and how you were happy to get your GM at the end of that competition. This motivated me to work hard and get this gold!</p>
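<p>The snap-to-grid method described above, where a probability over all training waypoints is propagated from one timestep to the next through an n-by-n transition matrix, is essentially a forward pass over a Markov chain. Below is a tiny self-contained sketch; the toy transition matrix is made up, whereas in the real solution it is the product of the wall, WiFi, distance, angle and leakage probabilities, and a backward pass reuses the same idea in the reverse direction.</p> <pre><code>import numpy as np

def forward_pass(init_prob, transition_mats):
    """init_prob: shape (n,) probability over training waypoints at step 0.
    transition_mats: one (n, n) matrix per step, where T[i, j] is the
    (unnormalised) probability of moving from waypoint i to waypoint j."""
    probs = [init_prob / init_prob.sum()]
    for T in transition_mats:
        p = probs[-1] @ T          # propagate the probability one timestep
        probs.append(p / p.sum())  # renormalise
    return probs

# toy example: 3 waypoints, 2 timesteps, made-up transition probabilities
waypoints = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
T = np.array([[0.1, 0.6, 0.3],
              [0.3, 0.1, 0.6],
              [0.6, 0.3, 0.1]])
probs = forward_pass(np.array([1.0, 0.0, 0.0]), [T, T])

# final position at the last timestep: probability-weighted average of waypoints
print(probs[-1] @ waypoints)
</code></pre>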
Indoor Location & Navigation
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for the validation! Nice!👍</p> <p>Thanks for the comment. I've heard similar comments from other teams, it seems that <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/239919" target="_blank">higepon's</a> team had also the similar effect with pseudo labeling like ours. We blended the predictions of two RNN models first and used them as pseudo label labels for LGBM and RNNs after performing postprocess.</p> <p>Thanks for the comment. I think the repeat training converged after about two or three times.</p> <p>First of all, I would like to thank the host for organizing such a wonderful conference. I would also like to take this opportunity to thank all the wonderful members( <a href="https://www.kaggle.com/mmotoki" target="_blank">@mmotoki</a>, <a href="https://www.kaggle.com/ryches" target="_blank">@ryches</a>, <a href="https://www.kaggle.com/takoihiraokazu" target="_blank">@takoihiraokazu</a> , <a href="https://www.kaggle.com/masatomatsui" target="_blank">@masatomatsui</a> ) have worked so hard throughout the competition. Finally, I would like to congratulate for <a href="https://www.kaggle.com/mmotoki" target="_blank">@mmotoki</a> becoming a GM with this medal. Congratulations! I'm as happy as myself!</p> <p>This is our team's solution. Have fun, everyone!</p> <h1>Quick summary</h1> <p>Our solution is relatively simliar to some of other teams in the same range as us. Our final submission is a blend of LSTM and LGBM models trained with repeated pseudo labeling and postprocesing. Here we provide a brief description of some of the ideas that worked for us and rough estimates of the gains that we got from them.</p> <p><img src="https://i.imgur.com/tAhTdUG.png" alt="slide"></p> <h1>Preprocessing</h1> <ul> <li>Linear interpolation of coordinate at wifi steps</li> <li>Timestamp using the fix from <a href="https://www.kaggle.com/jiweiliu/fix-the-timestamps-of-test-data-using-dask" target="_blank">Jiwei’s Fix the timestamps of test data using DASK</a></li> </ul> <h1>Model</h1> <h2>Floor Model</h2> <p>We used an MLP inspired by <a href="https://www.kaggle.com/nigelhenry/simple-99-accurate-floor-model" target="_blank">Nigel’s 99% Accuracy public notebook</a>. The idea seemed similar to target encoding where each bssid is mapped to the floor where its max rssi was observed. The MLP generalized this approach using the hard floor estimates (descried earlier), a soft floor estimates derived using a weighted average of bssid-floors where the weight comes from the softmax of rssi, a learned bssid embedding and a beacon weight derived from beacon measurements (e.g., rssi, last_seen). Our model changed the floor predictions on 11 of the paths predicted by the public notebook. Our model had 100% correct predictions on the public and private test sets. </p> <h2>Relative Position Model</h2> <p>We used a wavenet model for predicting the relative position between two waypoints.<br> Inputs are 12 Sensor data (accem gyro, ahrs, and magn). 
This model resulted in a mean delta positioning error of 1.68 m, and the host’s “compute_rel_positions” code results in a mean error of 2.84 m.</p> <h1>Position Model</h1> <ul> <li>LGBM - This model was based on the <a href="https://www.kaggle.com/hiro5299834/wifi-features-with-lightgbm-kfold" target="_blank">BIZEN’s public notebook</a>. The base LGBM model was not very effective, but it was particularly effective when pseudo labeling was used.</li> <li>LSTM - Our LSTM model used features from <a href="https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats" target="_blank">Kouki’s LSTM by Keras with Unified Wi-Fi Feats notebook</a> and extra features including the delta of x,y calculated by the host’s “compute_rel_positions”. The length of the input sequence was 2*n+1 (n each before and n after the target to be predicted). We used device id to connect the paths of train and test, and created the input sequence. Our final submission used an average of two slight variants, the score before the post process was public:4.6 and private:4.9. Averaging the raw predictions of our 2 LSTM models (no pseudo labeling) reduced the error by about 0.1 (public:4.5 and private:4.8)</li> </ul> <h1>Postprocessing:</h1> <p>Like most other teams, this step was really important for us. We used a lot of the ideas shared publicly with a few modifications and additional steps</p> <ul> <li>tomooinubushi’s<a href="https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage" target="_blank"> time leakage</a> (1st)</li> <li>nn cost minimization (e-toppos) - similar to the public magn cost minimization but using a blend of relative position predictions and the official code estimates</li> <li>Iterative closest point algorithm. <a href="https://en.wikipedia.org/wiki/Iterative_closest_point" target="_blank">ICP</a> is often used for robot path planning and alignment of scanned point cloud data, but we thought it could also be used for alignment between estimated paths and grids. We used the training grids and the auto generated grids proposed by e-toppo </li> <li>snap to corridor - Move estimated step positions outside corridor or building to be inside nearest corridor. We used the polygon data of the corridor, and snapped to the corridor using shapely's method “nearest_points”.</li> <li>snap to grid - Almost the same as <a href="https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing" target="_blank">the public notebook</a>, but we divided the snapping into two stages, first snapping to the training grids and then snapping the rest to the auto-generated grids.</li> <li>tomooinubushi’s time leakage (2nd)</li> <li>ganchan’s <a href="https://www.kaggle.com/iwatatakuya/use-leakage-considering-device-id-postprocess" target="_blank">Device leakage</a></li> </ul> <p>It is difficult to quantify how much each step in our postprocessing pipeline helped, but most of our gains came from the non-leak postprocessing steps. Postprocessing improved the score of our raw predictions by about as 1.0 (public:3.5 and private:3.8).</p> <h1>Repeated Pseudo Labeling</h1> <p>This was also a really important step for us. After training our base LSTM models, we averaged their predictions, and used the postprocessed predictions as pseudo labels for LGBM. We retrained our LSTM models with the LGBM pseudo labels and repeated the process until it converged. Repeated pseudo labeling improved our score by another 1.0 (public:2.5 and private:2.8)</p>
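<p>The "snap to corridor" step with shapely's <code>nearest_points</code>, mentioned in the postprocessing list above, can be sketched in a few lines. The rectangle below is a stand-in for the real corridor polygon data.</p> <pre><code>from shapely.geometry import Point, Polygon
from shapely.ops import nearest_points

# toy corridor: a 20 m x 2 m rectangle standing in for the real polygons
corridor = Polygon([(0, 0), (20, 0), (20, 2), (0, 2)])

def snap_to_corridor(x, y):
    """Return (x, y) unchanged if it lies inside the corridor, otherwise the
    closest point on the corridor polygon."""
    p = Point(x, y)
    if corridor.contains(p):
        return x, y
    q = nearest_points(corridor, p)[0]  # nearest point on the corridor to p
    return q.x, q.y

print(snap_to_corridor(5.0, 1.0))  # inside the corridor: unchanged
print(snap_to_corridor(5.0, 4.5))  # outside: snapped to (5.0, 2.0)
</code></pre>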
Indoor Location & Navigation
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Absolute position model</h1> <p>My base model used wifi and beacon events to predict absolute position in a single timestamp. It used 50 events from 2 closest blocks of wifi events with hightest values of rssi and 10 closest beacon events. Events were sorted by <code>rssi</code>.</p> <p>I trained one model on all sites. I used two bidirectional LSTMs - one on embeddings of wifi bssids + some numerical features related to each wifi event and one on embeddings of beacon_id + some numerical features related to each beacon event. Model also used embeddings of concatenated <code>site</code> and <code>floor</code>.</p> <p>I think that the biggest improvement in this model was after I started interpolating waypoints in the training dataset. My best model was trained on waypoints interpolated every 1 second. </p> <h1>Relative position model</h1> <p>I also trained model, which predicted relative position of a smartphone based on sensor data. It predicted two numbers (deltaX, deltaY) based on 1 second of sensor data - 50 timestamps. I used Kalman Smoother to get targets for this model. It was a simple 1D-CNN model based on raw events.</p> <p>Predictions from this model were interpolated, to approximate differences between positions in two consecutive waypoints. It achieved ~2.0 RMSE. </p> <h1>Postprocessing</h1> <p>The first step of my postprocessing was inspired by <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a>'s great notebook: <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">indoor - Post-processing by Cost Minimization</a>. </p> <p>I assummed that models' predictions have the following distributions</p> <p>$$(\hat{X_i}, \hat{Y_i}) | (X_i, Y_i) \sim N_2((X_i, Y_i), \sigma_1^2 I_2)$$<br> and<br> $$(\hat{\Delta X_i}, \hat{\Delta Y_i}) | (X_{i+1}, Y_{i+1}, X_i, Y_i) \sim N_2( (X_{i+1}, Y_{i+1}) - (X_i, Y_i), \sigma_2^2 I_2) $$<br> and all pairs \( (X_i, Y_i) \) are uniformly distributed in the corridor of the building, where</p> <p>\( (X_i, Y_i) \) is a true position at timestamp \( i \) , \( (\hat{X_i}, \hat{Y_i}) \) is a prediction of absolute model, \( (\hat{\Delta X_i}, \hat{\Delta Y_i}) \) is a prediction of relative model and \( \sigma_1, \sigma_2 \) are RMSEs of absolute and relative models respectively.</p> <p>After some math, it turns out that vector of \( (X_1, Y_1, …, X_n, Y_n) \) has truncated multivariate normal distribution with easy to calculate mean and covariance matrix. It is truncated in such a way, that each pair \( (X_i, Y_i) \) lies in the corridor of the building.</p> <p>I implemented iterative method of sampling paths from this distribution similar to Gibbs sampling. In each iteration it samples pair \( (X_i, Y_i) \) using untruncated normal distribution conditioned on all the other pairs sampled so far until it samples a point inside the corridor or until it reaches given limit of trials. For each path I draw 500 samples and average last 200 samples. 
</p> <p>This method also allowed me to use the device id leak, by setting \( \sigma_1 \) to a small value for the leaked timestamps.</p> <p>Without the truncation to corridors, this method had performance similar to the Cost Minimization notebook - in fact I think they are almost equivalent. Adding the map information improved my public LB score by ~0.3.</p> <p>The second step of my postprocessing was standard Snap to Grid, without any additional waypoints.</p> <h1>Metrics</h1> <table> <thead> <tr> <th>method</th> <th>private LB</th> <th>public LB</th> </tr> </thead> <tbody> <tr> <td>best absolute model</td> <td>5.36934</td> <td>4.90242</td> </tr> <tr> <td>ensemble of 4 absolute models</td> <td>5.30067</td> <td>4.88062</td> </tr> <tr> <td>above + device id leak + sampling postprocessing</td> <td>3.33887</td> <td>3.00037</td> </tr> <tr> <td>above + snap to grid</td> <td>3.07282</td> <td>2.64846</td> </tr> </tbody> </table>
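<p>The truncation-by-rejection idea used in the sampler above (draw from the untruncated normal until the point falls inside the corridor, up to a trial limit) can be illustrated for a single point as follows. This toy samples around a fixed mean, whereas the actual sampler conditions each pair \( (X_i, Y_i) \) on the rest of the path.</p> <pre><code>import numpy as np
from shapely.geometry import Point, Polygon

rng = np.random.default_rng(0)
corridor = Polygon([(0, 0), (20, 0), (20, 2), (0, 2)])  # toy corridor polygon

def sample_truncated(mean, sigma, max_trials=100):
    """Draw from N(mean, sigma^2 I) restricted to the corridor; if the trial
    limit is reached, keep the last draw even if it lies outside."""
    for _ in range(max_trials):
        xy = rng.normal(mean, sigma, size=2)
        if corridor.contains(Point(xy[0], xy[1])):
            return xy
    return xy

# draw 500 samples around a noisy prediction and average the last 200 of them
samples = np.array([sample_truncated(mean=[5.0, 2.5], sigma=1.5) for _ in range(500)])
print(samples[-200:].mean(axis=0))
</code></pre>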
Indoor Location & Navigation
9th place solution higepon
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners!<br> It has been a very challenging competition and I learned a lot from my teammate <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a> and the Kaggle community. Much appreciated.</p> <p>Please see my teammate <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a>'s solution <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/20087" target="_blank">here</a>.</p> <h2>Solution</h2> <p>My model was a stacking of GRU and LSTM models with the following data.<br> <img alt="Screenshot 2021-05-18 12 52 34" src="https://user-images.githubusercontent.com/54491/118587697-0636b680-b7d8-11eb-8734-98aedd52f743.png"><br> <img alt="Screenshot 2021-05-18 12 52 42" src="https://user-images.githubusercontent.com/54491/118587716-0b940100-b7d8-11eb-9d6b-470af3d20d9c.png"></p> <h2>Key differentiators</h2> <ul> <li>We were able to create more training data using the <code>compute_step_postion</code> function provided by the host.</li> <li>We created training data based on waypoints instead of wifi groups (a rough sketch of this feature construction is given at the end of this post).<ul> <li>While the popular unified Wifi solution was creating the training set as<ol> <li>Pick a wifi group within the same timestamp.</li> <li>Find the waypoint with the closest timestamp.</li></ol></li> <li>We did something different<ol> <li>Pick a waypoint.</li> <li>Find the top 200 closest wifi bssids with some meta data.</li> <li>This only works well with more waypoints coming from <code>compute_step_postion</code>.</li></ol></li></ul></li> <li><a href="https://www.kaggle.com/saitodevel01/background-ideas-of-cost-minimization-notebook" target="_blank">The magical post processing</a> by <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a>.</li> <li>Finally, Pseudo Labeling worked really well for us in this competition.</li> </ul> <h2>Pointers to my teammate's solution</h2> <ul> <li><a href="https://www.kaggle.com/saitodevel01/background-ideas-of-cost-minimization-notebook" target="_blank">Background Ideas of "Cost Minimization" Notebook | Kaggle</a></li> <li><a href="https://www.kaggle.com/saitodevel01/11-pseudo-labeling-from-lb-2-586-with-retry" target="_blank">11.Pseudo Labeling from LB:2.586 with Retry | Kaggle</a></li> </ul> <p>Please let us know if you have any questions!</p> <p>Sorry I wasn't very clear. We pick the timestamp of a waypoint, then find the top 200 closest wifi records using the time delta. So for the test data, we just used the timestamp we're predicting.</p> <p>Thank you KhanhVD!</p> <p>Thank you!</p> <p>Great question. For example, our first round of Pseudo Labeling improved the score from 3.11233 -&gt; 2.89747 on the public LB.</p> <p>Yes, we did it repeatedly. But the thing is, it didn't improve much after the first pseudo labeling for the no-hand-labeling case (the one we submitted). But it worked better for the hand-labeling case. We didn't investigate why :)</p>
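<p>As an illustration of the waypoint-centred feature construction described above (one training row per waypoint, built from the wifi records closest in time), here is a small pandas sketch. The dataframe names and columns (<code>waypoints</code> with <code>timestamp</code>/<code>x</code>/<code>y</code>, <code>wifi_df</code> with <code>timestamp</code>/<code>bssid</code>/<code>rssi</code>) are hypothetical, not the team's actual pipeline.</p> <pre><code>import pandas as pd

def build_waypoint_rows(waypoints, wifi_df, top_n=200):
    """One row per waypoint: the top_n wifi records with the smallest time delta."""
    rows = []
    for _, wp in waypoints.iterrows():
        cand = wifi_df.assign(time_delta=(wifi_df["timestamp"] - wp["timestamp"]).abs())
        cand = cand.nsmallest(top_n, "time_delta")
        rows.append({
            "x": wp["x"],
            "y": wp["y"],
            "bssids": cand["bssid"].tolist(),        # categorical input for an embedding
            "rssis": cand["rssi"].tolist(),          # numeric input
            "time_deltas": cand["time_delta"].tolist(),
        })
    return pd.DataFrame(rows)
</code></pre> <p>At inference time the same construction is applied to the timestamps being predicted, and the extra waypoints interpolated with the host's step-position code multiply the number of training rows, which is why the two ideas work so well together.</p>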
Indoor Location & Navigation
Delta x,y CNN + MLP network from 5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! Congrats on your 2nd place finish as well, and GM status!</p> <p>After reading all of the other solutions, I think the big thing I did differently was that I was able to get &lt; 1.0 MAE error (averaged over x and y) in predicting the delta x and y for the path "legs" (moving from one waypoint to another).</p> <p><br><br> The structure of the network I used was a single main CNN for the imu data, but I then combined that with a lot of other path data into a big neural network.<br> <br></p> <h1>The network</h1> <p>Here's the basic structure:</p> <p><img src="https://i.imgur.com/Lux4i5o.png" alt="CNN MLP"></p> <p>Here's a description of all the inputs and outputs:</p> <h2>Inputs</h2> <h3>IMU 12 x 100</h3> <p>I collected all the imu data (acc, gyro, mag, rot) for each axis (x, y, z) which is the "12". Then I binned that data and averaged into 100 points for each leg. If there were &lt; 100 points for a leg, then the rest of the values were just 0.</p> <p>Some legs had a lot more than 100 points, but I found 100 to be a good middle ground of not too much data for the CNN to handle, but also enough data for the steps to be clearly visible on the acceleration graph.</p> <p>I also tried 200 and 50, and saw almost no difference at 200, but slightly worse performance at 50.</p> <h3>site_id</h3> <p>I converted the uuid of the sites to an integer 1 - 24, and used that as the input to an embedding (experimented with sizes here, but mostly around ~15)</p> <h3>floor_id</h3> <p>Same thing with floor, but that number was 1 - 139. Still around 15 values in the embedding</p> <h3>time_of_day</h3> <p>I calculated the local time of day (24 hour clock) and also added that to an embedding - my thought was that during busy times the collector might walk slower than not busy times. I'm not 100% sure that it helped (it would be interesting to do more tests adding/subtracting ALL of these values)</p> <h3>day_of_week</h3> <p>Same thing as time_of_day, but for day_of_week</p> <h3>total_leg_time</h3> <p>The total time recorded to move from waypoint A to waypoint B</p> <h3>total_acc_time</h3> <p>I noticed in the acceleration graph, that it was very common for the collector to pause right at the start or end of the leg - I guess they were probably tapping on the phone, or figuring out the direction to travel etc. 
So, I figured out when the acceleration went above or below a 66%/33% line, and called that the "start" time, and then figured that out for the end as well.</p> <p>This is what I called the "acc time" - which is the time the collector was actually MOVING, and not just sitting holding the phone.</p> <h3>device_id</h3> <p>Using the device data leakage I outlined here: <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/234543" target="_blank">https://www.kaggle.com/c/indoor-location-navigation/discussion/234543</a> I binned any device within 2% calibration values to be probably the same device, and assigned an id to each one, then put those ids into an embedding.</p> <p>The idea was that different collectors would have different stride lengths, and that might be able to be classified in an embedding</p> <h3>device_calibration</h3> <p>Same thought as device_id, but with the raw calibration values from the sensor data.</p> <h3>pct_leg</h3> <p>I calculated what % of the path this leg was located at - so the first leg of a path would be 0%, and the last leg would be near 100%. Not sure if this helped, but it was easy to calculate and I figured it couldn't hurt… again, it would be interesting to do a study to see what actually mattered out of all of these features.</p> <h3>previous_point_xy_time</h3> <p>I calculated the previous point of the path's x, y, and time values (global x,y and time not delta x,y,time) when I was attempting to join this network with the global x,y network I was building with the wifi values.</p> <p>The idea was that I could do some pseudo-label training with this, but it never seemed to work that well. I took this out of later versions.</p> <h3>next_points_xy_time</h3> <p>Same as previous_point_xy_time but for the points after this leg</p> <h3>rays</h3> <p>In order to try to classify if the point was in a narrow hallway or wide open space, I calculated the distance from the point to the nearest wall in 16 directions from the start point. (so small values mean narrow hallway, large values mean wide open space).</p> <p>This also served as a form of pseudo labeling that got better as my test predictions got better</p> <h3>rot3_mid</h3> <p>In investigating the graphs, I saw that the rot3 (z value) was the most important imu value for determining direction of travel, so I grabbed the middle rot3 value from the path and added that as a standalone feature.</p> <h3>before_waypoints</h3> <p>I added the delta x,y (or absolute x,y depending on which version of the network) for either the entire path, or just the last 5 points (again, based on the network version). I'm not sure if this helped a lot either, more experimenting is needed.</p> <h3>after_waypoints</h3> <p>Same thing as before_waypoints, but for the waypoints that come after the leg in question.</p> <h3>floor_waypoints</h3> <p>I included the nearest 20 waypoints from the training set (delta x,y from the leg starting point). 
The idea is that for most points (80% - 90%), the leg would go to one of these waypoints</p> <h2>Outputs</h2> <h3>delta x</h3> <p>The distance (m) that the person traveled in the x direction (+-)</p> <h3>delta y</h3> <p>The distance (m) that the person traveled in the y direction (+-)</p> <h3>distance</h3> <p>The total distance traveled (m) - only positive values</p> <h3>angle</h3> <p>The angle from waypoint A to waypoint B, from 0 to 2 PI (it took a bit of work to calculate that; and I'm still not 100% sure it helped or was necessary)</p> <h2>Training</h2> <p>I trained with either MSE or MAE, with either Adam or SGD, varying the learning rate by hand (I could have saved time using a learning rate annealing, but didn't get it setup correctly).</p> <p>I actually found pretty good results by starting with MSE/Adam, and then switching to MAE/SGD 1/2 way through, and then sometimes switching back and forth several times. That seemed to "shake" the network out of several local minimums, and allowed me to keep training for longer.</p> <p>I also sometimes used a custom loss, using MSE or MAE for delta x and delta y, but then calculating the output distance from the delta x and y, and comparing that against the input distance and also the output distance. </p> <h2>Data Augmentation</h2> <p>The only data augmentation I did was to add noise to the input (Gaussian noise), which let me train for longer with less dropout. I found added noise produced better results than higher dropout to prevent overfitting in this case.</p> <h2>Fine Tuning</h2> <p>I trained this network on all 24 sites, but then also fine tuned on every site individually, then averaged the results from the full 24 sites, and the fine tuned networks.</p> <h2>Ensembling</h2> <p>I ended up making about a dozen of these networks with slightly different inputs, structure and hyper params, and averaged the outputs.</p> <h2>Results</h2> <p>The results were &lt; 1.0m MAE for the large multi-site network, with the fine tuning getting some sites to 0.6m or 0.7m MAE (averaged over delta x and delta y).</p> <p>Some sites had a <em>really</em> hard time getting below 2m or even 3m in one case however, so it would be interesting to go back and figure out why that was and what would make that specific site better.</p> <h1>Overall</h1> <p>I think this is one of the better results for the delta x,y networks described by other teams, and probably the reason I was able to take that output and apply post-processing so successfully.</p> <p>I also don't think I hit the limit of what could be calculated - by either ensembling another team's network (like one of the RNN networks), or by doing a better job at figuring out which of those features actually mattered and using only those.</p> <p>Also, because some of those features get better as the test predictions get better (a form of pseudo labeling), the results should only get better by running it more times with the outputs from my or other team's results (I only ran the entire pipeline 3-4 times).</p> <p>:) There are several similarities between the indoor competition and the GPS one! I think a lot of the great solutions from this challenge will translate well to that one too… 😉</p>
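<p>Going back to the network description above, here is a minimal Keras sketch of this kind of multi-input model: a 1D CNN over the binned 100 x 12 IMU tensor, embeddings for the categorical ids, a vector of scalar path features, and the four outputs. The layer sizes, embedding dimensions and feature counts are illustrative guesses, not the author's actual architecture.</p> <pre><code>import tensorflow as tf
from tensorflow.keras import layers

def build_delta_model(n_sites=24, n_floors=139, n_devices=200, n_scalar_feats=64):
    imu = layers.Input(shape=(100, 12), name="imu")                 # binned acc/gyro/mag/rot
    site = layers.Input(shape=(), dtype="int32", name="site_id")
    floor = layers.Input(shape=(), dtype="int32", name="floor_id")
    device = layers.Input(shape=(), dtype="int32", name="device_id")
    scalars = layers.Input(shape=(n_scalar_feats,), name="scalars")  # times, rays, waypoints...

    # CNN branch over the IMU sequence
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(imu)
    x = layers.Conv1D(128, 5, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)

    # Embedding branches for the categorical ids
    e = layers.Concatenate()([
        layers.Embedding(n_sites, 15)(site),
        layers.Embedding(n_floors, 15)(floor),
        layers.Embedding(n_devices, 8)(device),
    ])

    h = layers.Concatenate()([x, e, scalars])
    h = layers.Dense(256, activation="relu")(h)
    h = layers.Dense(128, activation="relu")(h)

    outputs = {
        "delta_x": layers.Dense(1, name="delta_x")(h),
        "delta_y": layers.Dense(1, name="delta_y")(h),
        "distance": layers.Dense(1, name="distance")(h),
        "angle": layers.Dense(1, name="angle")(h),
    }
    model = tf.keras.Model([imu, site, floor, device, scalars], outputs)
    model.compile(optimizer="adam", loss="mse")
    return model
</code></pre> <p>Swapping the loss to MAE, adding Gaussian noise to the inputs, or fine-tuning a copy per site would all be applied on top of a skeleton like this.</p>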
Indoor Location & Navigation
Memorandum for 58th - last place of a silver medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>oh my gosh!! I got a message from <a href="https://www.kaggle.com/kouki" target="_blank">@kouki</a>!!! A party today!!!</p> <p>… yes, when I read the discussions on kaggle, I am overwhelmed, feeling this is not my world, but one of experts, which has nothing to do with me. It is sometimes hard for us beginners to just go one step forward. I would like other people like me to know, I am one of them.</p> <p>Yes, your LSTM model (and data) gave us all critical initial guidance. Thank you so much again, really, for your generosity in sharing your ideas with us all. </p> <p>Thank you!!! <a href="https://www.kaggle.com/maryadewunmi" target="_blank">@maryadewunmi</a> </p> <p>Thank you <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> !!!!</p> <h1>Memorandum for 58th — last place of a silver medal</h1> <p>This is a lengthy memorandum of how I ended up in the last place of the silver medals in the competition 'Indoor Location and Navigation'. Three quarters of this text are about what I did wrong, and what I learned from the experiments. If you are expecting a success story, this is probably not what you are looking for. I have been supported by the kaggle BIPOC program during the competition.</p> <ol> <li><a href="#section-one">Problem description</a></li> <li><a href="#section-two">Solution</a></li> <li><a href="#section-three">Calculation of floor</a></li> <li><a href="#section-four">Ideas that did not work</a></li> <li><a href="#section-five">Mistakes</a></li> <li><a href="#section-six">What I did not know, but learned new</a></li> <li><a href="#section-seven">What I still do not understand</a></li> <li><a href="#section-eight">What I still would like to learn</a></li> <li><a href="#section-nine">Timeline</a></li> </ol> <p>'I have not failed. I have just found 10,000 ways that won't work'<br> — Thomas Edison</p> <hr> <h2>1. Problem description <a></a></h2> <p>A tester (a data collector, a human) is walking inside a shopping mall. The shopping mall has several stories. The smartphone of the tester receives signals from the WiFi access points installed at various places in the building. An app in her/his smartphone returns the intensities of the signals it received to a server, where the host of the competition collects the data.</p> <p><img src="https://imgur.com/atUlptc.png"><br> How the signal is sent, and how the data is collected </p> <p>There are other data collected simultaneously, which include the acceleration of the smartphone, the signals received from iBeacons, the magnetic field, and so on. </p> <p>Using these data collected by the app, we will predict the 3-dimensional coordinates of the tester - x, y and the floor. The floor is the z coordinate. The test set contains the WiFi signals of 24 different buildings. Each building has 3 to 9 floors including basements.</p> <h3>Glossary:</h3> <p><strong>waypoint:</strong> The position of a moving object (airplane, ship, car, human), or the coordinates of it, shown as a function of time.<br> <strong>path:</strong> A collection of waypoints that a tester walked through.</p> <p><img src="https://imgur.com/NgVO8iZ.png"><br> Waypoint and path</p> <p><strong>BSSID:</strong> 'Basic Service Set Identifier'. Here I understand this is an ID allocated to an individual WiFi Access Point.<br> <strong>RSSI:</strong> 'Received Signal Strength Indication'. The intensity of the WiFi signal received.<br> <strong>trilateration:</strong> Allocating your position using the distances from 3 reference points.<br> <img src="https://imgur.com/WqMh0Lf.png"><br> Trilateration <br> <strong>triangulation:</strong> Measuring the distance from you to a certain point, by the angles to the point from both ends of a reference baseline.<br> <img src="https://imgur.com/BdkasXN.png"><br> Triangulation. If we know <code>a</code>, <code>b</code> and <code>c</code>, we can calculate <code>d</code>. </p> <hr> <h2>2. Solution <a></a></h2> <p>There is nothing special in my solution. It is a combination of the following notebooks:</p> <h3>(Data)</h3> <p><strong>1)</strong> WiFi data created by Kouki<br> <a href="https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats" target="_blank">https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats</a></p> <h3>(Models)</h3> <p><strong>2)</strong> LSTM model by Kouki in Keras/TensorFlow<br> <a href="https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats" target="_blank">https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats</a></p> <p><strong>3)</strong> LSTM model, a rewrite of Kouki's model in PyTorch by captainqxy<br> <a href="https://www.kaggle.com/luffy521/lstm-by-pytorch-with-unified-wi-fi-feats" target="_blank">https://www.kaggle.com/luffy521/lstm-by-pytorch-with-unified-wi-fi-feats</a></p> <p><strong>4)</strong> Calculation of floor by myself<br> <a href="https://www.kaggle.com/megner/consideration-of-floor" target="_blank">https://www.kaggle.com/megner/consideration-of-floor</a></p> <p><strong>5)</strong> PyTorch Lightning all-site model by Dongkyun Kim<br> <a href="https://www.kaggle.com/dongkyunkim/pytorch-lightning-lb-4-965-with-postprocessing" target="_blank">https://www.kaggle.com/dongkyunkim/pytorch-lightning-lb-4-965-with-postprocessing</a></p> <h3>(Post processing)</h3> <p><strong>6)</strong> Snap-to-grid by Rob Mulla<br> <a href="https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing" target="_blank">https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing</a></p> <p><strong>7)</strong> Acceleration by Akio Saito and museas<br> <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization</a><br> <a href="https://www.kaggle.com/museas/order-to-use-post-processing" target="_blank">https://www.kaggle.com/museas/order-to-use-post-processing</a></p> <p><strong>8)</strong> Acceleration + magnetic field by museas<br> <a href="https://www.kaggle.com/museas/order-to-use-post-processing" target="_blank">https://www.kaggle.com/museas/order-to-use-post-processing</a></p> <p><strong>9)</strong> Start-end-point connection based on timestamp by tomoo inubushi<br> <a href="https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage" target="_blank">https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage</a></p> <p><strong>10)</strong> Start-end-point connection based on timestamp + device ID by ganchan<br> <a href="https://www.kaggle.com/iwatatakuya/use-leakage-considering-device-id-postprocess" target="_blank">https://www.kaggle.com/iwatatakuya/use-leakage-considering-device-id-postprocess</a></p> <p>The only thing that is at least partially my own is <strong>4)</strong>, Calculation of floor. I recap it in the next section. </p> <hr> <h2>3. Calculation of floor <a></a></h2> <p>When I started the competition, a notebook by Nigel A. R. Henry immediately caught my attention.</p> <p><a href="https://www.kaggle.com/nigelhenry/simple-99-accurate-floor-model" target="_blank">https://www.kaggle.com/nigelhenry/simple-99-accurate-floor-model</a></p> <p>It predicts the floor numbers with 99% accuracy. If true, it practically removes the floor number from the parameters to be predicted.</p> <p>The notebook is written in <em>R</em>. I tried to understand the code in order to rewrite it in Python. It was a big disappointment for me when I found I did not even understand the first two lines of the code, although I spent 3 months in an on-line <em>R</em> course last year. </p> <p>I rolled back to the data. Kouki's data consists of two parts, BSSIDs and RSSIs for each waypoint (x, y, and floor). The RSSIs seemed to be some intensity of signals; however, it was totally unclear to me what was sending signals where. </p> <p>I noticed that there were somehow the same numbers of columns in BSSIDs and RSSIs. It suddenly became clear to me that Kouki's data are actually <em>paired</em> data. The signal of the first column of RSSIs belongs to the first column of BSSIDs. The signal recorded in the second column of RSSIs is paired with the one in the second column of BSSIDs, and so on. </p> <p>However, what could a BSSID be in the first place? Is it an ID of a smartphone? An ID tagged to each signal-transmission event? An ID of a chip installed in a WiFi Access Point?</p> <p>I was totally excited when I understood that a BSSID is actually a WiFi Access Point itself. It is an ID given to each WiFi Access Point installed in the various places of the building. A smartphone receives a signal that is sent from a WiFi Access Point with a fixed intensity at the point of origin. When that smartphone receives the signal, the signal is weaker, because of the attenuation over the distance from the source (=WiFi Access Point), and because the transmission is blocked by the structure of the building.</p> <p>The point is that WiFi Access Points are fixed to the building, and they do not move.</p> <p>Let us assume a smartphone at a particular waypoint receives a strong signal from BSSID-1 (=WiFi Access Point 1), and that smartphone was located on floor 1 (we know it from the labels in the training dataset). BSSID-1 is likely located on floor 1! If a smartphone receives a strong signal from BSSID-1 in the test set, that smartphone is likely located on floor 1 as well! I was excited by my finding, and wrote the notebook. The calculation of the floor is not even a prediction, but just a deterministic calculation. </p> <p>There are two votings involved in the calculation. A smartphone receives multiple signals from multiple BSSIDs at the same time. A smartphone sometimes receives a very weak signal from a BSSID. In that case, that BSSID may not be on the same floor as the smartphone.</p> <p>The first voting is to decide on which floor a particular BSSID (WiFi Access Point) is located. I first listed all the smartphones (=signal-reception events, more precisely) that received any signal from BSSID-1 in the training set. The record was sorted according to the intensity of the received signal. The labels of the floors are also attached. </p> <p><strong>BSSID-1</strong> (Hereafter all the tables are for illustrative purposes only and do not reflect real data)</p> <table> <thead> <tr> <th>reception event ID</th> <th>intensity</th> <th>which floor?</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>100</td> <td>1</td> </tr> <tr> <td>2</td> <td>99</td> <td>1</td> </tr> <tr> <td>3</td> <td>95</td> <td>0</td> </tr> <tr> <td>4</td> <td>80</td> <td>1</td> </tr> <tr> <td>5</td> <td>50</td> <td>2</td> </tr> </tbody> </table> <p>The voting tells us that BSSID-1 is most likely located on floor 1. The same procedure was repeated for all BSSIDs in the training set. Now we have a look-up table for all WiFi Access Points (=BSSIDs).</p> <table> <thead> <tr> <th>WiFi AP</th> <th>floor</th> </tr> </thead> <tbody> <tr> <td>BSSID-1</td> <td>1</td> </tr> <tr> <td>BSSID-2</td> <td>1</td> </tr> <tr> <td>BSSID-3</td> <td>0</td> </tr> <tr> <td>BSSID-4</td> <td>-1</td> </tr> <tr> <td>BSSID-5</td> <td>2</td> </tr> <tr> <td>BSSID-6</td> <td>3</td> </tr> </tbody> </table> <p>The second voting is to allocate a floor to each waypoint in the test sets. A smartphone at each waypoint receives signals simultaneously from many BSSIDs with various intensities. Let us say the smartphone received signals from 3 different WiFi Access Points, BSSID-1, -2, and -3, with signal intensities 100, 99, and 90, respectively. The look-up table we made was</p> <p><strong>waypoint-1</strong></p> <table> <thead> <tr> <th>WiFi AP</th> <th>floor</th> </tr> </thead> <tbody> <tr> <td>BSSID-1</td> <td>1</td> </tr> <tr> <td>BSSID-2</td> <td>1</td> </tr> <tr> <td>BSSID-3</td> <td>0</td> </tr> </tbody> </table> <p>A reasonable floor allocation for that waypoint-1 is 1 in this case. However, if the voting looks like this, </p> <table> <thead> <tr> <th>WiFi AP</th> <th>floor</th> <th>signal intensity</th> </tr> </thead> <tbody> <tr> <td>BSSID 1</td> <td>1</td> <td>100</td> </tr> <tr> <td>BSSID 2</td> <td>1</td> <td>99</td> </tr> <tr> <td>BSSID 3</td> <td>0</td> <td>90</td> </tr> <tr> <td>BSSID 4</td> <td>1</td> <td>89</td> </tr> <tr> <td>BSSID 5</td> <td>2</td> <td>80</td> </tr> <tr> <td>BSSID 6</td> <td>0</td> <td>78</td> </tr> <tr> <td>BSSID 7</td> <td>1</td> <td>76</td> </tr> <tr> <td>BSSID 8</td> <td>0</td> <td>55</td> </tr> <tr> <td>BSSID 9</td> <td>0</td> <td>45</td> </tr> <tr> <td>BSSID 10</td> <td>0</td> <td>35</td> </tr> </tbody> </table> <p>the result changes depending on how many BSSIDs we count.</p> <p>My small device is to optimize the number of 'voters' using the training set. I assume that the signal senders (=BSSIDs) of the top-p most intense signals will participate in the vote. The parameter p was optimized by checking the label (=floor label) in the training set. 'p' could be different from building to building, as each building has a different structure, and some have many WiFi Access Points installed, and others not. (A small code sketch of this two-stage voting appears at the end of this memorandum.)</p> <p>Meanwhile, there was a new notebook by J. William Hugh Dore. </p> <p><a href="https://www.kaggle.com/jwilliamhughdore/99-80-floor-accurate-model-blstm" target="_blank">https://www.kaggle.com/jwilliamhughdore/99-80-floor-accurate-model-blstm</a></p> <p>His model can predict the floor with 99.8% accuracy. I cross-correlated my floor allocations with J. William's. We differ only on 80 waypoints out of the ~10000 test waypoints (99.2% identical). I submitted two identical (x,y) predictions, one with the floor allocation by J. William and one with mine. Mine seemed to perform as well as J. William's. </p> <hr> <h2>4. 
Ideas that did not work <a></a></h2> <p>In order from bigger to smaller ideas that did not work.</p> <p><strong>1)</strong> site-by-site model<br> <strong>2)</strong> LightGBM<br> <strong>3)</strong> TabNet<br> <strong>4)</strong> MultiProcessing<br> <strong>5)</strong> Ray ['Tune']</p> <p>Here I will discuss only <strong>1)</strong> in detail.</p> <p>The fairly accurate floor prediction made the competition goal simpler. Now not only do I not need to predict the floor, but the floor can also be used as an input to training. </p> <p>There are three choices of scale for a model.</p> <p><strong>A)</strong> all-site,<br> <strong>B)</strong> site-by-site, and <br> <strong>C)</strong> floor-by-floor.</p> <p><img src="https://imgur.com/L9vxPcv.png"><br> There are 24 sites, each with several stories.</p> <p>An all-site model means making one single model for the 24 buildings (=sites) and giving the whole training dataset to that model. Site-ID is one of the input features.</p> <p>A site-by-site model means making 24 separate models, one for each building. Training and test datasets are split site by site, and only the training data for that particular site is given to each site model.</p> <p>A floor-by-floor model means making a model for each floor of each site.</p> <p>Kouki's and captainqxy's models were site-by-site models. I believed in many small models rather than one big model. This is like a factorization of a matrix. If we know we do not need particular features to train a model, why should we include them in the training set? I even had the good floor prediction that is required to go for floor-by-floor models.</p> <p>The problem with the floor-by-floor model was that, somehow, a smaller model did not reduce the training time, and it still required many models. Here are some numbers. </p> <table> <thead> <tr> <th></th> <th>all-sites</th> <th>site-by-site</th> <th>floor-by-floor</th> </tr> </thead> <tbody> <tr> <td>number of models</td> <td>1</td> <td>24</td> <td>~600</td> </tr> <tr> <td>data rows per model</td> <td>~250000</td> <td>~10000</td> <td>~400</td> </tr> <tr> <td>if 1 hour per model</td> <td>1 hour</td> <td>1 day</td> <td>3 weeks</td> </tr> <tr> <td>5-fold validation</td> <td>5 hours</td> <td>5 days</td> <td>15 weeks</td> </tr> <tr> <td>with optimization</td> <td>10 hours</td> <td>10 days</td> <td>30 weeks</td> </tr> </tbody> </table> <p>The floor-by-floor model turned out to show no particular advantage over the site-by-site model of Kouki and captainqxy, if not worse. This was partly because I accidentally planted several critical mistakes in the code (to be discussed in the next section) as I worked on the floor-by-floor models. The most serious mistake was GroupKFold. With over 600 models to train, one cycle of experiments took a long time. It led to a fatal delay in noticing these mistakes.</p> <hr> <h2>5. Mistakes <a></a></h2> <p>In order from most serious to less serious.</p> <p><strong>1)</strong> <code>GroupKFold</code><br> <strong>2)</strong> <code>datatable</code><br> <strong>3)</strong> root mean square is not mean root square</p> <p><strong>1)</strong> The waypoints were mistakenly grouped by the path (how the tester walked in the building). I somehow misunderstood that the training and validation data should be split according to the path. The paths in the training dataset schematically look like this.</p> <p><img src="https://imgur.com/jHmPbGX.png"><br> Do not use <code>GroupKFold</code></p> <p>Some paths are located in the southwest corner of the floor, and other paths in the northeast corner. Each path consists of several waypoints.</p> <p>I split the waypoints grouping by path, which means all waypoints that belong to a certain path are in the training set, and all waypoints that belong to another path are in the validation set. In the example above, the training set only contains the waypoints in the southwest corner, and the validation set the waypoints in the northeast corner.</p> <p>This apparently leads to miserable training results, because the training data does not include any data from the northeast corner. I should have used StratifiedKFold, as in the original model by Kouki. It was an obvious mistake, but I failed to notice it until I really started to examine the code line by line.</p> <p><strong>2)</strong> I learned in one of the previous competitions that <code>datatable</code> is a faster replacement for pandas. I also learned (the hard way) that one has to be very careful in handling the indexing of <code>datatable</code>, because, when the index is not sorted out cleanly, datatable joins two tables in an unexpected way.</p> <p>At some point I rewrote the whole code by replacing all pandas with <code>datatable</code>. The code indeed ran faster. The validation looked fine too. However, the inference was a complete mess, with an error over 50. I smelled there was a mistake in the way I used <code>datatable</code>, but after a few days of investigation, I could not spot the bug. I had to go back to pandas, and lost a few days.</p> <p><strong>3)</strong> The evaluation metric was 'mean position error', something like</p> <p><img src="https://imgur.com/THQKqpI.png"></p> <p>When I saw it for the first time, I just thought 'okay, it's a root mean square'. No! A root mean square is<br> <img src="https://imgur.com/ajQF9Nt.png"> </p> <p>A wrong evaluation metric damages training.</p> <hr> <h2>6. What I did not know, but learned new <a></a></h2> <ul> <li>How to re-write code in PyTorch Lightning style</li> <li>How I have to put all lines of code in functions to use <code>multiprocessing</code></li> <li>How to run Python code from VS Code. </li> <li>How to run Python code from the command line.</li> <li>How to fork and redirect the output of a code using the shell command 'tee'</li> <li>Make terminal output in color. </li> <li>Make a log file</li> <li>Save my own functions in a separate Python file, and import them</li> <li>Download a copy of a package from github, modify it, and call it from Python code, instead of using a pip-installed standard package.</li> <li>Process the training data as far as possible, and store the result in an HDF5 file. Read the file in the next execution of the code to make the experiment cycle faster. </li> <li><code>df.sample(frac=0.1).reset_index(drop=True)</code> to try only a small part of the training set.</li> <li>Learned that a LightGBM model only predicts one regression parameter at a time. If I need a vector output, like a pair of (x, y), I need two LightGBM models, one for x and one for y.</li> <li>How to use the objective function of optuna to tune the hyperparameters in a LightGBM model. </li> <li>Using <code>pdb.set_trace()</code> to stop code execution inside a function to fix a bug. Before, I used to write separate code to take out the contents of a function, as all the parameters are lost outside the function. <code>pdb.set_trace()</code> reduced the mental burden of fixing bugs tremendously.</li> <li>Download kaggle.json from the kaggle website, and store it in <code>~/.kaggle/</code> to submit a <code>submission.csv</code> from a local laptop, or from a virtual machine on the Google Cloud Platform, rather than do so using the web interface of kaggle.</li> </ul> <p>One interesting thing is that all of this is knowledge commonly known by kagglers, but probably never taught in any machine-learning courses. Kaggle is often called the best education platform, and I think this is why. </p> <p>I also learned a lot about the Google Cloud Platform Compute Engine: </p> <ul> <li>How to request a quota increase to get a GPU</li> <li>How to copy data between two virtual machines.</li> </ul> <hr> <h2>7. What I still do not understand <a></a></h2> <p><strong>1)</strong> Why does the LSTM model work? <br> <strong>2)</strong> Why does LightGBM not? <br> <strong>3)</strong> Why does the TabNet model have a BN at the beginning, and why does it still work? </p> <p>Here I discuss only <strong>1)</strong> and <strong>2)</strong>.</p> <p><strong>1)</strong> I borrowed an LSTM model from Kouki and captainqxy, but to confess, I do not understand why it works. An LSTM model handles data in which the order is meaningful, like text or a temporal sequence. Kouki's data consists of a group of columns of BSSID and another group of columns of RSSI. The signal strength in the first column of RSSI is tied to the first column of BSSID. How does a model know the data is coupled in this way?</p> <p>The column order of BSSIDs is sorted according to the intensity of the signal recorded in the RSSI columns. The sequence of input data is key to training an LSTM model, but it is modified and sorted before the data is fed to the model. What is the advantage of using an LSTM then, when the sequence of data contains little meaning other than a monotonic decrease?</p> <p><strong>2)</strong> The LightGBM model performed well in training and validation. The validation scores were about ~3 for both x and y coordinates. This should correspond to a distance error of sqrt(3^2 + 3^2) ~ 4.2, which is about the same amount I could reach with an LSTM model. Nevertheless, with the LightGBM model, the metric was never better than 8 when I submitted the prediction. I have not seen any other LightGBM notebooks that perform better than 7. I do not understand why. </p> <hr> <h2>8. What I still would like to learn <a></a></h2> <p>From more urgent issues to less urgent issues: </p> <ul> <li>How to set up a (more professional-looking) directory structure for code development.</li> <li>Use meaningful names for parameters</li> <li>How to plot data from VS Code in a way that I do not need to close a window manually each time. </li> <li>TensorFlow and how to use a TPU</li> <li>How to write 'Class' objects</li> <li>How to use github</li> <li>Understand TabNet better</li> </ul> <hr> <h2>9. Timeline <a></a></h2> <p>The following text solely contains a 'story' with few technical details. Please save your time accordingly.</p> <p>I joined this competition at the end of March, which was 8 weeks before the closing. I used the first 2 weeks to understand the problem. I learned first that GPS does not work inside a building, and therefore we need other means to locate people indoors. 
To my surprise, indoor location is used, for instance, in audio guides in museums. I had thought there were hidden sensors or something similar behind the paintings to detect the audio-guide equipment. It feels good to know how the world works.</p> <p>The next question was why we need machine learning at all for this problem. If you know the locations of the signal senders, and how far you are away from them, you can calculate where you are by trilateration, just like we do with 3 GPS satellites. The reason is that the exact locations of the WiFi Access Points are not known. Moreover, because of the structures of the building (like walls and corridors), how the signal is attenuated with distance cannot be calculated in a straightforward manner. We therefore need a means to convert the relative intensities of the WiFi signals to 2D locations of the tester on the floor.</p> <p>My plan was to make three models, in LSTM, LightGBM and PyTorch TabNet. I was never able to ensemble the predictions of different models, which seems to be a common strategy among kagglers. After solving the problem of the floor, I first headed into the blind alley of floor-by-floor models of these 3 different architectures, only to lose 3 weeks in vain.</p> <p>Looking back, the toughest part of the competition was the last part of these 3 weeks. Since I had already invested too much time in the floor-by-floor models, I wanted to save this idea, to salvage my effort. I wanted to think that the model should work and that it was just me who was missing something. As time ticked away, I had to edit 3 different models in parallel in my editor to save time. I edited one code, ran it, and while waiting for the results, I edited another code and let it run. I edited yet another code, and let it run. When the first code finally returned the results, I did not remember why I had run it. It was like 3 CPU cores running in my head, but with only one shared memory. As all the memories cross-talked, I had to think hard to remember what the issue was in the first place… I do not want to do that again.</p> <p>The kaggle BIPOC program grants a generous $1000 GCP credit to all the grantees. During the time described above, I had 3 VMs running at the same time. Each cost about 1 euro per hour (they were all preemptible machines with a GPU). When I finally accepted that the floor-by-floor models did not work, that I should rather go for an all-site model, and that only the LSTM model was worth keeping, I had only $150 left in my GCP credit. I was so disappointed with myself that I had used up such a huge amount of money on useless efforts. Nevertheless, I would like to underscore here that I would not even have had a chance to fail without the credit that BIPOC kindly granted me.</p> <p>As I looked at the calendar, disappointed, thinking it was probably too late to rewrite my dirty code into an all-site model, a notebook by Dongkyun Kim showed up on the bulletin board. It was an all-site model written in PyTorch Lightning. At first glance, his code was mumbo jumbo to me. Nevertheless I had to make a decision between the two options, on whether and how to use this luck.</p> <p><strong>A)</strong> read and understand the dirty code I wrote 3 weeks ago,<br> <strong>B)</strong> read and understand clean code written by other people.</p> <p>I decided on <strong>B)</strong>. (of course)</p> <p>Since I had learned the syntax of PyTorch Lightning by then, the code structure was at least familiar to me. Many functions in his code I just used as they were, without touching them. The LSTM model in his code seemed to be truncated at the end, and did not run as it was. I plugged in the LSTM model of captainqxy, and added the missing inference. How to predict in PyTorch Lightning was a mystery to me when I wrote Lightning code myself.</p> <p><a href="https://www.kaggle.com/megner/pytorch-lightning-lstm" target="_blank">https://www.kaggle.com/megner/pytorch-lightning-lstm</a></p> <p>It is not trainer(test_dataloader), model(test_dataloader), trainer.predict(test_dataloader) or anything similar. After wandering around the Lightning web-site for half a day, I finally found that the syntax is</p> <p><code>trainer.predict(datamodule=dm)</code></p> <p>where dm is your instantiated <code>DataModule()</code>.</p> <p>The model worked better than Dongkyun Kim claimed. I finally came back to the position (top ~30%) where I had been 3 weeks earlier.</p> <h3>(Acceleration and magnetic field)</h3> <p>There were several interesting post-processing ideas proposed on the bulletin board. One is by Akio Saito, who used the acceleration of the smartphone to correct the waypoints. museas added magnetic field information to make the correction better. I thankfully incorporated them into my post-processing.</p> <h3>(Leakage - snap-to-grid)</h3> <p>I had 3 days left until the closing. I did not know how I should use the time. The hottest topic on the discussion board was the 'leakage'. Leakage is a certain feature/rule/characteristic of the test data that does not show up in real life, but is present in the dataset given in a competition, and is useful for improving the prediction performance. </p> <p>One example of leakage is the 'snap-to-grid' feature documented by Rob Mulla, where the correct waypoint predictions (x,y) are not continuous values, but snap onto the grid points seen in the training set.</p> <h3>(Leakage - start-end-point)</h3> <p>Another example of leakage is that the first waypoint of a test-set path is sometimes the last waypoint of a training-set path, and vice versa. It implies a tester was walking about the floor along a path, sending data continuously to the app server. Some waypoints of this continuous path are used as training data, and another part of the path as test data, but both were actually part of one continuous path before they were split.</p> <p><img src="https://imgur.com/iDUkrwk.png"><br> Start-end-point connection</p> <p>It was a significant barrier for me to incorporate corrections based on 'leakage' in the post-processing, as it sounded too much like an expert business, and none of mine. However, as I had no better idea for the next 3 days, I started looking at the notebook by tomoo inubushi, thinking it would at least not do any harm. To my surprise, it only took a whole day to add his notebook to the post-processing. To my further surprise, it only took another whole day to add ganchan's notebook that uses device information (which identifies the individual smartphone of a tester).</p> <h3>(Ensemble)</h3> <p>The only things I changed in the LSTM model of captainqxy were</p> <ul> <li>BSSID/RSSI features were increased from 20 to 96</li> <li>LSTM layers were increased from 2 to 3</li> <li>Batch size was decreased from 256 to 128</li> <li>floor numbers were added as an input</li> </ul> <p>While experimenting with different parameters, I set aside 4 models that performed fairly well. Taking an average of the 4 of them pushed me up into the bronze range. It was the first time in the 2 months since I had joined the competition. That was, however, just 30 hours before the closing.</p> <p>On the last day of the competition, 17 May, I had 3 submission slots left. I did not have any money left to train another model, as one model costs about 10 Euro [a training run takes ~10 hours on a VM that costs about 1 Euro per hour]. I did not know how to spend the 3 slots. I vacuumed the rooms, polished the bathroom. The clock was ticking too slowly. I just could not stand the idea that I would lose my current bronze position to a 'shake-up' (the positions in the leader board change significantly when a larger test set is used for the final evaluation than during the competition period). I just hoped the day would be over as fast as possible.</p> <p>… after all the rooms were cleaned and vacuumed, I came back to my laptop, and thought perhaps I should stop worrying and instead play with the mixing ratios of the predictions. I tried weights (0.2, 0.25, 0.25, 0.3), giving the highest weight to the best performing single model. The leader board score improved slightly. Then I tried (0.15, 0.25, 0.25, 0.35). It improved again. Then (0.1, 0.25, 0.25, 0.4). The score got worse. I selected the first and the second submissions as my final submissions, and went to bed.</p> <p>The next morning, I opened my g-mail with a throbbing heart. I had a mail from kaggle saying that I got a medal! I closed the g-mail with a sigh of relief. 0.1 seconds before the window closed, I thought I saw the word 'silver' somewhere. I opened the g-mail again, and found that it said I got a silver medal. I did not understand what had happened, and went to the leader board. I found I was placed in 58th position, the very last place of a silver medal. I think I used up all the luck allocated to my life. This upward shake-up was probably because of my accurate floor calculation.</p> <p><img src="https://imgur.com/GkXGask.png"><br> very last place</p> <p>What I learned here was that I should struggle to the end until the curtain is really down, even if it may not look pretty. I would like to pat myself on the back for coming back to my laptop anyway once there were no rooms left to clean. </p> <p>My special thanks go to all my fellow BIPOC grantees, the mentors, the speakers and the organizers of the program. It took all of us to survive these amazing 3 months.</p> <p>Thank you so much,<br> Miwa Egner</p> <hr>
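<p>A compact pandas sketch of the two-stage floor voting described in Section 3 is given below. The dataframe layouts (<code>train_rx</code> and <code>test_rx</code>, one row per received signal) and the function names are hypothetical stand-ins for the actual notebook, and the first voting here simply takes the most frequent floor over all receptions of a BSSID rather than only the strongest ones.</p> <pre><code>import pandas as pd

def fit_bssid_floor_lookup(train_rx):
    """First voting: columns bssid, rssi, floor -&gt; most frequent floor per BSSID."""
    return train_rx.groupby("bssid")["floor"].agg(lambda s: s.mode().iloc[0])

def predict_floors(test_rx, lookup, top_p=10):
    """Second voting: columns waypoint_id, bssid, rssi -&gt; one floor per waypoint,
    using a majority vote of the top_p strongest BSSIDs found in the lookup."""
    df = test_rx.copy()
    df["floor"] = df["bssid"].map(lookup)
    df = df.dropna(subset=["floor"])
    df = (df.sort_values("rssi", ascending=False)
            .groupby("waypoint_id")
            .head(top_p))
    return df.groupby("waypoint_id")["floor"].agg(lambda s: s.mode().iloc[0])
</code></pre> <p>The number of voters <code>top_p</code> is the parameter the memorandum tunes per building against the training labels.</p>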
Indoor Location & Navigation
Part of the 30th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners! <br> It was actually a tough competition but we enjoyed it and learned a lot!<br> I want to say thank you to all the kagglers who participated in the comp! </p> <p>Our team got 30th place, and actually my model was not so important for our result.<br> Teammates' models performed far better than mine.<br> But our team agreed that we won't publish our solution. <br> Still, I think I want (or need) to publish some of my work, so I posted it. </p> <h2>The LSTM notebook</h2> <p>I published <a href="https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats" target="_blank">the LSTM notebook</a> 2 months ago, and I want to show how it went after that. </p> <p><a href="https://www.kaggle.com/kokitanisaka/self-attentintive-lstm-by-keras" target="_blank">This is the notebook. </a></p> <p>I applied a self-attention layer and some more modifications, so it got better (a rough sketch of the idea is at the end of this post). <br> Actually the performance of the notebook is so-so. The public score is 6.060. <br> Feel free to throw any comments or feedback. Thanks!</p> <h2>Post process</h2> <p>I tried a post process but it didn't work on the public LB, so we didn't use it.<br> But looking at the private score, it seems it worked. <br> So I chose to publish the following notebooks as well.</p> <p>The idea is to fix the result of the snap to grid. <br> We thought that the ground truth should be on grid points, so we relied on snap to grid.<br> But after we applied snap to grid, some paths looked definitely wrong, so I tackled the issue. </p> <p><a href="https://www.kaggle.com/kokitanisaka/create-arrayed-map" target="_blank">This is the preparation for the pp.</a><br> <a href="https://www.kaggle.com/kokitanisaka/fix-snapped-waypoints" target="_blank">And this is the pp.</a></p> <p>Again, thank you to all of you! This was another memorable competition for me. I really enjoyed it! <br> Looking forward to seeing you in other competitions! </p> <p>Thank you for the kind comment! </p>
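<p>For readers curious what "LSTM + self-attention" can look like, here is a minimal Keras sketch of the general idea; it is only an illustration, not the published notebook, and the input shapes, vocabulary size and layer sizes are hypothetical.</p> <pre><code>import tensorflow as tf
from tensorflow.keras import layers

def lstm_self_attention(seq_len=100, n_bssids=60000, emb_dim=64):
    bssid = layers.Input(shape=(seq_len,), dtype="int32", name="bssid")
    rssi = layers.Input(shape=(seq_len, 1), name="rssi")

    x = layers.Concatenate()([layers.Embedding(n_bssids, emb_dim)(bssid), rssi])
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)

    # Self-attention over the wifi sequence, with a residual connection
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=32)(x, x)
    x = layers.GlobalAveragePooling1D()(layers.Add()([x, attn]))

    xy = layers.Dense(2, name="xy")(layers.Dense(128, activation="relu")(x))
    model = tf.keras.Model([bssid, rssi], xy)
    model.compile(optimizer="adam", loss="mse")
    return model
</code></pre>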
Indoor Location & Navigation
Reflections from 36th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I wrote some weeks ago: “<em>What strikes me is the extent to which the Kaggle community has effectively solved this problem - we are now down to locating a phone in a multi-storey building to within an accuracy of about 3.65 metres. Well done to all of you folk who've contributed to this!</em>” Well, that would now be 1.5 metres, and I am even more impressed now than I was then with your collective ingenuity.</p> <p>I’ve been involved in this competition from very early on, so long ago that my first submission scored 14.845. Every journey needs to start somewhere, so huge thanks to those who provided the early public models, especially <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a>, <a href="https://www.kaggle.com/devinanzelmo" target="_blank">@devinanzelmo</a>, <a href="https://www.kaggle.com/hiro5299834" target="_blank">@hiro5299834</a> &amp; <a href="https://www.kaggle.com/jiweiliu" target="_blank">@jiweiliu</a>. </p> <p>I began my journey by concentrating on predicting the floors, which I saw as the easiest aspect to make progress on. Early models often predicted different floors for waypoints in the same path, perhaps a useful indicator of uncertainty. In the early days, inserting my best guess at the floor prediction into any of the then-public models would yield a significant improvement in score. Not only that, but I had the fun of reaching as high as fourth place, rather like the runner who sprints towards the front of the marathon field in the first mile, however briefly.</p> <p>In fact, I settled on my final floor model as early as February 17th, and subsequent experience suggests that it was at least largely correct. In the game of staying ahead of the best public submission, the next trick was to notice an overall bias in the existing submissions, and for some weeks moving the xy-coordinates of just about any public prediction by about (+0.10, -0.50) would improve its score.</p> <p>Two notebooks published by <a href="https://www.kaggle.com/wineplanetary" target="_blank">@wineplanetary</a> and by <a href="https://www.kaggle.com/yamsam" target="_blank">@yamsam</a> were particularly helpful in pointing out both that predictions should be located in the hallway or corridor areas of the malls, not in the shops nor in the car park outside, and that this could be visualised using the maps. Actually doing this would have been very demanding, were it not for the wonderful snap-to-grid notebooks provided by <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>, <a href="https://www.kaggle.com/mehrankazeminia" target="_blank">@mehrankazeminia</a>, <a href="https://www.kaggle.com/somayyehgholami" target="_blank">@somayyehgholami</a> &amp; <a href="https://www.kaggle.com/dragonzhang" target="_blank">@dragonzhang</a>. These were forked with gratitude. As <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> said elsewhere in the forum “<em>This competition is not about predicting exact x,y coordinates - it is about predicting which pre-defined location was traveled to. As already mentioned, in 80%-90% of the test set we already know these pre-defined locations.</em>”</p> <p>The snap-to-grid notebooks were complemented by other public postprocessing codes, including cost minimisation and so-called “<em>leakage</em>”. Regarding the latter, the extent to which the Arrow of Time should be respected in predictions is a matter of lively debate. My view is that if “<em>where was the phone two minutes ago</em>” is still a relevant question, then I’m not so fussed about using the future to inform our understanding of the past. So thanks also to <a href="https://www.kaggle.com/aristotelisch" target="_blank">@aristotelisch</a>, <a href="https://www.kaggle.com/iwatatakuya" target="_blank">@iwatatakuya</a>, <a href="https://www.kaggle.com/tomooinubushi" target="_blank">@tomooinubushi</a> &amp; <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a> for the postprocessing notebooks, which were essential for me to get down from ~6 to ~4 metres in accuracy.</p> <p>The snap-to-grid notebook was designed to snap a blend of multiple submissions. While it would have been trivial to change that, in fact I found it helpful to blend in what I think of as genetic diversity, mixing together models from different origins. While the robustness of ensembles remains a subject for debate, I found that incorporating this diversity generally improved my score, and multiple rounds of iterative postprocessing kept me about 0.4 metres ahead of the best public kernel – even though there were weeks when I would seem to get completely stuck and unable to make more progress. The final improvement came in the very last couple of hours, by doing postprocessing in a different order.</p> <p>Given how much discussion there was about shakeups or shakedowns, the choice of final submissions was going to be significant. I did what I think was the obvious thing, by hedging the two entries between my floor model and the other most popular one. I expected to gain an edge of around 0.35 over anyone who went for the other model on both submissions. Ultimately, rising by 27 places on shakeup was beyond my expectations, though I wasn’t that surprised to sneak back up to silver and would have been seriously disappointed to fall outside the medals having spent, I think, the whole competition in the top 100. 
In fact, my better chosen submission turned out to be only my second best private score, but this cost me just one place.</p> <p>I also want to thank those of you who have engaged in some fascinating discussions on the forum: <a href="https://www.kaggle.com/chris62" target="_blank">@chris62</a>, <a href="https://www.kaggle.com/ht5brer" target="_blank">@ht5brer</a>, <a href="https://www.kaggle.com/kmldas" target="_blank">@kmldas</a>, <a href="https://www.kaggle.com/lazaro97" target="_blank">@lazaro97</a>, <a href="https://www.kaggle.com/mamasinkgs" target="_blank">@mamasinkgs</a>, <a href="https://www.kaggle.com/nigelhenry" target="_blank">@nigelhenry</a>, <a href="https://www.kaggle.com/olaf2000" target="_blank">@olaf2000</a>, <a href="https://www.kaggle.com/ravishah1" target="_blank">@ravishah1</a>, <a href="https://www.kaggle.com/serigne" target="_blank">@serigne</a>, <a href="https://www.kaggle.com/suryajrrafi" target="_blank">@suryajrrafi</a>, <a href="https://www.kaggle.com/tvdwiele" target="_blank">@tvdwiele</a>, <a href="https://www.kaggle.com/zeemeen" target="_blank">@zeemeen</a> &amp; <a href="https://www.kaggle.com/zidmie" target="_blank">@zidmie</a> and I apologise to those whom I missed out. I also enjoyed reading the excellent notebooks from <a href="https://www.kaggle.com/chandrylpaternetony" target="_blank">@chandrylpaternetony</a>, <a href="https://www.kaggle.com/dailysergey" target="_blank">@dailysergey</a>, <a href="https://www.kaggle.com/deepijongwonkim" target="_blank">@deepijongwonkim</a>, <a href="https://www.kaggle.com/ghaiyur" target="_blank">@ghaiyur</a>, <a href="https://www.kaggle.com/hrshtt" target="_blank">@hrshtt</a>, <a href="https://www.kaggle.com/iamleonie" target="_blank">@iamleonie</a>, <a href="https://www.kaggle.com/jwilliamhughdore" target="_blank">@jwilliamhughdore</a>, <a href="https://www.kaggle.com/kokitanisaka" target="_blank">@kokitanisaka</a>, <a href="https://www.kaggle.com/mehrankazeminia" target="_blank">@mehrankazeminia</a>, <a href="https://www.kaggle.com/muhammadzubairkhan92" target="_blank">@muhammadzubairkhan92</a>; <a href="https://www.kaggle.com/museas" target="_blank">@museas</a>, <a href="https://www.kaggle.com/oxzplvifi" target="_blank">@oxzplvifi</a>, <a href="https://www.kaggle.com/rafaelcartenet" target="_blank">@rafaelcartenet</a>, <a href="https://www.kaggle.com/rdboyes" target="_blank">@rdboyes</a>, <a href="https://www.kaggle.com/satokiogiso" target="_blank">@satokiogiso</a>, <a href="https://www.kaggle.com/saurabhbagchi" target="_blank">@saurabhbagchi</a>, <a href="https://www.kaggle.com/somayyehgholami" target="_blank">@somayyehgholami</a>, <a href="https://www.kaggle.com/therocket290" target="_blank">@therocket290</a>, <a href="https://www.kaggle.com/tomwarrens" target="_blank">@tomwarrens</a> &amp; <a href="https://www.kaggle.com/tosinabase" target="_blank">@tosinabase</a>. I especially thank those of you who made great contributions to the notebooks or discussions, without the reward of a high final ranking on the leaderboard. Thanks also to <a href="https://www.kaggle.com/yuanchaoshu" target="_blank">@yuanchaoshu</a>, <a href="https://www.kaggle.com/juliaelliott" target="_blank">@juliaelliott</a> &amp; the Kaggle team for a great competition.</p>
Indoor Location & Navigation
[11th] LSTM + Dynamic Programming Post-processing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners! And especially to my teammate <a href="https://www.kaggle.com/Ouranos" target="_blank">@Ouranos</a>, who also becomes a Grandmaster after this very interesting competition. We teamed up in the last week and that gave us an amazing boost. At the end something went wrong, but we got our gold medal 😊</p> <p>My work has been mainly in the solution postprocessing. The question was how to take into account simultaneously, at path level, a <strong>regression model</strong> for the positions, the relative information from the <strong>sensors</strong>, the sequence of paths obtained from identifying unique <strong>devices</strong>, and most importantly, the fact that the positions are discretized over a <strong>grid</strong>.</p> <p>The funny thing is that it is possible to express this as a <strong>combinatorial optimization problem</strong> over the grid, with a cost function expressing that we want a solution over the grid that is as near as possible to the regression model, that agrees as much as possible with the deltas obtained from the sensors, and that uses the information of the sequence of paths, all simultaneously.<br> For a finite grid of points Xg and a given path p with nt steps, with model predictions X(t), sensor relative displacements Delta(t), and information from the previous and next paths (Xp, Delta t_p) and (Xn, Delta t_n), we can write a cost function:</p> <p>$$<br> \sum_{t=1}^{n_t} \epsilon \left(X_g(t)-X(t)\right)^2 + \alpha \left(X_g(t-1)-X_g(t) + \Delta(t)\right)^2 + \gamma \frac{\left(X_g(1)-X_p\right)^2}{\Delta t_p} + \gamma \frac{\left(X_g(n_t)-X_n\right)^2}{\Delta t_n}<br> $$</p> <p>Given suitable alpha, gamma and epsilon, this expression can be optimized <strong>exactly</strong> using <strong>dynamic programming</strong> over t (a small sketch of this dynamic-programming pass is included at the end of this post).</p> <p>In our case, the model was developed by <a href="https://www.kaggle.com/Ouranos" target="_blank">@Ouranos</a>. It is an average of 3 LSTM models, one of them specific to every building, trained with pseudolabeling:</p> <p>Train -&gt; post-process -&gt; Pseudolabel train -&gt; post-process</p> <p>The deltas are calculated using the code provided by the organizers in their github, and the sequence is obtained in a different way, which has been described in the discussions. 
We use a hash of the device information provided in the .txt files:</p> <pre><code># Brand:OPPO Model:PBCM10 AndroidName:8.1.0 APILevel:27 # type:1 name:BMI160 Accelerometer version:2062600 vendor:BOSCH resolution:0.0023956299 power:0.18 maximumRange:39.22661 # type:4 name:BMI160 Gyroscope version:2062600 vendor:BOSCH resolution:0.0010681152 power:0.9 maximumRange:34.906586 # type:2 name:AK09911 Magnetometer version:1 vendor:AKM resolution:0.5996704 power:2.4 maximumRange:4911.9995 # type:35 name:BMI160 Accelerometer Uncalibrated version:2062600 vendor:BOSCH resolution:0.0023956299 power:0.18 maximumRange:39.22661 # type:16 name:BMI160 Gyroscope Uncalibrated version:2062600 vendor:BOSCH resolution:0.0010681152 power:0.9 maximumRange:34.906586 # type:14 name:AK09911 Magnetometer Uncalibrated version:1 vendor:AKM resolution:0.5996704 power:2.4 maximumRange:4911.9995 # VersionName:v20191120-nightly-9-gde3748b VersionCode:424 </code></pre> <p>When no information is provided, the first 5 characters of the path name are a good proxy.</p> <p>Before starting the optimization, we extend the original grid dynamically with 2 types of points:<br> 1.- We add points from the X solution that are far from any original grid point.<br> 2.- When the distance between points in the X solution is bigger than a certain threshold, we introduce intermediate points.</p> <p>We are sharing the post-processing code at <a href="https://www.kaggle.com/vicensgaitan/11th-dynamic-programming-post-procesing" target="_blank">11th-dynamic-programming-post-procesing</a></p>
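<p>To make the dynamic-programming pass concrete, here is a minimal sketch, an illustration under assumptions rather than the shared notebook's code. It exactly minimises the first two terms of the cost above (the model term and the sensor-delta term) with a Viterbi-style recursion, omitting the path-chaining gamma terms for brevity; <code>grid</code>, <code>X</code> and <code>delta</code> are hypothetical numpy arrays of candidate grid points, model positions and sensor displacements. Runtime is O(T*G^2) for T steps and G grid points.</p> <pre><code>import numpy as np

def dp_snap(grid, X, delta, eps=1.0, alpha=1.0):
    """Exactly minimise sum_t eps*|g_t - X_t|^2 + alpha*|g_{t-1} - g_t + delta_t|^2
    over assignments g_t taken from `grid`, via dynamic programming (Viterbi).
    grid: (G, 2) candidate points, X: (T, 2) model positions,
    delta: (T, 2) sensor displacements (delta[0] is unused)."""
    T, G = len(X), len(grid)
    unary = eps * ((grid[None, :, :] - X[:, None, :]) ** 2).sum(-1)   # (T, G)
    cost = unary[0].copy()
    back = np.zeros((T, G), dtype=int)
    for t in range(1, T):
        # transition cost between every previous grid point i and current point j
        move = grid[:, None, :] - grid[None, :, :] + delta[t]          # (G, G, 2)
        pair = alpha * (move ** 2).sum(-1)                             # (G, G)
        total = cost[:, None] + pair                                   # (G, G)
        back[t] = total.argmin(0)
        cost = total.min(0) + unary[t]
    # backtrack the optimal sequence of grid indices
    idx = np.empty(T, dtype=int)
    idx[-1] = cost.argmin()
    for t in range(T - 1, 0, -1):
        idx[t - 1] = back[t, idx[t]]
    return grid[idx]
</code></pre> <p>The full post-processing, including the dynamic grid extension and the path-chaining terms, is in the notebook linked above.</p>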
Indoor Location & Navigation
[17th] 🔥 PERFECT cv 🔥 & improve step by step [中英]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone,</p> <p>This competition is very good, thanks to XYZ10 Technology, thanks to the organizer of kaggle, I love kaggle so much. Completing this competition was, I think, very good growth for me. I have learned (reviewed) a lot of things, including: TensorFlow, PyTorch, multi-threading skills, code specifications, standardized experimental records, and the importance of cv. I haven't made any breakthroughs in algorithms and features. The only thing I haven't seen anyone discuss is the construction method of my cv. I will share it later.</p> <h1>step by step</h1> <p>Now let’s take a look at my main improvement points in the past month:</p> <table> <thead> <tr> <th>Mainline</th> <th>public</th> </tr> </thead> <tbody> <tr> <td>The LSTM public kernel is used with simple parameter adjustment, also using cost min+snap2grid</td> <td>4.72</td> </tr> <tr> <td>A fake test is constructed by simulating the characteristics of the test data</td> <td></td> </tr> <tr> <td>Interpolate the wifi timestamps of the training set; the test set still uses the waypoint timestamps</td> <td>4.45</td> </tr> <tr> <td>Drop wifi logs with an interval greater than 5 seconds</td> <td>4.20</td> </tr> <tr> <td>By finding a timestamp very close to the train data, fix start and end points</td> <td>4.18</td> </tr> <tr> <td>The training set is unchanged, the test set also uses the wifi timestamps, then de-interpolate to get the waypoints</td> <td>4.16</td> </tr> <tr> <td>Do cost min at the wifi timestamp level, interpolate to get wp, <br>snap, fix the start and end points, and finally do cost+snap again</td> <td>4.0x-390</td> </tr> <tr> <td>4-fold CV using the fake test strategy</td> <td>3.82</td> </tr> <tr> <td>More careful fixing of start and end points</td> <td>3.67</td> </tr> <tr> <td>Build delta modeling at wifi timestamp granularity to improve the result of the wifi cost min</td> <td>3.46</td> </tr> <tr> <td>Build delta modeling at waypoint timestamp granularity to improve the result of the wp cost min</td> <td>3.36</td> </tr> <tr> <td>Do cost min at the wifi timestamp level, interpolate to get wp, <br>snap, fix the start and end points, nesting cost+snap+fix twice</td> <td>3.23</td> </tr> <tr> <td>Do cost min at the wifi timestamp level, interpolate to get wp, <br>snap, fix the start and end points, nesting cost+snap+fix three times</td> <td>3.19</td> </tr> </tbody> </table> <h1>fake test</h1> <p>When I did a simple path-group Nfold for the first time, there was no improvement. I realized that the distribution of the training set and the test set was inconsistent. I conducted a series of probes and found that the test set has some characteristics that seem to have been artificially imposed: <strong>the txt size of the test set is &gt;=2M (checked in linux), the time span is &gt;=60s, and the number of path points is &gt;=5</strong> (a small sketch of this filter is appended at the end of this post). In the first half of the month, I selected 626 paths as a fake test using the above restrictions. Whether or not post-processing is performed, my fake test scores are consistent with the lb scores. In the last week, I used 4-fold cross-validation and also applied the above restrictions to the validation folds to make my cv more consistent. 
Later, I checked the score of private, which also conformed to the same trend.</p> <h1>Failed attempt</h1> <ol> <li>Directly use png image features as end-to-end input</li> <li>Use pytorch to reproduce the network structure and lb score</li> <li>Use MLP to replace lstm that is not a time series</li> <li>Use all site data</li> <li>Try sequential RNN, but it does not work well and it is difficult to converge <br> . . . . . .</li> </ol> <h1>I have a question</h1> <p>I haven't figured out why the lstm model with a sequence length of 1 works so well, and why can't mlp replace it. . .<br> <strong>If you have any comments on this. I will be very grateful.</strong></p> <hr> <hr> <p>大家好,</p> <p>这个比赛非常棒,感谢十域科技,感谢kaggle主办方,我太爱kaggle了。完成这个比赛我觉得是对我个人的一次非常好的成长,我学到了(复习了)非常多的东西,包括:tensorfloor、pytorch、多线程技巧、代码规范、规范的实验记录、cv的重要性。我没有在算法和特征上有什么突破,唯一有一点我没有看到有人讨论的是我cv的构建方法。后面我将会分享到它。</p> <h1>step by step</h1> <p>现在先看一下我这一个月来的主要提升点:</p> <table> <thead> <tr> <th>主线</th> <th>public</th> </tr> </thead> <tbody> <tr> <td>使用了 lstm public kernel,进行简单调参,使用cost min+snap2grid</td> <td>4.72</td> </tr> <tr> <td>模拟test data 的特点构造了一个 fake test</td> <td></td> </tr> <tr> <td>对训练集wifi时间戳进行插值,测试集还是路径点的时间戳</td> <td>4.45</td> </tr> <tr> <td>过滤掉间隔大于5秒的wifi</td> <td>4.20</td> </tr> <tr> <td>寻找时间非常接近的点,修复开始结束点</td> <td>4.18</td> </tr> <tr> <td>训练集不变,测试集也是wifi时间戳,再反插值得到路径点</td> <td>4.16</td> </tr> <tr> <td>做wifi时间戳级别的cost min,插值得到wp,<br>snap,修复开始结束点,最后再嵌套一次cost+snap</td> <td>4.0x-390</td> </tr> <tr> <td>引入4折 fake test</td> <td>3.82</td> </tr> <tr> <td>更小心的修复开始和结束点</td> <td>3.67</td> </tr> <tr> <td>对wifi时间戳粒度的delta建模提高wifi cost min的结果</td> <td>3.46</td> </tr> <tr> <td>对wp时间戳粒度误差建模提高cost min</td> <td>3.36</td> </tr> <tr> <td>做wifi时间戳级别的cost min,插值得到wp,<br>snap,修复开始结束点,嵌套两次cost+snap+fix</td> <td>3.23</td> </tr> <tr> <td>做wifi时间戳级别的cost min,插值得到wp,<br>snap,修复开始结束点,嵌套三次cost+snap+fix</td> <td>3.19</td> </tr> </tbody> </table> <h1>fake test</h1> <p>第一次做Nfold的时候没有提升,我意识到训练集和测试集的分布不一致,我进行了一系列的探测,发现测试集有一些特点应该是人为限定的。测试集的txt大小&gt;=2M(linux下看),时间跨度&gt;=60s,路径点个数&gt;=5。在前半个月的时候,这个限制同样抽出了626个路径作为fake test,无论是否进行后处理,我的fake test得分和线上的得分都保持一致。在最后1周我使用4折交叉验证,作为valid的那一份数据我同样做了上述限制,使得我的cv更加一致。后面我检查了private的分数情况,也符合相同的趋势。</p> <h1>失败的尝试</h1> <p>1.直接使用png图像特征作为端到端的输入<br> 2.使用pytorch复现网络结构和lb分数<br> 3.使用MLP替换并不是时间序列的lstm<br> 4.使用所有site的数据<br> 5.尝试序列RNN,但是它效果不好且难以收敛<br> 。。。。。。</p> <h1>我有一个问题</h1> <p>我这一个月始终都没有想明白,为什么序列长度为1的lstm模型工作的这么好,mlp为什么无法替代它。。。<br> 如果您对此发表任何看法。我将非常感激。</p> <blockquote> <p>I find my modified MLP model performs slightly better than LSTM.</p> </blockquote> <p>great,make sense, Could you please share your network structure?</p>
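<p>As referenced above, a minimal sketch of the fake-test filter. This is an illustration under assumptions rather than the author's code: the parsing assumes the competition's tab-separated .txt format with <code>TYPE_WAYPOINT</code> rows, and <code>train_files</code> stands for your own list of train path files.</p> <pre><code>import os
from pathlib import Path

def read_waypoints(path_file):
    # minimal parser: keep (timestamp_ms, x, y) from TYPE_WAYPOINT lines
    wps = []
    for line in Path(path_file).read_text().splitlines():
        parts = line.split('\t')
        if len(parts) >= 4 and parts[1] == 'TYPE_WAYPOINT':
            wps.append((int(parts[0]), float(parts[2]), float(parts[3])))
    return wps

def is_fake_test_candidate(path_file):
    """Keep a training path only if it looks like a test path:
    txt size >= 2 MB, time span >= 60 s, and >= 5 waypoints."""
    if os.path.getsize(path_file) < 2 * 1024 * 1024:
        return False
    wps = read_waypoints(path_file)
    if len(wps) < 5:
        return False
    return wps[-1][0] - wps[0][0] >= 60_000

# fake_test = [f for f in train_files if is_fake_test_candidate(f)]
</code></pre>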
Indoor Location & Navigation
[22nd] place solution: Only Wifi Models + Public Post Processing + Pseudo labels
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you, updated. 👍</p> <p>I would like to thank kaggle for hosting such an interesting competition and my amazing teammates <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a> <a href="https://www.kaggle.com/dehokanta" target="_blank">@dehokanta</a> <a href="https://www.kaggle.com/shivammittal274" target="_blank">@shivammittal274</a> <a href="https://www.kaggle.com/nooblife" target="_blank">@nooblife</a>. Also, congratulations to Tom &amp; dott's team for another win, and to <a href="https://www.kaggle.com/mamasinkgs" target="_blank">@mamasinkgs</a>'s team for dominating the leaderboard for most of the competition. <br> I would like to share our approach, which did quite well for us and led us to a decent position on the leaderboard.<br> Firstly, all our models were trained on just the wifi features BSSID &amp; RSSI, with diverse approaches. We couldn't find a good way to include more features in our models; most of them performed poorly on both validation score &amp; leaderboard, hence we decided to stick entirely with wifi data.</p> <h4>Models:</h4> <p>1. Fastai / RNN: This was our best model, with the highest weight in the ensemble; the baseline was built by <a href="https://www.kaggle.com/nooblife" target="_blank">@nooblife</a> and I was really amazed by its performance. One of the reasons it performed better was generating the wifi data within the last 5 seconds of each position and feeding it sequentially to the model. With some more parameter tuning and a slightly different architecture, we managed to reach 5.26 with a single model without any post processing.</p> <ol> <li><p>Site-wise RNN training: We also trained a similar LSTM model per site; although its results were not as good as the LSTM on the complete data, it gave a good boost in the ensemble because of low correlation. </p></li> <li><p>MLP: It also did quite well in our final ensemble of models, giving a slight boost when merged with the LSTM models.</p></li> </ol> <p>Our model ensemble without any post processing scored <strong>4.8-4.9</strong> on the public leaderboard.</p> <h4>Post Processing</h4> <p>Our final solution is based on the public post processing with some additional tweaks and tricks. We did come up with a way to boost both local and leaderboard scores by 0.2-0.3 through post processing, but it relied on hand-labelled waypoints. 
The idea behind it was doing post processing in loops with different thresholds for snap-to-grid (a tiny sketch of this loop is appended at the end of this post). We might have reached a much better position than our current one if hand-labelled grids had been allowed; I hope no team in the current standings is using them.<br> Our models' score with post processing and optimised cost minimisation: <strong>3.57</strong></p> <h4>Pseudo Labels</h4> <p>The idea behind it was simple: since some of the waypoints were being changed by a good margin after post processing, we retrained some of our models on train + pseudo-labelled test data to get better predictions for some paths.<br> This brought our final score to <strong>3.43</strong> after adding the pseudo-label models to the ensemble, making us jump to 22nd position on the private leaderboard.</p> <p>Baseline version of rnn: <a href="https://www.kaggle.com/nooblife/indoor-location-rnn-v2" target="_blank">https://www.kaggle.com/nooblife/indoor-location-rnn-v2</a><br> Preparing data: <a href="https://www.kaggle.com/nooblife/indoor-location-rnn-data-v2" target="_blank">https://www.kaggle.com/nooblife/indoor-location-rnn-data-v2</a></p> <h3>Things that didn't work:</h3> <ul> <li>Transfer learning using buildings other than the test buildings.</li> <li>IMU+wifi feature models</li> <li>lgbm</li> <li>Post processing based on shapely</li> </ul> <h3>Things we couldn't finish:</h3> <ul> <li>Automated grid points</li> <li>Efficiently using other data with wifi features</li> <li>Using time features efficiently</li> <li>In-depth EDA of predictions</li> </ul> <h4>References</h4> <p><a href="https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats" target="_blank">https://www.kaggle.com/kokitanisaka/lstm-by-keras-with-unified-wi-fi-feats</a> by <a href="https://www.kaggle.com/kokitanisaka" target="_blank">@kokitanisaka</a><br> <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization</a> by <a href="https://www.kaggle.com/saitodevel01" target="_blank">@saitodevel01</a><br> <a href="https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing" target="_blank">https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing</a> by <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a><br> <a href="https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage" target="_blank">https://www.kaggle.com/tomooinubushi/postprocessing-based-on-leakage</a> by <a href="https://www.kaggle.com/tomooinubushi" target="_blank">@tomooinubushi</a></p> <p>It was really nice competing in this competition; I wish I could have joined it earlier instead of only the last 2-3 weeks. I think we did pretty well without the use of hand labels and with only a small amount of data. Thank you for reading our solution. </p>
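<p>For illustration, a minimal sketch of the looped post-processing idea mentioned above. This is not the team's code: the thresholds are made up, and <code>cost_minimise</code> stands in for the public cost-minimisation notebook's routine.</p> <pre><code>import numpy as np

def snap_to_grid(xy, grid, threshold):
    """Snap each predicted (x, y) to its nearest grid point when that point
    is closer than `threshold` metres; otherwise keep the prediction."""
    d = np.linalg.norm(xy[:, None, :] - grid[None, :, :], axis=-1)   # (N, G)
    nearest = grid[d.argmin(1)]
    return np.where(d.min(1, keepdims=True) < threshold, nearest, xy)

def loop_postprocess(xy, grid, cost_minimise, thresholds=(8.0, 5.0, 3.0)):
    """Alternate cost minimisation and snapping with a shrinking snap threshold."""
    for thr in thresholds:
        xy = cost_minimise(xy)           # smooth the path using sensor deltas
        xy = snap_to_grid(xy, grid, thr)
    return xy
</code></pre>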
Indoor Location & Navigation
[25th] Public Sub + Shortest Path Search
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a>.<br> I hope I can use my own model in the next competition.</p> <h1>Thank you everyone!</h1> <p>My solution is very simple. Everything is available from <a href="https://www.kaggle.com/tomooinubushi/25th-public-sub-shortest-path-search" target="_blank">here</a>.</p> <p><strong>1. I use the results of the <a href="https://www.kaggle.com/ahmedewida/indoorlocation-ensembling" target="_blank">public best submission</a>.</strong><br> I used the public sub because my NN models were never better than 7.7 in CV. I am looking forward to seeing other solutions.</p> <p><strong>2. Correct floor predictions based on the leakages of <a href="https://www.kaggle.com/tomooinubushi/retrieving-user-id-from-leaked-wifi-feature" target="_blank">shared wifi records</a> and <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/234543" target="_blank">device IDs</a>.</strong></p> <p>I assumed that paths with the same ID are on the same floor, which is not always true for train waypoints. Correcting start/end waypoints based on the leakages did not work well when combined with the following shortest-path-search post-processing. This process changes the floor predictions of only three paths, all in the private test set, i.e., it did not change the public LB score.</p> <p><strong>3. Postprocess the waypoints based on <a href="https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm" target="_blank">Dijkstra's algorithm</a>.</strong><br> I re-defined the task as a <a href="https://en.wikipedia.org/wiki/Shortest_path_problem" target="_blank">shortest path problem</a> rather than a regression task. I searched for the path with minimal <a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">cost</a> using Dijkstra's algorithm, in which the nodes are train and augmented waypoints (a small sketch of this idea is appended at the end of this post).</p> <p>Instead of using hand-labeled waypoints, I generated augmented waypoints with the following rules.</p> <ul> <li>Augmented waypoints have similar X and Y values to train ones (means of subsets of train waypoints).</li> <li>Augmented waypoints are in hallways.</li> <li>Augmented waypoints are sufficiently distant from train ones and from each other.</li> </ul> <p>I used code and ideas from many, many discussions and notebooks. Please notify me if I missed someone. 
Thank you very much.</p> <ul> <li><a href="https://www.kaggle.com/kenmatsu4/feature-store-for-indoor-location-navigation" target="_blank">https://www.kaggle.com/kenmatsu4/feature-store-for-indoor-location-navigation</a></li> <li><a href="https://www.kaggle.com/jiweiliu/fix-the-timestamps-of-test-data-using-dask" target="_blank">https://www.kaggle.com/jiweiliu/fix-the-timestamps-of-test-data-using-dask</a></li> <li><a href="https://www.kaggle.com/ahmedewida/indoorlocation-ensembling" target="_blank">https://www.kaggle.com/ahmedewida/indoorlocation-ensembling</a></li> <li><a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/234543" target="_blank">https://www.kaggle.com/c/indoor-location-navigation/discussion/234543</a></li> <li><a href="https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization" target="_blank">https://www.kaggle.com/saitodevel01/indoor-post-processing-by-cost-minimization</a></li> <li><a href="https://www.kaggle.com/museas/with-magn-cost-minimization" target="_blank">https://www.kaggle.com/museas/with-magn-cost-minimization</a></li> <li><a href="https://www.kaggle.com/higepon/visualize-submissions-with-post-processing" target="_blank">https://www.kaggle.com/higepon/visualize-submissions-with-post-processing</a></li> <li><a href="https://www.kaggle.com/robikscube/indoor-nav-visualize-predictions-train-data" target="_blank">https://www.kaggle.com/robikscube/indoor-nav-visualize-predictions-train-data</a></li> <li><a href="https://www.kaggle.com/nigelhenry/simple-99-accurate-floor-model" target="_blank">https://www.kaggle.com/nigelhenry/simple-99-accurate-floor-model</a></li> </ul> <p>Thank you for your comment.<br> This is a very important point and was <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/234543" target="_blank">already asked by the 2nd place winner</a>.<br> These thresholds (5,500) are based on EDA of the raw timestamps. I found that whether the start and end points are the same is related to the temporal difference between them. You can see this relationship <a href="https://www.kaggle.com/iwatatakuya/use-leakage-considering-device-id-postprocess" target="_blank">in the notebook by ganchan</a>, although that notebook uses a more sophisticated method for finding leaked start/end waypoints. <br> I created a histogram of the raw timestamp differences, grouped by whether they create a chain (coincidence of start and end waypoints) or not. At around 5,500 ms, the probability of a chain becomes 50/50.<br> I did not think these thresholds were optimal, but at the time I thought it was OK as long as I could demonstrate the existence of the leakage.</p>
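<p>A minimal sketch of the shortest-path idea above, for a single test path. This is my own illustration, not the notebook's code: <code>waypoints</code> stands for the candidate (train + augmented) waypoints, <code>X</code> for the public-submission positions and <code>delta</code> for the sensor displacements, and in practice the candidates per step would be restricted to nearby points to keep the graph small.</p> <pre><code>import networkx as nx
import numpy as np

def shortest_path_decode(waypoints, X, delta, eps=1.0, alpha=1.0):
    """Decode a path as a shortest path in a layered graph whose nodes are
    candidate waypoints at each timestep; edge weights follow the usual
    cost-minimisation terms (distance to prediction + sensor-delta mismatch)."""
    T, K = len(X), len(waypoints)
    G = nx.DiGraph()
    for j in range(K):
        G.add_edge('src', (0, j), weight=eps * np.sum((waypoints[j] - X[0]) ** 2))
    for t in range(1, T):
        for i in range(K):
            for j in range(K):
                w = (eps * np.sum((waypoints[j] - X[t]) ** 2)
                     + alpha * np.sum((waypoints[i] + delta[t] - waypoints[j]) ** 2))
                G.add_edge((t - 1, i), (t, j), weight=w)
    for j in range(K):
        G.add_edge((T - 1, j), 'dst', weight=0.0)
    nodes = nx.shortest_path(G, 'src', 'dst', weight='weight', method='dijkstra')
    return np.array([waypoints[j] for (_, j) in nodes[1:-1]])
</code></pre>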
Indoor Location & Navigation
[34th] My first medal!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Indoor Location & Navigation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks mate! 🎉</p> <p>Thanks <a href="https://www.kaggle.com/josephjzk" target="_blank">@josephjzk</a>!<br> Hmm.. well, 34th now. Congrats to you too, I see you got a silver medal too! 🎉</p> <p>Thanks <a href="https://www.kaggle.com/hiro5299834" target="_blank">@hiro5299834</a>!<br> Hey, I follow you; all your notebooks and your contributions to the community are awesome!</p> <p>Hello everybody! 😊<br> This is my first official (with medals and prizes) competition and I'm very happy to get a silver medal. Honestly, I was very worried about overfitting, especially when I saw this <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/239801" target="_blank">discussion</a> and then saw many users with a great track record in low positions. I literally watched the leaderboard all day to check whether there was any possibility of a shakeup.. until the grandmasters uploaded their submission (1.00) and I could finally relax. 😂<br> About my solution.. I'm not a grandmaster (at the moment, only a contributor-expert); I didn't do anything awesome, I only studied all the materials, notebooks, topics and papers that people uploaded and shared.. Well, I hope to learn more! And it's a pleasure to see many professionals and researchers willing to solve these kinds of complex problems. Thanks kaggle community! And congrats to the winners!.. I will read each line of their solutions! 🙌<br> <strong>Edit: Hmm, please check out the solutions of the first places; I read the information they shared and they deserve all the claps. When I'm not busy, I'll check them out in depth too!</strong></p> <p>Yeah. I'm happy to get good results in competitions. Here <strong>Top 3%</strong>, and in the tabular competitions too (<strong>Top 1%</strong>, <strong>Top 2%</strong>, <strong>Top 5%</strong>). And these topics are a way to celebrate and <strong>save</strong> important information, notebooks or some curious experiences.. because this is a great journey! 😊<br> PS: The downvote is because of <a href="https://www.kaggle.com/c/indoor-location-navigation/discussion/239987" target="_blank">https://www.kaggle.com/c/indoor-location-navigation/discussion/239987</a><br> That is a joke about this topic and my effort in the competition </p>
Instacart Market Basket Analysis
#11 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners. This is a really interesting competition and I have learned a lot from it. Here is a brief overview of my solution.</p> <h2>Models:</h2> <ul> <li><p><strong>User-Product pairs</strong>: The probability of each user-product pair is predicted by a LightGBM model. The model is trained with all training data + part of the priors data. Limited by my RAM, I added only the last 2 prior orders of every user, which boosts my score by about 0.001. Adding the last 3 or 4 prior orders may help to get a better score.</p></li> <li><p><strong>PNone</strong>: PNone is calculated in two ways: 1) as (1-P1)(1-P2)…(1-Pn) from the user-product model; PNone is underestimated this way since it doesn’t consider the correlation, so I multiply it by a coefficient to calibrate it, which gives a slight improvement on the score. 2) I build a separate model to predict PNone, whose features come from the order-level features of the main model. Averaging the PNone from these two ways performs better than a single None-prediction model.</p></li> </ul> <h2>Features:</h2> <p>I have generated about 300 features and 120 features are used in the final models. Here are some of the important features in my model that I have not found in public kernels or sh1ng’s script.</p> <ul> <li><p><strong>User_product_diversity</strong>: This feature captures user diversity within each aisle or department. It is calculated as the number of distinct products in this aisle/department for the user, either over the last 3 orders or over all prior orders.</p></li> <li><p><strong>User_dumb_degree</strong>: If we predict for the Nth order that the user will simply repeat all products of the (N-1)th order, then we can calculate the precision, recall and F1 score of the Nth order under this dumb strategy. This is a new order-level feature to capture the user's dumb degree, and we can calculate the mean, std and last value of the dumb degree for each user. This idea is inspired by @Jonathan Weisberg.</p></li> <li><p><strong>User_product_history</strong>: This feature records whether the user bought the product in each of the last five orders. If a user has 8 orders, a user_product_history such as ‘00101’ represents that he bought the product in the 5th and 7th orders and did not buy it in the 3rd, 4th and 6th orders. This feature is like the order streak feature but contains more information (a small sketch of this encoding is appended at the end of this post).</p></li> <li><p><strong>User_time_order_ratio</strong>: The proportion of the user's orders with this dow (hours, days_since) relative to user_total_orders. Hours and days_since_prior_orders are divided into four groups since every user only has a small number of orders. These features try to capture whether this order is out of the ordinary.</p></li> </ul> <h2>Ensemble:</h2> <p>The final submission is an ensemble of 4 LightGBM models with different features, datasets and params. The weights are decided by LB score and cv.</p> <p><br> Thanks all.</p> <p>thanks</p> <p>yes</p>
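<p>As mentioned above, a small sketch of the User_product_history encoding. This is my own illustration with assumed column names, not the author's code: <code>prior</code> is expected to hold one row per (user_id, product_id, order_number) from the prior orders, and here the leftmost character of the resulting string corresponds to the most recent prior order.</p> <pre><code>import pandas as pd

def user_product_history(prior, n_last=5):
    """0/1 string over each user's last `n_last` prior orders, per product."""
    user_orders = prior.groupby('user_id')['order_number'].max().rename('user_orders')
    prior = prior.join(user_orders, on='user_id')
    prior['orders_ago'] = prior['user_orders'] - prior['order_number']  # 0 = most recent
    recent = prior[prior['orders_ago'] < n_last].assign(flag=1)
    hist = (recent.pivot_table(index=['user_id', 'product_id'],
                               columns='orders_ago', values='flag', fill_value=0)
                  .reindex(columns=range(n_last), fill_value=0)
                  .astype(int))
    return hist.astype(str).apply(''.join, axis=1).rename('UP_history')
</code></pre>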
Instacart Market Basket Analysis
12th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The weights are from the LB score. Actually the random split is overfitting, but I found the LB score is better...</p> <p>Base features LB: 0.4055. Add data LB: 0.4064. Ensemble LB: 0.4075. The rest of the improvement is overfitting...</p> <p>Word2Vec needs a GPU. Maybe you will get nearly the same score, because I didn't update the "none model" for it, which has some features based on the "user_products_model". Then you need to change some paths by yourself.</p> <p>Congrats to the winners, especially weiwei. And it was a surprise that Faron didn't submit in the end. Thanks for sh1ng's baseline and Faron's F1 optimizer script. sjv's DL solution is wonderful, thanks for sharing.</p> <p><strong>MODEL:</strong></p> <p>1. User-product pair model.</p> <p>2. If-None model.</p> <p><strong>FEATURES:</strong></p> <p>User X Order features:</p> <p>Products features:</p> <p>Aisle features:</p> <p>Department features:</p> <p>User features:</p> <p>User X Products features:</p> <p>User X Aisles features:</p> <p>User X Department features:</p> <p>Products X time features:</p> <p>order_number features:</p> <p><strong>DATA:</strong></p> <p>I added the last prior order as train data. I also added the second-to-last prior order, but it didn't improve much.</p> <p><strong>ENSEMBLE:</strong></p> <p>all["label"]=0.28*all["prediction_arbor"]+0.12*all["prediction_lgbm"]+0.36*all["label_2"]+0.24*all["label_1"]</p> <p>prediction_arbor and prediction_lgbm are from sh1ng's baseline.</p> <p>label_2 was trained with a shuffled split, label_1 was trained with a split by userid. They have the same features.</p> <p><strong>SUB:</strong></p> <p>Use Faron's script with my "none model".</p> <p>code link:<a href="https://github.com/plantsgo/Instacart-Market-Basket-Analysis/tree/master">https://github.com/plantsgo/Instacart-Market-Basket-Analysis/tree/master</a></p> <p>Thanks~and hope for more sharing~</p> <p>Yes, I used it as features of the "none model"</p>
Instacart Market Basket Analysis
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is my awful submission selection.</p> <p>First of all, thanks for holding this memorable and intriguing competition! As others mentioned, there is no leak here and the data is clean. That is a marvelous thing, isn't it?</p> <p>I thought I got 1st place, but congrats @weiwei!</p> <p>Also, @sjv, your approach is even more splendid than I expected! Thank you for motivating me on the LB. It was a really amazing time.</p> <p>Anyway, I really enjoyed this competition. Again, thank you for holding this competition, and all kagglers!</p> <p>Here is my <a href="https://github.com/KazukiOnodera/Instacart">git</a>.</p> <p>I'm not that familiar with submission selection. But if I were selecting submissions seriously, I'd evaluate them by CV.</p> <p>Edit: It also depends on the structure of the train/test split. </p> <p>Actually, I didn't think about hyperparameters so much. I only cared about the time required.</p> <p>I used the feature importance of xgboost. We can get weight, gain and cover. I normalized those, summed them, and then picked features from the top (a tiny sketch is appended at the end of this post).</p> <p>Good question, I don't think one time zone suffices. I should have added another one.</p> <blockquote> <p>Did you use last 3 orders of all the users for taining???</p> </blockquote> <p>Yes, you need 200GB of memory or so to run my model. Read README.md.</p>
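<p>A tiny sketch of the feature-importance combination mentioned above (my own illustration; the exact normalisation used in the solution may differ):</p> <pre><code>import pandas as pd

def combined_importance(bst):
    """`bst` is a trained xgboost.Booster: combine its three importance types,
    normalise each to sum to 1, and rank features by the total."""
    imp = pd.DataFrame({t: pd.Series(bst.get_score(importance_type=t))
                        for t in ('weight', 'gain', 'cover')}).fillna(0)
    imp = imp / imp.sum()
    return imp.sum(axis=1).sort_values(ascending=False)
</code></pre>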
Instacart Market Basket Analysis
3rd-Place Solution Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, all high cardinality features were embedded (user_id, product_id, aisle_id, etc.)</p> <p>The best CNN/RNN gets logloss around .241, with public LB of about .407. The lgb model reduced logloss to .239.</p> <p>It's an empirical process for the most part. The CNN was inspired by an existing successful architecture for sequence processing (WaveNet) and the LSTM was a vanilla LSTM. The most important architectural parameters are the number of layers in the CNN/LSTM and the hidden state sizes, so those are the only ones I really played around with.</p> <p>All the networks could have been implemented in Keras, and with less code. There's still a lot of redundant code outside of the network itself. I've written the code for the training loop, logging, checkpointing, early stopping, etc. many many times so I decided it would be worthwhile to abstract some of it.</p> <p>Hello everyone,</p> <p>First off, thanks to the competition organizers for a clean, interesting, and leak-free dataset and congrats to the winners!</p> <p>My solution was heavily dependent upon the use of neural networks for feature extraction. I decided not to add any manual feature engineering since I thought it would be cool to try to win a competition with pure deep learning. In retrospect, this wasn't one of my best ideas, and I probably could have benefited from some additional features. Anyways, I managed to keep my codebase clean throughout the competition, and I've posted all the code on github in addition to a slightly more in-depth description of my approach.</p> <p>Here's the link - <a href="https://github.com/sjvasquez/instacart-basket-prediction">https://github.com/sjvasquez/instacart-basket-prediction</a></p>
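<p>For readers who want a feel for the WaveNet-style idea described above, here is a generic sketch in Keras. It is not the author's architecture (that lives in the linked repository): the layer sizes, the per-order feature dimension of 32 and the residual structure are all illustrative assumptions.</p> <pre><code>from tensorflow import keras
from tensorflow.keras import layers

def wavenet_block(x, filters, dilation):
    # dilated causal convolution with a residual connection
    h = layers.Conv1D(filters, 2, padding='causal',
                      dilation_rate=dilation, activation='relu')(x)
    h = layers.Conv1D(filters, 1, padding='same')(h)
    return layers.Add()([x, h])

inp = keras.Input(shape=(None, 32))             # (order sequence length, per-order features)
x = layers.Conv1D(64, 1, padding='same')(inp)
for d in (1, 2, 4, 8):
    x = wavenet_block(x, 64, d)
out = layers.Dense(1, activation='sigmoid')(x)  # reorder probability per step
model = keras.Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy')
</code></pre>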
Instacart Market Basket Analysis
4-th Place Tips
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I tried several functional forms and did a grid search to come up with the factor. The bottom line is that when n is large, you would want the factor to be close to 1, and when n is small, you would like it to be a little bit greater than 1. I capped the probability at 0.99. </p> <p>For correlation, suppose whenever product A is purchased, product B is also purchased (i.e. A and B are perfectly correlated). Then if we have P(A) = P(B) &gt; P(C), there are situations in which it is better to choose (A,C) or (B,C) than (A,B), if C is not correlated with A and B. </p> <p>Just try to multiply your probability by a factor. It is the easiest thing to add ^.^</p> <p>@&lt;^..^&gt;: For the first optimization method, it is just multiplying by a factor before putting the probabilities into Faron's script, or its Rcpp equivalent. It is quite fast and is even slightly better than the second method. </p> <p>My current code for the second method is a bit messy and extremely slow because it involves estimating the empirical count distribution for each product, which requires more tuning than the first method. The O(n^3) exact F1-optimization method is well written on page 5 of the paper and is quite easy to implement (it only took me less than 15 minutes), especially if you understand Faron's script well enough. However, it is not useful unless we supply a decent count distribution. I do not like to use this method because it is kind of slow compared to the O(n^2). </p> <p>Thanks. Engineering is fun when it improves the score and gives insights. But @sjv's method is so cool that mine becomes boring in comparison. </p> <p>I only used the user-product repurchase rate imputed by LDA and NMF. </p> <p>My solution is some ad-hoc feature engineering, which is rather boring and trivial compared to sjv's clean and cool neural network solution. Here are some tips: </p> <p><strong>Uncertainty and Correlation of Estimated Probabilities:</strong> Faron's O(n^2) F1-optimization script is sub-optimal because it does not address correlation. Uncertainty is also an issue: when all of the estimated probabilities have high uncertainty, it is better to choose all the products to optimize F1. I have two ways to address this issue: </p> <ol> <li><p>Multiply the probability by a factor that is negatively correlated with the conditional basket size. I tried some parametric functions, and I chose 1.6^(1/n^2), where n is the average conditional basket size: conditional on the user purchasing a certain product, how many products he purchases in this order. For "None", n=1 (a small sketch of this adjustment is appended at the end of this post). </p></li> <li><p>Exact F1 Measure, based on the paper An Exact Algorithm for F-Measure Maximization. The algorithm is O(n^3) if we supply the joint probability matrix (n*n) of a product being purchased and the corresponding basket size of the order. When the purchase probabilities are independent, it is easy to compute such a theoretical matrix in O(n^2) and the result would be the same as Faron's F1 optimization. I constructed an empirical distribution of conditional basket size and took a weighted average between the probability matrix under the independence assumption and this empirical one. 
I gained about 0.001-0.0015 from the above optimization. </p></li> </ol> <p><strong>catboost</strong>: catboost is my best single model, with a 0.4086 public LB and 0.4070 private LB. It is about 0.0003-0.0005 better than lightgbm and xgboost. I turned on the "accurate" option of xgboost, and it is 0.0015 better than the approximate one. </p> <p><strong>LDA and NMF</strong>: I also tried to use LDA and NMF features, which gave me a 0.0005-0.001 increase.</p> <p><strong>What did not work for me</strong>: </p> <ol> <li>I ended up not using lag1 data because it did not perform well based on my feature engineering. It's possible that when I was constructing summary statistics such as product average and product standard deviation, they got messed up. It would be great to see how others are using the lag1 data. </li> </ol>
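<p>A small sketch of the first adjustment described above (the 1.6^(1/n^2) factor and the 0.99 cap come straight from the text; the array names are my own):</p> <pre><code>import numpy as np

def adjust_probabilities(probs, avg_cond_basket, base=1.6, cap=0.99):
    """Scale each probability by base**(1/n**2), where n is the product's
    average conditional basket size ("None" gets n = 1), then cap at 0.99,
    before feeding the result to Faron's F1 optimizer."""
    factor = base ** (1.0 / avg_cond_basket ** 2)
    return np.minimum(probs * factor, cap)
</code></pre>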
Instacart Market Basket Analysis
54th solution with a nice user_product purchase encoding
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey,</p> <p>I'm publishing my solution a bit late. <a href="https://github.com/jacquespeeters/instacart-market-basket-analysis">https://github.com/jacquespeeters/instacart-market-basket-analysis</a></p> <p>Most of my code is done with Pandas, not in pure Python (I'm an R user, and therefore I use dplyr/data.table a lot). I'm still learning Python, therefore some code might not be good. </p> <h3>Strongest feature</h3> <p>My strongest feature is a binary encoding of the purchase pattern of products by users. Binary encoding is a nice way to encode a series of 1/0, isn't it?</p> <p><code> order_prior["UP_order_strike"] = 1 / 2 ** (order_prior["order_number_reverse"]) users_products = order_prior. \ groupby(["user_id", "product_id"]). \ agg({..., \ 'UP_order_strike': {"UP_order_strike": "sum"}}) </code></p> <h3>None handling</h3> <p>I created a model specifically to predict None.</p> <p>The main idea is to re-use feature engineering already done for the main model (like the user profile) and to create new features based on the predictions of the main model. </p> <p>Here is the piece of code highlighting the new features created in order to predict None.</p> <p><code> df_pred = df_pred.groupby(["order_id", "user_id"]). \ agg({'pred_minus': {'pred_none_prod': "prod"}, \ 'pred': {'pred_basket_sum': "sum", 'pred_basket_std':'std'}}) </code></p> <p>Obviously it is done in a cross-validation way in order to avoid any leakage.</p> <p>It gave me a nice overall boost of +0.01 </p> <h3>Parallel groupby in Pandas</h3> <p>I should write something about it, because you don't want to parallelize every group, since that creates a lot of data chunks. One would prefer to create fewer chunks and apply the groupby on these chunks (I use a rule of thumb of 3 chunks per core). Because transferring data chunks to cores costs computing power, you only want to parallelize the minimal number of chunks. But you also need more chunks than cores, because some cores might be faster than others, so faster cores must be able to process multiple chunks and slower cores fewer (a small sketch of this pattern is appended at the end of this post).</p> <h2>Conclusion</h2> <p>I'm happy to have won my first silver medal, but I'm a bit bitter given the fact I was better placed before Faron's and Sh1ng's scripts. Anyway I'll do better next time :)</p>
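<p>As referenced above, a minimal sketch of the chunked parallel groupby pattern. It is an illustration, not the repository's code: <code>user_features</code> is a made-up per-chunk aggregation, and on Windows the call should sit under an <code>if __name__ == '__main__':</code> guard.</p> <pre><code>import numpy as np
import pandas as pd
from multiprocessing import Pool, cpu_count

def parallel_groupby(df, key, func, chunks_per_core=3):
    """Split the group keys into ~3 chunks per core, aggregate each chunk in
    its own process, then concatenate the results."""
    n_chunks = cpu_count() * chunks_per_core
    keys = df[key].unique()
    parts = [df[df[key].isin(k)] for k in np.array_split(keys, n_chunks)]
    with Pool(cpu_count()) as pool:
        results = pool.map(func, parts)
    return pd.concat(results)

def user_features(chunk):
    return chunk.groupby('user_id').agg(n_orders=('order_id', 'nunique'),
                                        n_products=('product_id', 'nunique'))

# features = parallel_groupby(order_prior, 'user_id', user_features)
</code></pre>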
Instacart Market Basket Analysis
6th place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Around 1000 features and 100Gb RAM.</p> <p>Expanding the training set decreased the logloss error by ~0.0002-0.0003 but I didn't check how much it improved the LB score.</p> <p>Honestly speaking, I didn't. I had enough RAM and patience :)</p> <p>My approach is good old "feature engineer till you drop" so I suppose it's not so interesting after all, but I hope somebody will find something. :) Here is a brief overview of the approach.</p> <p>The solution follows the standard scheme: train a logloss-model for user-product pairs, train a logloss-model for None, use the probabilities for F1-optimisation.</p> <p>Let us denote the current user as U, the last order of U as LO, the current product as P and its aisle and department as A and D. Let us call an order an "A order" if it contains at least one product from aisle A (same for P and D).</p> <ol> <li><p>Product only features:</p> <ul><li>P is Low Fat/Organic :)</li> <li>200 binary "P_is_X_product" features for top-200 most popular products</li> <li>binary "A_is_X_aisle" and "D_is_X_department" features for all X aisles and departments</li> <li>average day/count interval between all P orders</li></ul></li> <li><p>User only features</p> <ul><li>7+24 binary day of week and hour of day features for O</li> <li>Ratio of reordered products in last N orders/last M days orders of U</li> <li>Average similarity of product sets for sequential orders (e.g. Jaccard measure of such sets)</li> <li>Average similarity of product sets for same day orders (it could be useful in case LO has days_since == 0)</li></ul></li> <li><p>User-product features:</p> <ul><li>Number of days/orders between LO and most recent order of U with P/A/D</li> <li>Number of orders between most recent order with P and most recent order with A (trying to see if user switched products inside A)</li> <li>Binary "it's been X orders since last order with P" features where X is from 1 to 10</li> <li>Average day/count interval between all U's orders with P/A/D</li> <li>Share of P products/orders among all/A/D products/orders bought by U (how often U buys P)</li> <li>Same share but only for orders since first P order (how often U buys P since he began to do so)</li> <li>Same share but for last N orders</li> <li>Same share but for orders during last M days</li> <li>Average position of P in orders of U (raw or normalized by size of order)</li> <li>Ratio of average interval between U's orders with P to average interval between all orders with P</li> <li>Ratio of number of days/orders since most recent U's P order to average interval between all orders with P</li> <li>Let's assume it's been N days/orders since last P order. Using all prior orders for each product P we can calculate the probability to reorder P if it's been N days/orders since last P order.</li> <li>Same could be done using only U's prior orders (but it's much more noisy of course). I've tried to smooth one probability with another but couldn't achieve good results.</li> <li>Product similarity features. Let's pretend that users often switch from product A to product B (for example Bananas and Organic Bananas). How could we try to detect such product pairs? 
We could assume that buying product B should considerably lower the probability of buying product A in the same order. Let's consider ratio = P(A in current order|A in previous orders, B in current order) / P(A in current order|A in previous orders). For unrelated products this ratio should be close to 1. For similar products this ratio should be close to 0. Such probabilities can be easily calculated using prior orders (a small sketch is appended at the end of this post). For example, the top-10 pairs with the lowest ratio (only for pairs with enough statistics) are:</li></ul></li> </ol> <p>Large Brown Eggs - Organic Large Brown Grade AA Cage Free Eggs</p> <p>Large Brown Eggs - Organic Large Grade AA Brown Eggs</p> <p>Green Seedless Grapes - Grape White/Green Seedless</p> <p>Organic Large Grade AA Brown Eggs - Large Brown Eggs</p> <p>Organic Broccoli - Organic Broccoli Crowns</p> <p>Banana - Organic Banana</p> <p>Organic Banana - Banana</p> <p>Organic Milk Reduced Fat - Organic Reduced Fat Milk</p> <p>Banana - Bag of Organic Bananas</p> <p>Half And Half Cream - Organic Half &amp; Half</p> <p>Using these pairs we can generate several features like "how many products similar to P user U bought since last P order". If this number is large it could mean the user switched from product P to a similar product Q.</p> <ul> <li>Average share of P aggregated by months.</li> <li>Linear extrapolation of the month-aggregated share mentioned above.</li> <li>P orders streak length of user U</li> </ul> <p>I've expanded the training set using the last prior order of all test users with at least 5 prior orders.</p> <p>The main user-product probability model is an average of 8 CatBoost models. I've used product ID as a categorical feature for CatBoost (yeah, that's why CatBoost has such a name :))</p> <p>For the None-classifier I've used the same training users. I've used my per-user features from the main model plus features based on the predictions of the main model. It's not a very clean and tidy approach, because the main model predictions are a little bit overfitted for training users (so features on train are distributed slightly differently in comparison to features on test), but this approach gave me good results, and I was unable to invent a better one. :) Splitting train users between main/none classifiers gave me considerably worse results. So, the None-classifier is a single CatBoost model.</p> <p>F1-optimization. At first I used a Monte-Carlo simulation using predicted probabilities to determine the solution with max expected F1 for each user. But Faron's approach gave me considerably better results. So I've used it instead of Monte-Carlo. :)</p> <p>Thanks!</p> <p>I've tried catboost, xgboost and lightgbm. Catboost gave better results, so I chose it over the other two algorithms. :)</p> <p>What is LabelEncoder? :)</p>
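<p>As mentioned above, a small sketch of the substitution-ratio computation for one candidate pair (A, B). It is my own illustration with a hypothetical layout: <code>orders</code> holds one row per order with the set of products in that order and the set of products seen in the user's previous orders.</p> <pre><code>import pandas as pd

def substitution_ratio(orders, a, b):
    """ratio = P(A in order | A in prior orders, B in order)
             / P(A in order | A in prior orders); close to 0 for substitutes."""
    had_a_before = orders['prev_products'].apply(lambda s: a in s)
    eligible = orders[had_a_before]
    p_a = eligible['products'].apply(lambda s: a in s).mean()
    with_b = eligible[eligible['products'].apply(lambda s: b in s)]
    p_a_given_b = with_b['products'].apply(lambda s: a in s).mean()
    return p_a_given_b / p_a
</code></pre>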
Instacart Market Basket Analysis
8th place solution - overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers (I &lt;3 well sampled test sets). Also thanks to Faron for a beautiful script and for making me read an interesting paper. </p> <p>In general I mostly used lightgbm in combination with a separate model for predicting None and F1 optimization. Additionally I found that, due to the structure of predicting unique user-product combinations, the model didn't really have a notion of user_id (order_id) groupings. Therefore adding features made from groupings of features, as well as out-of-fold model predictions on a user_id basis, helped quite a bit (literally taking features.groupby('user_id').mean()). Below I tried to list some of the other, maybe more interesting features I used.</p> <p>I definitely learned a lot during this competition. One of the lessons being: you can do (almost) anything in pandas in reasonable time if you spend enough time on planning and vectorizing :)</p> <p>Features:</p> <ul> <li>Various days-since metrics. By creating a mapping of order id onto the cumsum of days since last order, I made various timeseries features such as time since purchased, an exponentially weighted moving average of the last few days, days since the last None order, etc.</li> <li>Various moments of the distributions of product orders within the set of all user orders, such as mean ratio, skewness etc. Note that a lot of this becomes much more efficient in a dense representation using order_number.</li> <li>Various frequency metrics, such as: user-based median days between product repurchases minus # of days since the user last bought the product. This gives you a metric on whether a user is on-cycle or off-cycle to repurchase that product.</li> <li>Association rule learning: allows you to find out things like: salt is usually bought together with pepper. The problem is you can only reliably use such features for products that are bought rarely. This revealed some interesting patterns, but didn't help all too much in logloss.</li> <li>Use prior error in prediction. A common tactic for timeseries predictions is to take the prediction error of a model for the previous user order: I kicked out all train orders and reordered everything so that the last order for each user becomes the train set. We can then extract an error for each user prediction. Seemed to work but didn't help much in the basic model sadly :(</li> <li>Make user order 'sentences' out of product id purchases combined with order number, e.g. user 0 ordered: 1_2762 1_298 1_6378 2_298... This allows you to do things like dim reduction etc. to get user clusters, as well as other cross-product features using sparse lgb. Worked reasonably well but didn't seem very additive to the manual features.</li> </ul> <p>Model tuning:</p> <ul> <li>Small learning rate: even though it takes much longer, 0.01 seemed to give me significantly better results.</li> <li>Average over feature samples: since I used feature fraction 0.5 I had remarkable variation in logloss depending on the sorting of features. Therefore I took the mean over multiple models with varying sorting of features.</li> <li>Categorical features: lgb now offers a bunch of tuning parameters to reduce the chances of inevitable overfitting with high cardinality categoricals. 
I played with this and didn't get huge success, but it was better than nothing.</li> </ul> <p>Thanks! Let me try to give a specific example. Using days since last order purchased, we can do a groupby cumsum shift to get something like a user-specific order time with date granularity. Next we want to know the mean difference in days between repurchases of each user-product combination. For this we could do a groupby user+product apply diff, however this is extremely slow. Instead we can be a bit clever and do a groupby user+product shift and then take the difference between the shifted and unshifted vectors. After that we can easily take mean, med, std etc. The reason is that there are a lot of user-product combinations and operations become very expensive, but applying shift is very fast and then we can take the vectorized difference, which again is very fast (a small sketch of this pattern is appended at the end of this post). Rule of thumb: apply with axis=1 is evil and rarely really needed. groupby apply is often needed but should still be done with care, because some functions are a lot more expensive than others, especially if the return type is not a constant but a series or dataframe. Always try to vectorize! :)</p> <p>What I mean is, let's say we are interested in user 66 and product A. We can list the order history as either [1, 0, 0, 0, 1, 0] sorted by order_number (sparse representation), or we can list it as the actual order numbers [1,5] (dense representation). The second usually takes up less memory and is faster for computing order statistics. Similarly, in an extreme case you could pivot your whole table to be order_number * user_id * product_id, and fill all non-occurring cells with nans, but of course that's a waste of resources and the wrong representation of this data.</p> <p>Thanks, I used Python's mlxtend package, which includes association rule handling. Most of the stuff you will easily find is organic products like bananas and strawberries bought together, but when you filter to high confidence and lift there are some other more interesting patterns like salt &amp; pepper. So for those products, let's call them A and B, I added a feature: if I want to predict whether client 007 bought A, I add my out-of-fold model prediction for that client for product B (and -1 if he never bought B before). For feature sorting I think I saw variation on the order of 0.0002 in logloss.</p>
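<p>As referenced above, a small sketch of the shift-instead-of-apply(diff) pattern (my own illustration with assumed column names):</p> <pre><code>import pandas as pd

def repurchase_gap_stats(prior):
    """`prior` is assumed to have columns user_id, product_id and order_day,
    where order_day is the per-user cumulative sum of days_since_prior_order.
    Shift within each (user, product) group, subtract, then aggregate."""
    prior = prior.sort_values(['user_id', 'product_id', 'order_day'])
    prev_day = prior.groupby(['user_id', 'product_id'])['order_day'].shift()
    gap = prior['order_day'] - prev_day              # NaN for the first purchase
    return (gap.groupby([prior['user_id'], prior['product_id']])
               .agg(['mean', 'median', 'std']))
</code></pre>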
Instacart Market Basket Analysis
9th place Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>dart almost always performs better than <strong>gbdt</strong> for me. The trick is to use a high learning rate to offset the drop rate. </p> <p>The difference between 0.01 and 0.005 was 0.00005, let's say from 0.4082160 to 0.4082660.</p> <p><a href="https://github.com/kaz-Anova/StackNet#data-format">Sparse format</a> is supported too. Most of the time I use the sparse format. You also need to put </p> <pre><code>sparse=true </code></pre> <p>in the command line.</p> <p>I had around 40...but only 15 models. Actually, I forgot to mention this: apart from the normal predictions of these 15 models, I also created ranks per customer, i.e. for each model's prediction, the rank of the item within that customer. This made it 30 features (a tiny sketch of these rank features is appended at the end of this post).</p> <p>Then I also included some more aggregate features per customer and some flags for whether it is a new item (to capture 'None's). </p> <p>I actually use only StackNet for tuning, but I put only 1 model in the 1st layer and a dummy one (let's say a logistic regression) in the second layer. Then I keep tuning the parameters of the first model, doing one change at a time. For example, I will change max_depth from 5 to 6 and keep note of the improvement (or not) in validation. Once I am done with one algorithm (e.g. no further improvement), I replace it with another algorithm. Once I am done with all algorithms, I put them all together and run a full StackNet, saving the outputs. Then I can run a second StackNet on the outputs.</p> <p>For more information, please have a look at the Tip: <a href="http://blog.kaggle.com/2017/06/15/stacking-made-easy-an-introduction-to-stacknet-by-competitions-grandmaster-marios-michailidis-kazanova/">To tune a single model, one may choose an algorithm...</a></p> <p>I have also included some slides you can get <a href="http://www.filedropper.com/stacknetmeta-modellingframework">from this link</a>, which also include the above tip in a more visual way.</p> <p>Let me know if these help.</p> <p>Congrats to the winners - great work!</p> <p>I am committed to sharing my approach every time I get a good position. I will share my approach even though I am pretty sure the top competitors' approaches will be far more useful. Generally I did not expect to get that high, given that I never even managed to make the metric work internally - I was optimising using <strong>log loss</strong> the whole time. By the way, if anybody has an implementation of f1_score for python (not the one in sklearn, which seemed to be off in my case), that would be great! No wonder I over-fitted the private LB a bit, as I used the public LB to find the right cut-off for the F1 optimizer. </p> <p>(I also shamelessly invite you to follow <a href="https://www.facebook.com/StackNet/">my new FB page</a> where I will share insight about using data science tools and approaches)</p> <p>Using the forums/kernels has been very useful in this competition. 
My score has been greatly boosted by:</p> <ul> <li><a href="https://www.kaggle.com/mmueller/f1-score-expectation-maximization-in-o-n">Faron's F1 optimizer script</a></li> <li><a href="https://www.kaggle.com/c/instacart-market-basket-analysis/discussion/37697">Sh1ng's features</a> and <a href="https://github.com/sh1ng/arboretum">arboretum</a> too</li> <li><a href="https://www.kaggle.com/mmueller/order-streaks-feature">Faron's Streaks</a></li> <li>Features from <a href="https://www.kaggle.com/paulantoine/light-gbm-benchmark-0-3692/code">this script</a></li> <li>And the features from <a href="https://www.kaggle.com/nickycan/lb-0-3805009-python-edition">this script</a></li> </ul> <p>I give big credit to arboretum. I found it (obviously faster and) more accurate than xgboost (but not by much). Lightgbm was still my best (and faster) single model on a set of 300 features (all the above plus some of mine) with these params:</p> <pre><code>LightgbmClassifier
boosting:dart
num_leaves:22
num_iterations:3000
max_drop:5
drop_rate:0.01
categorical_feature:0,1,2*
learning_rate:0.12
threads:12
max_depth:7
feature_fraction:0.7
min_data_in_leaf:10
bagging_fraction:0.85
lambda_l1:1
lambda_l2:0.01
bagging_freq:1
seed:1
verbose:false
</code></pre> <ul> <li>categorical_feature:0,1,2 are product, aisle and department.</li> </ul> <p>I made a separate simple model to predict <strong>None or not</strong> for each order and fed it to the f1 optimiser.</p> <p>My own (extra) features included:</p> <ul> <li>Counts of purchased items/aisles/departments per customer in different intervals (like last 15, 30 days etc.)</li> <li>Timing features, like when was the last time the item was bought.</li> <li>Likelihoods on various categorical features including products, aisles, departments</li> <li>Cart position of the item and features around this area were also useful: average position of the item and features that show the shift in cart position over time</li> <li>Cardinality features like the number of different departments, products, aisles the customer shops from</li> <li>User-based features showing the tendency of buying new items - e.g. average proportion of new items in each order.</li> </ul> <p>Things that did not work:</p> <ul> <li>Making a model per product/aisle/department.</li> <li>Collaborative filtering</li> <li>Exploring the description of items. E.g. text analysis was useless.</li> </ul> <p>For stacking I used both StackNet and separate stacking (e.g. used the output of StackNet in another meta model - see below). I ensembled around 15 models - mostly lightgbms, <a href="http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science.html">H2o algos</a> and <a href="https://keras.io/">keras</a>' nns. However, I did not get that much from stacking in this competition. The metric's optimisation and the feature engineering were much more important.</p> <p>My best final meta model was an NN using keras and theano. 
The structure was this:</p> <pre><code>models = Sequential()
models.add(Dense(output_dim=1024, input_dim=input_d, init='lecun_uniform'))
models.add(Activation('relu'))
models.add(BatchNormalization())
models.add(Dropout(0.5))
models.add(Dense(512, activation='relu', init='lecun_uniform'))
models.add(Activation('relu'))
models.add(BatchNormalization())
models.add(Dropout(0.4))
models.add(Dense(2, init='lecun_uniform'))
models.add(Activation('softmax'))
opt = optimizers.Adam(lr=0.01)
models.compile(loss='binary_crossentropy', optimizer=opt)
</code></pre> <p>For the f1 optimizer I used <strong>probabilities higher than 0.005.</strong></p>
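<p>The "likelihoods on various categorical features" mentioned in the feature list above are essentially out-of-fold mean-target (likelihood) encodings. A minimal sketch of that idea, assuming a flat user-product training frame with a <code>reordered</code> target (my own illustration, not the code actually used here):</p> <pre><code>import pandas as pd
from sklearn.model_selection import KFold

def oof_likelihood(df, cat_col, target_col='reordered', n_splits=5, prior_weight=20):
    """Out-of-fold mean-target encoding with a smoothing prior."""
    enc = pd.Series(0.0, index=df.index)
    global_mean = df[target_col].mean()
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=1)
    for tr_idx, val_idx in kf.split(df):
        tr = df.iloc[tr_idx]
        stats = tr.groupby(cat_col)[target_col].agg(['mean', 'count'])
        smoothed = (stats['mean'] * stats['count'] + global_mean * prior_weight) \
                   / (stats['count'] + prior_weight)
        enc.iloc[val_idx] = df.iloc[val_idx][cat_col].map(smoothed).fillna(global_mean).values
    return enc

# usage (hypothetical column names):
# df['product_likelihood'] = oof_likelihood(df, 'product_id')
# df['aisle_likelihood'] = oof_likelihood(df, 'aisle_id')
</code></pre> <p>The out-of-fold part matters: encoding each row with statistics computed on the other folds avoids leaking its own target into the feature.</p>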
Instacart Market Basket Analysis
My Learnings from 18th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My solution was nothing fancy, a single lightgbm model with Faron's F1 optimization code. I ended up creating more than 170 features, but doing feature selection became very difficult with them in the end. Apart from the common frequency/distribution and interaction based features, here are some interesting features that I created:</p> <ul> <li><strong>product name length:</strong> it seems weird why this should matter, but it did improve the model a little bit</li> <li><strong>product name rarity:</strong> basically the mean idf of each word in a product name (I guess these features help capture people who are connoisseurs of exotic products)</li> <li><strong>product/user/aisle/department cluster</strong> based on their word2vec</li> <li><strong>product similarity with none:</strong> captures if the product is usually present in orders with no reorder <ul><li>product vector * user vector</li></ul></li> <li>product organic/gluten_free</li> </ul> <p>Apart from these I created many features to capture recency, streaks and past order similarity and diversity, which helped.</p> <p>Another thing that I tried was a voting ensemble of my final submissions, which gave me a decent lift on public LB whenever there was enough diversity in submissions (roughly 0.0005-0.001)... but in the end apparently my best ensemble was overfitting and would have dropped me 20 positions had I not also selected my single best model (4th best submission on public LB).</p> <p>Finally, I had a lot of fun with this competition, especially because of the clean and un-anonymised data, and I learned a lot from all the insightful discussions on the forums.</p> <p>Thank You</p> <p>Feature selection became a huge problem for me... I didn't know what was working and what wasn't, but here are the things I tried:</p> <ul> <li><p>feature importance in lightgbm based on split and gain - typically I would remove/modify features coming in the bottom 5 of both lists</p></li> <li><p>correlation with reordered: not a good technique since only linear relationships would get captured</p></li> <li><p>correlations within independent variables: highly correlated variables (corr &gt; 95%) don't add much to the model and may even adversely impact performance</p></li> <li><p>ratio of variable mean in train and test: it was mostly close to 1 for this competition, but I was still able to improve on LB a little bit by removing a few features with higher deviation</p></li> <li><p>for continuous features, segment them into 20-50 buckets and measure diversity in the mean of y. I used kinetic energy to capture diversity but entropy works too. For binary variables something like KL divergence can be used. Higher diversity is better.</p></li> </ul> <p>With 170 features, even after extensive cleaning it became very difficult: the same variable would come out important with one metric but useless with another. In the end I think the best way was to remove a few features and see the model performance, but that is too time consuming.</p> <p>Hope this helps.</p> <p>Regards,</p> <p>I tried several approaches to capture recency based on the order number and the days_ago variable. 
I created the days_ago feature using days_since_last_order such that if for a user with four orders the days_since_last_order values were None, 7, 10, 3, then days_ago would look like 20, 13, 3, 0, such that the last order (the train/test order) is always set at day zero. Next, to capture recency there are various techniques, like simply taking the sum of these numbers or the sum of their squares (squaring will give more weight to older orders with days_ago and to newer orders with order_number). Another feature could be the sum of days_ago with exponential decay; I tried an exponential decay with a half-life of 15/30/60 days.</p> <p>Hope this helps.</p> <p>Regards,</p> <p>You are right; to create vectors I treated the orders as sentences and product_ids as words. To get vectors for aisle_id and department_id as well, I simply added them to the words too. For example, a typical sentence would look like a1, p1, d1, a2, p2, d2... and so on. This also helps bring the vectors belonging to the same aisle and department together in the vector space, and you get better embeddings even when you have less data (not the case in this competition). To get user vectors I used doc2vec and used the user_ids as tags for each order/sentence.</p> <p>Regards,</p>
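<p>As a concrete illustration of the decayed recency idea above, here is a minimal pandas sketch (my own example; the column names and the toy numbers are assumptions, not taken from the actual solution):</p> <pre><code>import numpy as np
import pandas as pd

HALF_LIFE = 30.0  # days; 15/30/60 were the half-lives tried above

# toy user-product purchase history in the days_ago representation
# (0 = the day of the train/test order)
hist = pd.DataFrame({
    'user_id':    [1, 1, 1, 2, 2],
    'product_id': ['A', 'A', 'B', 'A', 'A'],
    'days_ago':   [20, 3, 13, 45, 5],
})

# exponentially decayed weight of every past purchase
hist['decayed'] = np.power(0.5, hist['days_ago'] / HALF_LIFE)

# recency feature per user-product pair: sum of decayed purchase weights
recency = (hist.groupby(['user_id', 'product_id'])['decayed']
               .sum()
               .rename('decayed_purchase_sum'))
print(recency)
</code></pre> <p>A purchase made today contributes 1.0, one made a half-life ago contributes 0.5, and so on, so the feature naturally favours recently and frequently bought products.</p>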
Instacart Market Basket Analysis
One week to Silver - LB 86 Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Just checked, the impact is significant - 0.0014 in both public and private LB. So I used two sets of customer segments. Based on the kernel by Andrea, I did KMeans with K=3 and then K=4.</p> <blockquote> <p><strong>A^b wrote</strong></p> <blockquote> <p>Good job on synthesizing the key ideas and adding a useful personal touch on them! Did you have time to study the impact of the customer segmentation on your score? How many customer segments did you include in your model? -Cheers</p> </blockquote> </blockquote> <p>Thanks! I wouldn't have done it without working with you and learning from you :) </p> <p>Yes, definitely we can improve further from what I have presented above. Due to the time constraint, there was no time to fine-tune most aspects of my pipeline. We can definitely use more sophisticated approaches to perform product and user segmentation, and maybe even product &amp; user pair segmentation. Looking at the solution sharing threads, there are several other things I really should have tried.</p> <p>In fact, I just realised that the LGB model I was using had a huge value for data points per leaf -&gt; it was set to 255 (carried over from Sh1ng's sharing). I changed that to a more reasonable value and had a go this afternoon, and it definitely improves the LGB model. This probably would also improve the final blending.</p> <p>I am afraid there is no magic there :)</p> <p>For fast iteration, it is mainly LGB (0.05) and XGB 'Hist' mode (0.1) at a higher learning rate - I switched to a finer learning rate for my final solution. With this competition, XGB 'Hist' has issues handling this amount of data and crashes once out of every few runs, so I relied more on LGB. If I take part in a competition for a longer period of time, sometimes I also use BayesianOptimisation to generate a list of different parameter combinations; I can then pick and choose from this list depending on whether I want to go fast or accurate.</p> <p>In terms of debugging or minimising human coding error, I faced the same challenge, and as I mentioned I decided it wasn't worth taking the time to debug my k-fold/oof routine. Generally speaking, I rely on functions/routines that I built up from previous competitions - this is very much still an ongoing process, but if I get to re-use a function that I previously built, I have better confidence that it would work.</p> <p>I have only been seriously competing for about 2 years, and only started building up my own collection of ML related functions about 1 year ago. I hope this collection will grow to be more complete as I take on more Kaggle challenges.</p> <p>In terms of implementing different branches of ideas and making sure of traceability, I am old fashioned: I use a google spreadsheet to keep a record of key information like algo, key parameters, early stopping, features, validation methods, val score, PL for each key run. Additionally, I make backup copies of codes/notebooks.</p> <p>Hello all, firstly congratulations to the winners and top performers, your effort and sharing are great and I have learnt a lot in a very short space of time thanks to you. 
Also, I want to say well done to the people who had been working on this from the beginning; in the end, tenacity will prevail in the form of knowledge gained, medals and good ranking.</p> <p>Basically, I have treated this competition as a week-long hackathon and spent one intensive week (50h+) on it. I am very lucky to be ranked within the top 100. This wouldn't have been possible without all the sharing. So big thanks from me. Here is the overall sketch of my approach to achieving a top 100 ranking in this short space of time:</p> <p>Shared Features:</p> <ol> <li>Features from LB0.3805009 baseline by 🐳鲲(China) </li> <li>Features from LB0.4029970 baseline <a href="https://www.kaggle.com/c/instacart-market-basket-analysis/discussion/37697#latest-213541">Sh1ng</a></li> <li>Product W2V PCA features by <a href="https://www.kaggle.com/omarito/word2vec-for-products-analysis-0-01-lb">Omar Essam</a>, I used the top 5 components.</li> <li>Customer Segment features by <a href="https://www.kaggle.com/asindico/customer-segments">Andrea</a></li> <li>Order streak by <a href="https://www.kaggle.com/mmueller/order-streaks-feature">Faron</a></li> </ol> <p>My own features:</p> <ol start="6"> <li>Lag 1 features - user total products, and user reorder sum in the n-1 order (last order in prior)</li> <li>Lag difference - user total products difference between orders: n-1 &amp; n-2, n-2 &amp; n-3, n-3 &amp; n-4</li> <li>Skew statistic on user product state: so let's say a user had 5 orders; for a given product, the product purchase states in these orders were - for argument's sake - [0 0 1 1 1]. So I capture this series and calculate the Skew and IPR measure - but only the skew measure was useful.</li> </ol> <p>These yield just above 100 features. </p> <p>Modelling &amp; CV Approach: I used GBM tools from LightGBM, XGBoost and Arboretum - reused most parameters from the original sharing with minor changes to the learning rate. The time limit meant I didn't have time to do any serious tuning. I had also used Keras to build an NN model, but the result wasn't good enough to be featured in the final blend.</p> <p>I tried to set up a 3-fold OOF pipeline for this. I must have made some silly mistake when setting up user_id into folds or aggregating prediction values. My LB score from the OOF pipeline was way off the normal range, and I decided that I didn't have enough time to get it right (by this time 4 days left). So bye-bye OOF, bye-bye stacking and bye-bye K-fold CV. I then switched to simply validating on the same 25% of train orders and took the risk of overfitting - I was hoping the amount of data in this competition would help to address that risk.</p> <p>Post-processing - Optimising F1 Score: Here is the part that I spent most energy on getting to grips with. I had no idea so much academic effort (like <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4442797/">this</a> &amp; <a href="https://www.comp.nus.edu.sg/~leews/publications/fscore.pdf">this</a>) had gone into such a seemingly simple metric! So I took a long time to read those papers, and tried to understand as much as possible the code shared by <a href="https://www.kaggle.com/mmueller/f1-score-expectation-maximization-in-o-n">Faron</a> and <a href="https://github.com/sh1ng/imba">Sh1ng</a>. 
I then integrated Faron's F1 optimisation routine in ways very similar to what is shared by <a href="https://www.kaggle.com/c/instacart-market-basket-analysis/discussion/37874">Ivan</a>.</p> <p>Final Blending: A blend of ARB*0.7+XGB*0.15+LGB*0.15; this ratio was checked and validated on the 25% validation data I left out. For each of these individual models I also used a lower learning rate on the last day of the competition to retrain - it turned out this gave by far my best private LB result, (+0.0002) better than anything else.</p> <p>My final thoughts on this competition:</p> <ul> <li>It is fun to treat it as a 1-week hackathon - I had to force myself not to dwell on small details and move as quickly as I could along the ML pipeline.</li> <li>I also had to make some tough trade-offs - i.e. giving up on fixing my OOF routine and working on other aspects.</li> <li>By the end, I half-anticipated some strong shake-up. I noticed that some of my models with similar local log-loss were giving significant differences on the LB via MF1. So in the last two days, I decided to include some features that were not improving my validation logloss but which I still believed to be reasonable features. It would be very interesting to see how others' logloss correlates with their LBs :)</li> <li>I have benefited a lot from sharing in this competition, but I have also faced significant challenges from late sharing, as discussed in depth in this post by <a href="https://www.kaggle.com/c/quora-question-pairs/discussion/33801">CPMP</a> on the Quora competition. My view is simply: no matter what, don't lose heart. In the long run, most of us benefit from learning from such sharing.</li> </ul> <p>That is it! Not a top solution, but quick &amp; dirty to get some decent result. Thanks again!</p> <p>Thanks Miguel, it was a mad-dash for silver to make master tier, and I am super glad it worked out. Still nowhere close to GMs like Kaza and Stanislav Semenov, so there is always waysssss to raise the limit :)</p> <p>@germayne A bit more news since this morning, this is now officially in the <a href="https://github.com/Microsoft/LightGBM/milestone/3">v3.0</a> milestone - did you do something to wake up the gods? ^ ^ </p> <p>I am afraid I am a LightGBM noob, but from a rough glance it seems that this is <a href="https://github.com/Microsoft/LightGBM/issues/283">work in progress</a> - the best way I think is to follow up with this LGB github issue and see when they would implement it, or better still, make some contribution yourself :) You can also head over to the <a href="http://kagglenoobs.herokuapp.com/">KaggleNoobs</a> slack to ask Laurae - he is a contributor to the LGB github R package. </p> <p>Aside from this, I normally do the CV routine manually; this gives some extra options to tailor how the folds are separated, and some feature engineering requires in-fold creation. </p>
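<p>Going back to feature 8 above (the skew of a user-product purchase-state series), here is a minimal sketch of how such a feature can be computed with pandas and scipy; it is my own illustration with toy data, not the code from this solution:</p> <pre><code>import pandas as pd
from scipy.stats import skew

# one row per (user, product, order): state = 1 if the product was in that order
df = pd.DataFrame({
    'user_id':      [1, 1, 1, 1, 1],
    'product_id':   ['A', 'A', 'A', 'A', 'A'],
    'order_number': [1, 2, 3, 4, 5],
    'state':        [0, 0, 1, 1, 1],   # the [0 0 1 1 1] example from above
})

# skew of the 0/1 purchase-state series per user-product pair (feature 8 above)
feat = (df.sort_values('order_number')
          .groupby(['user_id', 'product_id'])['state']
          .agg(lambda s: skew(s)))
print(feat)
</code></pre> <p>For a 0/1 series the sample skewness is driven by the share of orders that contain the product, so it acts essentially as a non-linear transform of the purchase rate.</p>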
Instacart Market Basket Analysis
RNN Solution - top 50
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Haven't seen a lot of neural network approaches posted so far, so I thought I'd share mine. I think this model could have been pushed a lot further in terms of rank, but I got a bit burnt out toward the end, and didn't spend a lot of time optimizing. (I only did 5 submissions total, and they used a model trained on my training fold, which had around 10k users held out).</p> <p>Code is <a href="https://github.com/colinmorris/instacart-basket-prediction">here</a>. The core of the rnn model is <a href="https://github.com/colinmorris/instacart-basket-prediction/blob/master/baskets/rnnmodel.py">here</a> (implemented in tensorflow). </p> <p>Basically, for each user, for each product <code>p</code> in that user's history, I generate a training sequence, where the RNN predicts 0/1 at each step for "didn't order"/"did order". </p> <p>At each time step, the RNN gets some inputs describing the current order (day of week, hour of day...) and the previous order (number of products, whether <code>p</code> was in the order...). I also train embeddings for products (and departments + aisles) as part of the RNN. The embedding for <code>p</code> is repeated as input to the RNN at each time step.</p> <p>Because the goal is just to predict *<em>re</em>*orders, I apply a mask to zero out the loss for all orders up to and including the first one containing <code>p</code>. </p> <p>A few notes of interest:</p> <ul> <li>The model is trained to predict <em>all</em> reorders - the final order isn't treated any differently from the rest. I tried implementing a 'finetuning' mode where I would train an existing model for a few epochs with only the final order counting toward the loss, but it seemed to make things go haywire. I think the sparse learning signal led to some exploding/vanishing gradients? Would have liked to explore this more.</li> <li>Anecdotally, sampling/weighting at the user level seemed to be essential to getting good results with this model - which didn't seem to be the case in my experiments with xgboost and other non-sequential methods. I had good results with the simple approach of sampling a single product per user in each epoch (the final code does something a little more complex for performance reasons). One good argument for doing this is that prolific users will generate more training sequences (because they have more products), <em>and</em> each sequence will be longer than average. So naively iterating through the dataset would lead to the model spending orders of magnitude more time on some users than others.</li> <li>A nice aspect of using a recurrent model was that I was able to get away with very few/simple features. For example, I didn't bother to write any features encoding frequency/recency because the model should be able to figure it out based on the sequence of previous labels observed. I basically only had about 6 features + product/aisle/dept embeddings.</li> </ul> <p>My submissions were a blend of RNN predictions + a boring gradient boosting model, with expected F1 optimization. 
(For your amusement, my repo contains an <a href="https://github.com/colinmorris/instacart-basket-prediction/blob/master/baskets/fscore.py">O(2^n)</a> and an <a href="https://github.com/colinmorris/instacart-basket-prediction/blob/master/baskets/clever_fscore.py">O(n^3)</a> implementation of expected F-score that predate the great spoiler kernel.)</p> <p>The repo has a few other models I played with that didn't make the cut (factorization machines with libfm, learning pairwise product interactions with logistic regression), mostly because I ran out of time scrambling to put together my submissions on the last day :)</p> <p>This was my first Kaggle competition, and I have to say, I had a lot of fun and I learned a <em>lot</em>. Thanks to everyone on the forums for the interesting ideas and discussions.</p> <p>In terms of the implementation, I got started by reading through <a href="https://github.com/tensorflow/magenta/blob/master/magenta/models/sketch_rnn/model.py">this code</a> from Google's sketch-rnn model and trying to understand it. I used that class as a template when I implemented my RNN, and it helped a lot.</p> <p>In terms of the theory behind how they work, I found <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/">this blog post</a> very helpful (especially the diagrams). Though I think you can get along pretty well even if you don't understand their inner workings and mostly treat them as a black box.</p>
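<p>To make the label/mask construction described above concrete, here is a small numpy sketch of the idea (my own illustration of the scheme, not the author's tensorflow code from the repo):</p> <pre><code>import numpy as np

def build_sequence(order_products, product):
    """order_products: list of sets, one set of products per order, in chronological order.
    Returns per-order 0/1 labels for `product` and a loss mask that zeroes out every
    step up to and including the first order that contains the product."""
    labels = np.array([1.0 if product in order else 0.0 for order in order_products])
    first = int(np.argmax(labels)) if labels.any() else len(labels)
    mask = np.zeros_like(labels)
    mask[first + 1:] = 1.0   # only later steps count as reorder targets
    return labels, mask

orders = [{'banana'}, {'milk'}, {'banana', 'milk'}, {'eggs'}, {'banana'}]
labels, mask = build_sequence(orders, 'banana')
print(labels)  # [1. 0. 1. 0. 1.]
print(mask)    # [0. 1. 1. 1. 1.]
</code></pre> <p>The masked positions still feed the RNN (so it sees the full purchase history), but they contribute nothing to the loss, which is what restricts the model to predicting <em>re</em>orders.</p>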
Instacart Market Basket Analysis
SQL feature engineering + XGBoost (.4026 private LB, 1/2 of #100 solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Once I'm fully seasoned, I think it would be. Near the end I was able to make new feature groups relatively quickly (the department-as-product stuff at the end)</p> <p>The enforced separation of concerns is nice to have in any case.</p> <p><a href="https://github.com/happycube/kaggle2017/tree/master/instacart">https://github.com/happycube/kaggle2017/tree/master/instacart</a> </p> <p>This isn't much compared to sjv's #3 code post*, but I'll share what I've got anyway :) It's a pretty basic model using features I generated largely based on Sh1ng's kernel, with some new features that handle departments like they were products (marginally useful) - the new thing is the use of SQL queries etc to run the feature engineering.</p> <p>I started with SQLite but needed arrays and aggregate functions, which led me to set up Postgres for the first time. I'm completely new at it, so it's not as idiomatic and elegant as it could be, but I'm still pretty proud of it as a first project, and it fills in a hole in my knowledge a bit.</p> <p>Hope you find something useful here, especially for those reading this after the next competition with relational data comes up :)</p> <p>(* - I wasn't expecting a medal at all, being #277 on the public LB - but I was still planning to post anyway!)</p>
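<p>For readers who want to try the same SQL-based feature engineering, here is a tiny self-contained illustration of the idea using Python's built-in sqlite3 (a generic sketch, not code from the linked repo; the table and column names are made up):</p> <pre><code>import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE order_products (user_id INT, order_number INT, product_id INT);
INSERT INTO order_products VALUES
  (1, 1, 10), (1, 2, 10), (1, 2, 20), (1, 3, 10),
  (2, 1, 20), (2, 2, 20);
""")

# user-product features computed directly in SQL with a GROUP BY,
# then pulled back into Python for the model
query = """
SELECT user_id,
       product_id,
       COUNT(*)          AS times_bought,
       MAX(order_number) AS last_order_number
FROM order_products
GROUP BY user_id, product_id
"""
for row in con.execute(query):
    print(row)
</code></pre> <p>Postgres adds arrays and custom aggregate functions on top of this, which is what made the more involved per-user-product sequence features practical in the write-up above.</p>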
Instacart Market Basket Analysis
Solution LB 39
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instacart Market Basket Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>deleted</p> <p>Congratulations to everyone who completed this competition. This is in fact my first kaggle competition and I really enjoyed it. Here I am sharing my solution. Advice would be highly appreciated!</p> <h2>Solution</h2> <p>The problem is formulated as approximating P(u,p | user's prior purchase history), which stands for how likely user u is to repurchase product p given the prior purchase history.</p> <p>The main model should be a binary classifier, and features created manually or automatically are fed to the classifier for generating predictions.</p> <h3>Features</h3> <p>I constructed both manual features and automatic features using unsupervised learning and neural networks.</p> <p>Manual features include statistics of the prior purchase history. As for automatic features, I used the following.</p> <p><strong>LDA</strong></p> <ul> <li>each user is treated as a document and each product is treated as a word</li> <li>generate a topic representation of each user and each product</li> <li>calculate a score by taking the inner product of the user and product topic representations</li> <li>similar operations on the aisle and department level</li> <li>both the scores and the compressed topic representations of users/items serve as good features</li> </ul> <p><strong>WORD2VEC</strong></p> <ul> <li>similar to LDA, but only on the product level</li> </ul> <p><strong>LSTM</strong></p> <ul> <li>The interval between user u's sequential purchases of product p is modeled as a time sequence.</li> <li>Use an LSTM to construct a regression model for predicting the next value of this time sequence.</li> <li>The predicted next interval serves as a good feature.</li> </ul> <p><strong>DREAM</strong></p> <ul> <li>RNN and Bayesian personalized ranking based model; refer to <a href="https://github.com/LaceyChen17/DREAM">this repo</a> for my implementation</li> <li>DREAM provides scores, dynamic user representations and item embeddings.</li> <li>It captures sequential information such as periodicity in users' prior purchase history.</li> </ul> <h3>Classifier</h3> <p>I constructed both a lightgbm model and an xgboost model.</p> <h3>Optimization</h3> <p>I used bayesian optimization to tune my lightgbm model.</p> <h3>Post-classification</h3> <p>I used <a href="https://www.kaggle.com/tarobxl/parallel-version-of-faron-s-script/">this script</a> to construct orders from pairs. Thanks to Faron, sh1ng and tarobxl!</p> <h3>Ensemble</h3> <p>I trained big models (500+ features), medium models (260+ features) and small models (80+ features). Final submissions were generated by bagging the top models using the median.</p> <p>Here is the <a href="https://github.com/LaceyChen17/instacart-market-basket-analysis/blob/master/README.md">link</a> to my code. It's a bit messy (´・ω・`) </p>
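<p>A minimal sketch of the LDA score feature described above (users as documents, products as words, score = inner product of topic vectors). This is my own illustration with scikit-learn, not the author's code, and the toy product ids are made up:</p> <pre><code>import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# each "document" is one user's purchase history, product ids used as word tokens
user_docs = ['p10 p20 p10 p30', 'p40 p50 p40', 'p10 p30 p20 p20']

cv = CountVectorizer()
X = cv.fit_transform(user_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

user_topics = lda.transform(X)                                  # users x topics
product_topics = lda.components_ / lda.components_.sum(axis=0)  # topics x products (column-normalized)

# score feature for (user 0, product 'p10'): inner product of their topic vectors
p_idx = cv.vocabulary_['p10']
score = float(np.dot(user_topics[0], product_topics[:, p_idx]))
print(score)
</code></pre> <p>The compressed topic vectors themselves (rows of user_topics and columns of product_topics) can be fed to the classifier alongside the score, which matches the "both score and compressed topic representations" point above.</p>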
Instant Gratification
0.975+ Private Solution (72nd) with a dumb way to incorporate GMM cluster membership
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your comment, Chris!</p> <blockquote> <p>The trick with GMM and n_components=6 is that you must apply it to all the data combined. You must combine train, public test, and private test</p> </blockquote> <p>I misspoke a little about what I did with GMM. Since GMM is unsupervised, I did run GMM on train+test (public+private) at the same time and added cluster membership id to all of them before QDA.</p> <blockquote> <p>(You'll need to run GMM a couple times and check with an if-clause).</p> </blockquote> <p>I think this is where I made my mistake. I only did one run of GMM for each magic split. I just did some experiments and I can see that cluster membership distribution can vary quite a bit, and that trying multiple times until you get near-uniform distribution is essential.</p> <p>First of all, I'd like to thank Kaggle for organizing this fun competition to beta test the new Kernels-only format. IMO Kaggle did a great job making this "synthetic" and "trial" competition still fun and exciting.</p> <p>I joined this competition late, after the community had already reverse engineered how the dataset was generated (using <code>sklearn.datasets.make_classification</code>). I created my own synthetic dataset locally using <code>make_classification</code> with <code>flip_y=0.05</code> and the same number of <code>n_informative</code> per magic split as with the original. I noticed that my local CV scores were too high compared to those of the original dataset, so I started tweaking the parameters of <code>make_classification</code> and figured out that <code>n_clusters_per_class=3</code> causes my local CV scores to closely agree with those of the original. Based on that, I used <code>sklearn.mixture.GaussianMixture</code> with <code>n_components=6</code> and <code>max_iter=10000</code> to figure out the cluster membership for the training set. Like other competitors, I initially tried to use this info to map the membership back to the training set target, but my initial experiments did not give great results. Instead, I simply concatenated the cluster membership to the training data (a mindless approach - basically it was just a single, extra int input column ranging from 0 to 5 along with the original data with unmodified target) and trained QDA. Still, this simple modification gave me a boost from 0.97002 Public LB to 0.97327. To improve my Public LB to 0.97426, I did several rounds of outlier removal based on model predictions and also averaging predictions based on models trained via different random seeds. My best Public LB 0.97426 gave me 0.97518 Private LB. It turns out that my best Private LB was 0.97529 (0.97424 Public LB) but I did not end up submitting it; this would have only given me one extra position in the Private LB, so I don't feel too unlucky. And I feel good about making the 0.975+ club, though I was at pretty much the bottom of this club :)</p> <p>All in all, I had way more fun and learned more than I initially thought I would! Hats off to the community who collective have done a lot of great detective work and openly sharing their findings... 
especially @cdeotte who was clearly the MVP of this competition in my mind!</p> <p>Thanks!</p>
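<p>A minimal sketch of the "single extra int column" trick described above, for one <code>wheezy-copper-turtle-magic</code> split (my own illustration; the variable names and the <code>reg_param</code> value are assumptions, not the exact settings used):</p> <pre><code>import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def qda_with_cluster_id(X_train, y_train, X_test, n_components=6, seed=42):
    # GMM is unsupervised, so fit it on train + test (public + private) together
    X_all = np.vstack([X_train, X_test])
    gmm = GaussianMixture(n_components=n_components, max_iter=10000,
                          random_state=seed).fit(X_all)
    cluster_train = gmm.predict(X_train).reshape(-1, 1)  # ints 0..5
    cluster_test = gmm.predict(X_test).reshape(-1, 1)

    # concatenate the cluster membership id as one extra column, target unchanged
    clf = QuadraticDiscriminantAnalysis(reg_param=0.5)
    clf.fit(np.hstack([X_train, cluster_train]), y_train)
    return clf.predict_proba(np.hstack([X_test, cluster_test]))[:, 1]
</code></pre>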
Instant Gratification
0.97596 Private LB. My best-scored submission (not chosen)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Surprisingly, <a href="https://www.kaggle.com/merkylove/best-conserative-unfair-cv-with-rf">this</a> was my best private score submission. It shares the same idea that <a href="https://www.kaggle.com/merkylove/10th-public-8th-private-solution?scriptVersionId=15757147">my final chosen submission </a> but I use another booster (gbdt instead of goss) and several other parameters</p>
Instant Gratification
1st Place Solution Sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Well, the answer is no. Actually pseudo labeling 2 times got no improvement on CV and LB for me, while it gave a 0.0002 boost on private LB...</p> <p>Thanks! We couldn't have made it without your helpful posts</p> <p>You are welcome and any question about the code is welcome</p> <p>Thanks and you are welcome</p> <p>Thanks and you are welcome</p> <p>Thanks and you are welcome</p> <p>Thanks!</p> <p>Thanks!!</p> <p>Thanks!</p> <p>Thanks and same to you!</p> <p>Thanks uncle, some of your posts helped us too :P</p> <p>Thanks and see you in the next competition</p> <p>Thanks!</p> <p>It is really a surprise to me that we finished first, a bit of luck lol</p> <p>I looked into the problem in the last few days and the sharing really helped a lot, especially Chris's <a href="https://www.kaggle.com/cdeotte/pseudo-labeling-qda-0-969">post</a> explaining QDA and Pseudo Labeling, and Infinitewing's <a href="https://www.kaggle.com/infinitewing/create-validation-datas-by-make-classification">post</a> explaining how many clusters are in each class. Many thanks!</p> <p>Our solution is kind of simple and direct:</p> <h3>Step 1</h3> <p>Use QDA to get CV predictions.</p> <h3>Step 2</h3> <p>Add pseudo labels to the test data, and flip y for training samples whose labels don't match the predictions, to generate new training data.</p> <h3>Step 3</h3> <p>Use GMM to divide each batch into 6 clusters, with a simple method to evaluate the clustering performance, and transform the cluster labels into class labels.</p> <h3>Step 4</h3> <p>Repeat step 2 and step 3 once.</p> <p>We didn't do much tuning except for some threshold selections and got 0.9754 for CV, 0.9744 for public LB and 0.97598 for private LB.</p> <p>The full code is <a href="https://www.kaggle.com/infinite/v2-all-gmm">here</a>.</p> <p>So just keep it simple and happy kaggling. :)</p> <p>Thanks and you are welcome!</p> <p>Thanks and you are welcome!</p> <p>You are welcome and thanks!</p> <p>Thanks and keep kaggling!</p> <p>You are welcome</p> <p>Thanks!</p> <p>Thanks!</p> <p>Thanks and you are welcome</p>
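<p>A rough sketch of Step 3 above - clustering one batch into 6 Gaussian components and turning cluster responsibilities back into class probabilities. This is my own illustration of the idea, not the code from the linked kernel:</p> <pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_class_probs(X_labeled, y_labeled, X_score, n_components=6, seed=0):
    """X_labeled/y_labeled: (pseudo-)labelled rows of one magic batch; X_score: rows to predict."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(np.vstack([X_labeled, X_score]))

    # map each cluster to a class probability: share of class 1 among labelled points in it
    clusters = gmm.predict(X_labeled)
    cluster_class1 = np.array([
        y_labeled[clusters == k].mean() if np.any(clusters == k) else 0.5
        for k in range(n_components)
    ])

    # soft prediction: responsibility-weighted class-1 probability
    resp = gmm.predict_proba(X_score)      # shape (n_rows, n_components)
    return resp @ cluster_class1
</code></pre> <p>With 3 true clusters per class, each of the 6 components should end up dominated by one class, so this mapping is what "transforms the cluster labels into class labels".</p>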
Instant Gratification
20th position kernel -- LB 0.97572 (private) 0.97438 (public)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to all the people who shared kernels/ideas during this competition and after the competition ended. I started after QDA and pseudo-labeling had been shared (i.e. the Public LB 0.969 kernel). Once I came across the post where <code>make_classification</code> was mentioned as the data generator, I thought of GMM and that was my first Kernel. My main idea was to use GMM to cluster the data (train and test data) and then use the probabilities from GMM as additional features to pass to QDA. I played briefly with different <code>n_components</code> for GMM and optimized that for each of the 512 models using information criteria. Then my best CV came from brute force: simply concatenating features from multiple GMMs with <code>n_components</code> from 2 to 9 (these should have been even numbers !!). I played around with different regularization parameters for QDA. I tried pseudo-labeling, but obviously it was reinforcing the bad labels. So, I used bagging to try to eliminate the effects of the flipped labels on QDA.</p> <p><a href="https://www.kaggle.com/ggrizzly/model-gmm-v4-even-clusters?scriptVersionId=16002313">Here is the link to the kernel (version 5)</a></p> <p>One question is still on my mind: what would be a correct CV scheme for this kind of problem, where the test data is always available for clustering and/or dimension reduction? A pure approach where each fold's data is removed before GMM does not correspond to the true deployment phase, as the test data is available. A similar dilemma probably exists in all semi-supervised learning approaches.</p> <p>Instead of mentioning my best private LB, I am proud to say that all my submissions scored &gt; 0.97524 in the private LB except the first submission !!</p>
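<p>A minimal sketch of the main idea above - concatenating GMM posterior probabilities for several <code>n_components</code> values and feeding them to QDA as additional features. This is my own illustration, not the linked kernel; the <code>reg_param</code> value is just a placeholder:</p> <pre><code>import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def gmm_probability_features(X_train, X_test, components=range(2, 10), seed=0):
    """Posterior probabilities from several GMMs, fitted on train+test of one magic split."""
    X_all = np.vstack([X_train, X_test])
    blocks = []
    for k in components:
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(X_all)
        blocks.append(gmm.predict_proba(X_all))
    F = np.hstack(blocks)
    return F[:len(X_train)], F[len(X_train):]

# F_train, F_test = gmm_probability_features(X_train, X_test)
# clf = QuadraticDiscriminantAnalysis(reg_param=0.1)
# clf.fit(np.hstack([X_train, F_train]), y_train)
# preds = clf.predict_proba(np.hstack([X_test, F_test]))[:, 1]
</code></pre>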
Instant Gratification
5th solution of Instant Gratification
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Chris! You are the BEST!</p> <p>I have been lucky for the last few weeks: I got 5th in this competition and 40th in the LANL competition. I entered this competition last Thursday and made not much progress until yesterday. I would like to give special thanks to CPMP - I wouldn't have come to the right idea without his help! And also thanks to Chris and Dieter! I used their code templates to try my own idea!</p> <p>My model is GMM. The trick is to have 2 gaussian components for each class. Here is my code:</p> <p><a href="https://www.kaggle.com/waylongo/gaussianmm">5th ranking solution</a></p> <p>Congrats! I'm just lucky in this competition. To be honest, I don't think I did better than the other top 50.</p> <p>Please look at version 4 in my Kernel, which is my 5th ranking solution. (I'm not quite familiar with Kernel sharing …)</p>
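<p>One way to read the "2 gaussian components for each class" trick is to fit a separate mixture per class and score test points by the class-1 likelihood; a minimal sketch of that reading (my own illustration under that assumption, not the code from the linked kernel):</p> <pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

def per_class_gmm_predict(X_train, y_train, X_test, components_per_class=2, seed=0):
    """Fit one GMM per class, then compare log-likelihoods on the test points."""
    gmm0 = GaussianMixture(n_components=components_per_class,
                           random_state=seed).fit(X_train[y_train == 0])
    gmm1 = GaussianMixture(n_components=components_per_class,
                           random_state=seed).fit(X_train[y_train == 1])
    ll0 = gmm0.score_samples(X_test)   # log-likelihood under the class-0 mixture
    ll1 = gmm1.score_samples(X_test)   # log-likelihood under the class-1 mixture
    return 1.0 / (1.0 + np.exp(ll0 - ll1))   # sigmoid of the log-likelihood ratio (equal priors)
</code></pre>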
Instant Gratification
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. Your comment gives me a lot of positive energy. Hope to see you winning medals soon. </p> <p>Thanks Taemyung. Congratulations to you too. </p> <p>Thank you and you're welcome.</p> <p>Thank you. I am glad it was helpful to you. Now that William has shared how they generated the dataset, it seems like we figured it out almost perfectly except for the random seed. </p> <p>Thanks.</p> <p>Thank you. I appreciate your comment.</p> <p>Thanks. Congratz to you too. Keep up the good job.</p> <p>I know. That was my target from the beginning, as you might already know. It was a looong shot but it was worth a shot. Congratulations on your medal btw.</p> <p>Thanks nosound. That's very kind of you. I am glad to see you win a solo gold too. Now the way is smooth towards becoming a competitions grandmaster for you. I enjoyed reading your solution.</p> <p>Thanks Robert. Congratulations to you too. </p> <p>Thanks Chris. You did an amazing job too.</p> <p>Thank you. I was very excited about this competition at all stages. I can't believe I won a solo gold yet :) still processing it in my mind </p> <p>This was a fun competition and I enjoyed it very much. Hope you all have had the same experience. </p> <p>Most of the things I did were revealed in public kernels. The only unrevealed key to this competition was to cluster each class before modeling it. I explained pictorially why it works at <a href="https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974">https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974</a>. Everything else was revealed by the many genius and talented people in this competition. They were not only smart but kind and generous enough to share their findings. I am not gonna name them one by one here because we all know who they are.</p> <p>I experimented with many different things for this competition, including: * <strong>different number of models to blend</strong>: I believed the more models I blend together, the more robust the answer I would get, so I tried to maximize this parameter and have the maximum number of models I could have in 1 run, which ended up being 42 models of the same type. * <strong>different number of folds per model</strong>: this item was another important factor because it had a trade-off in it. The more folds I used, the more robust the predictions. I achieved robustness by increasing the number of models I used though, so I used this parameter for regularization. I chose 2 folds in the end, so that although we have only 512 samples per <code>wheezy-copper-turtle-magic</code> value, I only used half of them to train a model and the other half to validate it. I figured if my model can reach AUC 0.9748+ on only 50% of the data and generalize well on both the training set and the public leaderboard, why not? Let's continue with this. 
* <strong>different clustering algorithms</strong>: because I knew the data was a multivariate gaussian distribution around the nodes of a hypercube (refer to the <code>make_classification</code> documentation on sklearn and its source code on github), I assumed the best way to cluster them would be by using <code>mixture.GaussianMixture</code>, and I ended up using it too. However, I thought a lot about different clustering algorithms, studied the document at <a href="https://scikit-learn.org/stable/modules/clustering.html">https://scikit-learn.org/stable/modules/clustering.html</a> carefully and experimented with other clustering algorithms I believed might be useful. * <strong>different number of clusters</strong>: I used the elbow rule and tsne and other algorithms to figure this one out, but I couldn't verify the exact number of clusters per class they used in data generation. Nonetheless, I ran an experiment on my own synthetic data and, by analyzing 1000 experiments, I figured that whether there are 2 or 3 clusters in the data itself, if I lump the data into 3 clusters I would get better validation and test AUCs, so I continued with 3 clusters per class. * <strong>different classifier algorithms</strong>: It was kinda obvious that GMM is the way to go (thanks to <a href="/christofhenkel">@christofhenkel</a>, he was very underrepresented in this competition despite his great contributions. You have my respect). Nonetheless, I tried two different algorithms as well. * <strong>different scalers</strong>: First of all, I figured I needed to add a scaler to ensure non-ill-conditioned matrices for covariance calculations. I tried two different scalers; results were not that much different, but I continued with <code>StandardScaler</code> again because the features were linear combinations of standard normal distributions. * <strong>different regularizations</strong>: other than using 2 folds for training, I used regularization parameters in both the clustering and classifier algorithms. In the end I decided to go with their default values since training on 50% of the data was enough regularization. * <strong>model by model hyperparameter tuning and regularization</strong>: I tried these options as well but they didn't work out well for me. * <strong>averaging ranks instead of probabilities</strong>: I tried it both ways; sometimes ranks worked out better, sometimes probabilities. 
</p> <p><strong>my full code</strong>: <a href="https://www.kaggle.com/mhviraf/mhviraf-s-best-submission-in-instant-gratification">https://www.kaggle.com/mhviraf/mhviraf-s-best-submission-in-instant-gratification</a></p> <p><strong>how data was generated</strong>: <a href="https://www.kaggle.com/mhviraf/synthetic-data-for-next-instant-gratification">https://www.kaggle.com/mhviraf/synthetic-data-for-next-instant-gratification</a></p> <p>Below is the TL;DR of <a href="https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974">https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974</a></p> <p><img src="https://www.kaggleusercontent.com/kf/16019409/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..jKX7oeRp9n8vXS4jU1p_FA.8jtkixQmRslkeBCAyKsGy2kXmOPDxZER9Z0Zud0AT8uSDhMEG_-5Qjov5X9LTzhWx6bdgcAVH56qkIvIzERPdaQM5no1505hT0bsBidIGd93TTUGVAUg1YKsLADVtepqaXPEBPUQjAO-zO6-GnvjAoQ6He0WDuOBxkvlHPpzrRY.zeCjOzfRQVEgKzKBrtkz_g/__results___files/__results___7_2.png" alt=""></p> <p><img src="https://www.kaggleusercontent.com/kf/16019409/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..jKX7oeRp9n8vXS4jU1p_FA.8jtkixQmRslkeBCAyKsGy2kXmOPDxZER9Z0Zud0AT8uSDhMEG_-5Qjov5X9LTzhWx6bdgcAVH56qkIvIzERPdaQM5no1505hT0bsBidIGd93TTUGVAUg1YKsLADVtepqaXPEBPUQjAO-zO6-GnvjAoQ6He0WDuOBxkvlHPpzrRY.zeCjOzfRQVEgKzKBrtkz_g/__results___files/__results___9_2.png" alt=""></p> <p>Happy kaggling. Hope to see you all soon in the next competitions.</p> <p>Thanks Vishy. I'm glad I could be helpful.</p> <p>Thanks. Congrats to you too. You managed to get an excellent position in a fairly limited time. Hope to team up next time.</p> <p>Oh so that's great! congratulations. </p> <p>Thanks and you're welcome.</p> <p>Thanks Raju.</p>
Instant Gratification
A way to get silver medal without magic
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to every team who got through the competition. I have found that most top team solutions are related to the Gaussian distribution and data generation, so I'm going to talk about some things that have not been mentioned. In this competition, stacking and pseudo labeling are enough to win a silver medal, provided they work together.</p> <p>Before I found the wonderful use of GMM, I used QDA, NuSVC and other models to stack and try to improve my scores. At that time, pseudo labeling brought us to 0.969 but 0.970 was still a hard barrier to break through.</p> <p>Maybe a lot of people, like me, have found that <strong>stacking doesn't work with pseudo labeling</strong>. After a few days of observation and reflection, I found that <strong>stacking could improve CV, but not LB</strong>. Then I came to the conclusion that some modifications to the public kernels about pseudo labeling were needed to make stacking work. </p> <p>The key to the problem is that the original code uses the same test points for training and prediction. If we use cross validation to predict the test points that were used to make pseudo labels, stacking will start to work.</p> <p>More detailed code <a href="https://www.kaggle.com/wanliyu/how-to-make-stacking-work-with-pseudo-labeling">here</a>. The kernel uses QDA, GMM (not magic), NuSVC and Label Spreading to stack and gets 0.97248 in private score. That's enough to win a silver medal without magic. </p>
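<p>A rough sketch of the fix described above - each test chunk gets its meta-feature from a model that was trained without that chunk's own pseudo labels. This is my own illustration of the idea (QDA as the base model, and the 0.99 confidence threshold and <code>reg_param</code> are placeholder assumptions), not the code from the linked kernel:</p> <pre><code>import numpy as np
from sklearn.model_selection import KFold
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def pseudo_label_meta_feature(X_train, y_train, X_test, pseudo_proba,
                              threshold=0.99, n_splits=5):
    confident = (pseudo_proba &gt; threshold) | (pseudo_proba &lt; 1 - threshold)
    pseudo_y = (pseudo_proba &gt; 0.5).astype(int)

    meta_test = np.zeros(len(X_test))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for rest_idx, chunk_idx in kf.split(X_test):
        keep = rest_idx[confident[rest_idx]]          # confident pseudo labels outside this chunk
        X_aug = np.vstack([X_train, X_test[keep]])
        y_aug = np.concatenate([y_train, pseudo_y[keep]])
        clf = QuadraticDiscriminantAnalysis(reg_param=0.5).fit(X_aug, y_aug)
        meta_test[chunk_idx] = clf.predict_proba(X_test[chunk_idx])[:, 1]
    return meta_test
</code></pre> <p>Built this way, the test-side meta-feature behaves like an out-of-fold prediction, so the second-level model no longer sees a leaked, overly optimistic signal and CV improvements carry over to the LB.</p>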
Instant Gratification
GMM solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, Chris)</p> <p>Agree, that is also a good option.</p> <p>I entered the Instant Gratification competition when it was 16 days to go, so there was already a lot of important information available in Kernels and Discussions. Shortly, in the Instant-Gratification competition we deal with a dataset of the following structure:</p> <ul> <li><p><a href="https://www.kaggle.com/c/instant-gratification/discussion/92930#latest-541375">Data seemed to be generated by groups</a> corresponding to the column <strong>wheezy-copper-turtle-magic</strong>, which means that we have 512 independent datasets of the size approximately 512 by 255 for training our models.</p></li> <li><p>Out of 255 features <a href="https://www.kaggle.com/fchmiel/low-variance-features-useless/">only features with high variance supposed to be important</a>. The number of important features varies from 33 to 47.</p></li> <li><p><a href="https://www.kaggle.com/speedwagon/quadratic-discriminant-analysis">QDA proved to be a good modeling approach</a> for this case and can give AUC of 0.966. </p></li> <li><p>This high quality can be explained by the fact that the <a href="https://www.kaggle.com/mhviraf/synthetic-data-for-next-instant-gratification">data probably was generated with make_classification function</a>.</p></li> </ul> <p>Knowing that let's look closer at what make_classification function has under the hood. This function has the following parameters:</p> <ul> <li>n_samples: The number of samples.</li> <li>n_features: The total number of features.</li> <li>n_informative: The number of informative features.</li> <li>n_redundant: The number of redundant features, linear combinations of informative features.</li> <li>n_repeated: The number of duplicated features.</li> <li>n_classes: The number of classes.</li> <li>n_clusters_per_class: The number of clusters per class.</li> <li>weights: The proportions of samples assigned to each class.</li> <li>flip_y: The fraction of samples whose class are randomly exchanged.</li> <li>class_sep: Larger values spread out the clusters/classes and make the classification task easier.</li> <li>hypercube: Parameter corresponding to the placement of the cluster centroids.</li> <li>shift: Shift of the feature values.</li> <li>scale: Scaling of the features by the specified value.</li> <li>shuffle: Shuffling the samples and the features.</li> <li>random_state: Random state.</li> </ul> <p>I invite you to follow the sequence of my thoughts about it.</p> <hr> <h3>Step 1. Size and features (first 6 parameters of make_classification function).</h3> <p>From the data structure (512 groups and 255 variables) and size (262144 = 512 * 512), we can assume that <strong>n_features=255</strong> and <strong>n_samples=1024</strong> were taken to generate both training and test data sets (both private and public).</p> <p>Now, let's talk about features. First of all, repeated features are exact copies of important features, and because we don't see such columns in competition data we'll set <strong>n_repeated=0</strong>. 
Let's generate a data set with the make_classification function and the following parameters:</p> <pre><code>fake_data = make_classification(
    n_samples=1024,
    n_features=255,
    n_informative=30,
    n_redundant=30,
    n_repeated=0,
    n_classes=2,
    shuffle=False)
</code></pre> <p>By setting shuffle=False, I force the first 30 columns to be informative, the next 30 to be redundant and the others to be just noise. By doing it 1000 times, let's look at the distribution of the standard deviation of the informative, redundant and random features.</p> <p><a href="https://ibb.co/HG8WWDR"></a></p> <p>We see a clear difference in standard deviation for informative, redundant and random features; that's why selecting important features with a 1.5 threshold works so well. Moreover, there are no features in the competition data that have std bigger than 5, which leads us to an assumption that <strong>n_redundant=0</strong>.</p> <p>The number of important features ranges from 33 to 47, so <strong>n_informative</strong> is in <strong>{33,...,47}</strong>. The number of classes is obviously <strong>n_classes=2</strong>. </p> <hr> <h3>Step 2. Shift, Scale, and randomness (last 4 parameters of make_classification function).</h3> <p>Because the mean and standard deviation for random columns are 0 and 1 respectively, we can assume that shift and scale were set to the defaults <strong>shift=0</strong>, <strong>scale=1</strong>. On top of that, the standard deviation for important columns also looks the same for the competition data and for the data that we've just generated. So it's a pretty promising assumption and we can go further. Parameters <strong>shuffle</strong> and <strong>random_state</strong> are not so important, because they don't change the nature of the data set.</p> <hr> <h3>Step 3. Most interesting parameters (why QDA is not so perfect?).</h3> <p>Parameters <strong>n_clusters_per_class, weights, class_sep, hypercube</strong> are the ones that we are not very sure about, especially <strong>n_clusters_per_class</strong>. Weights look to be the same because the target seemed to be balanced. What probably makes it impossible to get a 100% accurate model is that <strong>flip_y&gt;0</strong>, and we can not predict this random factor in any case. However, what we can try is to build a model that can predict perfectly when <strong>flip_y=0</strong> and leave the rest to luck.</p> <p>QDA has been shown to be a very good approach, but let me show you a case where it does not work so well:</p> <p><a href="https://ibb.co/bKhGFTq"></a></p> <p>The data set was generated with make_classification:</p> <pre><code>make_classification(
    n_samples=1000,
    n_features=2,
    n_informative=2,
    n_redundant=0,
    n_classes=2,
    n_clusters_per_class=2,
    flip_y=0.0,
    class_sep=5,
    random_state=7,
    shuffle=False
)
</code></pre> <p>Now, let's look at how the QDA algorithm handles it:</p> <p><a href="https://ibb.co/pWRTdtD"></a></p> <p>Pretty good; however, we see that it doesn't capture this structure of four clusters in the data. From the make_classification documentation one can find that these clusters are Gaussian components, so the best way to find them is to apply a Gaussian Mixture model. </p> <p><em>Note: a Gaussian Mixture model will only give you clusters; you need to bind these clusters to one of the classes 0 or 1. You can do that by looking at the proportion of a certain class in each cluster and assigning the cluster to the dominant class.</em></p> <p>Let's look at how the GMM algorithm performs on this data:</p> <p><a href="https://ibb.co/Q9sPPxm"></a></p> <p>Nearly perfect! Let's move to step 4.</p> <hr> <h3>Step 4. 
Effect of flipping the target</h3> <p>Knowing that we can build nearly the best classifier, what is the effect of flipping the target? Suppose that your classes are perfectly separable and you can assign probabilities that will lead to a perfect AUC. Let's see an example of 10000 points, 5000 in each class:</p> <p>Classes = [0, 0, ..., 0, 1, ..., 1, 1]</p> <p>Predictions = [0.0000, 0.0001, ..., 0.9998, 0.9999]</p> <p>Lets flip the target value for 2.5% (250) of the points (1) in the middle, (2) on the sides, (3) randomly, and look at the AUC:</p> <p><a href="https://ibb.co/M1cNwkw"></a></p> <p>We see that the impact of flips can dramatically change the result. However, let's face two facts:</p> <ol> <li>We can not predict this randomness.</li> <li>We can assume that the flips are evenly spread across our predictions.</li> </ol> <p>Let's also look at the distribution of AUC for the randomly flipped target.</p> <p><a href="https://ibb.co/PjBttyk"></a></p> <p>It means that results on unseen data can deviate by more than 0.005 in AUC even for the perfect model. So our strategy will be not to find the parameters to raise on the LB of the competition but to find the parameters that can perfectly predict the target for most of the data set.</p> <hr> <h3>Step 5. Experiments with synthetic data</h3> <p>How we can build a model that perfectly predicts data and not overfit to it? Because we have a strong hypothesis on how data was created we can tune model parameters such that it will predict perfectly for most of the synthetically generated data sets. Not showing competition data to it will save us from overfitting.</p> <p>We will be generating a data set of the same structure as the competition data:</p> <ul> <li><strong>n_samples=768</strong>, 512 for training and 256 for test</li> <li><strong>n_features=255</strong></li> <li><strong>n_important=33</strong></li> <li><strong>n_clusters_per_class=3</strong></li> <li>by variating <strong>weights</strong>, we will change the propotion of the dominant class from 0.5 to 0.6</li> <li><strong>flip=0</strong> to experiment if GM model can be completely correct</li> <li><strong>class_sep</strong> will be a random number between 0.9 and 1</li> <li><strong>hypercube</strong> will be chosen randomly between True and False</li> </ul> <p>Next thing to do is to generate hundreds of such data sets and optimize parameters of the GMM.</p> <p>For the first attempt, we'll set the parameter n_components in the GMM to 6, because we know that there will be 3 clusters per each of the 2 classes, and do the modeling with other parameters set to default. Let's look at the minimum, maximum and mean score that we can get for 100 different data sets:</p> <p><code> Min score = 0.44783168741949336 Mean score = 0.8159223370950822 Max score = 1.0 </code></p> <p>Wow! Sometimes we even reached the perfect classifier. Let's try to optimize some of the GMM parameters. It has parameters n_init and init_params which means that GMM model will be run several times starting from random initializations and then it selects the best iteration. Let's try to set n_init to 20 and init_params to random.</p> <p><code> Min score = 1.0 Mean score = 1.0 Max score = 1.0 </code></p> <p>Now it's perfect! Let's try it for more than 33 features (for example, let's take 47):</p> <p><code> Min score = 0.562083024462565 Mean score = 0.7917138432059158 Max score = 1.0 </code></p> <p>For 47 features it worked not so well. The secret sauce is the regularization of the covariance matrix. 
Let's look at the model quality for different values of reg_covar parameter of GM model:</p> <p><code> reg_covar = 1: Min score = 0.6936146721462308, Mean score = 0.8782139741161032 reg_covar = 5: Min score = 0.991250080422055, Mean score = 0.9998679217326921 reg_covar = 10: Min score = 0.9987766087594813, Mean score = 0.999945378120418 </code></p> <p>To tell you the short story, by a simple grid search for the best reg_covar parameter one can find that more features require larger reg_covar for better performance. By doing so you can find that the good parameters are:</p> <ul> <li>33 features: 2.0,</li> <li>34 features: 2.5,</li> <li>35 features: 3.5,</li> <li>36 features: 4.0,</li> <li>37 features: 4.5,</li> <li>38 features: 5.0,</li> <li>39 features: 5.5,</li> <li>40 features: 6.0,</li> <li>41 features: 6.5,</li> <li>42 features: 7.0,</li> <li>43 features: 7.5,</li> <li>44 features: 8.0,</li> <li>45 features: 9,</li> <li>46 features: 9.5,</li> <li>47 features: 10.</li> </ul> <p>We still have one thing not clear.</p> <h3>Important note: how do we know the number of clusters in advance?</h3> <p>We were building models in the assumption that there are 3 clusters per class, but how can we know it? Well, the answer is pretty simple, the right choice of the n_components will give better results:</p> <p>#Clusters per class = 4, #Components of GMM = 4: AUC = 0.7170955882352941</p> <p>#Clusters per class = 4, #Components of GMM = 6: AUC = 0.8263480392156862</p> <p>#Clusters per class = 4, #Components of GMM = 8: AUC = 0.9587009803921569</p> <p>#Clusters per class = 3, #Components of GMM = 4: AUC = 0.7400829259236339</p> <p>#Clusters per class = 3, #Components of GMM = 6: AUC = 1.0</p> <p>#Clusters per class = 3, #Components of GMM = 8: AUC = 0.9844049755554181</p> <p>#Clusters per class = 2, #Components of GMM = 4: AUC = 1.0</p> <p>#Clusters per class = 2, #Components of GMM = 6: AUC = 1.0</p> <p>#Clusters per class = 2, #Components of GMM = 8: AUC = 0.9924635532493205</p> <p>For me it seemed like for most of the groups the best option was taking 3 clusters per class, sometimes 2 was slightly better (but it might be the effect of the flipped target). I assumed that this parameter was set to 3 for all groups.</p> <hr> <h3>Conclusion</h3> <p>Parameters found in step 5 will give you an AUC of 0.99 minimum (for the true target of course)! So we will use these parameters to build a model for competition data. </p> <p>That approach gave me a 52nd place. Thanks for reading!</p>
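<p>To make the recipe above concrete, here is a minimal sketch (my reconstruction, not the author's exact code) of the full pipeline on one synthetic sub-dataset: select the informative columns with the 1.5 standard-deviation threshold, fit an unsupervised 6-component GaussianMixture on train+test with the grid-searched reg_covar (lookup table abbreviated here), and bind each cluster to a class by the dominant class of its training rows.</p>
<pre><code>import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

# One synthetic sub-dataset with the structure assumed above.
X, y = make_classification(n_samples=768, n_features=255, n_informative=33,
                           n_redundant=0, n_classes=2, n_clusters_per_class=3,
                           flip_y=0.0, class_sep=1.0, random_state=0)
X_tr, y_tr, X_te, y_te = X[:512], y[:512], X[512:], y[512:]

# Keep "important" columns: std over train+test above 1.5 (the informative features).
cols = np.vstack([X_tr, X_te]).std(axis=0) > 1.5
Xtr, Xte = X_tr[:, cols], X_te[:, cols]

# reg_covar taken from the grid-searched table above (abbreviated here).
reg_covar = {33: 2.0, 40: 6.0, 47: 10.0}.get(int(cols.sum()), 4.0)

# Unsupervised 6-component GMM (3 clusters per class) on train + test together.
gmm = GaussianMixture(n_components=6, n_init=20, init_params='random',
                      reg_covar=reg_covar, random_state=0)
gmm.fit(np.vstack([Xtr, Xte]))

# Bind each cluster to a class via the class proportions of its training rows.
comp = gmm.predict(Xtr)
comp_p1 = np.array([y_tr[comp == k].mean() if np.any(comp == k) else 0.5
                    for k in range(6)])

# P(target = 1) = responsibilities weighted by each cluster's class-1 fraction.
proba = gmm.predict_proba(Xte) @ comp_p1
print('AUC:', roc_auc_score(y_te, proba))
</code></pre>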
Instant Gratification
How I dropped from first place to 63 place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Totally agree</p> <p>consider coin tossing game, each toss is random and independent with each other, but if you toss many times, the ratio is close to 0.5. Maybe the problem here is that number of flipping is too small, hence mathematically the expected value is not stable..</p> <p>Thanks Chris, yes the public result doesn't works for private, it all to all still a game regarding random :)</p> <p>The kernel is here: <a href="https://www.kaggle.com/kongliangyu/public-first-place-and-private-60-place">https://www.kaggle.com/kongliangyu/public-first-place-and-private-60-place</a> fun time anyway :)</p> <p>Hi, Chris, I feel honored that you digging so deep into even other people's solutions. I learnt a lot form your way of analyzing and experimenting! </p> <p>Regarding my solution, here is my opinion, since the 512*256 numbers are not getting totally unique random scores, but there is only 512(sub-groups) * 2 (for 1 label and 0 label) possible scores, hence if by luck the predicted fllipped number in one test set group is roughly correct, then 256 numbers in that group will all do positive contribution to AUC(based on the tricks for AUC calculation I mentioned in the kernel). If by chance there is several dozens more test groups whose flip number get correctly predicted, the AUC boosts quickly. </p> <p>But I must admit that I don't have strong math skills, above is just my impression :)</p> <p>Thanks Chris, I now understand how lucky I am for I reached a temp high score based on a wrong math :) Also thank you a lot CPMP and nosound for pointing out my mistakes. A great lesson I learnt. </p> <p>oh no, what a big mistake I made..</p> <p>Sorry Chris that I also misleading you, the total tosses is not 51, it is random. </p> <p>And CPMP, please ignore my last comment, you're <strong>Correct</strong> . </p> <p>Thanks Chris for the clarification. We posted the same contents almost at same minute, what a low possibility :) </p> <p>Thanks a lot to let us know Gambler fallacy, I learned a lot there. </p> <p>But back to this situation, I don't think the flips are independent between train and test. Because the number of total tosses is <strong>fixed</strong> - about 51. </p> <p>&gt; Flips are independent from each other, the probability that a ball is flipped only depends on the ball itself. It has 5% of chances to be selected for flipping, and, if selected, 50% of chances to be flipped, resulting in a 2.5 % chance to be flipped overall. </p> <p>In above formulation, it has the possibility(although very low) that the number of total flips could exceed 51, but according to make_classification's source code, that's not true.</p> <p>so if we saw more flips in train set, the expected toss number is more in train set, hence the remaining toss number is less in test set, resulting less flips in test. What do you think?</p> <p>Thanks!</p> <blockquote> <p>Kongliang has not won a competition medal yet and he was 1st on the public leaderboard. He must feel very disappointed that he dropped to 63rd. Your first comment to him is how he is wrong. 
I am being considerate and encouraging.</p> </blockquote> <p>From your sentences and words I received lots of encouraging and kindness, You're a so so so nice and great person! I respect you, fully.</p> <p>And <a href="/cpmpml">@cpmpml</a> , your spirit of insisting the truth impressed me. Without you, I may still not able to find my mistake. fault is fault, thank you again for pointing that out, I am happy that you - a grand master, taught me those :)</p>
Instant Gratification
How to Score LB 0.975
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you YLeF</p> <p>Good question. Before GMM (and QDA), I was using k-means (and QDA) and scored 0.97200. I'd have to revisit my old code to investigate your question.</p> <p>Thanks. The first two images were not made with code. They were made with with <a href="https://www.coreldraw.com/en/">Corel Draw</a> (which is similar to <a href="https://www.adobe.com/products/illustrator.html">Adobe Illustrator</a>). These are graphic arts software.</p> <p>Yes. That is what I mean. When using <code>clusters=2</code>, the first cluster contains two clusters merged and the second cluster adds the remaining third cluster. (I edited my post to be more clear).</p> <p>Thanks. I'm sorry to hear about your bereavement.</p> <p>If your big dataset has lots of rows, say 200,000 then you can compute the mean for each column using only <code>target=1</code>. Then look at each mean. Since sample size is 100,000 the CLT tells us that the mean will be precise to plus or minus <code>3*4/sqrt(100000) = 0.03</code>.</p> <p>If there is only one cluster per class than all the means will be either 1 or -1 within 0.03. If there are two clusters then at least one mean will be 0 within 0.03 and the others will be 1 or -1. If there are three clusters than at least one will be 1/3 within 0.03. If four clusters then at least one will be 1/2 within 0.03. If five, one will be 3/5 within 0.03. Etc, etc.</p> <p>(To be more precise when using <code>make_classification</code>, find the biggest mean and that will be <code>class_sep</code>. Then search for <code>class_sep/3</code> instead of 1/3).</p> <h1>LB 0.970</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F50e532f1b9ca5d0ddd783e04e7422f20%2Fimg1.jpg?generation=1602508821789459&amp;alt=media" alt=""></p> <h1>LB 0.975</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fc2be5521e7c585f7776c21272d0ec925%2Fimg2.jpg?generation=1602508834244506&amp;alt=media" alt=""></p> <h1>Summary</h1> <p>It was shown <a href="https://www.kaggle.com/cdeotte/pseudo-labeling-qda-0-969" target="_blank">here</a> that QDA + Pseudo Labeling will score LB 0.970. The competition data appears to be created with Sklearn's <code>make_classification</code>. EDA shows that <code>n_clusters_per_class=3</code>. To score LB 0.975, we need to use QDA with 6 ellipses instead of 2 ellipses. Code is posted <a href="https://www.kaggle.com/cdeotte/3-clusters-per-class-0-975/" target="_blank">here</a>. (with more advanced techniques in Appendix 2).</p> <p>We find the 6 ellipses with <code>sklearn.mixture.GaussianMixture</code>. First find the 3 ellipses of <code>target=0</code> then the 3 ellipses for <code>target=1</code>. Then label each row of the training data with its ellipse number 0, 1, 2, 3, 4, 5. Finally call <code>QDA.fit(X,labels)</code> and <code>QDA.predict_proba(test)</code>. 
Then <code>test_pred = Pr(in ellipse 3) + Pr(in ellipse 4) + Pr(in ellipse 5)</code> where <code>Pr()</code> denotes probability.</p> <h1>EDA to determine <code>n_clusters_per_class</code></h1> <p>Clusters are located at hypercube corners. Therefore clusters have centers whose coordinates are 1's and -1's. To determine how many clusters per class, plot a histogram of all the variables' means grouped by target. If there is only 1 cluster per class we will see bumps at -1, and 1. If two clusters per class, we will see bumps at -1, 0, and 1. If three clusters per class, we will see bumps at -1, -1/3, 1/3, and 1.</p> <p>Within <code>magic=0</code> sub dataset, there are approx 40 features and 2 targets. Calculate the means of these 80 groups of numbers. Do this for every sub dataset. In total, we have approx 41,000 means. Now plot their histogram.</p> <p>More explanation why this works is provided in Appendix 1 <a href="https://www.kaggle.com/cdeotte/3-clusters-per-class-0-975/" target="_blank">here</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F37c5c7d4faa1689bf77c01efa433ab66%2Fimg3.png?generation=1602508847384837&amp;alt=media" alt=""></p> <p>I get to LB 0.970 with QDA and Pseudo Labeling using this kernel <a href="https://www.kaggle.com/cdeotte/pseudo-labeling-qda-0-969">here</a></p> <p>Thanks. Congrats to you !!</p> <p>Thanks Carlos</p> <p>Yes. The correct number is 3 shown by EDA <a href="https://www.kaggle.com/cdeotte/3-clusters-per-class-0-975">here</a>. But if you make a perfect classifier you will score private LB 0.97560. To get a gold medal, you must add randomness so that your classifier is better than perfect. Using <code>clusters=2</code> merges two (of the 3) clusters as one cluster. It performs nearly as well as 3, but most importantly it adds randomness and "wiggles" the ellipses. Thus allowing you to achieve better than perfect by randomly identifying some flipped labels.</p> <p>Yes. Congrats on placing 2nd.</p> <p>Thank you</p> <p>QDA will always work, but if the distributions are not Gaussian then QDA won't be the best classifier. This is shown in the kernel <a href="https://www.kaggle.com/wanliyu/how-to-make-stacking-work-with-pseudo-labeling">here</a>. If we assume there is 1 cluster per class, then ensembling QDA with other methods like SVM increases the LB score over using QDA alone. When we assume 3 clusters per class, QDA performs best (because each cluster is a Gaussian) and ensembling with other types of models does not increase LB.</p>
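<p>A minimal sketch of the three steps described above, run on a synthetic stand-in for one magic sub-dataset; the GMM/QDA hyper-parameters here are illustrative assumptions, not the exact values used in the kernel:</p>
<pre><code>import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for one magic sub-dataset (~40 useful features, 3 clusters/class).
X, y = make_classification(n_samples=1024, n_features=40, n_informative=40,
                           n_redundant=0, n_clusters_per_class=3, flip_y=0.05,
                           random_state=42)
X_tr, y_tr, X_te, y_te = X[:512], y[:512], X[512:], y[512:]

# Step 1: find the 3 Gaussian "ellipses" of each class separately,
# giving every training row a label in 0..5.
labels = np.zeros(len(y_tr), dtype=int)
for cls in (0, 1):
    gm = GaussianMixture(n_components=3, n_init=10, random_state=0)
    idx = y_tr == cls
    labels[idx] = gm.fit_predict(X_tr[idx]) + 3 * cls   # ellipses 0-2 and 3-5

# Step 2: QDA on the 6 ellipse labels instead of the 2 original classes.
qda = QuadraticDiscriminantAnalysis(reg_param=0.1)
qda.fit(X_tr, labels)

# Step 3: P(target=1) = Pr(ellipse 3) + Pr(ellipse 4) + Pr(ellipse 5).
proba = qda.predict_proba(X_te)[:, 3:].sum(axis=1)
print('AUC:', roc_auc_score(y_te, proba))
</code></pre>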
Instant Gratification
Hoxosh solution (private 37) with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>See the notebook I shared for the implmentation <a href="https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412">https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412</a></p> <p>The goal is just to take into account that a proporiton of targets is flipped.</p> <p>I never thought that trying to identify flipped target was going to be useful. </p> <p>I shared one of our kernels so you can see how the solution works <a href="https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412">https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412</a></p> <p>I reported this one as vote manipulation ;) It happens a lot on my posts and comments. I think they discount votes from people who vote too often for a given person.</p> <p><a href="/cdeotte">@cdeotte</a>, I totally missed that QDA could be multi class! What you did is way closer to what we did than I thought.</p> <p>Thank you Chris. Congrats on your solo gold (probably <a href="https://www.kaggle.com/c/instant-gratification/discussion/96681#558374">the only real solo gold</a> here ;) )</p> <p>How do you run QDA with 6 clusters? </p> <p>Here is a sketch of our solution. I didn't plan to share given there are better ones, and, more importantly, because I thought our solution would be similar to others. In fact there are two differences, most important one being due to my team mate <a href="/cashfeg">@cashfeg</a> hence the title. Given he is very modest he didn't think it was worth sharing. You can also have a look at how he found 3 clusters per class <a href="https://www.kaggle.com/c/instant-gratification/discussion/96632#latest-558215">here</a>.</p> <p>We started from the good QDA models disclosed by Vlad. QDA is great, but it was hard to get above 0.9736 on public LB with it if I remember well. The reason was clear to me, it was because QDA only used half of the data, the train data. </p> <p>A model close to QDA is Gaussian Mixture, it is the unsupervised equivalent of QDA. Issue is GM with 2 components hardly gets you better than 0.80 CV and LB, which is probably why people did not use it till the last week of competition. Then I had the idea to use QDA output to init GM. However, I didn't thought of using means and precisions from QDA to init GM (I did later), or use Gaussian Lasso as Dieter proposed later <a href="https://www.kaggle.com/christofhenkel/graphicallasso-gaussianmixture">here</a>. I rather subclassed sklearn GaussianMixture with an additional way to initialize it with cluster representatives. Feeding QDA prediction to my GM was able to reach 0.9703 on public LB.</p> <p>This is the point when <a href="/cashfeg">@cashfeg</a> moved us near the top. He found that there were 3 clusters per class. He also had this simple, yet clever idea: he ran GM with 3 clusters on train with target 0, and a separate GM with 3 clusters on train with target 1. 
Then the 6 clusters were input to my GM model and got 0.973x on public LB.</p> <p>The rest is great team work with everybody contributing. W,e explored variants, tuned GM parameters, and tried to exploit more of the data generation properties. We didn't find a good way to align centroids on hypercube vertices, but we did filter seeding clusters by keeping runs where the seeding clusters had similar size (another <a href="/cashfeg">@cashfeg</a> idea).</p> <p>What didn't work? </p> <p>Using a likelihood function that takes into account target flip rate (yet another idea of <a href="/cashfeg">@cashfeg</a> ). We are still wondering why it didn't help. </p> <p>Kernels were a pain. The submission process where you have to run kernel twice, in commit, then in submit. There were outages in the last few days, and also lot of time lost because of lack of warning or proper error messages. I think KO competitions are a good idea, but I will rely on notebooks I run on another machine if I enter another KO competition. I'll then copy notebooks to kernels for submissions. I do think kernels will keep improving and that my concerns will be memories, but until then I'll rely on Anaconda ;)</p> <p>Main takeaway for me is that I was very happy to team with such talented team mates.</p> <p>Update: I made public one of our kernels. It is not the very best nor one that we selected, but it is close to best, and simpler, with private LB above 0.975. <a href="https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412">https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412</a></p> <blockquote> <p>I think most people did the same thing of running GMs on split target </p> </blockquote> <p>This is not mentioned in any of the writeups I read.</p> <p>That others had the same idea is not surprising. But it has not been describe din writups nor in the kernels I looked as, eg 1st and 4th solutions.</p> <p>I guess you had the idea and don't get why I describe it.</p> <p>They hinted at it in a discussion about vote manipulation. Don’t remember where precisely.</p>
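<p>For illustration, here is a compact sketch of the "per-class GMMs seed a 6-component GMM on train+test" idea using only the stock means_init/weights_init hooks of sklearn's GaussianMixture. The team's actual code subclassed GaussianMixture to accept cluster representatives, so treat this as an approximation rather than their implementation:</p>
<pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

def fit_seeded_gmm(X_train, y_train, X_test, reg_covar=1.0, seed=0):
    """Fit a 3-component GMM per class on train, then use those means/weights to
    initialise a 6-component GMM refined on train + test together."""
    means, weights = [], []
    for cls in (0, 1):
        gm = GaussianMixture(n_components=3, n_init=10, random_state=seed)
        gm.fit(X_train[y_train == cls])
        means.append(gm.means_)
        weights.append(gm.weights_ * 0.5)          # each class is roughly half the rows

    gm6 = GaussianMixture(n_components=6, reg_covar=reg_covar, random_state=seed,
                          means_init=np.vstack(means),
                          weights_init=np.concatenate(weights))
    gm6.fit(np.vstack([X_train, X_test]))          # unsupervised refinement on all rows

    # P(target = 1) = total responsibility of the 3 components seeded from class 1.
    return gm6.predict_proba(X_test)[:, 3:].sum(axis=1)
</code></pre>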
Instant Gratification
IG's magic 22nd Public / 33rd Private [LB 0.9756]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you !</p> <p>Thank you <a href="/cdeotte">@cdeotte</a> ! and good luck in the 3x grandmaster road </p> <p>Thank you Mouna !</p> <p>Thank you ! Stabilizing abs(cv-lb) was they key. Unfortunately, we did not have enough time to try more models since one experience took between 3 and 4 hours to return the LB score. </p> <p>In our case, the key to break the 0.974 score was clustering each label to 4 clusters. Actually, we ran many experiments like 3 clusters for label 1 VS 3 clusters for labels 0, 3VS2, 2VS2 and 4VS4. Our best CV was with models which use 3:3 but LB score was always lower and showed that 4:4 gave better result. <strong>So, we continued working on 3:3 and especially on 4:4 models.</strong> </p> <p>After breaking 0.745 on LB. CV and LB scores were no longer correlated and our best submission which was 0.97488 has 0.968 CV. So we decided to stop EDA and modeling and we started focusing on finding a way that would stabilize the gap between CV and LB... <strong>The key for that was ensembling</strong> : We used many models for ensembling but the best results were based on these combinations: model1: GMM 4vs4 model2: model1 + pseudolabeling with thresholds 0.9 for 1 and 0.1 for 0 model3: GMM 3vs3 model4: model3 + pseudolabeling with thresholds 0.9 for 1 and 0.1 for 0</p> <h3>What did not work for us :</h3> <ul> <li>Outliers removal</li> <li>Inverting outliers 1 to 0 and 0 to 1 using isolation forest</li> <li>Neural nets with embedding layers</li> <li>Stacking with linear metamodels</li> <li>PCA features </li> <li>Ordering features according to their importance to finally concatenate all subdatasets together.</li> </ul> <p>Finally I want to thanks <a href="/abhishek">@abhishek</a> for being such an amazing teammate and for all his great ideas. PS: This magic was discovered in the last 3 days and we were using 1:3 models during the first 2 days. Our all top models were submitted 1 hours before the competitions would end. So try your best and never give up. <a href="https://www.kaggle.com/rinnqd/kmeans-4-clusters-per-label-gmm-33th-lb-9756">You will find here one of our selected kernels</a></p>
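<p>The pseudo-labeling step mentioned above (add confidently predicted test rows back into train with thresholds 0.9 / 0.1, then refit) can be sketched generically as below; <code>model</code> stands for any of the GMM-based classifiers and is an assumption, not the team's exact wrapper:</p>
<pre><code>import numpy as np

def add_pseudo_labels(model, X_train, y_train, X_test, hi=0.9, lo=0.1):
    """Refit a model after adding confidently predicted test rows to the train set.
    `model` is any classifier with sklearn-style fit / predict_proba."""
    model.fit(X_train, y_train)
    p = model.predict_proba(X_test)[:, 1]

    keep = (p >= hi) | (p <= lo)                   # only very confident test rows
    X_aug = np.vstack([X_train, X_test[keep]])
    y_aug = np.concatenate([y_train, (p[keep] >= hi).astype(int)])

    model.fit(X_aug, y_aug)                        # second pass with pseudo-labels
    return model.predict_proba(X_test)[:, 1]
</code></pre>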
Instant Gratification
Instant solution summary, 3rd place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In the spirit of the competition I prepared this summary before the competition ended, so it is an instant (well, almost) solution summary. I hope my final placing will not be too embarrassing.</p> <p>My final model <a href="https://www.kaggle.com/zaharch/instant-success-gmm-3th-place">kernel is here</a>. It is a Gaussian Mixture model with several tweaks to accommodate for the make_classification() special structure. In general, QMM here works completely unsupervised on both train and test data, and the train set labels are used to assign each cluster to a 0/1 class.</p> <p>I started from <a href="https://www.kaggle.com/christofhenkel/graphicallasso-gaussianmixture">this awesome kernel</a> by <a href="https://www.kaggle.com/christofhenkel">@christofhenkel</a>. </p> <ol> <li>First I modified M step in GMM to round means estimations to -1 or 1 values (just after calculating them), to align it with hypercube vertices of make_classification and assuming class-sep equals default value of 1. This change alone with 3 clusters per class gave me 0.974+.</li> </ol> <p>The following 3 additional changes each gave me small improvements</p> <ol> <li><p>Running GMM with 2, 3 and 4 clusters per class and choosing the "best performing" one. This is a weak point of my model, I tried many approaches to make this identification better but it didn't work too well. At the end, I am not even completely sure that the number of clusters per class is a random number. My best model gives [46, 367, 99] distribution for 2 to 4 clusters per class respectively. </p></li> <li><p>Also on M step of GMM I re-weight probabilities of sample points in clusters to make total probabilities per cluster equal. This is motivated by the fact that make_classification() creates clusters with equal number of points.</p></li> <li><p>Increased covariance regularization in QMM from reg_covar=0.001 to 0.1</p></li> </ol> <p>Now, I am really looking forward to see what other people have done. 80% of the work was published online, but this last 20% is still a meaningful challenge. Finally, thanks to Kaggle for giving us this beautiful puzzle :)</p> <p>P.S. I jumped from 43 public to 3th private. Well, I didn't expect anything like that to happen. I wrote the post above before the results were known, and I publish it as it is. </p> <p>Thanks ;), you know I put a lot of time into assuming that your team name "Aiming high! 2,12 -- пройденный этап" means that clusters per class are distributed from 2 to 12! It was a dead end. Can you please tell me now, what did it actually mean?</p> <p>Thank you! It is more luck than brain here, but I hope one day I will get a result which is both gold and smart.</p>
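<p>The author modified the M-step inside a custom GMM to round the means to -1/1. As a rough, hedged approximation (not the author's code), one can alternate short warm-started EM bursts with a projection of <code>means_</code> onto the hypercube vertices, assuming class_sep of 1:</p>
<pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

def fit_vertex_snapped_gmm(X, n_components=6, rounds=10, seed=0):
    """Alternate short, warm-started EM bursts with snapping the component means
    to the +/-1 hypercube vertices, loosely mimicking the modified M-step
    described above."""
    gm = GaussianMixture(n_components=n_components, reg_covar=0.1, max_iter=5,
                         warm_start=True, random_state=seed)
    for _ in range(rounds):
        gm.fit(X)                        # a few EM iterations from the current state
        gm.means_ = np.sign(gm.means_)   # project the means onto the vertices
    return gm
</code></pre>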
Instant Gratification
Please, share your basic idea! [17th place inside]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>tldr; I used <code>sklearn.GaussianMixture</code> with parameter <code>n_components=6</code>. My best result is CV: 0.97533, Public: 0.97444 Private: 0.97578</strong></p> <p>Well, we know that datasets possibly were generated using <code>sklearn.make_classification</code>! So, let's play with 2D synthetic <code>make_classification</code> dataset:</p> <p><code> X, y = make_classification( n_samples=2000, n_features=2, n_informative=2, n_redundant=0, n_repeated=0, n_classes=2, n_clusters_per_class=2, flip_y=0.05, class_sep=2.5, hypercube=True, shuffle=False, random_state=777 ) </code> Let's fit <code>sklearn.QuadraticDiscriminantAnalysis</code>: <img src="https://i8.wampi.ru/2019/06/21/qda.png" alt="qda"> Looks like a not bad. But wait what do we know about these clusters? Yes. They all have Gaussian nature! Let's try <code>sklearn.GaussianMixture</code>:</p> <p><code> gm = GaussianMixture( n_components=4, n_init=20, init_params="kmeans", random_state=456 ) </code> <img src="https://i8.wampi.ru/2019/06/21/gm.png" alt="gm"> Hmmm... really better! And we just need to map these clusters to target. Let's do it. Check <strong>InstatGratificationClassifier.py</strong> file: <code> igc = InstantGratificationClassifier( estimator=GaussianMixture( n_components=4, n_init=20, init_params="kmeans", random_state=456 ) ) </code> <img src="https://i7.wampi.ru/2019/06/21/igc.png" alt="igc"> It looks greate 🥈 </p> <p>And it is really easy to estimate that all of 512 blocks were generated with parameter <code>n_clusters_per_class=3</code>. This is my basic idea to beat 0.974+ score.</p> <p><strong>Please, share your basic idea!</strong></p>
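<p>The <code>InstantGratificationClassifier.py</code> wrapper itself is not included in the post; a plausible minimal version (my guess, not the author's file) simply fits the mixture and maps each component to a class by the class proportions of the training rows assigned to it:</p>
<pre><code>import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.mixture import GaussianMixture

class InstantGratificationClassifier(BaseEstimator, ClassifierMixin):
    """Wraps an unsupervised mixture and maps each fitted component to a class
    using the class proportions of the training rows assigned to it."""
    def __init__(self, estimator):
        self.estimator = estimator

    def fit(self, X, y):
        self.estimator.fit(X)
        comp = self.estimator.predict(X)
        y = np.asarray(y)
        # fraction of class-1 rows inside each component
        self.comp_p1_ = np.array([y[comp == k].mean() if np.any(comp == k) else 0.5
                                  for k in range(self.estimator.n_components)])
        return self

    def predict_proba(self, X):
        p1 = self.estimator.predict_proba(X) @ self.comp_p1_
        return np.column_stack([1.0 - p1, p1])

    def predict(self, X):
        return (self.predict_proba(X)[:, 1] > 0.5).astype(int)

# igc = InstantGratificationClassifier(GaussianMixture(n_components=6, n_init=20,
#                                                      init_params="kmeans", random_state=456))
</code></pre>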
Instant Gratification
Public 13th / Private 14th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was confident that I made a perfect classifier which correctly classifies (0 or 1) all of the test samples except the flipped ones. Assuming this, prediction made with <code>predict_proba</code> actually is no better than any of the random but ordered (ex. [0, 0, 1, 1, 1] -&gt; [2, 1, 5, 3, 4]) predictions.</p> <p>Actually, in my kernel, I generated predictions with log probability of scipy <code>multivariate_normal</code>. It scored 0.97577 on private and 0.97475 on public.</p> <p>The key point was how to deal with flipped samples. I wanted to put flipped labels in the <code>good</code> places in the line of ordered predictions. So I generated many random ordered predictions, submitted them, and examined the public LB score to select the worst two submissions. (I thought if flipped labels of public test set are placed in <code>bad</code> positions, then flipped labels of private test would more likely be placed in <code>good</code> positions)</p> <p>Thanks :)</p> <p>Thanks Congrats to you too mhviraf!</p> <p>Oh thank you fixed the link;) Regarding #2, I just meant transforming [0 0 0 1 1 1] to something like [1 2 3 4 5 6] or [1 3 2 5 4 6] to get a higher auc score. I randomly did this 90 times and selected the lowest public scoring subs.</p> <p>First of all, congratulations to the winners and all of those who learned from this competition and thank you to Kaggle team for hosting this interesting competition.</p> <p>Detailed explanation of my journey and the code to reproduce the result can be found here( <a href="https://www.kaggle.com/harangdev/instant-gratification-14th-solution">https://www.kaggle.com/harangdev/instant-gratification-14th-solution</a> ).</p> <p>In this thread I'll summarize methods that I used.</p> <ol> <li><p><code>GaussianMixture</code> with <code>n_components=6</code> and <code>n_init=20</code> can classify perfectly except the flipped labels if you fit all of the data including private test set.</p></li> <li><p>To leverage the fact that the metric is roc auc, I randomly assigned integer to each test samples within each predicted class such that max of negative class is smaller than min of positive class. (That is, I ordered the test samples in line, rather than grouping them into 0 and 1)</p></li> <li><p>Then I selected two submissions that scored lowest on public leaderboard. I thought that with fixed number of flipped labels, low score in public set means high score in private set. I actually confirmed it by experimenting with synthetic dataset. But it turned out that this method wasn't much helpful. (Maybe I should dig deeper why -&gt; <strong>6/25 EDITTED: Actually my experiment setting was wrong. Getting low score for public leaderboard has nothing to do with getting high score for private leaderboard. 
So it was completely random after making a perfect classifier.</strong>)</p></li> </ol> <p>I had about 35 submissions that could have brought me gold, and 15 submissions that could have brought me to the 1st place, but I don't regret cause I had my rules when selecting the submissions.</p> <p>Lastly, special thanks to <a href="/cdeotte">@cdeotte</a> and <a href="/mhviraf">@mhviraf</a> for sharing important ideas about the dataset in the early stage.</p> <p>Thanks Congratulations to you too! :)</p> <p>Thank you!</p> <p>Thanks! Congrats to you too</p> <p>Thank you Giba!</p> <p>Congrats on your silver too!</p> <p>Thanks!</p> <p>Yeah it was my last idea on this competition. Thanks!</p>
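<p>Point 2 above (turning hard 0/1 predictions into a "random but ordered" ranking) can be sketched as follows; since ROC AUC depends only on the ordering, any such scoring is equivalent for a perfect classifier:</p>
<pre><code>import numpy as np

def ordered_random_scores(pred_class, seed=0):
    """Turn hard 0/1 predictions into unique scores such that every predicted-0 row
    ranks below every predicted-1 row, with random ordering inside each class."""
    rng = np.random.default_rng(seed)
    scores = np.empty(len(pred_class), dtype=float)
    neg = np.where(pred_class == 0)[0]
    pos = np.where(pred_class == 1)[0]
    scores[neg] = rng.permutation(len(neg))              # ranks 0 .. n_neg-1
    scores[pos] = len(neg) + rng.permutation(len(pos))   # ranks n_neg .. n-1
    return scores
</code></pre>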
Instant Gratification
Public 19th Solution / Private 70th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks</p> <p>Thanks. Next time, we will get gold!</p> <p>Thanks</p> <p>Thanks</p> <h3>Preprocessing</h3> <ul> <li><ol><li>Feature Selection with variance </li></ol></li> <li><ol><li>KERNEL PCA</li></ol></li> </ul> <p>Try and Fail : PCA, SVD, AutoEncoder, DAE etc</p> <h3>Feature Engineering</h3> <p>We used the following three methods to make variable of the distribution of the dataset.</p> <ul> <li><ol><li>GMM_PRED </li></ol></li> <li><ol><li>GMM_SCORE</li></ol></li> <li><ol><li>HIST_PRED</li></ol></li> </ul> <p><img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FcdwpIM%2Fbtqwfl7ksHu%2FHrOefi5xBBPdVrJGSd2tSk%2Fimg.png" alt=""></p> <p>By the way, one of the mistakes I found was that GMM_PRED and GMM_SCORE would increase their score if they were duplicated. The results below are still a mystery to our team.</p> <ul> <li>Gmm_pred 1 + Gmm_score 0 : LB 0.97465</li> <li>Gmm_pred 2 + Gmm_score 0 : LB 0.97475</li> <li>Gmm_pred 3 + Gmm_score 0 : LB 0.97479</li> <li>Gmm_pred 4 + Gmm_score 0 : LB 0.97480</li> <li>Gmm_pred 5 + Gmm_score 0 : LB 0.97481 </li> <li>Gmm_pred 5 + Gmm_score 1 : LB 0.97482</li> <li>Gmm_pred 5 + Gmm_score 2 : LB 0.97482</li> <li>Gmm_pred 5 + Gmm_score 3 : LB 0.97473</li> </ul> <h3>Model</h3> <ul> <li>1st layer : Nusvc + Nusvc2 + qda + svc + knn + lr / Stratified with GMM_LABEL </li> <li>2nd layer : Lgbm + mlp / Stratified</li> <li>3rd layer : 1st layer + 2st layer / Average <img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FVj3FG%2Fbtqwf3ylVfO%2FvvVCmYTe7z4TFIn8AKkSK1%2Fimg.png" alt=""></li> </ul> <h3>Private Test with Make Classification</h3> <p>the process is as follows. - 1. Estimate the number of private magic n(0~511) has number using the linear relationship between the number of magic and auc. (ex. private magic 0 auc 0.97xx then magic number 250) - 2. Create public and private data by seed using make_classification with estimated magic number. - 3. Test Model with maked public, private dataset and check score cv, pb, v - 4. Finally, choose a model with cv + pb + pv close to 0.975. Others thought that overfitting. <img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FcsSklH%2FbtqweIV0tL1%2FecqDnPjxbQ0CKLUPodtEm0%2Fimg.png" alt=""></p> <p>But actually it did not work out.</p> <h3>Resource</h3> <p>Code is below - kernel public 0.97491 : <a href="https://www.kaggle.com/chocozzz/hyun-stacking?scriptVersionId=15820611">https://www.kaggle.com/chocozzz/hyun-stacking?scriptVersionId=15820611</a> </p> <p>Best Score <img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FbX5sHq%2FbtqwfkURVtU%2FNzDybSbf4K5mKcC6whKhf0%2Fimg.png" alt=""></p> <p>Thanks. Private score is so lol :D </p>
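<p>A hedged sketch of how the GMM-based meta-features above might be generated with sklearn: GMM_PRED as the per-component responsibilities and GMM_SCORE as the per-row log-likelihood (the HIST_PRED feature and the exact component count are assumptions here):</p>
<pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_features(X_all, n_components=6, seed=0):
    """Meta-features from an unsupervised GMM fitted on train + test together:
    per-component responsibilities (GMM_PRED) and per-row log-likelihood (GMM_SCORE)."""
    gm = GaussianMixture(n_components=n_components, n_init=10, random_state=seed)
    gm.fit(X_all)
    gmm_pred = gm.predict_proba(X_all)        # shape (n_rows, n_components)
    gmm_score = gm.score_samples(X_all)       # shape (n_rows,)
    return np.column_stack([gmm_pred, gmm_score])
</code></pre>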
Instant Gratification
Public 4th place / Private 32nd place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I express my gratitude to you 👍 </p> <p>Thanks! :)</p> <p>Thank you! I congratulate you, too.</p> <p>Before I begin, I'm thankful to my teammates. The idea is below.</p> <p><strong>1. Feature Engineering</strong> - KernelPCA(cosine kernel) - GMM - AgglomerativeClustering</p> <p><strong>2. Outlier Replace</strong> - I predicted values using nusvc. - Based on the threshold, the true values were replaced when it differ from the predicted value. - Re training</p> <p><strong>3. Pseudo Labeling</strong> - Run each 512 cases at ratio 0.2.</p> <p><strong>4. Model</strong> - nusvc</p> <p><strong>5. Partial Prediction Value Scaling</strong> - StandardScaler(Improve Lb 0.002)</p> <p><strong>6. Seed ensembles</strong> - Using four seeds</p> <p>My code is below - Kernel public score : 0.97513 - Kernel private score : 0.97565 <a href="https://www.kaggle.com/hyeonho/32th-solution-public-0-97513-private-0-97565?scriptVersionId=16042798">https://www.kaggle.com/hyeonho/32th-solution-public-0-97513-private-0-97565?scriptVersionId=16042798</a></p>
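<p>A rough sketch of the feature-engineering plus model combination described above (KernelPCA with a cosine kernel feeding a NuSVC); the component count and NuSVC hyper-parameters below are placeholders, not the team's tuned values:</p>
<pre><code>from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA
from sklearn.svm import NuSVC

# Placeholder hyper-parameters throughout -- only the building blocks follow the write-up.
model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=40, kernel='cosine'),
    NuSVC(nu=0.6, kernel='poly', degree=4, gamma='auto', probability=True,
          random_state=0),
)
# model.fit(X_train, y_train)
# p = model.predict_proba(X_test)[:, 1]
</code></pre>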
Instant Gratification
Public LB 37 and private LB 10
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Chris, Congrats for your solo gold!!! you deserve it.</p> <p>My model is very simple Just use GMM and QDA. 1. Removed all the irrelevant features having variance less than 2. 2. Further removed all those feature whose Pearson correlation with target is less than 0.010 3. Trained a GMM model for 6 clusters(3 for each class) 4. Used 6 predicted probabilities as feature for next classifier mentioned in step 6. 5. Used a QDA classifier for classifying 6 label obtained in GMM and used its predicted probabilities as 6 another feature. 6. Used another QDA classifier with 12 above feature to predict target class. 7. Trained this model for 3 iterations and in iteration 2 and 3, inverted the labels of 90% miss-classified targets using oof of previous iteration. pseudo leveling was decreasing my score so I didn't use it.</p> <p>My kernal's URI: <a href="https://www.kaggle.com/raghaw/my-gratification-v2-10th-place-on-public-lb?scriptVersionId=16030728">https://www.kaggle.com/raghaw/my-gratification-v2-10th-place-on-public-lb?scriptVersionId=16030728</a></p>
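<p>Steps 1-2 above (drop columns with variance below 2, then drop columns with near-zero Pearson correlation to the target) could look like this, assuming pandas inputs; only the thresholds follow the write-up:</p>
<pre><code>import pandas as pd

def select_columns(X_train, y_train, X_test, var_thresh=2.0, corr_thresh=0.010):
    """Drop near-constant columns (variance below the threshold on train + test),
    then drop columns whose absolute Pearson correlation with the target is tiny."""
    X_all = pd.concat([X_train, X_test])
    keep = X_all.columns[X_all.var() > var_thresh]

    corr = X_train[keep].corrwith(y_train).abs()
    keep = corr.index[corr >= corr_thresh]
    return X_train[keep], X_test[keep]
</code></pre>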
Instant Gratification
Solution highlights and how to choose submission
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, You're winner, too. I'm just a little luckier than you.</p> <p>Thanks, because there are just 2.5% false positive samples, So the AUC should be 0.975.</p> <p>Private Leaderboard = sorted(Public Leaderboard, reverse=True) in top50, LOL</p> <p>hah</p> <p>Thanks@Chris Deotte. Thanks for your sharing. I re-learn traditional machine learning method in this competition.</p> <p>High light.</p> <ol> <li><p>After training GMM model, re-training GMM by initializing the means of each class to [(means[0]+ means[1])/2, (means[1]+ means[2])/2, (means[2]+ means[0])/2,] for avoiding falling into local minimum.</p></li> <li><p>Assuming your samples’ mean is u, Class 0’s mean is init_u0, Class 1’s mean is init_u1. Because some targets are flipped(5%). So the real Class means is: u0 = init_u0 + 0.05 * (init_u0 - init_u1) u1 = init_u1 + 0.05 * (init_u1 - init_u0) So, I use x0’ = 2 * u0 – x0, x1’ = 2 * u1 - x1 to augment data.</p></li> </ol> <p>How to choose submission.</p> <p>I tested my model on data generated by make_classification. I found that the AUC of 512*512 test samples always close to 0.975. I concluded that the (Public score + Private score) / 2 would close to 0.975. So I choose a low Public score model as my one submission.</p> <p>My kernel: <a href="https://www.kaggle.com/daishu/gmm-3-3-2">https://www.kaggle.com/daishu/gmm-3-3-2</a></p> <p>It’s really hard to post in English for me. </p>
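<p>The reflection augmentation in point 2 (x' = 2*u - x around the flip-corrected class means) can be sketched as below; the 5% flip rate is the value assumed in the write-up:</p>
<pre><code>import numpy as np

def reflect_augment(X, y, flip_rate=0.05):
    """Augment by reflecting each sample through its flip-corrected class mean,
    x' = 2*u - x, where the correction undoes the pull of ~5% flipped targets
    on the naive class means."""
    u0, u1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    u0c = u0 + flip_rate * (u0 - u1)
    u1c = u1 + flip_rate * (u1 - u0)

    X0_ref = 2 * u0c - X[y == 0]       # mirrored class-0 samples
    X1_ref = 2 * u1c - X[y == 1]       # mirrored class-1 samples
    X_aug = np.vstack([X, X0_ref, X1_ref])
    y_aug = np.concatenate([y, np.zeros(len(X0_ref)), np.ones(len(X1_ref))])
    return X_aug, y_aug
</code></pre>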
Intel & MobileODT Cervical Cancer Screening
20th place solution/approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My strategy was as per below (I quickly verified it in the beginning by doing manual crops and training a simple network which gave 0.73 on LB):</p> <p>Annotated 3 points on external OS (two extremes and mid point), so the idea was to get the location of external OS and approximate zoom of the external OS. Took max distance between these 3 points as a indicator of size of external OS. Based on this size indicator, I zoomed out/in images to get uniform sized external OS images and cropped them around external OS center. For this I used custom CNN and MSE as loss function, but could not get sufficiently low loss. Overall network performed well in giving external OS and surrounding areas. </p> <p>Next stage was to feed cropped images got from above to a CNN for classification. I used network architectures similar to inception, resnet and xception with reduced filters as these networks were over-fitting the data. I used just one fold CV on train images and another fold on train+additional images(manually removed some duplicates). Trained the networks on these two sets of data and took average of approximate 10 models prediction for final submission.</p> <p>Looking at the other solutions, I have a feeling that I should have worked with more folds, specially on the train image data. I still have a feeling that producing better crops from stage-1 should have given better results. I did not retrain model based on stage 1 ground truths for final submission.</p> <p>Overall it was a great learning experience for me. Thanks Kaggle and sponsors for providing this opportunity.</p>
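<p>The crop-and-zoom step described above (scale the image so the external os has a fixed apparent size, then crop a square window around its predicted centre) might look roughly like this; the target sizes are illustrative assumptions, not the author's values:</p>
<pre><code>from PIL import Image

def crop_around_os(img, cx, cy, os_size, target_os=64, out=224):
    """Zoom the image so the external-os size indicator maps to a fixed number of
    pixels, then take a square crop centred on the predicted os location."""
    scale = target_os / max(os_size, 1e-6)
    w, h = img.size
    img = img.resize((int(w * scale), int(h * scale)), Image.BILINEAR)
    cx, cy = cx * scale, cy * scale
    half = out // 2
    return img.crop((int(cx - half), int(cy - half), int(cx + half), int(cy + half)))
</code></pre>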
Intel & MobileODT Cervical Cancer Screening
26th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First thanks to the organizers and the congratulations to winners, and those who are satisfied with their leaderboard placement and/or effort! Thanks to my team mate! As usual I learned a lot.</p> <p>Team OdiNN formed just hours before the team merger deadline. We (Graham and me) had a short conversation before the deadline and agreed to team up.</p> <p>We then started to discuss our individual models, and we realized that we had two totally different approaches to a solutions. Graham had used ROI on the train images and fully trained neural networks with VGG16 and VGG19 structures and then averaging over different seeds.However I had used train and additional dataset and done transfer learning from Keras imagenet pretrained neural networks. (vgg16, vgg19, resnet50, xception, inception_v3). The I trained many models and stacked them.</p> <p>Since we had this totally different approaches and had not agreed on any common train/test split or sharing a common seed, we basically made the following masterplan: We take the each of our best models (Grahams best and my best) and then we simply average the two based on Stage1 leaderboard feedback.</p> <p>My stacking model. I extracted the features for all images (train and additional) through pretrained vgg16, vgg19, resnet50, xception and inception_v3. Each image was flipped and rotate to give me 8 different flips and rotations. Each of these feature set was then trained in 5 fold cross validation in six different learning algorithms. (I safely kept flips and rotations of one image within the same fold to avoid leakage over the folds) The six learning algorithms where: KNN, XGBoost, Random Forest, ExtraTrees, Logistic Regression and fully connected neural network. I kept the same hyperparameters for these learners for all feature sets. This then became 30 models (5x6) where each was trained in 5 fold cv. Out-of-fold predictions where taken and saved of course.</p> <p>These 30 models where then stacked to a three second level models. An XGBoost model, Logistic Regression and another fully connected neural network. </p> <p>At the third level I tried to do weighted geometric averaging, however it then appeared to me that the there must have been a leak over my CV folds. I guess some of the images are actually from the same patient. :-( So, the weighted geometric (nor arithmetic) averaging did not work and I ended up with the plain unweighted arithmetic mean. I didn't notice CV-fold leak until the second day of stage two, and training the first stage took about 50 hours, so I had no chance of solving this problem.</p> <p>The other model that was blended with my stacked model was based on ROI extraction of the images and then fully retrain neural networks of VGG16 and VGG19 neural network. He only used the original train set (not the additional data). Graham may elaborate on this. If I understand him correctly he is averaging each over five training sessions with different seeds. The problem he got was that the vgg19 neural net did not manage to make it to the deadline so one vgg19 net was used in the final mix.</p> <p>Thanks all!</p>
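<p>One detail worth showing in code is how to keep all flips/rotations of an image inside the same CV fold. Grouping by the original image id (here with sklearn's GroupKFold; the author's exact bookkeeping may differ) is one way to avoid that leak, although the patient-level leak mentioned above would still remain:</p>
<pre><code>import numpy as np
from sklearn.model_selection import GroupKFold

def grouped_oof(model, features, labels, image_ids, n_classes=3, n_splits=5):
    """Out-of-fold predictions where all 8 flips/rotations of an image share its
    image id, so they always land in the same fold."""
    oof = np.zeros((len(labels), n_classes))
    for tr, va in GroupKFold(n_splits=n_splits).split(features, labels, groups=image_ids):
        model.fit(features[tr], labels[tr])
        oof[va] = model.predict_proba(features[va])
    return oof
</code></pre>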
Intel & MobileODT Cervical Cancer Screening
27th place solution: Polar Coordinates
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>But isn't the amount of visible blood information also present as black regions in the G channel?</p> <p>Actually placed at 26th after LB update. Idea was to exploit the approximate radial symmetry in the problem to a number of advantages that I would describe below, of course, with some definite LB gain! (Any other physicists in the room!?) Polar coordinates seem to be a "natural" frame for the features that were important to the cervix type detection (transformation zone being a ring).</p> <p>I think there is a lot more scope for improvement here as I very hastily implemented this in the last few days of the competition. Posting this in case it turns out helpful for MobileODT in some way.</p> <h2>First, a few Basic transformations applied on the Images:</h2> <p>$ Since images could be green filtered images, and it seemed color didn't add any information, I had replicated the green channel for R, G, B (so that I can make use of the pretrained models), instead of using grayscale values for R, G, B as greyscale values of a green filtered test images would be from a different distribution then, as average = (g + 0 + 0)/3 in that case, making the mean 3 times smaller than expected distribution. </p> <p>$ These G, G, G channel images were then resized to a maximum dimension of 598x598 with aspect ratio preserved.</p> <p>$ Then a center crop of 448x448 (75% of 598) was extracted without any resizing. These were used as the base images before any more processing.</p> <p>Then trying out a bunch of pretrained finetuned models (Vgg16BN, InceptionV3, ResNet, Xception) on these base "cartesian" images gave a best single model of around 0.69-0.71 PBLB at best, which gave a fairly good score after weighted ensembling. For tuning these weights of the ensemble, I had used a separate hold-out set of data. At this point adding any other model to the ensemble didn't add much to the final score.</p> <h2>Polar Coordinates</h2> <p>A polar transform is a map (not one to one, actually) that takes (x,y) from Cartesian coordinates to (r, theta) in polar coordinates as follows:</p> <p>$$ (x,y) \to (r,\theta)=\left(\sqrt{x^2 + y^2}, ~\text{tan}^{-1}\frac{y}{x} \right) $$.</p> <p>Notice that this is not a one to one map (it is a one-to-many map). For example, $$(r, \theta), ~\text{and} ~(r,\theta + 2n\pi)$$ trivially maps to the same point in cartesian coordinates. Also, the center/origin in x-y coordinates maps to an infinite number of points $$(r=0,~\theta)$$ for any theta (basically the theta axis).</p> <p>At first I tried to implement this in a python function (fairly straightforward), but then resorted to ImageJ's plugin <a href="https://imagej.nih.gov/ij/plugins/polar-transformer.html">PolarTransformer.class</a> due to its fast pixel crunching. 
An example of this transform on a spherically symmetric image:</p> <p><img src="http://i.imgur.com/WCCyVab.png?raw=True" alt="" title=""></p> <p>One may analyze the effect of this on cervix images by considering the below illustrative picture where the blue/green represents some background noise (the arrows are drawn to indicate the direction of the convolutional kernels over the image): <img src="http://i.imgur.com/4VEXVO2.jpg" alt="" title=""> The black region towards the right most end of the polar image is due to the fact that the original image is rectangular shaped and not a circular one, and hence there are always some missing pixels over the circular ring with radius r &gt; r_0; where r_0 = size/2 (size being the width or height of the square image in cartesian coordinates).</p> <p>The noise can now be easily cropped and then checked for quality by reconstructing Cartesian from Polar using the inverse map which is a many-to-one map: $$ (r, \theta) \to (x, y) = (r \cos\theta, r\sin\theta) $$ as follows: <img src="http://i.imgur.com/6Yvho2R.jpg" alt="" title=""></p> <p>The advantages with this transform are two-fold:</p> <p>1 - Considering the green and blue parts above to be the black borders of the MobileODT tube or some non-cervix background, that can be now, easily eliminated by cropping out upto a limited value along the radius axis of the polar image (width) leading to less overfit models. Example: Consider train_data/Type_2/1038.jpg: <img src="http://i.imgur.com/brk4HXG.jpg" alt="" title=""> (360 here corresponds to max theta in degrees) In my case I used these 216x360 cropped images for bulding models over polar data. To inspect the quality of information in the cropped polar images, one may check by reconstructing images by inverse transform: <img src="http://i.imgur.com/oU8uxRz.jpg" alt="" title=""> Pretty good reconstruction! The black background of the original image can now be seen painted in the reconstructed image, which is due to those pixel maps that have multiple cartesian coordinates for the same polar coordinate (many to one). One can easily find a good cutoff for such noise cropping manually as region around a cervix over a certain radius is anyhow not much important for cervix type classification (ofcourse assuming cervix centered images).</p> <p>2 - Applying CNNs on polar images makes the square kernels "compatible" with the transformation zone ring. A kernel running over Cartesian images is represented by the green arrow above, while a kernel running over the Polar ones is represented by the yellow arrow.</p> <p>But when pretrained models were built on these polar images alone, single best model was much worse than cartesian - standing at around 0.78. But I was atleast sure this was not as much overfit.</p> <p>To get the best features out of the cartesian and polar (216x360), finally, I resorted to using the flattened last convolution layers from two separately finetuned resnet models on cartesian and polar, and feeding them to xgboost. Adding this model to the ensemble gave me about a 0.07 boost on the private LB.</p> <p>The xgboost used was not cross-validated across various parameters as the competition was nearing end. There could be a scope for improvement here. Additionally, I didn't use any detect framework to get the cervix-centered crops, instead, I assumed images were more or less centered anyway and applied the polar transform. This is another possible area of improvement. 
Also, one may experiment with the reconstructed cartesian images from cropped polar because the pitch black background generally kills the kernels and overfits.</p> <p>Let me know if you find these features helpful. Thanks!</p> <p>Just look up what's called as "transfer learning", where you pluck off the final 1000 way layer (and maybe a few other layers above), and train the network with your new 3-way softmax.</p>
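<p>For readers who prefer Python over the ImageJ plugin, a NumPy/SciPy stand-in for the polar transform might look like this (bilinear sampling via <code>map_coordinates</code>; the 216x360 output shape matches the crops used above):</p>
<pre><code>import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, r_bins=216, theta_bins=360):
    """Resample an image onto an (r, theta) grid centred on the image middle,
    using bilinear interpolation -- a NumPy stand-in for the ImageJ plugin."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0.0, min(cx, cy), r_bins)                 # stay inside the frame
    theta = np.deg2rad(np.linspace(0.0, 360.0, theta_bins, endpoint=False))
    rr, tt = np.meshgrid(r, theta, indexing='ij')
    ys, xs = cy + rr * np.sin(tt), cx + rr * np.cos(tt)
    if img.ndim == 2:
        return map_coordinates(img, [ys, xs], order=1)
    return np.stack([map_coordinates(img[..., c], [ys, xs], order=1)
                     for c in range(img.shape[-1])], axis=-1)  # channel-wise for colour
</code></pre>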
Intel & MobileODT Cervical Cancer Screening

94th solution, with 4G RAM and a 740M GPU. Extracting ResNet features using Keras
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I just read your post again and feel confused about one point:</p> <p>I find you used several base models like KNN and RF.However, in my test, RF overfitted seriously and changed a lot when I set different seed in split. That means the model have much confidence on his predict. In the condition, If 'luckily' split one patient in 5, is it the possible reason lead to leak on CV, for I don't notice you changed the seed to split or test it on a small data?</p> <p>Hi, Johansen:</p> <p>I used all the data, include the additional because I find the more data,the more stable. Then resized them to 224x224 as the model needed. 0.95 is my private LB.</p> <p>Here is my score records(Not Accurate):</p> <p>PL: 0.89xx (train only) 0.83xx (add additioin) </p> <p>By the way, your sharing is really instructive, I learned a lot from it.</p> <p>Not good enough, but I would like to share my solution.</p> <p>With intel i5, 4G RAM and 740M GPU, after several times memory error,I finished this comp and got my first kaggle medal.</p> <p>Method: I used pretrianed ResNet50 to extract features and give it to xgboost, so thats my score : 0.95030. <a href="https://github.com/kuhung/Tips_for_Data_Scientist/blob/master/Feature_Engineering/Extract_ResNet_Feature_using_Keras.py">Extract_ResNet_Feature_using_Keras.py</a> according to <a href="https://www.kaggle.com/kelexu/extract-resnet-feature-using-keras?scriptVersionId=1244781">script from Planet</a> and <a href="https://github.com/fchollet/deep-learning-models">fchollet/deep-learning-models</a></p> <p>Improvement: The result can be improved by using Vgg19, but limited by my 4G RAM when merge the features. Dimensionality reduction can help 0.003 and Ensemble is a good way too. </p> <p>The next time you may try it in a CV competition:) </p> <p>Sorry, I will attach it here. </p> <pre><code>import pandas as pd import os from tqdm import * ID = [] Type = [] for count in range (1,4): path = 'train/Type_%s'%count for file in tqdm(os.listdir(path)): if 'DS' not in file: ID.append(file) Type.append('Type_%s'%count) train_label = pd.DataFrame({'id':ID,'label':Type}) train_label.to_csv('train_label.csv',index=False) stg1 = pd.read_csv('solution_stg1_release.csv') stg1['Type_2'] = stg1['Type_2'].apply(lambda x: x*2) stg1['Type_3'] = stg1['Type_3'].apply(lambda x: x*3) stg1['label'] = stg1['Type_1']+stg1['Type_2']+stg1['Type_3'] def label_convert(x): if x==1: return 'Type_1' if x==2: return 'Type_2' if x==3: return 'Type_3' stg1['label'] = stg1['label'].apply(lambda x: label_convert(x) ) stg1.drop(['Type_1','Type_2','Type_3'],axis=1,inplace=True) stg1.columns = ['id','label'] stg1.to_csv('test_stg1_label.csv',index=False) </code></pre> <p>Yes, you are right, Same seed for model campare, I agree with you. After this leaking, the next time we will perform better, won't we? :P</p>
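<p>A hedged sketch of the feature-extraction idea (frozen ResNet50 as a fixed feature extractor, XGBoost on top), assuming a Keras 2.x-style applications API rather than the original standalone deep-learning-models scripts; the XGBoost parameters are placeholders:</p>
<pre><code>import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input
from xgboost import XGBClassifier

# Frozen ResNet50 as a fixed feature extractor (2048-d global-average-pooled vectors).
extractor = ResNet50(weights='imagenet', include_top=False, pooling='avg')

def extract_features(images_224):            # images_224: (n, 224, 224, 3) array
    return extractor.predict(preprocess_input(images_224.astype('float32')))

# feats_tr, feats_te = extract_features(train_imgs), extract_features(test_imgs)
# clf = XGBClassifier(max_depth=4, n_estimators=300, learning_rate=0.05,
#                     objective='multi:softprob')
# clf.fit(feats_tr, train_labels)
# probs = clf.predict_proba(feats_te)        # 3-class probabilities for submission
</code></pre>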
Intel & MobileODT Cervical Cancer Screening
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The input for XGBoost blender are predictions on validation set from each neural net model. So neural net predictions used as features. Validation set has the same length as train since it combined from 5 Folds.</p> <p>With bounding boxes training process was more stable for me, without big jumps and lower speed of overfitting. It's because with boxes I was able to make larger crops without affect of area of interest. In the beginning I used just 10% random crops from all sides of image. </p> <p>Bounding boxes gave me improvements on stage 1 LB, but I don't remember exact value.</p> <h2>Used tools</h2> <p>Windows 10 + Python 3.4 + Keras 1.2 + Theano 0.9</p> <h2>ZF_UNET_224 (version 1)</h2> <p>To find important parts of images I used <a href="https://github.com/ZFTurbo/ZF_UNET_224_Pretrained_Model">my own version of UNET</a>. First UNET was trained on segmentation of train images, <a href="https://storage.googleapis.com/kaggle-forum-message-attachments/194936/6708/polygons.json">which I made by hands</a> with Sloth. In this segmentation I tried to cut all elements which are totally useless like speculum and everything outside it. The predictions look nice:</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/194936/6709/ZF_UNET_v1_predictions.jpg" alt="enter image description here" title=""></p> <h2>ZF_UNET_224 (version 2)</h2> <p>Second UNET was trained on rectangles <a href="https://www.kaggle.com/c/intel-mobileodt-cervical-cancer-screening/discussion/31565">provided by Paul</a>. This net finds the main region of interest. The predictions look even better.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/194936/6710/ZF_UNET_v2_predictions.jpg" alt="enter image description here" title=""></p> <p>I didn’t use these predictions directly. I only extract bounding box rectangles for each image to form rectangles.csv file which I used in later training process.</p> <h2>Zoo</h2> <p>I used the following set of CNNs: VGG16, VGG19, RESNET50, INCEPTION_V3, SQUEEZE_NET, DENSENET_161, DENSENET_121. The training process for them was mostly similar. 5 KFold validation, vary only learning rate parameters and batch size. For each model I obtain: train OOF predictions with same length as number of training images and test predictions.</p> <h2>Augmentations</h2> <p>I think augmentation was important key for this problem, since we had very small amount of data. I used: </p> <ol> <li>Random crops based on rectangle.csv generated from UNETs. These crops were in very big range from UNET_v1 prediction to UNET_v2 predictions. </li> <li>Random perspective transformation </li> <li>Random rotations (mean color or border reflect at random) and mirroring </li> <li>Lightning change </li> <li>Rare random blur</li> </ol> <p>Basically neural nets never see the same pictures during training.</p> <h2>Batch generator</h2> <p>For learning process I used fit_generator function from Keras, which I recommend to use for everyone. You only need to create your own batch_generator function. No need to store many different images in memory or on HDD. 
In my batch generator I add some fraction of images from “additional” folder ~25%. It greatly improves validation and leaderboard score. And we all know now it’s because of leakage of test images in additional folder.</p> <h2>Ensemble</h2> <p>For ensemble I used XGBoost blender. Final solution was average on 500 XGBoost iterations with different seed and random run parameters. </p> <h2>Submission</h2> <p>The only difference for my 2 final submissions was to train with usage of “additional” images or without them. My final models used “train” and “test_stg1” images for training.</p> <p>Submission v1 (with additional): Validation score: 0.57457 Private LB: 0.88856</p> <p>Submission v2 (without additional): Validation score: 0.64254 Private LB: 0.83209</p> <p>I believe low scores on private LB depends on bad data preparation, mostly because of mislabeling. We could clean it by hands, but in case private test set had the same labeling quality it would make everything worse.</p> <h2>Code</h2> <p>You can find my code as it was prepared for Kaggle <a href="https://github.com/ZFTurbo/KAGGLE_CERVICAL_CANCER_2017">on GitHub</a></p> <p>Thanks! ) Congrats with another one 1st place :P</p> <p>I stopped comparing in the early beginning of contest. In my first attempts XGBoost blender was better.</p>
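<p>As a rough illustration of the batch-generator idea described above, here is a minimal sketch of the kind of Python generator that Keras' <code>fit_generator</code> expects. It is not the actual training code: the in-memory image and label arrays, the fixed crop size and the simple crop/mirror augmentations are placeholders for the rectangle-based crops and the richer augmentation list from the write-up.</p> <pre><code>import numpy as np

def batch_generator(images, labels, batch_size=32, crop=224):
    """images: list of (H, W, 3) arrays at least crop x crop in size; labels: one-hot targets."""
    n = len(images)
    while True:
        idx = np.random.randint(0, n, batch_size)
        batch_x, batch_y = [], []
        for i in idx:
            img = images[i]
            h, w = img.shape[:2]
            # random crop (stand-in for the rectangles.csv-based crops)
            top = np.random.randint(0, h - crop + 1)
            left = np.random.randint(0, w - crop + 1)
            patch = img[top:top + crop, left:left + crop]
            # random mirroring
            if np.random.randint(2):
                patch = patch[:, ::-1]
            batch_x.append(patch.astype(np.float32) / 255.0)
            batch_y.append(labels[i])
        yield np.stack(batch_x), np.stack(batch_y)

# Keras 1.x style call:
# model.fit_generator(batch_generator(train_images, train_labels),
#                     samples_per_epoch=10000, nb_epoch=50)
</code></pre>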
Intel & MobileODT Cervical Cancer Screening
Brief overview of #2 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for Intel and MobileODT for sponsoring this competition, and for Kaggle for hosting it. Hopefully, the results can indeed make a significant impact in early detection and treatment of cervical cancer.</p> <p>For this competition, I decided to trust my own validation. So I did not probe the test labels during stage 1; in fact, I did not make any submissions at all during stage 1. Also my final models did not incorporate any of the stage 1 test data. I did, however, use the additional data in addition to the training data.</p> <p>My validation set consists of 10% of the train data; the validation set did not include any additional data. Unlike the number 1 team's approach, I did not try to make sure that the patients in the validation data were not present in the remaining data; in hindsight, had I done this, it might have reduced overfitting in the models I trained. In any case, for the remaining train data + additional data, I went through each image to flag those that I deemed to be of bad quality (blurry, or just not relevant at all). This exercise left me with around 5000 images available to train my models.</p> <p>Out of these 5000 images I created two sets:</p> <ul> <li>256x256 image scaled from the original</li> <li>256x256 image cropped using the cervix segmentation kernel <a href="https://www.kaggle.com/chattob/cervix-segmentation-gmm">here</a>.</li> </ul> <p>Several models using as base the densenet161 and resnet152 models (with custom classification layers, just basic fully connected layers with custom number of hidden layers) were trained on these two sets, some data augmentation were included during training. The final submission consisted of a simple blend of the predictions of each of these models, inspired by Felix Laumon's <a href="https://github.com/felixlaumon/kaggle-right-whale/blob/master/scripts/blend_submission_files.py">script</a>. FWIW, I used PyTorch.</p>
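<p>The final blend mentioned above can be as simple as a weighted average of each model's class probabilities. A minimal sketch of that idea (the three-class shape and the uniform default weights are assumptions, not the exact blending script that was used):</p> <pre><code>import numpy as np

def blend_predictions(prob_list, weights=None):
    """Average per-class probabilities from several models.

    prob_list: list of arrays of shape (n_images, 3), one per model,
    each row summing to 1 over (Type_1, Type_2, Type_3).
    """
    probs = np.stack(prob_list)                     # (n_models, n_images, 3)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    blended = np.tensordot(weights, probs, axes=1)  # weighted mean over models
    return blended / blended.sum(axis=1, keepdims=True)
</code></pre>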
Intel & MobileODT Cervical Cancer Screening
Forum/Kernel based solution scoring 0.87401/0.83271
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for the confusion. It scores only 0.87401 on private.</p> <p>Here's a slightly modified public forum/kernel based solution that scores 0.87401/0.83271 (private/public) Modifications include: 1) BatchNormalization 2) Bagging 3) A few more layers of Conv and Dense. 4) Slightly larger input image resize. Should run in maybe an hour or so on AWS P2. Props to whoever made the original public leaderboard script.</p>
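<p>A minimal sketch of the kind of small Keras CNN these modifications point to (BatchNormalization after the convolutions, a few extra Conv/Dense layers, and bagging by averaging several trained copies). The layer counts, filter sizes and input resolution below are placeholders, not the original kernel's exact configuration:</p> <pre><code>import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, BatchNormalization, Dropout

def build_model(input_shape=(96, 96, 3), n_classes=3):
    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        BatchNormalization(),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        BatchNormalization(),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation='relu'),
        Dropout(0.5),
        Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    return model

def bagged_predict(models, x_test):
    # bagging: average the predicted probabilities of several independently trained copies
    return np.mean([m.predict(x_test) for m in models], axis=0)
</code></pre>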
Intel & MobileODT Cervical Cancer Screening
Sharing our 15th solution using PyTorch
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is the link: <a href="https://github.com/wangg12/intel-cervical-cancer">https://github.com/wangg12/intel-cervical-cancer</a>.</p> <p>We've learned a lot in this competition, especially from those top winners ( and congratulations to all top winners!). </p> <p>Also we decide to share our own PyTorch solution, which definitely has much room to be improved. </p> <p>Hope it can help you to some extent and any possible discussions are welcomed!</p>
Intel & MobileODT Cervical Cancer Screening
Solution Sharing and Congratulations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sure...for faster-rcnn we used tensorflow (tf-faster-rcnn by endernewton) and mxnet, each trained for around 10 epochs, and then used the most confidently predicted bounding box for each image to create crops. We then resized these to 197 x 197 and did basic image classification in keras with 15% simple random holdout for 30 reps (bagging), including a typical suite of data augmentation methods and saving the best model from each rep based on its holdout set. </p> <p>We'll need to see how the organizers want to handle things moving forward. At the very least, I'm guessing we should be able to share what software packages we used and enough detail to get you close.</p> <p>Great job and congrats to team TEST for a very nice win, another one right after Fisheries. Some smart guys from Lithuania for sure--they've got some secret sauce cooking! Impressive solo second by I Rustandi and excellent work by all top finishers.</p> <p>A quick overview of our #3 solution: First, I want to say what a pleasure it has been teaming with Giba, Xulei, and Joe. We had a lot of fun learning and strategizing together and effectively used slack to communicate across time zones. To start out, we probed the leaderboard and had the public test set labels to train on a few weeks before the model submission deadline. For most of our models we used yolo or faster-rcnn to do object detection--a big thanks to Paul for the bounding box annotations. We then created crops and used bagged image classification with resnet50 and vgg16 backbones and then hillclimbing to ensemble based on 2-fold cv with the public train and test sets as our splits. We only used the additional training data in one of our models and also included one custom model from Joe, plus one more based on unet from Scotty--thanks for sharing and raising the big concern about the near-duplicate images in Stage 2. Hope to add some more details later; please feel free to ask any specific questions.</p> <p>Most importantly, hoping we can all pitch in to really improve treatment decisions and early prevention of this disease.</p> <p>Hi gkericks, no, my understanding is this is still an issue with tensorflow. Fortunately I think the Kaggle organizers are okay with it and are instead, and rightly so, focused on making sure we have reproducibility within a reasonable random range and are compliant with the rules.</p>
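<p>The crop-then-classify step described above can be sketched as follows; the detection output format (corner coordinates plus a confidence score) is an assumption rather than the exact interface of the detectors that were used:</p> <pre><code>import cv2

def crop_top_box(image, detections, size=197):
    """Crop the most confident detection and resize it for classification.

    detections: list of (x_min, y_min, x_max, y_max, score) boxes in pixel
    coordinates, e.g. as produced by a Faster R-CNN or YOLO detector.
    """
    x1, y1, x2, y2, _ = max(detections, key=lambda d: d[-1])
    crop = image[int(y1):int(y2), int(x1):int(x2)]
    return cv2.resize(crop, (size, size))
</code></pre>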
Intel & MobileODT Cervical Cancer Screening
Summary of rank 6 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>dbscan, a clustering algorithm in sklearn. I used it to good effect in the nature conservancy fisheries competition, and just applied it as is to this comp.</p> <p>I didn't have bb annotations for the additional training set and didn't need it. SSD works pretty well with just the original set. I used 3-fold CV on that as well and applied all three models to all images to produce the bb list, and then used DBS to combine the bb list into one. The best submission uses only crops to train.</p> <p>Congratulations for the winners for a great effort! Glad to see the top solutions differ significantly in score, look forward to hearing your solutions!</p> <p>The most significant gain for my solution is using SSD to create bounding boxes for the Os. Huge thanks to <a href="https://www.kaggle.com/deveaup">Paul</a> for providing the bounding box annotations.</p> <p>My model is an ensemble of vgg-19, xception, resnet 50 and inception v3 models. The top two models are: vgg-19 fine-tuned with 7 frozen layers and xception with 2 frozen layers. They are similar in score. The rest, various fine-tuned and bottle-necked models, are significantly weaker but added slightly to the ensemble's effectiveness. All models ran with 3-fold CV, and are combined with linear regression to produce the final result.</p> <p>I attempted to include some "leak" features in a second submission, but its result is much worse. As far as I can see, the final results are straight. Great thanks to the organizers and admins for making this a useful competition!</p> <p>The whole pipeline ran on a gtx970 for five days, barely making the submission deadline. The top two models ran for 1.5 and 2 days each.</p> <p>More to pick out the top 1 (cancer comp) or few (fisheries comp). Higher confidence if several models pick out bb's that cluster together.</p>
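<p>A minimal sketch of how DBSCAN can be used to merge the bounding boxes coming from several detectors, as described above. Clustering on the box centres with a pixel-radius eps is an assumption; the write-up does not give the exact features or parameters that were used:</p> <pre><code>import numpy as np
from sklearn.cluster import DBSCAN

def combine_boxes(boxes, eps=50, min_samples=2):
    """boxes: array of shape (n_boxes, 4) with (x_min, y_min, x_max, y_max) rows.

    Boxes whose centres fall within eps pixels of each other are grouped and
    each group is reduced to its mean box; unclustered boxes are kept as-is.
    """
    boxes = np.asarray(boxes, dtype=float)
    centres = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(centres).labels_
    merged = [boxes[labels == k].mean(axis=0) for k in set(labels) if k != -1]
    merged += [b for b, lab in zip(boxes, labels) if lab == -1]
    return np.array(merged)
</code></pre>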
Intel & MobileODT Cervical Cancer Screening
validation strategy that worked
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Intel & MobileODT Cervical Cancer Screening <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The competition was very challenging because of the quality and the origin of the data. The main challenge for us was how to make use of the additional data and have a solid validation framework. </p> <p>The first thing we noticed was that the additional dataset had a strong correlation with the train images. Looking thoroughly at each image, we found that the training set included the best-quality image of a single patient, and all other frames were put into the additional data file.</p> <p>So to avoid similar-image leakage into our validation set we did the following:</p> <p>1) calculate image colour histograms. 2) run k-means clustering with k = 100 3) take 20 random clusters and use them for validation</p> <p>It's quite an easy solution, but it really helped to merge similar images into a cluster, and we avoided validation set contamination with similar images in the training set. As a result we could use the additional dataset without too many worries (we did put less weight on additional data in our final models).</p> <p>On top of that, we removed (by hand) all the blurry or unclear images in the additional dataset (roughly 15% of additional images).</p> <p>In the end we had a decent validation set, with a mix of original and additional dataset images. Our models on the validation set scored 0.75-0.8 - pretty close to our final result, which I am very happy with!</p> <p>Special thanks to this kernel which I used to calculate histograms and k-means for our validation: <a href="https://www.kaggle.com/vfdev5/type-1-clustering">https://www.kaggle.com/vfdev5/type-1-clustering</a></p> <p>It really deserves more upvotes:)</p> <p>Very interesting to hear how other teams did validation. Please share:)</p> <p>Unique patient detection was unnecessary and you overcomplicated yourself, I think.</p> <p>The point is not to have the same patient both in training and validation. From what I understood you did not achieve that.</p> <p>Having similar images in the training set does not hurt, because you would usually do some image augmentations anyway, so similar images could be considered as some kind of augmented data as well.</p> <p>Yes, that is indeed what I have done - merged train+additional and ran your k-means. Thanks again!</p> <p>You did the reverse of what we did :) the point was to not pick one image from a cluster, but use the whole cluster as validation:)</p>
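<p>A minimal sketch of the histogram plus k-means split described above; the histogram settings follow common practice rather than the referenced kernel exactly:</p> <pre><code>import numpy as np
import cv2
from sklearn.cluster import KMeans

def histogram_features(paths, bins=32):
    """One colour histogram per image (concatenated B, G, R channel histograms)."""
    feats = []
    for p in paths:
        img = cv2.imread(p)
        hist = [cv2.calcHist([img], [c], None, [bins], [0, 256]).ravel() for c in range(3)]
        feats.append(np.concatenate(hist))
    return np.array(feats)

def cluster_based_split(paths, n_clusters=100, n_val_clusters=20, seed=0):
    """k-means on the histograms, then 20 whole clusters become the validation set."""
    feats = histogram_features(paths)
    clusters = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(feats)
    rng = np.random.RandomState(seed)
    val_clusters = set(rng.choice(n_clusters, n_val_clusters, replace=False))
    val_idx = [i for i, c in enumerate(clusters) if c in val_clusters]
    train_idx = [i for i, c in enumerate(clusters) if c not in val_clusters]
    return train_idx, val_idx
</code></pre>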
JPX Tokyo Stock Exchange Prediction
10th Place - Simple Features + LGBM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: JPX Tokyo Stock Exchange Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This result has been a big surprise to me. Mainly because I had to stop working on it half-way through due to holiday and lack of time. </p> <p>Anyway, the notebook can be found <a href="https://www.kaggle.com/code/vuk1998/jpx-submission-template/notebook?scriptVersionId=95590449" target="_blank">here</a>.</p> <p>You can see that the model is very basic. There are some simple features such as various returns (open-&gt;close, close-&gt;close), amplitude, volatility, moving averages etc.</p> <p>Locally, I performed experiments (walk forward CV) to come up with the lgbm model attached.</p> <p>Additionally, since I was using some aggregate features I also came up with some sensible default values until they become available.</p> <p>Thanks to organisers for organising this!</p> <p>Thank you!</p>
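<p>A minimal sketch of the kind of simple per-stock features plus an LGBM regressor described above; the column names follow the competition's stock_prices.csv, while the exact feature set and hyperparameters here are placeholders:</p> <pre><code>import pandas as pd
import lightgbm as lgb

def add_features(prices):
    """prices: stock_prices.csv-style frame with Date, SecuritiesCode, Open, High, Low, Close."""
    prices = prices.sort_values(["SecuritiesCode", "Date"]).copy()
    prices["ret_oc"] = (prices["Close"] - prices["Open"]) / prices["Open"]        # open to close
    prices["ret_cc_1"] = prices.groupby("SecuritiesCode")["Close"].pct_change(1)  # close to close
    prices["amplitude"] = (prices["High"] - prices["Low"]) / prices["Close"]
    grouped_ret = prices.groupby("SecuritiesCode")["ret_cc_1"]
    prices["vola_20"] = grouped_ret.transform(lambda s: s.rolling(20).std())
    grouped_close = prices.groupby("SecuritiesCode")["Close"]
    prices["ma_gap_10"] = prices["Close"] / grouped_close.transform(lambda s: s.rolling(10).mean())
    return prices

FEATURES = ["ret_oc", "ret_cc_1", "amplitude", "vola_20", "ma_gap_10"]
# model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
# model.fit(train_df[FEATURES], train_df["Target"])
</code></pre>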
JPX Tokyo Stock Exchange Prediction
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: JPX Tokyo Stock Exchange Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>a quick question: do you combine the results from model 1 and 2?</p> </blockquote> <p>In this competition, we can submit two models. So, one submit is model1 and the other submit is model2.(model2 is 4th place)</p> <h4>update</h4> <p>We published the code in the following notebook.<br> <a href="https://www.kaggle.com/code/flat831/4th-place-model/notebook?scriptVersionId=100052889" target="_blank">https://www.kaggle.com/code/flat831/4th-place-model/notebook?scriptVersionId=100052889</a></p> <hr> <p>Thank you JPX for sharing your data and hosting a fun competition. Thank you Kaggle. Thank you Kagglers for sharing many helpful discussions and notebooks. Thank you my teammates, K-azu and hayato03!</p> <h2>Model Summary</h2> <p>Our model is very simple. To be honest, this result is mostly luck.</p> <p>model1(private score: -0.196)<br> Rank return_1day in ascending order<br> If ExpectedDividend is greater than 0, make it the lowest.</p> <p>model2:(private score: 0.347 &lt;- this model is 4th place!)<br> Rank return_1day in descending order.<br> If ExpectedDividend is greater than 0, make it the lowest.</p> <p>return_1day is based on the code in this <a href="https://www.kaggle.com/code/smeitoma/train-demo" target="_blank">notebook</a>.<br> <code>return_1day = feats["AdjustedClose"].pct_change(1)</code></p> <p>Below is a brief explanation of the reasons for this approach.</p> <h2>Difficulty in model evaluation</h2> <p><a href="https://www.kaggle.com/competitions/jpx-tokyo-stock-exchange-prediction/discussion/320323" target="_blank">https://www.kaggle.com/competitions/jpx-tokyo-stock-exchange-prediction/discussion/320323</a></p> <p>As noted in the discussion above, the random model scores can be approximated by a normal distribution of μ = 0 and σ = 0.13785. Therefore, even if we were to create a model with a score of -0.3~0.3, we cannot assume that it is a good model. Furthermore, if we create many models, the probability of a good score by chance increases. (Like the multiple comparisons problem in statistics)<br> Finally, we gave up on creating a ML model and decided to submit a simple approach to avoid errors.</p> <h2>ExpectedDividend</h2> <p>We adopted this variable because we were confident that it would be effective.</p> <p>The definition of ExpectedDividend column is as follows.</p> <blockquote> <p>Expected dividend value for ex-right date. 
This value is recorded 2 business days before ex-dividend date.</p> </blockquote> <p>And the definition of the target column is as follows.</p> <blockquote> <p>Change ratio of adjusted closing price between t+2 and t+1 where t+0 is TradeDate</p> </blockquote> <p>Thus, since t+2 is the ex-dividend date and t+1 is the date of record, the target is likely to be negative.<br> (However, since ExpectedDividend is almost always null, this alone does not get a high score…)</p> <h2>Flipped ranking</h2> <p><a href="https://www.kaggle.com/competitions/jpx-tokyo-stock-exchange-prediction/discussion/356038" target="_blank">https://www.kaggle.com/competitions/jpx-tokyo-stock-exchange-prediction/discussion/356038</a></p> <p>As noted in the discussion above, if we flip the ranking, we get the score multiplied by -1. In the competition, we can choose two models, so adopting this strategy will ensure a score &gt; 0.</p>
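<p>A minimal sketch of model 2's ranking rule as described above (a paraphrase of the idea, not the exact published notebook):</p> <pre><code>def make_ranks(day_df):
    """day_df: one day of test data with return_1day and ExpectedDividend columns."""
    day_df = day_df.copy()
    day_df["score"] = day_df["return_1day"]
    # push rows with a recorded expected dividend to the bottom of the ranking
    day_df.loc[day_df["ExpectedDividend"].fillna(0).gt(0), "score"] = -1e9
    # rank return_1day in descending order (0 = most preferred)
    day_df["Rank"] = day_df["score"].rank(ascending=False, method="first").astype(int) - 1
    return day_df
</code></pre>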
JPX Tokyo Stock Exchange Prediction
62nd place solution, Linear-Tree, Noise-reduced target, Ranking Method, and Feature Engineering
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: JPX Tokyo Stock Exchange Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It has been some time since the competition ended, but I would like to share my solution as a look back.</p> <h2>Model Selection</h2> <p>Instead of a complex, highly predictive model such as LightGBM or a neural network, I decided to use Linear-Tree, a simple but slightly more expressive model than linear regression models. I thought that the stock market is very noisy and a highly expressive model would easily learn such noise. Early stopping is effective in reducing this kind of overlearning, but it is still optimized to the validation data used for early stopping. Hyperparameter tuning using cross-validation and back-testing also optimizes parameters for the validation period, but it is difficult to verify how effective these values are in predicting future markets. Therefore, I decided to limit the degrees of freedom of the model by hand from the beginning. <br> For more information on Linear-Tree, please refer to <a href="https://github.com/cerlymarco/linear-tree" target="_blank">Marco Cerliani's repository</a>.</p> <h2>Objective Variable</h2> <p>I used the following y as my objective variable.</p> <p>y = residual return / std<br> residual return = (1 day Return on each stock) - (Average 1 day return of 2,000 stocks)<br> std = 20 days standard deviation of 1 day returns for each stock</p> <p>In this competition we need to rank each stock according to its relative attractiveness based on estimated returns and other factors. What is important is not the value of the return itself, but the relative return among 2,000 stocks. The use of residual returns allows for forecasts that are independent of market trends, as it removes fluctuations in the overall market.<br> I further divides this residual return by 20 days standard deviation of 1 day return to keep the signal-to-noise ratio constant. I thought that the same 1% return would have a different return importance for a stock with a std of 0.01 versus a stock with a std of 0.1.</p> <h2>Ranking Method</h2> <p>I used seven models to predict returns and created a score by dividing the mean of the seven predictions by the standard deviation of the seven predictions. I sorted the stocks according to the score. The standard deviation of the portfolio must also be reduced because the competition is evaluated not by the return of the portfolio but by the Sharpe ratio, which is the return divided by the standard deviation. 
I tried to reduce the standard deviation of the portfolio by using the standard deviation of each stock's return predictions as the forecast uncertainty and lowering the score for stocks with large forecast uncertainties.</p> <h2>Validation Method</h2> <p>CPCV</p> <h2>Feature Engineering</h2> <ul> <li>Basic technical indicators</li> <li>Basic financial indicators YoY or QoQ</li> <li>Percentage error between market forecasts and actual values of basic financial indicators that decrease exponentially from the date of publication</li> <li>Cosine similarity between basic chart pattern and price history</li> <li>Cosine similarity between this week's and last week's sector rankings</li> <li>Number of days from the date of disclosure of financial results</li> </ul> <h2>Other Prescriptions</h2> <p>I excluded returns that straddle the earnings disclosure date from the training data. </p> <h2>Unimplemented ideas</h2> <ul> <li>Feature Neutralization (for more detail, check <a href="https://www.kaggle.com/competitions/jpx-tokyo-stock-exchange-prediction/discussion/335393" target="_blank">@UKI's post</a> and <a href="https://www.kaggle.com/code/code1110/janestreet-avoid-overfit-feature-neutralization/notebook" target="_blank">@KATSU1110's notebook</a> )</li> <li>Quantile Regression for estimating prediction uncertainty (<a href="https://arxiv.org/abs/2012.07245" target="_blank">inspired by this paper</a>)</li> <li>Direct prediction of standard deviation of returns</li> </ul>
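<p>A minimal sketch of the noise-reduced target and the mean/std ranking score described above; the input column names (for example ret_1d) are placeholders:</p> <pre><code>import pandas as pd

def make_target(df):
    """y = residual return / 20-day std, per the definition above.

    df needs columns Date, SecuritiesCode and ret_1d (1-day return per stock).
    """
    market_ret = df.groupby("Date")["ret_1d"].transform("mean")
    residual = df["ret_1d"] - market_ret
    std_20 = df.groupby("SecuritiesCode")["ret_1d"].transform(lambda s: s.rolling(20).std())
    return residual / std_20

def ensemble_score(pred_matrix):
    """pred_matrix: array of shape (n_stocks, n_models) with the 7 models' predictions.

    Score = mean of the predictions divided by their standard deviation, so stocks
    with uncertain forecasts are pushed down the ranking.
    """
    return pred_matrix.mean(axis=1) / pred_matrix.std(axis=1)
</code></pre>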
JPX Tokyo Stock Exchange Prediction
8th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: JPX Tokyo Stock Exchange Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all thanks to JPX for hosting such an interesting competition and thanks to all contributors that shared their ideas and code. It helped a lot to understand what the goal was and how to approach it. This was our ( <a href="https://www.kaggle.com/MrChrissie" target="_blank">@MrChrissie</a> ´s &amp; my ) first competition in the area of finance and we gained a huge amount of knowledge through other contributions. <br> This post is inspired by <a href="https://www.kaggle.com/flat831" target="_blank">@flat831</a> who posted his team solution earlier.</p> <p><strong>Approach and Model:</strong><br> After a lot of trial and error, we came up with a relatively simple idea. <br> We divided the stocks by the 33SectorName into groups and trained a separate LGBM regressor model for each group. Our entire model then consisted of one LGBM model for each sector. This idea came under the assumption that stocks of the same sector tend to correlate. The prediction target of our model was the change ratio aka. the target column. For the submission, we ranked the predicted change ratio.</p> <p>Since we ran out of time in the end, a single LGBM model was tuned to all stocks using Optuna and the resulting hyperparameter values were then used for our sub-models. <br> We only tuned the following parameters:</p> <ul> <li>num_leaves</li> <li>max_depth</li> <li>learning_rate</li> <li>n_estimators</li> </ul> <p>For the training, we used available features from the stock_price and stock_list files.</p>
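<p>A minimal sketch of the one-LGBM-per-sector setup described above; the hyperparameter values are placeholders rather than the Optuna-tuned ones:</p> <pre><code>import lightgbm as lgb

def train_sector_models(train_df, features, target="Target", sector_col="33SectorName"):
    """Train one LGBMRegressor per sector, all sharing the same hyperparameters."""
    params = dict(num_leaves=31, max_depth=8, learning_rate=0.05, n_estimators=300)
    models = {}
    for sector, part in train_df.groupby(sector_col):
        model = lgb.LGBMRegressor(**params)
        model.fit(part[features], part[target])
        models[sector] = model
    return models

def predict_ranks(models, test_df, features, sector_col="33SectorName"):
    out = test_df.copy()
    for sector, part in test_df.groupby(sector_col):
        out.loc[part.index, "pred"] = models[sector].predict(part[features])
    # rank the predicted change ratios for the submission
    return out["pred"].rank(ascending=False, method="first")
</code></pre>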
JPX Tokyo Stock Exchange Prediction
About the 2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: JPX Tokyo Stock Exchange Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First congratulations to the <a href="https://www.kaggle.com/uioiuioi" target="_blank">winners</a>. </p> <p>Unfortunately, I think the solution is flawed. To be exact, I don't think the <a href="https://www.kaggle.com/code/uioiuioi/2nd-place-solution" target="_blank">script</a> is engineering the features as described <a href="https://www.kaggle.com/competitions/jpx-tokyo-stock-exchange-prediction/discussion/361482" target="_blank">here</a></p> <p>For example, my understanding of <code>return_1month: 20 business day return of closing price</code> is the 20 day pct_change per security code, not the pct_change between the nth row and n+20th row. </p> <p>To illustrate my point, here is the comparison of actual 20day pct change and the values from this <a href="https://www.kaggle.com/code/uioiuioi/2nd-place-solution" target="_blank">script</a> for <code>SecuritiesCode==1301</code>:<br> The blue line is the true 1month return, while the orange line is the values from the winning script. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6379868%2Fadecf2e7f45571189c7ba4611d673165%2FScreen%20Shot%202022-11-05%20at%205.33.14%20PM.png?generation=1667640816645828&amp;alt=media" alt=""></p> <p>I copied the 2nd place notebook and revised it so the intended features were created (e.g. per security 1 month return). The notebook is <a href="https://www.kaggle.com/code/xxxxyyyy80008/dissecting-and-correcting-2nd-place-solution/notebook" target="_blank">here</a>. The late submission score is -0.214, which means had the author correctly engineered the features s/he intended to create, the solution would be far from being on the top.</p>
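<p>The difference being pointed out can be shown in a few lines of pandas: a per-security pct_change versus a row-wise pct_change over the concatenated frame (column names as in stock_prices.csv):</p> <pre><code>import pandas as pd

def add_1month_returns(df):
    df = df.sort_values(["SecuritiesCode", "Date"]).copy()
    # intended feature: 20-business-day return computed per security code
    df["return_1month"] = df.groupby("SecuritiesCode")["Close"].pct_change(20)
    # what a plain row-wise pct_change computes instead: row n versus row n+20 of the
    # concatenated frame, which mixes different securities at the boundaries
    df["return_1month_rowwise"] = df["Close"].pct_change(20)
    return df
</code></pre>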
Jane Street Market Prediction
10th Place Solution: Geometric Brownian Motion and Mixture Density Networks
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Given that the our solution is still number 2 after the second rerun, we have slightly more confidence to write a post about the methodology. we also use regression which is, I think, unique in this competition. </p> <p><strong>Key Points of the Method:</strong></p> <ol> <li>Geometric Brownian Motion with Drift Fitting </li> <li>Mixture Density Network with negative log likelihood loss function</li> </ol> <p>These two are the only significant key differences I have seen between our solution and the other posted notebooks. </p> <p><strong>Geometric Brownian Motion with Drift Fitting:</strong></p> <p>One of the most basic models used when modeling financial assets is geometric brownian motion with drift. Assuming that our data here roughly follows such a distribution, sampling the value of the price at a given time gives us very little information on this financial asset because of the stochastic nature of the process. The drift in price however is much more fundamental to the particular stock than the return at any particular time so this is a great feature to engineer. We use this as a target feature when training the neural network.</p> <p>The following code computes the drift factor for each asset in the training through MLE on resp_1 to resp_4. This notebook really influenced the code: <a href="https://www.kaggle.com/pcarta/jane-street-time-horizons-and-volatilities" target="_blank">https://www.kaggle.com/pcarta/jane-street-time-horizons-and-volatilities</a><br> Here is my code: I am happy to answer questions in comments but the above notebook really explains everything, all I did was use a more complex model (with drift) and do the same maximum likelihood estimation, basically converting math to pytorch. 
</p> <pre><code>import numpy as np
import torch
from tqdm import tqdm

# `resps` is a DataFrame of the resp columns; `columns` holds their names
# (both defined earlier in the notebook).
mus = None
sigmas = None
horizons = None

def train_gbm():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    dT_init = np.ones(len(columns)-1).astype(float)
    sigma_init = np.random.randn(len(resps))*.01 + 1
    mu_param = torch.tensor(np.random.randn(len(resps))*.01, device=device).requires_grad_(True)
    with torch.no_grad():
        dT_param_log = torch.log(torch.tensor(dT_init)).to(device).requires_grad_(True)
        sigma_param_log = torch.log(torch.tensor(sigma_init)).to(device).requires_grad_(True)
    W = torch.tensor(resps.values + 1, device=device).float()
    W_init = torch.cat((torch.ones((W.size(0),1)).to(device), W[:,:-1]), dim = 1)
    opt = torch.optim.Adam([dT_param_log, sigma_param_log, mu_param], lr = 0.001)
    iteration = 0
    for iteration in tqdm(range(15000)):
        opt.zero_grad()
        # add the fixed 1 at the beginning of the time deltas
        dT_log = torch.cat([torch.zeros(1, device=device) , dT_param_log])
        dT = torch.exp(dT_log).view((1,-1))
        sigmas = torch.exp(sigma_param_log).view((-1,1))
        mus = mu_param.view((-1,1))
        ln_W = torch.log(W)
        ln_W_init = torch.log(W_init)
        log_prob = -ln_W - torch.log(sigmas) - 1/2 * torch.log(dT) - (ln_W - ln_W_init - (mus - 0.5*sigmas**2)@dT)**2/(2*sigmas**2 @ dT)
        row_liklihood = log_prob.sum(dim = 1)
        total_liklihood = row_liklihood.mean(dim = 0)
        negative_total_liklihood = -total_liklihood
        if(iteration % 1000 == 0):
            print(negative_total_liklihood)
        negative_total_liklihood.backward()
        opt.step()
    times = torch.exp(torch.cat([torch.zeros(1, device=device) , dT_param_log])).cumsum(dim = 0)
    mus = mu_param.detach().cpu().numpy()
    sigmas = torch.exp(sigma_param_log).detach().cpu().numpy()
    horizons = times.detach().cpu().numpy()
    return mus, sigmas, horizons

mus, sigmas, horizons = train_gbm()

import scipy
from scipy import special

expected_return = np.exp(mus * horizons[3]) - 1
lam = sigmas * np.sqrt(horizons[3])
row = (mus - 0.5*sigmas**2)*horizons[3]
probability_greater_zero = 0.5 - 0.5 * special.erf(-row/(np.sqrt(2)*lam))
</code></pre> <p>This gives me the expected value of resp assuming that the asset follows a geometric brownian motion distribution, as well as the drift parameter and the probability that resp is greater than 0. All of these could be used to fine-tune the training of the network. I played around with all three as targets and also thought of label smoothing techniques for binary classification using the sigma parameter. </p> <p><strong>Mixture Density Network:</strong><br> A mixture density network is just a fancy name for a neural network that predicts a conditional probability distribution as an output. I assumed that the distribution of my target variables (resp_1, resp_2, resp_3, resp, resp_4, drift, expected_return) of a financial asset conditioned on the 130 features provided is approximately normal. I don't have a very good justification for this assumption, but considering that the unpredictable inherent randomness of a financial asset is most likely a combination of a large number of smaller random variables, we can use a central limit theorem argument to justify this.</p> <p>The MDN predicts the parameters of a multivariate normal distribution and it does so by returning a lower triangular matrix of shape 6x6 and a mean vector of length 6. The six components are (resp_1, resp_2, resp_3, resp, resp_4, drift) respectively. And the loss function is negative log likelihood. 
</p> <p>Here is the code for anyone interested:</p> <pre><code>import torch
import torch.nn as nn

class MDNDecisionMaker(nn.Module):
    def __init__(self, in_dim, out_dim, hidden_dim, dropout_prob):
        super().__init__()
        self.hidden = hidden_dim
        self.out_dim = out_dim
        self.dropout = dropout_prob
        self.feed_forward_network = nn.Sequential(
            nn.BatchNorm1d(in_dim),
            nn.Linear(in_dim, self.hidden), #1
            nn.SiLU(),
            nn.Dropout(p = 0.3),
            nn.Linear(self.hidden, self.hidden), #2
            nn.SiLU(),
            nn.Dropout(p = 0.3),
            nn.Linear(self.hidden, self.hidden), #3
            nn.SiLU(),
            nn.Dropout(p = 0.3),
            nn.Linear(self.hidden, self.hidden), #3
            nn.SiLU(),
            nn.BatchNorm1d(self.hidden)
        ).double()

        # predict mean value of multivariate gaussian distribution
        self.mean_network = nn.Sequential(
            nn.Linear(self.hidden, self.hidden), #3
            nn.SiLU(),
            nn.Linear(self.hidden, out_dim)
        ).double()

        # predict non diagonal lower triangular values of matrix
        self.cholesky_nondiag_sigmas_network = nn.Sequential(
            nn.Linear(self.hidden, self.hidden), #3
            nn.SiLU(),
            nn.Linear(self.hidden, out_dim*out_dim), #2
        ).double()

        # predict the diagonal elements, these must be non zero to ensure invertibility
        self.cholesky_diag_sigmas_network = nn.Sequential(
            nn.Linear(self.hidden, self.hidden), #3
            nn.SiLU(),
            nn.Linear(self.hidden, out_dim)
        ).double()

        self.bceloss = nn.BCELoss()

    def forward(self, x, return_covariance = False):
        parameters = self.feed_forward_network(x.double())
        means = self.mean_network(parameters)
        cholesky_lower_triangular = torch.tril(self.cholesky_nondiag_sigmas_network(parameters).view(-1, self.out_dim, self.out_dim), diagonal = -1)
        cholesky_diag = torch.diag_embed(torch.exp(self.cholesky_diag_sigmas_network(parameters)).view(-1, self.out_dim))
        cholesky_sigmas = cholesky_diag + cholesky_lower_triangular
        if return_covariance:
            covariances = torch.bmm(cholesky_sigmas, torch.transpose(cholesky_sigmas, 1, 2))
            return means, covariances
        return means, cholesky_sigmas

    def liklihood_loss(self, means, cholesky_covariances, samples):
        batch_dim = samples.size()[0]
        means = means.double()
        samples = samples.double()
        cholesky_covariances = cholesky_covariances.double()
        log_probs = -1/2 * torch.bmm((samples-means).view(batch_dim, 1, self.out_dim), torch.cholesky_solve((samples - means).view(batch_dim, self.out_dim, 1), cholesky_covariances)).view(-1,1) \
            -torch.log(torch.diagonal(cholesky_covariances, dim1=-2, dim2=-1)).sum(dim = 1, keepdim = True) \
            - self.out_dim/2 * 1.83787706641
        log_liklihood = log_probs.sum(dim = 0)/batch_dim
        return -log_liklihood
</code></pre>
And we tried ensembling models from different folds to boost performance. <br> <strong>Predictions</strong><br> We played around with this for a while, I used monte-carlo simulation where I sampled from the geometric brownian motion process I parameterized earlier. It turned out that nothing worked better for me than taking the mean of the output mean vector from the neural network and trading if that was positive and not trading if it was negative. </li> </ol> <p>I wasn't able to fully pinpoint where the NaN's were coming up. I had full control over the diagonal entries in my lower triangular matrix (these are the eigenvalues) so I initially (unsuccessfully) used an ELU (exponential linear unit) activation function to parameterize the eigenvalues to have some minimum value (10^-3 for instance). I think this would have have a similar effect to clipping singular values. My best guess is that when the neural network predicted a near singular covariance matrix, the gradient of the loss function would explode. A stable optimizer, higher precision and a small dropout made the network outputs less noisy and less likely to predict a near singular covariance matrix. <br> Every row in the training set I treated as an individual asset. I didn't use any of the features aside from inputs to my network. </p> <p>I should have included the definition of columns but you are correct. Also we use returns to estimate the time horizons for each of the resp (which is what dT is) and we also estimate the volatility (which is the sigma parameter in geometric brownian motion). </p> <p>I think of the GBM step as a version of feature engineering as I am creating a new target (drift) with the given targets (resp_1…resp_4, resp). The intuition behind this feature is that assuming that geometric brownian motion models the price of the financial asset, the drift parameter is much more fundamental to the financial asset than the price at any given time. Thus, I hypothesis is that drift is more correlated with the 130 provided features than resp_i. This means that the conditional distribution of drift given the 130 features is more peaked. </p> <p>While the GBM step found the optimal estimate for drift given resp_i, the MDN is used to figure out the conditional distribution of drift given the 130 features. Once I found the conditional distribution of drift I can make a probabilistic decision on my trading or passing based on that distribution. </p> <p>I think the confusion arises because I am estimating drift in the GBM step and then again training a neural network to estimate the drift again. But estimating drift in the GBM step I am using resp_i and I am pretty sure I am estimating drift with high accuracy. Think of this as a preprocessing step, like filtering an image for noise before doing image classification. On the other hand, the MDN predicts resp based on the 130 features.</p>
Jane Street Market Prediction
24th place solution: Couple more tricks, and how I kinda sorta cheated
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Inspired by <a href="https://www.kaggle.com/dmitryvyudin" target="_blank">@dmitryvyudin</a>'s recent post, I thought I'd also comment on a couple of tricks I used that I haven't seen anyone else mention. I think the first couple are good ideas, while the last is getting into a bit of a cheating grey area, and is certainly not deployable. Curious to hear if anyone else tried any of these!</p> <p><strong>1. Dynamic Label Smoothing based on resp</strong><br> From very early on, I was frustrated that regression approaches were not working, and felt that classification was throwing away valuable information contained in the Resp targets. When I came across label smoothing (1 -&gt; 0.99, 0 --&gt; 0.01), I had the idea to use resp to control how much smoothing to do. So my classification labels were actually sigmoid(a*resp), where a is a large, tunable parameter. This way, if resp was very close to zero, the 0/1 label would actually be blunted to something a bit closer to 0.5. This was repeated for all 5 resp targets, fed into multi-target NN, and I simply averaged the final predictions. This had a measurable benefit to my CV scores.</p> <p><strong>2. Log the weights in my CV.</strong><br> The most challenging part of this contest IMO was measuring which models were actually doing well. I would constantly get conflicting advice on different runs, even when ensembling many random seeds together or repeating experiments. In addition to focusing on metrics like AUC, something else that helped was log-transforming my weights for my internal CV utility function. This seemed to help mitigate the effects of large-weighted trades without ignoring them entirely.</p> <p>(and as many others have mentioned, I almost entirely ignored the public LB. Both my submissions were in the bottom half in public LB rank.)</p> <p><strong>3. Use weights to triage to faster/slower models in deployment.</strong><br> I thought this was pretty clever, but it's unclear how much it actually helped me. If you look at the weight distribution and the utility score, it turns out that a few observations make up the vast amount of the total weight. Because of this, you can actually deploy much better/slower models on just a few obs, and get massive benefit. For example, obs with a weight of 30 or more only make up 3 % of the observations, but a third of the totalweight. I threw an ensemble of 20 MLPs, 5 Densenets, and 5 resnets at these observations. The lowest weighted obs barely mattered, so they got 1 MLP, and ran in 1/30th the time. In the end, my deployment run time was FAR below the max allowed, whereas probably would have timed out if I had thrown the full 30-model ensemble at every observation. </p> <p><strong>4. Head I win, tails you lose.</strong><br> From early on, I had decide that I was going to use my two submission for opposing thresholds. </p> <p>(An aside: I was never convinced that the utility score actually penalized risk-taking like the hosts intended. 
The sharpe ratio balances risk and return, but because it is multiplied by total return, the return term is actually in the utility equation twice, and thus outweighs the volatility penalty in the denominator in practice. I found no evidence that a "conservative" approach in my submissions ever helped. But maybe I'm missing something…)</p> <p>All that is to say: I think 0.5 is the best/correct threshold if you have to pick one. However, because we have two submissions, I submitted one that was "bearish" and one that was "bullish". So basically one submission with a threshold of 0.49, and one with a threshold of 0.51.</p> <p>But I took this one step further: If feat_0 is in fact long/short or buy/sell, then shorts should be more aggressive in a bad market, and longs should be more aggressive in a good one. So in the end, my "bearish" submission uses a 0.49 threshold for feat_0 == 0 obs, and a 0.51 thresh for feat_0 == 1 obs, and my other submission does the opposite. If the market booms relative to the training data, one of my submissions will do well. If it crashes, the other will do well. I could have gone more aggressive with something like 0.47 and 0.53, but I also wanted to do reasonably well if the market did exactly as well as it did during the training data. </p> <p>If Jane Street can find a way to trade in two parallel universes, and only suffer the consequences of the one that works out better for them, I'd highly recommend this trading strategy.</p> <p>Other than these tricks, everything I did was pretty similar to stuff found in published notebooks. I ensembled MLPs, densenets, and resnets, with the most weight on MLPs. 3-4 layers, dropout, swish, very few epochs. First 85 days INCLUDED, and weight == 0 observations included. I'm not sure why so many folks took this data out.</p> <p>Curious to hear feedback, and looking forward to the next 6 months of suspense! </p> <p>Thanks! I didn't have any super slow models that were performing dramatically better, so I don't know if it really helped me that much. But it did help me sleep easier knowing that a timeout error was far less likely. </p> <p>I'm not totally sure what you mean by "model so big it would need a small model first", but yes, I also couldn't find any big/complex model that performed particularly well. Everything fancy I tried basically overfit. That's why my "big" model was actually just an ensemble of a ton of relatively simple networks with different seeds. Sorry you got stuck with a simple model, but if it makes you feel better, the stock market seems basically unaware of the current crisis (at least since August), so maybe a simple model will do well over the next 6 months… very curious to see if the shakeup continues.</p>
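<p>A minimal sketch of the dynamic label smoothing trick described above; the sharpness value a is a placeholder, not the tuned parameter:</p> <pre><code>import numpy as np

def soft_labels(resp, a=300.0):
    """Dynamic label smoothing: sigmoid(a * resp) instead of a hard 0/1 label.

    resp values close to zero give labels close to 0.5, so near-zero returns
    carry a weaker classification signal.
    """
    return 1.0 / (1.0 + np.exp(-a * np.asarray(resp, dtype=np.float64)))

# e.g. multi-target labels for the five resp columns:
# targets = np.column_stack([soft_labels(train[c]) for c in
#                            ["resp", "resp_1", "resp_2", "resp_3", "resp_4"]])
</code></pre>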
Jane Street Market Prediction
3217th -> 17th place simple LSTM model with custom loss function
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Ah sorry link didn't work as there wasn't a saved version. <a href="https://www.kaggle.com/faisalalsrheed" target="_blank">@faisalalsrheed</a> it should work now.</p> <p>To make a custom loss function, you can just define a function my_loss(ytrue, ypred) that returns the loss, and pass it to the model.compile function.</p> <p>I was very surprised at how well my model is doing on the reruns given its final public LB score was only ~4000, so thought I would share my method!</p> <p>It's a simple, small LSTM network where I've trained on the time series data by splitting it into chunks of between 5 and 10 time points - 7 in the submission.</p> <p>The only explanation I can think of for the increase in relative performance on the rerun data is that I used a loss function that tries to penalise high variance in returns in a similar way to the utility function:<br> $$L = - \sum_i p_i + \beta \sum_i p_i^2$$<br> where <br> $$p_i = \text{response}_i.\text{action}_i$$</p> <p>and beta is a tunable parameter for the extent of the penalty.</p> <p>Maybe as we get further from the training regime the variance of the models' responses increases and this loss function is able to prevent that to some extent by penalising 'risky' trades.</p> <p>Here's the notebook:<br> <a href="https://www.kaggle.com/scottishkimchi/lstm-with-custom-loss" target="_blank">https://www.kaggle.com/scottishkimchi/lstm-with-custom-loss</a></p>
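<p>A minimal sketch of such a custom loss in Keras, following the formula above; BETA is the tunable variance penalty and its value here is a placeholder:</p> <pre><code>from tensorflow.keras import backend as K

BETA = 1.0  # tunable penalty strength (placeholder value)

def utility_like_loss(y_true, y_pred):
    """y_true: resp of each trade, y_pred: predicted action in [0, 1]."""
    p = y_true * y_pred
    return -K.sum(p) + BETA * K.sum(K.square(p))

# model.compile(optimizer="adam", loss=utility_like_loss)
</code></pre>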
Jane Street Market Prediction
39th Place - Solution Overview & Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I thought I’d post an overview of my solution while I am relatively high on the leaderboard which might not last for long. My solution is based on a simple MLP with some elements that I have not seen posted before including a bit of feature and target engineering and weight-dependent voting in post-processing.</p> <h1>1. CV Strategy</h1> <p>For me this is by far the most important part – with the amount of noise in the data everything else would not be possible to evaluate without a decent CV strategy.</p> <p>I’ve spent a huge chunk of time trying to find a validation approach that gave at least somewhat consistent and logical results. I eventually landed on using mean OOF utility scores similar to what I posted <a href="https://www.kaggle.com/dmitryvyudin/jane-street-oof-utility-scores" target="_blank">here</a>. From the very start I decided not to use the public LB other than to check that the code runs and does not time out – otherwise the temptation to overfit the public LB would be too difficult to resist. </p> <p>I used all the data for both training and validation based on <code>GroupKFold</code> split using dates as groups – this was a risky decision as I assumed there was no leakage between training and validation dates. I have a few reasons to think that this leakage indeed was not a problem:<br> • First, this seems to be an HFT competition – as <a href="https://www.kaggle.com/gregorycalvez" target="_blank">@gregorycalvez</a> noticed in this excellent <a href="https://www.kaggle.com/gregorycalvez/de-anonymization-time-aggregation-tags" target="_blank">notebook</a>, the <code>NaN</code> patterns suggest that the data contains some rolling intraday features. This does not mean that there are no other rolling features covering many days, but from my point of view, it does make it less likely/important.<br> • I tried splitting data into groups of consecutive days – from 1 to 100 days in one group. In case of leakage you could expect to get better results with 1-day groups compared to 100-day groups because there is potentially much more interaction between training and validation data. But after tons of experiments I just could not see this effect.<br> For the actual training I used groups of 50 consecutive days.<br> • At the beginning I tried splitting data completely randomly, i.e. using trades from the same date both for training and validation. In this setup the intraday leakage was noticeable but even then my model had to run for a lot of epochs to pick up this information with validation scores continuing to improve even after 100’s of epochs. I have not observed this behavior with <code>GroupKFold</code>.</p> <h1>2. Feature Engineering</h1> <p>There were a number of discussions about the meaning of <code>feature_0</code> (buy/sell, long/short?). I have no idea what the correct answer is – my hypothesis is that it is produced by a separate JS model that selects the trading opportunities. And if this is true then its recent history might be indicative of some sort of market condition. 
This gave me an idea to build a series of rolling 'lag' features based on <code>feature_0</code> and this resulted in a modest but noticeable improvement in the CV scores.</p> <p>I have also added a few other features based on the ‘clock’ <code>feature_64</code> that together also improved the CV score:<br> • Binary feature representing part of the trading day (before/after lunch)<br> • Number of trades suggested by JS algorithm earlier today (for the first part of the day) or after lunch (for the second part of the day) - the intuition here that together with 'clock' this feature could also represent a market condition (e.g. more trade opportunities = more volatility)<br> • 'Gradient' of <code>feature_64</code> with respect to timestamp - similar intuition to the previous point</p> <h1>3. Target Engineering</h1> <p>Like many others, I have noticed that treating this task as multi-label classification leads to better results compared to trying to predict just one label - <code>resp</code>. I have made a couple of adjustments compared to most public notebooks:<br> • I did not use <code>resp_4</code> - my CV always went down when I tried to add it. This could be to some extent explained by the fact (conjecture?) that the time horizon of <code>resp_4</code> is longer than that of <code>resp</code>.<br> • Instead, I added the mean value of <code>resp</code>, <code>resp_1</code>, <code>resp_2</code> and <code>resp_3</code> as a separate target which did improve the CV score. This can be thought of as a proxy for the general direction of returns over the whole <code>resp</code> time horizon.</p> <h1>4. Model &amp; Training</h1> <p>Nothing interesting architecture-wise – very basic 3-layer MLP with batch normalization and dropout. </p> <p>For optimization I used <a href="https://arxiv.org/abs/1904.00962" target="_blank">LAMB </a> with Lookahead – according to the paper, LAMB is supposed to work well with large batch sizes and it seemed to outperform other optimizers I tried. </p> <h1>5. Inference &amp; Post-Processing</h1> <p>I did not like the idea of playing with the threshold – values other than 0.5 seem artificial and lack any intuitive meaning. Instead, for most of the competition, I used a ‘qualified’ majority voting. I.e. only accepting the trade opportunity if 66% of models ‘voted’ for it. </p> <p>Later on, I started taking the weight of the trading opportunity into account - the more the weight of the opportunity, the more confident I have to be to act on it. This results in slightly better utility scores since the utility score formula punishes high variance.</p> <p>Overall I used 50 models (5 folds &amp; 10 seeds) with each model having 5 votes – one vote per target.</p> <h1>6. Code</h1> <p>• Training - <a href="https://www.kaggle.com/dmitryvyudin/jane-street-tf-lamb" target="_blank">https://www.kaggle.com/dmitryvyudin/jane-street-tf-lamb</a><br> • Inference - <a href="https://www.kaggle.com/dmitryvyudin/jane-street-weighted-voting-inference" target="_blank">https://www.kaggle.com/dmitryvyudin/jane-street-weighted-voting-inference</a></p> <h1>7. 
Other</h1> <p>Some of the things I tried but could not make work based on my CV strategy:<br> • Treating the problem as regression as opposed to classification – much better results with L1 loss than MSE but still worse than classification<br> • Using weight or log(weight) in the loss function<br> • Adding noise to targets<br> • Knowledge distillation as a target de-noising technique – this looked very promising but I found a leakage literally hours before the submission deadline and had to revert to earlier, simpler models<br> • <a href="https://arxiv.org/abs/2006.00719" target="_blank">AdaHessian </a> optimizer (was very excited about the results initially but in the end LAMB performed better)<br> • TabNet – found it too slow for this competition </p> <p>I used all the features. I played with feature selection for a bit but did not see any meaningful performance gains.</p>
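<p>A minimal sketch of the weight-dependent "qualified majority" voting described above; the vote thresholds and the weight cut-off are illustrative, not the tuned values:</p> <pre><code>import numpy as np

def weighted_vote(probs, weight, base_thresh=0.66, extra=0.05, weight_cut=30.0):
    """probs: per-model, per-target probabilities for one trading opportunity.

    A trade is taken only if a qualified majority of votes (probability above 0.5)
    agrees, and high-weight opportunities require a larger majority.
    """
    probs = np.asarray(probs)
    votes = float(np.mean(np.greater(probs, 0.5)))
    required = base_thresh + extra * float(np.greater(weight, weight_cut))
    return int(np.greater_equal(votes, required))
</code></pre>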
Jane Street Market Prediction
3rd place solution: Ensembles of deep (49 layer) MLPs
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>between 13k and 14.5k</p> <p>No, I didn't use pretrained layers, just random init. The number of epochs is quite dependent on the batch size. When I used ~5k samples per batch, then 10-20 epochs yielded the best results. But then I found this nice <a href="https://www.kaggle.com/aimind/1dcnn-search-resnet" target="_blank">notebook</a>, which motivated me to test larger batch sizes. <br> I totally felt the same about the k-fold and purged CV. And actually I was also quite "nervous" about using such a simple approach, but I am happy that I am not the only one who did so.<br> Well, I also tried many different things for feat_0, but nothing really worked out. Even the models that did not use f0 worked pretty well. I am curious if anybody found some way to take real advantage of feat_0.<br> Thanks :)</p> <p>After many initial experiments with more or less „complicated“ approaches I ended up with almost no feature engineering, no CV, but an ensemble of 15 deep MLPs, each of them consisting of 49 layers. </p> <p><strong>Data:</strong></p> <ul> <li>Remove first 85 days</li> <li>Rows with weight=0 were included</li> <li>NaN filling: median conditioned on f0 (I think this was actually my best idea throughout this competition :D )</li> </ul> <p><strong>Model:</strong></p> <ul> <li>input block: batchnorm + logarithmic feature extension (→ 130 std features, 130 log features)</li> <li>block 0: 3 dense layers with 100 units, batchnorm, dropout (0.35) and mish activation</li> <li>block 1-23: 2 dense layers with 100 units, batchnorm, dropout (0.35) and mish activation</li> <li>skip connections between Block 0 and Block i (i=[1,..,23])</li> <li>output: Dense layer with 5 units (all 5 ‚resps‘) - sigmoid activation</li> <li>15 ensembles of these deep models</li> <li>action: mean over all 15x5 outputs; threshold = 0.5</li> </ul> <p><strong>Training:</strong></p> <ul> <li>batch size = 30k</li> <li>Adam with lr = 1e-3</li> <li>200 epochs</li> <li>loss: mixture of BCE (label smoothing = 0.1) and a „utility-like“ loss</li> </ul> <p><strong>Inference:</strong><br> The models are converted to tf-lite interpreters. This really speeds up submission. Otherwise it never would have been possible to evaluate so many deep models.</p> <p><strong>Validation:</strong><br> The basic model shape and the hyperparameters were chosen from a simple train/validation split on my local machine. In between I also used a GKF CV but I wasn't really happy with it, so I went back to this very simple train/validation split (train: 85-300, val: 350-500).<br> To reduce the impact of different seeds, I reran my tests up to 8 times. This gave me quite a good estimate of how new implementations really perform.<br> For the final model, the number of epochs for training was found by retraining the models and submitting them. So one could say I used the public LB data for early stopping. As we have 1 million rows in the public set, this seems legit to me.</p> <p><strong>Ensembles:</strong><br> It is often said that ensembles only improve the performance if the models learn something different, i.e. if they are trained with different data. 
I think this is true in general, but here the situation is a little different. We saw an extreme dependence on seeds for different public models. I also found this for my single models. Using many models trained from different seeds kinda solves this issue. Like fighting fire with fire, fight randomness with randomness.</p> <p>Yes, correct. Each block is connected to the previous block, so there are 49 layers in total per model.<br> Regarding the skip connections, I was wondering when somebody would ask this question. Actually, it was by accident, but I only realized it after I ran the training, and I found it does a quite OK job. I think this kind of skip connection, where the residual between the layer output and the low-level features is used, can serve as additional regularization, assuming the low-level features are still noisy.<br> Unfortunately, there was no time left to check how standard skip connections perform.</p> <p>No problem. Block 0 is connected directly to the input block, and block x for x={1,..,23} is connected to the residual of block x-1 and block 0. I hope it's clear; if not, I will plot the model :)</p>
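<p>To make the block structure easier to picture, here is a minimal Keras sketch of one way to read the description above. The layer sizes match the bullets, but the exact log-feature extension and the way the block-0 skip connection is combined are my assumptions, not the author's code:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

def mish(x):
    # mish activation, defined by hand to avoid an extra dependency
    return x * tf.math.tanh(tf.math.softplus(x))

def dense_block(x, n_layers, units=100, dropout=0.35):
    for _ in range(n_layers):
        x = layers.Dense(units)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation(mish)(x)
        x = layers.Dropout(dropout)(x)
    return x

inp = layers.Input(shape=(130,))
x = layers.BatchNormalization()(inp)
# "logarithmic feature extension": append log1p(|x|) to the standardised features
x = layers.Concatenate()([x, layers.Lambda(lambda t: tf.math.log1p(tf.math.abs(t)))(x)])

block0 = dense_block(x, n_layers=3)              # block 0: 3 dense layers
h = block0
for _ in range(23):                              # blocks 1-23: 2 dense layers each
    h = dense_block(layers.Concatenate()([h, block0]), n_layers=2)

out = layers.Dense(5, activation="sigmoid")(h)   # one sigmoid per resp target
model = tf.keras.Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="binary_crossentropy")
</code></pre>
<p>At inference time the action would be taken when the mean of the 15 models' 5 outputs exceeds 0.5. Concatenating block 0 into every later block is only one plausible reading of "connected to the residual of block x-1 and block 0".</p>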
Jane Street Market Prediction
Current 29th place approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot for your reply!</p> <p>I'll try to understand the effect of ensembling more clearly. </p> <p>I'm relatively new to Kaggle, so I'm surprised to be so high on the leaderboard (maybe largely because of luck). I learned a lot from this competition. I would like to share my approach. I published an <a href="https://www.kaggle.com/hedwig100/js-inference-notebook" target="_blank">inference notebook</a> and a <a href="https://www.kaggle.com/hedwig100/nn-training-notebook" target="_blank">training notebook</a>. </p> <p>Later in this competition, I noticed that ensembling models boosted my local CV. I think this is because it makes the predictions more stable. So, I trained various models and ensembled them. Details are shown below. </p> <p><br> </p> <h1>Training Strategy</h1> <ul> <li>PurgedGroupTimeSeriesSplit (5 folds and a 20-group gap)</li> <li>but leakage may have been introduced when training the encoder… </li> <li>watched AUC and utility score in each fold.</li> <li>used early stopping on the validation AUC </li> </ul> <p><br></p> <h1>Preprocess</h1> <ul> <li>fill NaN with 0 and FeatureNeutralization(p=0.25)</li> <li>fill NaN with the mean </li> <li>denoising autoencoder</li> </ul> <p><br></p> <h1>Model</h1> <ul> <li>simple NN, CNN and DenseNet. </li> <li>I also used <a href="https://www.kaggle.com/a763337092/pytorch-resnet-starter-training" target="_blank">this famous notebook</a>'s model architecture. I trained this model with my CV strategy. </li> <li>In the first submission, I used my 3 models. </li> <li>In the second submission, I used my 3 models + 1 PyTorch model</li> <li>For each model, I used the weights trained in the last fold and the weights trained with all data (except for weight = 0 rows and the first 85 days)</li> <li>the second submission achieved the higher score. </li> </ul> <p><br></p> <h1>Ensemble</h1> <ul> <li>weighted average</li> <li>weights were decided by CV. </li> </ul> <p><br> </p> <p>Thank you for reading!</p>
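<p>For completeness, the final ensembling step can be as simple as the sketch below; the model names and weights here are placeholders for illustration, not the CV-chosen values:</p>
<pre><code>import numpy as np

def blend_action(prob_list, weights, threshold=0.5):
    """Weighted average of per-model probabilities, then threshold into 0/1 actions."""
    probs = np.average(np.stack(prob_list, axis=0), axis=0, weights=weights)
    return (probs &gt; threshold).astype(int)

# e.g. three models blended with CV-derived weights
action = blend_action([p_nn, p_cnn, p_densenet], weights=[0.4, 0.3, 0.3])
</code></pre>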
Jane Street Market Prediction
My (perhaps) over-complicated LSTM solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. <a href="https://www.kaggle.com/c/jane-street-market-prediction/discussion/216833" target="_blank">https://www.kaggle.com/c/jane-street-market-prediction/discussion/216833</a> </p> <p>tl;dr: <a href="https://www.kaggle.com/snippsy/js-over-complicated-lstm" target="_blank">Notebook</a></p> <p>This was the first Kaggle competition I have put more than a little effort into. <br> It has been great to discuss and share ideas in the community. </p> <p>I see now from some other published solutions that going simpler would have been better, as I'm not that high up on the leaderboard. A saving grace is that at least my model scores better on the private than the public leaderboard. From 4900 -&gt; 3700 (given 60% of the data, the score would be 2900 if the data were the same). This could perhaps be down to underfitting. </p> <p>The three things I did in the solution were:</p> <ol> <li><p><strong>Using features 41-43 to separate stocks</strong>. From <a href="https://www.kaggle.com/gregorycalvez/de-anonymization-price-quantity-stocks" target="_blank">this</a> notebook. <br> This enabled me to use a time-series based model. </p></li> <li><p><strong>Time embedding</strong>. Each stock could have from a few to hundreds of data points at irregular times. I wanted to emphasise this in the model; therefore, based on <a href="https://arxiv.org/abs/2003.09291" target="_blank">this</a> paper, I created dense time features based on <code>feature_64</code>. </p></li> <li><p><strong>Target denoising</strong>. Based on the idea from <a href="https://www.kaggle.com/lucasmorin/target-engineering-patterns-denoising" target="_blank">this</a> notebook, I used a linear regressor to denoise the targets. This improved the stability of the model massively. </p></li> </ol> <p>In addition to this, I dropped the <code>dates &lt;85</code> and data with <code>weight = 0</code>. I tried using these data as well, but it yielded worse results. <code>CombPurgedKFoldCV</code> was used with 4 folds and 2 test periods per fold. </p> <p>I did not have too much time to spare in the last few weeks of the competition, so I did minimal hyperparameter optimization (just manually), and therefore the model is fairly simple, with <code>&lt;100k</code> parameters if I remember correctly. </p> <p>Finally, thank you to Jane Street for providing this great puzzle, and to all that have contributed actively to the discussions. This competition has been a great learning opportunity. </p> <p>Best of luck to all. </p>
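<p>For readers curious what "dense time features" can look like, here is a generic Time2Vec-style sketch (one linear plus several periodic components of a scalar time feature). It only illustrates the general idea; it is not the exact formulation of the cited paper or of my notebook:</p>
<pre><code>import numpy as np

def time_embedding(t, k=8, seed=0):
    """Map a scalar time feature (e.g. feature_64) to k dense features:
    one linear component and k-1 sinusoidal components."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=np.float32).reshape(-1, 1)
    w = rng.normal(size=(1, k)).astype(np.float32)
    b = rng.normal(size=(1, k)).astype(np.float32)
    linear = t * w[:, :1] + b[:, :1]
    periodic = np.sin(t @ w[:, 1:] + b[:, 1:])
    return np.concatenate([linear, periodic], axis=1)
</code></pre>
<p>In a neural model, w and b would of course be trainable parameters rather than fixed random draws.</p>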
Jane Street Market Prediction
RANK 119 - Simple Solution 😉
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here we go 😎<br> -&gt; Removed features which had high collinearity with others and used feature selection to keep only what was needed<br> -&gt; Used Optuna to get the best hyper-parameters!<br> -&gt; Trained 5 XGB models on 5 different targets #Multi-target<br> -&gt; Used treelite for faster inference (was able to predict within 2:40 mins)</p> <p>That's all 😂 </p> <p><strong>The most surprising thing is that this was my first Kaggle competition and I had just started learning Machine Learning through YouTube and online courses.</strong></p> <p><strong>I know it is so simple!!!😁 There is nothing to boast about</strong>. But I was quite happy because I had done all this within the first one-two months of learning ML and Kaggling.💛🧡 </p> <p>Learnt a lot from this Jane Street competition!!!</p> <p>Here is the <strong>notebook</strong>🔽<br> <a href="https://www.kaggle.com/oldwine357/janestreet-xgb-rank-80-73-362-314-317?scriptVersionId=62345420" target="_blank">https://www.kaggle.com/oldwine357/janestreet-xgb-rank-80-73-362-314-317?scriptVersionId=62345420</a></p> <p>Most of all I would like to thank <a href="https://www.kaggle.com/code1110" target="_blank">@code1110</a> for introducing treelite; I wouldn't have done it without a faster inference method.🧡</p> <p><a href="https://www.kaggle.com/code1110/janestreet-faster-inference-by-xgb-with-treelite" target="_blank">https://www.kaggle.com/code1110/janestreet-faster-inference-by-xgb-with-treelite</a></p>
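<p>A minimal sketch of the "5 XGB models on 5 different targets" idea is below. The <code>train</code>/<code>test</code> frames, column names and hyper-parameters are placeholders; the actual notebook uses Optuna-tuned parameters and treelite-compiled models for fast inference:</p>
<pre><code>import numpy as np
import xgboost as xgb

targets = ["resp", "resp_1", "resp_2", "resp_3", "resp_4"]
feature_cols = [c for c in train.columns if c.startswith("feature")]

models = {}
for t in targets:
    y = (train[t] &gt; 0).astype(int)          # one binary target per resp horizon
    m = xgb.XGBClassifier(n_estimators=500, max_depth=8, learning_rate=0.05,
                          tree_method="hist", n_jobs=-1)
    m.fit(train[feature_cols], y)
    models[t] = m

# average the 5 probabilities and trade when the mean is above 0.5
probs = np.mean([models[t].predict_proba(test[feature_cols])[:, 1] for t in targets], axis=0)
action = (probs &gt; 0.5).astype(int)
</code></pre>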
Jane Street Market Prediction
Thanks! Becoming (Perhaps) One of the Youngest Competition Master
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My first gold medal! A huge thank you to the Kaggle community; without the discussion posts and public notebooks, I could not have reached this point! </p> <p>I posted an <a href="https://towardsdatascience.com/my-journey-to-kaggle-master-at-the-age-of-14-e2c42b19c6f7" target="_blank">article</a> about the story and some tips I learned along the way.</p> <h2>A Brief Summary of My Journey</h2> <p>Hi, I'm Andy Wang, a 14-year-old high school freshman. I became interested in Data Science and Machine Learning about a year ago. I found Kaggle hoping to enhance my skills and gain more experience in the industry. With my friend <a href="url" target="_blank">Andre Ye</a>, we started joining Kaggle competitions hoping to learn and discover new things. It was a shock to us when we received a silver medal in the <a href="https://www.kaggle.com/c/lish-moa" target="_blank">first competition</a> we entered, thanks to a huge shakeup. Afterwards, we joined 2 more competitions and were amazed by the results. I never thought that I could reach this point; the biggest thanks go to the Kaggle community and my friend Andre. The knowledge I acquired from the competitions matters more than the results. </p> <p>Finally, the biggest lesson that I've learned from competitions is that <strong>simple solutions can go a long way.</strong> With the right cross-validation scheme and a stable CV, the results were never too bad (especially in this competition).</p> <p>Again, I cannot thank the Kaggle community enough for teaching me and leading me through the fascinating field of Machine Learning. </p> <p>(Aiming for Competition GM now! 😃 )</p>
Jane Street Market Prediction
Yirun's Solution (1st place): Training Supervised Autoencoder with MLP
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jane Street Market Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>No domain knowledge is needed.</p> <p>I have various more complex solutions but not submitted. Not sure how they perform.</p> <p><strong>Update:</strong> The single-model AE-MLP scores 6022.202 on the private leaderboard, which is still 1st place!</p> <p><strong>Notebook (Training):</strong> <a href="https://www.kaggle.com/gogo827jz/jane-street-supervised-autoencoder-mlp?scriptVersionId=73762661" target="_blank">Jane Street: Supervised Autoencoder MLP</a></p> <p>In this competition, I have used a supervised autoencoder MLP approach and my teammates use XGBoost. Our final submission is a simple blend of these two models. Here, I would like to explain my approach in detail. </p> <p>The supervised autoencoder approach was initially proposed in <a href="https://www.kaggle.com/aimind/bottleneck-encoder-mlp-keras-tuner-8601c5" target="_blank">Bottleneck encoder + MLP + Keras Tuner 8601c5</a>, where one supervised autoencoder is trained separately before cross-validation (CV) split. I have realised that this training may cause label leakage because the autoencoder has seen part of the data in the validation set in each CV split and it can generate label-leakage features to overfit. So, my approach is to train the supervised autoencoder along with MLP in one model in each CV split. The training processes and explanations are given in the notebook and the following statements.</p> <p><strong>Cross-Validation (CV) Strategy and Feature Engineering:</strong></p> <ul> <li>5-fold 31-gap purged group time-series split</li> <li>Remove first 85 days for training since they have different feature variance</li> <li>Forward-fill the missing values</li> <li>Transfer all resp targets (resp, resp_1, resp_2, resp_3, resp_4) to action for multi-label classification</li> <li>Use the mean of the absolute values of all resp targets as sample weights for training so that the model can focus on capturing samples with large absolute resp.</li> <li>During inference, the mean of all predicted actions is taken as the final probability</li> </ul> <p><strong>Deep Learning Model:</strong></p> <ul> <li>Use autoencoder to create new features, concatenating with the original features as the input to the downstream MLP model</li> <li>Train autoencoder and MLP together in each CV split to prevent data leakage</li> <li>Add target information to autoencoder (supervised learning) to force it to generate more relevant features, and to create a shortcut for backpropagation of gradient</li> <li>Add Gaussian noise layer before encoder for data augmentation and to prevent overfitting</li> <li>Use swish activation function instead of ReLU to prevent ‘dead neuron’ and smooth the gradient</li> <li>Batch Normalisation and Dropout are used for MLP</li> <li>Train the model with 3 different random seeds and take the average to reduce prediction variance</li> <li>Only use the models (with different seeds) trained in the last two CV splits since they have seen more data</li> <li>Only monitor the BCE loss of MLP instead of the overall loss for early stopping</li> <li>Use Hyperopt to find the optimal hyperparameter set</li> </ul> <p><code>classification = 
3718.025 (6842.609 Previous Leaderboard)</code><br> Is this one of your overfitting classification models? Its score does not look that 'overfitting', because lots of overfitting models scored 10000+ on the previous LB.</p> <p><a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a> That makes sense. 👍</p> <p>I used Hyperopt to search the global hyperparameter set. It improved the score a lot.</p> <p>I only remember that I was criticised for publishing the neural network starter notebook which uses the vanilla group CV split (not a purged time-series CV) and therefore potentially contains time-leakage and overfitting problems. Actually, he was right on this, so I added some explanation later.</p> <p>Sounds very interesting. Please let me know if it is published someday. 👍</p> <p>Please see more details in <br> <a href="https://www.kaggle.com/c/jane-street-market-prediction/discussion/229623" target="_blank">https://www.kaggle.com/c/jane-street-market-prediction/discussion/229623</a><br> <a href="https://www.kaggle.com/xiaowangiiiii/current-1th-jane-street-ae-mlp-xgb" target="_blank">https://www.kaggle.com/xiaowangiiiii/current-1th-jane-street-ae-mlp-xgb</a></p> <p>Please check the summary figure in my notebook, which should answer your question about target information.</p> <p>You only need the encoder part of the autoencoder when predicting.</p>
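<p>To make the architecture description concrete, here is a minimal sketch of a supervised autoencoder trained jointly with the downstream MLP as one Keras model. Layer sizes, noise level, dropout rates and loss weights are illustrative placeholders, not the tuned values from the notebook:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

def create_ae_mlp(n_features, n_labels, hidden=96):
    inp = layers.Input(shape=(n_features,))
    x0 = layers.BatchNormalization()(inp)

    # autoencoder branch with Gaussian noise for augmentation
    enc = layers.GaussianNoise(0.05)(x0)
    enc = layers.Dense(hidden, activation="swish")(enc)
    dec = layers.Dense(n_features, name="decoder")(enc)

    # supervised head on the decoded features: feeds label information back into the AE
    ae_x = layers.Dense(hidden, activation="swish")(dec)
    ae_out = layers.Dense(n_labels, activation="sigmoid", name="ae_action")(layers.Dropout(0.2)(ae_x))

    # downstream MLP on original features concatenated with the encoded features
    x = layers.Concatenate()([x0, enc])
    for units in (256, 128):
        x = layers.Dense(units)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("swish")(x)
        x = layers.Dropout(0.3)(x)
    out = layers.Dense(n_labels, activation="sigmoid", name="action")(x)

    model = tf.keras.Model(inp, [dec, ae_out, out])
    model.compile(optimizer="adam",
                  loss={"decoder": "mse",
                        "ae_action": "binary_crossentropy",
                        "action": "binary_crossentropy"},
                  metrics={"action": tf.keras.metrics.AUC(name="auc")})
    return model
</code></pre>
<p>The key point is that this whole graph is rebuilt and trained inside each CV split, so the autoencoder never sees the validation fold; only the <code>action</code> head (and the encoder feeding it) is needed at inference time.</p>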
Jigsaw Multilingual Toxic Comment Classification
100th place solution (+ GitHub)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="/gauravdahiya">@gauravdahiya</a> </p> <p>Thanks <a href="/mdselimreza">@mdselimreza</a> </p> <p>I did not place so well this time, but still got a medal, so I think it is still worth it to share my solution. Congratulations to all winners and new GMs. This was a very interesting competition; being able to work with huge datasets and models was very challenging. Luckily my best submission was the last one that I made 😄. I had many more ideas to try, including using external data, but had no time. I also have a <a href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification">Git repository</a> with my experiments.</p> <h2>Quick summary</h2> <ul> <li>Model: 5-Fold <code>XLM_RoBERTa large</code> using the last layer <code>CLS</code> token. <ul><li>5-Fold <code>XLM_RoBERTa large</code> concatenated <code>AVG</code> and <code>MAX</code> last layer pooled with 8 multi-sample dropout.</li></ul></li> <li>Inference: I used <code>TTA</code> for each sample; I predicted on <code>head</code>, <code>tail</code>, and a mix of both tokens.</li> <li>Labels: <code>toxic</code> cast to <code>int</code> then pseudo-labels on the test set.</li> <li>Dataset: For the 1st model I used a 1:2 ratio between toxic and non-toxic samples, for the 2nd model a 1:1 ratio, and also did some basic cleaning. (details below)</li> <li>Framework: <code>Tensorflow</code> with TPU.</li> </ul> <h2>Detailed summary</h2> <h3>Model &amp; training</h3> <ul> <li>1st model <a href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification/blob/master/Model%20backlog/Train/99-jigsaw-fold1-xlm-roberta-large-best.ipynb">link for the 1st fold training</a> <ul><li>5-Fold XLM_RoBERTa large with the last layer <code>CLS</code> token</li> <li>Sequence length: 192</li> <li>Batch size: 128</li> <li>Epochs: 4</li> <li>Learning rate: 1e-5</li> <li>Training schedule: Exponential decay with 1 epoch warm-up</li> <li>Losses: BinaryCrossentropy on the labels cast to <code>int</code></li></ul></li> </ul> <p>Trained the model 4 epochs on the train set, then 1 epoch on the validation set, and after that 2 epochs on the test set with pseudo-labels.</p> <ul> <li>2nd model <a href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification/blob/master/Model%20backlog/Train/136-jigsaw-fold1-xlm-roberta-ratio-1-8-sample-drop.ipynb">link for the 1st fold training</a> <ul><li>5-Fold XLM_RoBERTa large with the last layer <code>AVG</code> and <code>MAX</code> pooled, concatenated, then fed to 8 multi-sample dropout</li> <li>Sequence length: 192</li> <li>Batch size: 128</li> <li>Epochs: 3</li> <li>Learning rate: 1e-5</li> <li>Training schedule: Exponential decay with 10% of steps warm-up</li> <li>Losses: BinaryCrossentropy on the labels cast to <code>int</code></li></ul></li> </ul> <p>Trained the model 3 epochs on the train set, then 2 epochs on the validation set, and after that 2 epochs on the test set with pseudo-labels.</p> <h3>Datasets</h3> <ul> <li>1st model <a 
href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification/blob/master/Datasets/jigsaw-data-split-roberta-192-ratio-2-upper.ipynb">dataset creation</a> 1:2 toxic to non-toxic samples, 400830 total samples.</li> <li>2nd model <a href="https://github.com/dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification/blob/master/Datasets/jigsaw-data-split-roberta-192-ratio-1-clean-polish.ipynb">dataset creation</a> 1:1 toxic to non-toxic samples, 267220 total samples.</li> </ul> <p>Both models used a similar dataset, upper case text, just a sample of the negative data, data cleaning was just removal of <code>numbers</code>, <code>#hash-tags</code>, <code>@mentios</code>, <code>links</code>, and <code>multiple white spaces</code>. Tokenizer was <code>AutoTokenizer.from_pretrained(</code>jplu/tf-xlm-roberta-large', lowercase=False)` like many more.</p> <h3>Inference</h3> <p>For inference, I predicted the <code>head</code>, <code>tail</code>, and a mix of both <code>(50% head &amp;amp; 50%tail)</code> sentence tokens.</p> <p>Thanks <a href="/yihdarshieh">@yihdarshieh</a> </p> <p>TTA is <code>test time augmentation</code> it is more common on image data, where you rotate, zoom or do some other transformations on inference time, to get an average prediction, in my case I used 192 tokens from the original sentence, so in case of very long sentences, I was predicting, in the first 192, last 192, and on the first 96 + last 96, this way I can have a better context of the whole text. Multi-sample dropout is a technique to that is supposed to decrease training time and increase generalization of models by combining random masks of the upper model, here is the <a href="https://arxiv.org/pdf/1905.09788.pdf">paper</a> and an <a href="https://towardsdatascience.com/multi-sample-dropout-in-keras-ea8b8a9bfd83">article</a> about it, I can say that this was not very significant in my case, and the <code>8</code> comes because I used 8 samples of it, on the link for the 2nd model you can check my Tensorflow implementation, is very simple.</p> <p>Thank you <a href="/rohitsingh9990">@rohitsingh9990</a> </p> <p>Thanks <a href="/haythemtellili5">@haythemtellili5</a> </p>
Jigsaw Multilingual Toxic Comment Classification
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my wonderful teammates <a href="/naivelamb">@naivelamb</a> <a href="/wuyhbb">@wuyhbb</a> <a href="/terenceliu4444">@terenceliu4444</a> <a href="/hughshaoqz">@hughshaoqz</a>, thank you for helping me get my 5th gold medal!</p> <p>Congratulations to all winners!</p> <h1>Summary</h1> <p>Here are the main ideas that worked for us.</p> <ul> <li>Translated Data</li> <li>Pseudo Labelling</li> <li>Multi-Stage Training</li> <li>Freeze Embed Layer</li> <li>UDA (<a href="https://arxiv.org/pdf/1904.12848.pdf">https://arxiv.org/pdf/1904.12848.pdf</a>)</li> <li>Mono Language Modeling</li> <li>Test Time Augmentation</li> </ul> <h1>Details</h1> <h3>Translated Data</h3> <p>We used the translated data listed here for training and testing (for TTA):</p> <p><a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/159888">https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/159888</a></p> <p>Thank you guys for publishing these excellent datasets!</p> <h3>Pseudo Labelling</h3> <p>We trained some baseline models, then blended them with some public submission files to get to LB <code>0.9481</code>, and used this as the pseudo label till the end.</p> <ul> <li><a href="https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta</a></li> <li><a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta</a></li> </ul> <p>When training on this test data with pseudo labels, we found that using all test data with a KL-divergence loss on soft labels gave us the best performance.</p> <h3>Multi-Stage Training</h3> <p>We all know that fine-tuning the model on the validation set after training on the train set can boost the LB score, and we call it <code>2-stage training</code>. (<a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta</a>) By adding more stages to our training pipeline we were able to further boost our LB score. 
Some of our pipelines look like:</p> <ul> <li>Pseudo Labelling (5epo) -&gt; Train1 (1epo) -&gt; Valid (3epo)</li> <li>Train2 (1epo) -&gt; Train1 (1epo) -&gt; Valid (3epo)</li> <li>Train1 (1epo) -&gt; Pseudo Labelling (3epo) -&gt; Valid (3epo)</li> <li>...</li> </ul> <p>(Train1: training data of the 1st Jigsaw competition in 2018) (Train2: training data of the 2nd Jigsaw competition in 2019) (Pseudo Labelling: test data of this competition) (Valid: validation data of this competition)</p> <p>Note that we usually don't train on the full dataset when using Train1 and Train2, but on a subset of it.</p> <h3>Freeze Embed Layer</h3> <p>Freezing the embedding layer of the transformers saves GPU memory, so we were able to use a bigger batch size when training, while also speeding up the training process.</p> <p>With multi-stage training, pseudo labelling and the frozen embedding layer we were able to get to LB <code>0.9487</code> without blending any public submission file.</p> <h3>Test Time Augmentation</h3> <p>We ran inference on all 6 languages, and the blending weight between the original language and the other 5 languages was 8:2. We got to LB <code>0.9492</code> with this TTA.</p> <h3>UDA</h3> <p>reference: <a href="https://arxiv.org/pdf/1904.12848.pdf">https://arxiv.org/pdf/1904.12848.pdf</a></p> <p>Then we blended in some models that used UDA during training to get to LB <code>0.9498</code>; with some further tuning we got to LB <code>0.9500</code>.</p> <h3>Mono Language Modeling</h3> <p>In the last week, we found that blending with models trained only on single-language data can also boost the CV score. Since the validation data only contains <code>es</code>, <code>it</code>, and <code>tr</code>, we only did <code>Mono Language Modeling</code> on these 3 languages.</p> <p>By blending models trained only on <code>es</code>, <code>it</code>, or <code>tr</code>, we finally got to our best public LB <code>0.9504</code>, and this is our best submission on the private LB as well. </p> <hr> <h1>Thank you for reading!</h1>
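<p>For readers wondering what "freeze embed layer" looks like in code, here is a PyTorch/HuggingFace sketch of the idea (this is an illustration, not our exact training code; the model name is an example):</p>
<pre><code>import torch
from transformers import XLMRobertaForSequenceClassification

model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-large", num_labels=1)

# The token/position embeddings are a large chunk of the parameters; freezing them
# saves optimizer memory and allows a bigger batch size with little accuracy cost.
for param in model.roberta.embeddings.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-5)
</code></pre>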
Jigsaw Multilingual Toxic Comment Classification
11th place solution(Hui Qin's)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners. Thanks to Kaggle for putting on this fun competition. Thanks to my teammates @Yang Zhang @MaChaogong @Hikkiiiiiiiii @Morphy</p> <p>It is my first time solving a multilingual problem. Thanks for those great starter kernels: <a href="https://www.kaggle.com/miklgr500/jigsaw-tpu-bert-two-stage-training">https://www.kaggle.com/miklgr500/jigsaw-tpu-bert-two-stage-training</a> <a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta</a> I followed these codes at the very beginning and quickly got an LB 0.9415 single XLM-R large model. </p> <p>After working on this problem for 3 months, I found some useful tricks. </p> <p>1) What is the best training policy? There are three training policies: one-stage training, two-stage training and three-stage training. a) One-stage training means that we always train on the same data throughout training. <br> This policy always gets the lower scores, but we can get some diversity. <br> b) Two-stage training means that we have two different datasets for training, e.g. <br> we first train on the "Toxic Comment Classification" dataset + "Unintended Bias in Toxicity" dataset. After that we train on the 8k validation data. <br> This policy always gets the higher scores. <br> c) Three-stage training means that we have three different datasets for training, e.g. we first train on the "Toxic Comment Classification" dataset + "Unintended Bias in Toxicity" dataset. Secondly we train on the 200k open-subtitles data. <br> Finally we train on the 8k validation data. This policy always gets similar scores to two-stage training, but we get higher private scores. For example, one of my XLM-R large models got LB 0.9407 and a 0.9414 private score; another XLM-R large model got LB 0.9420 and a 0.9424 private score. </p> <p>Therefore, three-stage training is the best training policy. I don't know why. <br></p> <p>2) How to use the data? </p> <p>At the very beginning, my teammate found that when we train on the "Toxic Comment Classification" dataset + "Unintended Bias in Toxicity" dataset, the validation AUC is always lower than when training on the "Toxic Comment Classification" dataset only. So to get a higher validation AUC, we could not simply combine many different datasets for training. <br> About 9 days ago, I found that if we use our best single model to predict the training data's target and use the predicted target as the label instead of the 0/1 label, then we can get a higher validation AUC score. E.g. we use our XLM-R large model (LB 0.9411) to predict the "Toxic Comment Classification" dataset and get its predicted targets. Something like this:<br> id toxic preds<br> 1 0 0.341<br> 2 1 0.813<br> 3 0 0.211</p> <p>Then we use the predicted probabilities as the label targets to train. Using this trick, we always get a ~0.003 boost in validation AUC. This trick can be used on other datasets which have 0/1 labels. After applying this trick to both the "Toxic Comment Classification" dataset and the "Unintended Bias in Toxicity" dataset, we can use them together to train our model. The more data, the higher the validation AUC score. 
</p> <p>Based on this, I trained an XLM-R large model with 790k training samples (the "Toxic Comment Classification" dataset, the "Unintended Bias in Toxicity" dataset and the open-subtitles dataset). It got an LB 0.9464 score and a 0.9445 private score. </p> <p>That's all. Hope these two tricks can help you. </p> <p>Thanks for sharing.</p> <p>You are welcome. You can use the BCE loss as your loss function, for example <code>F.binary_cross_entropy_with_logits</code>.</p>
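<p>A minimal PyTorch sketch of the soft-label trick above (the variable names <code>student_logits</code> and <code>teacher_probs</code> are placeholders): <code>binary_cross_entropy_with_logits</code> accepts targets anywhere in [0, 1], so the stronger model's predicted probabilities can be used directly in place of the 0/1 labels.</p>
<pre><code>import torch
import torch.nn.functional as F

def soft_label_bce(student_logits, teacher_probs):
    """student_logits: raw outputs of the model being trained, shape (batch,).
    teacher_probs: predicted toxic probabilities from the stronger model, shape (batch,)."""
    return F.binary_cross_entropy_with_logits(student_logits, teacher_probs)
</code></pre>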
Jigsaw Multilingual Toxic Comment Classification
13th Place Solution Part I. My first gold medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hehe, thanks a lot, big brother! 😊 </p> <p>First of all, a huge thank you to my teammates <a href="/euclidean">@euclidean</a> <a href="/nullrecurrent">@nullrecurrent</a> <a href="/luohongchen1993">@luohongchen1993</a> <a href="/sherryli94">@sherryli94</a>, you guys are so creative and diligent, and I learned tremendously from all of you. This gold medal is the first gold for all of us, and I feel so honored to share this experience with you. Cheers! </p> <p>And congratulations to all medal winners :)</p> <p>Most of the top winners have already covered many of our "what worked/what didn't work" items, so I want to use this post to record some points that I think we did differently. </p> <h3>Pipeline</h3> <p>Our final pipeline is: 1) train on a selective part of the training set, 2) train on another part of the training set + PL on the test set, 3) 5-fold CV on validation + albumentation-style augmentation of validation (thanks to <a href="/shonenkov">@shonenkov</a>'s kernel) </p> <p>In the 5-fold CV, using soft labels, (validation prediction + validation true label)/2, improved the 5-fold CV score. Some very controversial labels become ~0.5, which is intuitive to me. </p> <p>Also, randomizing the order of samples in the 5-fold CV while keeping the same comment (original and its augmentation) in the same batch helped add additional variability. Our ratio of augmented vs original comments was 1:1. More augmentation didn't help on the LB for us. </p> <h3>Training Diversity</h3> <p>We added a lot of variability in training using LR decay, cosine annealing, SWA, class weights, sample weights (on augmentation/language), different pre-trained models, and frozen embedding/encoding layers. Besides XLM-R, we also tried LSTMs, etc. These models may not work well alone, but the diversity helped in our stacking phase. </p> <h3>Stacking</h3> <p>We ended up with 1900+ first-level features in stacking, which is quite amazing. We eventually passed 2 StackNet models and 1 forward-selection model (all with LB 0.9494) to the post-processing phase. </p> <p>Initially, the greedy forward-selection ensemble overfit the validation AUC massively, but adding random initialization, and adding a logloss criterion along with the AUC criterion, helped to get a well-balanced blend. </p> <h3>Post Processing</h3> <p>We scaled up the predicted probability of comments that contain curse words in the different languages. That improved each stacking model by about 3 bps, and the rank mean of them finally sent us to the edge of the gold zone. </p> <h3>Things that didn't work for me</h3> <ul> <li>Using PCA of embeddings to construct a training set similar to the validation and test sets. </li> <li>A PCA-selected train set to substitute for the validation set. </li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F834077%2Fb41ed87175b5b6ac45a779a7afc40ab1%2Fttagoutou.jpg?generation=1593209576045814&amp;alt=media" alt=""></p> <p>Thank you!</p> <p>Thank you Piyush! Yes, as a matter of fact, we wouldn't have reached the gold range without PP, although the improvement was only 3-4 bps compared to some top teams' 10+/20+ bps. It was crucial to get to the top. </p> <p>Sorry about the late reply! 
There is no such thing as a dumb question :) <br> Forward selection is basically the method that first chooses the model with the highest CV score (in this case validation AUC) as the ensemble set, and then greedily/iteratively adds the model that increases the CV score the most to the current ensemble set, one model at a time. </p>
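<p>A minimal sketch of that greedy forward selection (with replacement, so a strong model can be added more than once); <code>preds</code> and <code>y</code> are placeholders for the stacked validation predictions and labels:</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score

def forward_selection(preds, y, n_iters=50):
    """preds: array of shape (n_models, n_samples); returns indices of the chosen models."""
    chosen = [int(np.argmax([roc_auc_score(y, p) for p in preds]))]
    for _ in range(n_iters):
        blend = preds[chosen].mean(axis=0)
        base = roc_auc_score(y, blend)
        # score of adding each candidate to the current blend
        scores = [roc_auc_score(y, (blend * len(chosen) + p) / (len(chosen) + 1)) for p in preds]
        best = int(np.argmax(scores))
        if scores[best] &lt;= base:
            break
        chosen.append(best)
    return chosen
</code></pre>
<p>The random initialization and the extra logloss criterion mentioned above would be added on top of this basic loop to make it less prone to overfitting the validation AUC.</p>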
Jigsaw Multilingual Toxic Comment Classification
16th Place Solution: Simple Calibrations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners! Thanks to the Kaggle team, the organizers, and the educative and interesting notebooks (@riblidezso, @shonenkov, @jazivxt). That was my first NLP competition; here I got acquainted with SOTA models in NLP (GloVe/FastText + LSTMs -&gt; Transformer -&gt; BERT -&gt; XLM-RoBERTa, an exciting journey). Taking a gold medal would have been too good to be true (I am the one who lost the most positions in the top 10), but anyway I am glad to take my silver medal (my only medal 😃 ).</p> <h1>Things that worked for me</h1> <ol> <li>XLM-RoBERTa with an LSTM + max-pooling head led to <strong>0.9328 private</strong> (0.9346 public)</li> <li>Ensembling the models from the 1st step: <strong>0.9453 private</strong> (0.9474 public)</li> <li>Finding bot comments by popular templates: <strong>0.9458 private</strong> (0.9481 public)</li> <li>Making a simple calibration for every language in the test set (convert the predictions from the 3rd step to logits and then add a constant to the logit for every language; it turned out that I needed to add a positive constant for fr and es, and a negative one for it and pt); such additions give my final <strong>0.9482 private</strong> (0.9505 public) </li> </ol> <p>As I can see now, I was overfitting from the 2nd step onward with the ensembles; I didn't find a good CV strategy</p>
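<p>The per-language calibration from step 4 is easy to express in code; here is a minimal sketch (the shift values and the <code>test</code> dataframe columns are placeholders; the real constants were tuned against validation/LB feedback):</p>
<pre><code>import numpy as np
import pandas as pd

def logit(p, eps=1e-6):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# positive shift pushes a language's predictions up, negative pulls them down (illustrative values)
lang_shift = {"fr": 0.3, "es": 0.3, "it": -0.2, "pt": -0.2, "ru": 0.0, "tr": 0.0}

test["toxic"] = sigmoid(logit(test["toxic"].values) + test["lang"].map(lang_shift).values)
</code></pre>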
Jigsaw Multilingual Toxic Comment Classification
19th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to <a href="/rafiko1">@rafiko1</a> and <a href="/leecming">@leecming</a> for winning first place, besides being at the top of the public LB for the longest time, and <a href="https://jigsaw.google.com/">Google Jigsaw</a> for hosting the third edition of the toxicity classification challenge.</p> <p>We decided to divide our models into 3 parts. A number is listed alongside each model to denote the part it belongs to.</p> <h2>Models:</h2> <ul> <li>XLM_RoBERTa_Base with different architectures/epochs/learning_rate/augmentation (I)</li> <li>XLM_RoBERTa_Large with different architectures/epochs/learning_rate/augmentation (I)</li> <li>XLM_RoBERTa_Large_MLM_training (I)</li> <li>Multilingual_BERT (cased and uncased variants) (II)</li> <li>BERT (cased and uncased variants) (II)</li> <li>RNN: pooled Bidirectional GRU with fasttext aligned word vector embeddings on translated train (III)</li> <li>RNN: pooled Bidirectional LSTM + GRU + attention with fasttext + glove + paragram embedding averaged on translated test (III)</li> <li>WeakLearner: NB-SVM on translated train as well as translated test averaged (III)</li> <li>WeakLearner: SGD on translated train as well as translated test averaged (III)</li> </ul> <p>Overall, there was a combination of about 50 models, with the base models belonging to one type (parent) averaged. So, all varieties of XLM_RoBERTa_Base models were averaged and so on.</p> <p><strong>First-Level Blending procedure:</strong></p> <p>&gt; (I) -&gt; (XLM_RoBERTa_Base * 0.1) + (XLM_RoBERTa_Large * 0.5) + (XLM_RoBERTa_Large_MLM_training * 0.4)</p> <p>&gt; (II) -&gt; (Multilingual_BERT * 0.2) + (BERT * 0.8)</p> <p>&gt; (III) -&gt; (RNN * 0.6) + (WeakLearner * 0.4)</p> <p><strong>Final Blending procedure:</strong></p> <p>&gt; <code>toxic</code> = (I) * 0.8 + (II) * 0.1 + (III) * 0.1</p> <h2>Post Processing: (0.0005 boost)</h2> <p>I think this is the major highlight of our approach. Just a day before the competition finished, <a href="/veryrobustperson">@veryrobustperson</a> discovered that most of our lower-scoring submissions were over-predicting when compared to the higher-scoring submissions. Based on this hypothesis, we tried to introduce a re-scaling factor for predictions &gt;0.8 as well as &lt;0.01 through the use of probabilistic random noise that introduces a small penalty. I am sure there was a lot to explore here, but due to a shortage of submissions and a lack of time, we couldn't delve deeper into it and optimize further. For two different submissions that we tried, the score increased by about 0.0005, so we thought it would generalize well even on the private LB test data. Luckily, it did. 
</p> <h2>Things that didn't work / we couldn't get to work:</h2> <ul> <li>pseudo labelling by making &lt; 0.2 -&gt; 0 and &gt; 0.8 -&gt; 1.</li> <li>training a gradient boosting algorithm on the 1024 laser embeddings.</li> <li>hard-coding test probabilities to ground truth in validation, as we found about 1000 samples overlapping based on cosine similarity of laser embeddings.</li> <li>changing the optimal threshold of toxic probabilities in <code>jigsaw-unintended-bias-train.csv</code> to a value ranging from 0.2 to 0.3, as 0.5 (default rounding) looked extremely non-toxic centric.</li> <li>using external hate-speech datasets and toxic word lists for the 6 languages.</li> <li>label smoothing. </li> <li>Wordbatch FM_FTRL.</li> <li>power averaging of base models.</li> </ul> <p>Lastly, I thank my team-mates <a href="/veryrobustperson">@veryrobustperson</a> and <a href="/ipythonx">@ipythonx</a> for the wonderful discussions we've had and for coming up with new ideas from time to time. We almost made it to the gold zone :)</p> <p>Hope you all had fun competing! </p> <p>cheers, Nickil</p> <p>Thank you <a href="/cdeotte">@cdeotte</a>. I wish it was 0.005, but no, it's indeed 0.0005.</p>
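<p>For clarity, the blending procedure described above written out as code; each argument is the already-averaged prediction vector of that model family, and the weights are exactly the ones listed:</p>
<pre><code>def blend(xlmr_base, xlmr_large, xlmr_large_mlm, multilingual_bert, bert, rnn, weak_learners):
    """Each argument is an array of per-sample toxic probabilities, averaged within its family."""
    level_1 = 0.1 * xlmr_base + 0.5 * xlmr_large + 0.4 * xlmr_large_mlm   # (I)
    level_2 = 0.2 * multilingual_bert + 0.8 * bert                        # (II)
    level_3 = 0.6 * rnn + 0.4 * weak_learners                             # (III)
    return 0.8 * level_1 + 0.1 * level_2 + 0.1 * level_3
</code></pre>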
Jigsaw Multilingual Toxic Comment Classification
1st place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We didn't train to convergence - typically trained something along the lines of 3 epochs on the 2018 training dataset (subsample) + PL and 3 epochs on the validation data. </p> <p>Private more or less tracked with public. Not sure about lesser contribution but the two biggest contributors were: 1) monolingual models, and 2) our team merger ensemble boost.</p> <p>Yup, my team-mate did leverage the bias dataset in some of his training runs on TPU. I didn't have any success with it. </p> <p>I'll ask my team-mate <a href="/rafiko1">@rafiko1</a> to outline the post-processing trick in detail. Think it's useful as a generic PP technique in any competition. </p> <p>We’d like to start off by thanking Kaggle/Jigsaw for a drama-free competition and also by congratulating the other medallists. </p> <h2>TL;DR</h2> <ol> <li>Ensemble, ensemble, ensemble</li> <li>Pseudo-labelling</li> <li>Bootstrap with multilingual models, refine with monolingual models</li> </ol> <p>I haven't competed on Kaggle since I co-won the 2018 Toxic Comments competition. The state-of-the-art for NLP classification at the time was non-contextual word embeddings (e.g., FastText) and I was curious if I’d make a good showing in a world of Transformers. I'm pleasantly surprised to have done so well with my team-mate @rafiko1. </p> <h2>Our public LB milestones</h2> <ul> <li>Baseline XLM-Roberta model (Public LB: 0.93XX - 31 March)</li> <li>Average ensemble of XLM-R models (0.942X - 3 April)</li> <li>Blending with monolingual Transformer models (0.9510 - 23 April)</li> <li>Team merger - weighted average of our individual best subs (0.9529 - 28 April)</li> <li>Blending with monolingual FastText classifiers (0.9544 - 5 May)</li> <li>Post-processing and misc. optimizations (0.9556 - 13 June)</li> </ul> <p>You’ll note we hit 1st place on the <strong>FINAL</strong> public LB by end April. Put another way, we were waiting for 2 months for the competition to end :)</p> <h2>CV strategy</h2> <p>We initially used a mix of k-fold CV and validation set as hold-out but as we refined our test predictions and used pseudo-labels + validation set for training, the validation metric became noisy to the point where we relied primarily on the public LB score. </p> <h2>Insights</h2> <h3>Ensembling to mitigate Transformer training variability</h3> <p>It’s been noted that the performance of Transformer models is impacted heavily by initialization and data order (<a href="https://arxiv.org/pdf/2002.06305.pdf">https://arxiv.org/pdf/2002.06305.pdf</a>, <a href="https://www.aclweb.org/anthology/2020.trac-1.9.pdf">https://www.aclweb.org/anthology/2020.trac-1.9.pdf</a>). To mitigate that, we emphasized the ensembling and bagging our models. This included temporal self-ensembling. Given the public test set, we went with an iterative blending approach, refining the test set predictions across submissions with a weighted average of the previous best submission and the current model’s predictions. We began with a simple average, and gradually increased the weight of the previous best submission. 
For the training data, we largely used sub-samples of the translations of the 2018 toxic comments for each model run. </p> <h3>Pseudo-labels (PL)</h3> <p>We observed a performance improvement when we used test-set predictions as training data - the intuition being that it helps models learn the test set distribution. Using all test-set predictions as soft-labels worked better than any other version of pseudo-labelling (e.g., hard labels, confidence thresholded PLs etc.). Towards the end of the competition, we discovered a minor but material boost in LB when we upsampled the PLs. </p> <h3>Multilingual XLM-Roberta models</h3> <p>As with most teams, we began with a vanilla XLM-R model, incorporating translations of the 2018 dataset in the 6 test-set languages as training data. We used a vanilla classification head on the CLS token of the last layer with the Adam optimizer and binary cross entropy loss function, and finetuned the entire model with a low learning rate. Given Transformer models have several hundred million trainable weights put to the relatively simple task of making a binary prediction, we didn’t believe the default architecture required much tweaking to get a good signal through the model. Consequently, we didn't spend too much time on hyper-parameter optimization, architectural tweaks, or preprocessing. </p> <h3>Foreign language monolingual Transformer models</h3> <p>@rafiko1 and I stumbled on this technique independently and prior to forming a team. We were both inspired by the MultiFiT paper (<a href="https://arxiv.org/pdf/1909.04761.pdf">https://arxiv.org/pdf/1909.04761.pdf</a>) - specifically: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1029053%2Fb40975c9453fba6f640836e4fe297746%2FCapture.PNG?generation=1592872625708960&amp;alt=media" alt=""></p> <p>We observed a dramatic performance boost when we used pretrained foreign language monolingual Transformer models from HuggingFace for the test-set languages (e.g., Camembert for french samples, Rubert for russian, BerTurk for turkish, BETO for spanish etc.). </p> <p>We finetuned models for each of the 6 languages - by combining translations of the 2018 Toxic Comments together with pseudo-labels for samples in that specific language (initially from the XLM-R multilingual models), training the corresponding monolingual model, predicting the same samples then blending it back with the “main branch” of all predictions. It was synergistic in that training model A -&gt; training model B -&gt; training model A etc. led to continual performance improvements. </p> <p>For each model run, we’d reload weight initializations from the pretrained models to prevent overfitting. In other words, the continuing improvements we saw were being driven by refinements in the pseudo-labels we were providing to the models as training data. </p> <p>For a given monolingual model, predicting only test-set samples in that language worked best. Translating test-set samples in other languages to the model's language and predicting them worsened performance. 
</p> <h3>Finetuning pre-trained foreign language monolingual FastText models</h3> <p>After we exhausted the HuggingFace monolingual model library, we trained monolingual FastText Bidirectional GRU models a la 2018 Toxic Comments, using pretrained embeddings for the test-set languages, to continue refining the test set predictions (albeit with a lower weight when combined with the main branch of predictions) and saw a small but meaningful performance boost (0.9536 to 0.9544). </p> <h3>Post-processing</h3> <p>Given the large number of submissions we were making, @rafiko1 came up with the novel idea to make use of the history of submissions to tweak our test set predictions. We tracked the delta of predictions for each sample for successful submissions, averaged them and nudged the predictions in the same direction. We saw a minor but material boost in performance (~0.0005).</p> <p>We were concerned about the risk of overfitting with this post-processing technique, so for our final 2 submissions we selected one that incorporated this post-processing and one that didn't. It ended up working very well on the private LB. </p> <h2>Misc</h2> <h3>Training setup</h3> <p>@rafiko1 and I kept separate code-bases. He trained on Kaggle TPU instances with Tensorflow code derived from the public kernels by @xhlulu and @shonenkov while I trained on my own hardware (dual RTX Titans) using from-scratch PyTorch code. I’d attribute part of our outsized merger ensemble boost (0.9510 -&gt; 0.9529) to this diversity in training. </p> <h3>What didn’t work</h3> <p>We went through a checklist of contemporary NLP classification techniques and most of them didn’t work, probably due to the simple problem objective. Here is a selection of what didn’t work:</p> <ul> <li>Document-level embedders (e.g., LASER)</li> <li>Further MLM pretraining of Transformer models using task data</li> <li>Alternative ensembling mechanisms (rank-averaging, stochastic weight averaging)</li> <li>Alternative loss functions (e.g., focal loss, histogram loss)</li> <li>Alternative pooling mechanisms for the classification head (e.g., max-pool CNN across tokens, using multiple hidden layers etc.)</li> <li>Non-FastText pretrained embeddings (e.g., Flair, glove, bpem)</li> <li>Freeze-finetuning for the Transformer models</li> <li>Regularization (e.g., multisample dropout, input mixup, manifold mixup, sentencepiece-dropout)</li> <li>Backtranslation as data augmentation</li> <li>English translations as train/test-time augmentation</li> <li>Self-distilling to relabel the 2018 data</li> <li>Adversarial training by perturbing the embeddings layer using FGM</li> <li>Multi-task learning</li> <li>Temperature scaling on pseudo-labels</li> <li>Semi-supervised learning using the test data</li> <li>Composing two models into an encoder-decoder model</li> <li>Use of translation focused pretrained models (e.g., mBart)</li> </ul> <p>We'll release our code later this week.</p> <p>Only samples in that specific language from the test/val dataset + translations from the orig 2018 training.</p> <p>A training run would for example have 10K test + 70K subsampled 2018 translations + 2.5K val. These models are quite good at few-shot learning so &lt;100K is sufficient to learn. </p> <p>Soft labels. </p> <p>Pretty much public LB. To reduce the need for CV, we also used temporal self-ensembling: predicting the test set every epoch and averaging them for the final predictions (@rafiko1 went beyond and predicted multiple times an epoch). 
This removed the need to pick a best epoch for predictions.</p> <p>I did monitor training loss - for larger models that would occasionally fail due to instability, I'd stop and rerun the training if I saw the training loss jump up. </p> <p>Thanks! WTF was my first gold and a solo gold at that, so I'll always remember it :)</p> <p>I'm hoping to hit GM by the end of the year so hopefully there'll be opportunities for us to team up in another competition.</p> <ul> <li>We tried everything on HuggingFace. Larger Transformers tended to perform better but also tended to be more unstable (we'd have to re-run training when we observed losses increasing - I saw this especially with models like Flaubert-large)</li> <li>I consider SSL distinct from PL in the NLP context, referring more to stuff like the Unsupervised Data Augmentation paper which uses a consistency loss to handle unlabelled samples</li> <li>For FT, we used official pretrained embeddings in the 6 test set languages so didn't touch English data</li> </ul> <p>The predictions are an exponential moving average of all past model predictions and the current model's prediction. Increasing the weight of the previous best submission means you're putting more weight on past model predictions versus the current model.</p>
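<p>Since the iterative blending is essentially an exponential moving average over submissions, here is a minimal sketch of the mechanics; the weight schedule and <code>preds_history</code> are placeholders for illustration, not the actual values used:</p>
<pre><code>import numpy as np

def iterative_blend(model_predictions, weights_on_previous):
    """model_predictions: list of arrays of test-set probabilities, in submission order.
    weights_on_previous: weight given to the running best predictions at each step."""
    best = np.asarray(model_predictions[0])
    for w_prev, new_preds in zip(weights_on_previous, model_predictions[1:]):
        best = w_prev * best + (1 - w_prev) * np.asarray(new_preds)
    return best

# start near a simple average and gradually trust the running blend more
blended = iterative_blend(preds_history, weights_on_previous=[0.5, 0.6, 0.7, 0.8])
</code></pre>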
Jigsaw Multilingual Toxic Comment Classification
21st Place solution : Magic of Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The idea is based on this notebook, so just changing the submission files to this competition's should work.</p> <p>Big thanks to <a href="/riblidezso">@riblidezso</a> who shared this amazing notebook: <a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large</a> .</p> <p>In short, ensembling and second-step fine-tuning by language is magic (and overfitting too).</p> <h1>LB score timeline</h1> <p>Ensemble Gmean of top-scoring public submissions -&gt; 0.9473 (8 June)<br> Ensemble the LGBM solution and the Gmean submission -&gt; 0.9478 (10 June)<br> (Magic) Add external data to the validation dataset, validate by language in the second-step training of XLM, and ensemble by language -&gt; 0.9500 (21 June)<br> Unfortunately, we only boosted the score to 0.9500 in the last 2 days and had no TPU time left to train on new external data.</p> <h1>Validation Dataset</h1> <p>We separated the validation data into 3 sets by language and found data for another 3 languages from external sources to use in the second-step training of <a href="/riblidezso">@riblidezso</a>'s notebook. The out-of-fold scores looked promising, so we blended them into our main submission and boosted the score to 0.9500.</p> <h1>Ref</h1> <p>Gmean notebook: <a href="https://www.kaggle.com/paulorzp/gmean-of-low-correlation-lb-0-952x">https://www.kaggle.com/paulorzp/gmean-of-low-correlation-lb-0-952x</a><br> XLM-Roberta: <a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large</a><br> LGBM solution: <a href="https://www.kaggle.com/miklgr500/lgbm-solution">https://www.kaggle.com/miklgr500/lgbm-solution</a><br> External data: <a href="https://www.kaggle.com/blackmoon/russian-language-toxic-comments">https://www.kaggle.com/blackmoon/russian-language-toxic-comments</a> , <a href="https://www.kaggle.com/shonenkov/open-subtitles-toxic-pseudo-labeling">https://www.kaggle.com/shonenkov/open-subtitles-toxic-pseudo-labeling</a><br> (Magic) Validate and fine-tune by language: <a href="https://www.kaggle.com/medrau/train-from-mlm-finetuned-val-per-lang">https://www.kaggle.com/medrau/train-from-mlm-finetuned-val-per-lang</a></p>
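<p>The Gmean ensemble step is just a geometric mean over the submission files' probabilities; a minimal sketch (file paths are placeholders):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.stats import gmean

submission_paths = ["sub_model_a.csv", "sub_model_b.csv", "sub_model_c.csv"]  # placeholders
subs = [pd.read_csv(p) for p in submission_paths]

preds = np.stack([s["toxic"].values for s in subs], axis=0)
blend = pd.DataFrame({"id": subs[0]["id"], "toxic": gmean(preds, axis=0)})
blend.to_csv("submission.csv", index=False)
</code></pre>
<p>Geometric means reward predictions where the blended models agree, which is why low-correlation submissions are usually chosen for this kind of blend.</p>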
Jigsaw Multilingual Toxic Comment Classification
34th Place Solution - Layerwise and Sampling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I simply modified the AdamW implementation by huggingface:</p> <pre><code>def _decay_weights_op(self, var, learning_rate, apply_state):
    do_decay = self._do_use_weight_decay(var.name)
    layer_coef = self._get_layer_wise_coefficient(var.name)  # added
    if do_decay:
        # scale this variable's weight decay by its layer-wise coefficient
        return var.assign_sub(
            learning_rate * var * layer_coef
            * apply_state[(var.device, var.dtype.base_dtype)]["weight_decay_rate"],
            use_locking=self._use_locking,
        )
    return tf.no_op()
</code></pre> <p>Then I added this function:</p> <pre><code>def _get_layer_wise_coefficient(self, param_name):
    """gets layer wise coefficient for the `param_name`."""
    if self._layer_wise_coefficient == 1:
        return tf.math.pow(self._layer_wise_coefficient, 0)
    if self._exclude_from_layer_wise:
        for r in self._exclude_from_layer_wise:
            if re.search(r, param_name) is not None:
                return tf.math.pow(self._layer_wise_coefficient, 0)
    # coefficient alpha**l for encoder layer l; with alpha below 1 this regularizes
    # the lower layers more when applied to the weight decay rate
    for l in range(self._number_of_layers):
        if re.search(self._layer_re_name + f'{l}/', param_name) is not None:
            return tf.math.pow(self._layer_wise_coefficient, l)
    return tf.math.pow(self._layer_wise_coefficient, self._number_of_layers)
</code></pre> <p>In the XLM-R-large model by jplu, the layers could be distinguished by <code>'layer_._#/'</code>, so I had <code>self._layer_re_name = 'layer_._'</code>.</p> <ol> <li>I trained everything with 1s and 0s (&lt; 0.5 = 0, &gt;= 0.5 = 1)</li> <li>I fixed the seed for everything when I was training the models initially to test out new ideas, but for my final model I simply trained the model 3 times and then averaged the predictions. However, even with a fixed seed it was not possible to replicate the results entirely because of the TPU, so maybe that's why some people opted to just never fix the seed.</li> </ol> <p>I want to thank the organizers for this amazing competition and Kaggle for providing v3 TPUs, where people without hardware like myself were able to compete without much disadvantage. I also want to congratulate the winners. This was my first Kaggle competition and I learned a lot about SOTA NLP models/techniques through the process. It was tough at first, but it was exciting to see my ideas work and move up the leaderboard. </p> <h2><strong>Summary</strong></h2> <ol> <li>ensemble of XLM-R models</li> <li>layer-wise learning rate/weight decay </li> <li>pseudo-labelling</li> <li>sampling strategy </li> </ol> <h2><strong>Ensemble of XLM-R models</strong></h2> <p>For all of my models, I trained on the training data for 2 epochs using the [CLS] token without any extra layers. I tried different heads on top but I saw a drop in the score for all of them. I believe this is because the last layer of a pretrained deep NN is too specific to the previous task (the MLM objective in our case). Adding more layers on top just made the training more unstable. </p> <p>I further finetuned on the validation set for 2 epochs using 5-fold CV, then averaged the OOF predictions. This 5-fold CV AUC score was a good indicator of the public LB score. I ensembled only XLM-R models because other multilingual models (like XLM-17, m-BERT) were not very useful. I also trained each model 3 times and averaged the predictions for stability. 
</p> <p>Mainly, I trained:</p> <ol> <li>XLM-R-large English-only model (~0.9356 on public LB)</li> <li>XLM-R-large translated-only model - pretrained further with MLM on the test data for 4 epochs (~0.9471 on public LB)</li> <li>XLM-R-large translated-only model with last 50 (~0.9449 on public LB)</li> </ol> <h2><strong>Layer-Wise Learning Rate/Weight Decay</strong></h2> <p>I tried two different layer-wise strategies: layer-wise learning rate and layer-wise weight decay. I saw that a layer-wise learning rate was effective for finetuning BERT in this article <a href="https://arxiv.org/abs/1905.05583">https://arxiv.org/abs/1905.05583</a> and found it to be very useful for this competition as well. From this result, I also tried out a variant of this layer-wise training strategy where I regularized the model more in the lower layers. I found that this worked much better than the layer-wise learning rate (about a 0.008 boost on the public LB). It works just like the layer-wise learning rate strategy, but instead of multiplying the learning rate by the layer-wise coefficient alpha^(total_num_layers-layer_number) (where layer_number = 0 for the first layer), you multiply the weight decay rate by the coefficient alpha^layer_number. Thus, you are regularizing the lower layers more than the upper layers. This made sense because a wide exploration of the parameter space was needed for the upper layers in order to adapt to the new objective, while the lower layers didn't need to change much. I used 0.01 for the weight decay and 0.99 for the layer-wise coefficient (alpha). The layer-wise learning rate strategy could actually be better, but it was very sensitive to hyperparameter choices, so with limited TPU time I worked with layer-wise weight decay. </p> <h2><strong>Pseudo-labelling</strong></h2> <p>I used my best model's predictions on the test set as the pseudo-labels and included them in the training set. Even using pseudo-labels from a relatively worse model (scoring ~0.935) was effective. It was interesting to observe the instability of the XLM-R-large model, where switching only the pseudo-labels resulted in the model diverging. I had to lower the learning rate from 1e-5 to 6e-6 in order to converge.</p> <h2><strong>Sampling-strategy</strong></h2> <p>I downsampled the training data to 1:1 for all models. For the XLM-R translated-only models, I mixed Yandex and Google translations and saw a little boost in local CV and public LB. I also made sure that when sampling non-toxic comments there were no duplicates across different languages.</p> <h2><strong>Other Ideas that Didn't Work</strong></h2> <ol> <li>text pre-processing</li> <li>label-smoothing</li> <li>extending the test dataset by translating back from English</li> <li>making multiple predictions - predicting at val finetune epoch 1 + val finetune epoch 2 and combining</li> <li>meta-learning </li> <li>embeddings - I tried using bilingual Attract-Repel embeddings</li> </ol> <p>Lastly, I learned that strictly planning your ideas by ranking them by their importance is really crucial, especially when working with only 30 hours of TPU time. I found myself spending the majority of my TPU time trying to tweak the working model to make it better rather than trying more of the other novel ideas: striking a good balance between Exploitation and Exploration is the key! 
It was quite sad to see one of my ideas (making expert models using monolingual transformer models), which sat in the middle of my list of untried ideas, show up in the 1st place solution, but I have learned my lesson :)</p> <p>I want to thank the authors of these wonderful kernels for their work. I have learned a lot from them. <a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta</a> <a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large</a> <a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta</a></p> <p>I used a simple constant decay with warm-up. I got the hyperparameter settings from the RoBERTa paper and found them to work well.</p>
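<p>For illustration, here is a minimal sketch of the pseudo-labelling step described above, using hard labels as in the thresholding rule mentioned earlier. The file names and column names are assumptions for the sketch, not details taken from the write-up.</p> <pre><code>import pandas as pd

# Hypothetical paths; adjust to the actual competition files.
train = pd.read_csv("jigsaw-toxic-comment-train.csv")      # has comment_text, toxic
test = pd.read_csv("test.csv")                             # has id, content
test_preds = pd.read_csv("best_model_submission.csv")      # has id, toxic (probabilities)

# Hard pseudo-labels, mirroring the "&lt; 0.5 = 0, &gt;= 0.5 = 1" rule above.
pseudo = test.merge(test_preds, on="id")
pseudo["toxic"] = (pseudo["toxic"] &gt;= 0.5).astype(int)
pseudo = pseudo.rename(columns={"content": "comment_text"})[["comment_text", "toxic"]]

# Append the pseudo-labelled test set to the training data.
train_plus_pl = pd.concat([train[["comment_text", "toxic"]], pseudo], ignore_index=True)
</code></pre>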
Jigsaw Multilingual Toxic Comment Classification
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Re Multi-fold: I mean training the model on different subset of data, and then taking average of predictions of test data using the resulting models.</p> <p>Re Scaling: There was the option of scaling probabilities based on validation data / LB. We ended up using just the raw probabilities as they were working up pretty well and wanted to avoid over-fitting risk. </p> <p>Re: TTA - yes, you are right. </p> <p>As per our teammate Igor who worked on this, just using raw probabilities coming out of the model worked pretty well. We could have done scaling based on validation data / LB, but wanted to avoid the risk of over-fitting. </p> <p>We did weighted average of rank. Weights were decided based on individual performance on LB. The weights may not have been optimal</p> <p>First of all we are grateful to Jigsaw for organizing such an interesting competition, and flawless execution in terms of data quality, train / validation/ test split etc. Also very thankful to Kaggle for providing the free TPUs which made quick experimentation possible. This is the first time I used just Kaggle Notebook (kernels) for training / inference and was impressed with how well they work most of the time, with a slight exception of lacking full blown IDE functionality, love everything about the Kernels. </p> <p>Last, but not the least, I want to thank my amazing teammates - @brightertiger (Ujjwal), @soloway (Igor) and@drpatrickchan (Dr Patrick) - who made our LB position possible and were super fun to work with. Our best submission was made in the last 15 min of the competition, so this was a relentless team, which never stopped improving. We were constantly looking and re-evaluating new angles to improve our solution. Although, I am writing the solution description, but I speak on everyone’s behalf here. </p> <p>Following were the major parts of our best submission:</p> <p><strong>RoBERTa XLM pre-trained</strong>: Like we saw with all the publicly shared kernels, RoBERTa XLM was the workhorse, which just delivered great performance with minimum effort / training. Used a lot of things discussed in the forum like translated data, open subtitles data along with averaging across multiple folds. Typically about ~200K or so observations were enough to train a single model, so there was a lot of scope to do multi-fold averaging given the huge data available at hand. A big thanks to @shonenkov and @xhlulu for their excellent kernels. </p> <p><strong>Language Specific Pre-trained Bert Models</strong>: Used language specific pre-trained Bert models for all the 6 test languages. A given model with a given capacity is any day more powerful for a single language vs multiple languages. The only challenge was to merge the probabilities coming out of different models (which could have different distributions) into a single ranking which could be especially problematic for a ROC evaluation metric (Even if you got the rank ordering perfectly right within each language you can mess things up when combining across languages). Our teammate Igor could make it work magically. 
Big thanks to @shonenkov for his great <a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">kernel</a> that is used for training these Bert models.</p> <p><strong>RoBERTa XLM MLM</strong>: Borrowed the concept of training the model on domain-specific data using this excellent <a href="https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm">kernel</a> by @riblidezso. While the performance of these models was similar to our existing XLM-RoBERTa models, they worked well in the blend. We added more data to the MLM step by using translations of the training data, validation data and test dataset. This gave us a couple of versions of the pre-trained model that we averaged in the blend. </p> <p><strong>Post Processing</strong>: We observed that around 5-10% of the comments looked like automated / template-based messages. E.g., look at Test ID 39482. What’s really happening here is that the username / page title / attachment name is quoted by the bot, and it may contain profanities. But, most likely due to the way the data labels would have been generated, these are likely marked as non-toxic - our models unfortunately get confused by this. We applied regular expressions, clustering and other heuristics to adjust the scores coming directly out of the models for these comments. This gave us close to an additional 0.001 on the LB.</p> <p>Other things which impacted results a tiny bit: <strong>TTA</strong>: Some of our models did test time augmentation - where we did a weighted average of the predictions over the foreign-language and English versions. <br> <strong>Label smoothing</strong>: Was used in most of our models including the mono-language models.</p> <p>A weighted blend of the components # 1-3 put together and the post processing described in # 4 gave us our 0.9523 on Public and 0.9509 on Private LB. </p> <p>Thanks Dieter. You guys had a meteoric rise after entering late in the competition (kudos for doing it so consistently!). You almost pipped us off # 3 on the last day :-)</p> <p>Thank you, appreciate it. </p> <p>Thank you so much. </p>
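<p>As a rough illustration of the "weighted average of rank" ensembling mentioned in the replies above, here is a minimal sketch. The submission file names and weights are hypothetical; the write-up only says that the weights were chosen from individual LB performance.</p> <pre><code>import pandas as pd
from scipy.stats import rankdata

files = ["xlm_roberta_blend.csv", "mono_lang_blend.csv", "xlm_roberta_mlm.csv"]  # hypothetical
weights = [0.5, 0.3, 0.2]                                                        # hypothetical

blend = None
for path, w in zip(files, weights):
    sub = pd.read_csv(path)
    # Percentile ranks make differently calibrated models comparable before mixing.
    ranks = rankdata(sub["toxic"]) / len(sub)
    blend = w * ranks if blend is None else blend + w * ranks

out = pd.read_csv(files[0])[["id"]]
out["toxic"] = blend
out.to_csv("submission.csv", index=False)
</code></pre> <p>Because the metric is ROC AUC, only the ordering of the blended scores matters, which is why rank averaging is a natural choice when the member models output probabilities on different scales.</p>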
Jigsaw Multilingual Toxic Comment Classification
43rd place. Sampling with Adversarial Validation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and all kagglers who were generous enough to share their expertise. The learning was rich thanks to: Dezso Ribli's <a href="/riblidezso">@riblidezso</a> - <a href="https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm">Finetune XLM-Roberta on Jigsaw test data with MLM</a>, Alex Shonenkov <a href="/shonenkov">@shonenkov</a> with Pytorch <a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">[TPU-Training] Super Fast XLMRoberta</a>, DimitreOliveira <a href="/dimitreoliveira">@dimitreoliveira</a> - <a href="https://www.kaggle.com/dimitreoliveira/jigsaw-tpu-optimized-training-loops">Jigsaw - TPU optimized training loops</a>, Abhishek <a href="/abhishek">@abhishek</a> with his invaluable 'real-time coding' youtube videos and <a href="https://www.kaggle.com/abhishek/i-like-clean-tpu-training-kernels-i-can-not-lie">I Like Clean TPU Training Kernels &amp; I Can Not Lie</a>, Xhlulu <a href="/xhlulu">@xhlulu</a> - <a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">Jigsaw TPU: XLM-Roberta</a>, Michael Kazachok’s <a href="/miklgr500">@miklgr500</a> translations <a href="https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api">dataset</a> and <a href="https://www.kaggle.com/miklgr500/jigsaw-tpu-bert-with-huggingface-and-keras">Jigsaw TPU: BERT with Huggingface and Keras</a></p> <h3>Adversarial Validation.</h3> <p>The main difference of this solution is the way the training and validation sets were sampled. I used adversarial validation in this <a href="https://www.kaggle.com/isakev/jigsaw-adversarial-validation-folds-0-1">kernel (previous version)</a> to sample the <a href="https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api">translations</a> by Michael Kazachok and pick samples ‘most similar to the test set’ (280k-480k samples for training and 4k samples for validation, plus the 8k original validation.csv). Adversarial validation is an idea that has often been used successfully across Kaggle (e.g. <a href="https://www.kaggle.com/tunguz/quora-adversarial-validation">Quora Adversarial Validation</a> and <a href="https://www.kaggle.com/konradb/adversarial-validation">Adversarial validation</a>).</p> <ul> <li>selecting samples this way, at least, did not make the performance worse than lucky picks of random sampling</li> </ul> <h3>The rest.</h3> <p>The best submission is a blend of the predictions of 6 models, all Roberta-XLM-large MLM; 3 of them are based on the MLM finetuned to the test set by <a href="/riblidezso">@riblidezso</a> Dezso Ribli (<a href="https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm">link</a>). Diversity across the models comes from:</p> <ul> <li>different loss functions (apart from BCE, focal loss and MSE with soft labels were used)</li> <li>3 models with the top 4 hidden layers’ outputs concatenated, and 1 model that sums those 4 layers</li>
<li>3 models using differential learning rates for the head and the transformer as in <a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">this kernel</a></li> <li>different max lengths, 192 and 256; for samples that exceed max_len=192, the last 25%*max_len of the text is concatenated to the beginning of the text</li> <li>Alex Shonenkov’s <a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">kernel</a> with the addition of opensubtitles data</li> <li>training on English text and without it</li> <li>training data from 200k to 480k samples, filtered by similarity to the test set and/or similarity by language distribution and/or similarity by ‘toxic’ target distribution</li> </ul> <p>All models were run in TensorFlow and Keras on TPU only. The GPU was used for predictions only, or, as in the case of adversarial validation, when the smaller Roberta-base model was employed.</p>
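<p>To make the adversarial validation idea concrete, here is a minimal sketch of selecting training samples that look most like the test set. The author used a smaller Roberta-base model for this step; the sketch below swaps in TF-IDF + logistic regression purely as a lightweight stand-in, and the file/column names are assumptions.</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("translated_train.csv")   # hypothetical: translated training comments
test = pd.read_csv("test.csv")

# Label the origin of each text (0 = train, 1 = test) and fit a classifier to tell them apart.
texts = pd.concat([train["comment_text"], test["content"]], ignore_index=True)
origin = np.r_[np.zeros(len(train)), np.ones(len(test))]

vec = TfidfVectorizer(max_features=100_000, ngram_range=(1, 2))
X = vec.fit_transform(texts)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, origin)

# Probability that a training sample "looks like test"
# (in practice one would use out-of-fold predictions here).
train_scores = clf.predict_proba(X[:len(train)])[:, 1]

# Keep the training samples most similar to the test set.
n_keep = 300_000
keep_idx = np.argsort(-train_scores)[:n_keep]
train_similar = train.iloc[keep_idx].reset_index(drop=True)
</code></pre>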
Jigsaw Multilingual Toxic Comment Classification
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>And to be fair, if we had had more time, I would have tried to incorporate fixing the disturbed individual language distributions into the modelling, instead of fixing it via pp. Often post-processing fixes the symptoms but does not solve the root cause. </p> <p>Yeah, it's a good pp trick :D</p> <p>Thanks to Kaggle for hosting such an interesting and challenging competition! Multilingual NLP is something very interesting yet difficult. In hindsight I (Dieter) wish I’d spent less time on the tweet sentiment extraction competition and more on this one. </p> <h1>Brief Summary</h1> <p>Our solution is a simple blend of several transformer models (mainly xlm-roberta-large) paired with some post-processing. The architecture was the same as in publicly available kernels, with the classification head taking either max+mean pooling of the hidden states or the hidden state of the CLS token. We used a 3-step approach for training our models, where starting from 7 languages we fine-tuned to 3 languages and finished at a single language. As this 3-step approach results in a distortion of the global predictions, we shifted languages individually by a factor as a post-processing step. </p> <h1>Detailed Summary</h1> <p>We are still astonished by the great result in such a short time. While @aerdem4 worked a bit longer on this competition and already had a good understanding of its specific challenges, @cpmpml and @christofhenkel joined right after the tweet sentiment competition, a few hours before the team merger deadline. Two days ago we were still at a 100+ spot and were hoping for a silver medal at best. But with persistence, the right amount of intuition about which ideas to proceed with, and of course some luck, we managed to climb right to the upper gold position.</p> <h2>Preprocessing</h2> <p>None. That's NLP in 2020 :D</p> <h2>Speeding up training</h2> <p>We did two things. One was to downsample negative samples to get a more balanced dataset and a smaller dataset at the same time. If you only use as many negatives as positives, the dataset is reduced roughly 5x, and so is the time to run one epoch. Some of our models were trained that way.</p> <p>Another speedup came from padding by batch. The main idea is to limit the amount of padding to what is necessary for the current batch. Inputs are padded to the length of the longest input in the batch rather than a fixed length, say 512. This is now a well-known technique, and it has been used in previous competitions. It accelerates training significantly. <br> We refined the idea by sorting the samples by their length so that the samples in a given batch have similar lengths. This reduces the need for padding even further. If all inputs in the batch have the same length then there is no padding at all. </p> <p>Given that the samples are sorted, we cannot shuffle them in training mode; we rather shuffled batches. This yields an extra 2x speedup compared to the original padding by batch. Training one epoch for xlm-roberta-large on the first train set takes about 17 minutes on a V100 GPU.</p>
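<p>A minimal sketch of the sort-by-length, pad-per-batch idea described above (this is an illustration, not the team's actual dataloader; the pad token id and the yielded format are assumptions):</p> <pre><code>import random
import torch
from torch.nn.utils.rnn import pad_sequence

def sorted_length_batches(token_ids, batch_size, pad_id=1, shuffle=True):
    """token_ids: list of already-tokenized samples (lists of ints), unpadded."""
    # Sort sample indices by length so each batch holds similarly sized inputs.
    order = sorted(range(len(token_ids)), key=lambda i: len(token_ids[i]))
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    # Samples inside a batch stay sorted; shuffle the order of the batches instead.
    if shuffle:
        random.shuffle(batches)
    for idx in batches:
        seqs = [torch.tensor(token_ids[i]) for i in idx]
        # Pad only up to the longest sample in this batch, not to a fixed 512.
        inputs = pad_sequence(seqs, batch_first=True, padding_value=pad_id)
        attention_mask = (inputs != pad_id).long()
        yield idx, inputs, attention_mask
</code></pre>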
<h2>Cross-validation</h2> <p>As time was short and we did not have many submissions to spare for getting LB feedback, we worked on getting a reliable cross-validation. We used a mix of Group-3fold per language and a simple 3fold of the validation data to represent the unknown languages ru, pt, fr as well as the known languages it, es and tr, which results in a 6-fold scheme. For example, fold 1 represents the case where es is not in valid.csv, which should behave the same as pt not being in the validation data. Taking the mean AUC of all folds was a very good proxy for the Public and Private LB.</p> <h2>Architectures</h2> <p>Like most teams, we used xlm-roberta-large as the main backbone with a simple classification head that either uses max+mean pooling or the hidden states of the CLS token. In total we used the following backbones, with their approximate weighting in our final ensemble in brackets.</p> <ul> <li>5x xlm-roberta-large (85%)</li> <li>1x xlm-roberta-base (5%)</li> <li>1x mBart-large (10%)</li> </ul> <h2>Training strategy</h2> <p>One main ingredient of our good result is a stepwise finetuning towards single-language models for tr, it and es. Let us illustrate the whole approach before explaining:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1424766%2Fd03ded105462fb5ba610393ba97e4ad8%2Fjigsaw%20arch2.001.jpeg?generation=1592909197036996&amp;alt=media" alt=""></p> <p>As input data we used the English jigsaw-toxic-comment-train.csv and its six translations available as public datasets, which combined are roughly 1.4M comments. I realized that training a model directly on the combined dataset is not optimal, as each comment appears 7x in each epoch (although in different languages) and the model overfits on that. So I divided the combined dataset into 7 stratified parts, where each part contains a comment only once. For each fold we then finetuned a transformer in a 3-step manner:</p> <ul> <li>Step 1: finetune to all 7 languages for 2 epochs</li> <li>Step 2: finetune only to the full valid.csv which only has tr, it and es</li> <li>Step 3: 3x finetune to each language in valid, resulting in 3 models</li> </ul> <p>We then use the step1 model for predicting ru, the step2 model for predicting pt and fr, and the respective step3 models for tr, it and es. Using the step2 model for pt and fr gave a significant boost compared to using the step1 model for those two, most likely due to the language similarity between it, es, fr and pt.</p> <p>We mainly used a batch size of 32 via gradient accumulation and a learning rate of 3e-6 with linear decay and the AdamW optimizer. One thing worth mentioning is that we train with a max sequence length of 512, thanks to the dataloader mentioned above which sorts the data by length and then randomly serves batches of same-length comments. Only the huge speed-up, paired with not padding shorter comments, made using a 512 max length reasonable. </p> <p>Apart from one model which was built on top of a public kernel, all models were trained using pytorch and GPU.</p> <h2>Ensembling</h2> <p>As for ensembling, each team member first built an individual ensemble, and we then combined the team members' models and public kernels by a weighted sum of rank percentiles. </p> <h2>Post-processing</h2> <p>Another key ingredient, which we luckily found on the last day, is post-processing of the final predictions. 
Although the languages are predicted by individual models, the competition metric is sensitive to the global rank of all predictions and is not language-specific. So we took care that the languages had a correct relation to each other by shifting the predictions of each language individually. In our final submission, for example, we used the following factors: </p> <pre><code>test_df.loc[test_df["lang"] == "es", "toxic"] *= 1.06
test_df.loc[test_df["lang"] == "fr", "toxic"] *= 1.04
test_df.loc[test_df["lang"] == "it", "toxic"] *= 0.97
test_df.loc[test_df["lang"] == "pt", "toxic"] *= 0.96
test_df.loc[test_df["lang"] == "tr", "toxic"] *= 0.98
</code></pre> <p>We derived the factors by matching the test prediction mean with the public LB mean for each language individually. We had already made 6 submissions for probing the test set by setting one language to 1 and the rest to 0. Then we made sure that the means were aligned to give a similar experimental AUC. That postprocessing moved us from 14th place to 4th place on the public LB! </p> <p>While this postprocessing might also help other teams, we think it specifically fixes the divergence of global prediction distributions introduced by having 5 models to predict 6 languages, and might not help much if a team used a single-model approach.</p> <p>Thanks for reading. Questions welcome.</p> <p>To me it made sense that the global ranking matters, and especially that the 'ru' predictions <em>should</em> be off, as the only thing the step1 model had seen were translations from en, which I think are weaker in terms of toxicity than original Russian comments. Hence we expected those predictions to be on average too low. Then we checked shifting the predictions on the validation set, and it gave a significant improvement in score. So we worked on a method to shift the test predictions too...</p>
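<p>As a rough illustration of how such per-language factors could be derived by mean matching, here is a hypothetical sketch. The target means below are placeholders; in the write-up they came from the six probing submissions, whose actual values are not given.</p> <pre><code>import pandas as pd

test = pd.read_csv("test.csv")                    # id, content, lang
sub = pd.read_csv("blend_submission.csv")         # id, toxic
sub = sub.merge(test[["id", "lang"]], on="id")

# Placeholder per-language target means (assumed numbers, for illustration only).
target_mean = {"es": 0.17, "fr": 0.16, "it": 0.14, "pt": 0.13, "tr": 0.14, "ru": 0.15}

for lang, tgt in target_mean.items():
    mask = sub["lang"] == lang
    factor = tgt / sub.loc[mask, "toxic"].mean()   # scale so the language mean hits the target
    sub.loc[mask, "toxic"] *= factor

sub[["id", "toxic"]].to_csv("submission.csv", index=False)
</code></pre>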
Jigsaw Multilingual Toxic Comment Classification
59th Place Solution (And My First Silver Medal)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="/rashidulhasanhridoy">@rashidulhasanhridoy</a> Thank you!</p> <p><a href="/mdselimreza">@mdselimreza</a> Thank you mate</p> <p><a href="/cdeotte">@cdeotte</a> Thank you! And congrats as well 👍 </p> <p><a href="/kamruzzamanselim">@kamruzzamanselim</a> Thank you!</p> <p><a href="/frtgnn">@frtgnn</a> Thank you! Congratulations to your team too! 👍 😄 </p> <p><a href="/drpatrickchan">@drpatrickchan</a> Thank you! And congratulations to your team as well! Awesome 3rd :)</p> <p><a href="/haythemtellili5">@haythemtellili5</a> Thank you bro! Congratulations to you and your team too!</p> <h1>Overview</h1> <p>My two best submissions were from a stacking pipeline. I've trained several models on Kaggle and Colab, which I later stacked their predictions using various stacking techniques (not just the usual mean, median, minmax) to achieve higher LB scores. </p> <h2>The Two Best Submissions</h2> <ul> <li>Stacking &gt; 12 models (used mean of the predictions of 12 submission files, from 9 single models and 3 blends) <strong>(Private LB: 0.9469, Public LB: 0.9485)</strong></li> <li>Blend of stacked predictions from above that gave 0.9485 on the public LB with <code>ExtraTreesClassifier</code> predictions trained on the validation data with <a href="https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse/notebook">https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse/notebook</a> <strong>(Private LB: 0.9471, Public LB: 0.9485)</strong></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1738095%2F78d1c6c510639908e3d10b481e1ec8ea%2Fbest_subs.png?generation=1592885422854659&amp;alt=media" alt=""></p> <h1>Models Used For Stacking</h1> <ul> <li>XLM-RoBERTa</li> <li>XLM-RoBERTa Large</li> <li>XLM-RoBERTa Large + MLM training</li> <li>ExtraTreesClassifier</li> </ul> <p>P.S. Might have missed out one or two, will update here should I find any missing.</p> <h1>How Models Were Trained</h1> <ul> <li>For the MLM model (third above), I trained it on Kaggle </li> <li>For the XLM-RoBERTa models, I trained them on Google Colab TPU</li> </ul> <h1>Notebooks With Models Referenced Above</h1> <p>A big thanks to all whom contributed to the following notebooks, you have taught me quite a lot! I'm still relatively new to NLP, and your sharing has made me more comfortable and interested in the domain! 
:)</p> <ul> <li><a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large</a> </li> <li><a href="https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse">https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse</a></li> <li><a href="https://www.kaggle.com/hamditarek/ensemble">https://www.kaggle.com/hamditarek/ensemble</a></li> <li><a href="https://www.kaggle.com/sai11fkaneko/data-leak">https://www.kaggle.com/sai11fkaneko/data-leak</a></li> <li><a href="https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta</a></li> <li><a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta</a></li> <li><a href="https://www.kaggle.com/aiaiooas/parcor-regularised-classification">https://www.kaggle.com/aiaiooas/parcor-regularised-classification</a></li> <li><a href="https://www.kaggle.com/shahules/fine-tune-xlm-kfold-cv-0-93-lb">https://www.kaggle.com/shahules/fine-tune-xlm-kfold-cv-0-93-lb</a></li> </ul> <h1>Notebooks Used For Submission</h1> <ul> <li><a href="https://www.kaggle.com/khoongweihao/ensemble-ii-the-dark-side-of-stacking">https://www.kaggle.com/khoongweihao/ensemble-ii-the-dark-side-of-stacking</a> <strong>(Mine)</strong></li> <li><a href="https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse">https://www.kaggle.com/jazivxt/howling-with-wolf-on-l-genpresse</a> <strong>(Public)</strong></li> </ul> <h1>Ensemble Methods That Did Not Work</h1> <ul> <li>Bi-weighted correlations stacking (something that stemmed from my personal research, which worked on other regression and classification problems and datasets, just not this one sadly. It actually worked at the start, but in the last month of the competition I could not find a suitable configuration due to the lack of submissions. See my notebook above for details)</li> <li>Power-weighted stacking/blend (intention was to penalize the classification probabilities, idea came from the winning solution of the previous Jigsaw competition. Sadly this did not work out too)</li> <li>Distance-weighted stacking (main idea is to give higher weights to predictions with lower distances such as correlation/euclidean distance. This did not work too)</li> </ul> <p><a href="/piyushmishra1999">@piyushmishra1999</a> Thank you!</p>
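<p>The first of the two best submissions above boils down to a plain mean over submission files. A minimal sketch (file names are hypothetical):</p> <pre><code>import pandas as pd

# One file per model or blend that goes into the stack.
paths = [f"submission_{i}.csv" for i in range(12)]

subs = [pd.read_csv(p) for p in paths]
blend = subs[0][["id"]].copy()
# Simple arithmetic mean of the 'toxic' column across all submission files.
blend["toxic"] = sum(s["toxic"] for s in subs) / len(subs)
blend.to_csv("submission.csv", index=False)
</code></pre>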
Jigsaw Multilingual Toxic Comment Classification
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners!</p> <p>I would like to thank Kaggle and the Jigsaw team for hosting this interesting competition. I am also grateful to this great community. Without those great public kernels I would not have known how and where to start this project. I feel honored to end this competition with a solo gold medal. It has been a fantastic learning experience. </p> <p>As the private leaderboard is finalized, I have decided to summarize what I learned during this process. My solution has a lot in common with other top solutions. The two helpful techniques/tricks are <strong>ensemble of diverse models</strong> and <strong>post-processing</strong>. At a certain point I also thought of using monolingual models as discussed in the <a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/160862">1st place solution</a>. But I failed because I wrongly used XLM-R.</p> <h1>Ensemble of diverse models</h1> <h2>Diverse models</h2> <p>In my personal understanding, language models are diverse if the train sets, tokenization methods or model architectures are different. In fact, I did not develop any methods from scratch. Instead, I started from public kernels and improved them by trial and error. Below is a list of kernels I took advantage of for ensembling in my final submission. </p> <ul> <li><p><a href="https://www.kaggle.com/shonenkov/tpu-training-super-fast-xlmroberta">XLM-R TPU on PyTorch using Pseudolabeled (PL) opensubtitles</a> by <a href="https://www.kaggle.com/shonenkov">Alex Shonenkov</a></p></li> <li><p><a href="https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta">The first functioning XLM-R model</a> by <a href="https://www.kaggle.com/xhlulu">xhlulu</a></p></li> <li><p><a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">XLM-R with fine-tuned MLM</a> by <a href="https://www.kaggle.com/riblidezso">Dezso Ribli</a></p></li> <li><p><a href="https://www.kaggle.com/miklgr500/jigsaw-tpu-bert-two-stage-training">Two-stage BERT using translated dataset</a> by <a href="https://www.kaggle.com/miklgr500">Michael Kazachok</a></p></li> <li><p><a href="https://www.kaggle.com/jhoward/nb-svm-strong-linear-baseline">NB-SVM</a> by <a href="https://www.kaggle.com/jhoward">Jeremy Howard</a></p></li> </ul> <p>The final submission is an ensemble of 11 models, including two NB-SVM models trained with the <a href="https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api">Google API translated multilingual train set</a>, two BERT models trained with the English train, validation and test sets, one XLM-R model trained with the downsampled unintended-bias train set and the original validation set, one XLM-R model trained with the PL test set, one XLM-R model trained using the multilingual train set, one XLM-R model with the multilingual train set and the PL test set, two XLM-R models with fine-tuned MLM, and three XLM-R models trained using PL opensubtitles. The ensemble weights were found by intuition and by probing the public leaderboard score. 
The weights are:</p> <pre><code>weights = {}
weights['bert_1'] = weights['bert_2'] = 0.4/2
weights['nbsvm_1'] = weights['nbsvm_2'] = 0.035/2
weights['xlm_r_en'] = 0.7; weights['xlm_r_pl_en'] = 0.3
weights['xlm_r_multilingual'] = 0.35; weights['xlm_r_pl_multilingual'] = 0.2
weights['xlm_r_mlm_en'] = weights['xlm_r_mlm_en_2'] = 0.25/2
weights['xlm_r_mlm_multilingual'] = weights['xlm_r_mlm_multilingual_2'] = 0.25/2
weights['xlm_r_opensubtitle'] = weights['xlm_r_opensubtitle_2'] = weights['xlm_r_opensubtitle_3'] = 0.27/3
</code></pre> <h1>Post-processing</h1> <p>As highlighted in many other top solutions, post-processing contributes a lot to the AUC score, since what matters is not the accuracy of a single prediction, but the rank/distribution of all predictions. As we need to predict comments in six languages in the test set, it is likely that the model over-predicts toxicity for one language while under-predicting toxicity for another. Therefore, scaling the predictions for individual languages might be helpful. I first tried to put a 1.3 scaling coefficient on <strong>fr, es and ru</strong>. It gave me a boost of around 0.002 in Public LB score for my best ensemble model. I also tried this with two single-model predictions and saw a similar trend. It gave me confidence that this should work generally, at least for the models I used. So for as long as two weeks I experimented with the scaling coefficients, and in the end I found the following coefficients that worked best for me.</p> <pre><code>scaling = {
    'fr': 1.175, 'ru': 0.975,
    'es': 1.475, 'it': 0.88,
    'pt': 0.8,   'tr': 1.0
}
</code></pre> <p>Another language-wise post-processing trick is to push the low and high ends of the predictions further apart within each language; it leads to a tiny LB score increase.</p> <pre><code>def post_process(d):
    # d: list of numbers representing a distribution
    d = np.array(d)
    q_low, q_high = np.quantile(d, q=0.55), np.quantile(d, q=0.95)
    mask_nontoxic, mask_toxic = d&lt;=q_low, d&gt;=q_high
    d[mask_nontoxic] = 0.9*d[mask_nontoxic]
    d[mask_toxic] = 1.1*d[mask_toxic]
    return d

langs = test.lang.unique()
for language in langs:
    mask = test.lang == language
    sub.loc[mask, 'toxic'] = post_process(sub.loc[mask, 'toxic'])
</code></pre> <h1>Things that worked</h1> <ul> <li>Ensemble</li> <li>Post-processing: language-wise scaling</li> <li>Add another dense layer before the final dense layer</li> <li>Train on more balanced data</li> <li>Train on PL test set</li> </ul> <h1>Things that did not work</h1> <ul> <li><p>Ensemble including logistic regression models trained and predicted on either English or multilingual data</p></li> <li><p>Monolingual models using XLM-R</p></li> <li><p>Add more dense layers</p></li> <li><p>Learning rate scheduler</p></li> <li><p>Preprocess the data</p></li> <li><p>other post-processing techniques</p></li> </ul>
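<p>Going back to the ensemble weights listed above: for completeness, here is a minimal sketch of how a weighted blend with that <code>weights</code> dict could be assembled. The per-model submission file layout is an assumption, not something specified in the write-up.</p> <pre><code>import pandas as pd

# Assumed layout: one submission file per model key, e.g. preds/xlm_r_en.csv with columns id, toxic.
blend = None
for name, w in weights.items():
    sub = pd.read_csv(f"preds/{name}.csv")
    contribution = w * sub["toxic"]
    blend = contribution if blend is None else blend + contribution

out = pd.read_csv("preds/xlm_r_en.csv")[["id"]]
out["toxic"] = blend
# No need to divide by the sum of the weights: rescaling does not change ROC AUC.
out.to_csv("submission.csv", index=False)
</code></pre>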
Jigsaw Multilingual Toxic Comment Classification
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! 1. I didn't care if the randomly selected samples were seen in the training set. This was not done for any logic reasons but for simpleness. And another reason was that there were huge numbers of negatives, which made the omission of some samples negligible. 2. <code>weight</code> could be greater than 1. It was 1 in the final submission due to the better performance than other values.</p> <p>Thanks! I am also very excited to win my first gold medal! And thanks for the compilation of top solutions!</p> <p>Thanks to Jigsaw to hold such amazing competition! And also thanks to my teammate <a href="/tonyxu">@tonyxu</a> for diligence and brilliant ideas. This is our second jigsaw competition and also the first time to join such multilingual nlp competition. </p> <h2>Brief summary</h2> <p>Before team merger, I mainly focused on how to train a single model with decent performance while my teammate worked on training diverse models, especially training on single language data and combine them together. Before merger, we both achieved 0.9487 on public leaderboard. As you can see, we worked in relatively different directions and achieved 0.9499 right after team merger. After team merger, we focused on training more diverse models for diversity, which gave us great boost but also caused us to ignore the post processing trick most top winners found.</p> <h2>Detailed summary</h2> <h3>Data sampling</h3> <p>We used both english training data and its translation shared publicly. Since there were tremendous amount of training data available, the main issue was not too little but too much. I mainly adopted hard sampling technique to sample the training data. </p> <p>First, I included all positives and randomly sampled negatives to train a XLM-Roberta Large model and then used it to make predictions on all training data including all english data and translated jigsaw toxic comment. </p> <p>Then for english data, we set a threshold for negatives while we took all positives due to the data unbalance. The data with gap between the predicted probability and label greater than the threshold were all taken and the data with gap smaller than that were sampled. The same logic for translated data except the threshold setting. A single threshold may cause issues because translation led to information loss. So I set a value range, say [0.3, 0.7] and any data in this range were all taken. I didn't convert soft label to hard 0/1 label because it worked better in the last jigsaw competition. The number of training data is ~934k. XLM-Roberta Large trained on this data could reach 0.9469 on public lb and 0.9458 on private lb. </p> <h3>Multilingual Models</h3> <p>Our multilingual models mainly consist of <code>XLM-Roberta-Large</code>, <code>XLM-Roberta-base</code>, <code>Bert-multilingual-base-uncased</code>. My teammate also did language model finetuning on <code>jigsaw-unintended-bias</code> data. We don't know the exact number for each of them used in the final ensemble but the best public scores are 0.9469, 0.9364 and 0.9325 respectively. 
</p> <p>In terms of model structure, I used a vanilla classification head on the CLS token of the second to last layer or the concatenation of last four layers while my teammate took the <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/103280">architecture of winning model</a> in the last jigsaw competition.</p> <h3>Monlingual Models</h3> <p>Just one week before the deadline, we found the monolingual language models in <a href="https://huggingface.co/models">huggingface communities</a>. We trained the models listed below on corresponding translated language, which gave us boost from 0.9507 to 0.9512 on public lb. </p> <p>&gt; * es: - <a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased#">https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased#</a> - <a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased#">https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased#</a> - <a href="https://github.com/dccuchile/beto">https://github.com/dccuchile/beto</a> <br> * fr: - <a href="https://huggingface.co/camembert/camembert-large">https://huggingface.co/camembert/camembert-large</a> - <a href="https://huggingface.co/camembert-base">https://huggingface.co/camembert-base</a> - <a href="https://huggingface.co/flaubert/flaubert-large-cased">https://huggingface.co/flaubert/flaubert-large-cased</a> - <a href="https://huggingface.co/flaubert/flaubert-base-uncased">https://huggingface.co/flaubert/flaubert-base-uncased</a> - <a href="https://huggingface.co/flaubert/flaubert-base-cased">https://huggingface.co/flaubert/flaubert-base-cased</a> * tr: - (128k vocabulary) <a href="https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased">https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased</a> - (128k vocabulary) <a href="https://huggingface.co/dbmdz/bert-base-turkish-128k-cased#">https://huggingface.co/dbmdz/bert-base-turkish-128k-cased#</a> - (32k vocabulary) <a href="https://huggingface.co/dbmdz/bert-base-turkish-uncased#">https://huggingface.co/dbmdz/bert-base-turkish-uncased#</a> - (32k vocabulary) <a href="https://huggingface.co/dbmdz/bert-base-turkish-cased#">https://huggingface.co/dbmdz/bert-base-turkish-cased#</a> <br> * it - <a href="https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased#">https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased#</a> - <a href="https://huggingface.co/dbmdz/bert-base-italian-xxl-cased#">https://huggingface.co/dbmdz/bert-base-italian-xxl-cased#</a> - <a href="https://huggingface.co/dbmdz/bert-base-italian-uncased#">https://huggingface.co/dbmdz/bert-base-italian-uncased#</a> - <a href="https://huggingface.co/dbmdz/bert-base-italian-cased#">https://huggingface.co/dbmdz/bert-base-italian-cased#</a> <br> * pt - <a href="https://huggingface.co/neuralmind/bert-large-portuguese-cased#">https://huggingface.co/neuralmind/bert-large-portuguese-cased#</a> - <a href="https://huggingface.co/neuralmind/bert-base-portuguese-cased#">https://huggingface.co/neuralmind/bert-base-portuguese-cased#</a> <br> * ru - <a href="https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased">https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased</a> - <a href="https://huggingface.co/DeepPavlov/rubert-base-cased#">https://huggingface.co/DeepPavlov/rubert-base-cased#</a></p> <h3>English models</h3> <p>To add more diversities, We also trained <code>roberta-large, bert-large, albert</code> on english training data and made prediction on data translated into English and to our surprise, this did 
improve the performance! And we also had all cross-lingual models predict on English-translated test data and added those predictions to our final ensemble.</p> <h3>Training</h3> <ul> <li><a href="https://arxiv.org/abs/1905.05583">dynamic learning rate decay</a></li> <li>Adam optimizer</li> <li>binary cross entropy loss function</li> <li>learning rate of 1e-6 to 3e-6 for 2 epochs for large models and 3 epochs for base models</li> <li>after training on English and translated data, we both finetuned further on the validation dataset for 1-2 epochs.</li> </ul> <p>My teammate had two different training schemes:</p> <ul> <li><p>took <code>jigsaw-toxic-comment-train.csv</code> and its translated datasets to train 7 models, one for each language, and combined the predictions in the way below:</p> <pre><code>test = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/test.csv")
sub = pd.read_csv('/kaggle/input/toxu-submissions/xlmroberta-large-lm-all/submission.csv')
test['toxic'] = 0.0
for lang in ['en', 'fr', 'es', 'it', 'pt', 'ru', 'tr']:
    lang_df = pd.read_csv(f'/kaggle/input/toxu-submissions/xlmroberta-large-lm-all/submission_{lang}.csv')
    idx = test[test['lang'] == lang].index
    # `weight` is defined elsewhere; it was 1 in the final submission (see the reply above)
    test.loc[idx, 'toxic'] += lang_df.loc[idx, 'toxic'] * weight
    idx = test[test['lang'] != lang].index
    test.loc[idx, 'toxic'] += lang_df.loc[idx, 'toxic']
test['toxic'] /= 7
</code></pre></li> <li><p>took <code>jigsaw-unintended-bias-train.csv</code> and its translation to train in the first epoch, and used the same data as in the point above</p></li> </ul> <h3>Postprocessing</h3> <p>After seeing the trick used in nearly all the top solutions, we felt it was really a pity that we missed it: once, my teammate forgot to average the predictions of the 7 language-specific models, causing some predictions to be larger and giving a 0.0001 improvement. We didn't delve deeper into this due to the tiny improvement!</p> <h3>Ensemble</h3> <p>We tried power averaging with power 3, as in the <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/100661">2nd place solution</a> of the last Jigsaw competition, but it didn't bring improvement compared with a normal weighted average.</p> <h3>What didn't work</h3> <ul> <li>auxiliary task with 5 other labels in the training data</li> <li>sample weights for different languages</li> <li>extra labelled hatespeech data</li> <li>pseudo labelling</li> <li>multi-sample dropout</li> <li>cyclic learning rate</li> <li>checkpoint ensemble</li> <li>training multilingual bert from scratch on multilingual wikipedia data dump</li> </ul>
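<p>To make the "hard sampling" in the Data sampling section more concrete, here is a minimal sketch for the English data: keep every positive, keep the negatives the first-round model got badly wrong, and subsample the easy rest. The threshold, sampling fraction and column names are assumptions; the write-up does not give exact values.</p> <pre><code>import numpy as np
import pandas as pd

def hard_sample_english(df, threshold=0.3, easy_frac=0.1, seed=42):
    """df: English training data with columns comment_text, toxic (label) and
    pred (probability from the first-round model)."""
    rng = np.random.RandomState(seed)
    pos = df[df["toxic"] &gt;= 0.5]                      # keep every positive
    neg = df[df["toxic"] &lt; 0.5]
    gap = (neg["pred"] - neg["toxic"]).abs()
    hard_neg = neg[gap &gt; threshold]                   # badly mispredicted negatives: keep all
    easy_neg = neg[gap &lt;= threshold].sample(frac=easy_frac, random_state=rng)
    return pd.concat([pos, hard_neg, easy_neg]).sample(frac=1.0, random_state=rng)
</code></pre>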
Jigsaw Multilingual Toxic Comment Classification
75th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks man <a href="/kurianbenoy">@kurianbenoy</a> </p> <p>Thanks buddy <a href="/rohitsingh9990">@rohitsingh9990</a> </p> <p>Congrats bro :)</p> <p>Congratulations everyone! First of all, my teammates @rftexas, @tanulsingh077 and @ibtesama were really awesome and supportive.</p> <p><strong>Overview of our models</strong></p> <ol> <li><p>Fold CV with 250k translated samples and the 8k validation dataset, using a custom pre-trained XLM-Roberta large version (score: 0.9433). </p></li> <li><p>Single-fold normal training of XLM large with Test Time Augmentation (TTA) at prediction: the test set translated into 6 languages and the predictions averaged (score: 0.9427).</p></li> <li><p>Single-fold training on pre-trained XLM large with a multi-sample dropout architecture (score: 0.9417).</p></li> <li><p>Three-stage training on the training data: (train on English -&gt; inference), (train on translated languages -&gt; inference), (train on validation -&gt; inference). Finally, a weighted average of these predictions gave us 0.9399.</p></li> <li><p>Two-stage XLM Base with 400k samples and TTA (0.9363).</p></li> </ol> <p><strong>Tricks that worked</strong></p> <ul> <li>Token length = 224</li> <li>Balanced the dataset in language distribution.</li> <li>Balanced the dataset in target class distribution (125k for each class).</li> <li>Head + tail encoding.</li> <li>Learning rate scheduling.</li> <li>Label smoothing.</li> </ul> <p>Finally, blending using a <a href="https://www.kaggle.com/paulorzp/gmean-of-light-gbm-models-lb-0-951x">GMEAN blend</a>.</p>
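<p>One of the tricks listed above, head + tail encoding, keeps the beginning and the end of a long comment instead of plainly truncating it. A minimal sketch with a token length of 224 (the exact head/tail split the team used is not stated, so the numbers below are only an illustration):</p> <pre><code>from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-large")

def head_tail_encode(text, max_len=224, head_len=144, tail_len=78):
    # head_len + tail_len + 2 special tokens == max_len
    ids = tokenizer.encode(text, add_special_tokens=False)
    if len(ids) &gt; max_len - 2:
        ids = ids[:head_len] + ids[-tail_len:]
    ids = [tokenizer.cls_token_id] + ids + [tokenizer.sep_token_id]
    ids = ids + [tokenizer.pad_token_id] * (max_len - len(ids))
    return ids
</code></pre>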