Google AI Open Images - Object Detection Track
8th place solution [0.49 on private LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI Open Images - Object Detection Track <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Using out-of-the-box solutions (pre-trained models)</h1> <p>To make the initial submission we used the pre-trained *faster_rcnn_inception_resnet_v2_atrous_oid* model from the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md">TF Model Zoo</a>. We achieved 0.216 on the public LB. This model had been frozen with a score threshold of 0.3, so we re-froze it without any threshold. After that it gave us 0.36 on the public LB, and we used it as the base for our final submissions.</p> <h1>Data and Metric Analysis</h1> <ul> <li><strong>Unbalanced data.</strong> The first thing we noticed is that the data is very unbalanced. For example, the Top-50 classes (by number of b-boxes) account for 87.9% of training b-boxes and 84.3% of training images, while the Bottom-50 classes account for less than 1% of b-boxes and images. That means roughly 200K samples per class for the Top-50 and only about 200 samples per class for the Bottom-50. Because the evaluation metric is mAP, all classes have the same impact on the final score.</li> <li><strong>Hierarchy structure.</strong> All classes in OID are organized into one huge class tree. Many classes therefore share common features, which should help the model a lot, but the hierarchical structure also raises new problems: the model gets confused by nearly identical classes (which class should a 19-year-old girl get: Girl or Woman?) and the meta-classes start to collect noisy labels (can't tell whether this is a Woman or a Man? Let's mark it as Person). </li> <li><strong>Bounding Boxes.</strong> We analyzed b-boxes by area and aspect ratio but didn't find a strong correlation. By default the aspect ratios in the RPN Anchor Generator are 1, 2 and 0.5. We added ⅓, 3, ¼ and 4, but it did not give a notable improvement.</li> <li><strong>Missing annotations.</strong> We visualized class b-boxes and looked through some training images. We noticed that many images contain only partially annotated instances for a large number of classes. Missing annotations lead to false negatives during training. <img src="https://cdn1.savepice.ru/uploads/2018/9/6/e11465bd14a191d264444dc53911414f-full.png" alt="Image"></li> <li><strong>Validation dataset.</strong> We used the validation set proposed by the organizers. During submissions we noticed that results on the public LB are about 0.13 lower than on validation. This gap stayed almost unchanged from submission to submission.</li> </ul> <h1>Detection Architecture</h1> <p>As our experiments were based on the tensorflow/models framework, we chose Faster R-CNN plus a backbone. We also tried the Single Shot MultiBox Detector approach, but it gave much worse results on OID (even for a small number of classes).</p> <h1>Training a single model on the whole dataset</h1> <p>At the first stage we tried to train on all 500 classes to figure out what accuracy we could achieve with a single model. We experimented with different backbones for Faster R-CNN:</p> <ul> <li><strong>Inception v2.</strong> One of the fastest models for training/inference (except MobileNet) in the TF Model Zoo.
We knew we wouldn't achieve results comparable to Inception ResNet v2, but we could check different ideas relatively fast. Best mAP: 0.298 on the public LB</li> <li><strong>Inception ResNet v2.</strong> As it had weights pre-trained on OID, we fine-tuned it but got stuck at mAP ~0.38 on the public LB</li> <li><strong>NasNet Large.</strong> Consumes a large amount of resources during training/inference (2-3x compared to Inception ResNet v2). Best mAP ~0.42 on the public LB</li> </ul> <h1>Dataset Split</h1> <p>Taking into account that the data is very unbalanced, we split it into 6 subsets according to the class distribution. Then we trained Faster R-CNN + ResNet-101 on each subset and gained an improvement of about 0.12 compared to the single model.</p> <ul> <li><strong>Bottom 0-100:</strong> 263 imgs/class</li> <li><strong>Bottom 100-200:</strong> 691 imgs/class</li> <li><strong>Bottom 200-300:</strong> 1552 imgs/class</li> <li><strong>Bottom 300-400:</strong> 4206 imgs/class</li> <li><strong>Bottom 400-450:</strong> 14375 imgs/class</li> <li><strong>Bottom 450-500:</strong> 202171 imgs/class</li> </ul> <h1>Missing annotations:</h1> <p>The Open Images Dataset contains a lot of unlabeled objects. This produces a noisy and sometimes even wrong training signal for our models. To mitigate this problem we tried the following ideas:</p> <ul> <li><strong>Pseudo-labeling.</strong> This is a common approach used in competitions to get additional data and to fix bad annotations. We applied this trick only to the bottom 0-100 classes subset with a prediction threshold &gt; 0.5. It increased mAP for that subset by 1%. </li> <li><strong>Overlap Soft Sampling (OSS, <a href="https://arxiv.org/abs/1806.06986">paper</a>).</strong> Another idea is to reweight training samples depending on their highest IoU with any ground-truth box. The core idea is that when a sample contains a significant part of any annotated object, the probability of a wrong training signal is lower. On average this approach increased performance by 0.3%, but looking closer, some classes dropped by up to 50% AP while others improved by up to 50% AP, which was actually really good news in terms of ensembling. </li> </ul> <h1>Ensembling</h1> <ul> <li><strong>Best only.</strong> In this approach, for each class we simply select predictions from the model that predicts this class best, based on validation. We used it as a starting point. </li> <li><strong>Performance-based with restore procedure.</strong> This approach shares the same idea as “Best only”, but instead of selecting just one model to predict a class, we select the several models that perform best for this class and add their predictions. Before adding a model's predictions to the final output, we rescore all of its confidences for this class with the following formula: <img src="https://cdn1.savepice.ru/uploads/2018/9/6/0d7f56dae42f4d016d54ade327af674d-full.png" alt="Image"> Here performance is the class average precision and N is the number of models that predict this class. We achieved a ~1% improvement over the base strategy (Best only). </li> </ul> <h1>Result Pipeline</h1> <ol> <li>Split the training set into 6 subsets based on training sample counts (see the Dataset Split section).</li> <li>Train the models. Generally we train a Faster R-CNN with a ResNet-101 backbone for each subset.</li> <li>For each class we selected the models (based on evaluation results) that will be used to predict this class.
Note that we used the pre-trained Faster R-CNN with an Inception ResNet v2 backbone as the base model.</li> <li>Then we concatenate all selected predictions from each model, expand all classes with meta-class predictions, and apply Soft-NMS to suppress duplicate predictions.</li> </ol> <h1>Hints and tricks:</h1> <ul> <li>We increased the output size of the RPN and the result predictor from 300 to 500. Improved mAP by ~0.1%.</li> <li>We expanded predicted b-boxes from leaf classes to their superclasses. Improved mAP by ~0.5%.</li> <li>Using <a href="https://arxiv.org/abs/1704.04503">Soft-NMS</a> instead of the default NMS improved mAP by ~2.5% (see the sketch below). </li> </ul> <h1>Other approaches that did not improve performance:</h1> <ul> <li>Classifying image-level classes and suppressing false positives from the detectors</li> <li>Splitting the prediction head into classification and regression</li> <li>RMSProp/Adam instead of SGD with momentum</li> <li>Dropout</li> <li>Using image-level annotations during training</li> <li>Augmentations: Random Distort Color, Random Adjust Brightness, Random Black Patches, Random Gaussian Noise</li> </ul>
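<p>The Soft-NMS step credited above with a ~2.5% mAP gain can be sketched in a few lines of NumPy. This is a minimal Gaussian-decay variant of the algorithm from the linked paper, not the team's exact implementation; <code>sigma</code> and the score threshold are assumed values.</p> <pre><code>import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=1e-3):
    """Gaussian Soft-NMS: decay the scores of overlapping boxes instead of dropping them.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns kept indices and their rescored confidences.
    """
    boxes = boxes.astype(np.float64)
    scores = scores.astype(np.float64).copy()
    idxs = np.arange(len(scores))
    keep, keep_scores = [], []
    while idxs.size:
        # pick the highest-scoring remaining box
        top = np.argmax(scores[idxs])
        best = idxs[top]
        keep.append(best)
        keep_scores.append(scores[best])
        idxs = np.delete(idxs, top)
        if idxs.size == 0:
            break
        # IoU of the selected box with every remaining box
        x1 = np.maximum(boxes[best, 0], boxes[idxs, 0])
        y1 = np.maximum(boxes[best, 1], boxes[idxs, 1])
        x2 = np.minimum(boxes[best, 2], boxes[idxs, 2])
        y2 = np.minimum(boxes[best, 3], boxes[idxs, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[idxs, 2] - boxes[idxs, 0]) * (boxes[idxs, 3] - boxes[idxs, 1])
        iou = inter / (area_best + area_rest - inter)
        # Gaussian decay instead of hard suppression, then drop near-zero scores
        scores[idxs] *= np.exp(-(iou ** 2) / sigma)
        idxs = idxs[scores[idxs] &gt; score_thresh]
    return np.array(keep), np.array(keep_scores)
</code></pre>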
Google AI Open Images - Visual Relationship Track
45th place solution (an economical one)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI Open Images - Visual Relationship Track <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was busy preparing for the TrackML contest (and other things in life), so I only had around three weeks to prepare for this year's Open Image. I bet that there are other solutions that would result in way better score, but this solution might be the best tradeoff between time, cost, and performance. Apart from the (stupid) attempt of trying to retrain the pretrained network on V4 dataset, it also does not use much GPU resources since I don't have access to any dedicated server.</p> <p>In short, I tried to re-create results obtained from the paper "Visual Relationship Detection with Deep Structural Ranking." (AAAI 2018) Their GitHub repo can be found at <a href="https://github.com/GriffinLiang/vrd-dsr">https://github.com/GriffinLiang/vrd-dsr</a>. First, I used the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md">faster_rcnn_inception_resnet_v2_atrous_oid</a> pretrained model, found in the tensorflow model detection zoo. I then tweaked the inference script to generate kaggle prediction strings, and ran in parallel on 8 Nvidia K80 GPUs. Considering the current hourly rate on GCP of $0.45 USD per GPU per hour, the cost is around 35 US dollars. This on-the-shelf inference gave me 0.23 public LB on the detection track, but according to <a href="https://www.kaggle.com/anokas"></a><a href="/anokas">@anokas</a> a few other techniques can boost the LB drastically to 0.37.</p> <p>Then, I scp'd the detection track back to my personal workstation and closed the cloud instance. Here's the network architecture of the vrd-dsr network: <img src="https://preview.ibb.co/mtsHL9/net.png" alt=""></p> <p>Since we've already had spatial features (provided by the bounding boxes), we can construct a training pipeline with clever code-tweaking to avoid using the memory-expensive object detector. I didn't get the word2vec semantic embedding of words to work, but if I had more time it's definitely worth a try. The pretrained vgg16 CNN is also available on the GitHub repo, so all we need are the two following files: train.pkl and so_prior.pkl</p> <p>so_prior stands for (subject-object prior). According to the author, "so_prior.pkl contains the conditional probability given subject and object. For example, if we want to calculate p(ride|person, horse), we should find all the relationships describing person and horse and the exact relationships of person-ride-horse. We use N and M to denote the number of the above two relationships, and p(ride|person, horse) can be simply acquired as N/M." So we can create such a file for this google open image challenge, and it has a shape of (subject, object, predicate).</p> <p>train.pkl description from the repo: python list each item is a dictionary with the following keys: {'img_path', 'classes', 'boxes', 'ix1', 'ix2', 'rel_classes'} 'classes' and 'boxes' describe the objects contained in a single image. 'ix1': subject index. 'ix2': object index. 'rel_classes': relationship for a subject-object pair.</p> <p>So, I just needed to re-format the training data into this train.pkl. 
Since 'classes' and 'rel_classes' had to be integers, I simply used a not-so-fancy numeric encoding via a python dictionary, and used the inverted dictionary at the end to decode. Since I needed to filter samples in the official train csv according to the classes I put in so_prior.pkl, making the so_prior from the vrd classes didn't really work because there were too few samples that fit inside the vrd classes. I don't know if it's a bug in my code or not. In the end, I created the so_prior from the boxable classes, plus the visual attribute classes (in a later attempt), so the shape is either (601,601,9) or (606,606,10).</p> <p>It takes around 4 hours to train the network on my Nvidia Quadro M2000M with 4 GB of RAM, so it's pretty achievable. I could have created a separate vrd-dsr network just for visual attributes, but I didn't have time to work on another network. I will probably release the tweaked code next week if anyone's interested, but right now the working directory is just too messy.</p> <p>I know that a 45/234 place isn't that impressive, but I believe that this is the simplest solution you can find that gives you a silver. It also doesn't use many resources (I would be really happy if I could use a dedicated 1080Ti server to run more experiments), so I believe it's worth a look. This competition has the largest dataset I've ever seen, and I learned a lot from trying to work it out with limited time and resources. Again, great work guys, and congratulations to <a href="/anokas">@anokas</a> for being the youngest GM in the competition category!</p>
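<p>A minimal pandas/NumPy sketch of how such an so_prior array can be built from the relationship annotations, following the N/M definition quoted above. The column names (<code>LabelName1</code>, <code>LabelName2</code>, <code>RelationshipLabel</code>) are assumptions about the challenge csv schema, and this sketch only covers classes that actually appear in the annotations, whereas the post pads it out to all 601 boxable (plus attribute) classes.</p> <pre><code>import numpy as np
import pandas as pd

vrd = pd.read_csv("challenge-2018-train-vrd.csv")  # assumed column names below

classes = sorted(set(vrd["LabelName1"]) | set(vrd["LabelName2"]))
predicates = sorted(vrd["RelationshipLabel"].unique())
cls_idx = {c: i for i, c in enumerate(classes)}
rel_idx = {r: i for i, r in enumerate(predicates)}

# counts[s, o, r] = number of (subject s, object o, predicate r) triplets in train
counts = np.zeros((len(classes), len(classes), len(predicates)))
for s, o, r in zip(vrd["LabelName1"], vrd["LabelName2"], vrd["RelationshipLabel"]):
    counts[cls_idx[s], cls_idx[o], rel_idx[r]] += 1

# so_prior[s, o, r] ~ p(r | s, o) = N / M, where M sums over all predicates for the pair
pair_totals = counts.sum(axis=2, keepdims=True)
so_prior = np.divide(counts, pair_totals, out=np.zeros_like(counts), where=pair_totals &gt; 0)
</code></pre>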
Google AI Open Images - Visual Relationship Track
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI Open Images - Visual Relationship Track <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you Google AI and Kaggle team for hosting this great competition. I really enjoyed it and learnt a lot from the impressive dataset. </p> <p>My approach consists of 2 models. The first model is to solve "is" visual relationship. The second is "non-is". I took this way as I wanted to start simple in the beginning and to leverage a model established for the object detection track.</p> <p><strong>1. “is” relation</strong></p> <p>The distinct triplets of “is” relation were only 42 therefore I treated each of them as a separate class and trained SSD Resnet50 ( Retinanet ) with the 42 target classes. I chose this model as it was relatively lightweight and it was compatible with GCP TPU. </p> <p><a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync.config">https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync.config</a></p> <p>In the given triplet labels challenge-2018-train-vrd.csv, there were 194K “is” relations out of 374K samples. I generated my “is” only model training set based on these triplets for the positive labels, mixing-in 10K images which don’t have any “is” relations as negative labels.</p> <p>This first model gave me 0.09 on LB. </p> <p><strong>2. “non-is” relations</strong></p> <p>Next, I moved to “non-is” relations such as “under”, “on”, “holds” etc. I trained another Retinanet with all the 62 target classes just to detect single boxes. This second training set was all the images with challenge-2018-train-vrd-bbox.csv positive labels. Then I calculated the probabilities of possible valid box combinations in each image for the most 100 confident boxes. In other words, for each image, I got the most confident 100 combinations out of 10000 combinations of boxes ( 100 x 100 ). I used this formula to estimate the confidence for each combination C_c. </p> <p>C_c = F_r x sqrt ( C_box1 x C_box2 )</p> <p>Where C_box1 and C_box2 are the confidences for each box given by the box detection model. F_r is the relationship coefficient function for each box1-box2-relation combination. I used GBDT ( LightGBM ) to calculate this coefficient using simple features like box labels, relation label, Euclid distance of boxes, relative distance ( distance divided by sum of total box area ), relative x-y position of box1 to box2, etc. </p> <p>This second model gave me 0.16 on LB.</p> <p><strong>3. Ensemble</strong></p> <p>This is just a concatenation of the first and second model outputs. This is simply ok as they detect different types of relations. There shouldn't be any conflict or performance degradation.</p> <p>The result was 0.25 on LB.</p> <p><strong>Resources</strong></p> <p>I didn't use any external dataset. I used only the given open image v4 dataset and labels. The 500 USD GCP credit was very helpful even though it was quite challenging to manage everything within the GCP credit for this size of dataset. GCP worked great flexibly and it allowed me to do all the work. 
My local env ( macbook pro ) was just too small for this dataset :) I just feel it’s quite impressive that this quality and size of computer vision dataset is publicly available for ML experiments.</p> <p>Thank you all,</p>
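<p>A small sketch of the pair-scoring step described in section 2 above: every ordered pair of detected boxes gets the confidence C_c = F_r x sqrt( C_box1 x C_box2 ), where F_r comes from a trained LightGBM model over simple geometric features, and only the most confident 100 combinations per image are kept. The feature set and the dictionary box format here are illustrative assumptions; <code>booster</code> stands for a trained <code>lightgbm.Booster</code>.</p> <pre><code>import itertools
import numpy as np

def pair_features(b1, b2, rel_id):
    """Geometric features for one (box1, box2, relation) combination.
    Boxes are dicts with x1, y1, x2, y2, label_id, conf; labels are integer-encoded."""
    cx1, cy1 = (b1["x1"] + b1["x2"]) / 2, (b1["y1"] + b1["y2"]) / 2
    cx2, cy2 = (b2["x1"] + b2["x2"]) / 2, (b2["y1"] + b2["y2"]) / 2
    dist = np.hypot(cx1 - cx2, cy1 - cy2)
    area1 = (b1["x2"] - b1["x1"]) * (b1["y2"] - b1["y1"])
    area2 = (b2["x2"] - b2["x1"]) * (b2["y2"] - b2["y1"])
    return [b1["label_id"], b2["label_id"], rel_id, dist,
            dist / (area1 + area2 + 1e-9),        # relative distance
            cx1 - cx2, cy1 - cy2]                 # relative x-y position

def score_combinations(boxes, rel_id, booster, top_k=100):
    """C_c = F_r * sqrt(C_box1 * C_box2); keep the top_k pairs per image."""
    pairs = list(itertools.permutations(boxes, 2))
    feats = np.array([pair_features(a, b, rel_id) for a, b in pairs], dtype=np.float64)
    f_r = booster.predict(feats)                  # relationship coefficient per pair
    conf = f_r * np.sqrt(np.array([a["conf"] * b["conf"] for a, b in pairs]))
    order = np.argsort(-conf)[:top_k]
    return [(pairs[i][0], pairs[i][1], conf[i]) for i in order]
</code></pre>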
Google AI Open Images - Visual Relationship Track
5th place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI Open Images - Visual Relationship Track <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://twitter.com/radekosmulski/status/1035496977847517184">tldr; version (Twitter thread)</a></p> <p>I trained two models - one for the 'is' relationship and one for all the others. For the 'is' relationship I used a sigmoid output layer with 5 units to predict whether the item belongs to one or more of the 5 'is' categories.</p> <p>For all the other relationships I presented the model with a pair of items and asked it to predict whether they are related or not (binary classification).</p> <p>I fed 128x128 crops to the models. For the 'is' relationship I blew up the detections to occupy the entire image. For all the remaining relationships I took crops without distorting the aspect ratio. Further to that, I masked out the area outside of the detections (see the sketch at the end of this post).</p> <p>I sent the crops through a CNN, Inception v4, and concatenated the outputs with information I derived from the coordinates, feeding the coordinates as well. I calculated intersection over union between detections, relative distance and relative size. Further to that, I also used embeddings for the relationship and both of the labels. For the 'is' model I didn't include some of the information that does not make sense in this context (IoU, relationship embedding, etc).</p> <p>I used the <a href="https://github.com/fastai/fastai">fastai library</a> which is built on top of pytorch. This gave me a lot of flexibility. I developed and ran the solution in Jupyter notebooks. Despite not being able to give the competition as much time as I would have liked, this combination of tools allowed me to test quite a few hypotheses. For instance, it turns out there is a lot of information contained in the coordinates, and it seems important how you pose the question to be answered (though more investigation into this could be beneficial), etc.</p> <p>Once I have the predictions I scale them by relationship frequency in the train set. I also scale the probabilities by the initial detection scores coming from yolov3. I provide an overview of how I trained the model I use for detection <a href="https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/64734#379875">here</a>.</p> <p>For the vrd track, I did all the work on a machine with a Ryzen 5 CPU, 16 GB of RAM and a single 1080 Ti. I obtained and resized the crops before the training and stored them in a mongo database.</p>
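<p>A minimal sketch of the crop-and-mask preprocessing described above for the non-'is' relationships: take the union of the two detections, zero out pixels outside either box, and resize to 128x128. This is an illustrative reconstruction, not the original code; it omits the aspect-ratio-preserving padding mentioned in the post and assumes an RGB PIL image with pixel-coordinate boxes.</p> <pre><code>import numpy as np
from PIL import Image

def crop_pair_with_mask(img, box1, box2, size=128):
    """Crop the union of two detections and mask the area outside the boxes.
    img: RGB PIL.Image; boxes: (x1, y1, x2, y2) in pixels."""
    arr = np.array(img)
    ux1, uy1 = int(min(box1[0], box2[0])), int(min(box1[1], box2[1]))
    ux2, uy2 = int(max(box1[2], box2[2])), int(max(box1[3], box2[3]))
    mask = np.zeros(arr.shape[:2], dtype=bool)
    for x1, y1, x2, y2 in (box1, box2):
        mask[int(y1):int(y2), int(x1):int(x2)] = True   # keep pixels inside either detection
    arr = np.where(mask[..., None], arr, 0)             # black out everything else
    crop = arr[uy1:uy2, ux1:ux2]
    return Image.fromarray(crop.astype(np.uint8)).resize((size, size))
</code></pre>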
Google AI Open Images - Visual Relationship Track
7th Place Solution Summary - anokas
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI Open Images - Visual Relationship Track <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is true for the triplet relationships. For the attribute relationships (<code>chair is wooden</code>, <code>bottle is plastic</code>, etc) I trained a single CNN multi-label classifier.</p> <p>Removing the thresholding is what gave me the boost from 0.27 -&gt; 0.37 :)</p> <p>Thanks Alexander. I guess you missed Google's pre-trained model then (like it seems many did), as it basically worked out of the box for this competition - I only spent a few hours working on object detection. The fun stuff was the rest of it! :D</p> <p>Thanks!</p> <p>Congratulations to the winners and thanks to Google AI for hosting this fun and interesting competition. My solution was relatively simple, taking a submission file from the object detection competition as an input and building some heuristic models based off bounding boxes.</p> <p><strong>Object Detection</strong></p> <p>For object detection bounding boxes I used Google’s <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md">pre-trained Faster R-CNN model</a> trained on V2 of the Open Images dataset - the labels they were trained on were actually a superset of the labels in this dataset so it was just a case of mapping the labels and filtering out ones we weren’t interested in. After expanding the label hierarchy, this model scored about 0.37 in the object detection competition. I also predicted on flipped images and combined the predictions, netting me <strong>0.375</strong> overall.</p> <p>One thing that led to a huge increase in score in object detection was to not threshold the predictions and leave the low confidence predictions in the submission file. Because of the way average precision works, you cannot be penalised for adding additional false positives with a lower confidence than all your other predictions, however you can still improve your recall if you find additional objects that weren’t previously detected.</p> <p><strong>Visual Relationships</strong></p> <p>This challenge can actually be split up into two subproblems. The first subproblem involves the <code>is</code> label (for example <code>chair is wooden</code>), and in fact only involves a single object and an attribute. This subproblem spans 57 relationships across 5 different descriptive attributes.</p> <p>The second subproblem is indeed based around visual relationships, and involves a pair of objects as well as a proverb, such as <code>chair at table</code> - there are about 250 different triplets that occur within the dataset.</p> <p>While the two types of problems have about equal occurences in the dataset, the triplet relationship detection has about 5x more distinct relationships - meaning it has about 5x more weight on the LB score, due to 80% of the metric being the mean of relationship-wise AP.</p> <p><strong>Attribute Classification</strong></p> <p>For this subproblem, I built a training dataset by getting the bounding boxes of all objects which could have a descriptive attribute from the bbox ground truth file. 
I then matched these objects with the boxes in the relationships ground truth file, which gave me a ground truth (for each object example, which attributes are present).</p> <p>On this dataset, I then trained a single pre-trained DenseNet121 network on the cropped objects resized to 224x224, to predict the probability of all 5 classes - this achieved about 0.95-0.99 AUC on all the classes.</p> <p>The pre-trained model is then used to predict the probability of the 5 attributes for all the applicable bounding boxes in the object detection challenge, with the final predicted probability as follows:</p> <pre><code>P(chair is wooden) = P(bounding box is chair) * P(bounding box is wooden) </code></pre> <p>Where the <code>bounding box is chair</code> probability is the output of the object detection model.</p> <p><strong>Triplet Relationships</strong></p> <p>For the visual relationship triplets, I also built a dataset similarly to attribute classification - all potential pairs of bounding boxes of the two objects, and whether they have the given relationship.</p> <p>This gave me around 250 training sets in total for the different triplets (of varying size). For the first 100 relationships, I built a 5-fold XGBoost model on each one with the following features:</p> <ul> <li>% box 1 inside box 2, % box 2 inside box 1</li> <li>IoU (intersection over union between the boxes)</li> <li>Horizontal/vertical offset of the box centres</li> <li>Euclidean distance between the two box centres</li> <li>Euclidean distance normalised by the size of the boxes (zoom invariance)</li> </ul> <p>And then using the models, the submitted probability of each relationship is for example:</p> <pre><code>P(chair at table) = P(bounding box is chair) * P(bounding box is table) * P(chair at table XGBoost) </code></pre> <p>It seemed that building a separate model for each relationship worked well, as I noticed they each had very different feature importances (eg. sometimes the model was looking at the overlap, while for others the model was looking at the difference in height between the objects). Most classes also had &gt;0.95 AUC, showing the XGBoost model was very good at classifying whether a relationship existed.</p> <p>For the other relationships (with a few hundred samples or less) I replaced the xgboost model with a simple prior - what proportion of pair occurences had the relationship in the training data. I tried using eg. linear models for these but didn’t see an improvement.</p> <p>Overall, the solution runs from start to finish in under 24 hours on an i7+single GPU - I am happy with the performance given the simplicity. I observed a linear increase in performance in visual relationships given an improvement in the bounding boxes scores, so this solution could have potentially scored much higher if I had a better object detector! :)</p> <p>I am curious to see how other competitors approached the problem, as this is quite a new problem for Kaggle.</p> <p>- anokas</p>
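<p>A small sketch of the box-pair features listed above and of how the submitted probabilities are composed. The feature names and the 1e-9 stabilisers are illustrative assumptions; as described in the post, the model term is the per-triplet XGBoost output, replaced by the training-set prior for the rare triplets.</p> <pre><code>import numpy as np

def pair_features(b1, b2):
    """Features for the per-triplet XGBoost models; boxes are [x1, y1, x2, y2]."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    cx1, cy1 = (b1[0] + b1[2]) / 2, (b1[1] + b1[3]) / 2
    cx2, cy2 = (b2[0] + b2[2]) / 2, (b2[1] + b2[3]) / 2
    dist = np.hypot(cx2 - cx1, cy2 - cy1)
    return {
        "frac_1_in_2": inter / (a1 + 1e-9),           # % of box 1 inside box 2
        "frac_2_in_1": inter / (a2 + 1e-9),           # % of box 2 inside box 1
        "iou": inter / (a1 + a2 - inter + 1e-9),
        "dx": cx2 - cx1, "dy": cy2 - cy1,             # centre offsets
        "dist": dist,
        "dist_norm": dist / np.sqrt(a1 + a2 + 1e-9),  # rough zoom invariance
    }

def triplet_probability(p_box1, p_box2, p_rel):
    """P(obj1 rel obj2) = P(box1 is obj1) * P(box2 is obj2) * P(relationship | pair),
    where p_rel is the XGBoost output or, for rare triplets, the training prior."""
    return p_box1 * p_box2 * p_rel
</code></pre>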
Google AI Open Images - Visual Relationship Track
8th place solution [0.17 Private LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI Open Images - Visual Relationship Track <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Software</h1> <p>Windows 10 + Python 3.5 + Keras 2.2 + Keras-RetinaNet [0.4.1]: <a href="https://github.com/fizyr/keras-retinanet">https://github.com/fizyr/keras-retinanet</a></p> <h1>Main approach</h1> <p>In this task I relied heavily on my results and predictions from the Google AI Open Images - Object Detection Track (ODT), simply because the "ITEM" classes are contained in the predictions from that task: namely, 62 classes out of 500. It seems logical that the more accurately you know the positions of the objects you are looking for, the more accurately you can determine the type of interaction between them. Later I even trained a separate model for these 62 object classes using the training boxes from ODT.</p> <p>Statement of the task: an image contains two objects from the set of 62 classes, together with one of 10 interaction types: RELATIONSHIP = ['at', 'on', 'holds', 'plays', 'interacts_with', 'wears', 'inside_of', 'under', 'hits', 'is']. The type 'is' is special: it is defined for a single object and describes the type of material, so I call it an attribute of the object.</p> <p>In general, I split the problem into two independent parts: the Item + Attribute search and the Item 1 + Item 2 + Relationship search.</p> <h1>Part 1 (Item + Attribute)</h1> <p>I trained two RetinaNet models. The first one tries to find all possible Item + Attribute pairs present in the training set. In total there were 42 such pairs, i.e. each class is, for example, "Plastic chair", "Wooden chair", "Transparent cup", etc. The main problem is that some pairs have very few samples; for example, only 5 images for a Wooden Mug. This model alone gives ~0.05550 on the LB.</p> <p>The second model looks only for the attribute types from the list LABELS_ATTR = ['(made of) Leather', '(made of) Textile', 'Plastic', 'Transparent', 'Wooden']. I wasn't able to use this model to increase the score, but it could probably help for some rare cases missed by the first model.</p> <h1>Part 2 (Item 1 + Item 2 + Relationship)</h1> <p>Here I trained a RetinaNet model for 9 classes that looks for the presence of some kind of relationship:</p> <p>RELATIONSHIP = ['at', 'on', 'holds', 'plays', 'interacts_with', 'wears', 'inside_of', 'under', 'hits']</p> <p>The model tries to predict the minimal rectangle containing two items that form some kind of relationship.</p> <p>For the prediction of Item1 and Item2 I took the best model from the ODT task. To find the answer itself, I invented a heuristic. All found boxes for Item1, Item2 and the relation were cut off at a certain threshold. Then an exhaustive search computed the IoU between the boxes; if the relation box intersected both Item1 and Item2, and these items formed one of the allowed pairs, the answer was added with confidence_score = (w1 * Conf_Item1 + w2 * Conf_Item2 + w3 * IOU1 + w4 * IOU2 + w5 * Conf_Relation) / (w1 + w2 + w3 + w4 + w5). The values for wi were selected with scipy.optimize.minimize (see the sketch at the end of this post).</p> <p>Then a modified NMS algorithm was run on the resulting boxes, operating on two boxes at once.
This model (Item 1 + Item 2 + Relationship) on its own gave a score of 0.15330 on the LB.</p> <h1>Failed experiments</h1> <p>In addition to the heuristic, I also prepared a LightGBM model with an AUC metric that tried to predict whether the predicted rectangles are valid or not, based on the labels, the coordinates of the rectangles, their areas, etc. In this way I tried to automatically cut off impossible cases, for example when the first object of an 'under' relationship is above the second. On validation this model was indeed better than my heuristic, but on the LB it was for some reason worse. I think this is due either to the small number of images in the validation set (just over 5K) or to some error in my code. The best score was ~0.12 on the LB.</p> <p>This is definitely the place to dig into second-level models. Ideally, split the whole training set with KFold and use it to generate a variety of OOF features, for example rectangles from different object detectors, etc.</p> <p>In general, validation in this task was always much more optimistic for me, but it pointed in the right direction (improving validation improved the LB). For example, validation for my best submission: Rel mAP 0.263953, Rel Recall 0.619921, Phrases mAP 0.342934, Pred score 0.366739; LB: 0.20109 - almost two times lower.</p> <h1>Problems of the dataset</h1> <ul> <li>A relatively small amount of data in the training set</li> <li>The absence of a provided validation set, and its difference from the validation set of the ODT task</li> <li>The “under” class is barely represented in the training set (only 34 samples)</li> </ul>
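<p>A minimal sketch of the weighted confidence formula above and of tuning the weights wi with scipy. The <code>scorer</code> callable (mapping candidate scores to the validation metric) and the Nelder-Mead choice are assumptions for illustration, not the original code.</p> <pre><code>import numpy as np
from scipy.optimize import minimize

def combo_confidence(w, conf_item1, conf_item2, iou1, iou2, conf_rel):
    """(w1*Conf_Item1 + w2*Conf_Item2 + w3*IOU1 + w4*IOU2 + w5*Conf_Relation) / sum(w)."""
    parts = np.array([conf_item1, conf_item2, iou1, iou2, conf_rel])
    return float(np.dot(w, parts) / (np.sum(w) + 1e-9))

def fit_weights(candidates, scorer):
    """Tune w1..w5 on validation.

    candidates: list of (conf_item1, conf_item2, iou1, iou2, conf_rel) tuples
    scorer:     callable mapping the list of combined scores to a metric to maximise
    """
    def neg_metric(w):
        scores = [combo_confidence(w, *c) for c in candidates]
        return -scorer(scores)
    result = minimize(neg_metric, x0=np.ones(5), method="Nelder-Mead")
    return result.x
</code></pre>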
Google AI Open Images - Visual Relationship Track
brief summary of 2nd place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI Open Images - Visual Relationship Track <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Nazim,</p> <p>I'm not sure whether I'll publish my code yet. Anyway, I'll let you know here if I publish it.</p> <p>Hi Tim H,</p> <p>I added Embeded categorical features and numerical features on the top of CNN as dens layer. I used following code for this.</p> <pre>BASE_MODEL='InceptionResNetV2' categorical = ['name_1', 'name_2'] numerical = ['XCenter1','YCenter1','XCenter2','YCenter2','Size1','Size2','Aspect1','Aspect2','XCenterDiff','YCenterDiff','CenterDiff','XCenter','YCenter','IOU'] def mk_model(): if BASE_MODEL=='VGG16': base_model = VGG16(include_top=False, weights='imagenet', input_tensor=Input(shape=(224,224,3))) elif BASE_MODEL=='InceptionV3': base_model=InceptionV3(include_top=False, weights='imagenet', input_tensor=Input(shape=(224,224,3))) elif BASE_MODEL=='Xception': base_model=Xception(include_top=False, weights='imagenet', input_tensor=Input(shape=(224,224,3))) elif BASE_MODEL=='InceptionResNetV2': base_model=InceptionResNetV2(include_top=False, weights='imagenet', input_tensor=Input(shape=(224,224,3))) input_list = [] x1=base_model.output x1=GlobalAveragePooling2D()(x1) input_list.append(base_model.input) emb_list = [] for cat in categorical: inpt = Input(shape=[1], name = cat) input_list.append(inpt) embd = Embedding(len(bb_values), len(bb_values))(inpt) emb_list.append(embd) x2 = concatenate(emb_list) x2 = SpatialDropout1D(0.5)(x2) x2 = Flatten()(x2) x2 = Dense(512,activation='relu')(x2) x2 = BatchNormalization()(x2) inpt = Input((len(numerical),),name='numerical') input_list.append(inpt) x3 = Dense(512,activation='relu')(inpt) x3 = Dropout(0.5)(x3) x3 = BatchNormalization()(x3) inpt = Input((len(bb_values)*2,),name='confidence') input_list.append(inpt) x4 = Dense(512,activation='relu')(inpt) x4 = Dropout(0.5)(x4) x4 = BatchNormalization()(x4) x = concatenate([x1, x2, x3, x4]) x=Dense(1024,activation='relu')(x) x = Dropout(0.2)(x) x = BatchNormalization()(x) x=Dense(1024,activation='relu')(x) x = BatchNormalization()(x) prediction=Dense(n_classes,activation='sigmoid')(x) model=Model(inputs=input_list,outputs=prediction) for layer in base_model.layers: layer.trainable=False return model, base_model </pre> <p>Congrats to all the winners! and I'd like to thank Google and Kaggle for this interesting competition. 
</p> <p>This is My brief summary.</p> <h1>Model 1: object detection (yolo)</h1> <p>I used yolo for object detection with following modification.</p> <ul> <li>Removed confidence term from loss function</li> <li>Added Class weight</li> <li>Class masking to forces only on labeled classes <br> But, did not mask the area where BB exists (BBs does not exist on same area) <br> But, masked for child class (BBs can exist on same area for child classes) </li> </ul> <h1>Model 2: visual relationship (InceptionResNetV 2)</h1> <p>Relation data(challenge-2018-train-vrd.csv) is mapped to all combinations of human labeled BBs(challenge-2018-train-vrd-bbox.csv) and the data is used as training data of model2.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/379292/10248/example.png" alt="example"></p> <h3>2-1: relation 'is'</h3> <p>I made training data by mapping the ground truth BBOX(challenge-2018-train-vrd-bbox.csv) to ground truth relational data(challenge-2018-train-vrd.csv), giving a target. </p> <ul> <li>Objective: Material (wooden, plastic, ..., None). </li> <li>Features: Cropped image, BBOX class, BB position / size etc.</li> </ul> <p>Example: </p> <p>bbox data:</p> <pre> image1,bbox1,Man image1,bbox2,Guiter image1,bbox3,Chair </pre> <p>Relationship data:</p> <pre> image1, bbox2, bbox2, Guitar, Wooden, is image1, bbox3, bbox3, Chair, Wooden, is image1, bbox1, bbox2, Man, Guitar, hold </pre> <p>then, training data would be</p> <pre> image1,bbox1,bbox1, Man,None,is image1,bbox2,bbox2, Guiter,Wooden,is image1,bbox3,bbox3, Chair ,Wooden,is </pre> <p>  here, target is None if the data is not in Relationship data.</p> <h3>2-2: Triplet Relationships</h3> <p>First, I created all pairs of BBOX in the same image with some filter. Then, I made training data by mapping the pairs to ground truth relational data, giving a target.</p> <ul> <li>Objective: Relationship (at, on, ..., None). </li> <li>Features: Cropped image including two BBOX with box line, LabelName1, LabelName2, XCenter1, YCenter1, XCenter2, YCenter2, Size1, Size2, Aspect1, Aspect2, XCenterDiff, YCenterDiff, CenterDiff, XCenter, YCenter, IOU</li> </ul> <p>Example: bbox data :</p> <pre> image1,bbox1,Man image1,bbox2,Guiter image1,bbox3,Chair </pre> <p>Relationship data:</p> <pre> image1,bbox1,bbox2,Man,Guiter,hold </pre> <p>Then training data would be </p> <pre> image1,bbox1,bbox2,Man,Guiter,hold image1,bbox1,bbox3,Man,Chair,None image1,bbox2,bbox1,Guiter,Man,None image1,bbox2,bbox3,Guiter,Chair,None image1,bbox3,bbox1,Chair,Man,None image1,bbox3,bbox2,Chair,Guiter,None </pre> <p>  here, target is None if the data is not in Relationship data.</p> <h1>Model 3: Score Prediction (Light GBM)</h1> <p>Relation data(challenge-2018-train-vrd.csv) is mapped to all combinations of predicted BBs(output of model1), and the data is used as training data of model3.</p> <ul> <li>Features: model1 output(BBOX classes, BBOX scores, BB positions / sizes etc.), and model2 output(Relationship and its probability)</li> <li>Objective: whether the prediction of model2 is true or not</li> </ul> <p>Model2 and model3 can be merged to one NN model.</p> <h1>About yolo object function</h1> <p>The data set of this competition has the following three characteristics.</p> <h2>(1), every images are not checked for all classes.</h2> <p>For example, there are cases where cats are not checked even if there is a cat in the image. 
This causes a false penalty if the model detects the cat during training.</p> <h2>(2), each object class has a parent-child relationship.</h2> <p>For example, the cat class is a child of the Animal class, so a cat BBOX is also an Animal BBOX.</p> <h2>(3), the 500 classes are not balanced.</h2> <p>To handle these better, I first deleted the confidence term from YOLO's objective function. By deleting this term, the class probability becomes responsible for confidence. Once the class probability also plays the confidence role, it becomes possible to mask unchecked classes. By masking the cat class in the example above, the penalty when the model detects a cat is eliminated.</p> <p>However, it is unlikely that boxes of different classes will occupy exactly the same area (same place, same size and same aspect). So I did not mask classes in areas where a BBOX already exists.</p> <p>But there is an exception to this: a BBOX of a child class can exist in the same area (an Animal can also be a cat). So I masked the child classes of the corresponding BBOX.</p> <p>To cope with the imbalance in (3), I removed some images. But sampling images is not enough to balance the classes, so I introduced class weights.</p> <p>Thank you radek,</p> <p>As you mentioned, I used class weights to balance the infrequent classes.</p> <p>I expected the class probability to take over the confidence role once the confidence term is removed from the loss function. This per-class confidence made it possible to add class masking, class weights, etc. to yolo.</p> <p>Thank you, Ahmet!</p> <p>Hi Kim,</p> <blockquote> <p>----EDIT ah, wait. I think I get it now. X1 to X4 are computed features from the layers using inputs like images, labels and bounding box annotations.</p> </blockquote> <p>Yes, you understand it correctly.</p> <blockquote> <p>How do you merge Model2.1 , 2.2 and 3?</p> </blockquote> <p>Model3 is made for Model2.1 and Model2.2 respectively.</p> <pre><code>Model2.1 -&gt; Model3.1 -&gt; prediction for relation 'is'
Model2.2 -&gt; Model3.2 -&gt; prediction for Triplet Relationships
</code></pre> <p>Then I just combined the predictions into one submission.</p>
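<p>A small pandas sketch of how the training data for model 2-2 (triplet relationships) can be built by mapping the relationship annotations onto all ordered pairs of human-labelled boxes, with 'None' as the target for unannotated pairs, as in the Man/Guiter/Chair example above. The column names, including the BoxID columns, are illustrative assumptions rather than the exact challenge csv schema.</p> <pre><code>import itertools
import pandas as pd

def build_pair_training_data(bboxes, relations):
    """bboxes:    DataFrame with ImageID, BoxID, LabelName (vrd-bbox file)
    relations: DataFrame with ImageID, BoxID1, BoxID2, RelationshipLabel (vrd file)"""
    rel_lookup = {
        (r.ImageID, r.BoxID1, r.BoxID2): r.RelationshipLabel
        for r in relations.itertuples()
    }
    rows = []
    for image_id, group in bboxes.groupby("ImageID"):
        boxes = list(group.itertuples())
        # every ordered pair of boxes in the image becomes one training sample
        for b1, b2 in itertools.permutations(boxes, 2):
            target = rel_lookup.get((image_id, b1.BoxID, b2.BoxID), "None")
            rows.append((image_id, b1.BoxID, b2.BoxID, b1.LabelName, b2.LabelName, target))
    return pd.DataFrame(rows, columns=["ImageID", "BoxID1", "BoxID2",
                                       "LabelName1", "LabelName2", "Target"])
</code></pre>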
Google AI4Code – Understand Code in Python Notebooks
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h4>Overview</h4> <p>First, I would like to thank Kaggle and Google and X for hosting this interesting competition. <br> In my approach, all the markdown and code cells in a notebook are tokenized and concatenated to form one long input sequence, with all the markdown cells going first, followed by all the code cells, separated by [SEP] tokens. There are also special separator tokens added for markdown (MKDS) and code (CDS) respectively. Once the input goes through the transformer, the tokens of each cell are aggregated by mean pooling and a linear layer that maps them to a single value. During training, the output of each cell is penalized with a pairwise ranking loss which considers all combinations of cells within a notebook (a small sketch of this loss follows below). For inference we can simply sort the cells by their outputs to recover the correct order. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F478989%2Ff7de356d26ff88204073ee34d459e7b5%2FUntitled%20Diagram.drawio.png?generation=1669608703048539&amp;alt=media" alt=""></p> <h4>Source Code</h4> <p><a href="https://github.com/GuanshuoXu/Google-AI4Code-Understand-Code-in-Python-Notebooks" target="_blank">https://github.com/GuanshuoXu/Google-AI4Code-Understand-Code-in-Python-Notebooks</a></p> <h4>What Worked</h4> <ul> <li>torch.nn.MarginRankingLoss works better than BCE and MSE loss</li> <li>deberta-v3-large is the best transformer in Huggingface for this task</li> <li>longer training sequences (2048) and even longer inference (4096 for validation and 4352 for submission) </li> <li>compressing markdown cells with <a href="https://gist.github.com/lorey/eb15a7f3338f959a78cc3661fbc255fe" target="_blank">https://gist.github.com/lorey/eb15a7f3338f959a78cc3661fbc255fe</a></li> <li>compressing code cells with <a href="https://www.kaggle.com/code/haithamaliryan/ai4code-extract-all-functions-variables-names/notebook" target="_blank">https://www.kaggle.com/code/haithamaliryan/ai4code-extract-all-functions-variables-names/notebook</a></li> </ul> <h4>What Didn't Work</h4> <ul> <li>seq-to-seq models like T5</li> <li>multilingual models</li> </ul>
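<p>A minimal PyTorch sketch of the pairwise ranking loss described above, applied to the per-cell outputs of one notebook with torch.nn.MarginRankingLoss; the margin value is an assumption.</p> <pre><code>import torch
import torch.nn as nn

def pairwise_rank_loss(cell_scores, cell_ranks, margin=0.1):
    """cell_scores: (n_cells,) model outputs, one scalar per cell
    cell_ranks:  (n_cells,) ground-truth positions in the notebook
    For every pair of cells, ask the model to give the later cell the larger score;
    sorting the scores at inference then recovers the order."""
    i, j = torch.triu_indices(len(cell_scores), len(cell_scores), offset=1)
    target = (cell_ranks[i] &gt; cell_ranks[j]).float() * 2 - 1   # +1 if cell i comes later
    loss_fn = nn.MarginRankingLoss(margin=margin)
    return loss_fn(cell_scores[i], cell_scores[j], target)

# usage with dummy values
scores = torch.randn(5, requires_grad=True)
ranks = torch.tensor([2.0, 0.0, 4.0, 1.0, 3.0])
loss = pairwise_rank_loss(scores, ranks)
loss.backward()
</code></pre>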
Google AI4Code – Understand Code in Python Notebooks
11th Place Solution: Nested Transformers
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>BERT is just stacks of transformer encoder? For Cell Transformer we are using huggingface models only.</p> <p>emb_dim = 768<br> embedding of 1 token (output of huggingface model): emb_dim<br> embedding of max_len tokens: max_len * emb_dim<br> after concatenate with the code feature embedding: (max_len+1) * emb_dim<br> This happens on every cell, for all cells: n_cells * (max_len+1) * emb_dim<br> After the attention aggregation (pooler): n_cells * emb_dim</p> <p>Yes, we used something similar. To be precise, it is equation (3) of this <a href="https://aclanthology.org/2020.emnlp-main.511.pdf" target="_blank">paper</a>.</p> <ol> <li>We trained the model from end to end, hence they share the same learning rate and optimizer state.</li> <li>Yes, but the l1loss is weighted by the number of markdown cells in the notebook.</li> <li>We did not have the time to quantify the effect of each and every component, the most significant one should be the pairwise head which gives a boost around 0.02-0.03. </li> </ol> <p>impressive 😹</p> <p>Forgot to mention that besides the cell features, we also add 2 tokens (cell_type and separator tokens) to the text of each cell, either <code>code &lt;sep&gt; (source)</code> or <code>mark &lt;sep&gt; (source)</code>.</p> <p>I didn't try simpler aggregation, but I agreed that it is interesting to know the score difference. In fact this kind of analysis to any design choice is interesting, hopefully I will have the bandwidth to try them in the future :)</p> <p>To save training time, we trained on shorter notebooks first (&lt;=126 cells), then only further finetune on longer notebooks, it depends on the model size and VRAM to decide whether to include notebooks with certain number of cells for training.</p> <p>We do not have enough submissions to test every single model after the training is done. In our case, CodeBERT and DistilBERT have similar performance, the two debertas slightly underperformed them, but to be fair, we trained deberta-base and deberta-large with smaller number of epochs (they are much larger and require longer training time per epoch).</p> <p>Details are added. 
Thanks for reading!</p> <h1>Overview</h1> <p>We called our network architecture "Nested Transformers", since it contains two levels of transformers:</p> <ul> <li>Cell transformer: A transformer encoder to encode cell tokens, this can be a typical transformer for NLP.</li> <li>Notebook transformer: A transformer encoder with cell*cell self-attention to learn the interaction among cells.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2202881%2Faad7b5931ba26287e467c535a69d595a%2Fnested_tfm.png?generation=1660286753619476&amp;alt=media" alt="Nested Transformers"></li> </ul> <p>Source codes: <a href="https://github.com/ShinSiangChoong/kaggle_ai4code_nested_transformers" target="_blank">https://github.com/ShinSiangChoong/kaggle_ai4code_nested_transformers</a></p> <h1>What Worked</h1> <h4>Modified Format of the Input to Cell Transformer</h4> <ul> <li>The input is in the following format: <code>&lt;code&gt;&lt;sep&gt;&lt;source_tokens&gt;</code> OR <code>&lt;mark&gt;&lt;sep&gt;&lt;source_tokens&gt;</code></li> <li>Thanks <a href="https://www.kaggle.com/cabbage972" target="_blank">@cabbage972</a> for verifying the usefulness of this idea. Please refer to this <a href="https://www.kaggle.com/competitions/AI4Code/discussion/343680#1907307" target="_blank">comment</a>. </li> </ul> <h4>Cell Features:</h4> <ul> <li>The code cell order is known, while the markdown order is unknown, hence a straightforward positional embedding scheme in the notebook transformer won't work. The cell features aim to inform the model about the known code cell positions/order.</li> <li>We use 2 features for each cell, i.e. cell type (code -&gt; 1, markdown -&gt; 0) and percentile rank among the code cells: If the cell is a markdown cell, this feature = 0</li> <li>The 2 features are "expanded" by FC layers, such that they are represented as an embedding that has the same dimension with that of the cell token embeddings. </li> <li>These cell feature embeddings are then concatenated with the token embeddings and passed into the Aggregation Attention Layer together.</li> </ul> <h4>Aggregation Attention</h4> <ul> <li>We believe different tokens (and the cell features) have different importance to represent the cell. An aggregation attention layer is used to learn the weights of each token embedding (and the cell feature embedding). </li> <li>The weights are normalized by a Softmax layer, such that the final cell embedding is a weighted average of its token and feature embeddings.</li> <li>Thanks <a href="https://www.kaggle.com/cabbage972" target="_blank">@cabbage972</a> for verifying the usefulness of this idea. Please refer to this <a href="https://www.kaggle.com/competitions/AI4Code/discussion/343680#1909812" target="_blank">comment</a>. </li> </ul> <h4>Double Head Architecture</h4> <ul> <li>Pointwise Head: output the percentile ranks of each cell. L1 Loss is applied.</li> <li>Pairwise Head: For each pair of cells A &amp; B, predict whether cell B is the next cell of cell A. Cross-entropy loss is applied.</li> <li>We only use the pointwise head for inference, i.e. the cells are ranked by the predicted percentile rank. </li> <li>The pairwise head is not used during inference, however, in our experiment, inclusion of the pairwise head helps in boosting the validation score by around 0.03.</li> </ul> <h4>Code Order Correction</h4> <ul> <li>Since our model predicts a percentile rank for each cell (including code cells), the code cell order (which is given) may be violated. 
</li> <li>If this happens, we swap the code cells based on the given order.</li> <li>This idea only marginally boosts the score. The model actually learns the code order pretty well given the cell features.</li> </ul> <h1>What Didn't Work</h1> <ul> <li>Make use of the pairwise head prediction in a greedy manner while fulfilling the code order constraint, i.e. keep determining the next cell by selecting the cell with the highest prediction score.</li> <li>Make use of the pairwise head prediction as a distance matrix, then use LKH to solve a Sequential Ordering Problem (TSP with precedence constraints)</li> <li>Train 2 cell transformer backbones together end to end, rely on the Aggregation Attention to aggregate the token embeddings from the 2 backbones.</li> </ul> <h1>Additional Context</h1> <ul> <li>The final model is an ensemble of 4 different cell transformer backbones, i.e. CodeBERT, Deberta-large, Deberta-base, and DistilBERT. </li> <li>For the notebook transformer, we used 6 transformer encoder layers here, each with 8 self-attention heads. This layer "adjusts" the embedding of each cell based on the embeddings of other cells using the self-attention mechanism.</li> </ul> <h1>Acknowledgements</h1> <p>We would like to thank the competition host(s) and Kaggle for organizing this interesting competition.</p> <h1>Team Members</h1> <p><a href="https://www.kaggle.com/css919" target="_blank">@css919</a> <br> <a href="https://www.kaggle.com/erniechiew" target="_blank">@erniechiew</a> <br> <a href="https://www.kaggle.com/hexomiter" target="_blank">@hexomiter</a></p> <p>depending on the backbone and notebook length, we use batch size within [1:4]</p> <p>Updated the post with the link to the Github repository.</p> <p>I'm sorry, I forgot to add back certain files after renaming. Have just pushed a new tested version.</p>
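<p>A minimal PyTorch sketch of the Aggregation Attention layer described above: a learned score per token (and per cell-feature embedding), softmax-normalized, then a weighted average to get one embedding per cell. Dimensions and the padding-mask handling are assumptions for illustration.</p> <pre><code>import torch
import torch.nn as nn

class AggregationAttention(nn.Module):
    """Weighted average over token embeddings (plus the cell-feature embedding)."""

    def __init__(self, emb_dim=768):
        super().__init__()
        self.score = nn.Linear(emb_dim, 1)

    def forward(self, embeddings, mask=None):
        # embeddings: (n_cells, seq_len, emb_dim); mask: (n_cells, seq_len), 1 = real token
        weights = self.score(embeddings).squeeze(-1)           # (n_cells, seq_len)
        if mask is not None:
            weights = weights.masked_fill(mask == 0, -1e9)     # ignore padding positions
        weights = torch.softmax(weights, dim=-1)
        # weighted sum over the sequence dimension gives one embedding per cell
        return torch.einsum("bse,bs-&gt;be", embeddings, weights)
</code></pre>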
Google AI4Code – Understand Code in Python Notebooks
15th Segment Ranking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Training one epoch gets the score above 0.87; training more epochs (like 10) gets it to ~0.89-0.9 on both CV and LB.</p> <p>Adding fold 0 and the external dataset to the training set and training for 10 epochs achieves ~0.91 on the LB.</p> <p><code>train_orders.csv</code> has the right order.</p> <p>Yes, it's not used at inference time.</p> <p>Instead of pre-training with language modeling, I train the language modeling along with the task.</p> <p>The LM head boosts CV by ~0.015 and the padding method boosts CV by ~0.005.</p> <h1>Segment Ranking (9th on LB)</h1> <p>Thanks to Kaggle for hosting this competition, and to the community for all the sharing. It's my first NLP competition, so there is not much NLP-specific material in my solution, just task-related tricks.</p> <p>Source code: <a href="https://github.com/louis-she/ai4code" target="_blank">https://github.com/louis-she/ai4code</a></p> <p>Inference Notebook: <a href="https://www.kaggle.com/code/snaker/new-rank/notebook" target="_blank">https://www.kaggle.com/code/snaker/new-rank/notebook</a></p> <h2>TL;DR</h2> <p>One deberta-v3-small model is enough: a ranking method over subsequences (segments).</p> <p>My solution only uses the ranking method. Given a markdown cell and a bunch of sorted code cells, predict the position of the markdown cell. The main difference here is that I do not use the whole notebook to rank a markdown cell, but just a part of it, which can be called a <code>segment</code>.</p> <p>For instance, given a notebook of 40 code cells, we can divide it into 4 sub-segments, each segment containing 10 code cells. Then we rank each markdown cell of the whole notebook within the 10 code cells of a segment.</p> <p>An interesting way to think about this method: if we divide the 40 code cells into 40 segments, each containing only 1 code cell, then the method is the same as the pairwise approach; if we do not divide the notebook at all (or divide it into 1 segment), then it is the same as full ranking.</p> <h2>Segment Ranking</h2> <p>The full ranking method, which is also open sourced <a href="https://www.kaggle.com/competitions/AI4Code/discussion/326970" target="_blank">here</a>, predicts a markdown cell's position among all the code cells. The sequence fed to the model usually looks like this: markdown_cell + code_cell + code_cell … code_cell, but many of these sequences are way too long for transformer models (even for Longformer). One simple way to handle the long-sequence issue is to cut off the length of markdown_cell and code_cell so that the assembled sequence length is acceptable. But the more we cut off, the more information we lose, so how can we keep as much information as possible and still get a reasonable sequence length?</p> <p>To address this balance between global information and sequence length, I came up with the Segment Ranking method.</p> <p>A segment is part of a notebook's sorted code cells.
If we use M for the number of code cells in a notebook and N for the number of code cells in a segment, then the notebook can be divided into C segments, where C equals <code>math.ceil(M / N)</code>.</p> <p>Now, since we have multiple segments per notebook and a markdown cell can only belong to one of them, we first have to decide whether a markdown cell belongs to a segment, and then do the ranking within that segment. So the model has two heads, one for classification and one for ranking. The following figure demonstrates the details.</p> <p><img src="https://raw.githubusercontent.com/louis-she/ai4code/master/assets/segment-ranking.png" alt="segment-ranking"></p> <p>The number of code cells per segment (N) is a very important hyperparameter. A large N speeds up training and inference; a small N brings in more context information. I ended up choosing N = 16 and a sequence max length of 384.</p> <h2>Other important task-related tricks</h2> <p><strong>Training with language modeling</strong></p> <p>There is actually a third head in my model, used for language modeling. The input for that head is &lt;previous_cell_encode&gt; + &lt;anchor_cell_encode&gt; + &lt;next_cell_encode&gt;; positive means the three cells are consecutive, negative means they are not (negatives are just randomly picked from the notebook, excluding the true previous and next cells).</p> <p><strong>Padding is a waste</strong></p> <p>I try to keep as much information as possible while reducing tokens. Given the following 5 cells, how can we cut them down to a total length of 200? Instead of cutting every cell at a fixed length, I used the method shown in the figure below. Or have a look at the <a href="https://github.com/louis-she/ai4code/blob/master/tests/test_utils.py#L6" target="_blank">test case</a>.</p> <p><img src="https://raw.githubusercontent.com/louis-she/ai4code/master/assets/sequence-ensembling.png" alt="sequence-ensembling"></p> <p>Thanks for reading.</p> <p>I keep thinking about how to bring in more global context information. The model can only handle sequences of 512 or 1024 tokens, but the number of tokens in one notebook is way larger than that. So splitting a notebook into smaller segments is the first and most straightforward way to handle this issue.</p> <p>If the real order of a notebook is <code>c1 c2 c3 c4 c5 m1 c6 c7 c8 c9</code> and we divide it into 3 segments:</p> <p>segment1: m1 c1 c2 c3, cls label 0 (m1 not in the segment), ranking label: skipped<br> segment2: m1 c4 c5 c6, cls label 1 (m1 is in the segment), ranking label: 2.5<br> segment3: m1 c7 c8 c9, cls label 0, ranking label: skipped</p> <p>I'm changing the title back… I survived the 2nd rerun.</p>
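<p>A small Python sketch of the segment construction and of the per-segment targets, matching the c1…c9 / m1 example above (segment 2 gets cls label 1 and ranking label 2.5). The handling of markdown cells that fall before the first or after the last code cell is omitted here and would need the author's boundary rules.</p> <pre><code>import math

def make_segments(sorted_code_cells, n=16):
    """Split a notebook's M ordered code cells into C = ceil(M / N) segments."""
    return [sorted_code_cells[i:i + n] for i in range(0, len(sorted_code_cells), n)]

def segment_targets(md_rank, seg_code_ranks):
    """Targets for one (markdown cell, segment) pair.

    md_rank:        true position of the markdown cell in the notebook order
    seg_code_ranks: true positions of this segment's code cells (sorted)
    Returns (cls_label, rank_label); the ranking loss is skipped when cls_label is 0."""
    if not (seg_code_ranks[0] &lt; md_rank &lt; seg_code_ranks[-1]):
        return 0, None
    local_rank = sum(r &lt; md_rank for r in seg_code_ranks) + 0.5
    return 1, local_rank

# the example above: code cells at positions 1..5 and 7..10, markdown m1 at position 6
code_positions = [1, 2, 3, 4, 5, 7, 8, 9, 10]
segments = make_segments(code_positions, n=3)
print([segment_targets(6, seg) for seg in segments])  # [(0, None), (1, 2.5), (0, None)]
</code></pre>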
Google AI4Code – Understand Code in Python Notebooks
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>1) It makes text shorter, so we can handle long text. Otherwise 5120 is not enough.<br> 2) Yes. I only sample 30000 notebooks for fast experiment at the early stage. </p> <p>I tested it before. So it's fine.</p> <p><strong>Overview</strong></p> <p>I used single list-wise deberta-v3-large trained on all competition data. Input text is cells separated with [SEP] and [CLS] tokens.</p> <p><strong>Code</strong></p> <ul> <li>training code: <a href="https://github.com/hydantess/2022-Google-AI4Code-1st-Place-Solution" target="_blank">https://github.com/hydantess/2022-Google-AI4Code-1st-Place-Solution</a></li> <li>inference notebook: <a href="https://www.kaggle.com/code/hydantess/ai4code0729?scriptVersionId=103060429" target="_blank">https://www.kaggle.com/code/hydantess/ai4code0729?scriptVersionId=103060429</a></li> </ul> <p><strong>What Worked</strong></p> <ul> <li>MLM</li> <li>preprocessing<ul> <li>Reasoning and context: It makes text shorter, so we can handle longer text. </li></ul></li> <li>use deberta rather than codebert</li> <li>+lstm head</li> <li>bigger cell_cnt and seq_length<ul> <li>Reasoning and context: metric function cares more about long text. </li></ul></li> <li>+postprocessing: predict code and markdown both, then sort code by ground truth.<ul> <li>Reasoning and context: more reasonable than only predict markdown, and concat with code cells. </li></ul></li> </ul> <p><strong>What Didn’t Work</strong></p> <ul> <li>translation: facebook/mbart-large-50-many-to-one-mmt<ul> <li>Reasoning and context: most notebooks are in english. Or I did something wrong.</li></ul></li> <li>+code rank to dense layers<ul> <li>Reasoning and context: I should try to concat text with rank information. Model will learn better.</li></ul></li> <li>extract embedding, then train a transformer model<ul> <li>Reasoning and context: I should pre-train the extracting model first.</li></ul></li> <li>xlm-robert-large or mdeberta-v3-base<ul> <li>Reasoning and context: much worse than deberta-v3-large. so can not ensemble directly.</li></ul></li> </ul> <p><strong>Important Citations</strong></p> <ul> <li><a href="https://www.kaggle.com/code/ilyaryabov/fastttext-sorting-with-cosine-distance-algo" target="_blank">https://www.kaggle.com/code/ilyaryabov/fastttext-sorting-with-cosine-distance-algo</a> by ilyaryabov</li> <li><a href="https://www.kaggle.com/competitions/AI4Code/discussion/327626#1804178" target="_blank">https://www.kaggle.com/competitions/AI4Code/discussion/327626#1804178</a> by Nguyentuananh</li> </ul> <p><strong>Additional Context</strong></p> <ul> <li>loss is MAE.</li> <li>MLM on training data for 15 epochs, max_length is 1024. It takes 3 days. </li> <li>training for 10 epochs, max_length is 2048. It takes 7 days. I think set max_length to 5120 will greatly improve model performance but I have not enough A100~</li> <li>inference: max_length is 5120. It takes 6 hours.</li> <li>I used 1 * A100(80GB) for the last two weeks and 1 * 3090(24GB) at the early stage. 
</li> <li>Future work:<ul> <li>concat text with rank information: flag + rank_1 + cell_1 + flag + rank_2 + cell_2 + … + flag + rank_n + cell_n</li> <li>multiple languages</li> <li>extra training data</li> <li>Adversarial Weight Perturbations (AWP), but too time-consuming</li></ul></li> </ul> <p><strong>Thanks and Acknowledgements:</strong></p> <p>Thanks to the competition organizers for hosting such an amazing NLP competition. And thanks to the<br> great notebooks and discussions, I learned a lot. </p> <p><strong>Team Members:</strong></p> <ul> <li><a href="https://www.kaggle.com/hydantess" target="_blank">@hydantess</a></li> </ul> <p>List-wise. </p> <p>Did you try to use deberta rather than codebert? Deberta is much better in my experiments even without MLM. <br> Or maybe our approaches are too different.</p> <p>You are a genius too! I am very happy competing with you~</p> <p>Thanks. Your poly-encoder-like approach is also very impressive. I tried translating and using an xlm-roberta model but failed!</p> <p>I see, hhh.</p> <p>I didn't deal with the multi-language situation, so I just list it under future work.<br> I think Dmytro Poplavskiy's poly-encoder-like approach may help.</p> <p>Nice conclusion! Deberta-v3-base is slightly better than codebert in my experiments. And deberta-v3-large is very slow; I had no time to do an ensemble.</p> <p>Percentile ranks. </p> <ol> <li>Cells are truncated so that each length is smaller than 128.</li> <li>Each cell's length starts from 1; we for-loop over all cells with step = 4, then break when the total length is greater than a certain value (a short sketch of this follows below).</li> </ol> <p>Yes, you are right.</p>
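<p>Here is a minimal sketch of that round-robin truncation scheme as I read it. The budget, cap, and function name are illustrative assumptions, not the author's code:</p> <pre><code>def allocate_cell_lengths(cell_token_counts, budget=2048, step=4, start=1, cap=128):
    # every cell starts with `start` tokens; keep looping over all cells adding `step`
    # tokens to each, and stop once the total budget is reached or all cells are covered
    lengths = [min(start, c) for c in cell_token_counts]
    while sum(lengths) &lt; budget:
        grew = False
        for i, c in enumerate(cell_token_counts):
            if lengths[i] &lt; min(c, cap):
                lengths[i] = min(lengths[i] + step, c, cap)
                grew = True
                if sum(lengths) &gt;= budget:
                    return lengths
        if not grew:  # every cell is already fully covered
            break
    return lengths

print(allocate_cell_lengths([300, 12, 45, 7], budget=120))
</code></pre>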
Google AI4Code – Understand Code in Python Notebooks
21st - A Bag of Words is All You Need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, Allohvk.</p> <p>If normal attention calculates its attention matrix as:</p> <p><code>A = matmul(query, key.T)</code></p> <p>I calculate mine as:</p> <p><code>A = matmul(query, key.T) + sum_i[ matmul( augment_query_i, augment_key_i.T ) * augment_matrix_i ]</code></p> <p>where in this case the augment matrices are the identity matrix and two ngram similarity matrices. </p> <p>This approach allows the transformer to make use of relationships between tokens that are not necessarily easy to represent with token embeddings alone. For example it could be very hard for a token to attend the token that is most similar to itself without attending to itself using normal attention. My approach allows the identity matrix to be subtracted from the attention matrix making this operation simple. </p> <p>I'm not sure if I'm the first to come up with this however.</p> <p>Thank you, congrats on 13th. </p> <p>I used colab notebooks for everything. Usually I get a v100 gpu.</p> <p>Thank you, it's not very readable, but here it is: <a href="https://www.kaggle.com/code/jacobyjaeger/a-bag-of-words-is-all-you-need/" target="_blank">https://www.kaggle.com/code/jacobyjaeger/a-bag-of-words-is-all-you-need/</a></p> <p>Thank you everyone for an interesting competition. </p> <p>My solution ignores sentence grammar entirely, embedding entire cells as single tokens with a sparse-dense-matmul between the tfidf-matrix and word/word-ngram embeddings. Using this method, it's possible to have global attention between every cell of each document. An epoch of training takes about 20 minutes.</p> <p>The rest of my model is very similar to BERT, except that the model has the option to add an eye matrix, or input character n-gram similarity matrix weighted attention to the attention matrix eg. to bias attention away from cells attending to themselves or to bias attention to cells that have similar character-ngram compositions.</p> <p>My highest scoring notebook doesn't use external data as I was running out of time towards the end of the competition, but given the short epoch time requirement I think my score likely could have been improved quite a bit by incorporating a large volume of external data.</p>
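<p>The augmented-attention formula above can be sketched directly in PyTorch. This is an illustrative toy under my own assumptions (shapes, names, and random inputs are mine, not the author's notebook, which is linked above); it only shows how fixed matrices such as the identity or a character-ngram similarity matrix get mixed into the attention scores:</p> <pre><code>import torch

def augmented_attention(q, k, v, aug_qs, aug_ks, aug_mats):
    # A = Q K^T + sum_i (Qa_i Ka_i^T) * M_i, where * is an elementwise product
    # with a fixed matrix M_i (e.g. identity, ngram similarity between cells)
    scores = q @ k.transpose(-2, -1)
    for aq, ak, mat in zip(aug_qs, aug_ks, aug_mats):
        scores = scores + (aq @ ak.transpose(-2, -1)) * mat
    return torch.softmax(scores, dim=-1) @ v

n_cells, d = 6, 32
q, k, v = (torch.randn(n_cells, d) for _ in range(3))
aug_qs = [torch.randn(n_cells, d), torch.randn(n_cells, d)]
aug_ks = [torch.randn(n_cells, d), torch.randn(n_cells, d)]
aug_mats = [torch.eye(n_cells), torch.rand(n_cells, n_cells)]  # identity + fake ngram similarity
print(augmented_attention(q, k, v, aug_qs, aug_ks, aug_mats).shape)
</code></pre>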
Google AI4Code – Understand Code in Python Notebooks
28th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Overview</h2> <p>Based on the baselines' ideas, we propose a two-phase inference algorithm to predict the exact position of the markdown cells.</p> <ul> <li>Phase 1: We first use rank-based and pairwise-based methods to predict the markdown cells' approximate position. Two rank-based models and one pairwise-based model are used. We use softmax with temperature for the ensemble.</li> <li>Phase 2: Once we obtain the approximate position of the markdown cells, we move to phase 2 to predict the exact position of the cells. We use a rank-based model (window size is 20, detecting around the position phase 1 predicted) and two pairwise-based models for prediction. Finally, another softmax is applied for the ensemble.</li> </ul> <h2>Code</h2> <p><a href="https://www.kaggle.com/code/tide525/ai4code-final-ensemble/notebook?scriptVersionId=102987081" target="_blank">https://www.kaggle.com/code/tide525/ai4code-final-ensemble/notebook?scriptVersionId=102987081</a></p> <h2>Solution Pipeline Visualization</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4373352%2F7478ef3bee26d1d7826468ae2760a8f5%2FWechatIMG2812.jpeg?generation=1669446970236752&amp;alt=media" alt=""></p> <h2>Model</h2> <h3>rank:</h3> <ul> <li>In the rank-based model, we estimate the relative position of a markdown cell by feeding the markdown cell and all the code cells sampled in the correct order. The model is trained on regression of the normalized rank of markdown cells corresponding to that of code cells. We align the markdowns with the model by inserting each markdown at the position of its predicted rank.</li> </ul> <h3>pairwise</h3> <ul> <li>The pairwise-based model predicts the similarity between a markdown cell and a code cell. We compute the similarity to a markdown cell for each code cell and insert the markdown cell next to the code cell with the highest similarity.</li> <li>For training the model, we adopt contrastive learning with in-batch negatives, which is used in DPR. Specifically, pairs of a markdown cell and a code cell next to each other are given as positive examples, and all the other combinations in a batch are considered negative examples. We use contrastive loss as an objective.</li> </ul> <h2>What Worked</h2> <ul> <li>Idea<ul> <li>Data augmentation</li> <li>The original baseline sample 20 code cells in total for more global information. If there are 400 cells in total, the baseline will sample cells whose idx are 0, 20, 40, … 400.</li> <li>We propose a data augmentation method, which collects cells {0, 20, 40, …}, {1, 21, 41, …}, and {2, 22, 42, …} separately, making the data triple.</li> <li>cv: 0.8436 -&gt; 0.8506</li></ul></li> <li>Idea<ul> <li>Two-phase prediction: from wide to narrow</li> <li>In some notebooks with many cells, it is hard to estimate accurate position. Then, we take a two-phase prediction. First, all cells are inputted and predict the approximate position of the markdown cells is. Second, we search for a neighborhood of the first-phase prediction. "Neighborhood" is determined based on experiments as a window size. 
We experiment from 5 to 40 and found that 20 is the best for the window size. The neighborhood searching model is independently trained. For this training, we split the train data into subsets; each subset has window-size cells.</li></ul></li> <li>Idea<ul> <li>Extra data(KGTorrent)</li> <li>We first train our model for one epoch using KGTorent and AI4Code data, then only use AI4Code data for 3 epochs.</li> <li>cv += 0.002-0.004</li></ul></li> </ul> <h2>What Did not Work</h2> <ul> <li>Idea<ul> <li>Changing input code length dynamically</li> <li>Each notebook has a different number of cells though the max input length is always 512. Then, we hypothesized that it would be better to change the input length of each cell based on the number of cells. For notebooks with many cells, the input length for each cell should be short; for those with a few cells, the input length should be longer. However, stable input length models perform better. One possible cause is that the model could not cope with dynamic changes in the input length.</li></ul></li> <li>Idea<ul> <li>Cubert</li> <li>Cubert (<a href="https://github.com/zhihu/cuBERT" target="_blank">https://github.com/zhihu/cuBERT</a>) has a longer max_length (2048) and larger size (BERT-large configuration), which are attractive to this task. However, it is too heavy for T4 GPU, so we abandoned this model.</li></ul></li> <li>Idea<ul> <li>Data preprocessing</li> <li>We tried (1) excluding all notebooks with over 200 cells (which are difficult to predict) cv: 0.8436 -&gt; 0.8406 (2) transforming markdown to pure text (by <a href="https://gist.github.com/lorey/eb15a7f3338f959a78cc3661fbc255fe" target="_blank">https://gist.github.com/lorey/eb15a7f3338f959a78cc3661fbc255fe</a>) cv: 0.8436 -&gt; 0.8431</li></ul></li> <li>Idea<ul> <li>Translating multilingual to English</li> <li>There are a lot of other languages in the dataset, such as Russian and Korean; we thought preprocessing all foreign languages into English could make a better dataset. However, statistics-based machine translation models perform poorly, and generation-based models take long in beam search. Moreover, the biggest problem is that it is challenging to deal with languages mixing. Therefore, we did not apply machine translation to the dataset. </li></ul></li> <li>Idea<ul> <li>Extra data(GitHub repositories)</li> <li>We collected Jupyter Notebooks from GitHub, but they are not high quality, leading to poor results.</li></ul></li> </ul> <h2>Additional Context</h2> <ul> <li>We used KGTorrent for an additional dataset. Its dataset has 248,761 Jupyter Notebooks written in Python. This dataset covers the period between November 2015 and October 2020. ( KGTorrent is available at <a href="https://zenodo.org/record/4468523#.Y39guXbP2Hs" target="_blank">https://zenodo.org/record/4468523#.Y39guXbP2Hs</a> ) Paper: <a href="https://arxiv.org/abs/2103.10558" target="_blank">https://arxiv.org/abs/2103.10558</a> Github: <a href="https://github.com/collab-uniba/KGTorrent" target="_blank">https://github.com/collab-uniba/KGTorrent</a></li> <li>Additional data were also collected from GitHub repositories using GitHub REST API. We first cloned GitHub repositories, including the Jupyter Notebook file, in order of GitHub stars. Then, we filtered out huge and short files and deduplicated files. The final dataset has about 40000 Jupyter Notebooks.</li> <li>These additional data were formatted into the contest data format. 
To make it identical to the original data, we assigned a unique 8-digit hexadecimal number to cell_id and a unique 14-digit hexadecimal number to id.</li> </ul> <h2>Important Citations</h2> <p>The following baselines help us a lot.</p> <ul> <li><a href="https://www.kaggle.com/code/suicaokhoailang/stronger-baseline-with-code-cells" target="_blank">https://www.kaggle.com/code/suicaokhoailang/stronger-baseline-with-code-cells</a></li> <li><a href="https://www.kaggle.com/code/yuanzhezhou/ai4code-pairwise-bertsmall-inference" target="_blank">https://www.kaggle.com/code/yuanzhezhou/ai4code-pairwise-bertsmall-inference</a></li> </ul> <h2>Thanks and Acknowledgements</h2> <p>Grateful to the Kaggle team &amp; competition host for this exciting NLP competition!<br> And thanks for the hard work of my teammates!</p> <h2>Team Members</h2> <p>This team represents Kawahara Lab, Waseda University. We focus on researching Japanese NLP. Please check our homepage if you are interested: <a href="https://nlp-waseda.jp/en/" target="_blank">https://nlp-waseda.jp/en/</a></p> <ul> <li><a href="https://www.kaggle.com/conan1024" target="_blank">@conan1024</a> </li> <li><a href="https://www.kaggle.com/tide525" target="_blank">@tide525</a></li> <li><a href="https://www.kaggle.com/hirofumishimizu" target="_blank">@hirofumishimizu</a></li> <li><a href="https://www.kaggle.com/mkmkmk1" target="_blank">@mkmkmk1</a> </li> <li><a href="https://www.kaggle.com/emurata" target="_blank">@emurata</a> </li> </ul>
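<p>The pairwise model's DPR-style objective ("contrastive learning with in-batch negatives") described above can be sketched as follows. This is a generic illustration under my own assumptions (cosine similarity with a temperature and cross-entropy over in-batch pairs), not the team's exact implementation:</p> <pre><code>import torch
import torch.nn.functional as F

def in_batch_negatives_loss(md_emb, code_emb, temperature=0.05):
    # row i of md_emb is a markdown cell, row i of code_emb is the code cell next to it
    # (positive pair); every other code cell in the batch serves as a negative
    md = F.normalize(md_emb, dim=-1)
    code = F.normalize(code_emb, dim=-1)
    logits = md @ code.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(md.size(0), device=md.device)
    return F.cross_entropy(logits, targets)

B, d = 8, 256
print(float(in_batch_negatives_loss(torch.randn(B, d), torch.randn(B, d))))
</code></pre>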
Google AI4Code – Understand Code in Python Notebooks
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I experimented with using both md before code (implemented it first) and md exactly at the code bin.</p> <p>During inference the md-before-code approach worked much better (I select the bin which minimizes the total sum of probabilities of the wrong prediction, as a reasonable proxy for the competition metric).</p> <p>When I added the softmax based "md at the code bin" output, I had a slightly better result, but it may be a side effect of larger loss values for notebooks with a larger number of code cells.</p> <p>Overall, this was the hard part: how to design the loss and how to batch/sample notebooks with such irregular notebook sizes. The larger notebooks have a much bigger impact on the final score, but I'd expect the large variation between notebook sizes to make optimization harder. It helped to sample larger notebooks more often (relative to the notebook size + constant) but still average over all outputs. When I tried to train only the decoder part, I also experimented with using the sum of all outputs and accumulating notebook gradients up to some total nb_md*nb_code threshold; this also worked. It also helped to add gradient clipping.</p> <p>I have not tried to run models without the encoders; they would not be able to model dependencies between different code and md cells. I tried to use a GRU instead of the code-branch transformers and the result was somewhat worse.</p> <p>It was the first time I dealt with an NLP task, so I did not really know about the common approaches (it also means I most likely missed some obvious improvements). I tried to build the model with a suitable information flow and all the necessary information available for prediction. I was not sure if an attention-like design would work well for ordering of md/code cells, but it seems to work. I tried to improve the architecture (like calculating the mean/max activations for cells before and after the current one, to keep the transformer capacity for more complex tasks, but the result was the same); I found the training hyperparameters had a bigger impact on the result. </p> <p>The solution is based on a bi-encoder or poly-encoder like approach, where every cell is individually processed by CodeBERT, with the code/md interaction modelled by a pair of parallel TransformerDecoderLayer stacks:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743064%2Fc42b70aafb899ed52cf2ef7ed554e4f7%2Fcode_bert.png?generation=1660273106105753&amp;alt=media" alt=""></p> <p>I either used the shared CodeBERT based encoder for both code and markdown cells, or CodeBERT for code and a multi-language BERT for markdown. 
The resulting activations are averaged over tokens.</p> <p>The code cell activations are passed to a 1D convolution with zero padding to convert the N code cell positions into N+1 positions between cells (bins to put an md cell into), so every point combines the activations of the cell before and after it.</p> <p>Positional encoding (a combination of absolute and relative encodings) is then added to the code activations.</p> <p>The next stage is a few layers (2 or 6) of transformer decoders; code cells use self-attention plus attention to md cells as memory, and the same for markdown: self-attention plus a memory query over the output of the code cells:</p> <pre><code>for step in range(self.num_decoder_layers):
    x_code = self.code_decoders[step](x_code, x_md)
    x_md = self.md_decoders[step](x_md, x_code)
</code></pre> <h1>Output and loss</h1> <p>Outputs of the code and markdown decoders are combined with matmul similarly to attention in a transformer,<br> with 3 outputs:</p> <ul> <li><p>(nb_code+1) * nb_markdown shape: is markdown cell i before code bin j</p></li> <li><p>(nb_code+1) * nb_markdown shape: is markdown cell i at code bin j</p></li> <li><p>nb_markdown * nb_markdown shape: is markdown cell i before another markdown cell j</p></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743064%2F863249a74f079b19943a7f6a46689e8e%2Fmodel_outputs.png?generation=1668982985055751&amp;alt=media" alt=""></p> <p>The first two outputs are used to put the md cell into the right bin between the code cells, and the last output is used to sort md cells within a single bin. BCE loss was used during training.</p> <h1>Post-processing</h1> <p>An MD cell is put into the right bin between code cells using the first prediction: the probability that MD cell i is before code bin j. The position is selected to minimize the sum of probabilities of the wrong placement over all bins:</p> <pre><code>md_cell_bin_num = np.argmin([
    -1 * md_after_code[md_idx, :i+1].sum() + md_after_code[md_idx, i+1:].sum()
    for i in range(nb_bins)
])
</code></pre> <p>This approach to selecting the markdown cell position worked much better than the simple "is markdown cell i at the code bin j" output + softmax/argmax. The intuition behind the loss and sorting approach was to have a proxy for the expected number of cell swaps used in the competition metric.</p> <p>The model was trained end to end, with the batch equal to a single notebook; I guess this makes the optimisation task harder due to the variable size of notebooks. Gradient clipping helped.</p> <p>The loss weighting and notebook sampling also had a significant impact on the score. Large notebooks have a larger impact on the final score (due to the large number of position inversions for wrong predictions) but the wrong prediction error is averaged over a large number of nb_md * nb_code predictions. 
To address this I sampled larger notebooks more often (most useful) and for some models used avg_loss + a * max_loss + b * sum_loss (with a very small a of 0.02; adding sum_loss helped for initial experiments but was disabled later).</p> <h1>Non-English notebook handling</h1> <p>To address multi-language notebooks, I used either the same shared codebert model for both code and md cells, or codebert for code and sentence-transformers/paraphrase-multilingual-mpnet-base-v2 for md.</p> <p>Example of the performance difference between approaches (from the local validation, on one of the folds):<br> shared codebert: All notebooks: 0.9113; English notebooks: 0.9164; Non-English notebooks: 0.8652<br> codebert+mpnet: All notebooks: 0.9088; English notebooks: 0.9117; Non-English notebooks: 0.8825</p> <p>The multilingual sentence embedding mpnet model is trained on language pairs and reduces the score gap between English and non-English notebooks, but it has a worse combined score (most notebooks are English).</p> <p>I initially planned to do a different ensemble based on the language, but later I found that ensembling the codebert2 and codebert+mpnet models produces a much bigger score jump, and decided to use the same large ensemble for all notebooks, regardless of the language.</p> <h1>Ensembling</h1> <p>To increase the ensemble size, I also trained separate encoder models (using per-cell predictions from the first-level transformers). Such encoder-only parts of the models were much faster to train (a single per-notebook transformer compared to multiple per-cell transformers, which allowed faster experiments with different architectures) and to predict (L1 per-cell model predictions are re-used).</p> <p>The ensemble averages the 3 outputs across 4 folds and two models, including two versions of the L2 encoders.<br> For ensembling I used simple weighted averaging of model predictions (weighting was not very important; I used higher weights for better-performing models but the result was almost the same as for an unweighted average).</p> <h1>External data</h1> <p>Models have been trained on the original dataset.<br> I tried to use the published external set of notebooks with acceptable licenses <a href="https://zenodo.org/record/6383115" target="_blank">https://zenodo.org/record/6383115</a>; it seems to improve results. The dataset was mentioned closer to the end of the challenge, so I did not have enough time to re-train the full models. I trained the additional encoders using the external dataset notebooks; the results of the individual models improved, but the improvement of the final ensemble seems to be quite small.</p> <p>Time permitting, I would re-train models using the external data.</p> <p>The dataset converted to the challenge format: <a href="https://www.kaggle.com/datasets/dmytropoplavskiy/the-dataset-of-properlylicensed-jupyter-notebooks" target="_blank">https://www.kaggle.com/datasets/dmytropoplavskiy/the-dataset-of-properlylicensed-jupyter-notebooks</a></p> <h1>What did not work or had no significant impact</h1> <ul> <li><p>T5 based models performed worse</p></li> <li><p>Different pre-processing of code and markdown cells had little impact, so I used them unprocessed.</p></li> <li><p>Different ways to combine per-cell model outputs performed in a similar way (using average, avg+max, output of the special code token).</p></li> <li><p>Using GRU/LSTM instead of a transformer for the encoder part, or a GRU for code with a transformer encoder for md/code. 
The performance was worse compared to the dual-branch transformer decoder.</p></li> </ul> <h1>Source code</h1> <p>Source code for training and inference: <a href="https://github.com/pdima/kaggle_ai4code_solution" target="_blank">https://github.com/pdima/kaggle_ai4code_solution</a></p> <p>Hi, probably to some extent at least, I was impressed with how much DETR simplified the usual detection pipeline, with anchors, NMS etc. I tried to use something similar for the Lift challenge but could not beat the simple CNN baseline. It's interesting to see how much the NLP and CV approaches have started to overlap.</p>
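<p>As a small illustration of the "N code cells to N+1 bins" step described above: a kernel-size-2 Conv1d with zero padding on both sides produces exactly one extra position, each combining the activations of the cell before and after that gap. This is my own minimal sketch (names and dimensions assumed), not the author's code:</p> <pre><code>import torch
import torch.nn as nn

class CodeCellsToBins(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # kernel_size=2 with padding=1 turns N positions into N + 1 positions
        self.conv = nn.Conv1d(dim, dim, kernel_size=2, padding=1)

    def forward(self, x_code):             # x_code: (batch, N, dim)
        x = x_code.transpose(1, 2)         # (batch, dim, N)
        bins = self.conv(x)                # (batch, dim, N + 1)
        return bins.transpose(1, 2)        # (batch, N + 1, dim)

m = CodeCellsToBins(dim=768)
print(m(torch.randn(1, 10, 768)).shape)    # torch.Size([1, 11, 768])
</code></pre>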
Google AI4Code – Understand Code in Python Notebooks
38th Place Solution - My learnings from AI4C
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h4>38th Place Solution</h4> <p>(from the: Google AI4Code – Understand Code in Python Notebooks: Predict the relationship between code and comments)</p> <h4>Overview:</h4> <ul> <li><strong>Model</strong>: Simple blend of point-wise (DeBERTa-V3-Small &amp; GraphCodeBERT-base….thanks Guo) &amp; <a href="https://www.kaggle.com/code/yuanzhezhou/ai4code-pairwise-bertsmall-training/notebook" target="_blank">pair-wise</a> ensemble just for diversity. Around 20% weightage to pair-wise.</li> <li><strong>Data</strong>: Use around 70% of data by careful selection of notebooks. No external data.</li> <li><strong>Compute</strong>: Free Kaggle &amp; Google Colab. Rented A5000-24GB intermittently during last couple of weeks).<br> Lightweight models due to limited compute and the fact that I was only working on weekends:</li> </ul> <h4>What worked:</h4> <ul> <li><p><strong>Core</strong>: The core solution is well outlined in the public <a href="https://www.kaggle.com/code/suicaokhoailang/stronger-baseline-with-code-cells" target="_blank">kernel </a>by Nguyen, the basic idea is to add a ‘context’ which helps the model rank the cells better.</p></li> <li><p><strong>Pre-Processing 1</strong>: Retaining the last few tokens in the markdown (i.e. last few words) helped in a small way. In other words, instead of simply truncating a lengthy markdown, we retain a few ending tokens. These last few words/tokens seem to be more intimately connected to the code cell that follows and helps the model make better connections.</p></li> <li><p><strong>Pre-Processing 2</strong>: Grabbing the class names &amp; function names from the code cells and moving them to the top of the code cell (to avoid truncation) also helps slightly. Moving ‘all comments in the code cell’ to the top of the code cell didn’t help although I have a strong feeling this would have helped larger models (mine was only 768 tokens)</p></li> <li><p><strong>Data Selection:</strong> Kernels selected should be relevant &amp; match with actual test data. This is important from 3 perspectives:<br> ** Fits into my compute budget<br> ** I can run more experiments<br> ** In absence of strong validation data, I felt this will play key role when actual results start rolling in<br> <strong><em><em>What helped:</em></em></strong><br> ** Kaggle increased memory size from 5-20 GB 3 years back. I could identify a bunch of old kernels by simply searching for markdowns on “You can write up to 5GB to"<br> ** Identify R kernels, TF1 kernels etc which use old libraries<br> ** Identify common ancestors/parent kernels etc<br> ** Many times, kernels are not forked, however they are copied manually. So identify first few cells (say 2 markdowns and 3 code cells) and remove duplicates<br> ** When sampling take very few samples from above categories (however don’t make it 0%). For remaining kernels, take a healthy sample. <br> ** In all, I could go up to around 70% samples and yet get a decent result. 
This 70%, I further split into training and validation<br> ** After tuning, I trained on validation data as well<br> ** In past 3 months, as new data kept rolling in, each run has improved my score slightly</p></li> <li><p><strong>Minor point during Ensembling</strong>: A very minor (4th decimal) increase during ensemble time by checking if the last cell is a markdown. Pair-wise model does not rank such markdowns correctly (due to the way triplet generation happens). So if the last cell is a markdown, don’t average between pair-wise &amp; point wise &amp; instead just take the point-wise data.</p></li> </ul> <h4>What didn’t work:</h4> <ul> <li>A variety of attention heads (at least 5) were tried &amp; none gave a significantly better result than default one</li> <li>Bi-LSTM on top didn’t help either</li> <li>In pair-wise, Arcface, Cosface was used to sharpen the embeddings but didn’t show great results. In fact I had trouble in reproducing results consistently for this model</li> <li>The biggest item in the “didn’t work” category would be my experiences with BigBird and Longformer outlined in the ‘Additional context’ section.</li> <li>2 costly mistakes: Wasted lot of token space as I didn’t catch this <a href="https://www.kaggle.com/competitions/AI4Code/discussion/343801#1896253" target="_blank">bug</a>. I felt this could have pushed up my score significantly. Secondly, last few days had to deal with memory errors. I am reasonably convinced that root cause could be <a href="https://www.kaggle.com/competitions/AI4Code/discussion/343471" target="_blank">this</a>.</li> </ul> <h4>Additional context (on a failed experiment):</h4> <p>Having a good ‘context’ is the key to getting a good score (with this model). With 768 tokens one can squeeze in maybe 20 tokens for each code cell assuming an average of 30 code cells per kernel with a few tokens remaining for the markdown cell in question. 20 tokens covers barely 1-2 lines of code even after preprocessing. Is there a way to increase this ‘context’ without increasing the model size? I thought that both BigBird and Longformer, being sparse attention models would be the right fit to apply to the problem at hand. In Longformer, tokens attend to tokens in the vicinity defined by a window that slides as we move across the text during self-attention. The receptive field can be further increased by applying dilation to the local window (basically the token simply skips certain tokens) thus allowing attention to reach farther tokens. In addition, we also have global tokens which are attended to by everybody. Similarly Bigbird has a combination of local (sliding window), global &amp; random attention. I felt AI4C domain was a classic case for usage of these models since the relationship between kernel cells can be captured beautifully given the constraints of limited token space:</p> <ul> <li>The sliding window will ensure to capture relation between 8-10 consecutive cells. These matter the most. Random tokens do the rest of the job.</li> <li>Global tokens could be identified that attend to all cells. The tokens in the markdown cell would all be global &amp; will attend to each and every code cell token.</li> <li>This way the ‘context’ becomes really really powerful. Relationship between the markdown in question and each and every code cell token can be established while program continuity aspect and relation between a code cell &amp; the cells above and below it can be captured by the sliding window. 
Random attention further brings in unanticipated connections across cell tokens.<br> However, it looks like both models perform well only at sequence lengths in excess of 4096 tokens and do not work for anything less than that. Some experiments and discussion with HF on this aspect are captured <a href="https://github.com/huggingface/transformers/issues/18234" target="_blank">here</a>.</li> </ul> <h4>Thanks and Acknowledgements:</h4> <p>It has been four months since the competition ended, and I could not retrieve the exact empirical details I maintained; however, based on the available notes/kernels, I have tried to give a gist of my experience w.r.t. what worked and what didn't. I apologize for not posting this earlier, as I felt that I didn't have anything significant to contribute in terms of ‘What worked’. However, Mark McDonald's template made me realize that perhaps I did have something to share as far as “What didn’t work” is concerned 😊</p> <p>I want to thank the Kaggle staff team for coordinating this great competition and ensuring it went off seamlessly over a timeframe of 6 months! I want to thank Nguyen for raising the bar high with the initial codebase he shared. This competition had leading industry experts, leading Kaggle grandmasters (many from the top 20) &amp; (maybe) folks from academia. I would like to thank all of them for sharing their thoughts and views during the course of this competition. </p> <h4>Team Members:</h4> <p>Allohvk</p>
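<p>The two preprocessing tricks that worked above (keeping the tail of a long markdown, and hoisting class/function names to the top of a code cell) can be sketched as follows. The token counts and function names here are illustrative assumptions, not the author's implementation:</p> <pre><code>import re

def truncate_markdown_keep_tail(tokens, max_len=24, tail=6):
    # keep the last few tokens of a long markdown instead of plain head truncation
    if len(tokens) &lt;= max_len:
        return tokens
    return tokens[:max_len - tail] + tokens[-tail:]

def hoist_definitions(code):
    # move class/function names to the top of the code cell so they survive truncation
    names = re.findall(r"^\s*(?:def|class)\s+(\w+)", code, flags=re.M)
    return " ".join(names) + "\n" + code if names else code

print(truncate_markdown_keep_tail(list("abcdefghijklmnopqrstuvwxyz"), max_len=10, tail=3))
print(hoist_definitions("import os\nclass Trainer:\n    def fit(self):\n        pass"))
</code></pre>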
Google AI4Code – Understand Code in Python Notebooks
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Overview</h1> <p><strong>Main Features</strong><br> 1) The cosine-similarity based classification model to predict which consecutive two code cells the given markdown cell should be placed between.<br> 2) Postprocessing to bridge the gap between the predicted probability distribution and the task metric.<br> 3) Additional notebooks crawled from Kaggle as external training dataset.<br> <strong>Minor Features</strong><br> 4) Additional pretraining of MLM<br> 5) Markdown sorting model for when multiple markdowns are placed between the same code pairs. (“Markdown sorting model” in the figure below.)<br> 6) Some techniques to speed up the inference.</p> <p>Inference code: <a href="https://www.kaggle.com/code/yufuin/ai4code-public-3rd-solution-inference/notebook?scriptVersionId=103005501" target="_blank">https://www.kaggle.com/code/yufuin/ai4code-public-3rd-solution-inference/notebook?scriptVersionId=103005501</a><br> Model: <a href="https://www.kaggle.com/models/yufuin/ai4code-parallel-bert" target="_blank">https://www.kaggle.com/models/yufuin/ai4code-parallel-bert</a></p> <p>Here is the overview of my model.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2430820%2F4793ecab00bc55b7a0f90cec5b239341%2FAI4Code_3rd_sorlution_model.PNG?generation=1669030267549699&amp;alt=media" alt=""></p> <h1>What Worked</h1> <ol> <li><p>Main model: classification model to predict which consecutive two code cells the given markdown cell should be placed between. </p> <ul> <li>Result: 88.5% on dev (Single deberta-v3-large model, w/o {MLM, postprocess, external dataset})</li> <li>This model builds representations of each consecutive two code cells and each markdown cells, then predicts the probability distribution from cosine similarities between a code and a markdown representations. </li> <li>Loss: CrossEntropyLoss</li> <li>The weights of BERT to encode markdowns and codes are shared.</li> <li>The most important part of the model architecture is LSTM at code encoding. Without LSTM, I only got the dev score of 86~87%.</li> <li>I trained three checkpoints for the best submission.<ul> <li>One is initialized by deberta-v3-large, and the other two are initialized by distilroberta.</li> <li>All of them used the same train-dev split.</li> <li>The ensemble is done by weighted averaging the output probability distributions. 
The ensemble weight is tuned by the grid search on the dev set.</li></ul></li> <li>I used checkpoint ensembling by EMA for deberta-v3-large, and by the strict average for distilroberta.<ul> <li>In fact, the strict average always got better results than EMA even for deberta-v3-large.</li> <li>However, the time to converge is considerably longer than EMA (2~3 times longer), and unfortunately, I found a problem at training a week before the end of the competition and I couldn’t finish re-training for deberta-v3-large with the strict average by the deadline, so I submitted the EMA version for deberta-v3-large instead.</li></ul></li> <li>To prevent out-of-memory during training, I sampled markdown and code cells.<ul> <li>The larger the sampling size, the better the dev/test score.</li></ul></li> <li>The maximum sequence length of subwords for each single cell is 128.<ul> <li>Since a code-pair is constructed from two cells, it could have 256 subwords + special tokens.</li></ul></li></ul></li> <li><p>Postprocessing</p> <ul> <li>Result: +0.8~1% on dev</li> <li>The task score of this competition, Kendall’s tau, has a property that the decrement from the ideal score (=100%) is proportional to the total deviations of the predicted markdown positions from the correct positions (if we ignore the decrement caused by the incorrect order of markdown cells).<ul> <li>For more information: <a href="https://www.kaggle.com/code/yufuin/ai4code-linearity-of-kendall-s-tau-english" target="_blank">https://www.kaggle.com/code/yufuin/ai4code-linearity-of-kendall-s-tau-english</a></li></ul></li> <li>My model outputs the probability distribution of the position of the markdown. According to the predicted probability, we can estimate the expected deviation for each position.</li> <li>Choosing the minimum expected deviation position is a much better strategy to minimize the decrement of the task score than naively choosing by argmax over the probability.</li></ul></li> <li><p>External dataset</p> <ul> <li>Result: &gt;+0.42% on Private LB.</li> <li>I crawled kaggle’s public notebooks myself based on the KGTorrent code<ul> <li>KGTorrent: <a href="https://github.com/collab-uniba/KGTorrent" target="_blank">https://github.com/collab-uniba/KGTorrent</a></li></ul></li> <li>My best submission used only external dataset for training to prevent duplication with original training dataset.</li> <li>For more information: <a href="https://www.kaggle.com/competitions/AI4Code/discussion/343658" target="_blank">https://www.kaggle.com/competitions/AI4Code/discussion/343658</a></li></ul></li> <li><p>MLM</p> <ul> <li>Result: ~+1% on dev (for deberta-v3-large)</li> <li>I trained for 5 days for distilroberta, and 40 days 😂😂😂 for deberta-v3-large by 3090.<ul> <li>For deberta-v3-large, the longer MLM training, the better the dev/test scores.</li></ul></li> <li>I used only the original competition’s training dataset for MLM training.</li></ul></li> <li><p>Markdown sorting model</p> <ul> <li>Result: +0.15% on Public LB (for deberta-v3-large)</li> <li>My main model cannot resolve the order of markdown cells when they are placed between the same code-pair.</li> <li>However, I could improve a maximum +0.6% on dev by perfect ordering than putting randomly. 
The markdown sorting model addresses this.<ul> <li>The actual boost I got is only a quarter of that, though.</li></ul></li> <li>The pseudo code of the model architecture is as follows</li></ul> <pre><code># class/method names here are placeholders
class MarkdownSortingHead(torch.nn.Module):
    def forward(self, mk_attended, mask):
        h1 = self.multi_head_att(mk_attended, mk_attended, mk_attended, mask)[0]
        h2 = mk_attended + self.linear1(h1)
        return self.linear2(h2).squeeze(-1)
</code></pre> <ul> <li>Input: a set of markdown vectors from the attention layer of the main model.<ul> <li>Here, the set is constructed only of markdown cells that are predicted to be placed between the same code-pair (i.e., not all the markdown cells in the notebook).</li></ul></li> <li>Output: the percent ranks among the input set.</li> <li>Loss: BCEWithLogitsLoss.</li></ul></li> <li><p>Speed up the inference</p> <ul> <li>The model weights are converted into FP16.</li> <li>I sorted the input sequences by the sequence length and built minibatches before inputting to BERT. I restored the order after pooling.</li></ul></li> </ol> <h1>What Didn’t Work</h1> <ul> <li>Distorting the predicted probabilities at the postprocessing stage<ul> <li>What I did: to maximize the task score with postprocessing, I used a softmax temperature different from the one used at training (i.e., tuning the temperature parameter after training w.r.t. the task score).</li> <li>At the end of the day, I found using the same temperature as the one used at training is the best option.</li></ul></li> </ul> <h1>Additional Context</h1> <ul> <li>Most experiments were done on a 3090, except for the models for the final submission (they were trained on an A6000).</li> <li>The most appealing point of this competition for me is that the sources of the competition dataset are Kaggle notebooks, which are under Apache License 2.0. It’s quite attractive that the knowledge and techniques learned through this competition, and even the datasets and models, are basically free to use by rebuilding the dataset ourselves (I currently have no plan to use them, though). This is a big reason why I decided to crawl the dataset myself.</li> <li>The two-stage competition format is mentally very good. We have almost no concern about leakage. Otherwise I couldn’t run 40 days of training in my living room.</li> </ul> <h1>Important Citations</h1> <ul> <li>L1 loss idea<ul> <li>Discussion: <a href="https://www.kaggle.com/competitions/AI4Code/discussion/325272" target="_blank">https://www.kaggle.com/competitions/AI4Code/discussion/325272</a></li> <li>Linearity of Kendall's Tau: <a href="https://www.kaggle.com/code/yufuin/ai4code-linearity-of-kendall-s-tau-english" target="_blank">https://www.kaggle.com/code/yufuin/ai4code-linearity-of-kendall-s-tau-english</a></li></ul></li> <li>KGTorrent<ul> <li>Discussion: <a href="https://www.kaggle.com/competitions/AI4Code/discussion/326520" target="_blank">https://www.kaggle.com/competitions/AI4Code/discussion/326520</a></li> <li>Github: <a href="https://github.com/collab-uniba/KGTorrent" target="_blank">https://github.com/collab-uniba/KGTorrent</a></li> <li>Paper: <a href="https://arxiv.org/abs/2103.10558" target="_blank">https://arxiv.org/abs/2103.10558</a></li></ul></li> </ul> <h1>Thanks and Acknowledgements</h1> <p>I appreciate the task organizers for hosting this exciting competition.<br> I also thank the other competitors for insightful discussions.</p>
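<p>The "minimum expected deviation" postprocessing in this write-up can be illustrated with a tiny sketch (my own toy example, not the author's code): given the predicted probability distribution over code-pair slots, pick the slot that minimizes the expected absolute deviation rather than the argmax, since the decrement in Kendall's tau is roughly proportional to that deviation.</p> <pre><code>import numpy as np

def min_expected_deviation_position(probs):
    # probs[j]: predicted probability that the markdown belongs to slot j
    positions = np.arange(len(probs))
    expected_dev = [np.sum(probs * np.abs(positions - j)) for j in positions]
    return int(np.argmin(expected_dev))

p = np.array([0.30, 0.05, 0.05, 0.30, 0.30])              # bimodal distribution
print(np.argmax(p), min_expected_deviation_position(p))    # argmax picks 0, expected deviation picks 3
</code></pre>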
Google AI4Code – Understand Code in Python Notebooks
41st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>41st Place Solution</strong></p> <p>First, thanks to Kaggle and the competition hosts for a challenging competition, Google AI4Code – Understand Code in Python Notebooks: Predict the relationship between code and comments. This is a writeup of our collective team effort, and it was a great experience working with my teammates!</p> <h1><strong>Overview</strong></h1> <p>Based on Khôi's initial idea, we modified it a little bit to get more global context information. We sample 45 code cells with a shorter per-cell length (10 tokens). If the notebook has fewer than 45 cells, we dynamically adjust the max length per code cell based on the number of code cells.</p> <p>Ensembling the 2 training settings (our setting and Khoi Nguyen's setting - 20 code cells with 23 tokens each) also gives us a boost on CV and LB (we don't remember the exact number).</p> <ul> <li><strong>Model</strong>: Weighted average of some point-wise models such as Distilbert-base-uncased, Deberta-v3-base and Codebert-base</li> <li><strong>Data</strong>: All training competition data, no external data. Cross Validation Split: 90/10</li> <li><strong>Hardware</strong>: Two A6000-48GB. The largest model trains for 3-4 days; lightweight models like distilbert take 1-2 days.</li> </ul> <p>Some of the results on local CV are detailed in the photos below:</p> <p><strong>Initial experiments</strong><br> Initial experiments didn't beat Khoi's baseline:<br> <a href="https://postimg.cc/DSSp3ZXN" target="_blank"><img src="https://i.postimg.cc/bwT7bSNq/inital-exp1-fix.png" alt="inital-exp1-fix.png"></a></p> <p>A dynamic tokenizer based on the number of code cells and FGM training helped boost the score on both local CV and LB:</p> <p><a href="https://postimg.cc/JyJTt2Kc" target="_blank"><img src="https://i.postimg.cc/ZqsXskdg/inital-exp2-fix.png" alt="inital-exp2-fix.png"></a></p> <p><strong>Final &amp; best experiments</strong><br> The best model on both local CV and LB is Deberta-v3-base.<br> <a href="https://postimg.cc/JsHxdFwV" target="_blank"><img src="https://i.postimg.cc/P527NG2x/final-exp-fix.png" alt="final-exp-fix.png"></a></p> <p><strong>Inference code:</strong> <a href="https://www.kaggle.com/code/dunglt/inference-pct-rank-model?scriptVersionId=101081766" target="_blank">Inference Ensemble Final Submission</a></p> <h1><strong>What worked</strong></h1> <h2>[High Impact] Idea</h2> <ul> <li>Dynamic tokenizer based on the number of code cells: Local +0.007 and LB +0.005</li> <li>Local tokenizer to boost training speed</li> </ul> <h2>[Medium Impact] Idea</h2> <ul> <li>MLM pretraining (15% random token masking) on all data except 1000 val examples: Local and LB +0.002 (not much boost, and much time to pretrain)</li> <li>FGM training: boosts local CV and LB by +0.002 but takes 2x the training time.</li> </ul> <h1><strong>What didn’t work</strong></h1> <ul> <li>Larger models: Deberta-v3-large, Deberta-v3-xlarge</li> <li>AWP/Fast AWP training: did not boost the score on local CV or LB</li> <li>EMA training</li> <li>Custom head: <ul> <li>mean/max pooling</li> <li>concat mean-max pooling</li> <li>concat last 4 layers</li> 
<li>Attention pooling</li> <li>LSTM/ Bi-LSTM</li></ul></li> </ul> <h2>Additional context (ideas we did not have time to try):</h2> <p>We really wanted to test the ideas in the papers below, but we didn't have enough time or computational resources.</p> <ul> <li>Deep Attentive Ranking Networks for Learning to Order Sentences - <a href="https://arxiv.org/abs/2001.00056" target="_blank">paper</a> </li> <li>Efficient Relational Sentence Ordering Network - <a href="https://ieeexplore.ieee.org/document/9444816" target="_blank">paper</a> </li> <li>Neural Sentence Ordering Based on Constraint Graphs - <a href="https://arxiv.org/abs/2101.11178" target="_blank">paper</a></li> <li>Is Everything in Order? A Simple Way to Order Sentences - <a href="https://arxiv.org/pdf/2104.07064v2.pdf" target="_blank">paper</a> </li> <li>Sentence Ordering and Coherence Modeling using Recurrent Neural Networks - <a href="https://arxiv.org/abs/1611.02654" target="_blank">paper</a> </li> <li>Knowledge Distillation (KD)</li> <li>Augmentation techniques: Mixup, random masking</li> </ul> <h1>Thanks and Acknowledgements:</h1> <p>We want to thank the Kaggle staff team &amp; competition host for coordinating this great competition! <br> Thanks to all the teams for sharing their solutions, especially the top teams; we really learned a lot from them.</p> <p>Finally, thanks for the hard work of my teammate <a href="https://www.kaggle.com/dunglt" target="_blank">@dunglt</a> </p> <h1>Team Members:</h1> <ul> <li>Khanh Vu Duy <a href="https://www.kaggle.com/duykhanh99" target="_blank">KhanhVD</a> </li> <li>Dung Le <a href="https://www.kaggle.com/dunglt" target="_blank">Dzung</a> </li> </ul>
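<p>A minimal sketch of the "dynamic tokenizer" idea from the overview above. The numbers and the fallback cap are illustrative assumptions, not the team's exact rule: when a notebook has fewer code cells than the sampling limit, the per-cell token budget is increased so the total context stays roughly constant.</p> <pre><code>def dynamic_code_cell_length(num_code_cells, max_cells=45, base_len=10, total_budget=450):
    # with many cells: 45 cells x 10 tokens; with fewer cells: spread the budget over them
    if num_code_cells &gt;= max_cells:
        return base_len
    return min(total_budget // max(num_code_cells, 1), 128)

for n in (100, 45, 20, 5):
    print(n, dynamic_code_cell_length(n))
</code></pre>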
Google AI4Code – Understand Code in Python Notebooks
4th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/cabbage972" target="_blank">@cabbage972</a> </p> <ol> <li>regular mlm on mpnet</li> <li>Learn exactly the same weights for code and markdown; the only difference is the inputs to bert </li> <li>Like this <br> code_cell-2[SEP]code_cell-1[SEP]int(code_cell_pct_rank * 1000)[SEP]Num_cells[SEP]Num_code_cells[SEP]current_code_cell[SEP]code_cell+1[SEP]code_cell+2<br> If there is no code_cell+1 then leave it as 'after', and if there is no code_cell-1 leave it as 'before'</li> <li>The ensembling mainly combines the output scores of the 3 models. <br> For the recall model, pred means the code cell which is nearest to the markdown <br> For the recall model, cls pred means the sum of each code_cell_position * softmax_prob; the cls pred tau score is higher than the pred tau score<br> sim means the cosine sim of code and markdown, so var_sim means np.var(all the code cosine sims for a specific markdown)<br> For the pairwise concat model, pred and cls pred mean the same as for the recall model<br> For the context rank model, I used classification (100 classes) and regression.<br> So pred and reg_pred both mean the regression prediction, listed in the figure as xc_reg_pred.<br> cls_pred means the classification prediction (sum of each 100-class position * softmax_prob)<br> cls2_pred means the classification prediction of the position class with the argmax softmax_prob<br> So abs_cls2_reg_diff means, for the context model, abs(cls2_pred - reg_pred)</li> </ol> <p><strong>Recall model</strong> (infer 2h+, 1h using 2 T4)</p> <ul> <li>pairwise, two tower, with backbone sentence-transformers/all-mpnet-base-v2</li> <li>continued mlm on training data</li> <li>predicting for each markdown whether a code cell is just after it (also a fake last code with content 'the end of notebook' added) </li> <li>9 negatives selected within the same notebook (better compared to using 4, both offline and online)</li> <li>for code, encode itself plus the 2 codes before and 2 after </li> <li><strong>crossentropy loss with cosine / learned temperature</strong> (finally about 0.002) as input logits (see the sketch at the end of this write-up)</li> <li>offline tau score 897 (average the recall score for each code position)</li> <li>late submission score is 8936</li> <li><strong>the recall model is a key model, as the rank model and context rank model both use its output</strong></li> <li>The main drawback here is that the recall model with 9 negatives is GPU costly: I used 8*V100, and it cost about 30 h to train 1 model on all the train data. 
</li> </ul> <p><strong>Pairwise rank model</strong> (infer 2h+)</p> <ul> <li>pairwise, concat, with backbone deberta-v3-small </li> <li>only run inference for markdowns whose recall-model max prob &lt; 0.8, so about 50% of markdowns are inferred (reranked).</li> <li>offline tau score 905</li> </ul> <p><strong>Context rank model</strong> (infer 2.5h+)<br> Modified from <a href="https://www.kaggle.com/code/suicaokhoailang/stronger-baseline-with-code-cells" target="_blank">https://www.kaggle.com/code/suicaokhoailang/stronger-baseline-with-code-cells</a> </p> <ul> <li>backbone deberta-v3-small</li> <li><strong>3 markdowns as a group (inferred at once for each instance)</strong> <br> these groups are selected using the recall model, grouping markdowns with similar code scores.</li> <li><strong>40 codes selected by the recall model (train using recall model OOF</strong>) <br> this is also a key point: for training, negative codes are sampled with sampling probabilities provided by the recall model; for inference, just choose the top recall-probability codes. After the contest finished I learned that most top teams used many more codes; more context/codes is a key point for winning.<br> Training with just a spaced selection of codes with some distortion (randomness) and then inferring with the recall-model-selected top codes can improve 0.1+. I tested 1 context model online before: from 8806 (without recall model) to 8892 (using recall model).<br> Then training with recall-model-selected codes, or with probability-based sampling, is even better, 0.05+.</li> <li>adding the OOF recall model score as input to bert improves 0.004 <br> not sure if it overfits, so I selected 1 sub with it (9147) and another without it (9143). I made some mistakes and had no time to retrain, just finetuned 3 epochs for the models with the OOF score, so this OOF method should have improved much more, I think 2k+; you can see below the huge improvement from just the first epoch. <br> Finally we see an improvement from 9162 to 9170.<br> <img src="https://i.postimg.cc/TYZ5hH0S/Qd-AG4-Bsuv-FVh-C5-M.png" alt=""></li> <li>multiple losses (classification with num_classes 100 + MAE for regression to predict the relative rank for each markdown)</li> <li>offline tau score 916 (interestingly, this is the classification output) and tau score 914 (regression output)</li> </ul> <p><strong>GBDT model to combine the above 3 models' scores</strong> (infer 5m) </p> <ul> <li>used catboost and lgb; it helps improve about 6k online.</li> <li>offline tau score 922, online 9147<br> top features shown below: <br> xpc means the pairwise concat rank model <br> xc means the context rank model <br> xp means the pairwise two-tower model (recall model) <br> <img src="https://i.postimg.cc/kMYWbRSt/2.png" alt=""></li> </ul> <p>Just learned that deberta-v3-large with large inputs can be much better; more context (more codes, like 128+) is key for a better result ^_^. Next time I will do more experiments at first to choose the correct direction. <br> I did not use external data in the end; I tried but without much improvement. The direction was wrong, I should have tried to use more kaggle notebooks, but I found so many duplicates in the published public kaggle dataset that I did not try it either.<br> Thanks kaggle, this is really an interesting game! 
Also want to thank wandb and tabnine; they make experimenting and coding easier.</p> <p><strong>code</strong><br> inference to get 9170 for 4th place: <a href="https://www.kaggle.com/code/goldenlock/reproduce-v1" target="_blank">https://www.kaggle.com/code/goldenlock/reproduce-v1</a><br> pairwise inference to get 8936: <a href="https://www.kaggle.com/code/goldenlock/pairwise-recall" target="_blank">https://www.kaggle.com/code/goldenlock/pairwise-recall</a><br> training: <a href="https://www.kaggle.com/datasets/goldenlock/ai4code" target="_blank">https://www.kaggle.com/datasets/goldenlock/ai4code</a> <br> <a href="https://github.com/chenghuige/Google-AI4Code-Understand-Code-in-Python-Notebooks" target="_blank">https://github.com/chenghuige/Google-AI4Code-Understand-Code-in-Python-Notebooks</a></p>
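<p>The recall model's "crossentropy loss with cosine / learned temperature" can be sketched as below. This is an illustrative toy under my own assumptions (the initial temperature, shapes, and class name are mine; the write-up only says the learned temperature ended up around 0.002):</p> <pre><code>import torch
import torch.nn.functional as F

class CosineRecallLoss(torch.nn.Module):
    def __init__(self, init_temperature=0.05):
        super().__init__()
        self.log_t = torch.nn.Parameter(torch.log(torch.tensor(init_temperature)))

    def forward(self, md_emb, code_embs):
        # md_emb: (B, d); code_embs: (B, 1 + n_neg, d), index 0 is the positive code cell
        sims = F.cosine_similarity(md_emb.unsqueeze(1), code_embs, dim=-1)  # (B, 1 + n_neg)
        logits = sims / self.log_t.exp()
        targets = torch.zeros(md_emb.size(0), dtype=torch.long)
        return F.cross_entropy(logits, targets)

loss_fn = CosineRecallLoss()
print(float(loss_fn(torch.randn(4, 128), torch.randn(4, 10, 128))))  # 1 positive + 9 negatives
</code></pre>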
Google AI4Code – Understand Code in Python Notebooks
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to the organizers for hosting a challenging and interesting competition. Next, congrats to all winners! I placed better than I thought I would but unfortunately, I still placed right outside of the prize zone. Luckily, there is still a $1000 prize for the top 10 teams.</p> <p>My inference code is released at: <a href="https://www.kaggle.com/code/shujun717/add-decorator-ai4code-test32-add-break-26-code?scriptVersionId=103045432" target="_blank">https://www.kaggle.com/code/shujun717/add-decorator-ai4code-test32-add-break-26-code?scriptVersionId=103045432</a><br> My training code is released at: <a href="https://github.com/Shujun-He/Google_AI4code_6th_place_solution" target="_blank">https://github.com/Shujun-He/Google_AI4code_6th_place_solution</a></p> <p>Please see below for details of my solution:</p> <h3>Relative to absolute position prediction</h3> <p><img src="https://raw.githubusercontent.com/Shujun-He/Google_AI4code_6th_place_solution/main/graphics/relative2absolute.png" alt="alt text"></p> <p>(I typed this in latex and just screenshot it. Sorry for the quality. For a better version, see <code>write_up_6th_solution.pdf</code> in my code repo)</p> <h3>Network architecture</h3> <p>My network architecture consists mainly of a deberta encoder and a GRU layer; an overview is shown in the Figure below. First, markdown blocks and code blocks are tokenized and concatenated. During training, 10 markdown blocks with 20 code blocks with a max length of 1100 are used first, then 10 markdown blocks with 40 code blocks with a max length of 1600 are used to train another 5 epochs, and then the weights are retrained again with 20 markdown blocks with 80 code blocks with a max length of 3160. Following concatenation of markdown and code blocks, a deberta large encoder is used to encode the text, and then each markdown and code block is pooled. Next, features are concatenated to the pooled feature vectors (discussed in a later section). Each markdown block feature vector is then concatenated to the sequence of code block feature vectors, and then a GRU network is used to predict the relative position of that markdown block with respect to each code block. Lastly, the absolute position of the markdown cell is computed according to Equation 1. MAE loss is used during training.</p> <p><img src="https://raw.githubusercontent.com/Shujun-He/Google_AI4code_6th_place_solution/main/graphics/architecture.png" alt="alt text"></p> <h3>Features</h3> <p>The following features are used:</p> <ol> <li>number of markdown cells in the given notebook (normalized by 128)</li> <li>number of code cells in the given notebook (normalized by 128)</li> <li>ratio of markdown cells to total number of cells</li> </ol> <p>These features are concatenated with the pooled feature vectors of markdown cells. Pooled code cell feature vectors are also concatenated with their absolute positions.</p> <h3>MLM</h3> <p>Masked language modeling pretraining is used, where the input text is arranged in the same manner as in finetuning. MLM probability was set to 0.15 and the model is trained for 10 epochs. 
A max sequence length of 768 is used with at most 5 markdown cells and 10 code cells.</p> <h3>Inference and TTA</h3> <p>During inference, I tried to predict the positions of as many markdown cells as possible. I was able to use a max length of 4250 with 100 code blocks and 30 markdown blocks. Code blocks have at most 26 tokens and markdown blocks 64. Notebooks are sorted based on the number of markdown/code blocks to reduce inference time. For longer sequences with more than 30 markdown blocks, I cannot make predictions for all of them at the same time, and multiple forward passes are needed to cover all markdown blocks. As a result, the model never sees all the markdown blocks at the same time. To counteract that, I do test time augmentation by randomly shuffling the markdown cells each time a new inference cycle is started, so the model sees different combinations of markdown cells in long sequences (a small sketch of this shuffling TTA is given at the end of this write-up). For shorter sequences, this simply changes the order of markdown cells during inference. For my best inference notebook, a total of 3 inference cycles are used (3xTTA). 3xTTA improves the score from 0.9067 to 0.9100.</p> <h3>What didn’t work</h3> <ol> <li>Trying to translate markdown cells that are not in English did not result in any improvements for me. I tried facebook’s m2m100 418M.</li> <li>Separating code and markdown cells and doing two forward passes through deberta for concatenated code and markdown cells.</li> <li>Cubert did not work because I couldn’t fix the tokenizer.</li> <li>Maxpooling instead of meanpooling</li> </ol> <h3>References</h3> <p>I used the solution from Khoi Nguyen as a baseline: <a href="https://www.kaggle.com/competitions/AI4Code/discussion/326970" target="_blank">https://www.kaggle.com/competitions/AI4Code/discussion/326970</a>.</p>
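<p>As a rough illustration of the shuffling TTA described above, here is a minimal sketch: shuffle the markdown cells before each inference cycle and average the predicted positions per cell. The <code>predict_positions</code> function is a hypothetical stand-in for the model's forward pass, not the author's actual code.</p> <pre><code># Minimal sketch of 3x shuffling TTA: shuffle markdown cells before each cycle so
# the model sees different markdown combinations, then average per-cell predictions.
import random
from collections import defaultdict

def predict_positions(markdown_ids, code_ids):
    # placeholder for the real model: returns a position for each markdown cell id
    return {m: random.random() for m in markdown_ids}

def tta_predict(markdown_ids, code_ids, n_cycles=3, seed=0):
    rng = random.Random(seed)
    sums, counts = defaultdict(float), defaultdict(int)
    for _ in range(n_cycles):
        shuffled = list(markdown_ids)
        rng.shuffle(shuffled)                     # new markdown order/grouping per cycle
        for cell_id, pos in predict_positions(shuffled, code_ids).items():
            sums[cell_id] += pos
            counts[cell_id] += 1
    return {c: sums[c] / counts[c] for c in sums}

print(tta_predict(["md1", "md2", "md3"], ["c1", "c2"]))
</code></pre>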
Google AI4Code – Understand Code in Python Notebooks
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>Overview</strong></h1> <p>Our final submission is a blend of three models with the same architecture and different training data. The three models share the same pipeline. We applied a two-stage method for the competition.</p> <ol> <li>Training a BERT model (UniXcoder) to get a sentence embedding for each cell;</li> <li>Training a simple transformer with the sentence embeddings (or cell embeddings) as input (a sketch is given at the end of this write-up).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F847605%2F80fe690c1982391400afb3cf290bd89e%2F1.png?generation=1669259314036608&amp;alt=media" alt=""></li> </ol> <h1><strong>Our Code</strong>:</h1> <p>Submit and training code : <a href="https://www.kaggle.com/code/qiaoshiji/fork-of-ai4code-infer-22e56e/data?scriptVersionId=103071263" target="_blank">https://www.kaggle.com/code/qiaoshiji/fork-of-ai4code-infer-22e56e/data?scriptVersionId=103071263</a></p> <h1><strong>What Worked</strong></h1> <ol> <li>Using UniXcoder as the first-stage pretrained model. The initial scores of the different models are shown below:<br> codebert 0.8226<br> graphcodebert 0.8482<br> Plbart 0.8492<br> codet5 0.8475<br> unixcoder 0.8594</li> <li>Pretraining the first-stage model using three different types of tasks, as follows:</li> </ol> <ul> <li>Randomly selecting N successive cells and training a rerank task;</li> <li>Randomly selecting one cell and predicting the next cell;</li> <li>Randomly selecting N successive cells and choosing one cell as the query, the rest as the answer.</li> </ul> <ol start="3"> <li>Using longer sequence input in the second stage: 512 code cells and 512 markdown cells for our final model</li> <li>Multi-task learning for the second-stage model:</li> </ol> <ul> <li>Position as a regression task</li> <li>Between which two code cells a markdown cell falls</li> <li>The number of markdown cells between every two code cells</li> </ul> <ol start="5"> <li>Using more data:</li> </ol> <ul> <li>Raw data for model1</li> <li>Raw data + ext data1 for model2</li> <li>Raw data + ext data2 for model3<br> Ext data is from KGTorrent:&nbsp;<a href="https://github.com/collab-uniba/KGTorrent" target="_blank">https://github.com/collab-uniba/KGTorrent</a></li> </ul> <h1><strong>What Didn’t Work</strong></h1> <ol> <li>A classification model as the ensemble model</li> <li>Making groups by the number of cells and training models for each group</li> </ol> <h1><strong>Important Citations</strong></h1> <ol> <li>KGTorrent<br> Github:&nbsp;<a href="https://github.com/collab-uniba/KGTorrent" target="_blank">https://github.com/collab-uniba/KGTorrent</a><br> Paper:&nbsp;<a href="https://arxiv.org/abs/2103.10558" target="_blank">https://arxiv.org/abs/2103.10558</a></li> <li>UniXcoder<br> Github:&nbsp;<a href="https://github.com/microsoft/CodeBERT/tree/master/UniXcoder#1-code-and-nl-embeddings" target="_blank">https://github.com/microsoft/CodeBERT/tree/master/UniXcoder#1-code-and-nl-embeddings</a></li> </ol> <h1><strong>Thanks and Acknowledgements</strong></h1> <p>Many thanks to the organizer for hosting such a great competition.</p> <h1><strong>Team Members</strong>:</h1> <p><a href="https://www.kaggle.com/guoday" 
target="_blank">@guoday</a> <br> <a href="https://www.kaggle.com/mahluo" target="_blank">@mahluo</a> </p>
Google AI4Code – Understand Code in Python Notebooks
A High Schooler’s Solo Journey to get a Silver Medal.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><em>Model Architecture</em></p> <p>The model is based on treating the problem as a regression task, where the known relative code cell positions are assigned a value between 0 and 1 and the predicted markdown cell positions can take a value between -0.2 and 1.2. This way I impose an inductive bias on the code cells and allow the markdown cells to be placed anywhere before and after the code cells.</p> <p>The model architecture is as follows:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8807553%2F58d2dcbe3cd1933a229f290470528f7f%2FTree%20diagrams%20(1).png?generation=1663797416217958&amp;alt=media" alt=""></p> <p>Firstly, the tokenized (CodeBERT tokenizer) code and markdown cells are both fed separately into CodeBERT. Only the first latent code and markdown output vectors from CodeBERT are selected: </p> <pre><code>x_code = self.code_bert(code_tokens.flatten(0,1), attention_mask=code_mask.flatten(0,1).to(device))\
    .last_hidden_state.view(*code_tokens.shape[:3],-1)[:,:,0,:]
</code></pre> <p><br> Then, positional encodings are added to the latent code vector; this is the input to the encoder. The output from the encoder is then input (as context) alongside the latent markdown vector into the decoder, which can attend to the interactions between all cells, code and markdown. The output of the decoder is finally fed into the regressor, which outputs the markdown cell position predictions (a simplified sketch of this encoder-decoder head is given at the end of this write-up).</p> <p><em>Model Training</em></p> <p>The models were trained only with the dataset provided by the competition. The only notable difference in training compared to other solutions is that instead of using MSE or MAE as a loss function, I tried to approximate Kendall Tau using a loss function inspired by Spearman Rho which internally uses softrank, making it differentiable.</p> <pre><code>def spearman_p_loss(pred, target_ranks, **kw):
    d2 = 0.
    n = 0.
    for p,t in zip(pred.flatten(1),target_ranks.flatten(1)):
        mask = t != -1
        rp = soft_rank(p[mask].unsqueeze(0), **kw) - 1.
        rt = t[mask].unsqueeze(0)
        d2 += (rt-rp).square().sum()
        n += mask.sum()
    return 4. * d2/ ((n*(n-1)))
</code></pre> <p><a href="https://en.wikipedia.org/wiki/Rank_correlation#Spearman%E2%80%99s_%CF%81_as_a_particular_case" target="_blank">https://en.wikipedia.org/wiki/Rank_correlation#Spearman%E2%80%99s_%CF%81_as_a_particular_case</a> </p> <p><em>Fast Kendall Tau computation</em></p> <p>I noticed the code for Kendall Tau was a bit slow, even the version that was based on bisect, so as a fun experiment I made it faster and implemented it without any loops (and batched):</p> <pre><code>    def update(self,preds,index_gt):
        masks = index_gt != -1
        _preds=preds.clone()
        _index_gt = index_gt.clone()
        m = masks.flatten(1)
        _index_gt[~masks] = MAX_CELLS*2
        _preds[~masks] = np.inf
        pred_ranks = _preds.flatten(1).argsort(dim=1)
        ranks=torch.gather(_index_gt.flatten(1),1,pred_ranks)
        self.inversions += torch.triu((ranks.unsqueeze(1).permute(0,2,1)&gt;ranks.unsqueeze(1))).sum()
        len_gt = masks.sum(dim=(1,2))
        self.total_2max +=((len_gt-1)*len_gt).sum()

    def compute(self):
        return 1. - 4. * self.inversions / self.total_2max
</code></pre> <p><em>Closing thoughts</em></p> <p>Having entered the competition only one month prior to the deadline, I ran out of computational power and time, so my model ended up being undertrained. </p> <p>I really enjoyed this competition, see you guys on the next one!</p> <p><a href="https://github.com/antorbus/Google-AI4Code" target="_blank">https://github.com/antorbus/Google-AI4Code</a> </p> <p>Thanks a lot!</p> <p>Around 4 days with 6x 3090.</p>
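<p>To make the architecture above easier to follow, here is a minimal sketch of the encoder-decoder regression head: code-cell vectors (after positional encoding) go through a transformer encoder, markdown-cell vectors query them through a transformer decoder, and a regressor maps each markdown output to a position. The dimensions, layer counts and the exact scaling into [-0.2, 1.2] are assumptions for illustration, not the author's exact model.</p> <pre><code># Simplified sketch of the encoder/decoder head on top of per-cell CodeBERT vectors.
import torch
import torch.nn as nn

class RankHead(nn.Module):
    def __init__(self, d_model=768, n_heads=8, n_layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.regressor = nn.Linear(d_model, 1)

    def forward(self, code_emb, md_emb):
        # code_emb: (batch, n_code, d) CodeBERT vectors for code cells (+ positional enc.)
        # md_emb:   (batch, n_md, d)   CodeBERT vectors for markdown cells
        memory = self.encoder(code_emb)        # code cells attend to each other
        h = self.decoder(md_emb, memory)       # markdown queries attend to the code context
        # squash into (-0.2, 1.2): markdown may land before or after all code cells
        return torch.sigmoid(self.regressor(h)).squeeze(-1) * 1.4 - 0.2

head = RankHead()
pred = head(torch.randn(2, 10, 768), torch.randn(2, 4, 768))
print(pred.shape)                              # torch.Size([2, 4])
</code></pre>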
Google AI4Code – Understand Code in Python Notebooks
Fast and light ensemble model (37th after 2nd rerun)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Description and Source code: <a href="https://github.com/vaaliferov/129_ai4code" target="_blank">https://github.com/vaaliferov/129_ai4code</a><br> <img src="https://raw.githubusercontent.com/vaaliferov/129_ai4code/main/images/regression_model.png" alt=""><br> <img src="https://raw.githubusercontent.com/vaaliferov/129_ai4code/main/images/match_model_train.png" alt=""><br> <img src="https://raw.githubusercontent.com/vaaliferov/129_ai4code/main/images/match_model_infer.png" alt=""></p>
Google AI4Code – Understand Code in Python Notebooks
My Part of Team's Solution (Punch Cards Are All You Need, LB Rank 24)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,<br> I had a great experience participating in this competition and would like to thank the organizers, Kaggle staff, everyone who shared their insights on the discussion boards and in the public notebooks, and most importantly - my amazing teammates.</p> <p><strong>Background</strong><br> I wasn’t intending to seriously participate in this competition, due to a lack of free time during the summer. However, by late July I came up with a direction that seemed appealing due to being lightweight in terms of development time, training time and inference time. Therefore, I decided to pursue it and see what happens. When I saw that the results were promising, I teamed up with a few other participants. After teaming up, I focused mainly on ensembling the team’s models. I present below some information about my single model, my ensembling method, the team solution, the tools that I used and some of my work process.</p> <p><strong>Single Model</strong><br> I implemented a Bi-encoder model that, given a markdown cell and a code cell, predicts whether they are adjacent or not, based on the cosine similarity of the encoded cells. Here are some details.</p> <ul> <li>Loss function: multiple negatives ranking loss, with additional hard negatives sampled from the same notebook as the positive example.</li> <li><em>Main modeling trick</em>: I added fake code cells to represent the beginning and end of the notebook, in order to better predict for introduction and summary markdown cells (LB improvement 0.01)</li> <li>Inference time improvement: Loaded the input data in chunks of 50 notebooks, sorted the cells by their text length to minimize the time spent encoding pad tokens. Ran the encoding with a batch size of 128.</li> <li>Dataset: 80k notebooks, 90/10 train/test split</li> <li>Sequence length: 256 tokens for each cell, so 256 markdown and 256 code </li> <li>Data preprocessing: None!</li> <li>Number of training epochs: 3</li> <li>Training runtime: 17h on an A6000 </li> <li>Inference runtime: 1h20m</li> <li>Single model performance: Public and Private LB both at 0.8555 with a GraphCodeBert backbone.</li> </ul> <p><strong>Ensembling</strong><br> First of all, I found it helpful to combine the fractional predictions rather than the ranks as they appear in the submission file, as the quantization to integer ranks introduces bias. To calculate the model weights, I optimize for a maximal Kendall Tau value by using the Nelder-Mead method. On top of this, I also implemented an adaptive scheme where each individual prediction is further weighted by a confidence score that is based on how sharp its logit distribution is. 
The improvement from this adaptive weighting scheme over a simple linear combination is 0.0037 (a small sketch of the weight optimization is given at the end of this post).</p> <p><strong>Team Solution</strong><br> Our best solution (LB 0.8824) is two of my bi-encoder models (CodeBert, GraphCodeBert) ensembled with a single pointwise Deberta-v3-base model that my teammates had developed.</p> <p><strong>Tooling</strong></p> <ul> <li>Experiment tracking with W&amp;B</li> <li>Configuration management with Hydra</li> <li>Team communication via Discord</li> <li>Code version management with Github</li> <li>PyCharm for development</li> <li>Model training on Kaggle, Colab and eventually JarvisLabs</li> </ul> <p><strong>Work Methodology</strong><br> Consistent preprocessing of data between training and inference was achieved by:</p> <ol> <li>Packaging my GitHub project as a Pip Wheel file, saving it as a dataset and then using it to run inference. The inference notebook doesn’t have any Python code in it; it only invokes a script from the package.</li> <li>The training script saves the hyperparameters alongside the model weights. In inference, this hyperparameters file is applied to the configuration.</li> <li>The training script and the inference code packaging script both save a git commit hash; this allows me to make sure that the versions are aligned.</li> </ol> <p>Procedure for testing new ideas:</p> <ol> <li>Prototyping in a feature branch, to avoid introducing problems into the main code. In Git, it can be hard to undo a bad commit that is far back in the log.</li> <li>Running in the IDE line by line to debug.</li> <li>Training on a small dataset first to see if the results are in a reasonable range.</li> <li>Running a prediction analysis notebook that checks for prediction error by: language, notebook length, true rank quantile of cell, cell text length, etc.</li> </ol> <p>Thanks for reading and good luck to all participants in the upcoming rerun!</p>
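<p>As an illustration of the ensembling step above, here is a minimal sketch that optimizes blend weights for fractional predictions by maximizing Kendall's tau with the Nelder-Mead method. The data is synthetic and the simple weighted average leaves out the per-prediction confidence weighting described in the write-up.</p> <pre><code># Minimal sketch: find ensemble weights that maximize Kendall's tau via Nelder-Mead.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
true_pos = rng.random(500)                     # ground-truth fractional positions
preds = np.stack([true_pos + rng.normal(0, s, 500) for s in (0.05, 0.08, 0.12)])  # 3 models

def neg_tau(weights):
    w = np.abs(weights)                        # keep weights non-negative
    blend = (w[:, None] * preds).sum(axis=0) / w.sum()
    return -kendalltau(blend, true_pos)[0]

res = minimize(neg_tau, x0=np.ones(3) / 3, method="Nelder-Mead")
best_w = np.abs(res.x) / np.abs(res.x).sum()
print("weights:", best_w, "tau:", -res.fun)
</code></pre>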
Google AI4Code – Understand Code in Python Notebooks
Public LB 14th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yeah, the dense layer is shared. </p> <p>Thanks! The short training time indeed allowed me to iterate on my ideas much more quickly.</p> <p>First and foremost, I want to thank Kaggle and the Kaggle community for providing us with the chance to learn via experience and collaboration. This was my first time competing in a Kaggle competition, and I'm quite pleased because I learned a lot about various model Archs (such as transformers, Pointer Networks, and so on) and NLP in general. </p> <p>We have three approaches for ranking problems (correct me if I'm wrong!): point-wise, pair-wise, and list-wise. The absence of global context is a concern with point-wise, however, some participants attempted to fix it by attaching a subset of code cells to each markdown cell. The pair-wise strategy appears to be highly strong, but it comes at the cost of a long run-time. Finally, list-wise attempts to rank cells using a mix of global and local information.</p> <p>I picked the list-wise method since it was faster than the pair-wise method and seemed more natural to me. </p> <p>The following image depicts the model architecture:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F289290%2F2b83aa98253260dd4bceb7035dd91b99%2FAI4Code_arch.drawio.png?generation=1660293627287930&amp;alt=media" alt=""></p> <ol> <li>Using <code>sentence-transformers/all-mpnet-base-v2</code>, compute cell features separately: I choose the first and last 64 words of each cell's text after preprocessing it to get a feel of how the cell begins and finishes. This is then sent into the sentence transformer model, which produces a 768d feature vector.</li> <li>I then append the following 8 features to each vector:<ul> <li>is_code: 1 if cell is code else 0</li> <li>is_markdown: 1 if cell is markdown else 0</li> <li>code_rank: rank of the code cell if cell is code else 0</li> <li>toc_rank: rank of the markdown cell in the table of contents if cell is markdown and ToC exists else 0</li> <li>numberd_rank: rank of the md cell if it has numbered title (like 1.1, 1.2) else 0</li> <li>has_heading_1: 1 if markdown cell has heading 1 else 0</li> <li>has_heading_2: 1 if markdown cell has heading 2 else 0</li> <li>has_heading_3: 1 if markdown cell has heading 3 else 0</li></ul></li> <li>Then I use an 8-head, 10-layer encoder-only transformer to compute the final features for each cell.</li> <li>Finally, I have two outputs for the code cells and four outputs for the markdown cells:<ul> <li>rank: this is the ranking output that I use in the inference. I apply a sigmoid on it.</li> <li>code_rank: rank of the code cell among other code cells.</li> <li>markdown_rank: rank of the markdown cell among other md cells</li> <li>toc_rank: rank of the md cell in the toc </li> <li>numbered_rank: rank of the md cell among other numbered md cells</li></ul></li> </ol> <h2>Loss function:</h2> <p>L1Loss is used, however instead of using it on the [0, 1] range, I use it on the [0, #cells] range. 
To do this, I first apply sigmoid, then multiply by the number of cells and use the result as the predicted output (a short sketch of this loss is given at the end of this write-up). This is because longer notebooks contribute more to the scoring function, so we should pay more attention to their flaws.</p> <h2>training time:</h2> <p>Training the notebook transformer is pretty fast. One epoch of training with a batch size of 10 notebooks and 17k steps takes less than 30 mins on a P100. </p> <h2>Inference time:</h2> <p>The entire inference procedure, including my preprocessing, takes around two hours. However, the model itself takes less than an hour to complete.</p> <h2>Preprocessing</h2> <ol> <li>Fix encodings</li> <li>Fix HTML in markdown cells</li> <li>Detect ToC rankings</li> <li>Detect numbered rankings</li> <li>Detect language</li> <li>Translation of non-English languages using MarianMT</li> </ol> <h2>Challenges</h2> <p>My main challenge was that I couldn't train my entire model end-to-end on all the notebooks since my GPU (Colab P100 or V100) couldn't handle notebooks larger than 20 cells 😄 </p> <h2>External datasets</h2> <p>I crawled around 50k additional notebooks from Kaggle, but it seems they didn't overlap much with the Private test set because it only improved my score by 0.01!</p> <p>Thanks! I trained it independently on 10% of the training data using the same architecture, except for the notebook transformer I used a 1-layer 1-head encoder transformer with only the rank loss.</p> <p>The notebook transformer was then deleted, and the base sentence transformer model was utilized as my feature extractor. Following that, I did not train the base model.</p>
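<p>A minimal sketch of the scaled L1 loss described above, assuming the model outputs one raw logit per cell and targets are absolute ranks; this is an illustration, not the author's exact implementation.</p> <pre><code># Scaled L1 loss sketch: sigmoid then multiply by notebook length, so longer
# notebooks (which weigh more in the metric) produce larger errors and gradients.
import torch
import torch.nn.functional as F

def scaled_l1_loss(logits, target_ranks, n_cells):
    # logits, target_ranks: (batch, max_cells); n_cells: (batch,) true notebook lengths
    pred = torch.sigmoid(logits) * n_cells.unsqueeze(1)          # map to [0, #cells]
    mask = torch.arange(logits.size(1)).unsqueeze(0) &lt; n_cells.unsqueeze(1)
    return F.l1_loss(pred[mask], target_ranks[mask].float())

logits = torch.randn(2, 8)
targets = torch.stack([torch.randperm(8), torch.randperm(8)])
print(scaled_l1_loss(logits, targets, torch.tensor([8, 5])))
</code></pre>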
Google AI4Code – Understand Code in Python Notebooks
[Chai Time] Sanyam explains top solutions from AI4Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Another week, another top solutions video from <a href="https://www.kaggle.com/init27" target="_blank">@init27</a>.</p> <p>This time he explains top published solutions from this competition. Something to watch while we wait for the private LB.</p> <p>Here is the <a href="https://www.youtube.com/watch?v=MygEfk7TBaM" target="_blank">YouTube link</a>.</p> <p><a href="https://www.kaggle.com/cabbage972" target="_blank">@cabbage972</a> I think that is a great idea. Just need to find some time :D I am trying to recover missed time on feedback2 competition right now.</p> <p>😂 your computer needs some chill time</p>
Google AI4Code – Understand Code in Python Notebooks
[Public 31st] CodeT5-base Encoder 1024
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google AI4Code – Understand Code in Python Notebooks <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats on the high spot! Yes, 100% Tensorflow.</p> <p>Thanks a lot and congrats! Each competition is a great space for learning, especially when solving complicated tasks like this one. Good luck in the next competitions!</p> <p>Thanks! Yes, I had similar thoughts.</p> <p>Thanks and congrats! Addition of markdown cells into context gave me a significant improvement (don't remember how much exactly). I shuffle them because during inference their order is unknown.</p> <p>Many thanks to Kaggle and Google X for an exciting unique challenge. Congrats to all participants!</p> <p>I used a point-wise ranking approach. Context for each markdown cell is dynamic, aiming to get as many meaningful tokens as possible. Specifically we reserve a maximum 128 tokens for markdown cell (query) and the rest goes for context. Context pool contains all code cells followed by all markdown cells except the query. Code cells in the context pool are always ordered, markdown cells are shuffled for each example. Sampling from a context pool is based on evenly distributed amounts of tokens taken from each cell. Specifically we start from 1 token per cell. If the total amount of sampled tokens is less than max length we try to take 2 tokens from each cell, and so on. Eventually if we cannot collect enough tokens padding is applied.</p> <p>I tried several backbones and found GraphCodeBERT-base and CodeT5-base being the best ones. They have similar performance on 512 length while CodeT5 was slightly better in my experiments. CodeT5 with 1024 max length demonstrated significant improvement over 512 max length, about 0.014. Single checkpoint takes 4 hours for inference and gets the score around 0.8700. I got my final score with an ensemble of 2 checkpoints from different folds, but they still were undertrained so probably a single checkpoint can reach even higher.</p> <p>I did not use any additional data. I believe the official training set is already representative enough. I used a validation set of 27k notebooks obtained by group split. CV scores correlate very well, being systematically 0.003 higher than LB. Source cleaning is minimalistic. I just removed extra spaces including all indents from the code to make it more compact.</p> <p>Acknowledgment. Huge thanks to the <a href="https://sites.research.google/trc/about/" target="_blank">TRC</a> program for TPU resources.</p> <p>Best of luck to all in the 2nd phase rerun! <br> Hopefully our models will not encounter spherical endless notebooks especially in vacuum.</p> <p>EDIT: <a href="https://github.com/vecxoz/ai4code" target="_blank">Solution code</a></p> <p>Thanks and congrats! Basically you are correct. I trained the model to predict the relative rank of each markdown cell within any given notebook. For example, say our notebook has 4 markdown cells and 4 code cells, and we want to create a training example for the very first markdown cell. In this case our training example would look like the following:</p> <p>[md1] + [sep] + [code1][code2][code3][code4][md4][md2][md3] and target rank is 0.0</p> <p>I plan to release the code, but not very soon. 
Need time for some cleaning. I will update this thread when it is ready.</p> <p>Recently returned to this project to run some experiments. Really great dataset. <br> Sorry, it’s a bit late, but still, here’s my <a href="https://github.com/vecxoz/ai4code" target="_blank">solution code</a>. It’s a minimalist TF/TPU pipeline. <a href="https://www.kaggle.com/thedevastator" target="_blank">@thedevastator</a></p>
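<p>As a rough illustration of the even token-budget context sampling described above, here is a minimal sketch that takes k tokens from every context cell, with k as large as the budget allows; tokenization details and special tokens are simplified away, and this is not the author's actual pipeline.</p> <pre><code># Even token-budget context sampling sketch: try 1 token per cell, then 2, and so
# on, keeping the largest per-cell count whose total still fits the context budget.
def sample_context(cells_tokens, budget):
    best, k = [], 1
    while True:
        sampled = [toks[:k] for toks in cells_tokens]
        total = sum(len(s) for s in sampled)
        if total &gt; budget and k &gt; 1:
            break                              # the previous k was the largest that fit
        best = sampled
        if total &lt;= budget and k &lt; max(map(len, cells_tokens)):
            k += 1
        else:
            break
    flat = [t for cell in best for t in cell]
    return flat[:budget]                       # truncate if even k=1 overflows

query_budget, max_len = 128, 1024
code_cells = [[f"c{i}_{j}" for j in range(30)] for i in range(20)]   # ordered code cells
other_md   = [[f"m{i}_{j}" for j in range(50)] for i in range(10)]   # shuffled markdown cells
context = sample_context(code_cells + other_md, max_len - query_budget)
print(len(context))
</code></pre>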
Google Analytics Customer Revenue Prediction
110th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, Oleg! There must be other, better ML-based strategies, which I expect will be shared.</p> <p>I will briefly write down my solution, which got a medal-border score. </p> <p>So, this is not a complex approach.</p> <blockquote> <ul> <li><strong>Feature Engineering</strong>: based on <a href="https://www.kaggle.com/yoshoku/gacrp-v2-starter-kit">this kernel</a></li> <li><strong>Modeling...Classification =&gt; Regression</strong>: As discussed <a href="https://www.kaggle.com/c/ga-customer-revenue-prediction/discussion/71427">here</a>, after classifying whether a repeat-visit purchase will occur or not, I made a regression model which predicts the amount of that purchase (a small sketch of this two-stage setup is given at the end of this post).</li> <li><strong>Task Timeframe</strong>: In this task, we have to predict the amount of the purchase from 2018/12/01 to 2019/1/31 per user who has visited between 2018/5/1 and 2018/10/15, using data whose timeframe is from 2016/4/1 to 2018/4/30 and from 2018/5/1 to 2018/10/15.</li> <li><strong>Validation Strategy</strong>: To keep the problem simple and to ignore seasonality, I decided to take the following validation strategy: <br> <ul><li>The task is to predict whether a repeat-visit purchase will occur or not between 2018/12/1 and 2019/1/31 by using data whose timeframe is from 2018/5/1 to 2018/10/15. </li> <li>For validation, I predicted the probability of the occurrence of a re-visit between 2017/12/1 and 2018/1/31 by using data whose timeframe is from 2017/5/1 to 2017/10/15 (just a year earlier). </li></ul></li> </ul> </blockquote> <p>In this competition, the principle "Trust Your CV" was also true, as the Public LB was of no use. So, the validation strategy was critical, and I would like to know what approaches other people took. (However, luck might have been most important. Was it or not?)</p> <p>Thanks for reading, and any comments are welcome!</p>
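<p>A minimal sketch of the classification-then-regression idea above, using LightGBM on a synthetic user-level table; the feature names and the final combination (probability times predicted amount) are illustrative assumptions rather than the author's exact setup.</p> <pre><code># Two-stage sketch: classify whether a returning purchase happens, regress its
# (log) amount on positive users only, and combine the two at prediction time.
import numpy as np
import pandas as pd
import lightgbm as lgb

rng = np.random.default_rng(0)
n = 20_000
X = pd.DataFrame({
    "total_hits": rng.poisson(20, n),
    "pageviews_mean": rng.gamma(2.0, 3.0, n),
    "days_since_last_visit": rng.integers(0, 180, n),
})
will_buy = rng.random(n) &lt; 0.01                          # very rare positive class
log_amount = np.where(will_buy, rng.normal(17, 1, n), 0.0)

clf = lgb.LGBMClassifier(n_estimators=200).fit(X, will_buy)
reg = lgb.LGBMRegressor(n_estimators=200).fit(X[will_buy], log_amount[will_buy])

expected_log_rev = clf.predict_proba(X)[:, 1] * reg.predict(X)
print(expected_log_rev[:5])
</code></pre>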
Google Analytics Customer Revenue Prediction
23rd-place solution (Simple CatBoost)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks for my first serious competition. I joined it only at the second stage, so I didn't feel any disappointment over the sudden rule changes :) I used pretty standard aggregate features for my models, but I also calculated all features separately over sessions with positive totalTransactionRevenue. The absence of strong seasonality in the data allowed me to use all of the available data. I used 5-month feature windows with a 2-month target, skipping one month in between (a small sketch of this windowing is given below). For example: x 2016/08/01 - 2017/01/01, y 2017/02/01 - 2017/04/01. For validation, I just used the period from 2018/02/01 to 2018/08/01 with the appropriate target. I built a standard CatBoost regression with a small number of iterations, and that was enough to prevent overfitting. </p>
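<p>A minimal sketch of the sliding feature/target window construction described above, on a synthetic visit log; the exact offsets and aggregates are illustrative assumptions.</p> <pre><code># Sliding-window sketch: 5 months of history, a 1-month gap, then 2 months of
# revenue as the target, repeated for several start dates and concatenated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
visits = pd.DataFrame({
    "fullVisitorId": rng.integers(0, 500, 20_000),
    "date": pd.Timestamp("2016-08-01") + pd.to_timedelta(rng.integers(0, 700, 20_000), "D"),
    "revenue": np.where(rng.random(20_000) &lt; 0.01, rng.gamma(2, 5e7, 20_000), 0.0),
})

def make_window(visits, x_start, months_x=5, gap=1, months_y=2):
    x_end = x_start + pd.DateOffset(months=months_x)
    y_start = x_end + pd.DateOffset(months=gap)
    y_end = y_start + pd.DateOffset(months=months_y)
    hist = visits[visits.date.between(x_start, x_end, inclusive="left")]
    fut = visits[visits.date.between(y_start, y_end, inclusive="left")]
    feats = hist.groupby("fullVisitorId").agg(n_visits=("date", "count"),
                                              sum_rev=("revenue", "sum"),
                                              mean_rev=("revenue", "mean"))
    target = np.log1p(fut.groupby("fullVisitorId").revenue.sum())
    return feats.join(target.rename("target")).fillna({"target": 0.0})

frames = [make_window(visits, pd.Timestamp("2016-08-01") + pd.DateOffset(months=m))
          for m in range(12)]
train = pd.concat(frames)
print(train.shape)
</code></pre>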
Google Analytics Customer Revenue Prediction
26th place - full solution in R
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/c/ga-customer-revenue-prediction/discussion/73260">This is a repost of a comment I made to another thread.</a></p> <p>Here's my <a href="https://www.kaggle.com/burritodan/gstore-2-final">source code</a> (and the <a href="https://www.kaggle.com/burritodan/gstore-1-flatten">data prep step</a>). </p> <p>This was my first competition, and it took some time to get the basics right. Luckily I only joined in the v2 phase. As others have mentioned, the data was very sparse, and I found it very hard to beat the baseline (predict the historic mean) by any margin. However, I was able to get reasonably consistent results with a suitable <strong>validation strategy</strong>:</p> <pre><code>cut_offs = ymd("2017-01-15") %m+% months(0:17) # earliest we can start and have 5.5 months data (to make a consistent recent visitor pool)
train_cut_offs = cut_offs[cut_offs &lt; max(cut_offs) %m-% months(2)]
valid_cut_offs = train_cut_offs %m+% months(2)
train_sets = lapply(train_cut_offs, function(t) which(fm$cut_off %within% interval(t %m-% months(12), t)))
valid_sets = lapply(valid_cut_offs, function(t) which(fm$cut_off == t))
</code></pre> <p>In other words, I had a cut off and new forecast every month, 18 times. I then built 18 sets of features using an expanding window, from the data’s origin to the cut off. This worked better for me than a rolling window. More on why later. The cut off is the analog to the 15th October 2018: the point at which the training data stops. I did not use the demo account to infill 16th-31st October, as I was trying to infill <em>all</em> the missing data, but of course @alijs execution above is much smarter.</p> <p>For cross-validation, I would train each model with the feature set from the cut off month and the 11 months before, so I had a rolling window of 12 feature sets. (Note, each feature set is built from an expanding window, so they overlap.) This was an important step to <strong>augment the data</strong>. There’s so few conversions of recent visitors in any 2 month period, it’s hard to train a model of any complexity using a single cut off. I chose 12 months somewhat arbitrarily, as 5 or 8 months worked well too, arguing to myself it would smooth out any seasonality.</p> <p>The <strong>metric</strong> I used for cross-validation was R². It scales the MSE vs the baseline: predict the historic mean. I felt this was helpful, as the MSE moved quite a bit over time, and this helped me understand why: the mean moved a lot, presumably due to promotional activity (see @MichaelP's comments). 
As you can see from this chart, the model was way off last Spring, so my entry relies on this promotional activity (or lack thereof) not happening again this Winter!</p> <p><img src="http://i67.tinypic.com/28tg6x1.png" alt="R² by target interval end dates (cut off plus 3.5 months)"></p> <p>I am willing to bet the winning entry will incorporate both short term knowledge of promotional activity, and long term general behaviour of visitors, especially repeat purchasers.</p> <p>In terms of <strong>features</strong>, I took inspiration from <em><a href="http://www.brucehardie.com/papers.html">Buy ‘Til You Die</a></em> models. They have <a href="https://en.wikipedia.org/wiki/Sufficient_statistic">sufficient statistics</a> of: the number of purchases, and the time since the first and the last purchase. This is why I used an expanding window: these models rely on having the entire history of transactions to determine whether the customer is still alive: they may not have transacted the last 6 months, but this could be just because they tend to purchase infrequently, say once every 8 months on average. These sufficient statistics were the main time features I used, both for visits and transactions (ie all visits, and those visits with positive <code>transactionRevenue</code>). I also added average pageviews per visit and average transaction revenue (ie sum of <code>transactionRevenue</code> over count of transactions).</p> <p>I <strong>mean-encoded</strong> a few of the more useful categorical variables, but <em>not</em> using target encoding. The target was too sparse for this, so I encoded the historical conversion rate instead (total transactions / total visits for the specific value of the categorical).</p> <p>During the competition, I thought I had found an important <strong>data leak</strong>. In the v2 train data, the organisers had removed the <code>source</code> column for a large number of visits. Comparing this to the v1 data, I could see these were consistently identifying internal websites for Google employees. I reasoned that Google employees were much more likely to transact, which is why they had redacted them in this way. So I added the redacted <code>source</code> back to the data set as a feature, for these visits, as well as an indicator. In practice, they did not improve the model's performance once I included all my time features, described above, so I removed them.</p> <p>To speed up my development cycle, I <strong>rebalanced</strong> the “classes”. I took the whole history and removed 90% of the non-transacting visitors. For the final submission I used the full set, of course. I say "classes" as I treated the comp as a regression, not a classification as others have mentioned. My reasoning was the positive cases were very sparse, so building two models was pushing it!</p> <p>I used <strong>Catboost</strong> as it outperformed LightGBM, despite the fact that I had mean-encoded all the categoricals before handing the data to the model (Catboost specialises in mean-encoding categoricals, amongst other things). I didn’t try Xgboost. I didn’t ensemble or stack — I haven’t had time to learn this skill yet!</p> <p>I also totally failed to pick 2 submissions for the private leaderboard! I thought I had to do that after stage 1 finished, but I now realise it’s something you need to do before, to avoid gaming. 
Anyway, luckily I’d only ever made 3 submissions, and the second highest scoring one (on the stage 1 leaderboard) is the one I wanted to submit, and it will go through automatically. As <strong>Mr Miyagi</strong> would say, <a href="https://www.youtube.com/watch?v=J1gAHil89Z4">“You beginner luck!”</a></p>
Google Analytics Customer Revenue Prediction
36th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Just show me the code!</h1> <ol> <li>Initial data analysis: <a href="https://www.kaggle.com/returnofsputnik/data-analysis-using-r">https://www.kaggle.com/returnofsputnik/data-analysis-using-r</a></li> <li>Model: <a href="https://www.kaggle.com/returnofsputnik/gstore-r-only-test-data">https://www.kaggle.com/returnofsputnik/gstore-r-only-test-data</a></li> </ol> <h1>Abusing the leak before the reset</h1> <p>The first thing I did when I saw the Google Analytics Demo Account was posted online was to try to familiarize myself with the environment. I was searching around trying to understand it, and then I clicked the User Explorer tab. I found out that the Client ID for each user actually had the visitID of the first session of the visitor. And I found that if I click on this user, I can map each visit to how much he spent based on the number of pageviews he made in that session. So, I simply exported all of this information, and was changing my submission based on what this user export said! This is how I was able to skyrocket to #1 on LB before the reset, and it spawned the "Insight or Dodgy?" discussion that led to this competition's reset.</p> <h1>Construction of solution</h1> <h2>Validation Set</h2> <p>First we must realize that the test set was going to be graded on December 1 2018 to January 31 2019. Therefore I reserved December 1 2017 to January 31 2018 as my validation dataset, instead of using the standard May 1 2018 to October 15 2018 that was issued as the "test set". I did this because I thought this period was holiday/Christmas period, and therefore I thought the behavior of the customers during this period would be significantly different from the behavior of the customers during other periods.</p> <h2>Data Preprocessing</h2> <p>First, I convert all columns with 2 unique values into binary. I then impute some NAs based on what they should logically be. I updated some values, that for anyone who "bounced" from the website, their sessionQuality was set to 1 (the lowest possible) and their timeOnSite was set to 1 (the lowest possible).</p> <p>For everyone who didn't bounce, and there was no logical imputation for NA, I imputed with the median. I created new binary columns representing if the column was originally NA or not.</p> <h2>Feature Engineering</h2> <p>I did a lot of feature engineering in this competition:</p> <ol> <li>Time until final visit</li> <li>Time of last visit (max visit)</li> <li>Time of first visit (min visit)</li> <li>Difference between time of last and first visits</li> <li>Difference between current visit and first/last visits</li> <li>Ratio of 5 and 4</li> <li>Do the same for visitId (I do this because visitId doesn't always go up by 1 for each visit, so maybe a gap in visitId signified something)</li> <li>Take log1p(abs(x)) of these feature engineered to homogenize values</li> <li>Take the length of the gclId </li> <li>Obtain the last 3 characters of the gclId (this can be a new category)</li> <li>Consolidate browsers into fewer categories based on research. 
Include a new category called "nonhuman" for spiders and scrapers</li> <li>Consolidate operatingSystem into fewer categories</li> <li>Get the timezones for each country, being more specific when countries have multiple timezones by looking into the city</li> <li>Obtain the "local" visitStartTime by converting the UTC visit start time into the local time</li> <li>Obtain hour, day of year, weekday, week, month, and year values from the local visit time.</li> <li>Append GDP data to all countries</li> <li>Append binary column to say if English is the country's official language</li> <li>Using Bing Maps API I was able to get the longitude and latitude of every city. Append the longitude and latitude as features</li> <li>Compute Haversine distances from San Francisco given the coordinates</li> <li>Compute Bearing distances from San Francisco given the coordinates</li> <li>Consolidate adwordsClickInfoSlot</li> <li>Replace unseen countries with similar countries (e.g. Samoa replaced with American Samoa, Micronesia replaced with Vanuatu, etc.)</li> <li>Consolidate trafficSource links (e.g. anything that has "amazon" in it, just let it be "amazon.com"; anything that has "kik" in it, let it be "kik.com". etc.)</li> <li>Binary column if trafficSource has the word "google" in it</li> <li>Binary column if trafficSource has the word ".com" in it</li> <li>Binary column if referralPath has the word "google" in it</li> <li>Binary column if the referralPath has the word "youtube" in it</li> <li>Binary column if the referralPath has the word "intl" in it</li> <li>Binary column if the referralPath has the word "html" in it</li> </ol> <h2>Aggregation to user-based level</h2> <p>Recall this is at a visit-based level, but we need to make user-based predictions. Therefore I had to aggregate all these features by taking maxes, mins, sums, means, standard deviations, heads, tails and modes (a short groupby sketch is given at the end of this write-up). The aggregations I chose were based on what I thought would be important. I also made some aggregations based on time, e.g. I took the mean over the customer's lifetime and the mean over the last 6 months, because I saw in the previous Home Credit competition that features like these can improve the model.</p> <p>I took logs of some aggregated features if I found it made their distribution better. This was determined using kernel density plots.</p> <h2>Validation Modeling</h2> <p>There were two parts to my solution:</p> <ol> <li>A classification model to predict if the buyer will make a purchase in the future</li> <li>A regression model that predicts how much they will spend in the future</li> </ol> <p>First, I tuned my parameters using the validation set. I predicted 0 or 1 for whether a purchase was made or not. Performance was looking good.</p> <h2>Prediction Modeling</h2> <p>Once I was satisfied with my validation, I retrained the model on the entire dataset using 4-fold StratifiedKFold. This is one of my errors--I should have used a time-based split instead.</p> <p>I generated binary OOFs to feed to the regression, and I achieved an AUC of 0.95255 and Log Loss of 0.0020877.</p> <p>I added these predictions as a feature to a LightGBM model fitted to minimize MSE. This got my mean absolute error to 0.01147 and my root mean square error to 0.320785.</p> <p>Run this procedure 4 times using 4 different seeds and average the results. 
Then you obtain my submission!</p> <p>Things I tried that didn't work:</p> <ol> <li>Tons of ratios between the aggregate features, based on the 6 month aggregate versus the lifetime aggregate.</li> <li>I had this idea in my head that perfecting the CV on my validation set wasn't a good idea, because I would be "overfitting" the validation set instead of allowing it to generalize to a test set. I should have looked into observing the difference between my training losses and validation losses to see how well the models were generalizing, and based on this make my decision.</li> </ol> <p>Things that I know to do for next time:</p> <ol> <li>Obviously, the assumption of seasonality for the test period was false based on EDA. I should have explored this before making the assumption, and if I did, I would have made a bunch of windows to train on like many other top competitors did. This would have added model strength</li> <li>I should have tried multiplying the probability of a purchase by the regression, instead of simply feeding that as a feature to LightGBM. In fact, the #1 submission in this competition and the #1 in the recently ended Elo Merchant competition both used this technique of multiplying the regression by probability.</li> <li>I should have looked into reducing the dimensionality of my featureset. I doubt every feature was useful, and because there were so many features I think this harmed the ability of LightGBM to perform well.</li> </ol>
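<p>A minimal sketch of the visit-to-user aggregation described above, with a pandas groupby; the columns and the handful of aggregates shown are only a small illustrative subset of what the write-up lists.</p> <pre><code># Aggregating visit-level rows to one row per user (max/mean/sum/std plus
# first/last), roughly as described above; synthetic columns for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
visits = pd.DataFrame({
    "fullVisitorId": rng.integers(0, 1000, 10_000),
    "visitStartTime": rng.integers(1_470_000_000, 1_540_000_000, 10_000),
    "pageviews": rng.poisson(4, 10_000),
    "transactionRevenue": np.where(rng.random(10_000) &lt; 0.01, rng.gamma(2, 5e7, 10_000), 0.0),
})

agg = visits.sort_values("visitStartTime").groupby("fullVisitorId").agg(
    n_visits=("visitStartTime", "size"),
    first_visit=("visitStartTime", "first"),
    last_visit=("visitStartTime", "last"),
    pv_mean=("pageviews", "mean"),
    pv_max=("pageviews", "max"),
    rev_sum=("transactionRevenue", "sum"),
    rev_std=("transactionRevenue", "std"),
)
agg["lifetime"] = agg.last_visit - agg.first_visit   # seconds between first and last visit
print(agg.head())
</code></pre>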
Google Analytics Customer Revenue Prediction
81st Place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>LightGBM - Regression</p> <p>Training target: only the second-time purchase amount is used as the target. First-time visitors with a purchase are also given a zero target if they didn't follow up with another purchase.</p> <p>Data is at the visitor level, with the existing features aggregated (sum or mean, depending on the feature). New features added (see the small sketch below):<br> 1) Has any one of the previous visits resulted in a purchase<br> 2) Purchase amount of the previous visit</p>
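<p>A minimal sketch, on synthetic data, of building the "previous purchase" features and the second-purchase target described above; the column names and exact definitions are assumptions for illustration.</p> <pre><code># Per visitor: target = amount of the second purchase (zero if it never happens);
# features include "had a previous purchase" and the first purchase amount.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
visits = pd.DataFrame({
    "fullVisitorId": rng.integers(0, 200, 5_000),
    "visitNumber": rng.integers(1, 10, 5_000),
    "revenue": np.where(rng.random(5_000) &lt; 0.02, rng.gamma(2, 5e7, 5_000), 0.0),
}).sort_values(["fullVisitorId", "visitNumber"])

purchases = visits[visits.revenue &gt; 0].copy()
purchases["purchase_idx"] = purchases.groupby("fullVisitorId").cumcount()
second_purchase = purchases[purchases.purchase_idx == 1].set_index("fullVisitorId").revenue
first_purchase = purchases[purchases.purchase_idx == 0].set_index("fullVisitorId").revenue

per_user = visits.groupby("fullVisitorId").agg(n_visits=("visitNumber", "size"),
                                               total_revenue=("revenue", "sum"))
per_user["prev_purchase_amount"] = first_purchase.reindex(per_user.index).fillna(0.0)
per_user["had_purchase"] = (per_user.prev_purchase_amount &gt; 0).astype(int)
per_user["target"] = second_purchase.reindex(per_user.index).fillna(0.0)
print(per_user.head())
</code></pre>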
Google Analytics Customer Revenue Prediction
Late Submission Score 0.884 ... And Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for providing the option of late submission. I should have trusted my model and local CV instead of my intuition. In my model, I split the data into 6 groups: 5 training sets and 1 validation set. In each training set, the features are 6 months of aggregates for each id and the target is the next 3 months' sum of revenue. The validation set is created by the same method, using 2018.02.01-2018.07.31 data to aggregate features and the sum of revenue from 2018.8.1 - 2018.10.15 as the target. It's difficult for me to explain my solution without code because of my limited English ability ... The most important features are the (aggregated) totals.hits/totals.timeOnSite/totals.pageViews and <strong>heats</strong>, a feature I made myself as a linear combination of the number of hits and the revenues of each month in the training set (made with a pivot table). What about yours?</p>
Google Analytics Customer Revenue Prediction
My solution - please learn from my mistakes
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Kagglers, I know this competition was problematic in many ways, but I enjoyed it, especially after the reboot. Thank you to Kaggle, Google and R Studio for setting this up.</p> <p>I went all in for this competition because, as an R user, I wanted to be eligible for the R Prize.</p> <p>I did scrape the GA demo data, and managed to match 1800K~ ClientID\fullVisitorIDs that continued to visit and purchase during October and November.</p> <p>This made my goal to predict the target without the 2-month gap.</p> <hr> <p><em>So what happened:</em> - I wanted the most updated data from my scraping, so I only started integrating this data on the last day.</p> <ul> <li><p>When I ran my entire pipeline, I noticed that one of the main features had been calculated wrong the whole time, since the start of the competition. So I changed it at the last moment to a correct calculation, and after waiting for the entire pipeline to run, I saw that it absolutely wrecked my CV score.</p></li> <li><p>It was already too late to run the pipeline again, so I simply submitted it as is.</p></li> <li><p>Then I had an issue of missing the deadline by 5 minutes because of the difference between winter and summer time, having been used to submitting by 3am in all past competitions. I sent Kaggle my submissions right away, explaining my issue, and they were awesome and let me know on the last day before the scoring that they had managed to use my submissions. Thank you @PhilCullinton!!!</p></li> <li><p>I'm pretty sure that if I had submitted my untouched model I would have at least gotten a gold and been eligible for one of the top 5 R Prizes.</p></li> </ul> <hr> <p><em>What I learned:</em> - Always save your features aside so you can load them faster instead of having to calculate them every time.</p> <ul> <li><p>DON'T make any changes on the last day, unless it's something whose effect on CV you can immediately see.</p></li> <li><p>If you're waiting on some kind of data to integrate, whether it be scraped data or models from a teammate, ask for a sample of the data so you can prepare integration code that runs successfully ahead of time.</p></li> </ul> <hr> <p><em>Now for my approach - which I wrote when the competition was done a few months ago; I can't remember much of it now:</em></p> <ul> <li><p>My training data consisted of 15 sets of data, each from the first date we have until a specific month, with a 45-day gap that IS also used in training (because I scraped the GA data and knew I would have it), and the target was the 2 months following the gap. The visitorIDs I used for each set were only those that visited during the gap. 
My validation was a simple 5-fold split.</p></li> <li><p>I scraped the GA demo data and managed to match 1800K~ ClientID\fullVisitorIDs that continued to visit and purchase during October and November.</p></li> <li><p>I used features such as: days from the first\last visit\purchase until the target period, how many days the user was active, mean days between visits\purchases, how many unique months had purchases, month as a category, and some more.</p></li> <li><p>My model pipeline included LGBM and XGB as level-1 predictors, next an NN that used most features plus the LGBM and XGB OOF predictions, and last a linear regression with only these 3 models' OOF predictions (a small stacking sketch is given below).</p></li> </ul> <p>Thanks for reading!!</p>
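<p>A minimal sketch of the OOF stacking pipeline described above, with LightGBM and XGBoost as level-1 models and a linear regression on their out-of-fold predictions; the NN stage and the real features are omitted, and the data is synthetic.</p> <pre><code># OOF stacking sketch: level-1 LGBM + XGB, then a linear regression trained on
# their out-of-fold predictions (the NN stage from the write-up is omitted).
import numpy as np
import lightgbm as lgb
import xgboost as xgb
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 20))
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.5, 5_000)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
oof_lgb = cross_val_predict(lgb.LGBMRegressor(n_estimators=200), X, y, cv=cv)
oof_xgb = cross_val_predict(xgb.XGBRegressor(n_estimators=200), X, y, cv=cv)

level2 = LinearRegression().fit(np.column_stack([oof_lgb, oof_xgb]), y)
print(level2.coef_, level2.intercept_)
</code></pre>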
Google Analytics Customer Revenue Prediction
No boosting solution (#88 place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone! I would like to share our solution with the use of only target variable. I hope ideas provided below will be interesting for you.</p> <p><strong>MODEL I</strong></p> <p>The 1st model is based on time series forecasting.</p> <ol> <li>Firstly, we select all users with only positive revenue history. Here we assume that users who didn't pay will not pay in the future. </li> <li>To predict the sum of revenue from these users we use common pipeline for time series forecasting: <ul><li>outlier rejection </li> <li>exponential smoothing </li> <li>Box Cox transformation </li> <li>forecasting algorithm (we used FBProphet algorithm) </li> <li>inverse transformation</li></ul></li> <li>The next step is to distribute this value among all users. Here we simply guess that payment distribution in the future will be the same as in the train period.</li> </ol> <p><strong>MODEL II</strong></p> <p>The 2nd model operates on probabilities. For each user we determine a transition from one state to another depending on whether there was a transaction or not.</p> <ol> <li>Using parameters <code>[T0,T1,T2,T3]</code>, where <code>T0</code> and <code>T1</code> are start time and end time of train window and <code>T2</code> and <code>T3</code> are start time and end time of test window, we train a transition matrix <code>P(i,j)</code> from state <code>i</code> to state <code>j</code>, where <code>I = {0,1,2}</code> and <code>J = {1,2}</code>. We take into account transitions as follows: <code>(0,1) -&gt; [-] trans before T0, [-] trans in [T0,T1] and [-] trans in [T2,T3]</code> <code>(0,2) -&gt; [-] trans before T0, [-] trans in [T0,T1] and [+] trans in [T2,T3]</code> <code>(1,1) -&gt; [+] trans before T0, [-] trans in [T0,T1] and [-] trans in [T2,T3]</code> <code>(1,2) -&gt; [+] trans before T0, [-] trans in [T0,T1] and [+] trans in [T2,T3]</code> <code>(2,1) -&gt; [+] trans before T0, [+] trans in [T0,T1] and [-] trans in [T2,T3]</code> <code>(2,2) -&gt; [+] trans before T0, [+] trans in [T0,T1] and [+] trans in [T2,T3]</code> where <code>[-]</code> means there was no transaction and <code>[+]</code> means there was at least one transaction. Of course, sum <code>P(i,j) == 1</code> over all <code>i</code> and <code>j</code>. 
At the same time we build matrix <code>C(i,j)</code> of average cost per user being in position <code>(i,j)</code>.</li> <li>We train model for 2016 year (<code>T0 = 2016.07.31</code>, <code>T3 = 2017.01.31</code>) and for 2017 year (<code>T0 = 2017.07.31</code>, <code>T3 = 2018.01.31</code>) independently and forecast target for each user as follows: <code>forecast_2016(user|state=i) = P(i,2) / (P(i,1) + P(i,2)) * C(i,2)</code>, where matrix <code>P</code> and <code>C</code> estimated on 2016 year, and state (0, 1 or 2) estimated on <code>test_v2</code> history <code>forecast_2017(user|state=i) = P(i,2) / (P(i,1) + P(i,2)) * C(i,2)</code>, where matrix <code>P</code> and <code>C</code> estimated on 2017 year, and state (0, 1 or 2) estimated on <code>test_v2</code> history</li> <li>We make average predictions with different weights <code>forecast = 0.3 * forecast_2016 + 0.7 * forecast_2017</code></li> </ol> <p>Final model makes simple average of Model 1 and Model 2. If you have any no-boost solution, please share it if you want. </p>
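<p>A minimal pandas sketch of Model II's expected-value forecast, assuming a hypothetical per-user dataframe with transaction flags for each window; the state and outcome definitions follow the list above, and the toy numbers are placeholders.</p> <pre><code>import pandas as pd

# Hypothetical per-user flags: any transaction before T0, in [T0,T1], and in [T2,T3]
df = pd.DataFrame({
    "trans_before_T0": [0, 1, 1, 0],
    "trans_T0_T1":     [0, 0, 1, 0],
    "trans_T2_T3":     [0, 1, 1, 0],
    "revenue_T2_T3":   [0.0, 50.0, 120.0, 0.0],
})

# State i: 0 = no history, 1 = transacted before T0 only, 2 = transacted in [T0,T1]
df["state"] = 0
df.loc[df.trans_before_T0 == 1, "state"] = 1      # history before T0
df.loc[df.trans_T0_T1 == 1, "state"] = 2          # overrides: transacted in [T0,T1]
# Outcome j: 1 = no transaction in [T2,T3], 2 = at least one transaction
df["outcome"] = 1 + df.trans_T2_T3

# P(i,j): share of users in state i ending in outcome j; C(i,2): their average revenue
P = pd.crosstab(df.state, df.outcome, normalize="index")
C2 = df[df.outcome == 2].groupby("state").revenue_T2_T3.mean()

# forecast(user | state=i) = P(i,2) / (P(i,1) + P(i,2)) * C(i,2)
# (with row-normalised P the ratio reduces to P(i,2))
forecast = df.state.map(P[2] * C2).fillna(0)
</code></pre> <p>The 2016 and 2017 estimates of <code>P</code> and <code>C</code> would then be blended with the 0.3/0.7 weights given above.</p>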
Google Analytics Customer Revenue Prediction
The needle in a haystack (12th place solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you @SanyamButhani. About the R prize, I have a full R solution, but any of the top solutions may have used R, so really no idea... we'll find out in some weeks!</p> <p>Thanks @MichaelP, and congrats to you for your great bronze result :-)</p> <p>Three months after I posted my <a href="https://www.kaggle.com/c/ga-customer-revenue-prediction/discussion/73378">“believes” of this competition</a>, and with quite a decent result, it's time for a quick recap.</p> <p>First, the bad part: the leakage issue in this one was severe and hit when many kagglers had already put a big effort into it. As a result many were discouraged from continuing after the redesign. Another bad consequence was that public leaderboard feedback was rendered meaningless. Last, but not least, a door was left open for getting better data through late web scraping, which meant choosing either to take on all the additional fuss of scraping or to accept a big disadvantage.</p> <p>All that said, it's a pity, because in my opinion all the problems have hidden the interesting parts of a quite difficult competition. I was quite uninterested in the initial format, as the possibility of looking ahead made it useless from a business point of view. </p> <p>But after the reboot… wow. A variant of the CLV business problem with almost no transactions. Almost all noise. But, in theory, the signal was in there. I was interested.</p> <p><strong>1) Data “augmentation”</strong></p> <p>It was necessary to augment the data somehow. As the problem is essentially a time-window-to-time-window prediction, the obvious path was to use the available training data to replicate a good number of time windows. But if the data exhibited -yearly- seasonality it was possibly game over. Luckily (and strangely) some simple plots showed no meaningful yearly seasonality. This was weird, possibly due to some kind of balancing and sampling of the training data. But it was also good news: there was a good possibility of enough “stationarity” of the problem to use multiple time windows.</p> <p>So, a series of overlapping time windows beginning every 15 days was constructed and aggregated into a single dataset. As the <strong>temporal relationship</strong> was what I intended to model, the same user could be used multiple times with multiple target values depending on the window. Each window would include multiple features related to the number of transactions, with or without purchases, time deltas vs. window limits, and all the usual CLV features. That led to a nice dataset with around 10 million observations, big enough to find a needle in a haystack.</p> <p><strong>2) -Almost- no tuning</strong></p> <p>Some initial attempts to model with boosting showed huge instability, which made sense. If you don't have data to spend, the smallest problem is assessing model performance; the big problem is that you cannot even tune hyperparameters. Even if you stick to defaults in boosting, early stopping is still an important hyperparameter that you cannot afford with no data. Risky as it may seem, a zero-tuning model was necessary.
The obvious choice was <strong>random forest</strong>. It was evidently more stable and good enough. A single RF model. (Stacking was also discarded the moment overfitting became more than a possibility and no data was available to tune anything. I don't stack that much anyway.) </p> <p>So, that's it. <strong>Those two big decisions</strong> made me quite confident of doing well without overfitting, provided there was actually signal and not many people used the available scraping option.</p> <p>I hope the interesting parts of this competition will “survive” all the problems that occurred. It is clear that many kagglers were affected, in spite of the efforts made by the Kaggle team, which in my view were quite evident and well intended.</p> <p>At the very least, looking into the future, this will be an interesting and unusual CLV-variant competition to have a look at.</p>
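<p>A minimal pandas sketch of the window construction described above, assuming a hypothetical sessions dataframe with <code>fullVisitorId</code>, <code>date</code> and <code>revenue</code> columns; the window ends, gap and feature set are illustrative placeholders rather than the author's exact choices, and the zero-tuning random forest is shown with plain defaults.</p> <pre><code>import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical raw sessions: one row per session
sessions = pd.DataFrame({
    "fullVisitorId": ["a", "a", "b", "c"],
    "date": pd.to_datetime(["2017-01-03", "2017-05-10", "2017-02-01", "2017-06-20"]),
    "revenue": [0.0, 20.0, 0.0, 5.0],
})

def make_window(sessions, feat_end, gap_days=46, target_days=62):
    """Features up to feat_end, target summed over the window after the gap."""
    target_start = feat_end + pd.Timedelta(days=gap_days)
    target_end = target_start + pd.Timedelta(days=target_days)
    hist = sessions[sessions.date.le(feat_end)]
    fut = sessions[sessions.date.gt(target_start)]
    fut = fut[fut.date.le(target_end)]
    feats = hist.groupby("fullVisitorId").agg(
        n_sessions=("revenue", "size"),
        total_rev=("revenue", "sum"),
        days_since_last=("date", lambda d: (feat_end - d.max()).days),
    )
    feats["target"] = fut.groupby("fullVisitorId").revenue.sum().reindex(feats.index).fillna(0)
    return feats.reset_index()

# Overlapping windows beginning every 15 days, stacked into one training set;
# the same user appears several times with different targets
ends = pd.date_range("2017-03-01", "2017-06-01", freq="15D")
train = pd.concat([make_window(sessions, e) for e in ends], ignore_index=True)

# Zero-tuning model: a single random forest with default hyperparameters
rf = RandomForestRegressor(n_jobs=-1, random_state=0)
rf.fit(train.drop(columns=["fullVisitorId", "target"]), train["target"])
</code></pre>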
Google Analytics Customer Revenue Prediction
Winning solution (link to kernel inside)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Analytics Customer Revenue Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Kagglers,</p> <p>Key points of my solution:</p> <p><strong>Train creation</strong> As we have 168 days (2018/05/01-2018/10/15) of sessions for customers in the test, 62 days (2018/12/01-2019/01/31) of target calculation period and 46 days (16/10/2018-30/11/2018) of gap between above two windows, it’s absolutely clear to construct train data by analogy. I took 4 non-overlapping windows of 168 days, calculated features for users in each period and calculated target for each user on each corresponding 62-day window. Then those 4 dataframes were combined in one train set.</p> <p><strong>Feature engineering</strong> Very simple: - label-encoded categorical features, - some time features in 168-day window such as intervals between user’s first session and window’s start and from user’s last session to window’s end, - some statistical features based on user’s pageviews and hits. Features were calculated only on 168-day window data without using sessions from next 46-day window, because I didn’t use “external” data from demo account for 16/10/2018-30/11/2018 period.</p> <p><strong>Problem as “classification and regression”</strong> Firstly, I predicted probability of returning of customer using all train set. Secondly, I predicted the amount of transactions for those customers who returned. So, in regression task train was filtered only by customers who had a session in 62-day window. As final prediction I multiplied probability of returning by predicted amount for returned customer. Approach of direct prediction by only regression using all train set gives only 0.88320 score (it’ll be the 6th place on LB). So the trick of adding classification component was crucial. </p> <p>You can find the code <a href="https://www.kaggle.com/kostoglot/test-score#L144">here</a>. Note: Score is slightly different from the score on LB (0.88143 instead of 0.88140) due to different environment on my local machine.</p>
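<p>A minimal sketch of the "classification and regression" trick described above, using LightGBM purely for illustration (the write-up does not state the model family here) and placeholder arrays for the window features and targets.</p> <pre><code>import numpy as np
from lightgbm import LGBMClassifier, LGBMRegressor

# Placeholders for one stacked set of 168-day windows
n = 5000
X = np.random.rand(n, 10)                              # per-user window features
returned = np.random.binomial(1, 0.02, size=n)         # had a session in the 62-day window
log_revenue = returned * np.random.exponential(2.0, n) # target, 0 for non-returners

# Step 1: probability that a customer returns in the target window
clf = LGBMClassifier(n_estimators=200).fit(X, returned)
p_return = clf.predict_proba(X)[:, 1]

# Step 2: amount, trained only on customers who actually returned
mask = returned == 1
reg = LGBMRegressor(n_estimators=200).fit(X[mask], log_revenue[mask])
amount_if_returned = reg.predict(X)

# Final prediction: P(return) * predicted amount for a returned customer
pred = p_return * amount_if_returned
</code></pre>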
Google Brain - Ventilator Pressure Prediction
"Single" LSTM model with Magic Features. 0.1093 LB.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks, Chris. Our jump was mostly due to the features which we discovered just three days before the deadline :)</p> <p>Hi <a href="https://www.kaggle.com/KouFu" target="_blank">@KouFu</a> ! It's called kaggle intuition :D</p> <p>But to be serious, these features were born during the analysis of the "shifting" errors of our model, where it caught the pressure form quite well but made a mistake with the bias prediction. In most cases such "problematic" breath ids have very well predicted nearest neighbors (by cosine similarity). So the idea behind our features was to show the model some nearest-neighbor information and deviations from it.</p> <p>Hi! The boost from this trick in comparison with the "right-ordered LSTM" was quite small (0.002 locally and the same on public LB), but blending the "left-ordered" and "right_ordered" LSTMs boosted public LB ~ 0.004</p> <p>Those 7 features were the main ingredient for us. They boosted our score by ~0.017-0.018</p> <p>We used both. Colab Pro+ was used for fast experiments (a light version of our model + just 2 folds). GCP was used for final model training (15 folds).</p> <p>We have shared the code of our best "single" model which achieves 0.1093 on the public leaderboard and 0.1133 on private (after standard postprocessing).</p> <p>The architecture is quite simple. It has just 4 LSTM layers + a Dense Layer + an Output Layer with skip connections. But the features were the key ingredient for us. So if you're missing old-school tabular competitions with GBMs and tons of handcrafted features, you'll find it interesting.</p> <p><strong>Key Points of the solution:</strong></p> <ol> <li>Magic features based on aggregations over R, C, rank and the rounded u_in value</li> <li>Training the model in reversed order (from the 80th timestep to the 1st one) and including in <br> the model features generated over the right order too (for instance, at the 80th timestep the model <br> sees the u_in value of the 80th timestep as well as the value of the 1st timestep)</li> <li>Small tricks such as Quantile Transformation of u_in values for u_out == 0, and considering <br> pressure values for u_out == 0 as negative values. </li> </ol> <p><strong>Link</strong>: <a href="https://www.kaggle.com/l0glikelihood/0-1093-single-public-lb" target="_blank">https://www.kaggle.com/l0glikelihood/0-1093-single-public-lb</a></p> <p>Hi <a href="https://www.kaggle.com/vialactea" target="_blank">@vialactea</a> </p> <p>Thanks for the congratulations. Here are my thoughts about your questions: </p> <ol> <li><p>I do not think that the number of used features is dramatically large. Moreover, I'd like to run more experiments with a larger number of input variables, but we bumped into certain restrictions of the TPU, so our limit was 95 features. But I agree with you that some feature selection procedures could help us to decrease MAE more. For instance we could select features with permutation importance, but we've not done it. </p></li> <li><p>Yep, we measured the effect of the "reversed" LSTM. It was ~0.002 better than the "right_order" LSTM locally, so we decided to trust our CV and select the "reversed" LSTM as our baseline model.
</p></li> </ol> <p>BUT: <br> 1) We did not conduct any tests to prove the statistical significance of the score difference, so it could be just random. <br> 2) I can't find any theoretical explanation of why such a "reversed-order" model should work better, because a standard bidirectional LSTM already catches any "reversed" patterns.<br> 3) The main boost we got from the "reversed" LSTM was the blend of "right_reversed_LSTM" and "left_reversed_LSTM". Such a shallow blend (with equal weights) gave us a boost of ~0.01 locally and ~0.004 on public LB.</p>
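<p>A minimal pandas sketch of the kind of aggregation features described in point 1, i.e. statistics over groups defined by R, C, the per-breath rank of u_in and the rounded u_in value; the exact "magic" definitions live in the linked notebook, so the ones below are only illustrative.</p> <pre><code>import pandas as pd

# One row per (breath_id, time step): columns include R, C, time_step, u_in
df = pd.read_csv("train.csv")   # assumed competition file layout

# Grouping keys: rounded u_in and the rank of u_in inside each breath
df["u_in_round"] = df["u_in"].round(0)
df["u_in_rank"] = df.groupby("breath_id")["u_in"].rank(method="first")

# For each (R, C) setting and each rank position in the breath, how does this
# breath's u_in compare with the typical u_in of other breaths?
grp = df.groupby(["R", "C", "u_in_rank"])["u_in"]
df["u_in_pos_mean"] = grp.transform("mean")
df["u_in_pos_std"] = grp.transform("std")
df["u_in_minus_pos_mean"] = df["u_in"] - df["u_in_pos_mean"]

# Count-style feature: how common is this rounded valve opening for this lung setting?
df["u_in_round_count"] = df.groupby(["R", "C", "u_in_round"])["u_in"].transform("size")
</code></pre>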
Google Brain - Ventilator Pressure Prediction
#16th Place Journey Writeup - No Magik
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I felt a lot of collective love from the Kaggle community. One day soon, I hope to compete with you as opposed to against you =]</p> <blockquote> <p>he is probably guessing the right blend ratio</p> </blockquote> <p>I have a meme for this, but I'll share in the next competition : P</p> <p>I followed the same vein of thinking, especially when I noticed charts like this that show the directionality of the model is on point--just gotta set a good offset. One of the things that made me diverge from this was a belief that if it were real people hooked up to the machine, they would run it like a regular lung, optimizing for patient oxygenation and safety. But since it was generated, I thought they would just generate any random uin pattern they wanted, in which case there would be no legitimate neighbors. It was actually that thought which got me to revisit pseudo-labeling. I guess it is even possible to ourselves generate whatever random uins with our best models, and then train on those until convergence, and then use those are starter models to fine tune against the train set.</p> <p>Although I love technical write-ups, what I have always found more beneficial is understanding how the mind of people work. If I can think like they do, then perhaps in the future I can emulate them. So here, I'll share my journey <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/283794#1562936" target="_blank">as promised</a>, and although it takes more time, I'd like to encourage my fellow Kagglers to consider orienting their write ups like this as well… for my own personal benefit =P.</p> <p><a href="https://www.kaggle.com/gunesevitan" target="_blank">@gunesevitan</a> and I were <a href="https://www.kaggle.com/c/rsna-miccai-brain-tumor-radiogenomic-classification/discussion/279832" target="_blank">grinding on RSNA</a> when my friend <a href="https://www.kaggle.com/adityaecdrid" target="_blank">@adityaecdrid</a> invited me to this competition. Although the problem space felt great (helping people live!) which attracted me, my heart was initially more inclined to the NFL helmet mapping competition because I felt I'd learn more there. Furthermore, I have actually seen permutations of that problem in the wild before while doing data science consulting. In retrospect, looking over the <a href="https://www.kaggle.com/c/nfl-health-and-safety-helmet-assignment/discussion/284975" target="_blank">multiple</a>, <a href="https://www.kaggle.com/c/nfl-health-and-safety-helmet-assignment/discussion/285112" target="_blank">winning</a>, <a href="https://www.kaggle.com/c/nfl-health-and-safety-helmet-assignment/discussion/285076" target="_blank">solutions</a> for that which included batch registration, perspective transformations and custom trained object detection models, it was probably wise I stuck it out here because two weeks would not have been sufficient for me to be competitive in that space. 
Although I do believe I would have progressed in my DS/Engineering understanding…</p> <p>The RSNA segmentation models took a lot of time to train, so while those were cooking, I started exploring this competition's dataset.</p> <p>The first thing I noted was the consistency (hard programming) of u_in in the expiratory phases. Since I had intended on modeling this as a seq2seq problem, it didn't make much sense to me that input A -&gt; disparate variables, so I opted to chop off uout=1 data completely (2/3rd of the data) and only train on the inhale phase, which sped up iteration time considerably. Much later on after viewing some EDA on the forums, I added some engineered expiratory features, such as the modes, etc:</p> <p><img src="https://i.ibb.co/rfLnZHF/ex3.png" alt=""><br> Curtesy of <a href="https://www.kaggle.com/patrick0302" target="_blank">@patrick0302</a></p> <p>After reviewing some public kernels, I settled on a 4-layer BiLSTM network. I played with the architecture a lot, more than I should have. Changing the order of norms, the type of norms, the types of activations, the sizes of nodes, different pooling operations, etc.</p> <pre><code>class CustomModel(nn.Module): def __init__(self, num_inputs): super().__init__() self.pos_embed = nn.Parameter(torch.zeros((1,CFG.CHOP,3))) self.mode_embed = nn.Embedding(2, 2) self.gdrop = GaussianDropout() self.lstm1 = nn.LSTM(num_inputs+3+2, 700, batch_first=True, bidirectional=True) self.lstm2 = nn.LSTM(700 * 2, 512, batch_first=True, bidirectional=True) self.lstm3 = nn.LSTM(512 * 2, 256, batch_first=True, bidirectional=True) self.lstm4 = nn.LSTM(256 * 2, 128, batch_first=True, bidirectional=True) self.pressure = nn.Sequential( nn.LayerNorm(256), nn.SELU(), nn.Linear(256, 256), GeM(p_trainable=True), ScaleLayer(), ) self.delta_pressure = nn.Sequential( nn.LayerNorm(256*2), nn.SELU(), nn.Linear(256*2, 256), nn.AdaptiveAvgPool1d(1), ) self.delta_delta_pressure = nn.Sequential( nn.LayerNorm(512*2), nn.SELU(), nn.Linear(512*2, 256), nn.AdaptiveAvgPool1d(1), ) for n, m in self.named_modules(): if isinstance(m, nn.LSTM): print(f'init {m}') nn.init.xavier_normal_(m.weight_ih_l0) nn.init.orthogonal_(m.weight_hh_l0) nn.init.xavier_normal_(m.weight_ih_l0_reverse) nn.init.orthogonal_(m.weight_hh_l0_reverse) nn.init.xavier_normal_(self.pressure[-3].weight) nn.init.xavier_normal_(self.delta_pressure[-2].weight) nn.init.xavier_normal_(self.delta_delta_pressure[-2].weight) def forward(self, cont_seq_x, emb_x): seq_emb = torch.cat(( cont_seq_x, self.pos_embed.expand(cont_seq_x.shape[0],CFG.CHOP,3), self.mode_embed(emb_x[:,[0]]).expand(cont_seq_x.shape[0],CFG.CHOP,2), ), axis=-1) seq_emb = self.gdrop(seq_emb) seq_emb = self.lstm1(seq_emb)[0] seq_emb = self.gdrop(seq_emb) seq_emb = self.lstm2(seq_emb)[0] delta_delta_pressure = self.delta_delta_pressure(seq_emb).squeeze(-1) seq_emb = self.gdrop(seq_emb) seq_emb = self.lstm3(seq_emb)[0] delta_pressure = self.delta_pressure(seq_emb).squeeze(-1) # ([256, 40]) seq_emb = self.gdrop(seq_emb) seq_emb = self.lstm4(seq_emb)[0] seq_emb = self.gdrop(seq_emb) pressure = self.pressure(seq_emb).squeeze(-1) return pressure, delta_pressure, delta_delta_pressure </code></pre> <p>I noticed was the LSTM was great at predicting p[0]. What if I forced it to focus on that? If we do great with p0, everything else <em>should</em> become an easier problem. Predicting the pressure delta from one timestep to the next should only require local information, whereas predicting absolute pressure requires pretty much the entire sequence. 
Or so I thought. I set up an experiment where the LSTM predicts p0 and then I used a cumsum with pressure delta targets. This actually scored worse. At this point, I let go of the idea of constraining the model to what is physically possible and decided to embrace letting the network examine the entire signal, and tried my hand with transformers.</p> <p>None of the transformers I attempted worked. I then recalled something I read in a paper once about the ViT-style transformers where they note they had to flip the order of layernorms or summat because of how image near-pixels are highly correlated, which doesn't hold true for NLP. So I attempted again w/ a modified vision transformer and the model was finally able to converge. I don't recall what it got down to, but my LSTM models were already mature at that point, so I didn't wish to dedicate more time there.</p> <p>My central idea was to squeeze as much information out of the target as possible. So I also added pressure derivatives as targets, which helped improve the score. This is when I made my entry onto the LB:</p> <p><img src="https://i.ibb.co/FXFf8Ky/exh1.jpg" alt=""></p> <p>Oh, would that the competition ended there!</p> <p>Around then, <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> and <a href="https://www.kaggle.com/pcjimmmy" target="_blank">@pcjimmmy</a> had shared the host's paper so I attempted to take a look. The PID stuff did not make any sense to me, but it got me into the mode of <a href="https://www.kaggle.com/authman/no-magic-hacks-just-fast-feature-engineering" target="_blank">feature engineering</a> and I spent most of my time this past week iterating there, with each new model bringing in incremental benefit:</p> <p><img src="https://i.ibb.co/sgVvxFB/ex2.png" alt=""></p> <p><a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> did his experiments with folds, which got me thinking that the models had spare capacity to learn a lot more. I tried pseudo labeling the submission set. During training, every 10 or so epochs I'd use the currently training model to label the test set, train on that for an epoch, then switch back to regular training again. That did not work, so I dropped the idea.</p> <p>Next, I spent time figuring out how to augment the data. It's unfortunate I went this way (the easy way), because if I had allowed my models to just train longer, perhaps that would have sufficed. Anyhow, what I tried was reversing the signal. By this I mean, I took the pressure differences and time deltas and applied them in reverse; however, I started at the original pressure, p0. I felt this was important because it was something the model was able to learn well and because the hard coding meant the beginning of the breath was different from the end. The problem with this approach was that moving backwards from p0, we end up with breaths that have (large) negative pressure… after all, the end of the breath is usually at maximum pressure before uout turns on. I had been using <a href="https://www.kaggle.com/adityaecdrid" target="_blank">@adityaecdrid</a>'s <a href="https://www.kaggle.com/adityaecdrid/ventilator-pressure-lstm-starter-scaling-layer" target="_blank">ScaleLayer</a>, which I adjusted to account for negatives, but this did not bode well.</p> <p>The next thing I attempted to do was resample the time series. If I could interpolate between actual points, then I could select new time values to sample at that were still congruent with the original time series.
Luckily, we only have uin and ts features to worry about and everything else is generated from them:</p> <pre><code>def aug_data(raw_df): aug_df1 = raw_df.copy() u_in = aug_df1.u_in.values.reshape(-1,80) pressure = aug_df1.pressure.values.reshape(-1,80) time_step = aug_df1.time_step.values.reshape(-1,80) # inject a few positions for i in range(u_in.shape[0]): u_in_interp = interpolate.Akima1DInterpolator(time_step[i], u_in[i]) pressure_interp = interpolate.Akima1DInterpolator(time_step[i], pressure[i]) # inject up to 20 points for _ in range(np.random.randint(1,20)): # First, find a timestep value to inject (from 0 to 1 sec) ts = np.random.random() difference_array = np.absolute(time_step[i]-ts) idx = difference_array.argmin() if idx == 0: continue # Interpolate the associated pressure and u_in values time_step[i, idx] = ts u_in[i, idx] = u_in_interp(ts) pressure[i, idx] = pressure_interp(ts) aug_df1.u_in = u_in.flatten() aug_df1.pressure = pressure.flatten() aug_df1.time_step = time_step.flatten() return aug_df1 </code></pre> <p>I would generate x12 the train data this way and train on that. Once the LB got crazy and people started discovering various magics, I thought it either had to do with reverse engineering the data generation process, or some sort of post-processing that would be able to correct noisy predictions by the model by <a href="https://www.kaggle.com/authman/magic-post-processing-attempt-no-3of6" target="_blank">snapping them</a> into place:</p> <p><img src="https://i.ibb.co/Q942M9m/ex4.png" alt=""></p> <p>I spent two days here but it was a dead end, and the scores would not improve. Someone on the forums mentioned pseudo labeling so I wanted to try this again. This time, I used my best ensemble's predictions on sub set and trained against that, validating on the entire train set until convergence. Then switched over to training on the original train set. This resulted in my best 6-fold single model:</p> <p>all ids [ 1 2 3 … 125743 125745 125749]<br> 0.12653086140580797 exp059_gpu2/new_fold0_best_loss.pth<br> 0.12683538314520465 exp059_gpu2/new_fold1_best_loss.pth<br> 0.1258363151279602 exp059_gpu2/new_fold2_best_loss.pth<br> 0.12765633025724019 exp059_gpu2/new_fold3_best_loss.pth<br> 0.12740142103686664 exp059_gpu2/new_fold4_best_loss.pth<br> 0.1270622852723579 exp059_gpu2/new_fold5_best_loss.pth<br> 0.12688710817925428 Model Total</p> <p>I also experimented increasing the number of folds. These 16-fold models took ~14 hours to train at 350 epochs, and they were usually 0.003 better than their 6-fold cousins.</p> <p>The last experiment I ran was TTA. I discovered that by adding a small amplitude triangle wave pattern to u_in, the resulting pressure did not change much. And by inverting this wave pattern and then taking the mean of the two resulting predictions, I would usually get a better score than even the prediction on the original data by a few clicks:</p> <pre><code># PLUS uin_aug = np.zeros(80) for i in range(1,33): uin_aug[i] = 0.00001 * (1 if i%2==0 else -1) * -1 # -1 and +1 test_tta = test.copy() uins = test_tta.u_in.values.reshape(-1,80).copy() uins = uins + (uins!=0) * uin_aug[None,:] test_tta.u_in = uins.flatten() all_preds_tta_plus = ... # MINUS uin_aug = np.zeros(80) for i in range(1,33): uin_aug[i] = 0.00001 * (1 if i%2==0 else -1) * 1 # -1 and +1 test_tta = test.copy() uins = test_tta.u_in.values.reshape(-1,80).copy() uins = uins + (uins!=0) * uin_aug[None,:] test_tta.u_in = uins.flatten() all_preds_tta_minus = ... ... 
better_preds = (all_preds_tta_minus+all_preds_tta_plus)/2 </code></pre> <p>For stacking, I experimented with mean, meandian, ridge, lasso, svr, and lstm. LSTM train+val scores looked great but got 14LB so I figured I probably botched some indexing somewhere. I've already spoken at length about my <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/282735#1565466" target="_blank">confusion</a> about mean and median, so I won't repeat that here. In the end, using ridge was the best for me. I used <a href="https://www.kaggle.com/ambrosm" target="_blank">@ambrosm</a> "Meandian" method but instead of reducing low-spread samples predictions with their means, I replaced with a very regularized ridge:</p> <p><img src="https://i.ibb.co/3pVjK5D/ridge.png" alt=""></p> <p>I brute forced alpha which was 33k (highest I've seen), and for input, I take the <em>mean</em> per single model's fold. This is because OOF are roughly equal sized, so the contribution should be equal. For the median portion where spread was great, I do not use the mean per model, rather I take the single median of the complete set of each model's folds.</p>
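<p>A minimal numpy/scikit-learn sketch of the blending rule described above: where the per-model predictions agree (small spread), a heavily regularised ridge fit on out-of-fold predictions replaces the simple mean; where they disagree, the plain median of all predictions is used. The toy data, threshold and alpha below are placeholders, not the tuned values.</p> <pre><code>import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
y_oof = rng.normal(15, 5, 10000)                       # true OOF pressures (placeholder)
oof_preds = y_oof + rng.normal(0, 0.3, (3, 10000))     # 3 models, each already averaged over its folds
test_preds = rng.normal(15, 5, (3, 20000))             # same 3 models on the test set

# Very strong regularisation, brute-forced on OOF in the write-up (alpha around 33k)
ridge = Ridge(alpha=33000.0).fit(oof_preds.T, y_oof)

spread = test_preds.max(axis=0) - test_preds.min(axis=0)
threshold = 0.5                                        # placeholder; chosen on OOF in practice
blend = np.where(
    spread &lt; threshold,
    ridge.predict(test_preds.T),                       # models agree: learned linear blend
    np.median(test_preds, axis=0),                     # models disagree: fall back to the median
)
</code></pre>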
Google Brain - Ventilator Pressure Prediction
#2 Solution: The inverse of a PID controller
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In 85 % of the breaths, the ventilator was controlled by a PI controller. For the other 15 % of the breaths, reverse engineering was not possible.</p> <p>Even in these 85 % of the breaths, the PI controller didn't always directly control all 32 timesteps. The bullets in the diagrams above show where I could compute the inverse of the PI controller. Where there is no bullet, I couldn't compute it.</p> <p>I haven't tried to use only the PI function: I need the nn prediction for the other 15 % and for all timesteps in the diagrams which don't have a bullet.</p> <p>"What a strange dataset!" This is what I thought when I saw the first attempts at predicting pressures: Why do lag_back features - which are future valve settings - help to predict a pressure? Why do people use bidirectional LSTMs rather than unidirectional ones? Why does the last u_in help to predict the first pressure, which was measured three seconds before, if u_in is an independent variable?</p> <p>Before we start number crunching, we need a consistent qualitative model of the system which respects causality:</p> <ol> <li>In the physical system, the pressure is a function of past valve settings: <code>p[i] = f(u_in[:i])</code></li> <li>u_in is not an independent variable. u_in is the output of a digital controller, and the inputs of the controller are the past measured pressures: <code>u_in[i] = g(p[:i+1])</code></li> <li>According to <a href="https://arxiv.org/pdf/2102.06779.pdf" target="_blank">the paper</a>, the experimenters used a PID controller with various parameter settings and neural networks to control u_in, and they introduced random exploratory perturbations.</li> <li>The experiments were run in batches, and the initial pressure p[0] of a breath is determined by the valve settings of the previous breath. Although the breaths have been shuffled in the dataset, a neural network learns to approximately identify the batch from the u_in values and thus can predict p[0].</li> </ol> <p>Looking at this qualitative model, I realized that I needed the inverse function of a PID controller to predict the pressures based on u_in, i.e. a function which gets a u_in as input and determines what pressure the controller has seen to output this u_in.</p> <p>I'll start by explaining the easier case of finding the inverse of a P controller and then go on to the PI controller.</p> <h2>The inverse of a P controller</h2> <p>With a P controller, the user selects a target pressure (<code>p_star</code>), and then the controller repeatedly sets the valve to the difference between the measured pressure and the target pressure, multiplied by a factor <code>p_coef</code>. 
The lower the measured pressure is, the more the controller opens the valve:</p> <pre><code>def p_controller(p_coef, p_star): for i in range(80): u_in[i] = clip(p_coef * (p_star - p[i]), 0, 100) </code></pre> <p>We recognize the diagram of a P controller at u_in being a scaled mirror image of p:</p> <p><img src="https://i.imgur.com/g1rOijj.png" alt="634"><br> <img src="https://i.imgur.com/4HRCGsZ.png" alt="1300"><br> <img src="https://i.imgur.com/c5qQXLr.png" alt="3193"></p> <p>For the training data, where both u_in and pressure are known, we can determine p_coef and p_star by linear regression. If the linear regression fits well, we have found a P controller; if it doesn't fit, the experiment was run with some other controller. Running the linear regression over all breaths of the training data, we find the values which were used for p_coef and p_star:</p> <ul> <li>p_coef is an element of [0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (p_coef may be 0, but this case is useless).</li> <li>p_star is an element of [10, 15, 20, 25, 30, 35].</li> </ul> <p>What can we do about the test data, where only u_in is known and three variables are unknown (p_coef, p_star and pressure)? The key observations here are that the pressure sensor always gives one of 950 discrete pressure measurements and the P controller is a digital computer which computes (almost) exactly. We thus find p_coef and p_star with the following algorithm:</p> <pre><code>for all 20 possible values of p_coef: for all 6 possible values of p_star: p[i] = p_star - u_in[i] / p_coef if p[i] is one of the 950 discrete values (up to a very small tolerance): return p_coef, p_star </code></pre> <p>As soon as we know the two parameters, we use them to compute the pressures for the whole breath.</p> <p>For some parameter settings and pressures, the controller computes a u_in which is not in the admissible range from 0 to 100 and clips it. For these clipped u_in values, we cannot compute the inverse function:</p> <p><img src="https://i.imgur.com/WzfnlCA.png" alt="9847"><br> <img src="https://i.imgur.com/pWB6fEy.png" alt="13828"></p> <p>Having understood the P controller, we are ready to tackle the PI controller.</p> <h2>The inverse of a PI controller</h2> <p>Inverting the PI controller is more complex because it has two more parameters (i_coef and T) and because it is stateful. It maintains a running average q of pressure differences which decays with a time constant T, and it sets u_in to a linear combination of the current pressure difference and the running average:</p> <pre><code>def pi_controller(p_coef, i_coef, p_star, T): q = q0 # some unkown initial state for the running average u_in[0] = p_coef * (p_star - p_0) + i_coef * q for i in range(1, 80): dt = time_step[i] - time_step[i-1] s = dt / (dt + T) q = (1-s) * q + s * (p_star - p[i]) u_in[i] = p_coef * (p_star - p[i]) + i_coef * q </code></pre> <p>let's look at two diagrams:</p> <p><img src="https://i.imgur.com/KTUkJvL.png" alt="15090"><br> <img src="https://i.imgur.com/2aBIwxC.png" alt="15101"></p> <p>Again, I first determined the possible parameter values from the training data. p_coef, i_coef, p_star and the initial q can be determined by linear regression, but T can't. I tuned T manually until the linear regression for the other parameters gave the best fit. 
With T = 0.5, it fits perfectly.</p> <p>For the test data, the algorithm is:</p> <pre><code>for all 950 possible values of p[0]: for all 20 possible values of p_coef: for all 20 possible values of i_coef: for all 6 possible values of p_star: q0 = (u[0] + p_coef * (p[0] - p_star)) / i_coef determine p[1] and p[2] if both p[1] and p[2] are among the 950 discrete values: return p_coef, i_coef, p_star, q </code></pre> <p>It's time to think about optimizing for execution speed: With the four nested loops, the algorithm is terribly slow. Three optimizations reduce the execution time to an acceptable level (I still had to run it locally because it takes more than nine hours):</p> <ol> <li>The 950 values for p[0] have to be searched in order of decreasing frequency: The initial pressure usually is around 6, so we start the search there.</li> <li>The innermost loop can be vectorized by putting all p_star values into a numpy array and using matrix operations.</li> <li>Because the breaths are independent, we can parallelize the execution by distributing the breaths to different CPUs.</li> </ol> <p>For some breaths, u_in was perturbed deliberately. We easily recognize this pattern because the computed pressure for these timesteps is not in the 950 possible values, and then we skip these timesteps:</p> <p><img src="https://i.imgur.com/zKsz2R4.png" alt="15093"><br> <img src="https://i.imgur.com/vNbzMM1.png" alt="15104"></p> <p>Do we need to look at PID controllers with a D element? The paper doesn't mention them and the PI controllers cover 85 % of the data - I didn't investigate further.</p> <h2>Putting it together</h2> <p>My final submission was created in three steps:</p> <ol> <li>At the start are the seven models of <a href="https://www.kaggle.com/dlaststark/gb-vpp-pulp-fiction" target="_blank">https://www.kaggle.com/dlaststark/gb-vpp-pulp-fiction</a>. Thank you, <a href="https://www.kaggle.com/dlaststark" target="_blank">@dlaststark</a>, for sharing the notebook and the models - I doubt that I could have invented this architecture myself.</li> <li>In the second step I ensembled the predictions of the seven models using the <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/282735" target="_blank">better_than_median()</a> function.</li> <li>The third step tries to determine the generating PID controller for every breath and replaces the nn predictions by the output of the inverse PID controller wherever possible. This notebook takes more than nine hours to finish and must be run locally rather than on the Kaggle infrastructure (it needs neither TPU nor GPU).</li> </ol> <p>Hi Gilles, I can't easily say how much of the data I matched, but I can tell that I updated 915437 of 4024000 values. Of course this figure depends on the quality of the nn predictions I began with.</p>
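<p>A minimal numpy sketch of the P-controller inversion described above, for a single breath: it brute-forces the discrete candidate grids for p_coef and p_star and accepts a candidate only if every implied pressure (at unclipped timesteps) lands on one of the 950 discrete sensor values. The function name and tolerance are placeholders.</p> <pre><code>import numpy as np

def invert_p_controller(u_in, discrete_pressures, tol=1e-9):
    """Try to explain one breath's u_in as a P controller and recover its pressures."""
    p_coefs = [0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9,
               1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    p_stars = [10, 15, 20, 25, 30, 35]
    # timesteps where u_in was clipped to 0 or 100 carry no information
    valid = (u_in &gt; 0) &amp; (u_in &lt; 100)
    for p_coef in p_coefs:
        for p_star in p_stars:
            p = p_star - u_in / p_coef        # inverse of u_in = p_coef * (p_star - p)
            # distance to the nearest known discrete pressure value
            idx = np.clip(np.searchsorted(discrete_pressures, p), 1, len(discrete_pressures) - 1)
            dist = np.minimum(np.abs(discrete_pressures[idx] - p),
                              np.abs(discrete_pressures[idx - 1] - p))
            if valid.any() and np.all(dist[valid] &lt; tol):
                return p_coef, p_star, p       # P controller identified
    return None                                # not a (pure, unclipped) P controller
</code></pre> <p>Here <code>discrete_pressures</code> would be the sorted array of unique pressure values seen in train; the PI case adds two more nested loops (i_coef and the initial integral state) on top of this, as in the pseudocode above.</p>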
Google Brain - Ventilator Pressure Prediction
11th Place Writeup - Transformers and Pseudo Label
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a> ! Excited to become a master; I learned from you when we teamed up before.</p> <p>Yes, that was sad. I even shook down from solo gold to nowhere</p> <p>Apart from the disentangled self attention, each of the hybrid layers is just a stack of standard PyTorch modules. E.g. Conv1d -&gt; transformer -&gt; LSTM</p> <p>Thanks for reading! My model architecture is a transformer borrowing DeBERTa's disentangled self attention. You may check the implementation details here: <a href="https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/disentangled_attention.py" target="_blank">https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/disentangled_attention.py</a></p> <p>Thanks! Also read your team's solution. Great to know that we both used iterative pseudo labeling. And awesome data augmentation by mixup, learned a lot!</p> <p>Thanks! Learned from you when we teamed up before!</p> <p>Thanks man!</p> <p>Thanks to Kaggle and the Google Brain team for organizing this interesting competition. Congrats to all the top teams, and I appreciate everyone's hard work. Hopefully all of us learned a lot and had fun during the competition.</p> <p>At the beginning, I was thinking of going solo in this competition. However, the top teams are so strong that I finally merged with my teammate <a href="https://www.kaggle.com/wimwim" target="_blank">@wimwim</a> to keep surviving in the gold zone. I'm really thankful for his brilliant contributions to achieving our final standing.</p> <h1>Summary</h1> <p>Here is the summary of our approach. </p> <ol> <li>Regression: Hybrid model and <strong>disentangled self attention</strong> </li> <li>Classification: <strong>Conformer</strong> and custom attention by <strong>time relative distance</strong></li> <li><strong>Pseudo Label</strong> by ensemble models</li> </ol> <h1>Regression Part</h1> <p>This is my approach before the team merge; it got a 0.1160 public LB score with proper ensembles.</p> <h4>Model</h4> <p>First I divide the features into u_in features, time features and RC features. Then the core of the model consists of 4-6 hybrid layers. Each hybrid layer has the following components:</p> <p>[u_in features, RC features] -&gt; Conv1d -&gt; signal hidden unit<br> signal hidden unit; time embedding -&gt; Transformer with Disentangled Self Attention -&gt; LSTM</p> <h4>Loss function and Optimization</h4> <p>Loss function: multi-task MAE for u_out == 0 and u_out == 1<br> Optimizer: AdamP with initial learning rate 2e-4<br> <a href="https://arxiv.org/abs/2006.08217" target="_blank">https://arxiv.org/abs/2006.08217</a> <br> Scheduler: StepLR with decay 0.95 per 5 epochs</p> <h4>Some useful ideas</h4> <p>Using an existing model to generate pseudo labels on test data is useful.
I also found that a pair of <strong>separated output layers</strong> or even transformer heads <strong>for real label and pseudo label prediction</strong> is helpful for model diversification.</p> <h1>Classification Part</h1> <p>This is <a href="https://www.kaggle.com/wimwim" target="_blank">@wimwim</a> 's main idea before the team merge; it got a 0.1167 public LB score with proper ensembles.</p> <h4>Model</h4> <p>Concatenate all features and feed them to 4 conformer layers. The conformer implementation follows the paper below.<br> <a href="https://arxiv.org/abs/2005.08100" target="_blank">https://arxiv.org/abs/2005.08100</a></p> <p>For the custom attention with relative distance, he uses the time step difference to construct a relative distance and adds it to the attention matrix.</p> <h4>Loss function and Optimization</h4> <p>Loss function: binary cross entropy for discrete pressure values, without fitting u_out == 1<br> Optimizer: Ranger with initial learning rate 5e-3<br> Scheduler: CosineAnnealingLR for the first 100 epochs, then switch to ReduceLROnPlateau </p> <h4>Some useful ideas</h4> <p>He found that the behavior of different RC combinations is quite different. So one of the most natural solutions is to <strong>fine-tune for different RC</strong> after getting a generic model from all the data. This can usually upgrade the model by 0.002 public LB score.</p> <h1>Pseudo Label by ensemble models</h1> <p>Surprisingly, a simple blending of our best submissions yields a nice boost, achieving a 0.1112 public LB score. This could be due to the <strong>merge of the regression and classification</strong> approaches using diverse models.</p> <p>To get further improvement, we re-trained our models with <strong>pseudo labeling on test data</strong> based on the 0.1112 submissions and tried to cross-apply individual tricks to each other's model. But interestingly, many of those tricks do not work when applied to another model, except the aforementioned fine-tuning and separated output layer. Finally, we put new and old models all together and achieved the 0.1074 public LB.</p>
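<p>A minimal numpy sketch of treating the pressure target as a classification problem, as in the classification branch above: train pressures take a discrete set of values, so each pressure maps to a class index for a cross-entropy loss and predicted classes map back to physical pressures. The toy dataframe and probability array are placeholders.</p> <pre><code>import numpy as np
import pandas as pd

# Toy stand-in for the train dataframe; the real pressures take roughly 950 discrete values
levels = np.linspace(-1.9, 64.8, 950)
train = pd.DataFrame({"pressure": np.random.choice(levels, size=1000)})

pressure_levels = np.sort(train["pressure"].unique())
n_classes = len(pressure_levels)

# regression target to class index (values coincide with the levels exactly)
train["pressure_class"] = np.searchsorted(pressure_levels, train["pressure"].values)

# a model trained with (binary) cross entropy over n_classes outputs probabilities;
# map the argmax class back to a pressure value at inference time
probs = np.random.rand(1000, n_classes)            # placeholder predicted probabilities
pred_pressure = pressure_levels[probs.argmax(axis=-1)]
</code></pre>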
Google Brain - Ventilator Pressure Prediction
14th place solution multitask LSTM - Team "no pressure"
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Teammates <a href="https://www.kaggle.com/tomotomo5" target="_blank">@tomotomo5</a> <a href="https://www.kaggle.com/tommy1028" target="_blank">@tommy1028</a> <a href="https://www.kaggle.com/sunchine" target="_blank">@sunchine</a> <br> Our solution uses <strong>no magic and only LSTMs</strong>.<br> We joined this competition with <strong>no pressure</strong>, but to be honest, when our leaderboard rank dropped one by one due to the unbelievably high scores in the final week, the landscape had changed. <br> The last couple of days ended up with <strong>high pressure</strong> to win the gold medal. Now I'm very proud that my team's tenacious work to improve the score step by step finally paid off and won the gold medal!</p> <p>Anyway, the following is what my teammates and I did for this competition, including many failed attempts. Hope this helps.</p> <h1>Preprocess</h1> <ul> <li>Features are based on public notebooks (e.g. <a href="https://www.kaggle.com/tenffe/finetune-of-tensorflow-bidirectional-lstm" target="_blank">@ZHANGXIN</a>) </li> <li>Robust Scaler with quantile_range=(10,90)<br> We tried out all sorts of scalers but found that the robust scaler worked the best for our model, as in most of the popular notebooks. However, we also ascertained that some of our features ranged from -200 ~ +200 even after scaling with the default quantile_range of (25,75). Therefore we expanded our scaler's quantile_range and found out that (10,90) worked the best for us. This improved one of our members' models by approximately <strong>0.010</strong>. </li> <li>Take the log of u_in<ul> <li>Our best model score uses np.log1p(u_in) before preprocessing</li> <li>we use both raw u_in and log u_in features, varying by model</li></ul></li> </ul> <h1>Target</h1> <ul> <li>Raw pressures and diffs of pressures<ul> <li>This improved our model by approximately <strong>0.010</strong>.</li> <li>train['pressure_diff']=train.groupby(['breath_id'])['pressure'].diff().fillna(0)</li> <li>train['pressure_diff2']=train.groupby(['breath_id'])['pressure_diff'].shift(-1).fillna(0)</li></ul></li> <li>Loss: MAE<ul> <li>Huber loss, CE and the others did not work for us.</li></ul></li> </ul> <h1>keras</h1> <ul> <li>Change the activation<ul> <li>we use <strong>linear</strong> instead of selu as the last activation.</li> <li>selu's output is bounded below at about -1.75.
The minimum of -1.75 is not suitable when using diffs of pressures.</li></ul></li> <li>Others<ul> <li>Many folds (10~16 folds), 4-layer bidirectional LSTM.</li> <li>Adding CNN or transformer layers did not work.</li> <li>keras got the score <strong>CV 0.134 / Public 0.1157</strong></li></ul></li> </ul> <h1>pytorch</h1> <ul> <li>Change the initializations<ul> <li>Referenced from this notebook <a href="https://www.kaggle.com/junkoda/pytorch-lstm-with-tensorflow-like-initialization" target="_blank">JUN KODA</a></li></ul></li> <li>Scheduler: CosineAnnealingWarmupRestarts </li> <li>Changing the task to predicting multi-task targets lifted our scores more than in keras.</li> <li>Others<ul> <li>5-layer bidirectional LSTM.</li> <li>pytorch got the score <strong>CV 0.135 / Public 0.1153</strong></li></ul></li> </ul> <h1>Postprocess</h1> <ul> <li>median and round as you like.</li> </ul> <h1>Ensemble</h1> <ul> <li>median of 15 models' outputs <strong>CV 0.1117 / Public 0.1103 / Private 0.1107</strong></li> <li>stacking of 15 models' outputs, and the median of 8 stackings <strong>CV 0.1134 / Public 0.1100 / Private 0.1111</strong></li> </ul> <h1>What did not work so well</h1> <ul> <li>Feature Engineering in general<br> We explored many, many features but could not find a game-changer. What we believe slightly boosted our score includes Fourier transformation, u-in log transformation, a PIP feature (average u_in values from 70-80 time steps), u_in velocity features, and u_in distance features (euclidean distance-based features based on u_in). FYI, target transformation did not work at all.</li> <li>Pseudo labeling<br> We thought a larger dataset would give us more precise predictions, as we observed that sub-sample training got a clearly worse score. Thus, we tried pseudo labeling, but it gave us only a marginal improvement (0.0001-ish) that was not worth the longer training time. FYI, we only added test breath-ids that have a u-in shape similar to training breath-ids, so that we could avoid including less-accurate noisy predictions in the pseudo training set.</li> <li>Non-model Adversarial Validation<br> As you might observe, some u-in shapes in the training set are quite different from any in the test set. Also, some breath ids' pressures behave peculiarly. We thought excluding them could lead to a training set that is closer to the test set. However, this approach did not work.</li> <li>KNN<br> KNN has occasionally surprised us with its stunning accuracy, so we tried this with a little hope. This time, KNN did not work at all, haha…</li> <li>Snapshot ensembling<br> Increasing the number of folds and models was widely known to boost performance. Therefore, we tried snapshot ensembling. However, within the limited time we had, increasing the number of epochs until performance improved was not a realistic alternative.</li> <li>Model to find the best model<br> We found out that some of our models work better than others for specific breath_ids, as in the figure below. Therefore, we decided to create a multi-class model that chooses the best model or the median of all models for each breath_id. Didn't work at all…</li> <li>fine-tuning models by R/C<br> Motivated by the fact that pressure curves range differently between different R/C values, we tried to fine-tune models, once trained on the whole training dataset, to specific R/C values, creating 9 models.
However, the loss did not improve, and we were not able to improve our score.</li> <li>R/C correction model<br> Motivated by the phenomenon that some breath-ids looked as if they had inaccurate R/C values, we decided to correct the data by modifying the R/C values with a classification model. However, this did not work well on our side.</li> </ul> <p>No particular reason… Your approach is better. Thank you for your advice!</p>
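<p>A minimal scikit-learn sketch of the wider-than-default RobustScaler and the log transform mentioned in the preprocessing section above; the feature matrix and u_in column are placeholders.</p> <pre><code>import numpy as np
from sklearn.preprocessing import RobustScaler

u_in = np.random.exponential(10, size=(1000, 1))     # placeholder raw u_in column
u_in_log = np.log1p(u_in)                            # log-transformed variant

X = np.random.randn(1000, 5) * 50                    # placeholder engineered features

# Default quantile_range is (25.0, 75.0); widening it to (10, 90) tames features
# that still span roughly -200 ~ +200 after the default scaling
scaler = RobustScaler(quantile_range=(10, 90))
X_scaled = scaler.fit_transform(X)
</code></pre>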
Google Brain - Ventilator Pressure Prediction
20th-place solution : model & multi-task learning
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you very much! I'm glad that it helps you : )</p> <p>Actually, we can't submit those models as a single model because development &amp; training were finished with the last few days of the challenge left. However, To assume with CV score (there're usually 0.02 gap between CV &amp; LB scores in our cases), scores are like below.</p> <ul> <li><strong>w/o pseudo label</strong>, 15 folds (stratified), w/o post-process, same seed.</li> </ul> <table> <thead> <tr> <th>model</th> <th>CV</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>WaveNet-wise blocks + bi-LSTM/GRU</td> <td>0.13914</td> <td>0.1186</td> </tr> <tr> <td>WaveNet-wise blocks + Transformer Encoder + bi-LSTM/GRU</td> <td>0.136130</td> <td>0.1161 (assume)</td> </tr> <tr> <td>WaveNet-wise blocks + Transformer Encoder + Densely-Connected bi-LSTM</td> <td>0.137117</td> <td>0.1171 (assume)</td> </tr> </tbody> </table> <ul> <li><strong>w/ pseudo label</strong>, 15 folds (stratified), w/o post-process, same seed.</li> </ul> <table> <thead> <tr> <th>model</th> <th>cv</th> <th>lb</th> </tr> </thead> <tbody> <tr> <td>WaveNet + bi-LSTM/GRU</td> <td>0.110575</td> <td>0.1147</td> </tr> <tr> <td>WaveNet + TRFM + bi-LSTM/GRU</td> <td>0.107606</td> <td>?</td> </tr> <tr> <td>WaveNet + TRFM + DC bi-LSTM</td> <td>0.110126</td> <td>?</td> </tr> <tr> <td></td> <td></td> <td></td> </tr> </tbody> </table> <p>% There're 3 variants of models based on WaveNet + bi-LSTM/GRU. Posted one is the baseline version.<br> % there's validation set mismatch between w/o and w/ pseudo label experiment.</p> <p>I'll also attach this table into the post! (with a brief plot of architectures)</p> <p>We take a masked l1 loss for <code>delta pressure</code> &amp; <code>delta of delta pressure</code>, and masked binary cross entropy for <code>u_out</code> prediction with the equal loss weights!</p> <p>% mask : only <code>u_out</code> == 0 (mean masking <code>u_out</code> == 1)</p> <p>Thanks! I read your write-up, and It's amazing! especially Mixup augmentation part : )</p> <p>I'll talk about with my teammate, maybe it'll take some time.</p> <p>for the references, you can check these notebooks and paper.</p> <p>WaveNet-wise block is maybe got from <a href="https://github.com/stdereka/liverpool-ion-switching/blob/f44c57e88a3e7889720c88710135f2c7d31b416e/model/nn.py#L116" target="_blank">here</a> (my teammate's <br> suggestion), which is winning solution of <code>liverpool-ion-switching</code> challenge.</p> <p>bi-LSTM/GRU part seems like <a href="https://www.kaggle.com/dlaststark/gb-vpp-whoppity-dub-dub" target="_blank">this notebook</a> from <a href="https://www.kaggle.com/dlaststark" target="_blank">@dlaststark</a>, encoder &amp; decoder scheme.</p> <p>Densely-Connected bi-LSTM : <a href="https://arxiv.org/abs/1802.00889" target="_blank">arXiv</a>. Stacking bi-LSTM layers with a Dense-Net style.</p> <p>I hope this helps!</p> <p>+) I just uploaded a brief plot of the architectures</p> <p>Thanks a lot! 
and huge congrats being a competition master : )</p> <p>thanks a lot : )</p> <p>For my case, in general, usually I stared from the public notebook to check &amp; verify the performance fast. Based on it, I gradually update &amp; add the features or models (in case of this competition too).</p> <p>However, there're something limitation or inconvenience, I sometimes build a baseline from the scratch.</p> <p>Thank you!</p> <p>Thank you very much : ). Also, I'm very inspired by your strong single model Transformer!</p> <p>Hi everyone!</p> <p>First, I would like to thank my teammate <a href="https://www.kaggle.com/fissium" target="_blank">@fissium</a> and the organizers!<br> Also, congratulations to all the winners and did a great job on all participants' hard works!</p> <p>Our solutions are focused on the two parts.</p> <ol> <li>Deep learning architectures</li> <li><code>multi-task learning</code> that enables us to train the model more stable</li> </ol> <h2>Pre-Processing</h2> <p>We didn't take much effort into developing pre-processing. The pre-processing publicly available is good enough for us!</p> <p>So, move on to the next chapter : )</p> <h2>Model</h2> <h3>Architecture</h3> <p>We developed a total of 3 types of architecture.</p> <ol> <li>WaveNet-wise blocks + bi-LSTM/GRU</li> <li>WaveNet-wise blocks + Transformer Encoder + bi-LSTM/GRU</li> <li>WaveNet-wise blocks + Transformer Encoder + Densely-Connected bi-LSTM</li> </ol> <p><img src="https://user-images.githubusercontent.com/15344796/140379611-d8368e31-2303-443d-a9b8-ef9bb309d57b.png" alt="architecture"></p> <p>We encode the features with CNN &amp; Transformer blocks, then take them into the bi-LSTM layers. <br> The second model (WaveNet-wise blocks + Transformer Encoder + bi-LSTM/GRU) achieved the best CV score among our models.</p> <p>Actually, we can't submit those models as a single model because development &amp; training were finished with the last few days of the challenge left. However, To assume with CV score (there're usually 0.02 gap between CV &amp; LB scores in our cases), scores are like below.</p> <p>Here's the CV/LB table. Actually, we can't submit those models as a single model because development &amp; training were finished with the last few days of the challenge left. However, To assume with CV score (there're usually 0.02 gap between CV &amp; LB scores in our cases), maybe we can guess the LB score : )</p> <ul> <li><strong>w/o pseudo label</strong>, 15 folds (stratified), w/o post-process, same seed.</li> </ul> <table> <thead> <tr> <th>model</th> <th>CV</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>WaveNet-wise blocks + bi-LSTM/GRU</td> <td>0.13914</td> <td>0.1186</td> </tr> <tr> <td>WaveNet-wise blocks + Transformer Encoder + bi-LSTM/GRU</td> <td>0.136130</td> <td>0.1160 (assume)</td> </tr> <tr> <td>WaveNet-wise blocks + Transformer Encoder + Densely-Connected bi-LSTM</td> <td>0.137117</td> <td>0.1170 (assume)</td> </tr> </tbody> </table> <ul> <li><strong>w/ pseudo label</strong>, 15 folds (stratified), w/o post-process, same seed.</li> </ul> <table> <thead> <tr> <th>model</th> <th>cv</th> <th>lb</th> </tr> </thead> <tbody> <tr> <td>WaveNet + bi-LSTM/GRU</td> <td>0.110575</td> <td>0.1147</td> </tr> <tr> <td>WaveNet + TRFM + bi-LSTM/GRU</td> <td>0.107606</td> <td>?</td> </tr> <tr> <td>WaveNet + TRFM + DC bi-LSTM</td> <td>0.110126</td> <td>?</td> </tr> <tr> <td></td> <td></td> <td></td> </tr> </tbody> </table> <p>% There're 3 variants of models based on WaveNet + bi-LSTM/GRU. 
Posted one is the baseline version.<br> % there's a validation set mismatch between the w/o and w/ pseudo label experiments.</p> <p>I'll also attach this table to the post!</p> <h3>Multi-Task Learning</h3> <p>In the early stage of the competition, I found that using <code>delta pressure</code> (diff of pressure) as an auxiliary loss raises CV/LB scores (+0.01 ~ 0.015 boosts).</p> <p>Similarly, <code>delta of delta pressure</code> (double diff of pressure) as an auxiliary loss also boosts the performance by about +0.002 ~ 0.003 on CV/LB scores.</p> <pre><code>df['delta_pressure'] = (df['pressure'] - df.groupby('breath_id')['pressure'].shift(1)).fillna(0).values
df['delta_delta_pressure'] = (df['delta_pressure'] - df.groupby('breath_id')['delta_pressure'].shift(1)).fillna(0).values
</code></pre> <p>In the TF 2.x code, we use it like below.<br> <img src="https://user-images.githubusercontent.com/15344796/140241043-89243eb7-dfb1-4313-97d0-6dd47f1d6f0a.png" alt="image"></p> <h2>Post-Processing / Ensemble</h2> <p>We couldn't find any huge boosts from post-processing. However, ensemble methods do bring an improvement!</p> <p>First, we tuned the ensemble weights with the <code>Optuna</code> library based on the OOF predictions. <br> After that, we calculate the <code>gap of predictions (spread)</code> and selectively take either the mean (w/o the min and max predictions) or the median, switching by a <code>spread threshold</code> (optimized based on OOF), over the whole set of models (including the weighted ensemble prediction). This gives roughly a +0.0001 CV boost compared with just taking a median.</p> <p>Ultimately, we ensembled the 5 models based on the above architectures with some differences (e.g. layers, features). Got CV 0.096600 LB 0.1131 PB 0.1171 (the low CV score is due to training on the pseudo labels)</p> <h2>Summary</h2> <h3>Works</h3> <ul> <li>cross-validation<ul> <li>stratified k fold (on R, C factors) is better than group k fold </li> <li>15 folds is better than 10 folds (about +0.002 CV/LB boosts)</li></ul></li> <li>pseudo label<ul> <li>single model : CV 0.13914 LB 0.1186 PB 0.1226 -&gt; CV 0.11057 LB 0.1147 PB 0.1186 (about LB/PB +0.004 improvement)</li></ul></li> <li>masked loss<ul> <li>only for <code>u_out</code> == 0 works better</li></ul></li> <li>multi-task learning<ul> <li><code>delta pressure</code> (about +0.01 ~ 0.015 CV/LB boosts)</li> <li><code>delta of delta pressure</code> (about +0.002 ~ 0.004 CV/LB boosts)</li></ul></li> </ul> <h3>Did not work</h3> <ul> <li>adding more features (e.g. more u_in lags, etc…)</li> <li>deeper network</li> <li>fewer folds/epochs</li> </ul> <p>Thank you very much : )</p>
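<p>Since the TF 2.x snippet above is only shown as an image, here is a minimal Keras-style sketch of how the multi-task targets and a masked MAE could be wired up; the layer sizes, feature count and loss weights are assumptions for illustration, not the authors' exact code.</p> <pre><code>import tensorflow as tf
from tensorflow.keras import layers, Model

def masked_mae(y_true, y_pred):
    # y_true[..., 0] = target (pressure or a delta), y_true[..., 1] = u_out flag
    target, u_out = y_true[..., 0], y_true[..., 1]
    mask = 1.0 - u_out                                  # score only the inhale phase
    err = tf.abs(target - tf.squeeze(y_pred, axis=-1)) * mask
    return tf.reduce_sum(err) / (tf.reduce_sum(mask) + 1e-8)

inp = layers.Input(shape=(80, 8))                       # 8 placeholder features per timestep
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(inp)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)

# three heads: pressure, delta pressure, delta of delta pressure
outs = [layers.Dense(1, name=name)(x) for name in ("pressure", "delta", "delta_delta")]
model = Model(inp, outs)
model.compile(optimizer="adam", loss=[masked_mae] * 3, loss_weights=[1.0, 1.0, 1.0])
</code></pre> <p>Each output's <code>y_true</code> would then be stacked as <code>[target, u_out]</code> along the last axis so the loss can mask the exhale phase.</p>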
Google Brain - Ventilator Pressure Prediction
4th-place solution: Hacking the PID control
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for mentioning me in your solution and I am glad that it was useful. Your solution is great! </p> <p>I see, there was much more that I thought. The added u_in is not applied to all time, so there was a window to match for those data set, too. I stopped soon after the first p mismatch and did not go further. Anyway, I was doing that in the night before the deadline, so well done to myself. <a href="https://www.kaggle.com/group16" target="_blank">@group16</a> congratulations for the first!! I enjoyed your solution.</p> <p>The input <code>u_in</code> is controlled by the proportional–integral–derivative (PID) method, linearly dependent on the target measured pressure. The organizer explained in their paper [1] that they did this for safety reasons, and avoid arbitrary input. I reversed this mapping and get the exact values for about 30% of the data. I do not have my own strong model for the rest 70% and that must be the difference from the 2nd and 3rd teams.</p> <p>Since my technique overlaps a lot with the existing 1st-place solution, I start from my personal recollection; the technique is in the next section.</p> <h2>Recollection</h2> <p>I first thought this is a physics time-evolution problem and tried some ordinary differential equation type recurrent network <code>dq/dt = F(q, u_in)</code>. This remembers the past but does not use the future <code>u_in</code>, and I only got around scores of 0.5.</p> <p>Although <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> raised the <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/277847" target="_blank">question</a> at early stage, I assumed <code>u_in</code> is a fixed function and nonsense to use measured pressure for u_in for a competition to predict p. I was not freed from this (plausible) bias until about a week before the deadline. I do not why, but I forgot to read the organizer's paper for a long time.</p> <p>One day, I simplified the problem to predict p[1] or p[1] - p[0] only, and realized that LSTM models predict p[0] extremely accurately. From u[0], R, and C, we can only predict MAE error ~0.5, but LSTM model can easily achieve 0.15. <br> There was already <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/279384#1549448" target="_blank">an excellent question and answer</a> by <a href="https://www.kaggle.com/rajkumarsinghakonwar" target="_blank">@rajkumarsinghakonwar</a> and <a href="https://www.kaggle.com/pcjimmmy" target="_blank">@pcjimmmy</a> but I didn't realize until I found it out by myself. </p> <p>I continue to think <code>u_in</code> is fixed and searched for some magic that might come from experimental equipment or the conditions, such as the atmospheric pressure or temperature of the day. 
Around a week ago, I finally read the paper and learned about the PID control.</p> <h2>PID controller</h2> <p>The PID controller has a target pressure, <code>p*</code>, and the input is proportional to the error <code>e = p* - p</code> and to the integral of the error, where <code>p</code> is the target variable we are looking for:</p> <pre><code>e[t] = p* - p[t]
u_in[t] = Kp e[t] + Ki I[t]
</code></pre> <p>The target values are constant in this competition and Kp and Ki are constant parameters. The derivative term, the "D" in PID, is not used for this data.</p> <p>An important technical detail is that the integral is actually a moving average with exponential decay [2]:</p> <pre><code>dt[t] = time[t+1] - time[t]
s[t] = dt[t] / (dt[t] + rc)
I[t] = (1 - s[t-1]) I[t-1] + s[t-1] e[t]
</code></pre> <p>rc is 0.3 in the source code [2], but it looks like 0.5 is used here.</p> <h2>Solving p from u_in</h2> <p>Although PID control is extremely simple, it is not trivial to invert u_in(p). The constants are not unique and the initial integral I[0] is not always reset to 0 (it seems about 1/5 are 0). I exploit the discreteness of Kp, Ki, p*, and p, whose possible values are listed in the paper but can also be measured from the training data.</p> <pre><code>for all Kp, Ki, p*, and p[1]:
    solve for I[0] using the equations above
    solve p[t] iteratively from I[0]
</code></pre> <p>I wrote the nested loop in C++ and the brute-force attack takes only 5 minutes for all the test data.</p> <p>Not all p[t] can be solved exactly:</p> <ul> <li>About half the data includes an additional piecewise-linear u_in term and I was not able to handle these.</li> <li>When u_in is clipped to 0 or 100 the information is lost. One can restart the search beyond the overflow but I did not have time.</li> </ul> <p>I use the pressures that show the expected discreteness with a threshold of 1e-12, and got exact pressures for about 30% of the inspiratory data (for about 50% of the breath_ids, but not for all time steps).</p> <p>[1]: Machine Learning for Mechanical Ventilation Control <a href="https://arxiv.org/abs/2102.06779" target="_blank">https://arxiv.org/abs/2102.06779</a> <br> [2]: <a href="https://github.com/CohenLabPrinceton/pvp" target="_blank">https://github.com/CohenLabPrinceton/pvp</a> pvp/controller/contol_model.py</p> <p>Thanks, you are welcome </p>
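<p>To make the equations above concrete, here is a small Python sketch that simulates the PID controller forward and checks whether a candidate set (Kp, Ki, p*, I[0], pressure trace) reproduces an observed u_in. It is not the author's C++ brute-force search: the function names and the tolerance are assumptions, and the clipping to [0, 100] and the extra piecewise-linear u_in terms mentioned above are ignored.</p> <pre><code>import numpy as np

RC = 0.5  # ~0.3 in the pvp source code, ~0.5 for this data

def simulate_u_in(p, time, Kp, Ki, p_star, I0=0.0):
    """Forward PID simulation following the update rules above."""
    p = np.asarray(p, dtype=float)
    time = np.asarray(time, dtype=float)
    e = p_star - p                           # e[t] = p* - p[t]
    dt = np.diff(time, append=time[-1])      # dt[t] = time[t+1] - time[t]
    s = dt / (dt + RC)
    I = np.empty(len(p))
    I[0] = I0
    for t in range(1, len(p)):
        I[t] = (1 - s[t - 1]) * I[t - 1] + s[t - 1] * e[t]
    return Kp * e + Ki * I                   # u_in[t] = Kp e[t] + Ki I[t]

def is_consistent(u_in_obs, p_cand, time, Kp, Ki, p_star, I0, tol=1e-6):
    """True if the candidate constants and pressure trace reproduce the observed u_in."""
    u_hat = simulate_u_in(p_cand, time, Kp, Ki, p_star, I0)
    return np.max(np.abs(u_hat - u_in_obs)) &lt; tol
</code></pre> <p>The actual brute force runs the other way round: it loops over the discrete grids of Kp, Ki, p*, and p[1], solves for I[0], then recovers p[t] iteratively and keeps only pressures that show the expected discreteness.</p>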
Google Brain - Ventilator Pressure Prediction
5th place Approach and Solution Walkthrough and Live Discussion
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/nikhilmishradev" target="_blank">@nikhilmishradev</a> and <a href="https://www.kaggle.com/sahil711" target="_blank">@sahil711</a> will be presenting their approach and solution on 12th November 2021, at 18:30 IST (13:30 GMT).</p> <p>Catch us live at <a href="https://youtu.be/pCkDcXe_9AM" target="_blank">https://youtu.be/pCkDcXe_9AM</a>.</p>
Google Brain - Ventilator Pressure Prediction
5th place solution: A thousand Epochs and a Crazy Transformer
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> :D</p> <p>Thanks a lot <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>. You are always an inspiration, and to get such a good score in so short time is nothing but amazing :) Definitely a lot for both me and <a href="https://www.kaggle.com/sahil711" target="_blank">@sahil711</a> to learn in this competition, and list of things we can try in future, learning from other's solutions</p> <p>hehe, and thanks <a href="https://www.kaggle.com/faisalalsrheed" target="_blank">@faisalalsrheed</a> . There's no restriction or any term or condition mentioning you cannot have more than one colab pro+ account for yourself and yes same payment for all :)</p> <p>Sorry for forgetting about the learning rate. It was a linear decay starting from 2e-4 all the way upto 1e-6, for any given number of epochs</p> <p>thanks a lot Chris :)</p> <p>Firstly congratulations to everyone who did great in this competition. Now please don’t run away after reading this:<br> <strong>There was no magic in our solution 😂</strong><br> <a href="https://www.kaggle.com/sahil711" target="_blank">@sahil711</a> and I worked crazy hard in the last week for this, so hope you enjoy, and let us know if any questions.<br> Let's dive into the key features of our solution:</p> <ol> <li><strong>Sequence length of 40 instead of 80</strong>:<br> We did not understand why people did not try this, (or maybe have tried and we missed out :) ). The maximum u_out==0 breath_id had sequence length 35, so we could easily half the sequence length. This helped us save a lot of time in the last week doing faster experiments and getting faster results. There was no difference in the cross validation score if you used a full sequence length or halvened it. In hindsight maybe models with full seq. length could have added some level of diversity to our final solution.</li> <li><strong>Classification Instead of Regression</strong> :<br> <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> public transformers on regression was so simple and amazing, but nevertheless regression transformers did not work for us.<br> Infact we struggled with regression across various architectures. Classification was working way better in our experiments and we just stuck to it till the end.</li> <li><strong>Loss Function</strong> : We tried a lot of losses like:<ul> <li>Cross Entropy (CE)</li> <li>Smoothened CE (<a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/278362" target="_blank">https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/278362</a>)</li> <li>Ordinal CE (tried smoothing here too :p)</li> <li>Focal Loss<br> But CE worked best in the end. To be honest we could have experimented with the loss a bit more, but due to time constraints we just went in the direction where things seemed more promising.</li></ul></li> <li><strong>Learnable Frontend</strong>:<br> Our first inutuition was that similar to last few audio competitions, here too 1D CNNs will play an important role. 
Therefore we tried multiple 1D CNN based frontend, but we were unable to beat the performance by hand-crafted features.<br> Hence, we combined the two i.e we used both the hand-crafted features and also concatenated output of the 1D CNN layers applied on top of the numerical features.<br> The frontend arch. was simple:<br> <img src="https://ibb.co/rcN21BR" alt="https://ibb.co/rcN21BR"></li> <li><strong>Highly parameterized Deep Transformers with Heavy Dropout</strong>:<br> Initially we had started with a simple transformer (2 layer deep) with low embedding size (32). But it was not matching the performance of LSTMS even in the training loss, so we started overparameterizing the model, intially by increasing the embedding size, and the number of heads. This helped a lot and brought the training and validation losses both down. Next step was to go deep by increasing the number of encoder layers, here the model started overfitting, so to tackle it we introduced slight dropout which helped improve the validation loss. Little did we know then, we would be tuning the number of encoder layers and dropout till the end of the competition. Our final model had 20 encoder layers with 128 heads and a dropout of 0.8 in the attention as well as the fully connected layers. I swear we would have made the dropout almost 1, had the competition gone for 2 more weeks 😂<br> This overparameterization (especailly with large number of heads) with heavy dropout we believe is the biggest key to our top high performing model.<br> Another great idea which helped us train the transformers was from the paper <a href="ReZero" target="_blank">https://arxiv.org/abs/2003.04887</a><br> Instead of directly adding the output of a previous layer to the current output , you can introduce an additional parameter alpha to weigh the output of the current layer. This helps the network to decide how much of the current information you want to retain, atleast that’s what we understood intuitively. This according to the paper and our results, improved the convergence as we went deeper.<br> <em>PS: We are unable to get the same performance using PyTorch despite replicating the same architecture</em></li> <li><strong>Wannabe Conformers</strong>: <br> We wanted to try something different with the transformers. Intially the idea was using conformers, but we couldn’t get the same level of performance as we were getting using a simple transformer encoder layer. So, we lazily applied some convolutions alongwith the transformer encoder layer, and it worked out just great. Normally we feed the output of the last encoder layer to the next which is:<br> <code>Input_Encoder(N) = Output_Encoder(N-1)</code><br> Instead we did:<br> <em><code>Input_Encoder(N) = Concat([Output_Encoder(N-1), 1D_Conv(Ouputut_Encoder(N-1)]).</code></em><br> This operation alone gave us a direct boost of 0.01 in the CV.</li> <li><strong>Multihead Output</strong>:<br> This idea was inspired from <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> (<a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/278964" target="_blank">https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/278964</a>) .<br> We used two outputs one to predict the sequence where u_out == 0, the other to predict u_out == 1 and then merged them.<br> This idea was one of the last things we did, but it did help us getting the 5th place, without it we would be 6th.</li> <li><strong>Validation Strategy</strong>:<br> We used a simple KFold strategy. 
To begin with we were using K=5,but soon shifted to higher values of K like (10/15)</li> <li><strong>Final Model and Scores</strong>:<br> Above described model gave ~0.107 CV on 15Fold CV and scored ~0.102 on LB.<br> Next we decided to train the model on the entire data with different seed. This was driven by this dicussion here (<a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/278964#1559594)" target="_blank">https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/278964#1559594)</a>. Thanks <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> !!<br> <em>We trained our models for 1000 epochs (yes you read this right, this started out as a joke <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/274892" target="_blank">https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/274892</a> :p )</em><br> After training the model on the entire data on 15 different seeds, we are able to reach out current score of 0.0999 Public LB and 0.1022 on Private LB.</li> <li><strong>Resources</strong>:<br> Finally we would like to thank google colab pro+ :p.<br> We purchased 5 colab pro+ accounts (ouch…) to run our models. On one colab pro with low ram you can run 3 tpu sessions in parallel. And with 5 accounts we were able to run 15 models parallely. One colab pro+ account can run in background for 24 hours, and 1000 epochs took approximately 22 hrs. The models still did not converge though and I am sure if we ran the model for another 500 epochs we would get a boost of 0.005, but there was no time for that :), we were busy trying out other experiments.</li> <li><strong>Feature Engineering</strong>:<br> Our feature engineering was nothing special, we had our own sets of features, but mostly very similar to public kernels. The big boost was the architecture.</li> <li><strong>Lot of Things that Did Not Work 😛</strong>: We tried lots of different things and most of them didn’t work. Some of them are listed below:<ul> <li>Augmentations:<ul> <li>Add some noise to the input signal</li> <li>Shift Augmentation-&gt; since we are using only 40 timesteps, so rather than using first 40, shift by n and use n:n+40</li> <li>Flip augmentation</li></ul></li> <li>Mutli Task Training:<ul> <li>Instead of using R,C values in the input, we thought of using them as auxiliary targets.</li> <li>Loss was weighted CE on (pressure class, R,C)</li> <li>We missed out on using delta pressure (1 delta and 2 delta) as auxiliary targets.</li></ul></li> <li>Non-Transformer Architectures: We wouldn’t call them as failed experiments but we are not able to get a score anywhere close to the final transformer and hence these didn’t add any value to the final solution. Some of archs tried were:<ul> <li>Multi Layer LSTMs</li> <li>Original Conformers :p</li> <li>More complex 1DCNN frontends</li> <li>LSTM+Transformers<br> <strong>FINAL MODEL ARCHITECTURE:</strong><br> <img src="https://ibb.co/pn1xc4x" alt="https://ibb.co/pn1xc4x"></li></ul></li></ul></li> </ol> <p>Thanks <a href="https://www.kaggle.com/songwonho" target="_blank">@songwonho</a> and congratulations for the great finish too. 
I don't know if you had observed a similar thing but we observed that for the first 100 epochs sequence length 80 was performing better than shorter sequence lengths but then it gradually gave a much worse performance</p> <p>thank you <a href="https://www.kaggle.com/raghaw" target="_blank">@raghaw</a> </p> <p>thank you <a href="https://www.kaggle.com/mzr2017" target="_blank">@mzr2017</a> </p> <p>Hi <a href="https://www.kaggle.com/obougacha" target="_blank">@obougacha</a> as you can also see in public notebook(s) there were total 950 different values of pressure in training data, so we used a total of 950 different classes</p>
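<p>Because the model above is a classifier over the 950 distinct pressure values, the targets and the inverse mapping can be built as in the following sketch. It is a generic reconstruction, not the team's code, and the variable names are placeholders.</p> <pre><code>import numpy as np

def build_pressure_classes(pressures):
    """Map each pressure to the index of its value among the distinct train pressures."""
    levels = np.sort(np.unique(pressures))        # 950 distinct values in the train set
    classes = np.searchsorted(levels, pressures)  # integer targets for cross-entropy
    return classes, levels

def classes_to_pressure(logits, levels):
    """Convert per-timestep logits over the 950 classes back to real pressures."""
    return levels[np.argmax(logits, axis=-1)]

# usage sketch:
# y_cls, levels = build_pressure_classes(train_pressure_array)
# pred_pressure = classes_to_pressure(model_logits, levels)
</code></pre>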
Google Brain - Ventilator Pressure Prediction
6th-place solution: Single Multi-task LSTM
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As you can see, we only use dropout(0.1) in the lstm parameter(it means adding dropout after each lstm layer)</p> <p>Not much, about 0.001-0.002.</p> <p>This is how we add the neighbor:</p> <pre><code>from torch.utils.data import Dataset class VentilatorDataset(Dataset): def __init__(self, df, mode='train'): self.df = df self.mode = mode def process_train_df(df): uin = {} pressure = {} for k, _ in df.groupby(['R','C']): uin[k] = _['u_in'].values.reshape(-1, 80) pressure[k] = _['pressure'].values.reshape(-1, 80) return uin, pressure self.train_uin, self.train_pressure = process_train_df(df_all) self.len = self.df.shape[0]//80 self.prepare_data() def __len__(self): return self.len def prepare_data(self): self.pressures = self.df['pressure'].values.reshape(-1,80) self.u_outs = self.df['u_out'].values.reshape(-1,80) cols = sorted(list(set(self.df.columns)-set(['id','breath_id','pressure','pred','label']))) self.inputs = self.df[cols].values.reshape(-1,80,len(cols)) uins = self.df['u_in'].values.reshape(-1,80) rcs = self.df[['R','C']].values[::80] self.similar_index = [] for i in tqdm(range(len(rcs))): uin = uins[i] rc = rcs[i] cur_uin = self.train_uin[tuple(rc)] if self.mode == 'train': idx = np.argsort(np.abs(cur_uin-uin).mean(-1))[1:6] elif self.mode == 'valid': idx = np.argsort(np.abs(cur_uin-uin).mean(-1))[1:2] else: idx = np.argsort(np.abs(cur_uin-uin).mean(-1))[0:1] self.similar_index.append((tuple(rc),idx)) self.cls_label = self.df['label'].values.reshape(-1,80) def __getitem__(self, idx): rc, index = self.similar_index[idx] index = random.choice(index) index = index[self.select_idx] train_uin = self.train_uin[rc][index] #neighbour uin train_pressure = self.train_pressure[rc][index] #neighbour pressure input = self.inputs[idx] ... </code></pre> <p>After adding the pseudo-label, cv has become incorrect. You can try to keep the original valid-set.<br> For us, due to our scheduler, it always reaches the optimal in the last epoch without need to consider the valid-set.</p> <ol> <li>"diff of pressure" about 0.002, and all targets give us 0.005+ improvement</li> <li>Randomness can bring better robustness, and TOP5 <strong>by u_in</strong> do not look much different from the original.</li> </ol> <p>Firstly thanks a lot to my teammates and the great kaggle community, to all people who keep sharing, we have all learned a lot during this competition. Every competitor is worthy of respect. </p> <p>At the same time, congratulations to my teammates <a href="https://www.kaggle.com/mahluo" target="_blank">@mahluo</a> for becoming GM, <a href="https://www.kaggle.com/yzheng21" target="_blank">@yzheng21</a> and me for becoming Masters.</p> <p>Our solution consists of two main parts:</p> <ol> <li>Simple features and LSTM models</li> <li>Boost performance by neighbours and multi-task learning</li> </ol> <h2><strong>Features</strong></h2> <p>We found that too many features and normalization do not bring improvement. 
So we used the following features without normalization:</p> <p>raw_features, u_in_cumsum, u_in_log, diff1-4 of [time_step, u_in, u_in_log]</p> <h2><strong>Model and Training</strong></h2> <p>Our model is very simple:</p> <ul> <li>nn.LSTM(input_dim, 512, batch_first=True, bidirectional=True, num_layers=4, dropout=0.1)</li> <li>Initialization (<a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/276568" target="_blank">https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/276568</a>)</li> <li>batch_size 128, large batch is faster but poor performance</li> <li>AdamW, lr 3e-4, epoch 280 with CosineAnnealingWarmRestarts (After using the two tricks mentioned below, the convergence speed is greatly accelerated, 280 is enough)</li> </ul> <p>​ <img src="https://i.ibb.co/HgmvrVK/om5-ZDj-pkrp0-JOFWGSWDng.png" alt=""></p> <h2><strong>Neighbours</strong></h2> <p>This wonderful idea comes from <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>: <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/280356" target="_blank">https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/280356</a>. Our method is similar to this picture, but not exactly the same.</p> <p>​ <img src="https://i.ibb.co/vLzFZ4V/Selection-999-186.png" alt="img"> </p> <h4><strong>how to find nearest neighbours</strong></h4> <ol> <li>Group by R, C</li> <li>Sort by MAE of u_in (Knn is not used and time_steps are not considered, which needs to be improved)</li> </ol> <h4><strong>how to</strong> <strong>use it for training</strong></h4> <ol> <li>Randomly select one from TOP5 neighbours in training set</li> <li>Add neighbour's uin, uin_diff1-2, pressure, pressure_diff1-2, neighbour_uin-self_uin to the features</li> </ol> <h4><strong>how to</strong> <strong>use it for testing(TTA)</strong></h4> <ol> <li>Select 1-5 respectively from TOP5 neighbours in training set</li> <li>Add neighbour's uin, uin_diff1-2, pressure, pressure_diff1-2, neighbour_uin-self_uin to the features</li> <li>Use the 5 results for ensemble</li> </ol> <h2><strong>Multi-task</strong></h2> <p>We found that adding multi-targets is beneficial to speed up convergence and improve performance. (As they mentioned, uin can be calculated from pressure, pressure_diff, … . We did not think of reverse engineering, but chose to predict them directly)<br> We mainly used these four additional targets:</p> <ul> <li>diff of pressure</li> <li>diff2 of pressure</li> <li>pressure - neighbour's pressure</li> <li>classification label (950 classes)</li> </ul> <p>In training, we give their loss the same weight, except that the classification ce loss is too large, whose weight is adjusted to 0.025. </p> <p>In inference, we only take the output of pressure, the others are not used.</p> <h2><strong>Others</strong></h2> <ul> <li>Good correlation between cv and lb, both public and private</li> </ul> <p>20 folds at first, cv 120, lb 0.1040. </p> <p>Full data with 30+ seeds at last, lb 0.0999.</p> <ul> <li>Pseudo-label training, only had time to train 14 models, lb0.0998</li> </ul> <p>No. We find that just using the u_in directly for neighbours works best.</p> <p>Hello, I think the difference between adam and adamw is almost negligible (0.000x). It should be within the range of random seeds.<br> The pressure mentioned here belongs to the neighbors, and they come from the training set, so there are labels that can be used.</p> <p>All. 
<br> Actually our classification output can provide confidence for filtering but we didn't give it a try.</p>
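<p>A small pandas sketch of the feature list above (raw columns plus u_in_cumsum, u_in_log, and lag differences 1-4 of time_step, u_in, and u_in_log). It is reconstructed from the description rather than taken from the team's code; in particular, log1p is used here only as one way to avoid log(0).</p> <pre><code>import numpy as np
import pandas as pd

def add_features(df):
    """Engineered features described above, used without normalization."""
    df = df.copy()
    df['u_in_cumsum'] = df.groupby('breath_id')['u_in'].cumsum()
    df['u_in_log'] = np.log1p(df['u_in'])
    for col in ['time_step', 'u_in', 'u_in_log']:
        for lag in (1, 2, 3, 4):
            df[f'{col}_diff{lag}'] = df.groupby('breath_id')[col].diff(lag).fillna(0)
    return df
</code></pre>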
Google Brain - Ventilator Pressure Prediction
9th solution(with github code): Laplace distribution noise likelihood optimization
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your comment!<br> This loss is a bit unstable, but limiting the gradient norm worked well for me :)</p> <p>Thanks for your comments!<br> Yes, in a nutshell, the model produces a batch size x 80 standard deviations per iteration and calculates a weighted MAE.<br> It does not output a single standard deviation from the data.</p> <p>Since the Laplace distribution is constant at 1 when integrated, the model can avoid large losses by outputting a large standard deviation for predictions it is not confident about.<br> For example, in the example shown in the figure, the model outputs a large standard deviation for areas where the input changes significantly. Since errors are generally larger in such areas, it is presumed that this type of output is being produced.</p> <p>I thought that this would prevent overfitting, but it may not be scientifically correct :)</p> <p>Thanks for asking the question!</p> <p>Thanks for your great baseline! it helped a lot!</p> <h1>9th solution: Laplace distribution noise likelihood optimization</h1> <p><a href="https://github.com/ryoma-k/GBVPP_Public" target="_blank">github code</a></p> <p>[update 21/11/8] I found a related paper. <a href="https://ieeexplore.ieee.org/document/374138" target="_blank">Estimating the mean and variance of the target probability distribution</a></p> <p>Congratulations to the winners! Thank you also to the host for organizing the competition.</p> <p>I am a first-year master's student in <a href="https://www.mi.t.u-tokyo.ac.jp/" target="_blank">Harada Laboratory</a>.</p> <p>The solution is basically based on nakama's <a href="https://www.kaggle.com/yasufuminakama/ventilator-pressure-lstm-starter" target="_blank">notebook</a>.<br> The changes are</p> <ol> <li>changed L1 loss to likelihood loss (0.1095 on Public LB for 20 model ensemble)(10 fold &amp; all with 10 seed)</li> <li>pseudo labels using test data (0.1068 on Public LB)</li> </ol> <p>In the following, we will discuss the first one, minimization of the L1 norm assuming Laplace distribution as the error.</p> <h2>Optimization using the Laplace distribution</h2> <p>To begin, the usual L1 loss is equivalent to likelihood maximization assuming an equivariant Laplace distribution at each time for the error distribution.</p> <p>$$<br> p(x) = \frac{1}{2b}\exp{(-\frac{|x - \mu|}{b})}<br> $$</p> <p>$$<br> L_{1} = | x - \mu |<br> $$</p> <p>$$<br> L_{laplace} = - \log{p(x)}= \frac{|x - \mu|}{b} - \log{\frac{1}{2b}}<br> $$</p> <p>Then</p> <p>$$<br> \frac{\partial}{\partial x}L_{1} = const. \frac{\partial}{\partial x}L_{laplace} <br> $$</p> <p>Now, for this data, as pointed out in some of the notebooks, increasing the training data resulted in better CV scores. 
This may be an indication that the situation is prone to overtraining.</p> <p>Then I thought this might be due to the fact that we assume equal variance as the variance of the error, meaning that it might be more difficult to predict accurately at some times than at others.</p> <p>So I assumed that the variance for each time was different.<br> Therefore, I decided to output not only the predicted value x but also the predicted variance b from the model. The final loss is</p> <p>\begin{align}<br> L_{laplace} &amp;= \Sigma_{i=0}^{80} - \log{p_{i}(x)} \\<br> &amp;= \Sigma_{i=0}^{80} - \log{(\frac{1}{2b_{i}}\exp{(-\frac{|x_{i} - y_{i}|}{b_{i}})})})<br> \end{align}</p> <p>This loss increased the score by more than 0.01.</p> <p><img src="https://user-images.githubusercontent.com/45588624/140300890-f89b5c3d-6239-4f42-9b95-1aec0c474fc4.png"></p> <p><img src="https://user-images.githubusercontent.com/45588624/140300044-7dba94d8-c54b-422f-a306-568165171c19.png"></p> <pre><code> def masked_laplace(self, outputs, labels): mask = labels['mask'] y = labels['label'][mask] x = outputs[...,0][mask] b = outputs[...,1][mask] p = 1 / (2*b) * (- (x - y).abs() / b).exp() loss = - torch.log(p + 1e-12) return loss.mean() </code></pre> <p>The parameters are noted below.</p> <pre><code>Optimizer: AdamW lr: 5e-3 Scheduler: MultiStep(gamma 0.1) Iteration: 550,000(steps at 400,000 / 500,000) batchsize: 256 gradient norm: 50 model(based on nakama's notebook) hiddensize: 128 num_layers: 3 </code></pre> <p>I spent the last two weeks augmenting the data and looking for leaks, but I couldn't improve my score.</p> <p>It was a simple task, but the competition was challenging and fun to analyze.<br> I am happy to have won the gold medal.</p> <p>Finally, I would like to thank the Harada Laboratory for lending us their computing resources.</p> <p>Thank you very much, everyone!</p> <p>I'll think about turning this into a paper!<br> I'll publish the code on github over the weekend.</p> <p>Thanks a lot for your comments!</p> <p>yeah, learning rate decreased there.<br> thanks!</p> <p>Thanks for the comment.<br> b is not a parameter, it is an output from the model. It would be the standard deviation of the Laplace distribution, and the output b generally becomes smaller as the learning progresses.</p> <p><img src="https://user-images.githubusercontent.com/45588624/140471454-a0cd5477-d21f-449d-9975-a5861960add1.png"></p> <p>here it is!<br> (<a href="https://github.com/ryoma-k/GBVPP_Public" target="_blank">https://github.com/ryoma-k/GBVPP_Public</a>)</p> <p>And I also found a related paper. the link is above. thanks!</p>
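<p>To pair with the masked_laplace loss above, the network only needs a head that emits two channels per timestep: the prediction x and a strictly positive scale b. The sketch below is illustrative (the module name and the softplus choice are assumptions, not taken from the published repo); the hidden size follows the parameters listed above.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class LaplaceHead(nn.Module):
    """Outputs [x, b] per timestep for the Laplace likelihood loss."""
    def __init__(self, hidden_size=128):
        super().__init__()
        self.fc = nn.Linear(hidden_size, 2)

    def forward(self, h):                      # h: (batch, 80, hidden_size)
        out = self.fc(h)
        x = out[..., 0]                        # predicted pressure
        b = F.softplus(out[..., 1]) + 1e-6     # keep the scale strictly positive
        return torch.stack([x, b], dim=-1)     # matches outputs[..., 0] / outputs[..., 1] in the loss
</code></pre>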
Google Brain - Ventilator Pressure Prediction
Single Model Transformer - LB 0.112 - Gold Medal
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes. Each neural network starts with their layers randomly initialized with different seeds. </p> <p>But more importantly the training processes uses batches so each time you train a neural network you will get different samples within each batch. As such even if we started with same layer initialization (from pretrained weights for example), no two neural networks will ever be the same after you train them for hours on 1000s of different batches.</p> <p>Neural networks <strong>always</strong> benefit from training the same network many times and ensemble copies of itself.</p> <p>Thanks <a href="https://www.kaggle.com/group16" target="_blank">@group16</a> Once again your team crushed it just like you did in Ion Comp. I'm looking forward to reading your solution very carefully!</p> <p>We already have feature "time_step", so the model can already learn position. Furthermore "time_step" is better than a positional encoding <code>1, 2, ..., 78, 79</code> because position 5 in one breath isn't really the same as position 5 of another because each breath can have a different time step difference.</p> <p>For inferring submission.csv, each fold was trained on 100% train data. </p> <p>We made transformers with decoders and without decoders. The pure encoder worked best.</p> <p>Thanks. Congratulations to your team. Your solution is amazing.</p> <p>Thanks <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> ! Your transformer is probably stronger than you think. Our submission of LB 0.112 uses 32 folds where each fold is trained on 100% data. If we only use 11 folds of 100%, the LB is 0.115. And if each of the 11 folds only uses their 91% of train data (instead of 100%) it would be even less (not sure by how much).</p> <p>How many folds did you use for test inference? And did you use 100% per fold?</p> <p>Yes. Another use of decoder is with stacking model. After you have predictions from an LSTM. We can feed those OOF predictions into the decoder while feeding features into the encoder. This way we do not need to train recurrently. I was sure this would do well but it did not improve CV LB</p> <p>If we use 5 folds (and 100% data), the LB is 0.121. I think if you use more folds and 100% data, your LB could improve from 0.150 to 0.135 or so!</p> <p>Just trial and error. I used 3 restarts. The first cycle uses <code>T0</code> epochs while the second and third use <code>2*T0</code> and <code>4*T0</code>. Before restarts, i was using a single cosine schedule. I found that the optimal for single was 125 epochs and LR=6e-4 and 0 warmup.</p> <p>So the first triple i tried was <code>120//2, 120, 120*2</code> with LR=6e-4. Here <code>T0 = 60</code> and total epoch is 420. Afterward i tried <code>T0=50</code> and <code>T0=70</code> but using 60 was best.</p> <p>Thanks Old Monk!</p> <p>Trial and error. I tried <code>1.0 + 0.0</code> and <code>0.9 + 0.1</code> and <code>0.8 + 0.2</code> etc and saw which one had best CV score.</p> <p>Also note that without skip connection like <code>1.0 + 0.0</code> the model would not learn with many layers. The loss would be NAN. 
Using skip connections is important.</p> <p>Yes. First i use a normal 11 fold CV where each validation set has 9% of data. I tune the learning schedule so that the maximum validation loss is achieved at the last epoch.</p> <p>Next for inference i use 100% data and use the exact same number of epochs and learning schedule that i found previously. I am confident that the model will achieve best accuracy on last epoch.</p> <p>This trick has boosted me from Silver to Gold in many competitions!</p> <p>Attention module has 2 hyperparameters. One is the dimension used for finding similarities. The second is the dimension that the feed forward layer outputs. </p> <p>I used dim=64 for finding similarities (i like to keep this similar to number of features) to prevent overfitting. And i used feed forward dimension as features number (equals 73) plus 32 to give a little wiggle room for finding relationships between features.</p> <p>These hyparameters were found by trial and error. I also tried 32, 48, 64, 96 for embeddings. And 48, 64, 96, 128 for feed forward layer.</p> <p>CV is computed using 11 folds (in normal fashion). Afterward submission.csv is created using 32 folds 100% data (using learning schedule that worked best for 11 folds).</p> <p>Our team has published our Gold winning solution <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/285435" target="_blank">here</a> (which involves many exciting models). Below is one of our strong single models.</p> <h1>Attention is All You Need!</h1> <p><a href="https://arxiv.org/abs/1706.03762" target="_blank">"Attention is All You Need"</a> revolutionized sequential data models.</p> <h1>Single Transformer Model - CV 0.133 LB 0.112</h1> <p>There has <strong>not</strong> been many public notebooks about <strong>transformers</strong>, so we would like to publish a strong TensorFlow transformer notebook <a href="https://www.kaggle.com/cdeotte/tensorflow-transformer-0-140/" target="_blank">here</a>. It achieves CV 0.133 and LB 0.112 (when inferring on 32 folds using 100% train data each fold). This is a gold single model!</p> <h1>Transformers</h1> <p>This notebook uses only encoder layers. We experimented with decoder layers and feeding decoder layers OOF predictions, but using only encoder layers (without OOF) worked best.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Nov-2021/attention.png" alt=""></p> <h1>Starter Notebook</h1> <p>We published a TensorFlow transformer starter notebook <a href="https://www.kaggle.com/cdeotte/tensorflow-transformer-0-140/" target="_blank">here</a></p> <p>Hi <a href="https://www.kaggle.com/felipebihaiek" target="_blank">@felipebihaiek</a> two motivations. First the competition metric only scores <code>u_out=0</code> and second i tried both weighting only <code>u_out=0</code> and not weighting only <code>u_out=0</code>. The former achieves a better competition metric CV score.</p> <p>After reading other teams' solutions, i think the best option would be to have two heads. One head predicts for <code>u_out=0</code> and one head predicts for <code>u_out=1</code>. Because there is valuable information in the <code>u_out=1</code> target.</p> <p>Do note that even though our model only weights <code>u_out=0</code>, the model has access to all the features which include <code>u_out=1</code> features (and all time steps from <code>t=1</code> to <code>t=80</code>).</p>
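<p>The "weight only <code>u_out=0</code>" choice discussed above corresponds to a masked MAE such as the sketch below. It is a generic illustration, not the notebook's exact loss: here y_true is assumed to pack the pressure and u_out side by side in its last axis, which is just a packing convention chosen for this example.</p> <pre><code>import tensorflow as tf

def masked_mae(y_true, y_pred):
    """MAE scored only where u_out == 0, matching the competition metric."""
    pressure = y_true[..., 0]
    u_out = y_true[..., 1]
    mask = tf.cast(tf.equal(u_out, 0.0), tf.float32)
    err = tf.abs(pressure - y_pred[..., 0])
    return tf.reduce_sum(mask * err) / tf.maximum(tf.reduce_sum(mask), 1.0)
</code></pre>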
Google Brain - Ventilator Pressure Prediction
Thoughts and Efforts from 300+ Place
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Definitely, thank you Ynot! </p> <p>Thank you CepheidQ for your thoughts and I look forward to your solution! :) </p> <p>For sure! Congrats to you! Look forward to your team solution :) </p> <p>It was a great competition and I joined it right after the G2Net Comp. I thought it would went well as we just got 3rd place at that G2Net comp but it turns out to be no luck. But I still wanna share some words here about the efforts I put into this comp and the thoughts along the way, hopefully it can bring some relief (or hope) to someone :D</p> <p>For the efforts:</p> <ul> <li>Time: I kept recording the time I put into this comp using github to track the progress, in total I spent around 90 hours on this comp (<a href="https://github.com/VincentWang25/GBVPP" target="_blank">repo</a> here). </li> <li>PC Set Up: Right after the G2Net Competition I also considered running experiments locally instead using Kaggle Kernel, Colab and JavarisLab (they are good but not as convenient as local PC). So I bought 2 2080Ti and set up my own PC, which costs around 4k$ and takes me around 30 hours to set it up. Fortunately they work really well in this comp, which actually saves my time to run more experiments.</li> <li>Model Experiments: I mainly tried variation of LSTM and also explored Transformer a bit but no luck. In total, I did 100+ runs and the computational time is around 17 days according to Weight&amp;Bias. </li> </ul> <p>For the thoughts:</p> <ol> <li>More efforts into model design: My best single model can only perform Public score 0.1463 Private Score 0.1492. And the ensemble brings to Public 0.1338 and Private 0.1365. As we already know some single model can reach 0.11, there are definitely a lot to learn about the model structure. </li> <li>More efforts into data processing: It is a shock for me to realize the matching algorithm from first place solution and it all starts from a simple linear regression. I have to admit that sometimes the marginal improvement from many hours of model training is way less than a simple preprocessing or postprocessing trick..</li> <li>Higher feature importance != good feature: this comes as a surprise to me as I tried some features which turns out to have the highest feature importance but when I remove them, the model reaches better score in the end. I guess it is caused by overfitting using the features. For future practice, it would be better to add the new features sequentially instead of adding them all together, so it would save some time to remove some of it later.</li> </ol> <p>Even though 100+ hours and around 4k$ didn't bring medal in the end, I am glad to build up my own DataScienceComp PC and model pipeline as well. The learning experience and coding practice are also very helpful during this process. I believe the efforts will accumulate and somehow work in the future. Now it is time to turn fully into learning mode and try to digest others solution as much as possible :) </p> <p>See you in the next comp!</p> <p>Cheers,<br> Vincent</p>
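<p>The third lesson above (add new features sequentially instead of all at once) can be turned into a simple greedy loop. Below is a generic scikit-learn sketch; the Ridge stand-in model, fold count, and scoring are arbitrary illustration choices, not anything used by the author.</p> <pre><code>from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def cv_mae(X, y, cols):
    """5-fold CV MAE for a feature subset (X is a DataFrame; lower is better)."""
    return -cross_val_score(Ridge(), X[cols], y, cv=5,
                            scoring='neg_mean_absolute_error').mean()

def greedy_forward_selection(X, y, base_cols, candidate_cols):
    """Keep a candidate feature only if it improves the CV score."""
    kept, best = list(base_cols), cv_mae(X, y, base_cols)
    for col in candidate_cols:
        score = cv_mae(X, y, kept + [col])
        if score &lt; best:
            kept, best = kept + [col], score
    return kept, best
</code></pre>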
Google Brain - Ventilator Pressure Prediction
[#1 Solution] The code
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We have uploaded a big chunk of our code required to reproduce our solution as notebooks on Kaggle.</p> <ol> <li><p><a href="https://www.kaggle.com/shujun717/1-solution-lstm-cnn-transformer-1-fold" target="_blank">LSTM CNN transformer</a><br> The code required to run a single fold of our LSTM + CNN Transformer model. Running multiple folds of this and taking a median ensemble should result in a significant boost of LB scores.</p></li> <li><p><a href="https://www.kaggle.com/group16/1-solution-pid-controller-matching-v1" target="_blank">PID Controller Matching</a><br> The code &amp; intuition behind the PID controller matching. The current code is a preliminary version that still introduces some errors. Newer versions will be uploaded shortly.</p></li> </ol> <p>Github repos:</p> <ol> <li><a href="https://github.com/GillesVandewiele/google-brain-ventilator" target="_blank">PID Controller Matching</a></li> <li><a href="https://github.com/Shujun-He/Google-Brain-Ventilator" target="_blank">LSTM+CNN Transformer</a></li> <li><a href="https://github.com/whoknowsB/google-brain-ventilator-pressure-prediction/tree/9b62366274d9ffcc1eada4dc56d3ccd672c81dc3" target="_blank">LSTM on Raw data</a></li> </ol> <p>Pretrained models:</p> <ol> <li><a href="https://www.kaggle.com/shujun717/googlebrainventilator1stshujun" target="_blank">https://www.kaggle.com/shujun717/googlebrainventilator1stshujun</a></li> </ol> <p>In case there are any questions, feel free to leave a comment on the corresponding notebook or here!</p> <p>Our LSTM model will soon be uploaded as a notebook as well so that people can play around with that one too (I will update this topic once it is published). Moreover, I have started writing the blog post with a more elaborate explanation of our solution, I will make a new topic with a link once this is finished!</p> <p>I have just updated this topic to include our 3 github repositories which can be used to fully reproduce our solution. A blog post will be published later today as well, I will make a separate topic to announce this.</p>
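<p>The fold-median ensemble suggested above is just a per-row median of the stacked fold predictions; a minimal sketch (file names and fold count are placeholders):</p> <pre><code>import numpy as np
import pandas as pd

# stack per-fold predictions and take the per-row median
fold_preds = [pd.read_csv(f'submission_fold{i}.csv')['pressure'].values for i in range(10)]
submission = pd.read_csv('sample_submission.csv')
submission['pressure'] = np.median(np.stack(fold_preds, axis=0), axis=0)
submission.to_csv('submission_median.csv', index=False)
</code></pre>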
Google Brain - Ventilator Pressure Prediction
[3rd Place] Single model scored 0.0975 w/o PID controller
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Brain - Ventilator Pressure Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello,</p> <p>We used GPU(4x3090). Epoch and LR scheduler is 500 and scheduler with cosine policy.</p> <p>Thanks for your comment:)</p> <p>Thanks :) </p> <p>thanks :) </p> <p>Thanks :) 😀</p> <p>thanks :) 👍</p> <p>Thanks for viewing long story :)</p> <p>Thanks for leaving comment :) I used just a powerpoint =)</p> <p>At first the convergence of the model was not clear because of many features too. So I decided to develop the model without the engineered features. Thanks for sharing your experiences :)</p> <p>Thanks for ur comment :)</p> <p>He is a magician ;)</p> <p>We create new pressure using averaging. Thanks for ur comment and congrats gold medal !</p> <p>Yeah. It`s from limerobot who led us top score :). Thanks !</p> <p>There are huge gain in augmentation. Thanks :)</p> <p>Thanks. Not measured individually but it boost scores from 0.16x to 0.14x in cv scores using Masking and Shuffling. Also as stated in the main, it went from 0.117 to 0.1004 in lb scores using Mixup.</p> <p>Thanks for ur congrats :)</p> <p>Thanks for reading our solution :)</p> <ol> <li><p>Yes. 3 data augmentations are applied at the same time. The order is like this: Mixup with 0.4 probability, Shuffling with 0.2 probability, and Masking with 0.2 probability.</p></li> <li><p>Yes you are right, but in case of diff_loss, there are 4 targets (diff with period 1, 2, 3, and 4). So there are equal weights between diff loss and pressure loss, but not equal weights between pressure and each diff.</p></li> </ol> <p>Yeah, we intended to make model understand the delayed air. Thanks for reading and leaving comment :)</p> <p>Thanks for reading long solution !</p> <p>Thanks for congrats :)<br> 감사합니다 ㅎㅎ.</p> <p>Thanks for sharing ur experience. In our case, at early time we couldn't see a boost due to many features. so we removed them all, then we could check that there is a boost using data augmentations.</p> <p>Thanks for comment. </p> <p>For most of the hand-crafted features, it distracted the convergence of model, and We thought that features which the model understands are better than those engineered by humans. In fact, when we used Conv1d, rolling features didn't make a boost. </p> <p>Second, I think that increasing the filter size is an area of ​​experiment, but it had no effect on the cv score so that we set the filter size to be the same as the input feature size.</p> <p>Thanks for your kind words !</p> <p>Thanks :)</p> <p>There were no effects in using Conv1d when we used rolling features 😂</p> <p>Thanks for ur kind words :)</p> <p>Thanks for ur comment :) </p> <p>Yes, it seems to be important mix the types. For example, when the selected C is 50 and 20, we thought that our model understands the case where C is 35 (0.5*50 + 0.5*20). I think defining a new type in this way has had a great effect. </p> <p>In particular, we shuffled two sequences in R by a certain ratio. 
Since the pressure does not increase linearly with the value of R, we expect to get better results if we mix these sequences better.</p> <h1>Head</h1> <p>Before writing our solution, we really thank to Google Brain and Kaggle for hosting this competition and congrats to all.<br> Also, I really enjoyed doing this competition with my UPSTAGE team (limerobot and dlrgy22). And thanks to many kaggler who shared the notebook and left discussion, we were able to get a good score. Before explaining what we do, I will write a summary first. If something is not clear, please refer the main text or leave any comments.</p> <p>Let`s start our story.</p> <h1>Summary</h1> <ul> <li>Validation strategy</li> </ul> <blockquote> <ul> <li>Stratified k-fold using type_rc as y</li> </ul> </blockquote> <ul> <li>Model: We used combination of Conv1d and LSTM</li> </ul> <blockquote> <ul> <li>Architecture: Conv1d + Stacked LSTM</li> <li>Optimizer: AdamW</li> <li>Scheduler: Cosine</li> </ul> </blockquote> <ul> <li>Features: We used 8 features</li> </ul> <blockquote> <ul> <li>Original: u_in, u_out, R(one-hot), C(one-hot)</li> <li>Engineered: u_in_min, u_in_diff, inhaled_air, time_diff</li> </ul> </blockquote> <ul> <li>Data augmentation: We used three type of augmentations</li> </ul> <blockquote> <ol> <li>Masking: Randomly mask the R or C</li> <li>Shuffling: Randomly shuffle our sequences</li> <li>Mixup: Select two sequences and mix up them</li> </ol> </blockquote> <ul> <li>Loss: We used multi-task MAE loss</li> </ul> <blockquote> <ol> <li>MAE of pressure</li> <li>Difference of pressure</li> </ol> </blockquote> <ul> <li>Ensemble: We used iterative pseudo labeling and seed ensemble</li> </ul> <blockquote> <ol> <li>Seed ensemble using median</li> <li>1st Pseudo labeling</li> <li>2nd Pseudo labeling</li> </ol> </blockquote> <h1>Main</h1> <p>I will describe our solution in the following six structures: Validation strategy, Model, Features, Data augmentation, Loss, and Ensemble.</p> <h3>Validation strategy</h3> <p>What is the most important thing in a competition? IMHO, It`s to avoid shake-up when the competition is finalized and thus a well-designed validation set is needed to avoid shake-up. 
Although the validation set was not important to this competition, this is why I first focus on validation strategy when competitions start.</p> <p>We knew that the types of R and C were very important in this competition through <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/276599" target="_blank">this discussion</a> (thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>), and we made a validation set with the same ratio of R and C for every fold.</p> <p>Detailed pseudo-code is below.</p> <pre><code>cols = ['breath_id', 'R', 'C'] meta_df = train_df.groupby('breath_id')[cols].head(1) meta_df['RC'] = meta_df['R'] + '_' + meta_df['C'] meta_df['fold'] = -1 kf = StratifiedKFold(12, random_state=seed, shuffle=True) for fold, (_, val_idx) in enumerate(kf.split(meta_df, meta_df['RC'])): meta_df.loc[val_idx, 'fold'] = int(fold) </code></pre> <p>In this way, we were able to create a validation set that was highly correlated with test data, but I thought that it would be possible to create a good validation set even if we just used normal KFold.</p> <h3>Model</h3> <p>After setting the validation set, we focused on boosting the performance of the model using only the original features.</p> <p>Since Limerobot, the magician of transformers, was on our team, some kagglers expected that our team would use transformers, but we just used simple LSTM models (thanks to <a href="https://www.kaggle.com/tenffe" target="_blank">@tenffe</a> from <a href="https://www.kaggle.com/tenffe/finetune-of-tensorflow-bidirectional-lstm" target="_blank">this notebook</a>). </p> <p>Detailed model architecture is below.<br> <img src="https://raw.githubusercontent.com/SONG-WONHO/figure/master/model_arch2.png" alt="model architecture"></p> <p>Our trial-and-error with models related to the pytorch and tensorflow libraries. Initially, most high-score public notebooks were implemented in tensorflow, and we tried to port it to pytorch version. However, there was a low pyotrch score because of a difference between tensorflow and pytorch in the model initialization method (thanks to <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> for sharing <a href="https://www.kaggle.com/junkoda/pytorch-lstm-with-tensorflow-like-initialization" target="_blank">this notebook</a>). Model initialization methods like tensorflow allowed us to develop models in pytorch. </p> <p>Detailed code is below (from JUN KODA`s notebook).</p> <pre><code>for name, p in self.named_parameters(): if 'lstm' in name: if 'weight_ih' in name: nn.init.xavier_normal_(p.data) elif 'weight_hh' in name: nn.init.orthogonal_(p.data) elif 'bias_ih' in name: p.data.fill_(0) # Set forget-gate bias to 1 n = p.size(0) p.data[(n // 4):(n // 2)].fill_(1) elif 'bias_hh' in name: p.data.fill_(0) if 'conv' in name: if 'weight' in name: nn.init.xavier_normal_(p.data) elif 'bias' in name: p.data.fill_(0) elif 'reg' in name: if 'weight' in name: nn.init.xavier_normal_(p.data) elif 'bias' in name: p.data.fill_(0) </code></pre> <h3>Features</h3> <p>After designing the model, we focused on feature engineering. Features can be divided into original features and engineered features. For the original features, all features except R and C were used as they are, and one-hot encoded R and C were used to show the type information of R and C to the model.</p> <p>For engineered features, we did a lot of EDA and tried to put features the model could not see. 
In this context, we have created the following features: u_in_min, u_in_diff, inhaled_air, and time_diff.</p> <p>Details are below.</p> <pre><code>df['step'] = list(range(80))*df['breath_id'].nunique() df['u_in_min'] = df['breath_id'].map(df.groupby('breath_id')['u_in'].min()) df['u_in_diff'] = df['u_in'].diff() df.loc[df['step']&lt;1, 'u_in_diff'] = 0 df['time_diff'] = df['time_step'].diff() df.loc[df['step']&lt;1, 'time_diff'] = 0 df['inhaled_air'] = df['time_diff']*df['u_in'] </code></pre> <h3>Data augmentation</h3> <p>After feature engineering, we focused on data augmentation. As a result, data augmentation led us to score high. We used three type of data augmentation as following: Masking, Shuffling, and Mixup.</p> <h4>Masking augmentation</h4> <p>In the case of masking, it is a little bit similar to Cutout, but masking is randomly performed on the R and C only. In other words, numeric information such as u_in is left as it is, and only information about the type R and C is intentionally masked. In the case of u_in, it was seen that the correlation with pressure was directly high from EDA, and by intentionally erasing the type information, the model was trained to understand the general semantic of u_in.</p> <p><img src="https://raw.githubusercontent.com/SONG-WONHO/figure/master/masking.png" alt="Masking"></p> <p>Pseudo code is below.</p> <pre><code>one_sample = get_sample(idx) # Type masking if random.random() &lt; .2: if random.random() &lt; .5: # maskring R one_sample['R'][:] = 0 else: # masking C one_sample['R'][:] = 0 </code></pre> <h4>Shuffling augmentation</h4> <p>Shuffling randomly shuffles units within a specific window in a sequence. For example, suppose we have a sequence like <code>[0,1,2,3,4,5,6,7]</code> and window size is 3. First randomly shuffle between <code>[0,1,2]</code>; next shuffle <code>[3,4,5]</code> and so on.</p> <p><img src="https://raw.githubusercontent.com/SONG-WONHO/figure/master/shuffling.png" alt="Shuffling"></p> <p>Details are below.</p> <pre><code>one_sample = get_sample(idx) # Shuffling sequence if random.random() &lt; .2: ws = np.random.choice([2, 4, 5]) num = max_seq_len // ws idx = np.arange(0, max_seq_len) for i in range(num): np.random.shuffle(idx[i * ws:(i + 1) * ws]) one_sample = one_sample[idx] </code></pre> <h4>Mixup augmentation</h4> <p>Finally, let`s talk about the Mixup method, which gave us a huge boost in cv and lb scores. Mixup is similar to the process of creating of synthesis sequence, in which two sequences are randomly selected and mixed. For example, given two sequences like <code>[1,2,3,4,5]</code> and <code>[5,4,3,2,1]</code>, mix them in half to produce the sequence <code>[3,3,3,3,3]</code>. </p> <p>At this time, the point we were discussed about was how to mix-up R and C type information because of the importance of type information. In fact, before this augmentation, we were treating R and C as categorical features and embedding them via nn.Embedding. But we were able to mix these R and C features by changing the embeddings as one-hot. If there is a sequence with C 20 and the other sequence C 50, one-hot C would look like this: <code>[0, 1, 0]</code> and <code>[0, 0, 1]</code>. Then mix these two embeddings like this: <code>[0, 1, 0]</code> * 0.5 + <code>[0, 0, 1]</code> * 0.5. 
Then we can finally get an embedding like this: <code>[0, 0.5, 0.5]</code>.</p> <p>This augmentation improved the single model score on the public leaderboard from 0.117 to 0.1004.</p> <p><img src="https://raw.githubusercontent.com/SONG-WONHO/figure/master/mixup.png" alt="Mixup"></p> <p>The code is below.</p> <pre><code>one_sample = get_sample(idx) # Mixup two sequences if random.random() &lt; 0.4: # Get new index idx = np.random.randint(len(indices_by_breath_id)) sampled_sample = get_sample(idx) mix_cols = ['R', 'C', 'numeric_features', 'target'] k = 0.5 for col in mix_cols: one_sample[col] = (one_sample[col]*k + (1-k)*sampled_sample[col]) </code></pre> <h3>Loss</h3> <p>In fact, loss design and augmentation were done at the same time, but for convenience, the augmentation was explained first.</p> <p>When we analyzed the relationship between u_in and pressure, it was confirmed that the value of u_in of a specific step affects the pressure of a next step. In other words, the flow of air when step is 0 affects pressure of step 1 or pressure of step 2. Therefore, when air enters at a certain point in time, it is delayed and reflected in the later pressure. Thus, we let our model predict the difference of pressure as well.</p> <p>Using this loss, we were able to improve the cv score from 0.14x to 0.127x, and as a result of median ensemble, we were able to rise to 0.11x in the public leaderboard. IMHO, the huge gap in public leaderboard may be due to this loss.</p> <p><img src="https://raw.githubusercontent.com/SONG-WONHO/figure/master/loss.png" alt="Loss"></p> <p>Code.</p> <pre><code>df['step'] = list(range(80))*df['breath_id'].nunique() df['pressure_diff1'] = df['pressure'].diff(periods=1) df['pressure_diff2'] = df['pressure'].diff(periods=2) df['pressure_diff3'] = df['pressure'].diff(periods=3) df['pressure_diff4'] = df['pressure'].diff(periods=4) df.loc[df['step']&lt;1, 'pressure_diff1'] = 0 df.loc[df['step']&lt;2, 'pressure_diff2'] = 0 df.loc[df['step']&lt;3, 'pressure_diff3'] = 0 df.loc[df['step']&lt;4, 'pressure_diff4'] = 0 </code></pre> <h3>Ensemble</h3> <p>Through the above processes, we were able to record 0.1004 as a single model on the public leaderboard, and achieved a score of 0.0975 when applying median ensemble and round post-processing (thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>). After that, we scored 0.963 by doing seed ensemble (seed42 and seed7).</p> <p>Then we created another two model with different seeds using pseudo labels generated from 0.963 submission, and we could get to 0.948 to public leaderboard.</p> <p>Finally, when we pseudo-labeled the submission with 0.948 files one more and training our model, then we were able to finally achieve 0.0942 for the public leaderboard and 0.0970 for the private leaderboard.</p> <h1>Tail</h1> <p>; We made a mistake of missing the paper the hosting team wrote. So, our team missing the point of being able to match predicted pressure to real pressure through PID controllers, and even if we knew, We might not be able to find the pressure perfectly.</p> <p>; I put all my time into this competition over the last few days to find the secret magic (although I haven`t found it). During this, the words that i shouted countless times were "Why not?", and I changed my nickname to ynot from sogna with the determination of finding 😂.</p> <p>; Our UPSTAGE team had a lot of discussion in this competition, and a lot of ideas were born in the process. 
A big thanks again to our teammates <a href="https://www.kaggle.com/jungikhyo" target="_blank">@jungikhyo</a> and <a href="https://www.kaggle.com/limerobot" target="_blank">@limerobot</a>, who led us to the top scores. And also thanks for reading this long solution.</p> <p>Thanks :) In our case, we also used the cumsum of u_in, but we removed it because we thought it would interfere a lot with predicting the pressure at the point where u_in decreases. A lot of our discussion was about predicting the initial pressure, and we thought it was difficult to predict due to randomness, but I was amazed by your team's solution. Congrats again!</p> <p>Hi, <a href="https://www.kaggle.com/illidan7" target="_blank">@illidan7</a>.</p> <p>Yes, in pseudo labeling, we added the test dataset to our training data. The important thing here is how we assign the labels to the test dataset. In this case, we used the submission generated from the 0.0963 models.</p> <p>Yes, it was applied to the pressure and pressure_diff targets.</p> <p>thanks for ur comment :)</p> <p>Thanks :) </p> <p>hello, <a href="https://www.kaggle.com/vialactea" target="_blank">@vialactea</a>.</p> <p>I can see a result in this <a href="https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/285882" target="_blank">discussion</a> related to what you did. I think it might be helpful to see.</p> <p>I'm glad it was helpful to you :) Thanks.</p> <p>Hi <a href="https://www.kaggle.com/oguiza" target="_blank">@oguiza</a>.</p> <p>Yes, I think this is the realm of experiments. We experimented with different kernel sizes and used the best settings.</p> <p>Thanks for ur kind words :)</p> <p>Hello, <a href="https://www.kaggle.com/fankaixie" target="_blank">@fankaixie</a>.</p> <p>We used Conv1d across the time sequence so that the model could understand the relationship between the time steps.</p> <p>Thanks :)</p> <p>Thanks for ur kind word :) </p> <p>Thanks :)</p> <ol> <li><p>First, because our model included the Conv1d model, we did not use rolling-related features. </p></li> <li><p>Afterwards, according to the EDA results, it was important to capture how the pressure responds to u_in with a delay, so features such as u_in_diff were adopted to reflect this. We also thought that it was very important to predict the initial pressure, and some sequences had unusually low pressures in the beginning. After analyzing these sequences, we found that their minimum u_in value is very low, so we designed a feature like u_in_min. Lastly, in the case of inhaled_air, we wanted to give the model the amount of air inhaled over a specific period of time. If you look at the value of time_step, every sequence has a different value for each step. Instead of using time_step directly, we used time_diff.</p></li> </ol> <p>Thanks for the comment.</p> <p>We made the model predict the diff outputs too. When we derived the diffs from the predicted pressure instead, it had no effect on the CV scores.</p> <p>We used the code below.</p> <pre><code>if 'target' in batch:  # pressure, diff1, diff2, diff3, diff4
    loss = F.l1_loss(pred, batch['target'], reduction='none')
    res['loss'] = loss[batch['u_out']==0].mean()

    with torch.no_grad():
        res['mae'] = loss[:, :, 0][batch['u_out']==0].mean()
</code></pre> <p>Thanks :) </p> <p>Thanks :) </p> <p>Thanks for ur kind words :)</p> <p>Thanks for ur kind words :)</p>
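<p>For readers who have not seen the "round post-processing" trick mentioned in the Ensemble section: because the target pressures in this competition take a limited set of discrete values, a common final step is to snap each ensembled prediction to the nearest value of that grid. Below is a minimal, hypothetical sketch of that idea; the file names and column names are assumptions, not the team's actual pipeline.</p> <pre><code>import numpy as np
import pandas as pd

# Snap ensembled predictions to the discrete pressure grid seen in training.
train = pd.read_csv("train.csv")               # assumed path
sub = pd.read_csv("submission.csv")            # assumed columns: id, pressure

grid = np.sort(train["pressure"].unique())     # discrete pressure values
pos = np.clip(np.searchsorted(grid, sub["pressure"].values), 1, len(grid) - 1)
left, right = grid[pos - 1], grid[pos]
sub["pressure"] = np.where(sub["pressure"] - left &lt; right - sub["pressure"], left, right)

sub.to_csv("submission_rounded.csv", index=False)
</code></pre>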
Google Cloud & NCAA® ML Competition 2018-Women's
#1 solution - lucky error
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2018-Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I thoroughly read the instructions on what I have to do, and I want to apologize. It is the first time this kind of licencing has been used(?) and I was pretty sure it does not mean the code should be published. I'll ask the Kaggle admins whether I really do not violate the rules by publishing the code. If not - I'll make a kernel in a few hours.</p> <p>oh. I thought smooth.spline was from the lme4 package, but it seems it's a generic function from the stats package :D lme4 was used for team strength estimation, and yes, TeamId for random effects</p> <p>You should go back to A1. There is not a word saying that my code is open source, rather that I have used open source to make it</p> <p>I feel this is becoming a pointless discussion as you cannot read the rules and understand what they say :) I might as well zip it and upload it to a file-sharing site with a zip password, then share the zip and give the password to the organizers. Does this change anything?</p> <p>I have used R, which is open-source. I made sure that my code has not incorporated any copyrighted software (that is why matlab, SAS and similar tools are not used in Kaggle) and therefore there are no copyrighted dependencies in my code. That is what the A1 rule asks from me. </p> <p>A2 has nothing to do with open source. Its purpose is that I must provide the source code in a free-of-charge manner, like uploading the code to the organizers' server. I think this rule is made on purpose so that there would be no additional costs for the organizer related to receiving the code (A1 covers that the code itself has no copyright licence)</p> <p>I don't know what you wanted to say with that. I am going to share it with the organizers, as required by this A.1 rule, and therefore I am not licensing my code. There is no rule about sharing it publicly.</p> <p>No, sorry. </p> <p>I guess these models are similar - I believe random effects are equivalent to beta parameters for each as.factor(T1) (maybe shifted or demeaned, but that does not matter too much)</p> <p>Try it yourself, I just open-sourced it. I did not track MAE to be honest.</p> <p>As requested, <a href="https://github.com/fakyras/ncaa_women_2018">here</a> is the source of my solution.</p> <p>xgboost supports linear models with its gblinear method (gives almost identical results to glmnet's ridge regression)</p> <p>I think logloss is a perfect metric for this kind of competition. Point-based metrics would be worse, as the point difference would be all about predicting who is going to dominate in games and who will not. It's not that interesting imho.</p> <ol> <li><p>Pretty much the same; one of the major differences was that I included some of the Massey ordinals for the men's competition. If I had not used Massey this year, I would have been in the top 50; last year I did not use it - ended up 19th. Going to skip them next year :)</p></li> <li><p>Carefully cross-validated regular season statistics for each team as features. Also implemented my own team strength estimator (also based on the regular season). Ended up with roughly 20 features. For feature selection and parameter tuning I did 5-fold CV repeated 10 times.
Submission predictions were bagged 10 times as well.</p></li> <li><p>VERY important - I would not even be in the top 10. Last year I ended up 19th in the men's competition (same pipeline, but RMSE). If I had used cauchy back then, I would have finished 8th (assuming I would only change the objective function in the pipeline). </p></li> <li><p>Random split on all tourney data. No year-by-year splits</p></li> </ol> <p>5 - 6. smooth.spline from lme4; default settings. I simply ran the GAM model on the pooled CV oof predictions. It is a very strong regularization - I am quite sure it is too strong for the given setup. Therefore, I kept it simple and did not over-complicate my CV.</p> <p>It may look like I revealed a lot, but there are a few things that are important but should be kept secret for upcoming competitions :)</p> <p>This result was totally unexpected, as I only spent 4 hours in total on this one; my main focus was the men's competition, for which I spent 3 days. However, the ideas from the men's competition transferred well to the women's, and I am happy they did!</p> <p>First, I did not use any external data and only used regular season data. After my final submission I noticed a small error on my side - I forgot to include the stage2 regular season file, and therefore used the Prelim2018 file. It turns out these files are very different, as the 2017 season was also affected. If I had used the full data, the same model would have placed only 10th! </p> <p>Surprisingly, my model did a good job on the only tourney upsets this year - #6 Oregon St. winning over #2 Baylor (16%) and #1 Notre Dame winning over #1 UConn (33%). </p> <p>My model is a single xgboost model. I only used tournament data as my training set. I modeled the win margin using the MAE metric together with the <a href="https://www.kaggle.com/c/allstate-claims-severity/discussion/24520#140163">cauchy</a> objective function, which is suitable for MAE optimization. Then the score difference predictions were used as inputs to a smoothing-splines GAM model to smoothly transform them into probabilities.</p> <p>This year I focused on very careful feature selection and it turned out to be a very important factor - I have tried but failed to create a better-performing private LB model so far. So it seems my solution is close to optimal with the data and ideas I had.</p> <p>My overall CV logloss was 0.4039551 (5-fold random split over all tournament data). CV year-by-year:</p> <pre><code>Seas  logloss
2010  0.4131130
2011  0.3587928
2012  0.4074102
2013  0.3928945
2014  0.3836127
2015  0.3529199
2016  0.5117136
2017  0.4111838
</code></pre> <p>CV estimates are pretty much stable and this year's results were on average no different from previous tournaments.</p> <p>Big thank you to the organizers for adding the women's competition this year and for changing the rules so that everyone's predictions are not shared during the tournament - it really made the tournament more enjoyable to watch. </p> <p>Also big thanks to the people who believed in me and wanted me to win this competition!</p> <p>my bad - fixed it :)</p> <p>There were some hand-coded rules in one of my submissions, but they didn't work out as UConn lost it in the end</p>
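<p>To make the cauchy-objective idea above concrete, here is a small illustrative sketch (in Python rather than the author's R) of how a Cauchy-style custom objective can be plugged into xgboost to approximate MAE optimization of the win margin. The scale constant <code>c</code> and the synthetic data are assumptions for illustration only, not the author's actual settings.</p> <pre><code>import numpy as np
import xgboost as xgb

def cauchy_obj(preds, dtrain, c=5000.0):
    """Cauchy loss rho(x) = c^2/2 * log(1 + (x/c)^2), a smooth surrogate for MAE."""
    x = preds - dtrain.get_label()                             # residual on the score margin
    grad = x / (1.0 + (x / c) ** 2)                            # d rho / dx
    hess = (1.0 - (x / c) ** 2) / (1.0 + (x / c) ** 2) ** 2    # d^2 rho / dx^2 (keep c large so this stays positive)
    return grad, hess

# Hypothetical training data: X holds matchup features, y the score margin.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 20)), rng.normal(scale=10.0, size=1000)
dtrain = xgb.DMatrix(X, label=y)

model = xgb.train({"eta": 0.05, "max_depth": 4, "base_score": 0.0},
                  dtrain, num_boost_round=200, obj=cauchy_obj)
margin_pred = model.predict(dtrain)   # then map margins to probabilities
</code></pre> <p>The predicted margins would then be passed through a monotone margin-to-probability mapping, such as the smoothing-spline GAM the author describes.</p>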
Google Cloud & NCAA® ML Competition 2019-Men's
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Men's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!! Good luck with everything!</p> <p>Thank you very much, raddar!</p> <p>Hi Corey,</p> <p>Actually, I wouldn’t consider either of those two games as much an upset as the seeding suggests, as the model predicted almost a 40% chance for both Liberty and Minnesota winning.</p> <p>What you are suggesting for the future is really interesting, I think it would make game theory an integral part of this competition, and maybe a more important part than machine learning. If everyone uses this same strategy, then a lot of people would happen to be lucky enough to predict 2-3 games correctly, so maybe for the win, one would need to have 4-5 or more correct guesses. Anyways, we shall see that in the future.</p> <p>First, I would like to express my gratitude to the organizers of this competition, and most importantly to raddar for sharing his winning solution in last year’s women’s competition, as it was extremely helpful to me. </p> <p>In this challenge, luck has had a very big role in determining the winning teams. In my approach to solving this, I decided to rely on raddar’s model for the men’s competition with some slight modifications to first make it more conservative, and only then take chances with a few first round games.</p> <p>Originally, raddar’s model had 1-4 seeded teams beating 13-16 seeded teams with 100%. As this strategy rarely works in the men’s competition, I decided not to employ it. This turned out to be a good idea because of the UC Irvine upset.</p> <p>Assuming this model would be enough for a decent position, but not enough to land in the top 5, I decided to try my luck by overriding 3 first-round games (assigning 100% probability to one team), as I felt that was a good trade-off between risk-taking and the possibility of improving the score. That’s where I indeed got lucky. Actually, this same approach did not work out too well for me in the women’s competition where I ended up at the 434th position out of 500 :D. </p> <p>Lastly, since I had the chance to compete with two submissions, I picked one 50-50 first round game to play both outcomes (as this could improve any model by ln2/63 = 0.011 points). </p> <p>Here is the link to my solution: <a href="https://github.com/salmatfq/KaggleMarchMadnessFirstPlace">https://github.com/salmatfq/KaggleMarchMadnessFirstPlace</a> I hope everyone enjoyed the challenge, for me, it was definitely exciting to watch it unfold.</p>
Google Cloud & NCAA® ML Competition 2019-Men's
2nd Place Model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Men's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the competition organizers and sponsors, this has been a fun project for me over the years to learn how to code in R (hence the code is fairly ugly, because I didn't rewrite it when I learned more efficient ways of doing things). The model itself is actually fairly simple in its construction but fairly strong in its predictions and its love for Virginia. I made no adjustments to the model and only made one submission (because I forgot about doing it until the night before), so I didn't utilize any game theory, which would have been smart.</p> <p>It uses 8 basic variables to train and score the model; each variable is the difference between the two teams playing in each category. KenPom variables: Adjusted Offensive Efficiency, Adjusted Defensive Efficiency and Strength of Schedule Rating. These variables were pulled from the website and do have a slight bias, as tournament data is incorporated in them for historical years, so the fit is a little biased, but they are readily available and easy enough to pull.</p> <p>Final Massey Ratings: I tested several of the rankings out and these seemed to work the best.</p> <p>Calculated fields from the data provided: season defensive rebounding. Odd features, from the last 30 days of the season: difference in the variance of game-to-game free throw percentage, and difference in the variance of game-to-game turnovers.</p> <p>Seed Difference: This variable seems silly to include but should account a little bit for the "eye test" used in seeding teams, in things that aren't accounted for in other metrics. It obviously biases the predictions towards the higher seeds, so I'd like to give a shout-out to the committee for their great job of seeding this year.</p> <p>Here is the link to my code on GitHub. <a href="https://github.com/gjwierz/NCAA_Kaggle_2019">https://github.com/gjwierz/NCAA_Kaggle_2019</a></p>
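<p>As a purely illustrative sketch (in Python, although the author's repository is in R), the "difference between the two teams in each category" construction could look roughly like this; the file layout and column names here are assumptions, not the author's actual data.</p> <pre><code>import pandas as pd

team_stats = pd.read_csv("team_season_stats.csv")   # assumed: TeamID, Season, stat columns
games = pd.read_csv("tourney_matchups.csv")         # assumed: Season, T1_TeamID, T2_TeamID

feat_cols = ["AdjOE", "AdjDE", "SOS", "Massey", "Seed"]  # stand-ins for the 8 variables

def add_diff_features(games, team_stats, cols):
    for side in ("T1", "T2"):
        stats = team_stats.rename(columns={c: f"{side}_{c}" for c in cols})
        stats = stats.rename(columns={"TeamID": f"{side}_TeamID"})
        games = games.merge(stats, on=["Season", f"{side}_TeamID"], how="left")
    for c in cols:
        games[f"{c}_diff"] = games[f"T1_{c}"] - games[f"T2_{c}"]   # team 1 minus team 2
    return games

train = add_diff_features(games, team_stats, feat_cols)
</code></pre>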
Google Cloud & NCAA® ML Competition 2019-Men's
3rd Place Solution - Madtown Machine Learning Madness
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Men's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Team: Madtown Machine Learning Madness</h1> <ul> <li>Jonathan (Jack) Smith - [email protected]</li> <li>Jay Jojo Cheng - [email protected]</li> <li>Young Lee - [email protected]</li> </ul> <h1>Team Background</h1> <p>We are graduate students at University of Wisconsin - Madison. - Jack works as a software engineer for Epic. He is currently pursuing a masters degree in computer science. - Jojo is a first year PhD student in the department of Biostatistics and Medical Informatics. He spent a few years working in healthcare prior. He studied math in undergrad. - Young will be graduating in May with masters in statistics and computer science. He interned at Amazon as data scientist and is currently searching for jobs in New York City.</p> <h1>Summary</h1> <p>We used a logistic regression model to predict the winning probabilities. This worked well when one team was clearly better than the other.</p> <p>However, in March Madness, there are many evenly matched games, which we assumed to be unpredictable. For these games, we randomly chose a team to win. Specifically, we chose the team with higher ID to win in our first submission, and the team with lower ID to win in our second submission. With this approach we were able to reduce our logloss from about 0.473 (top 200 in the leaderboard) to our final score of 0.427.</p> <h2>Model</h2> <p>Our final model has only two features: 1. TeamID 2. Points each team won by during regular season</p> <p>Our model extends the <a href="https://youhoo0521.github.io/kaggle-march-madness-men-2019/models/bradley_terry.html">Bradley-Terry logistic regression model</a>. We estimate parameters that represent the level of every team based on the winning outcomes of regular season games. During training, we shared these parameters with another linear regression model that models the difference in the points earned by each team in a game as the response variable. By doing this, we were able to use the game scores, in addition to the winning outcomes, to estimate the team levels.</p> <p>In this model, we deliberately limited the use of game-level features such as boxscore statistics. The main reason is that, for many standard models, these features would provide misleading information. To illustrate this point, suppose that Wisconsin scored 100 points and had excellent offensive efficiency rating in a game. Against a strong defensive opponent, this could mean that Wisconsin has very strong offense. Against a very weak defensive opponent, it’s plausible that Wisconsin’s offense is mediocre. Without knowing the opponent, game-level features provide incomplete information about the team.</p> <p>We experimented with various combinations of boxscore feature engineering, ML models, and ensembling. In the end, our simple logistic regression was the best predictor.</p> <h2>Random Guessing (Submission Strategy)</h2> <p>Some Kagglers used two submission to flip one game (100% to 0%). 
With our strategy described below, we flipped 31 closely-matched games (64% to 36%).</p> <h3>Motivation</h3> <p>We noticed that logistic regression model correctly estimated its own uncertainty; when it predicts that a team will win with 50% probability, it is correct about 50% of the time. However, it’s impossible to get competitive log-loss by making many conservative predictions near 50% probability. We hypothesized that when logistic regression is not confident, it's because the game is evenly matched and is unpredictable.</p> <h3>Strategy</h3> <p>When the game is unpredictable (model prediction is less than 77%), we picked the team with lower ID to win with 64% probability. In our second submission, we predicted the other team to win with 64% probability. Assuming that team IDs have nothing to do with winning, this is just random guessing.</p> <h3>Why does this work?</h3> <p>Under log-loss scoring, this strategy works if there is an uneven split between the number of correct and incorrect predictions. The more extreme the split, the better this works. For example, 70:30 split is better than 60:40. If these games are truly unpredictable, there’s about 30% probability that the split will be favorable for our strategy (binomial distribution). Luckily, this was the case for this year's tournament!</p> <h1>References</h1> <ul> <li>Model summary document as per the guideline: <a href="https://drive.google.com/open?id=1SR4OVaLd-qBUGBZD3Ml5W4bGpdeCZ9NVk8Tcv4XzqU0">https://drive.google.com/open?id=1SR4OVaLd-qBUGBZD3Ml5W4bGpdeCZ9NVk8Tcv4XzqU0</a></li> <li>Minimal code to generate submissions: <a href="https://github.com/YouHoo0521/kaggle-madtown-machine-learning-madness-2019">https://github.com/YouHoo0521/kaggle-madtown-machine-learning-madness-2019</a></li> <li>Our development repo contains code and notebooks we used to explore feature engineering and modeling: <a href="https://github.com/YouHoo0521/kaggle-march-madness-men-2019">https://github.com/YouHoo0521/kaggle-march-madness-men-2019</a></li> </ul>
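<p>For concreteness, here is a minimal sketch of the two-submission flipping strategy described above, assuming a prediction file where <code>Pred</code> is the model's probability that the lower-ID team wins (the thresholds are the ones quoted in the write-up):</p> <pre><code>import pandas as pd

sub = pd.read_csv("model_predictions.csv")          # assumed columns: ID, Pred

# "Unpredictable" games: the model's confidence in either team is below 77%
uncertain = sub["Pred"].lt(0.77) &amp; sub["Pred"].gt(0.23)

sub1 = sub.copy()
sub1.loc[uncertain, "Pred"] = 0.64                  # submission 1: back the lower-ID team
sub2 = sub.copy()
sub2.loc[uncertain, "Pred"] = 0.36                  # submission 2: back the higher-ID team

sub1.to_csv("submission_low_id.csv", index=False)
sub2.to_csv("submission_high_id.csv", index=False)
</code></pre>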
Google Cloud & NCAA® ML Competition 2019-Men's
4th place solution (neural network)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Men's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>See my github: <a href="https://github.com/DaveLorenz/NCAAMarchMadnessNN">https://github.com/DaveLorenz/NCAAMarchMadnessNN</a> </p> <p>Note that the "Model summary.md" file in my github repository contains the summary that I pasted below (but with pictures). Please let me know if you have any questions!</p> <p>TL;DR: I used historical NCAA men's basketball data to train a neural network that predicted win probabilities for each game in the 2019 NCAA tournament. The neural network achieved a 0.42788 log loss and placed 4th in the Kaggle Google Cloud &amp; NCAA® ML Competition 2019-Men's. </p> <p>About me: I am a master's candidate at the University of Virginia's Business Analytics program. This was my first Kaggle competition, but our UVA cohort regularly has prediction challenges with a leaderboard for assignments. I also regularly wrangle data and build econometric models for my job as an Economic Consultant. I largely credit my success in the competition to my educational training at UVA, training on the job, and some hard work. I spent about 30 hours collecting data and training/testing my model for the competition.</p> <p>Summary: I collected and combined historical regular season data, tournament data, and metrics from leading basketball statisticians from 2004-2019. Ken Pom's efficiency margin was the most important feature. I trained a variety of models (i.e., boosted trees, random forest, neural network) and tuned hyperparameters (e.g., learning rate, max depth of trees, number of hidden layers/nodes). The neural network performed the best. This neural network, with the hidden layers and nodes I used to enter the competition, is depicted in the repository. I wrote my code in Python and used scikit-learn and xgboost to train, test, and evaluate models. Training the model on the 60K observations takes less than 15 minutes. My model picked UVA to win all of their games (go Hoos!).</p> <p>Feature selection: My final model included the following 12 features for each team (i.e., 24 total team-specific features):</p> <ul> <li>Ken Pom end-of-season ranking</li> <li>Ken Pom efficiency margin</li> <li>Ken Pom offensive efficiency</li> <li>Ken Pom defensive efficiency</li> <li>Ken Pom "luck"</li> <li>Regular season wins against top-25 ranked teams</li> <li>Regular season average point margin</li> <li>Regular season field goal percentage</li> <li>Regular season 3-point field goal percentage</li> <li>Binary indicator for whether the team played 20+ tournament games during 2004-2018</li> <li>Binary indicator for whether the team is in the ACC, B10, B12, or SEC</li> <li>Seed of team</li> </ul> <p>I also included a binary indicator for whether the game was a tournament game. I collected Vegas line data and considered using the line as a feature for the first round (similar to the Matthews and Lopez approach in the 2014 NCAA Kaggle win). Note that the line is not available for predicting the tournament after the first round, because we don't yet know the matchups. However, the line can help predict the first round. Surprisingly, the line did not improve model log loss. Therefore, I decided to exclude it.
It is difficult to extract feature importance in a neural network. The figure in my GitHub repository highlights that the efficiency margin (home and road) was very important in the boosted tree model.</p> <p>Model: As mentioned above, I trained a variety of models (boosted trees, random forest, neural network) and tuned hyperparameters (learning rate, max depth of trees, number of hidden layers/nodes). The neural network with 3 hidden layers (nodes of 7, 5, 3) described above performed the best (i.e., had the lowest log loss). More specifically, the neural network I entered into the competition had a log loss of 0.42-0.45 in various test sets. The other two models were above 0.50 log loss. I did not consider ensembling the various models, because the neural network was far superior to the random forest and boosted trees. In future competitions, I plan to evaluate whether the application of a soft voting classifier improves model performance. A voting classifier would (1) apply greater weight to the neural network and (2) take advantage of model diversity.</p> <p>I trained my model using both tournament and regular season games and tested on two separate holdout sets: (1) the 2016 tournament only and (2) the 2017/2018 tournaments. A holdout set is some portion of historical data that you do not allow your model to “see.” This allows you to see how your model performs out of sample and evaluate where your model underfit or overfit. I tuned a number of hyperparameters for the neural network. Unsurprisingly, the relu activation function performed better on the test set than the sigmoid activation function (i.e., logistic). I also tried a number of hidden layers and nodes within those layers until I was able to find the combination that led to the lowest test log loss. Given that there is no risk involved with entering a Kaggle competition, I did not focus much on model stability. However, my model's log loss was 0.42 in one holdout set and 0.45 in another, which suggests the predictions are fairly stable. </p>
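<p>For readers who want to see the shape of such a model, here is a small, hypothetical scikit-learn sketch of a 3-hidden-layer (7, 5, 3) ReLU network of the kind described above; the random feature matrix simply stands in for the 24 team features plus the tournament-game indicator.</p> <pre><code>import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(60000, 25)), rng.integers(0, 2, 60000)   # placeholder data
X_hold, y_hold = rng.normal(size=(2000, 25)), rng.integers(0, 2, 2000)       # placeholder holdout

net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(7, 5, 3), activation="relu",
                  max_iter=500, random_state=0),
)
net.fit(X_train, y_train)
print("holdout log loss:", log_loss(y_hold, net.predict_proba(X_hold)[:, 1]))
</code></pre>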
Google Cloud & NCAA® ML Competition 2019-Men's
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Men's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Model Summary</p> <p>Part 1 - Competition Background: Competition Name: Google Cloud &amp; NCAA® ML Competition 2019-Men's; Team Name: Sharp Sports Analytics; Private Leaderboard Score: 0.43148; Private Leaderboard Place: 5th</p> <p>Name: Jordan Sundheim; Location: Reno, NV; Email: [email protected]</p> <p>Part 2 - Personnel Background: After four seasons of coaching college I decided I was more interested in sports statistics than in coaching sports. I went back to school and got a Master's in Computational Economics at Duke. This was my 5th time entering the contest, and reading the forums and interacting with the other Kagglers was huge in my success. The series of articles written after the 2014 competition were quite informative and helped me improve my model. I decided to enter the competition this season because I got the itch. In fact, I was going to skip the competition this year, but the day before the tournament I decided to enter. I spent as much time as I could, but given that I decided to enter the day before the tournament, this was the least amount of time I've spent on this competition in my five entries.</p> <p>Part 3 - Model: Simple is better, especially when it comes to sports data. While neural nets and other machine learning methods tend to be quite popular these days, I stuck to the basics and used a generalized linear model (probit). By far the most important features in my model were offensive and defensive scoring efficiency. While many others in the competition relied on Ken Pom's numbers, I generated my own. To generate my predictions I used a PostgreSQL database to store and manipulate the data, and I used R to run the model. The actual execution time for the model was quite quick. The script I ran to make my submission took a matter of seconds.</p> <p>Part 4 - Feature Selection: Features ranked by importance:</p> <ul> <li>Offensive Efficiency</li> <li>Defensive Efficiency</li> <li>Win percentage against the spread</li> <li>Top 10 / Top 25 pre-season ranking</li> <li>School in major conference - binary 0/1</li> </ul> <p>Feature selection was a huge part of my model. I used three different approaches to construct my model. 1) Keeping it basic - O/D efficiency has been documented as a major driver, so I built one model based simply on those. 2) Box score stats - I just put all the box score stats into a model and saw how they did. 3) Thinking like a gambler - I added features such as win percentage against the spread, win streak (straight up), and pre-season rankings. My final model was a weighting of these three approaches. I manipulated all statistics in order to account for opponent strength.</p> <p>Part 5 - Training Methods: I used leave-one-season-out validation. I would build the models on 15 seasons of data, leaving one season out, and did that a total of 16 times to see how the model would perform out of sample. Model weights were quite ad hoc. I remember reading somewhere that 80/20 spread/data was good and I went with it.</p> <p>Part 6 - Interesting Findings: Luck matters. I manually manipulated some probabilities and my success can be attributed to Iowa's first round victory.
If they lost I would have been screwed. As far as generating quality numbers, the stats themselves aren't that useful. There is so much variation in the quality of opponents in college basketball that you need to account for who someone plays in order to appropriately use the statistics.</p> <p>Part 7 - Simple Features and Methods To begin with I think my model was quite simple. But to simplify it further, offensive and defensive efficiency are the drivers. Those two stats alone get you 90% of the way.</p> <p>Part 8 - Model execution time Generating the historical statistics does take some time, running the adjusted box score stats for each season back to 2003 took about 20 minutes. But the actual execution time for the models and the submission form were quite quick. Maybe 30 seconds to run everything once the statistics were in the correct format.</p> <p>Part 9 - References I heavily relied on the data provided by Kaggle. There were three pieces of data I gathered from outside sources. Per-Season Top 25 Rankings - Sports Reference <a href="https://www.sports-reference.com/cbb/seasons/2019-polls.html">https://www.sports-reference.com/cbb/seasons/2019-polls.html</a> Team win percentage against the spread - Vegas Insider <a href="http://www.vegasinsider.com/college-basketball/">http://www.vegasinsider.com/college-basketball/</a> Point Spread Data - Sportsbook Review <a href="https://www.sportsbookreviewsonline.com/scoresoddsarchives/ncaabasketball/ncaabasketballoddsarchives.htm">https://www.sportsbookreviewsonline.com/scoresoddsarchives/ncaabasketball/ncaabasketballoddsarchives.htm</a></p> <p>My solution files are attached in .zip format as per the directions</p> <p>Lots of data sources out there these days. The action network app has been my favorite source for pre-game (and live) lines.</p>
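<p>Here is a small sketch of the leave-one-season-out validation loop around a probit GLM, in Python for illustration (the author used R and PostgreSQL); the dataframe and its feature columns are placeholders for the variables listed in Part 4.</p> <pre><code>import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import log_loss
from sklearn.model_selection import LeaveOneGroupOut

df = pd.read_csv("matchup_features.csv")            # assumed: one row per historical game
feature_cols = ["oe_diff", "de_diff", "ats_diff", "top25_diff", "major_conf_diff"]
X = sm.add_constant(df[feature_cols])
y, seasons = df["team1_won"], df["Season"]

scores = []
for tr_idx, te_idx in LeaveOneGroupOut().split(X, y, groups=seasons):
    probit = sm.Probit(y.iloc[tr_idx], X.iloc[tr_idx]).fit(disp=0)
    preds = probit.predict(X.iloc[te_idx])           # win probabilities for the held-out season
    scores.append(log_loss(y.iloc[te_idx], preds))

print("mean leave-one-season-out log loss:", np.mean(scores))
</code></pre>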
Google Cloud & NCAA® ML Competition 2019-Women's
15-th place solution (a bit risky)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Without override: 0.36626 With override: 0.34766</p> <p>Baseline: <a href="/raddar">@raddar</a> 's MAE optimization, transcribed in python with quality features generated from R code</p> <p>A Few additions:</p> <ol> <li><p>I tested out xgb, lgb and carboost separately. Since catboost's python custom metric is too slow and adding the cauchy objective in c++ and recompiling ended up not working, I ended up using catboost with rmse metric.</p></li> <li><p>Baseline's 'quality' feature ended up being really useful, but my separate observation is: although some teams are equally competitive, one may be more stable than the other, so I engineered features such as truncated mean and standard deviation of quality.</p></li> <li><p>Fitting a spline to the score-diff -&gt; win-prob curve will cause the graph to have two dips around prob = 0 and prob = 1, which is counter-intuitive: an increase in score-diff should only increase the winning probabilities. So I changed UnivariateSpline to IsotonicRegression.</p></li> <li><p>Clipping final [0.025, 0975]</p></li> </ol> <p>Overrides:</p> <ol> <li><p>According to the bracket generation program I've found out that Baylor's winning probability is around 0.68, so I made a gamble to set Baylor's winning probability against the two most likely opponents in the final to 1, and it paid of pretty well!</p></li> <li><p>Seed overrides as described in baseline. I used the same approach in Men's competition and get burned for that :(</p></li> </ol> <p>Now looking back, I should've set Baylor's winning probability against all other teams to 1, which would be more risky than my current setup but will pay off better, and I think that's my solution's difference from a gold solution...... (more faith!) And it's kinda sad that I'm only 4 place away from a gold medal (and Grandmaster status requires a solo gold), so I guess I'll have to wait for another competition :)</p> <p>PS: If there's enough interest I will upload my solution via Kaggle kernel.</p>
Google Cloud & NCAA® ML Competition 2019-Women's
1st Place Solution (Be Aggressive, be be Aggressive)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Before I get into the explanation of how I made my predictions (sure to be a boring letdown), I would like to thank Jeff, Addison, and everyone at Kaggle that put work into administering this wonderful competition.</p> <p>As an actuary, I work extensively with obtaining and analyzing large quantities of data on a regular basis. Furthermore, basketball is my favorite sport, both to play and to watch. Quantitative analysis of basketball, in particular college basketball, has been a passion of mine for the past decade. I have written multiple articles on the website I host (1-3-1 Sports) analyzing college basketball from a quantitative perspective. Additionally, I also act as a contributor to the website Busting Brackets, where I analyze basketball in a more qualitative manner.</p> <p>I used simple linear regression in order to create a projected margin of victory for all possible iterations of games between each of the 64 teams playing in the tournament. The margin of victory was generated using offensive and defensive efficiency metrics calculated for each team using a points per possession methodology such that possessions are approximated as Field Goals Attempted – Offensive Rebounds + Turnovers + .5 * Free Throws Attempted. I took these calculated margins of victory and compared them to the power scores and probabilities produced here (<a href="https://projects.fivethirtyeight.com/2019-march-madness-predictions/womens/?ex_cid=rrpromo">https://projects.fivethirtyeight.com/2019-march-madness-predictions/womens/?ex_cid=rrpromo</a>) as a reasonability check. Finally, I manually input predictions for as many match-ups as I could given the submission deadline. I prioritized prediction inputs in order from most likely games to be played (e.g. guaranteed-first round matchups) to least likely games to be played in order to optimize my available time.</p> <p>My exposure to years of playing, coaching, and analyzing basketball allowed me to familiarize myself with the best indicators of victory within the sport. Time and time again, my research has shown that offensive and defensive efficiency work best and keep models simple.</p> <p>Even with my exposure to the sport, I think more than anything what allowed me to achieve my final score was the aggressive predictions that I made. Take that with a grain of salt, because this methodology for making predictions will result in a wildly volatile performance year over year. When I finish dead last in next year’s contest because of an unexpected upset, don’t be surprised!</p> <p>To provide a tangible example, I predicted all 1 through 3 seeds would win their first round games with 99% probability. This was based on the fact that in the past 20 years of the Women’s NCAA Tournament, one, two, and three seeds are undefeated in the first round against fourteen, fifteen, and sixteen seeds. 
Considering only one of these twelve games had a single digit margin of victory during this year’s tournament, in hindsight, it seems like a strategy of risk worth taking to have a marginal advantage against other scores.</p> <p>In the coming months, my plan is to create a much more robust methodology for creating accurate probabilistic predictions, so that I can hopefully share that (and use it) in the 2020 edition of this competition.</p> <p>I hope everyone had a great time during this competition, and I hope it continues to expand (both in participation and data availability) in years to come!</p> <p>Thanks!</p> <p>I wasn't quite as aggressive in my predictions for the men's competition. In that contest, I chose to override only 1 and 2 seeds, because there have been six instances of a 14-seed beating a 3-seed in the men's tournament in the past decade. </p> <p>These patterns can be seen here:</p> <p>Men's: <a href="https://en.wikipedia.org/wiki/NCAA_Division_I_Men%27s_Basketball_Tournament_upsets">https://en.wikipedia.org/wiki/NCAA_Division_I_Men%27s_Basketball_Tournament_upsets</a> Women's: <a href="https://en.wikipedia.org/wiki/NCAA_Division_I_Women%27s_Basketball_Tournament_upsets">https://en.wikipedia.org/wiki/NCAA_Division_I_Women%27s_Basketball_Tournament_upsets</a></p> <p>Obviously this isn't a foolproof strategy (it would not have worked last year with UMBC beating Virginia), but I think strategy selection depends on the endgame: consistency in performance versus high risk/high reward.</p> <p>Regression Kernel: <a href="https://www.kaggle.com/jleecook/1st-place-regression-to-projected-mov/">https://www.kaggle.com/jleecook/1st-place-regression-to-projected-mov/</a> Submissions: <a href="https://github.com/jleecook/NCAAW">https://github.com/jleecook/NCAAW</a></p>
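<p>For illustration, the 99% first-round overrides described above could be applied to a submission roughly as follows; the seeds file name and the ID format ("2019_Team1_Team2" with Team1 &lt; Team2) are assumptions based on the competition's data layout.</p> <pre><code>import pandas as pd

seeds = pd.read_csv("WNCAATourneySeeds.csv")
seeds = seeds[seeds["Season"] == 2019]
seed_num = dict(zip(seeds["TeamID"], seeds["Seed"].str[1:3].astype(int)))

sub = pd.read_csv("submission.csv")                 # assumed columns: ID, Pred
parts = sub["ID"].str.split("_", expand=True).astype(int)
s1, s2 = parts[1].map(seed_num), parts[2].map(seed_num)

# First-round style matchups between a 1-3 seed and a 14-16 seed
sub.loc[(s1 &lt;= 3) &amp; (s2 &gt;= 14), "Pred"] = 0.99      # lower-ID team is the strong seed
sub.loc[(s2 &lt;= 3) &amp; (s1 &gt;= 14), "Pred"] = 0.01      # higher-ID team is the strong seed
sub.to_csv("submission_overridden.csv", index=False)
</code></pre>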
Google Cloud & NCAA® ML Competition 2019-Women's
2nd place solution: too risky (don't try at home!)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>... and the kernel version: <a href="https://www.kaggle.com/panosc/2nd-place-solution-raddar-overrides">Link to kernel</a></p> <p>First of all, many thanks to @raddar for making his script from last year public (and for using R)! He helped me -and I suppose most of us- a lot, but also raised the bar. And thanks to all participants for sharing so many insights!</p> <p>Second, I know nothing about NCAA and especially the women's tournament. I am only familiar with Seton Hall (alma mater of Nikos Galis, the legend of Greek and European basketball) and Loyola (my wife happened to be an exchange professor in Loyola Chicago last March during their once in a lifetime feat, brought me back a Ramblers coffee mug!), so I am far far away from being an expert! Which proved a good thing... </p> <p>Having such a good script as a base meant that the machine learning part was already close to optimized. This meant that I would need to find an approach that would give some competitive advantage under certain conditions == increase the risk.</p> <p>Combining practically Raddar's model with an ELO based approach of my own, I saw that there were big differences in terms of strength and that I could rank the teams in rather distinct groups: 1-2. Baylor and Notre Dame 3. Oregon 4-5. Mississippi and Connecticut 6. Louisville 7. Marquette 8- everyone else</p> <p>As I said, I know nothing about the NCAA. I only read in the discussions that the top teams are much better than the rest. So my strategy to make the model more confident was to give the first four teams in my ranking a 0.95 in all their games (ordering by strength for games between the top four), keeping the binary option for the Baylor- Notre Dame game.</p> <p>There were a few surprises in the first 2 rounds, so my submissions did goodish, in the bronze medal area. In Sweet-16 the heuristic worked great (as I suppose happened with everyone, since there were not any surprises), so I rise to the silver medal area. In Elite-8 I get very lucky: my heuristic gave 0.95 to Oregon(seed 2) vs Miss. St. (seed 1) and the model still saw a win of UConn (seed 2) vs Louisville (seed 1).</p> <p>The all-or-nothing moment was the Notre Dame- UConn semifinal. Having no idea about NCAA, I didn't know about the <a href="https://en.wikipedia.org/wiki/Connecticut%E2%80%93Notre_Dame_women%27s_basketball_rivalry">Connecticut–Notre Dame women's basketball rivalry</a> Had I known about it, I wouldn't have allowed the model to bet 0.95 on a Notre Dame win! </p> <p>Needless to say, the same strategy for the men's tournament was a real disaster!</p>
Google Cloud & NCAA® ML Competition 2019-Women's
3rd Place - Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Shout out to the organizers of this tournament. It was quite the rollercoaster ride as the tournament played out!</p> <p>I used a linear regression model to predict the spread of each match-up. With pymc3's API, I was able to obtain team-level posterior distributions on FG attempts and their percentages. This allowed me to finally simulate a high number of match-ups between two teams and express a team's chance of winning as a function of the number of simulated matches won. Without a doubt, forcing high seed/low seed matches definitely helped with my final model score and this idea was inspired by @raddar from his 2018 winning solution.</p> <p>If anybody cares, here's a breakdown of my approach (copy+pasted from my README.md):</p> <blockquote> <p>A linear regression model was put together using pymc3. All data pre-processing and post-processing steps were handled using both numpy and pandas.</p> <p>i. FG attempts as team-level Gammas vs Normals (or Poissons for counts) to allow using loc/scale parameters instead of mu/sd ii. FG percentages as team-level Betas vs Bounded (0,1) Normals to allow using loc/scale parameters instead of mu/sd iii. League-level Half-Cauchy error term to soak up some variance iv. Assume team's scoring propensity is normally-distributed and treat as observed variable v. Assume points spread between two teams is normally-distributed and treat as observed variable</p> <p>Evaluation of odds of one team winning over another was done via simulation of team-level posterior scoring propensities (calculated from above).</p> <p>i. Simulation for each match-up produces an output distribution representing the spread of each simulated match ii. The win probability is determined by the number of simulated matches won over the total number of simulated matches iii. 5000 simulations were run for each potential match-up iv. Matches exempt from this process for overridding odds are high seed/low seed tournament matches</p> </blockquote> <p>The Github link to the solution can be found below: <a href="https://github.com/duythedewey/Kaggle_NCAA_Womens_2019">https://github.com/duythedewey/Kaggle_NCAA_Womens_2019</a></p>
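<p>The simulation step in the breakdown above is easy to picture with a small numpy sketch; the posterior draws here are synthetic placeholders rather than samples from the actual fitted pymc3 model.</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
n_sims = 5000
team_a_scores = rng.normal(loc=72.0, scale=8.0, size=n_sims)   # posterior draws of A's scoring propensity
team_b_scores = rng.normal(loc=66.0, scale=9.0, size=n_sims)   # posterior draws of B's scoring propensity

spread = team_a_scores - team_b_scores                         # simulated point spread per match
p_a_wins = (spread &gt; 0).mean()                                 # fraction of simulated matches won
print(f"P(team A beats team B) ~ {p_a_wins:.3f}")
</code></pre>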
Google Cloud & NCAA® ML Competition 2019-Women's
4th place solution: 538 and Baylor Winning
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to the organizers and to Kaggle... and to 538 and their predictions and ELO ratings... and to Baylor!</p> <p>My hope was to make predictions that were good enough and then add just enough guessing. </p> <p>Here's what I did in more detail:</p> <ol> <li><p>Used 538's predicted probabilities for all first round games.</p></li> <li><p>Used 538's ELO ratings to generate probabilties for all other games.</p></li> <li><p>Adjusted second round games for home field advantage (the first round probabilities already included 538's home field advantage adjustment).</p></li> <li><p>Gave Baylor 100% to win all games.</p></li> <li><p>Created one bracket where the winner of the Chicago region beats the Albany region winner (100%) and another where Albany winner beats the Chicago winner (100%).</p></li> </ol> <p>My code is here: <a href="https://github.com/jfcross4/kaggle_NCAA/blob/master/stage2_womens.R">https://github.com/jfcross4/kaggle_NCAA/blob/master/stage2_womens.R</a></p>
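<p>For steps 2-3 above, converting Elo ratings into a win probability with a home-court bump typically looks like the sketch below; the 400-point logistic scale and the 100-point home bonus are standard, hypothetical constants rather than 538's exact values.</p> <pre><code>def elo_win_prob(elo_a, elo_b, home_bonus_a=0.0):
    diff = (elo_a + home_bonus_a) - elo_b
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# Example: a second-round game hosted by team A
print(elo_win_prob(2400, 2300, home_bonus_a=100))
</code></pre>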
Google Cloud & NCAA® ML Competition 2019-Women's
5th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to @raddar for providing a great blueprint. </p> <p>Available here: <a href="https://github.com/dbrait/ncaa_womens/blob/master/sub_model.R">https://github.com/dbrait/ncaa_womens/blob/master/sub_model.R</a></p>
Google Cloud & NCAA® ML Competition 2019-Women's
Bronze Medal Solution - No ML Model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to Baylor University, and thanks for hosting this exciting competition! </p> <p>I ended up with a bronze medal, but want to share my simple solution, based on the <a href="https://www.kaggle.com/addisonhoward/basic-starter-kernel-ncaa-women-s-dataset-2019">starter kernel</a>. </p> <p>My solution is not complicated: </p> <p>It predicts the probability of an upset, i.e. the low-seed-rank team beating the high-seed-rank team, by aggregating past game results. </p> <p><a href="https://www.kaggle.com/takaishikawa/no-ml-modeling">https://www.kaggle.com/takaishikawa/no-ml-modeling</a></p> <p>I welcome any comments from you.</p>
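<p>For anyone curious what "aggregating past game results" by seed matchup can look like, here is a rough pandas sketch; the file and column names follow the women's competition data layout and should be treated as assumptions.</p> <pre><code>import pandas as pd

results = pd.read_csv("WNCAATourneyCompactResults.csv")
seeds = pd.read_csv("WNCAATourneySeeds.csv")
seeds["SeedNum"] = seeds["Seed"].str[1:3].astype(int)
seed_map = seeds.set_index(["Season", "TeamID"])["SeedNum"]

results["WSeed"] = seed_map.loc[list(zip(results["Season"], results["WTeamID"]))].values
results["LSeed"] = seed_map.loc[list(zip(results["Season"], results["LTeamID"]))].values

# Historical win rate of the better (numerically lower) seed for each seed pairing
better = results[["WSeed", "LSeed"]].min(axis=1)
worse = results[["WSeed", "LSeed"]].max(axis=1)
better_seed_won = (results["WSeed"] == better).astype(float)
win_rate = better_seed_won.groupby([better, worse]).mean()
print(win_rate.head(10))                            # upset probability is 1 - win_rate
</code></pre>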
Google Cloud & NCAA® ML Competition 2019-Women's
First Competition and Silver Medal!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & NCAA® ML Competition 2019-Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I can't believe how much fun this competition was. I was as high as 2nd during it, but finished 34th. This outcome is way better than expected and I am super excited to enter more competitions. Congrats to all that surpassed their expectations!</p>
Google Cloud & YouTube-8M Video Understanding Challenge
1st Place approach (WILLOW)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & YouTube-8M Video Understanding Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Except for random sampling of frames (from the starter code), nothing else.</p> <p>I actually did 99% of the work (full time for 2 months). Josef and Ivan are my PhD advisors; they mainly helped me with the paper (and let me use the lab GPU cluster ahah). The 25-model ensemble includes non-gated variations of the method + different numbers of clusters + small modifications of the Context Gating and variations of the NetVLAD architecture: nothing really interesting I think.</p> <p>Hi, sorry for the late post, we wanted the paper to be nice to read :).</p> <p>Here is the report: <a href="https://arxiv.org/abs/1706.06905">https://arxiv.org/abs/1706.06905</a></p> <p>Code: a github repo to reproduce a winning submission (not the best one). It should get 84.698 on the private leaderboard. It is a very simple average of 7 model checkpoints. <a href="https://github.com/antoine77340/Youtube-8M-WILLOW">https://github.com/antoine77340/Youtube-8M-WILLOW</a></p> <p>And here is a github repo containing our Tensorflow toolbox (LOUPE), implementing all the learnable pooling modules with their Gated versions (see the paper for more information): <a href="https://github.com/antoine77340/LOUPE">https://github.com/antoine77340/LOUPE</a></p> <p>The paper mainly focuses on the contributions. It might lack some details concerning the best kaggle submission (which I think is not very important, as a simple ensemble of 7 models already achieves first place). If you really need clarifications, I will be happy to answer your questions.</p> <p>Antoine for the WILLOW team</p> <p>Indeed, we briefly thought of some recurrent NetVLAD architecture. But we quickly stopped thinking about that for two simple reasons: first, this problem is not a sequential problem. The order of the frames does not matter (if you just see random frames of a Youtube video, you can guess the topics; we also tried to train LSTM/GRU with random ordering of frames and nothing changed). Also, the problem with VLAD is its super high dimensional feature space. Making it recurrent is just not computationally possible.</p> <p>And in a general manner we think we should try to avoid RNNs as much as possible because they are computationally not optimal for GPUs. That's one reason why some AI labs have recently been publishing neural machine translation research that does not use any RNN (see Facebook and Google).</p> <p>Instead we did the opposite: what about taking the best of the recurrent models and adding it to the clustering-based model? This is partly done with the Context Gating, which models the gating mechanisms that are at the core of the LSTM/GRU algorithms.</p> <p>For the Gated NetVLAD/NetFV/NetRVLAD, it decreases very fast and plateaus early. One epoch is enough, if I remember correctly, to achieve 82% with the best model and 4 epochs to achieve 83.2%.
I like these methods in particular because they converge extremely fast and are moreover 4x faster than RNN for forward pass+backprop.</p> <p>Hi, I did not submit my best single model checkpoint, but as I noticed my validation score to be consistently 0.25 above the private set, a good estimate of the Gated NetVLAD(256 clusters) score would be 82.9% on the private set (this is an estimate)</p> <p>Yes, you are right, thank you for letting me know !</p> <p>Hi,</p> <p>I did try to run the command on the single train-0.tfrecord file and it works on my computer. I do not have any idea why it is not working on your computer, I'm far from being a tensorflow expert unfortunately :(.</p> <p>Hi, I had the same problem the first time I worked on this challenge. This was due to some corrupted/incomplete tensorflow recordfile. To solve this problem you will have to re-download the corrupted record files.</p> <p>Hi, The Gated NetVLAD with 256 clusters requires a lot of memory. One parameters you can adjust is --netvlad_cluster_size=256. Try putting 128 or even 64 to see if it fits in memory.</p>
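<p>Since Context Gating comes up several times in this thread, here is a minimal numpy sketch of the idea as formulated in the linked paper (an element-wise sigmoid gate re-weighting the feature vector, CG(x) = sigmoid(Wx + b) * x); the dimensions and random weights are placeholders.</p> <pre><code>import numpy as np

def context_gating(x, W, b):
    gates = 1.0 / (1.0 + np.exp(-(W @ x + b)))    # per-dimension gate in (0, 1)
    return gates * x                              # re-weight (gate) each feature dimension

rng = np.random.default_rng(0)
d = 1024                                          # e.g. size of the pooled video descriptor
x = rng.normal(size=d)
W, b = 0.01 * rng.normal(size=(d, d)), np.zeros(d)
print(context_gating(x, W, b).shape)              # (1024,)
</code></pre>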
Google Cloud & YouTube-8M Video Understanding Challenge
5th place solution write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & YouTube-8M Video Understanding Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello guys, </p> <p>5th place solution is out. report: <a href="https://arxiv.org/abs/1706.04572">https://arxiv.org/abs/1706.04572</a> code: <a href="https://github.com/mpekalski/Y8M">https://github.com/mpekalski/Y8M</a></p> <p>Any comments are welcome.</p> <p>Best, Miha</p> <p>Already submitted.</p>
Google Cloud & YouTube-8M Video Understanding Challenge
FDT's solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & YouTube-8M Video Understanding Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone,</p> <p>This is the link for our report: <a href="http://bigvid.fudan.edu.cn/shares/YT8M-CVPRW.pdf">http://bigvid.fudan.edu.cn/shares/YT8M-CVPRW.pdf</a></p> <p>And this is our code repository (Apache License 2.0 license): <a href="https://github.com/forwchen/yt8m">https://github.com/forwchen/yt8m</a></p> <p>Yes, the predicted scores for other labels are zero, but we have weights to control how much this model's prediction contribute to the ensemble. The ensemble weights are chosen(searched) to perform as good as possible on parts of the validation set. The other labels are those with relatively more training samples and can be predicted well by other models.</p>
Google Cloud & YouTube-8M Video Understanding Challenge
Monkeytyping's solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & YouTube-8M Video Understanding Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi guys,</p> <p>This is our report: <a href="https://arxiv.org/abs/1706.05150">https://arxiv.org/abs/1706.05150</a></p> <p>This is our code repository (licensed under the Apache License 2.0 license): <a href="https://github.com/wangheda/youtube-8m">https://github.com/wangheda/youtube-8m</a></p>
Google Cloud & YouTube-8M Video Understanding Challenge
Simple solution with standard models
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & YouTube-8M Video Understanding Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thank you guys and YouTube for all the interactions we had over this competition, it was exciting to learn many new topics in video classification!</p> <p>I saw today that some people already shared their solutions with the community, and the models were not much based on the out-of-the-box ones that are in the <a href="https://github.com/google/youtube-8m">YouTube-8M GitHub</a>. So I wanted to share what I used, in order to show that these initial models are already powerful after some tuning.</p> <p>My model achieved Private LB GAP of 0.81390 (66th in the competition) and it combined predictions from the MoE model, LSTM and a simple neural net with 2 layers:</p> <ul> <li>MoE model with moe_num_mixtures=24, trained for ~10 epochs [Private LB GAP 0.80498]</li> <li>MoE model with moe_num_mixtures=16, trained for ~10 epochs [Private LB GAP 0.80356]</li> <li>LSTM model with original parameters, trained for ~0.5 epoch [Private LB GAP 0.78781]</li> <li>Neural Network with 2 hidden layers, each with 4716 neurons using ReLU, and sigmoid output [Private LB GAP 0.78689]</li> </ul> <p>The models were combined using the arithmetic mean of the 4 predictions.csv files. And here is some short analysis:</p> <ol> <li><p>It's interesting to note that when using the LSTM model predictions, even when trained only for half an epoch, it boosted the GAP score of the averaged predictions by almost 0.01. I believe this is due to the use of frame-level features, which contain information that was not captured by the other models and maybe it's not present in the video-level features.</p></li> <li><p>Using larger MoE models (larger moe_num_mixtures) always increased the GAP, both in the training and test sets, and I believe using even larger ones would prove to be helpful. I only stopped at moe_num_mixtures=24 due to computational limitations.</p></li> </ol> <p>These models were run on the ML Engine of Google Cloud Platform, using the free trial credits (thanks, Google!).</p> <p>I hope this is useful in some way to you. And see you in the next competitions!</p>
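<p>To make the blending step concrete, here is a small pandas sketch of an arithmetic-mean blend of several prediction files; the file names are placeholders and the VideoId/LabelConfidencePairs column layout is assumed from the competition's submission format.</p> <pre><code>import pandas as pd
from collections import defaultdict

files = ["moe24.csv", "moe16.csv", "lstm.csv", "nn2.csv"]  # hypothetical file names

def parse(pairs):
    toks = pairs.split()
    return {int(l): float(c) for l, c in zip(toks[0::2], toks[1::2])}

# accumulate label confidences per video across all submissions
scores = defaultdict(lambda: defaultdict(float))
for f in files:
    df = pd.read_csv(f)
    for vid, pairs in zip(df["VideoId"], df["LabelConfidencePairs"]):
        for label, conf in parse(pairs).items():
            scores[vid][label] += conf / len(files)   # labels missing from a file count as 0

rows = []
for vid, label_scores in scores.items():
    top = sorted(label_scores.items(), key=lambda kv: -kv[1])[:20]
    rows.append((vid, " ".join("%d %.6f" % lc for lc in top)))

pd.DataFrame(rows, columns=["VideoId", "LabelConfidencePairs"]).to_csv("blend.csv", index=False)
</code></pre>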
Google Cloud & YouTube-8M Video Understanding Challenge
Solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & YouTube-8M Video Understanding Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for organizing the competition! Although my final score was far from impressive, I did learn a few useful things in this competition, so thanks Google for letting us play with this data!</p> <p>Let's share the solutions in this thread. </p> <p>I'll start with my 0.799 model, which is a neural net with one hidden layer: <a href="https://github.com/alexeygrigorev/yt8m-kaggle">https://github.com/alexeygrigorev/yt8m-kaggle</a>. To get a 0.8+ score, it can be run multiple times with different seeds and numbers of neurons, and then averaged. </p> <p>Maybe I should also add what didn't work for me. For a while I tried, quite unsuccessfully, to make xgboost work, but apparently 4k+ labels is too much for it. Also, I tried KNN with annoy, but the score was quite low. Finally I tried to stack it with the NN model, also with no luck. </p>
Google Cloud & YouTube-8M Video Understanding Challenge
offline solution (rank 3rd, single model 82.75%)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Cloud & YouTube-8M Video Understanding Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all:</p> <p>We have put our solution at <a href="https://arxiv.org/pdf/1707.04555.pdf">https://arxiv.org/pdf/1707.04555.pdf</a>. The code and pre-trained models can be found at <a href="https://github.com/baidu/Youtube-8M">https://github.com/baidu/Youtube-8M</a>.</p> <p>The secret weapon of our team is a fast-forward LSTM with a depth of 7 layers which achieves 82.75% in terms of GAP@20 on the Kaggle Public test set. </p> <p>Feel free to try our code and please let me know if you have any questions!</p>
Google Landmark Recognition 2019
15th place Rapids.ai solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners. I'm by no means an expert in computer vision but I did my best for this competition. Here is a brief version of my solution; more code sharing will be done later.</p> <p><strong>The major takeaway: please check out cuml's KNN:</strong> <a href="https://github.com/rapidsai/notebooks/blob/branch-0.8/cuml/knn_demo.ipynb">https://github.com/rapidsai/notebooks/blob/branch-0.8/cuml/knn_demo.ipynb</a> It is just so much faster. It just took <strong>less than 5 mins</strong> to find 5 neighbors of 110K test images from 4M train images.</p> <p>The challenge of this competition is the large data size and number of classes. Given 200K classes and 4 million images with an extremely uneven distribution, it is difficult for a simple classification model to make progress. My best classification model only yields 0.04 GAP, far less than the 0.19+ with the following pipeline:</p> <ol> <li><p>Finetune a resnet152 and a densenet161 to classify the most popular 10k landmarks. The validation accuracy is about 0.3.</p></li> <li><p>Extract bottleneck layers, which are 512 features in my case. Then apply a KNN to find the K=5 nearest images in train for a given test image based on these 512 features.</p></li> <li><p>Repeat step 2 for different checkpoints of the two CNN models and aggregate their KNN results. I used 10 saved CNN models to get 10 KNN models. At this point, each image in test has 10*5 = 50 candidate images from train. There are of course duplicates in these candidates.</p></li> <li><p>Make it a recommendation problem by predicting a pair: a test image and a candidate train class. The target to predict is binary: if the test image is from the class then the target is 1.</p></li> <li><p>Build features for such a pair, like the minimum distance from the test image to the images of a class, the popularity of the class, and so on and so forth. I built 15 features in the end.</p></li> <li><p>Run an xgb model and a NN model on the above data and average their predictions. For each test image, select the highest-probability class. </p></li> </ol> <p>To be continued!</p> <p>Thank you very much :)</p>
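<p>For reference, here is a minimal sketch of the cuml KNN step highlighted above (not the author's exact code; array shapes are toy-sized, whereas the real run used ~4M train and ~110K test vectors of 512 features):</p> <pre><code>import cupy as cp
from cuml.neighbors import NearestNeighbors

# bottleneck features extracted from the fine-tuned CNNs (toy shapes here)
train_feats = cp.random.random((100_000, 512)).astype(cp.float32)
test_feats  = cp.random.random((10_000, 512)).astype(cp.float32)

knn = NearestNeighbors(n_neighbors=5)      # GPU brute-force search
knn.fit(train_feats)
dists, idx = knn.kneighbors(test_feats)    # distances and train indices, shape (n_test, 5)

# idx gives 5 candidate train images per test image; repeating this for every
# checkpoint and pooling the candidates yields the (test image, class) pairs
</code></pre>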
Google Landmark Recognition 2019
20th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Wow, thanks for checking my code! I think you're right. I was guided by this solution from last year: <a href="https://www.kaggle.com/c/landmark-recognition-challenge/discussion/57896#latest-418704">https://www.kaggle.com/c/landmark-recognition-challenge/discussion/57896#latest-418704</a> </p> <p>Now I have no idea how I could give up that early, before calculating class centroids. I was misguided by the fact that it worked on local validation but didn't work with the test set. Now I realize that the test set is much noisier. And you're right, nearest neighbours are very similar, so the signal is very noisy.</p> <p>Hola,</p> <p>TL;DR: SE-ResNext50, softmax over 92k classes, 224x224 crops and data cleaning on the test set using a pretrained Mask R-CNN and an ImageNet classifier. The code is here: <a href="https://github.com/artyompal/google_landmark_2019">https://github.com/artyompal/google_landmark_2019</a></p> <p>So, my solution is simple, it's a single model (SE-ResNext50), single fold. My code is very similar to my kernel: <a href="https://www.kaggle.com/artyomp/resnet50-baseline">https://www.kaggle.com/artyomp/resnet50-baseline</a>. I used 224x224 crops from 256x256 images. I trained only on classes with &gt;= 10 samples (92740 classes). Surprisingly, it wasn't much slower than training on 18k or 35k classes (maybe 20% slower).</p> <p>This is the config of my model: <a href="https://github.com/artyompal/google_landmark_2019/blob/master/models/v1.3.8.seresnext50_92740_classes.yml">https://github.com/artyompal/google_landmark_2019/blob/master/models/v1.3.8.seresnext50_92740_classes.yml</a> So, it adds a bottleneck layer with 1024 neurons, uses SGDR from <a href="https://arxiv.org/pdf/1608.03983.pdf">https://arxiv.org/pdf/1608.03983.pdf</a> and rectangular crops from <a href="https://arxiv.org/pdf/1812.01187.pdf">https://arxiv.org/pdf/1812.01187.pdf</a>. </p> <p>It resulted in 0.128 on the public LB after almost 5 days of training on a single 1080Ti.</p> <p>Obviously, such a low score was caused by both a noisy train set and a noisy test set. I had no time to clean the train set and re-train, so I only cleaned the test set.</p> <p>The first stage of cleaning was just the No Landmark Detector from this kernel: <a href="https://www.kaggle.com/rsmits/keras-landmark-or-non-landmark-identification">https://www.kaggle.com/rsmits/keras-landmark-or-non-landmark-identification</a>. I just got rid of predictions where any of the top-3 results were "not landmark".</p> <p>The second stage. I noticed that there are a lot of portraits, selfies, and pictures of cars in the test set. So I took the pretrained Faster R-CNN from the new TorchVision release (check out object_detection.py) and got rid of images with too many humans or cars (confidence &gt;0.5, total area &gt; 0.4). Check this out: <a href="https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub2.py">https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub2.py</a></p> <p>The third stage. I noticed that there are a lot of random images: dogs, helicopters, flowers and so on. So I took a pretrained ImageNet classifier.
I used ResNet50 (I wanted to use something better, but my GPUs were busy doing something else, so I needed a simple network; ideally, I'd take some smarter model). It's in the image_classifier2.py. So I rejected images with any of those classes and confidence &gt; 0.7 (<a href="https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub3.py">https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub3.py</a>)</p> <p>The first stage gave me 0.02 on the public LB, but didn't help at all on the private LB. I wonder if the public/private split was made by class? The private part of the stage-2 test set seems to have cleaner data.</p> <p>In total, before cleaning I had 0.128 on public LB, 0.144 on private LB. After cleaning I got 0.162 on public LB, 0.148 on private LB.</p> <p>What didn't work: I implemented KNN on top of SE-ResNext50 features and it worked okay locally (GAP 0.2 on all classes), but it performed like crap on the leaderboard (about 0.00002 or something). I must have a bug somewhere.</p> <p>PS: thanks to <a href="/pudae81">@pudae81</a> for this repo: <a href="https://github.com/pudae/kaggle-humpback">https://github.com/pudae/kaggle-humpback</a>. I borrowed some code from there.</p> <p>Thanks for reading this and I hope this helps in future competitions! Artyom</p>
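<p>For illustration, here is a hedged sketch of the second-stage people/car filter described above, using a pretrained torchvision detector. The thresholds (score &gt; 0.5, area fraction &gt; 0.4) follow the post; the COCO class ids and the helper name are my own reading, not the author's code.</p> <pre><code>import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

REJECT_LABELS = {1, 3}   # COCO ids used by torchvision detectors: 1 = person, 3 = car

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

def people_car_area_fraction(path):
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        out = model([to_tensor(img)])[0]          # dict with boxes / labels / scores
    w, h = img.size
    area = 0.0
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if int(label) in REJECT_LABELS and float(score) &gt; 0.5:
            x1, y1, x2, y2 = box.tolist()
            area += (x2 - x1) * (y2 - y1)          # overlaps are ignored in this rough estimate
    return area / (w * h)

# drop or zero the prediction when people and cars cover more than 40% of the image
# if people_car_area_fraction("test.jpg") &gt; 0.4: confidence = 0.0
</code></pre>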
Google Landmark Recognition 2019
27th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all congratulations to all gold and other medal winners. Also thanks to Kaggle/Google for hosting such an extremely challenging and interesting competition.</p> <p>I'll share a brief overview of the solution I used to get to 27th place. I'll share my code later when I have some spare time to clean it up.</p> <p>I downloaded and resized all train and test images to 224 * 224 pixels. Using the pretrained models generated from the Places365 dataset I made a top-5 class prediction for all train and test images. Initially I started with a simple voting classification. For all train images I removed the ones that had 4 of 5 classes predicted as indoor (which can be loosely interpreted as non-landmark). For the test images I used a 3 out of 5 prediction to specify which ones were non-landmark. This removed several hundred thousand images from the train set and also allowed me to mark tens of thousands of images from the test set as non-landmark.</p> <p>As the model I used a pretrained Resnet50 with a custom head. I added a Generalized-Mean (GeM) pooling layer followed by a fully connected layer to trim down the number of parameters. I used basic augmentation for all images (flip lr, updown, shear and zoom). For head-only training I usually did 4 to 8 epochs (each epoch with 1/4 of the train images), with a learning rate of 0.0005 and a batch size of 128. Full training of the network ran for 6 to 18 epochs (each epoch with 1/6 of the train images). For full training the batch size would be about 20 to 24 (depending on the number of classes selected). The learning rate would drop from 0.0001 to 0.00003.</p> <p>Initially I started with a validation set of about 10% of the train data. I noticed quickly that it took a lot of additional time and there was no noticeable correlation between local metrics and the LB. So I just skipped validation completely...not something you would do in any production scenario but it worked OK for me in this competition.</p> <p>For generating my submissions I usually used 3 to 4 models from different epochs to create the predictions. I used TTA varying between 2 and 14. For 2, just a simple fliplr of the test image. When using 14 I also made several crops. In the end the simple fliplr surprisingly seemed to give the most stable results.</p> <p>In the last week I spent a couple of evenings to further optimize the landmark/non-landmark identification and apply that to the stage2 test images and also to further select train images. That way I was able to gain a 50% increase in my score in the last 3 days. The final model was trained on 83K classes.</p> <p>Things that didn't work for me: I tried various Xception models. Training, however, took way longer than my Resnet50 models. Also I got the impression that Xception did not handle the large number of classes very well. In the end I tried 4 or 5 different models but decided to stick with Resnet50. I also tried using the Hadamard classifier to be able to cope with the large number of classes.
The performance seemed quite good but I was only able to get a model with a significant score on the LB if the number of classes that I used was not too large (&lt; 10K). </p> <p>I'm looking forward to reading the Gold solutions from the winners.</p>
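<p>Since GeM pooling comes up in several of these solutions, here is a generic PyTorch sketch of the layer (following Radenovic's formulation); the author above does not state the framework used, so this is an illustration rather than their code.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-Mean pooling: average-pool of x**p, then the p-th root."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)   # learnable pooling exponent
        self.eps = eps

    def forward(self, x):                          # x: (batch, channels, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.avg_pool2d(x, (x.size(-2), x.size(-1)))
        return x.pow(1.0 / self.p).flatten(1)      # (batch, channels) descriptor

# e.g. desc = GeM()(backbone_feature_map), then a small FC layer and the softmax head
</code></pre>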
Google Landmark Recognition 2019
2nd Place and 2nd Place Solution to Kaggle Landmark Recognition and Retrieval Competition 2019
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for the great challenge, our solution has been released to arxiv. Further, we will open our source code. Thanks.</p> <p>arxiv link: <a href="https://arxiv.org/pdf/1906.03990.pdf">https://arxiv.org/pdf/1906.03990.pdf</a> source code: <a href="https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark">https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark</a></p>
Google Landmark Recognition 2019
31st place solution completely on kernels
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello</p> <p>Thought people might be interested to know how to run this entire competition on Kaggle kernels and still finish in a good position on the LB. So here is my solution to it. I have completed both the Recognition and Retrieval challenges relying solely on Kaggle kernels. Thank you Kaggle!</p> <p>P.S. I had earlier authored a public <a href="https://www.kaggle.com/mayukh18/dataset-exploration-and-simple-on-the-fly-training">kernel</a> in Keras at the start which showed this can be done. Even Artyom did this in Torch (<a href="https://www.kaggle.com/artyomp/resnet50-baseline">https://www.kaggle.com/artyomp/resnet50-baseline</a>).</p> <h3>Approach for Kaggle Kernels</h3> <p>The solution was run completely on Kaggle Kernels. Since the dataset size is around 500 GB, taking advantage of the size of each tar file, the training was done with a download-train-discard methodology. In each mini-epoch, the data generators downloaded one tar file from the 500 tar files; extracted it; created batches from the valid images; and at the end of the epoch deleted all the files. Each tar file downloaded in about ~30 secs, and a mini-epoch of about ~7000 images took a total of 100-110 secs (including everything).</p> <h3>Model Architecture</h3> <ul> <li>I used a ResNet50 with weights pretrained on imagenet. Chopped off the final layer. Added a GlobalMaxPooling layer and a subsequent softmax layer at the end for the output. </li> <li>Used landmarks which have at least 20 images (~55k landmarks).</li> <li>Resized every image to 192x192 pixels. </li> <li>Used the Adam optimizer with a rate of 0.0002 at the start and reduced it after each ~150 mini-epochs. </li> <li>Trained the model for one full epoch (500 mini-epochs) after which it started overfitting. Could have tuned the hyperparameters better for the subsequent epochs but there was not much time left for experimentation.</li> <li>Did not use any external dataset for non-landmark images.</li> <li>TTA somehow gave me worse results.</li> </ul> <p>Special mention to Dawnbreaker's <a href="https://www.kaggle.com/c/landmark-recognition-challenge/discussion/57152">solution</a> from the 2018 version of the challenge which helped me improve my results.</p> <p>What didn't work? ResNet101, DenseNet etc. and more. Even ResNet50 with 128x128 images didn't give any good result. I initially started off with a ResNet101 but it failed to match the level of performance of the ResNet50. Taking this from a comment of Dawnbreaker in the above solution thread: &gt; "Actually I tried ResNet101 and got a higher val accuracy but a worse LB performance (LB:0.101). Eventually I think maybe an accurate prediction for the landmark images is not as important as an efficient rejection for the non-landmark images."</p>
Add to the fact that here we were predicting on ~55k classes compared to last year's 15k, rejection seems even more important.</p> <p>The complete organized code(including the Keras generators) and pretrained model can be found here: (<a href="https://github.com/mayukh18/Google-Landmark-Recognition-Retrieval-2019">https://github.com/mayukh18/Google-Landmark-Recognition-Retrieval-2019</a>)</p> <p>Happy if anyone finds this useful. Cheers! Mayukh</p>
Google Landmark Recognition 2019
88th place write-up [fast.ai]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>88th place is not super exciting. However, you can surely learn from some of my mistakes and pick apart some of my code. </p> <p><a href="https://towardsdatascience.com/diving-into-googles-landmark-recognition-kaggle-competition-6975dbe11072?source=friends_link&amp;sk=2f6cf21fd799a1b02302243d487256b5">Here is the medium "friend" link so it's not behind a paywall. </a></p>
Google Landmark Recognition 2019
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to everyone that participated! This was a rough competition, more than 4 million images (&gt; 600GB) and 200 thousand classes! And, of course, congratulations to the winners! Well deserved!</p> <h1>Summary</h1> <ul> <li><strong>Training set</strong>: Sub-set of 2018 Google Landmark Recognition</li> <li><strong>Descriptor</strong>: Ensemble of DenseNet121-GeM-512FC (512x512, ArcFace) and ResNet101-GeM-2048FC (768x~, ContrastiveLoss)</li> <li><strong>Landmark classifier</strong>: ResNet50 with T-scaling trained on OpenImages and Places. Remove images with less than 0.99 probability from the train-set (4M-&gt;2.6M)</li> <li><strong>Inference</strong>: Extract features for the train and test sets and compute the cosine similarity matrix between test and class centers. Then, apply K-NN to retrieve the 12 nearest neighbours and apply softmax on them.</li> <li><strong>Re-rank 1</strong>: Multiply the confidence score from inference by the classifier's score on test.</li> <li><strong>Re-rank 2</strong>: Perform a K-NN search (K=1750) of the test set on itself and apply softmax. Multiply the last score by this one.</li> </ul> <h1>Long version</h1> <p>I believe two issues must be addressed to tackle this competition. The first is a similarity search problem, where we want to create a good descriptor that would allow us to correctly classify an image based on a K-NN search in a gallery (trainset). The second part is detection/re-ranking of non-landmarks in both the train (gallery) and test (query) sets. This is paramount since the GAP metric used in this competition penalizes us if we make a prediction for a non-landmark with a higher confidence score than any landmark image.</p> <h2>Similarity search</h2> <h3>Descriptor</h3> <p>I reckon a metric learning approach is much more natural for this task than a pure classification one. This is because we have a huge number of classes (&gt; 200k) and some of them have just a few examples. Furthermore, in practice we would not be able to re-train our classifier every time we need to add a new landmark to the gallery.</p> <p>With this mindset, I spent most of the time (until the last 2 weeks) training several models with ArcFace loss. I tried several types of pooling schemes such as SPoC, GeM and REMAP (my own re-implementation). I also tried several types of backbones like DenseNet121, ResNet101 and SEResNeXt101.</p> <p>The ones that worked best for me were DenseNet121-GeM-512FC (512x512) and SEResNeXt101-REMAP-2048FC (384x384). However, SEResNeXt101-REMAP was really slow so I only used DenseNet for the second stage.</p> <p>I trained on a sub-set of last year's competition; it was around 80k images of 15k classes. A margin of 0.8 for the <a href="https://arxiv.org/abs/1801.07698">ArcFace</a> loss was used and images were resized to the specified size. Weird as it may sound, I found it was better to squash the images into a square than to keep the original aspect ratio with zero padding.</p> <p>I (unfortunately) spent most of my time here and the results were not so great.
My best model only scored around 0.33 on the image <strong>retrieval</strong> public LB (no QE, DBA or any re-rank). At this time I decided to give up ArcFace and go for the siamese approach.</p> <p>In the last weeks I trained a ResNet101-GeM-2048FC (768x~) following Radenovic's <a href="https://arxiv.org/abs/1711.02512">paper</a> and, although I didn't submit to the retrieval competition (the deadline was near), I believe it was considerably better than the other models I had trained so far.</p> <p>My final descriptor was a 2560-dim. ensemble of both models. I L2-normalized both and concatenated as follows: [1.5 * ResNet, 1 * DenseNet]. I didn't have enough time or RAM to apply PCA on the resultant descriptor.</p> <h3>Landmark classifier</h3> <p>Since there were a lot of non-landmark images in both the test and train sets, I trained a landmark classifier using the descriptor's train set as landmark examples. For the non-landmarks, I used images from the OpenImages and Places datasets. I used classes such as: sky, forest, bathroom, bedroom, restaurant, human faces, food, etc... It was important to use outdoor classes as negative examples, otherwise the model would be biased to predict any outdoor image as a landmark. Finally, I used <a href="https://arxiv.org/pdf/1706.04599.pdf">T-scaling</a> to scale the predictions.</p> <p>For the train set I removed all the images with a classification score less than 0.99, which resulted in around 2.6M images.</p> <p>For the test set, on the other hand, I did not remove any images but used the scores to re-rank the final submission's confidence scores. I did this because, if the conf. score is well ranked, there is no penalty for making predictions on non-landmark images.</p> <h3>Out-of-distribution images</h3> <p>Since the descriptor was only trained on landmark images, it is very bad at differentiating between images that are non-landmarks, e.g. human faces or types of food. As a consequence of this fact, non-landmark images are matched with very high confidence scores, decreasing the GAP metric. In order to overcome this issue, three re-rank steps were used (they will be explained later).</p> <h3>Extract features</h3> <p>The CNNs previously mentioned are used to extract features (descriptors) for both the train and test sets. Then, for the train set, the class centers are computed by averaging all the image descriptors that belong to the same class. Finally, both the test and class-center descriptors are L2-normalized.</p> <h3>Search</h3> <p>The K-NN search was performed on the cosine distance matrix using the <a href="https://github.com/facebookresearch/faiss">Faiss</a> library. The top 12 NN were kept and softmax was applied on them.</p> <p>Even though we only need the first nearest neighbour, it was important to keep more of them to apply softmax and convert the cos. similarities into a probability distribution. By doing this we filter out some non-landmarks that have high similarity with a lot of classes in the gallery (train set). Without this or any other re-ranking step, the score is likely to be close to zero even for a good descriptor.</p> <h3>Re-rank:</h3> <p>The second re-rank was very straightforward. The original confidence score was multiplied by the probability of being a landmark from the classifier. With this step many non-landmark images got their conf. score reduced.</p> <h3>Re-rank based on out-of-distribution similarity</h3> <p>For this step, the K-NN (K=1750) of the test set with itself was computed.
The intuition here is that, since our model is known to produce high similarity between out-of-distribution images, if we compute the similarity of the test set with itself it is likely that non-landmark images will have much more similar neighbours than landmark ones. Thus, I apply the softmax and multiply by the confidence score from the last step.</p> <p>Hence, the final score is given by: <code>score = metric_learning_score * classifier_score * ood_score</code></p> <h1>Leaderboard progression (stage 2)</h1> <ul> <li><strong>DenseNet121-GeM-512FC (cls re-rank)</strong>: 0.22379</li> <li><strong>DenseNet121-GeM-512FC (cls and ood re-rank)</strong>: 0.23862</li> <li><strong>ResNet101-GeM-2048FC (cls and ood re-rank)</strong>: 0.26994</li> <li><strong>Ensemble (cls and ood re-rank)</strong>: 0.28792</li> </ul> <h1>Resources</h1> <ul> <li><strong>GPU</strong>: 1x Nvidia 1080ti (11GB)</li> <li><strong>RAM</strong>: 32GB</li> <li><strong>HD</strong>: 1TB SSD</li> <li><strong>Processor</strong>: Intel i7 6800k</li> </ul> <h1>Acknowledgments</h1> <p>Thanks Kaggle and Google for hosting this competition. Thanks as well to all the community and <a href="/filipradenovic">@filipradenovic</a> for opening his 2018 Google-Landmark-Retrieval solution.</p> <p>Thanks!</p> <blockquote> <p>Yeah, interestingly, squashing the train images to squares increases the test accuracy for our meta learning model as well. I was only guessing it encodes features more precisely. When an image is zero padded, the model encodes zero padded features as well. And during the inference, the test images do not necessarily have zero padded features, depending on how test images are processed. </p> </blockquote> <p>That's a fair point! This must be the reason why Radenovic only resizes the images (keeping the aspect ratio) without padding. The drawback is that you cannot form a batch with different image sizes and you have to freeze batch norm (and it is slow).</p> <p>Thanks a lot <a href="/seesee">@seesee</a> =]]</p> <p>Thank you <a href="/titericz">@titericz</a>, it means a lot coming from you! =] Yes, I used PyTorch! Indeed, open set scenarios are always tricky to deal with! I had to add more classes as negative examples and retrain it a couple of times.</p> <p>Thank you! Yes, I'll upload it later. I just need to clean it a bit since it's quite messy =]</p> <p>Thank you! Almost one and a half months of work! I spent almost two weeks just to download the data</p> <p>Many thanks! =]</p>
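<p>A simplified numpy/faiss sketch of the search-plus-softmax step described above; the array shapes are toy values (the real setup used roughly 200k class centers and 2560-dim descriptors), and this is an illustration rather than the author's code.</p> <pre><code>import numpy as np
import faiss

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class_centers = np.random.rand(20000, 256).astype("float32")   # toy shapes
test_desc = np.random.rand(1000, 256).astype("float32")
faiss.normalize_L2(class_centers)            # cosine similarity == inner product after L2 norm
faiss.normalize_L2(test_desc)

index = faiss.IndexFlatIP(class_centers.shape[1])
index.add(class_centers)
sims, nn_ids = index.search(test_desc, 12)   # top-12 class centers per test image

probs = softmax(sims)                        # similarities become probability-like confidences
pred_class = nn_ids[:, 0]
confidence = probs[:, 0]                     # later multiplied by the classifier and OOD scores
</code></pre>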
Google Landmark Recognition 2019
From 0.08695 to 0.14681 by TTA
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="/nicolefinnie">@nicolefinnie</a> Nice to see you too. I joined the competition very late, so I just used the modified SE-ResNeXt from the Doodle Recognition Challenge and almost didn't change anything. I added a decode part (upsampling) to the vanilla ResNeXt, forcing the network to learn more, I hope. The total number of classes (&gt;= 8 samples) is 106996. I didn't consider the classes which have fewer than 8 samples, but I think few(one)-shot learning is a super cool idea. What I did was just to mimic the way we look at a picture. Normally the object located at the center is what the picture wants to tell us. We look at the overview of the picture and gradually move our focus to the center (zooming in on that part in our brain for the small details). </p> <p>Thanks. For training, the edges were randomly chopped off between 0% and 12.5%. The loss was softmax_cross_entropy.</p> <p>My computation resources only allowed me to train the SE-ResNeXt50 with image size 96 x 96 in a reasonable time. I got a private score of 0.08695 if I didn't use any TTA. So for TTA, I tried to use the "larger" images to pick up details by gradually chopping off the edges by certain amounts (like 2.5%, 5%, 7.5%, 10%, 12.5%), then resized the image back to 96 x 96. With a left-right flip, I got a total of 12 predictions. I then selected the top 1 for each image and the score became 0.14681. It was interesting to note that the averaged probabilities for the common classes for each image gave a worse score. </p>
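<p>A small PIL/numpy sketch of the TTA scheme described above; the crop fractions and the flip follow the description, while the model call and preprocessing are placeholders.</p> <pre><code>import numpy as np
from PIL import Image

def tta_views(path, size=96, fracs=(0.0, 0.025, 0.05, 0.075, 0.10, 0.125)):
    """Chop a fraction off every edge, resize back to 96x96, and add a
    left-right flip of each view: 6 crops x 2 flips = 12 views."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    views = []
    for f in fracs:
        box = (int(w * f), int(h * f), int(w * (1 - f)), int(h * (1 - f)))
        crop = img.crop(box).resize((size, size), Image.BILINEAR)
        views.append(np.asarray(crop))
        views.append(np.asarray(crop.transpose(Image.FLIP_LEFT_RIGHT)))
    return np.stack(views)           # (12, 96, 96, 3)

# probs = model.predict(preprocess(tta_views("img.jpg")))   # placeholder model call
# keep the single most confident (class, prob) over the 12 views instead of averaging
</code></pre>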
Google Landmark Recognition 2019
Team JL Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and all other participants. Here is a summary of our solution.</p> <p>Our final solution consists of the following 2 retrieval models (1 global model fine-tuned on the training set and 1 local model that is publicly available) and 3 steps.</p> <p>Model-1: A global CNN retrieval model. By this we mean a model that takes an input image and produces a global descriptor by taking the output of the last convolutional layer and applying a global pooling method followed by L2 normalization, as is common for retrieval tasks. For simplicity, we will not include all the details here (more information can be added later if needed). In hindsight, we think it is possible that a simple ResNet+GeM model would do the job just fine, but we tried a couple of combinations with backbones such as ResNet, ResNeXt, SE-ResNet, SE-ResNeXt, SENet with attention and pooling methods such as GeM, RMAC, MAC, SPoC, as well as multiple concatenations of them. We take the "raw output" as the descriptor (i.e. without any post-processing techniques such as DBA, QE, or Diffusion, except for concatenations in which case PCA and whitening is used to reduce the dimension to 2048 or 4096) and used the public leaderboard of the retrieval challenge for validation.</p> <p>Model-2: A local CNN retrieval model. This is just the Detect-to-Retrieve (D2R) model released by Andre Araujo in this post <a href="https://www.kaggle.com/c/landmark-recognition-2019/discussion/92942#latest-536570">https://www.kaggle.com/c/landmark-recognition-2019/discussion/92942#latest-536570</a></p> <p>Step-1: Use Model-1 to match all test images (about 120K) vs. all train images (about 4.13M). For each test image, accumulate similarities for the labels in the top-5 matched images, and take the label with the highest score as the prediction. This achieves private/public 0.25138/0.21534.</p> <p>Step-2: Same as Step-1, but for each test image, keep the top-20 matched images from the training set. Then a second match is performed using Model-2; we accumulate D2R scores for the labels in the top-5 closest images and take the label with the highest score as before. This achieves private/public 0.31870/0.26782.</p> <p>Step-3: This is perhaps the most important step in which, as everyone knows, something needs to be done to separate landmark images from distractors. Instead of training a separate classifier using external data, we made the following assumptions. (1) All the landmark images already have correct predictions, and the reason why the score is low is that the evaluation metric is GAP and a lot of the distractor images are placed higher than them in the ranked list of predictions. Of course the first part of the assumption is not true, and we tried some methods to replace the labels of some of the potentially wrong predictions, but it didn't work. (2) In the ranked list of predictions, all top-500 images are real landmark images with correct predictions, and all images below rank 20000 are distractors. So if there is a distractor in the top-500, or if there is a real image below 20000, then let them be there and there is nothing we can do about it.
With these two assumptions, we perform the following on the Step-2 prediction list. Starting from the rank-1 test image, match it vs. all other test images below it until rank 20000 using Model-2; if the D2R score is greater than a certain threshold (23 in our case), then pull that image up (along with its predicted label) and place it just below rank 1. It is possible that the image being pulled up has a different predicted label, but we don't change it. We do this for two searches, where the first search expands the top-500 to about 770, and the second search expands 770 to about 800. Since the second search only adds about 20 to 30 new images, we stop it there. This boosts the score to private/public 0.36787/0.31626. Next, we do the same to the Step-1 prediction list, this time with the top-300 images (again two searches) because the Step-1 score was lower than Step-2's, and it expands the list to about 520 images. Finally, we merged the two new prediction lists as follows (New-List-1 has about 800 "safe images" at the top, whereas New-List-2 has about 520 "safe images" at the top). Starting from the rank-1 image in New-List-1, place it straight at rank 1 in the merge-list. Then take the rank-1 image in New-List-2 and place it as rank 2 in the merge-list if it's not already there, otherwise go back to New-List-1. Continue this interleaving process until all "safe images" have been absorbed, and append the rest using New-List-1. The merge-list has about 840 "safe images", i.e. New-List-2 only adds about 40 "safe images" to New-List-1, but it could potentially change the ranks of some of the "safe images" in New-List-1 during the insertion process. This slightly increased the score to private/public 0.37601/0.32101, which was our final score.</p> <p>P.S. We also trained another DNN model with only 1 hidden layer using features extracted by Model-1 as input and applied the steps as above to add more "safe images" to the predictions from Step-3. This scores 0.37936 on the private leaderboard, but unfortunately we did not select it because it degraded the public leaderboard score to 0.32100.</p> <p>Thanks :)</p> <p>Thanks. For Model-2, we just used the publicly available tensorflow code. For our fine-tuned model, we used pytorch, modifying Radenovic's official GeM code. We are working on cleaning up the code right now to be submitted later.</p> <p>Thanks :) Congratulations to you as well.</p> <p>Thanks :)</p> <p>Thanks. We are working on the report right now to be posted on arXiv as a requirement.</p>
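<p>For illustration, here is a simplified sketch of the final interleaving merge described above; in practice each entry carries its predicted label and confidence as well, which is omitted here.</p> <pre><code>from itertools import zip_longest

def merge_ranked(list1, list2, n_safe1=800, n_safe2=520):
    """Interleave the 'safe' heads of two ranked test-image lists, skipping
    duplicates, then append the remaining entries of list1 in their order."""
    merged, seen = [], set()
    for a, b in zip_longest(list1[:n_safe1], list2[:n_safe2]):
        for img in (a, b):
            if img is not None and img not in seen:
                seen.add(img)
                merged.append(img)
    for img in list1[n_safe1:]:
        if img not in seen:
            merged.append(img)
    return merged

# merged = merge_ranked(new_list_1, new_list_2)   # ~840 "safe" images end up at the top
</code></pre>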
Google Landmark Recognition 2020
(Public>0.54) Provisional 36th solution and kernel
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle for hosting this awesome competition! And congrats to the top teams!<br> This is the link to our kernel (version 10): <br> <a href="https://www.kaggle.com/tangshuyun/lb-0-54-effnetb6b7-global-feature" target="_blank">https://www.kaggle.com/tangshuyun/lb-0-54-effnetb6b7-global-feature</a><br> Our approach is very straightforward and simple:<br> Thanks to Ragnar's awesome kernel on training Effnet, we found some practical ways to train our own models. After reading Keetar's fantastic writeup of his GLD retrieval solution, we first trained our Effnet B6 and B7 with 384-sized images. Then we used larger 512-sized images to further tune our B6 and B7. The training environment was Colab Pro. Then we simply fed the ensembled predictions of the two models into the global feature extraction.</p> <p>For more details and adjustments, please refer to the kernel above. </p> <p>Feel free to ask if you have any questions.</p>
Google Landmark Recognition 2020
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to google and kaggle for hosting this yearly competition. It was a lot of fun exploring and learning all about global, local descriptors and algorithms to match similar images.</p> <h3>Brief Summary</h3> <p>Our solution is an ensemble of 7 global descriptor only models trained with arcface loss. We classify landmarks by KNN on an extended version of the train dataset and efficiently rerank predictions and filter noise using cosine similarity to non-landmark images. We did not use any local descriptors.</p> <h3>Detailed Summary</h3> <p>Below we give a detailed description of our solution of which architecture is only a small part.<br> Video content of us presenting the solution is available under:</p> <p>NVIDIA Grandmaster Series Ep2 <a href="https://youtu.be/VxNDH6qLZ_Q" target="_blank">https://youtu.be/VxNDH6qLZ_Q</a><br> Chai Time Data Science <a href="https://youtu.be/NRl3lMlixPc" target="_blank">https://youtu.be/NRl3lMlixPc</a></p> <h4>Pipeline</h4> <p>We wanted to use this competition as a chance to improve our pipeline and coding skills. While in past competitions we mainly used jupyter notebooks locally, we switched to a collaborative approach using scripts with github versioning for this one. After some acclimatization we clearly saw a benefit of our pipeline consisting of the following tools</p> <p>Github: Versioning and code sharing<br> Neptune: logging and visualisation<br> Kaggle API: dataset upload/ download<br> GCP: data storage</p> <p>So in practice we downloaded preprocessed data from google storage, trained our models using pytorch lightning where we logged with neptune and uploaded the latest version of our git repo and model weights to a kaggle dataset to use it in our inference kernel. This allowed us to experiment and iterate quickly.</p> <p>We are planning to release our code on github soon after some clean up.</p> <h4>Architectures</h4> <p>Our ensemble consists of 7 models using the following backbones available in the timm repository. Instead of much augmentation we trained our models on different image scales using albumentations.</p> <ul> <li>2x seresnext101 - SmallMaxSize(512) -&gt; RandomCrop(448,448)</li> <li>1x seresnext101 - Resize(686,686) -&gt; RandomCrop(568,568)</li> <li>1x b3 - LongestMaxSize(512) -&gt; PadIfNeeded -&gt; RandomCrop(448,448)</li> <li>1x b3 - LongestMaxSize(664) -&gt; PadIfNeeded -&gt; RandomCrop(600,600)</li> <li>1x resnet152 - Resize(544,672) -&gt; RandomCrop(512,512)</li> <li>1x res2net101 - Resize(544,672) -&gt; RandomCrop(512,512)</li> </ul> <p>We normalize the images by the mean and std of the imagenet dataset before feeding them into a pretrained backbone. All models use GeM pooling for aggregating backbone outputs. We use a simple Linear(512) + BN + PReLU neck before feeding into an arc margin head with m ranging from 0.3 to 0.4 predicting one of the 81313 landmarks. 
We use the 512 dimensional output of the neck as the image embedding (= global descriptor) The following illustrates our setup for a SEResnext101 backbone.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2F6872393fee6bc154a4bf7d0127b62061%2FScreenshot%202020-09-30%20at%2013.30.05.png?generation=1601467910231165&amp;alt=media" alt=""></p> <h4>Training strategy/ schedule</h4> <p>We train all our models on gldv2 clean data only. Each model is trained for 10 epochs with a cosine annealing scheduler having one warm-up epoch. We use SGD optimizer with maximum lr of 0.05 and weight decay of 1e-4 across all models.</p> <h4>Ranking post-processing</h4> <p>As previous editions of this competition have shown, properly ranking and re-ranking predictions is crucial to improve the GAP metric at hand that is sensitive to how landmarks and non-landmarks are ranked respectively. So one major aspect is to specifically penalize non-landmarks that constitute a large portion of the test set. We always tracked both overall GAP as well as landmark-only GAP separately and evaluated all ranking experiments on our validation set that resembled the test set quite well. There are quite different ways to approach this ranking problem, and different ways can lead to success, but here is what we found to work extremely well.</p> <p>In the following graphic we visualize the main concepts of our ranking process. Test refers to the test set on the leaderboard, so the images we need to rank. Train refers to the candidate images we can use to determine the labels and the confidence. One important thing here is that we increased this set of images by all available images for the classes from gldv2_clean from gldv2_full. The 3rd place solution [1] notes that extending the dataset to include also these images improves their training, but what we found is that this is even more useful to be included in the inference process, which makes sense as there are more images to choose from. This also worked well on CV which is how we found it. And finally, Non-landmark includes all non-landmark images from the gldv2 test set. We then calculate all-pairs similarity between all of these sets. A measures the similarity to all available landmarks and their confidence. B measures the similarity of all train images to all non-landmark images and C does the same for the test images. </p> <p>The core idea is now to penalize A by B and C, so to penalize images that are similar to the non-landmark images. A is calculated between each test image and each train image. B and C are calculated by the mean similarity between the image and the top-5 (or 10) most similar non-landmark images.</p> <p>Then, the first step is to penalize A_ij by B_j, pick the top-3 most similar images, and sum the confidence for the same label and then pick the highest. Afterwards, this score is further penalized by C_i. Actually, using both B and C for penalization is a bit redundant, and just using either of those brings good improvements, with just using B is better than just using C.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2F1e262e5afd63136dec8bc07e686a365a%2FScreenshot%202020-09-30%20at%2014.13.22.png?generation=1601468018948485&amp;alt=media" alt=""></p> <p>This whole procedure helps us to boost the actual landmarks to higher ranks and to penalize non-landmarks and rank them lower. 
Specifically B helps us to eliminate noise from the Train images. This gave both impressive boosts on CV and LB. <br> One more important thing to note here is that when calculating cosine similarity between different sets, the similarity metric benefits from similarly scaled vectors. So what we do is that we fit a QuantileTransformer (other scalers work similarly well) on the test set, and apply them on the train and non-landmark datasets. This makes the scores way more stable and we assume that this also adjusts differently sized images better.</p> <h4>Blending</h4> <p>For blending our various models, we first l2-normalize them separately and concatenate them and apply above mentioned quantile transformer on each feature. We then calculate for each model separately the top 3 scores from above and then sum the same labels across all top 3 scores from all models and select the maximum label. For calculating C, we use the concatenated embeddings. This procedure is a bit more robust compared to just using the concatenated embeddings, but for simplicity one can also rely on that approach as it produces very similar rankings.</p> <h4>A word on local descriptors</h4> <p>We tried hard getting something out of local descriptors. For that we tried DELG as well as superpoint.<br> DELG: first we tried to port the pretrained DELG to pytorch but gave up after struggling for a day with tf1 and tf2.0 mixups in the original implementation. Instead we extracted local features using the tf implementation directly as done in the public baseline. <br> Superpoint: We extracted local descriptors using the pretrained superpoint net, which is very fast.<br> We matched keypoints straight forward and applied RANSAC after. However the improvement even when using different image scales etc. was very small for both, DELG and Superpoint, and computational time for extracting and matching keypoints (especially when using DELG) was very high. Hence we did not use local descriptors in our final submissions.</p> <h4>What did not work</h4> <ul> <li>Training together with gldv1</li> <li>Training together with gldv2 full</li> <li>Using index dataset 2019</li> <li>Using test set from 2019 stage1 </li> <li>Hyperbolic image embeddings</li> <li>Superglue</li> <li>Deformable Grid</li> <li>1000 other things</li> </ul> <p>Thanks for reading. Questions welcome.</p> <p>[1] <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/187757" target="_blank">https://www.kaggle.com/c/landmark-recognition-2020/discussion/187757</a></p> <p>update:<br> paper: <a href="https://arxiv.org/abs/2010.01650" target="_blank">https://arxiv.org/abs/2010.01650</a><br> code: <a href="https://github.com/psinger/kaggle-landmark-recognition-2020-1st-place" target="_blank">https://github.com/psinger/kaggle-landmark-recognition-2020-1st-place</a></p> <p>Hard to say. We submitted the last single model without any reranking 2 weeks ago. That model was at 0.52 private LB and is not part of our final ensemble.</p> <p>Thx. Thats how I felt when private LB was revealed</p> <p>we have 2 efficientnet b3 in our ensemble. 
Training one of our models takes roughly 24h on 8xV100</p> <p><a href="https://github.com/albumentations-team/albumentations/blob/master/albumentations/augmentations/transforms.py#L407-L441" target="_blank">https://github.com/albumentations-team/albumentations/blob/master/albumentations/augmentations/transforms.py#L407-L441</a></p> <blockquote> <p>Rescale an image so that minimum side is equal to max_size, keeping the aspect ratio of the initial image.</p> </blockquote> <p>and LongestMaxSize accordingly</p> <p>updated description with video links of us presenting the solution:<br> NVIDIA Grandmaster Series Ep2 <a href="https://youtu.be/VxNDH6qLZ_Q" target="_blank">https://youtu.be/VxNDH6qLZ_Q</a><br> Chai Time Data Science <a href="https://youtu.be/NRl3lMlixPc" target="_blank">https://youtu.be/NRl3lMlixPc</a></p>
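<p>To make the ranking recipe concrete, here is a simplified numpy sketch of the A/B/C penalization; the similarity matrices are assumed to be precomputed as described above, and subtraction is used as one plausible form of the penalty rather than the team's exact formula.</p> <pre><code>import numpy as np

# A: (n_test, n_train) cosine similarity of test to train embeddings
# B: (n_train,) mean similarity of each train image to its top-5 non-landmark neighbours
# C: (n_test,)  mean similarity of each test image to its top-5 non-landmark neighbours
def rerank(A, B, C, train_labels, top_k=3):
    penalized = A - B[None, :]                    # push down train images that look like non-landmarks
    top_idx = np.argsort(-penalized, axis=1)[:, :top_k]
    preds, scores = [], []
    for i, idx in enumerate(top_idx):
        per_label = {}
        for j in idx:                             # sum confidences of identical labels
            lab = train_labels[j]
            per_label[lab] = per_label.get(lab, 0.0) + penalized[i, j]
        lab, score = max(per_label.items(), key=lambda kv: kv[1])
        preds.append(lab)
        scores.append(score - C[i])               # second penalty: test-to-non-landmark similarity
    return np.array(preds), np.array(scores)
</code></pre>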
Google Landmark Recognition 2020
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/weimin" target="_blank">@weimin</a>, <a href="https://www.kaggle.com/andy2709" target="_blank">@andy2709</a> yes! Post-processing was even more important in last year's competition, as there were many more distractors (with less than 2k landmark images and more than 100k distractors). </p> <p>I struggled with the big dataset and tried hard to feed larger images to the models during the whole period of the competition since I entered (~40 days) on my server with 8 GPUs. I also tried Colab Pro, as the winner of the Landmark Retrieval Competition 2020 trained similar models successfully on it. </p> <p>Congrats to you and <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>, you've done a very good job. I quite enjoyed the "battle" and am satisfied with the result.</p> <p>Yes, the usage of the full version of GLDv2 is indeed the key to this competition, as so many landmark images were mis-cleaned from the GLDv2 dataset.</p> <p>It's a pity: I analysed the rules of generating the GLDv2 clean dataset, and I'm quite sure I should have used all the images when training, but I missed the kNN search part.</p> <p>Congratulations to all the winners, thanks to kaggle and Google for hosting the Landmark Retrieval/Recognition competitions. </p> <p>Here is my brief solution to this competition.</p> <p><strong>Definitions</strong><br> <strong>GLD_v2c</strong> (cleaned GLDv2): there are 1.6 million training images and 81k classes. All landmark test images belong to these classes.<br> <strong>GLD_v2x</strong>: in GLDv2, there are 3.2 million images belonging to the 81k classes in GLD_v2c. I define these 3.2m images as GLD_v2x.<br> <strong>query image</strong>: the test image when submitting to kaggle, or the image in the validation set when validating locally.<br> <strong>Index image set</strong>: the images from the train set, as described on the Data page of this competition, “subset contains all of the training set images associated with the landmarks in the private test set”<br> <strong>non-landmark image set</strong>: a subset of 5000 non-landmark images from the test set of GLDv2.</p> <p><strong>Models to retrieve images</strong></p> <p>It's very important to search related images from the index image set accurately. I trained efficientnet B5, B6, B7 and Resnet152 models according to the first place solution of the Landmark Retrieval 2020 competition from <a href="https://www.kaggle.com/keetar" target="_blank">@keetar</a>, <a href="https://www.kaggle.com/c/landmark-retrieval-2020/discussion/176037" target="_blank">https://www.kaggle.com/c/landmark-retrieval-2020/discussion/176037</a>. I will only describe the differences in my training strategy; please refer to his great solution for details.<br> 1. Train on 448x448 images from GLD_v2c for 5-6 epochs.<br> 2. Finetune the model of step 1 on 448x448 images from GLD_v2x.<br> 3. Finetune the model of step 2 on 512x512 images from GLD_v2x for 5-6 epochs.
I used GLD_v2x instead of the full GLD_v2.<br> 4. 640x640 for 3-5 epochs<br> 5. 736x736 for 2-3 epochs</p> <p><strong>Loss function</strong>: ArcFace instead of AdaCos loss.<br> <strong>Optimizer</strong>: SGD(0.01, momentum=0.9, decay=1e-5) <br> <strong>Inference</strong>: extract embeddings by feeding 800x800 images.<br> <strong>Validation set</strong>: sample 200 images from the GLDv2 test set as the val set, and all the ground truth images of the Google Landmark Retrieval Competition 2019 as the index dataset.</p> <p>After replacing the model in the baseline kernel <a href="https://www.kaggle.com/camaskew/host-baseline-example" target="_blank">https://www.kaggle.com/camaskew/host-baseline-example</a> from the host with the trained EfficientNet B7 model, <br> the public and private scores of the B7 model are 0.5927/0.5582.</p> <p><strong>Validation strategy for the Landmark Recognition Task</strong><br> Val set part 1: the 1.3k landmark images from the GLDv2 test set (excluding those not in the 81k classes). <br> Val set part 2: 2.7k images sampled from GLD_v2x minus GLD_v2c.</p> <p>The index image set for the val set: all the images of the related landmarks from the GLD_v2c train set, plus other sampled images to reach 200k images.</p> <p>This strategy was quite stable during the whole competition, but unfortunately the private test set distribution is a little different from my local CV and the public test set. I should have used all the GLD_v2x images to generate the index image set, as many landmark images are not included in GLD_v2c. </p> <p><strong>SuperPoint+SuperGlue+pydegensac</strong><br> This combination is better than delf+kdtree+pydegensac.<br> The score improved from 0.5927/0.5582 to 0.6146/0.5756, which would be top 10 on the leaderboard.</p> <p><strong>Post-Processing</strong></p> <p>Although I forced myself not to put too much effort into post-processing, magic finding or reverse engineering, post-processing is so important in this competition that I had to spend a lot of time analysing the model results and designing rules on the validation set. I tried rules from last year's winning solutions; the following are the effective ones:<br> 1. Search the top 3 non-landmark images from the non-landmark image set for a query image; if the top-3 similarity is &gt;0.3, decrease the score of the query image.<br> 2. If a landmark is predicted &gt;20 times in the test set, treat all the images of that landmark as non-landmarks.</p> <p>As many features (RANSAC inliers, similarity to index images, similarity to non-landmark images…) can be used to determine whether an image is a non-landmark or not, I developed a model which can be called a re-rank model. 
As to re-ranking, we can refer to this post:<br> <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159315" target="_blank">https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159315</a></p> <p>After post-processing, the score of the EfficientNet B7 model improved from 0.6146/0.5756 to 0.6797/0.6301, which would be top-3 on the leaderboard.<br> <strong>Ensemble</strong><br> My final score was achieved by ensembling EfficientNet B7, B6, B5 and ResNet152 models, which was not a big improvement considering the complexity and computation resources.</p> <p>repo: <a href="https://github.com/bestfitting/instance_level_recognition" target="_blank">https://github.com/bestfitting/instance_level_recognition</a><br> [Update] with paper attached</p> <p>Thanks for your kind words, but I learned so much from the Kaggle community; without <a href="https://www.kaggle.com/keetar" target="_blank">@keetar</a>'s sharing, I could not have trained a good retrieval model. So, let's compete and learn from each other.</p> <p>800x800 is a little better. <br> Training on small images and running inference on large images is a useful trick when we want to make use of the details of an image.</p> <p>I mentioned it here:<br> <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/188299#1037363" target="_blank">https://www.kaggle.com/c/landmark-recognition-2020/discussion/188299#1037363</a></p>
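<p>As a concrete illustration of the two post-processing rules listed above (penalising queries that match the non-landmark set, and suppressing landmarks predicted too often), here is a hedged sketch; the penalty value, the use of the mean of the top-3 similarities, and zeroing the score are assumptions, not the author's exact implementation:</p> <pre><code>import numpy as np
from collections import Counter

def apply_rules(test_emb, nonlandmark_emb, scores, labels, penalty=1.0, max_count=20):
    # Rule 1: if a query looks like the non-landmark set, decrease its score.
    sims = test_emb @ nonlandmark_emb.T                  # cosine similarity (L2-normalized inputs)
    top3 = np.sort(sims, axis=1)[:, -3:].mean(axis=1)    # mean of the top-3 similarities
    scores = np.where(top3 &gt; 0.3, scores - penalty, scores)

    # Rule 2: a landmark predicted &gt;20 times in the test set is treated as non-landmark.
    counts = Counter(labels)
    scores = np.array([0.0 if counts[l] &gt; max_count else s for s, l in zip(scores, labels)])
    return scores
</code></pre>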
Google Landmark Recognition 2020
31st place solution - single DELG model.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>31st place model:<br> I only used a single DELG model with no extra training, just the pre-trained weights with extra parameter tuning and pre/post-processing, as described below.<br> <strong>RANSAC Parameters</strong></p> <p>This was a new area for me without much experience, so I started to read a lot and posted some findings in the forum under the topic The world of RANSAC <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/180921" target="_blank">https://www.kaggle.com/c/landmark-recognition-2020/discussion/180921</a></p> <p><strong>RANSAC Parameters</strong><br> <strong>MAX_INLIER_SCORE</strong></p> <p>Here I tested some values; a value of 20 had the best result in the beginning, but I saw in the papers that a value of 70 was used, so I ended up with a value in between to be safe. I couldn't see that this was part of the original RANSAC implementation, nor find any papers on the subject, so I didn't give it much time. I also started using the DELG_SCORE_THRESHOLD_TENSOR and didn't want to limit the data too much.</p> <p><strong>RANSAC Parameters</strong><br> <strong>MAX_REPROJECTION_ERROR, MAX_RANSAC_ITERATIONS, HOMOGRAPHY_CONFIDENCE</strong></p> <p>I had the following research paper as a code reference on the subject: “Image Matching Across Wide Baselines: From Paper to Practice” from 17 Aug <a href="https://arxiv.org/pdf/2003.01587.pdf" target="_blank">https://arxiv.org/pdf/2003.01587.pdf</a> and also the Github <a href="https://github.com/ducha-aiki/pydegensac" target="_blank">https://github.com/ducha-aiki/pydegensac</a> </p> <blockquote> <p>“As usual, you may set pixel threshold for point to count as inlier, max_iters and confidence. Besides this, there are following options:<br> laf_consistensy_coef. If &gt; 0, it means that scale and orientation of the feature will be checked for consistency with the far-the-best found model. The threshold for checking px_th * laf_consistensy_coef. Because they are usually less precise than keypoint center, it is recommended to set laf_consistensy_coef &gt; 1, e.g. 3.<br> error_type. The measure of the correspondence quality. Can be 'sampson', 'symm_sq_max', 'symm_max', 'symm_sq_sum', 'symm_sum'. 
For precise definition, please take a look to Chapter 4.2 Hartley and Zisserman "Multi View Geometry"<br> symmetric_error_check If one should perform the additional check of the inliers with 'symm_max' error type"</p> </blockquote> <p>I tested many of the other included PyRANSAC options (Symmetric_error_check, Error_type, laf_consistensy_coef), but they didn't change the score for the better; the default parameters Pixel Threshold, Max_iters and Conf worked best for this problem.</p> <p>From the paper I saw that for PyRANSAC a <strong>MAX_REPROJECTION_ERROR</strong> of 0.1-20 was used and they had the best threshold between 0.24-2.0, while the baseline kernel had 4.0. For this Landmark problem I found with local validation that a value of 12 worked best, but I used a value of 13 in the final model since it had a better competition score.</p> <p>For <strong>max iterations</strong> they wrote in the paper:</p> <blockquote> <p>We summarize the optimal hyperparameters – the maximum number of RANSAC iterations η and the ratio test threshold r – for each combination of methods. The number of RANSAC iterations Γ is set to 250k for PyRANSAC, 50k for DEGENSAC, and 10k for both GC-RANSAC and MAGSAC</p> </blockquote> <p>Many public kernels used much higher iteration counts and reported good results within that interval. In the DELG paper 1k was used for validation, and the same in the baseline. I could also see in my study of this parameter that the number of iterations is related to the outlier ratio, well described in this video <a href="https://youtu.be/5E5n7fhLHEM" target="_blank">https://youtu.be/5E5n7fhLHEM</a>. My conclusion was that 5k iterations were enough to fit the inference limit and the problem, and I finally used 5k after some interval testing.</p> <p>In the same paper the <strong>confidence level</strong> was discussed,</p> <blockquote> <p>“All methods considered in this paper have three parameters in common: the confidence level in their estimates, τ ; the outlier (epipolar) threshold, η; and the maximum number of iterations, Γ. 
We find the confidence value to be the least sensitive, so we set it to τ = 0.999999”</p> </blockquote> <p>I kept the baseline value of 0.99 as it had little effect on the outcome; I had already tweaked the kernel into outer space at this point.</p> <p><strong>Max_distance</strong></p> <p>I left max_distance at 0.85 even though in some papers the value 0.8 is used, and I could also see better results with a higher value, so here we had an interval of choices; but I finally left the default value, as too much tweaking had already been done.</p> <p><strong>Other notes:</strong></p> <p>In the baseline code, I think removing the pre_verification_predictions part will make the inference quicker, as only the post_verification_predictions part is used.</p> <p>In the last days I tried casting to float16 with a bigger batch size to speed up the inference; I didn't time the inference though. Maybe it can help to fit more expensive settings or an ensemble.</p> <p><strong>Other models:</strong><br> I trained my own model as well but did not have time to train it better.</p> <p>This was my first Landmark Recognition competition, so there were many new areas, ideas and valuable time spent trying to solve the problem.<br> I started to look at some of the previous competitions' write-ups and research papers about the techniques and maths used in the area, and also studied information from the TF/DELG sites, just to mark some for further reading and testing. It became a long list, but a great driving license.</p> <p><a href="https://www.kaggle.com/c/landmark-retrieval-2019" target="_blank">https://www.kaggle.com/c/landmark-retrieval-2019</a><br> <a href="https://www.kaggle.com/c/landmark-recognition-2019" target="_blank">https://www.kaggle.com/c/landmark-recognition-2019</a><br> <a href="https://www.kaggle.com/c/landmark-retrieval-2020" target="_blank">https://www.kaggle.com/c/landmark-retrieval-2020</a></p> <p>I could see that using EfficientNets would be a good strategy, but it would take some time to finish.<br> First, though, I started by using the baseline and DELG models instead; as stated in the papers, a single model could compete in the top-3 segment, a good track record to follow. 
I should also have started training the EfficientNets right away and not in the middle of the competition, but that was the case this time, and dwelling on ifs and buts is not a productive approach.</p> <p><strong>Competition score and local validation ratio</strong></p> <p>After looking at the past Landmark competitions I saw that there were minimal to no shake-ups, so fitting to and relying on the leaderboard score should be safe.</p> <p><strong>EfficientNets</strong></p> <p>First a quick recap of the training of my own model.<br> Starting kernel: <a href="https://www.kaggle.com/ragnar123/efficientnetb3-data-pipeline-and-model" target="_blank">https://www.kaggle.com/ragnar123/efficientnetb3-data-pipeline-and-model</a>, excellent work and all credit to <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a>, and I had a plan to follow this great winning solution <a href="https://arxiv.org/abs/2009.05132" target="_blank">https://arxiv.org/abs/2009.05132</a>.<br> I trained a couple of EffNets B4-B7, but with the limit of 40h/week on Kaggle TPU and 6h/day on Colab TPU, I should have started the training much earlier in the competition to use them in the final ensemble. Again, dwelling on ifs and buts is not a productive approach.<br> Best single model:<br> EffB7ns with GEM(3), ArcFace(0.1), No DropOut, 512Dim, SGDM, SyncBatchNormalization, weight average of the 6 last epochs, no scheduler.</p> <p>I switched from 384 to 512 before it converged, when I saw it wouldn't converge before the competition deadline. With the Colab limit of 6h I had to split the training data in half and train the halves in different epochs to fit the limit; not the best situation, and maybe I could have found a better solution.<br> I trained with random_flip_left_right and used TTA flip_left_right in inference, which increased the score.</p> <p>I also used XLA and TensorFlow Auto Mixed Precision; the XLA part helped to speed up the training, but every attempt to use mixed precision with TPU failed. I could start the training with a 2x batch size, but it gave NaN loss after a while(?), even when I tried LossScaleOptimizer.<br> Ended up with Public 0.4924 Private 0.4721 – and not close to the global minimum/maximum.</p> <hr> <p>That's all! 🙂 And it gave me my first silver medal! 🙂<br> With this competition it's also time to celebrate one year with Kaggle 🎉, and with almost all different challenges/problems every time, I hope some will return in my 2nd year, so I can face the competitions even better the second time with some experience :)</p>
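<p>To make the RANSAC settings above concrete, here is a minimal pydegensac call with the discussed values (reprojection error 13, 5k iterations, confidence 0.99); the keypoint arrays are placeholders, so treat this as a sketch rather than the author's code:</p> <pre><code>import numpy as np
import pydegensac

# Placeholder matched keypoints (query image vs. index image)
mkpts0 = np.random.rand(200, 2) * 500
mkpts1 = mkpts0 + np.random.randn(200, 2)

# findHomography(points1, points2, px_th, conf, max_iters), i.e.
# MAX_REPROJECTION_ERROR, HOMOGRAPHY_CONFIDENCE, MAX_RANSAC_ITERATIONS
H, inlier_mask = pydegensac.findHomography(mkpts0, mkpts1, 13.0, 0.99, 5000)
num_inliers = int(inlier_mask.sum())
</code></pre>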
Google Landmark Recognition 2020
3rd Place Solution — A Pure Global Feature Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and congrats to all the winners! Our ( <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> <a href="https://www.kaggle.com/alexanderliao" target="_blank">@alexanderliao</a>) solution is a pure global feature metric learning approach with some tricks.</p> <h2>architecture: sub-center ArcFace with dynamic margins</h2> <p>ArcFace (<a href="https://arxiv.org/abs/1801.07698" target="_blank">paper</a>) has become a standard metric learning method on Kaggle over the past two years or so. In this competition, we used Sub-center ArcFace (<a href="https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123560715.pdf" target="_blank">paper</a>), a recent improvement over ArcFace by the same authors. The idea is that each class may have more than one class center. For example, a certain landmark's photos may have a few clusters (e.g. from different angles). Sub-center ArcFace's weights store multiple class centers' representations, which can increase classification accuracy and improve global feature's quality.</p> <p>The classes in GLD dataset are extremely imbalanced with longs tails. For models to converge better in the presence of heavy imbalance, smaller classes need to have bigger margins as they are harder to learn. Instead of manually setting different margin levels based on class size, we introduce <em>dynamic margins</em>, a family of continuous functions mapping class size to margin level. This give us some major boost. Details to be described in paper.</p> <p>We train the model using ArcFace loss only.</p> <h2>validation scheme</h2> <p>Stratified 5-fold. Training using 4 folds, validate on only 1/15 of 1 fold to save time. Use different folds for different single models, for maximal diversity in ensemble.</p> <p>We use GAP metric to validate. CV GAP is very high compared to LB. All model's CV GAP are over 0.967. Good news is that CV GAP and LB have high correlation.</p> <h2>test set predicting strategy</h2> <ul> <li>If we use model's ArcFace head to predict test set, our best single fold model's public LB is only <strong>0.564</strong></li> <li>A better strategy is to calculate the global feature cosine similarity of each [private train image, private test image] pair, and use the top1 neighbor and corresponding cosine similarity of each test image as prediction. This gives us <strong>0.604</strong> public LB for the same single fold model</li> <li>Instead of using top1 neighbor only, we can improve it by combining top5 neighbors and their similarities. Best combining function is 8th power. <br> E.g. let's say test image A's top5 neighbors and their cosine similarities are: class 1 (0.9), class 2 (0.8), class 2 (0.7), class 1 (0.5), class 3 (0.45). Then class 1's total score is <code>0.9**8 + 0.5**8 = 0.434</code>; class 2's total score is <code>0.8**8 + 0.7**8 = 0.225</code>. 
So we predict class 1 with p=0.434<br> LB increases to <strong>0.610</strong></li> <li>We can further improve it by incorporating ArcFace head's predictions with 12th power. In above example, assuming image A's ArcFace head give class 1 score = 0.75, class 2 score = 0.88. Then class 1's total score becomes <code>(0.9**8 + 0.5**8) * 0.75**12 = 0.0138</code>; class 2's total score becomes <code>(0.8**8 + 0.7**8) * 0.88**12 = 0.0486</code>. Now we predict class 2 with p=0.0486<br> LB increases to <strong>0.618</strong></li> </ul> <h2>augmentations</h2> <p>We resize all images to square shape without cropping.</p> <pre><code>import albumentations as A A.Compose([ A.HorizontalFlip(p=0.5), A.ImageCompression(quality_lower=99, quality_upper=100), A.ShiftScaleRotate(shift_limit=0.2, scale_limit=0.2, rotate_limit=10, border_mode=0, p=0.7), A.Resize(image_size, image_size), A.Cutout(max_h_size=int(image_size * 0.4), max_w_size=int(image_size * 0.4), num_holes=1, p=0.5), A.Normalize() ]) </code></pre> <h2>pretraining and finetuning</h2> <ul> <li>In cGLDv2 (cleaned GLDv2), there are 1.6 million training images and 81k classes. All landmark test images belong to these classes.</li> <li>In GLDv2, there are 4.1m training images and 200k classes, among which 3.2m images belong to the 81k classes in cGLDv2.</li> </ul> <p>We noticed that (1) training with the 3.2m data gives better results than only the 1.6m competition cGLDv2 data, (2) pretraining on all 4.1m data then finetuning on 3.2m data gives even better results.</p> <h2>3-stage training schedule</h2> <ul> <li>Stage 1 (pretrain): 10 epochs with small image size (256) on 4.1m data</li> <li>Stage 2 (finetune): about 16 epochs with medium image size (512 to 768 depending on model) on 3.2m data. Number of epochs varies by model and ranges from 13 to 21. </li> <li>Stage 3 (finetune): 1 epoch with large image size (672 to 1024) on 3.2m data</li> </ul> <p>Note: above schedules are for Sub1 (private 0.6289). Sub2 has higher score (private 0.6344) but more complex schedules, i.e. longer with more rounds of finetuning. Both submissions are 3rd place.</p> <h2>ensemble</h2> <p>7 models: EfficientNet B7, B6, B5, B4, B3, <a href="https://github.com/zhanghang1989/ResNeSt" target="_blank">ResNeSt101</a>, <a href="https://github.com/clovaai/rexnet" target="_blank">ReXNet2.0</a></p> <p>For global feature neighbor search, we concatenate each model's 512-dimension feature; for ArcFace head, we take simple average of each model's logits.</p> <p>Best single model is EfficientB6. <strong>private = 0.6005, public = 0.6179</strong> (We didn't submit all the epochs, there may be higher ones)<br> Sub1 is 7 model ensemble. <strong>private = 0.6289, public = 0.6604</strong><br> Sub2 is 9 model ensemble (B5 and B6 twice). <strong>private = 0.6344, public = 0.6581</strong></p> <h3>[update 10/12/2020]</h3> <p>paper: <a href="https://arxiv.org/abs/2010.05350" target="_blank">https://arxiv.org/abs/2010.05350</a><br> repo: <a href="https://github.com/haqishen/Google-Landmark-Recognition-2020-3rd-Place-Solution" target="_blank">https://github.com/haqishen/Google-Landmark-Recognition-2020-3rd-Place-Solution</a></p> <p>Hi Zakirov, I was not aware of this paper. Thanks for letting us know. I will take a look.</p> <p>Hi Thanks HangWang. We just open sourced the solution: <a href="https://github.com/haqishen/Google-Landmark-Recognition-2020-3rd-Place-Solution" target="_blank">https://github.com/haqishen/Google-Landmark-Recognition-2020-3rd-Place-Solution</a></p>
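<p>The worked example above can be reproduced with a few lines; this is an illustrative sketch of the combining rule (8th power on neighbor similarities, 12th power on the ArcFace class score), with the class-3 ArcFace score assumed:</p> <pre><code>from collections import defaultdict

neighbors = [("class1", 0.9), ("class2", 0.8), ("class2", 0.7),
             ("class1", 0.5), ("class3", 0.45)]             # top-5 neighbors of one test image
arcface = {"class1": 0.75, "class2": 0.88, "class3": 0.10}  # ArcFace head scores (class3 assumed)

totals = defaultdict(float)
for cls, sim in neighbors:
    totals[cls] += sim ** 8                       # combine neighbor similarities with 8th power
for cls in totals:
    totals[cls] *= arcface.get(cls, 0.0) ** 12    # incorporate the ArcFace head with 12th power

pred_cls, pred_score = max(totals.items(), key=lambda kv: kv[1])
print(pred_cls, round(pred_score, 4))             # class2 0.0486, matching the worked example
</code></pre>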
Google Landmark Recognition 2020
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We thank all organizers for this very exciting competition. <br> Congratulations to all who finished the competition and to the winners.</p> <h2>Summary</h2> <ul> <li>kNN based on global descriptor</li> <li>Non-landmark filtering with similarity between test set and GLDv2 test set</li> <li>We tried local descriptor methods but they were not used in our final submission</li> </ul> <h2>Model Details</h2> <p>We trained CosFace based global features models. <br> The settings are almost the same as our models in the retrieval2020 (See <a href="https://www.kaggle.com/c/landmark-retrieval-2020/discussion/175472" target="_blank">https://www.kaggle.com/c/landmark-retrieval-2020/discussion/175472</a>)</p> <ul> <li>Backbones: Ensemble of ResNeSt101, ResNeSt101, ResNeSt200</li> <li>Pooling: GeM (p=3) (Replace GeM p=3 with p=4 in testing)</li> <li>Head: FC-&gt;BN-&gt;L2</li> <li>Loss: CosFace with Label Smoothing</li> <li>Data Augmentation: HorizontalFlip, RandomResizedCrop, Rotation, RandomGrayScale, ColorJitter, GaussianNoise, Normalize, and GridMask</li> <li>LR: Cosine Annealing LR with warmup, training for 30 epochs + refine 5epochs</li> <li>Input image size in training: 352 (in refine: 640)</li> <li>Adding 600 non-landmark images sampled from the test set of the last year's recognition competition in training (this made the total 81,314 classes).</li> </ul> <p>Additionally, we tried "Inplace Knowledge Distillation" by MutualNet paper (<a href="https://arxiv.org/pdf/1909.12978.pdf" target="_blank">https://arxiv.org/pdf/1909.12978.pdf</a>) to train multi-scale image features efficiently. <br> It has been adopted for all models.</p> <h2>HOW descriptor based ASMK similarity</h2> <p>We tried HOW descriptor (<a href="https://arxiv.org/pdf/2007.13172.pdf" target="_blank">https://arxiv.org/pdf/2007.13172.pdf</a>) as an alternative of the global feature model. <br> HOW aggregates CNN based local descriptors into a single global descriptor with ASMK. <br> We trained a ResNet50-based HOW descriptor model (our pytorch implementation) and got public score = 0.5284 (private = 0.5044). <br> One of our final submission was based on blending between global cosine similarity and ASMK similarity, that achieved the best public score = 0.6298. However, the other global-feature-only approach achieved the better private score.</p> <table> <thead> <tr> <th>Final submissions</th> <th>Private</th> <th>Public</th> </tr> </thead> <tbody> <tr> <td>Global features</td> <td>0.5983</td> <td>0.6271</td> </tr> <tr> <td>Global + HOW ASMK</td> <td>0.5960</td> <td>0.6298</td> </tr> </tbody> </table> <h2>What did not work</h2> <ul> <li>Reranking top-k with HOW ASMK similarity (Blending similarity was better)</li> <li>Reranking top-k with DELG local descriptors</li> <li>DBA</li> <li>Replacing the private train set with GLDv2clean<ul> <li>Undersampling up to 350 images per class</li> <li>Removing classes that do not exist in the private train set</li></ul></li> </ul>
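<p>Since the pipeline above relies on GeM pooling with p=3 during training and p=4 at test time, here is a minimal PyTorch sketch of that pooling layer (not the team's exact code):</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p, self.eps = p, eps
    def forward(self, x):                               # x: (B, C, H, W) feature map
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)
        return x.flatten(1)                             # (B, C) global descriptor

pool = GeM(p=3.0)       # training value
pool.p = 4.0            # test-time value, as described above
feat = pool(torch.randn(2, 2048, 12, 12))
</code></pre>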
Google Landmark Recognition 2020
7th place solution (with inference kernel + training code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I want to thank my teammates <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> and <a href="https://www.kaggle.com/dattran2346" target="_blank">@dattran2346</a> for their hard work. In this comp., they focused on improving global models so that I could spend time on local part.</p> <p>Our best sub. was geometric mean of similarities from 4 models (2 backbones SEResNext101+ResNext101-32x4d at 512x512 and 736x736 image sizes) + re-ranking by SuperPoint + SuperGlue. We pre-computed 1.6m train embeddings for 4 models, added them as ext. data and simply filtered for the relevant private image ids (100k) for inference.</p> <p><strong>1. Global models</strong><br> We continued training from our best checkpoints from retrieval challenge (<a href="https://www.kaggle.com/c/landmark-retrieval-2020/discussion/176151)" target="_blank">https://www.kaggle.com/c/landmark-retrieval-2020/discussion/176151)</a>. Our models were originally trained with focal smoothing loss with a tweak to work nicely with CosFace (thanks to Ahmet) on GLv2 clean dataset. For this comp., we used all the remaining train images from 81k classes (3.2m images) for re-training. We also came across the seesaw loss paper (<a href="https://arxiv.org/abs/2008.10032" target="_blank">https://arxiv.org/abs/2008.10032</a>) and decided to give it a try (thanks to Dat).<br> Model architecture was just simply CNN + GEM + Linear + BN + CosFace, like many other teams. <br> Our 2 models were then re-trained with these two losses as following:</p> <ul> <li>Stage 1: 20 epochs at size 512x512</li> <li>Stage 2: froze batchnorm layers, then fine-tuned at size 736x736 for 2 epochs.<br> Compared to the retrieval comp., our local validation scores were boosted by 5% and correlated well with the LB. From our estimates, 3% was due to adding extra data, 1% from the seesaw loss and 1% from larger image size.</li> </ul> <p><strong>2. Local model</strong><br> After obtaining the top-k nearest train ids for each test image, we used SuperPoint + HRNetv2 pre-trained on ADE20k dataset to filter predicted keypoints on sky/person/flowers/tree classes + SuperGlue (copied from the winning sol. at CVPR this year) to calculate the number of inliers between each image pair. Local score was then obtained using the formula provided by the host team (with max_num_inliers=200) and then multiplied with global score to get final score. This post-processing step gave a very strong boost (4-6% in public LB to our above global models). However, we noticed that the better the global models were, the less improvement this local re-ranking step gave us. In addition, we couldn't come up with a reliable strategy to evaluate SuperPoint+SuperGlue when integrating with global models locally. 
Relying solely on LB was pretty dangerous and we were lucky to stay at 7th 😅.<br> I also tried to re-train SuperPoint and SuperGlue on a subset of 200k clean train images but couldn't obtain any positive improvement.</p> <p>Our kernel + training code + model checkpoints were here: <a href="https://www.kaggle.com/andy2709/fork-of-recognition-notebook-16367c-511fa4-95d3bf?scriptVersionId=43685736" target="_blank">https://www.kaggle.com/andy2709/fork-of-recognition-notebook-16367c-511fa4-95d3bf?scriptVersionId=43685736</a>. </p> <p>P/s: The DELG weights + training config. files for 3 different backbones (res50, res101 and seres101) will be uploaded soon 😃. They were all trained with 512x512 images for 10 epochs with AdamW and cosine scheduler. In retrieval challenge, res50's perf was 0.299/ 0.268 (public/ private); others were higher so I'm quite confident in the implementation correctness. </p> <p>It's a high-performant segmentation model. You can find it here: <a href="https://github.com/CSAILVision/semantic-segmentation-pytorch" target="_blank">https://github.com/CSAILVision/semantic-segmentation-pytorch</a></p>
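<p>A hedged sketch of the two combination steps described above, i.e. the geometric mean over per-model similarities and the inlier-based re-ranking with max_num_inliers=200 (the exact host formula and the team's code may differ):</p> <pre><code>import numpy as np

def ensemble_similarity(sim_list):
    # sim_list: list of (num_test, num_train) cosine-similarity matrices, one per model
    sims = np.clip(np.stack(sim_list), 1e-6, None)   # negatives clipped for the log; a simplification
    return np.exp(np.log(sims).mean(axis=0))         # geometric mean across models

def rerank_score(global_score, num_inliers, max_num_inliers=200):
    local_score = min(num_inliers, max_num_inliers) / max_num_inliers
    return global_score * local_score                # local score multiplied with the global score
</code></pre>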
Google Landmark Recognition 2020
My engineering approach to the problem. #49
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is my first real competition. Definitely learnt quite a bit! Thanks to the organizers. After giving up on training my model, I leveraged the baseline model with few additional measures. My approach did quite well in my test cases with 2019 competition data (~0.6) but unfortunately didn't do quite as well on this contest.</p> <ul> <li><p>Precomputed embeddings for complete train data and stored in sqlite3 database. This substantially reduced time to evaluate.</p></li> <li><p>Mix in precomputed embeddings with embeddings generated on a subset openimages dataset. I assign a landmark ID of 999999 to this dataset and use it to filter out any images that matched this dataset. </p></li> <li><p>After this step, the dataset is reduced by 10x allowing a lot of time to do local matching etc. </p></li> </ul> <p>I did try using cv2 sift for local scoring but didn't see much benefit. </p>
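<p>A minimal sketch of the embedding-caching idea described above (table and column names are made up for illustration):</p> <pre><code>import sqlite3
import numpy as np

conn = sqlite3.connect("embeddings.db")
conn.execute("CREATE TABLE IF NOT EXISTS emb (image_id TEXT PRIMARY KEY, landmark_id INTEGER, vec BLOB)")

def save(image_id, landmark_id, vec):
    conn.execute("INSERT OR REPLACE INTO emb VALUES (?, ?, ?)",
                 (image_id, landmark_id, np.asarray(vec, np.float32).tobytes()))
    conn.commit()

def load(image_id):
    lid, blob = conn.execute("SELECT landmark_id, vec FROM emb WHERE image_id=?", (image_id,)).fetchone()
    return lid, np.frombuffer(blob, dtype=np.float32)

save("img_0001", 999999, np.random.rand(512))   # 999999 = the non-landmark bucket mentioned above
print(load("img_0001")[0])
</code></pre>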
Google Landmark Recognition 2020
[21st place] - My solution to 2 silver medals in GL 2020
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you. I tried to filter non-landmark by using the probabilities of arcface head (updated in my post), but it downgraded my score so I didn't use any filtering in my final results.</p> <p>Thank you, congrats your silver too :D </p> <p>Thank you :D</p> <p>Wow :)), thank you <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> </p> <p>Thank you <a href="https://www.kaggle.com/vineeth1999" target="_blank">@vineeth1999</a> :))</p> <p>First of all, congratulations to everyone for having interesting journeys across such a challenging competition, especially to the top teams and all who gained a lot from the competition. Also, thanks to google for organizing the 3rd Landmark competitions.</p> <p>For me, throughout two competitions of Google Landmark 2020, I really learnt such an amount of knowledge, and also came up with many ideas to compete in those harsh competitions. Here are things I have done.</p> <h1><strong>Retrieval competition</strong></h1> <p>Actually, I didn’t think this competition was for me at first, because I’m kind of a newbie in tensorflow :)). However, after a bad performance in Global Wheat Detection challenge, I decided to join seriously in the competition within 13 days left.</p> <h2>Architecture</h2> <p>First, I tried to train delf model from <a href="https://github.com/tensorflow/models/tree/master/research/delf" target="_blank">tf delf</a>, the result is no where near the host baseline kernel (only 0.12-0.13 compared to 0.271 of host baseline) so I quickly quitted this way.</p> <p>Luckily, <a href="https://www.kaggle.com/chandanverma/convert-pytorch-model-to-tf-2-2-submission-format" target="_blank">onnx</a> kernel was published and it was a big chance for a pytorch user like me that I could develop my own model. From top-down view, my final model is ensemble of 4 models, which can be represented as followed:</p> <pre><code>net_1 -&gt; feature_1 | ... |--[concat]--&gt; [fc 4096]--&gt; final_feature net_4 -&gt; feature_4 | </code></pre> <p>With net_i:</p> <pre><code>CNN backbone --&gt; GeM pooling --&gt; fc --&gt; batchnorm --&gt; arcface/cosface </code></pre> <p>More detailed:</p> <pre><code>net_1: efficientnet-b2, fc 1024 net_2: efficientnet-b3, fc 512 net_3: efficientnet-b3, fc 1024 net_4: efficientnet-b4, fc 512 </code></pre> <p>From my results, training one more fc layer on top of ensemble net gave a 0.02 boost on final score compared to simple concatenating.<br> I used input images of size 256x256, which might really affect my score (compared to top solutions that use images of large size), and training only with GLDv2 clean. 
Also while I always use RAdam optimizer, I realize that almost all top teams use SGD, maybe I should switch to SGD in the future?</p> <h2>Result</h2> <pre><code>Best single-model score: 0.270 - 0.233 (efficientnet-b3, fc 512) Best ensemble-model score: 0.293 - 0.260 </code></pre> <p>I had learnt a lot from public solution of top teams after the end of the competition, and figured out somethings I could improve in the retrieval task:</p> <ul> <li>larger image size</li> <li>dealing with imbalanced dataset using class weights</li> <li>training on extended dataset: GLDv1, GLDv2 full.</li> </ul> <p>Although I struggled with tf model submission type at first, it was at last very lucky for me because I didn't have to care about the post-processing phase such as local features, rescoring and reranking, ransac, etc. </p> <h1><strong>Recognition competition</strong></h1> <p>Maybe I was lucky in the retrieval competition, finally I had to deal with everything when coming to the recognition challenge :).</p> <h2>Global feature extraction</h2> <p>I used entirely different set of backbones in comparison with those in retrieval task:</p> <pre><code>net_1: resnest200, fc 512 net_2: resnest200, fc 1024 net_3: resnest269, fc 512 net_4: resnest269, fc 1024 net_5: resnet152, fc 512 </code></pre> <p>From my results, resnest and resnet152 gave much better performances than efficientnet (even b7). I trained with 224x224 images of GLDv1 + GLDv2 clean and fine-tuned with 448x448 of GLDv2 clean, using arcface.</p> <h2>Local feature extraction</h2> <p>I trained delg, delf with <a href="https://github.com/tensorflow/models/tree/master/research/delf" target="_blank">tf delf</a> again, and further trained a PCA to diminish dimension of descriptors from 1024 -&gt; 128. However, I couldn’t beat the delg model in the host baseline kernel. My best self-trained delg is worse 0.01 than baseline delg.<br> So for my final score, I chose the delg baseline model to extract local features.</p> <h2>Rescoring and reranking</h2> <p>I defined score of reranking process as followed:</p> <pre><code>score = global_score + local_score + recognition_score </code></pre> <p>Definition of my own recognition score:</p> <pre><code>recognition_score = max_value * (1-k/topk) ** alpha - recognition score is based on the arcface head of my model - max_value: max value of recognition score - k (&lt; topk): rank of predicted label in sorted arcface head - topk: topk prediction of arcface, recognition_score &gt; 0 if predicted label is in topk else 0 - alpha &lt; 1: giving a boost to small k (top1, top2 &gt;&gt; top 99, top 100) - My final submission: (max_value, topk, alpha) = (2, 200, 0.35) </code></pre> <p>I didn’t try many values of hyperparameters of my recognition_score, but it gave a 0.006 boost compared to the scoring scheme from the baseline kernel.</p> <h2>Result</h2> <pre><code>Best single-model score: 0.5424 - 0.5233 (resnest200, fc 512) Best ensemble-model score: 0.5640 - 0.5354 </code></pre> <h2>Not worked things</h2> <p>Here are some ideas which did not work well:</p> <ul> <li>Filter distractors by arcface head (if label not in top 200 -&gt; non-landmark) it downgraded my score around 0.01</li> <li>Online fine-tuning with private training set, downgraded my score around 0.02</li> </ul> <p>That might be a little long for now :)). I will update this post if I remember something interesting. Thank you for reading my sharing!</p> <p>Thank you :)</p> <p>Thank you, but no, call me Khoa plz :))).</p>
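<p>The recognition score defined above is easy to state in code; this sketch uses the author's final hyperparameters (max_value, topk, alpha) = (2, 200, 0.35) and assumes rank 0 means the top-1 ArcFace prediction:</p> <pre><code>def recognition_score(rank_k, topk=200, max_value=2.0, alpha=0.35):
    # rank_k: position of the predicted label in the sorted ArcFace head (0 = top-1)
    if rank_k &gt;= topk:
        return 0.0
    return max_value * (1.0 - rank_k / topk) ** alpha

def rerank_score(global_score, local_score, rank_k):
    # score = global_score + local_score + recognition_score, as defined above
    return global_score + local_score + recognition_score(rank_k)

print(recognition_score(0), recognition_score(99))
</code></pre>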
Google Landmark Recognition 2020
[22nd Place + Code] Margin Scheduling – First time on Kaggle :D
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Colab Pro TPU v2-8 and private GCS buckets 😊 I spent more money on milk tea with boba than on training hardware and cloud computing 😂</p> <p>First, <strong>congratulations to the top teams!!!</strong> 🎉🎉🎉🎉🎉🎉🎉 What they achieved is incredible! </p> <p>This year's <a href="https://www.kaggle.com/c/landmark-retrieval-2020/" target="_blank">Landmark Retrieval</a> and <a href="https://www.kaggle.com/c/landmark-recognition-2020/" target="_blank">Recognition</a> challenges are the first ML competitions that I've ever participated in. Haven't touched anything ML-related for a year, so this was refreshing.</p> <p>Unfortunately, I joined the Retrieval competition late (8 days before the end) and got only around 2 weeks to participate in this one part-time. I've learned a lot during this competition. If there will be a landmark challenge next year, I'll give it a better fight!</p> <p>Also, I would like to thank <a href="https://www.kaggle.com/keetar" target="_blank">@keetar</a> for being an inspiration – what he achieved with just Colab Pro showed me that you don't need fancy hardware to compete, even in such a hard competition with a ton of data. His <a href="https://www.kaggle.com/c/landmark-retrieval-2020/discussion/176037" target="_blank">excellent writeup</a> was the sole reason why I decided to participate in the Recognition track.</p> <h2>Retrieval Track (39th, Silver medal)</h2> <ul> <li>Nothing special, just a weighted ensemble of <code>0.271</code> and <code>0.277</code> models and got silver medal lol lol lol 😆😆😆 I also tried some filtering using Places365 VGG model, but it doesn't work out.</li> </ul> <hr> <h2>Recognition Track (22nd, Silver medal) – margin scheduling</h2> <p>I realized that I've been spamming the Discussion section a lot, so I'll keep this short:</p> <ul> <li><p>I trained a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/applications/ResNet152V2" target="_blank">ResNet152V2</a> with <a href="https://arxiv.org/abs/1711.02512" target="_blank">GeM</a> (p=3) + <a href="https://arxiv.org/abs/1801.07698" target="_blank">ArcFace</a> and a <a href="https://github.com/qubvel/efficientnet" target="_blank">EffNetB6</a> with GAP + <a href="https://arxiv.org/abs/1801.07698" target="_blank">ArcFace</a> global descriptors on Cleaned GLDv2. Didn't have time to train to full convergence ☹️☹️☹️ </p></li> <li><p>For <a href="https://arxiv.org/abs/1801.07698" target="_blank">ArcFace</a>, I started with a small margin (i.e. <code>1e-3</code>), and gradually increased it at the end of every period of Cosine LR, up to <code>0.31</code>. It was harder for me to train with a fixed margin.</p></li> <li><p>Training hardware: Colab Pro only!</p></li> <li><p>I simply used a publicly available object detector <a href="https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1" target="_blank">FastRCNN+InceptionResNetV2</a> trained on <a href="https://ai.googleblog.com/2020/02/open-images-v6-now-featuring-localized.html" target="_blank">Open Images V4</a> for non-landmark removal. This gave a boost of around <strong>+0.005</strong> on pub. 
LB.</p></li> <li><p>Because I started late (~2 weeks before the end) and didn't had enough time to experiment around, I used <a href="https://arxiv.org/abs/1907.05550" target="_blank">Data Echoing</a> technique to speed up training <strong>by ~20%</strong>.</p></li> <li><p>Standard re-ranking: Retrieval step (using KNN) + Non-Landmark Removal (as described above) + Local Descriptors extraction (using baseline DELG model) + RANSAC.</p></li> <li><p>During inference, I used index embeddings only! I did not have time to generate embeddings for training samples.</p></li> </ul> <p>What I didn't have time to finish:</p> <ul> <li>Local descriptors. I only implemented the local descriptors <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/187344" target="_blank">36 hours before the end</a>.</li> <li>Multi-Gradient Descent (<a href="https://papers.nips.cc/paper/7334-multi-task-learning-as-multi-objective-optimization.pdf" target="_blank">MGDA-UB</a>). I figured this could allow the local descriptor to train alongside with the global ones without <code>stop_gradient</code>.</li> <li>Pre-computed embeddings. Ironically, <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/176697" target="_blank">I was the first who have asked if this is legal</a>.</li> </ul> <p>My training code is <a href="https://github.com/hav4ik/google-landmarks-2020" target="_blank">available on Github</a>. I've also published my <a href="https://www.kaggle.com/chankhavu/glob-effb6-e48-rn152-e40-det-frcnn-ss-orig" target="_blank">final submission</a>.</p> <hr> <h2>Fun fact: you could've gotten a Silver medal using just baselines!</h2> <ul> <li><p>Replacing the global descriptor from the <a href="https://www.kaggle.com/paulorzp/baseline-landmark-recognition-lb-0-48" target="_blank">Public Recognition Baseline Example</a> with the model from <a href="https://www.kaggle.com/nvnnghia/main-0806" target="_blank">Public Retrieval Baseline</a> will get you <code>0.4872/0.5081</code>, or a <strong>Bronze</strong> medal.</p></li> <li><p>Adding a simple non-landmark removal with object detectors will boost the above described baseline-based solution to a <strong>Silver</strong> medal.</p></li> </ul> <p>[spoint]: <a href="https://arxiv.org/abs/1712.07629" target="_blank">https://arxiv.org/abs/1712.07629</a><br> [sglue]: <a href="https://arxiv.org/abs/1911.11763" target="_blank">https://arxiv.org/abs/1911.11763</a><br> [fmatches]: <a href="https://local-features-tutorial.github.io/pdfs/Local_features_from_paper_to_practice.pdf" target="_blank">https://local-features-tutorial.github.io/pdfs/Local_features_from_paper_to_practice.pdf</a></p> <p>I was able to train global features. For local features, I didn't have time to train to convergence, so I used publically available local descriptor. Then, I used PyDegensac for feature matching.</p>
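<p>A small illustrative sketch of the margin schedule described above: start the ArcFace margin near 1e-3 and raise it at the end of each cosine-LR period up to 0.31. The number of periods and the linear interpolation are assumptions, not the author's exact schedule:</p> <pre><code>def margin_at_period(period, num_periods=8, m_start=1e-3, m_end=0.31):
    t = period / max(num_periods - 1, 1)
    return m_start + t * (m_end - m_start)   # margin fed to the ArcFace layer for this LR period

for p in range(8):
    print(p, round(margin_at_period(p), 4))
</code></pre>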
Google Landmark Recognition 2021
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>sub-center arcface</p> <p><a href="https://ibug.doc.ic.ac.uk/media/uploads/documents/eccv_1445.pdf" target="_blank">https://ibug.doc.ic.ac.uk/media/uploads/documents/eccv_1445.pdf</a></p> <p>labels of previous test sets were released after previous competitions</p> <p>First, let me thank kaggle staff and google team for organizing the 2021 landmark competitions. I am still overwhelmed and shocked by my result, which resulted in becoming #1 on the overall kaggle leaderboard, an ultimate goal for many kagglers. My solutions are similar in terms of models and training routine for the retrieval and the recognition task, but slightly different in details, especially in postprocessing, i.e. in re-ranking procedure. I try to give attribution to people, whose smart ideas I levered from last year's solutions. For recognition track I re-used several ideas from 2nd place solution of GLR2020 by <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a>, 3rd place solution of team All Data Are Ext ( <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> <a href="https://www.kaggle.com/alexanderliao" target="_blank">@alexanderliao</a> ) Please comment if I forget someone.</p> <h2>TLDR</h2> <p>My solution is an ensemble of DOLG models [1] and Hybrid-Swin-Transformers, which are both especially strong architectures for integrating local and global descriptors of a landmark into a single feature vector in a single-stage manner. I trained the models using a 3-step approach where I enlarged the training dataset and image size in each step. For postprocessing, which mainly handles non-landmark identification, I used the same method as in last year's 1st place recognition solution.</p> <h2>Preamble</h2> <p>Before going into details I want to address a few circumstances leading me to the solution. Firstly, compared to last year, this year's competition had a significantly more tense timeline, which forced me to skip following some ideas I had and concentrate on the most promising. That especially led to the choice to not consider spatial verification with local descriptors at all, as it has (at least in my opinion) a bad cost-benefit trade-off, especially in large scale image recognition, with limited kernel runtime. Secondly, since top teams of last year's iterations published their solutions in detail, accessible to everybody, the bar to outperform here was quite high. To be consistent in naming I use the following for subsets of gldv2</p> <p><strong>gldv2</strong> available under [2] (5 mio images, 200k classes)<br> <strong>gldv2c</strong> (cleaned subset of gldv2 / this years training data) (1.6 million training images, 81k classes)<br> <strong>gldv2x</strong>, uncleaned subset of gldv2 restricted to only have the same 81k classes as gldv2c (3.2 mio images)</p> <p>Furthermore I use <strong>GLRec</strong> for past google landmark recognition competitions and <strong>GLRet</strong> for retrieval. 
</p> <h2>Cross-validation</h2> <p>I used the same cross-validation scheme as last year, namely the 2019 labeled test set, and had a good correlation between Public LB and my local validation.</p> <h2>Training routine</h2> <p>My training routine follows the train dataset choice from the 2nd place GLRec 2020 solution, but with some improvements using ideas from <em>All Data Are Ext</em>.<br> As a first step I train models on a small image size (e.g. 224x224) on the clean gldv2c for around 10 epochs. Then I use a medium-large image size (e.g. 512x512) and train for a long time on the more noisy gldv2x (30-40 epochs). Finally I finetune on a large image size (e.g. 768x768), also on gldv2x, for a few epochs. I used the Adam optimizer and a cosine annealing schedule in every step. I used the same augmentation as <em>All Data Are Ext</em>, which proved to be very efficient:</p> <pre><code>cfg.train_aug = A.Compose([ A.HorizontalFlip(p=0.5), A.ImageCompression(quality_lower=99, quality_upper=100), A.ShiftScaleRotate(shift_limit=0.2, scale_limit=0.2, rotate_limit=10, border_mode=0, p=0.7), A.Resize(image_size, image_size), A.Cutout(max_h_size=int(image_size * 0.4), max_w_size=int(image_size * 0.4), num_holes=1, p=0.5), ]) cfg.val_aug = A.Compose([A.Resize(image_size, image_size),]) </code></pre> <h2>Model Architectures</h2> <p>For me this is the most fun part of every competition: designing and exploring model architectures. I like the idea of models solving multiple parts of the problem at hand in an end2end fashion. In landmark recognition it is crucial to leverage both global and local image information. Historically this was solved in a two-stage fashion, by training a global descriptor for preliminary recognition on the one hand and extracting local descriptors for post re-ranking using spatial verification on the other hand. Recently, attempts have been made to train local and global descriptors simultaneously using a single model (DELF/ DELG), but that still uses a 2nd-stage spatial verification. <br> I implemented two architectures that combine local and global descriptors into a single image embedding. Both use an EfficientNet encoder and a sub-center ArcFace head with dynamic margins, which was shown by team <em>All Data Are Ext</em> in 2020 to outperform classic ArcFace.</p> <h3>DOLG</h3> <p>The very recently released DOLG paper goes one step further, claiming to train local and global descriptors simultaneously while additionally fusing them within the model into a single descriptor. Unfortunately/fortunately it was so new that no code had been released so far, and I needed to implement deep orthogonal fusion by myself. Luckily the paper gives enough details and I was able to implement it quickly. The following shows the final architecture:</p> <p><img src="https://i.imgur.com/9FiSEet.png" alt="DOLG-EffiencentNet"></p> <h3>Hybrid Swin Transformer</h3> <p>Another architecture I explored was a Hybrid-Transformer: half a CNN encoder with a transformer ending. The idea behind it is that transformers are like graph neural nets on image patches, connecting distant local features with each other, a property CNNs are lacking but which is very important for landmark recognition/ retrieval. However, pretrained image transformers are mostly done on 224 or 384 image size, and I wanted to go bigger. Also the encoding of each patch is of a higher quality when using a sophisticated CNN for the patch embedding. 
Luckily the awesome timm repository gives you the tool set to glue together a CNN and an image transformer, and I ended up with an architecture as shown below. In particular I used a Swin transformer [3], as I think the moving-window approach captures more diverse local descriptors, and EfficientNet for encoding.</p> <p><img src="https://i.imgur.com/2iXNuBA.png" alt="Hybrid-Swin-Transformer"></p> <p>However, for training several problems arise. Firstly, the pretrained weights of the image transformer and the CNN are trained separately, hence they do not fit very well together, resulting in NaNs while training. Secondly, since image transformers require a fixed input size, it's a bit more difficult to follow a training routine with increasing image size in each step. I found that the following procedure fixes the issues and gives a good final model; especially step 3 is important to make the Hybrid-Swin-Transformer work:</p> <ol> <li>Train only the image transformer on 224x224 size</li> <li>Exchange the original patch embedding module with blocks 0, 1, 2 from the EffNet encoder</li> <li>Freeze the image transformer and sub-center ArcFace head and train for 1 epoch on 448x448, to let the EffNet encoder “adjust”</li> <li>Unfreeze the image transformer and head and train for 30-40 epochs</li> <li>Add block 3 from the EffNet encoder and finetune for a few epochs on 896x896</li> </ol> <p>For two of my models I used the “stride trick”, i.e. putting a stride of (1,1) in the first conv layer to effectively double the used resolution instead of doubling the actual image size, which I think is better for small original images.<br> I also retrained a few models from the 2nd and 3rd place solutions of the 2020 recognition competition following their provided git repositories and instructions, and evaluated possible ensembles on my local cross-validation. Models from the All Data Are Ext team had quite low correlation to mine and added a nice diversification benefit.</p> <p>In my final ensemble I had the following models:</p> <ul> <li>DOLG-EfficientNet b5 (768x768)</li> <li>DOLG-EfficientNet b6 (768x768)</li> <li>DOLG-EfficientNet b7 (448x448) stride 1</li> <li>Hybrid SwinBase224-EfficientNet b5 (448x448) stride 1</li> <li>Hybrid SwinBase224-EfficientNet b3 (896x896)</li> <li>Hybrid SwinBase384-EfficientNet b6 (384x384) stride 1</li> <li>All Data Ext 2020 EfficientNet b6 (512x512) </li> <li>All Data Ext 2020 EfficientNet b3 (768x768)</li> </ul> <h2>Post-processing</h2> <p>I literally used the same approach (developed by <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>) as in our last year's solution. You can find it here:</p> <p><a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/187821" target="_blank">https://www.kaggle.com/c/landmark-recognition-2020/discussion/187821</a></p> <p>For more details please refer to the paper on arXiv. Thank you for reading.</p> <p>[1] <a href="https://arxiv.org/abs/2108.02927" target="_blank">https://arxiv.org/abs/2108.02927</a> (DOLG)<br> [2] <a href="https://github.com/cvdfoundation/google-landmark" target="_blank">https://github.com/cvdfoundation/google-landmark</a><br> [3] <a href="https://arxiv.org/abs/2103.14030" target="_blank">https://arxiv.org/abs/2103.14030</a> (Swin)</p> <p>Code: <a href="https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place" target="_blank">https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place</a> (in progress) <br> Paper: submitted to arXiv, waiting for acceptance. 
In the meanwhile I uploaded to <a href="https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place/blob/main/GLR_2021_1st_place.pdf" target="_blank">https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place/blob/main/GLR_2021_1st_place.pdf</a></p> <p>It depends on several factors, especially if you change more than just image size. Also when going from GLDv2c to GLDV2x, its expected to have a higher loss, as data becomes more noisy. If you only change img size, loss should converge to old level pretty fast.</p> <p>To be honest, given the tighter timeline this year, there was no time for hyperparameter tuning. I mainly used hyperparameters from last year/ experience and focused on architectural changes. However, and I think thats important, I spend some time to implement local validation of retrieval score at the end of every epoch, so that I can asses model quality.</p> <p>Luckily, I have access to a DGX-1 (8xV100 32GB) at work. When I started kaggling I first used colab and then relatively quickly build my own Desktop-PC with 2-GPUs (GTX 1080Ti at that time) following some youtube/ medium suggestions for parts. </p> <p>I mainly used hyperparameters from last year and experience from other competitions. But in general its a good idea to use small image size to roughly verify different optimizers and schedule. Also, as you say, only using a portion of the dataset will further improve time-efficiency for those choices.</p> <p>yes I skipped the pooling layer, as it was not transparent to me, how/ what they pool. What do you mean with "resample the size to half of its original feature map's size"? </p> <pre><code>class MultiAtrousModule(nn.Module): def __init__(self, in_chans, out_chans, dilations): super(MultiAtrousModule, self).__init__() self.d6 = nn.Conv2d(in_chans, 512, kernel_size=3, dilation=dilations[0],padding='same') self.d12 = nn.Conv2d(in_chans, 512, kernel_size=3, dilation=dilations[1],padding='same') self.d18 = nn.Conv2d(in_chans, 512, kernel_size=3, dilation=dilations[2],padding='same') self.conv1 = nn.Conv2d(512 * 3, out_chans, kernel_size=1) self.relu = nn.ReLU() def forward(self,x): x6 = self.d6(x) x12 = self.d12(x) x18 = self.d18(x) x = torch.cat([x6,x12,x18],dim=1) x = self.conv1(x) x = self.relu(x) return x </code></pre> <p>taking input tensor of (bs,w,h,in_chans) this module outputs(bs,w,h,out_chans)</p> <p>its a typo, thanks for noting</p>
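<p>A quick usage sketch for the MultiAtrousModule shown above (PyTorch tensors are channels-first, so shapes are (bs, channels, h, w)); the channel sizes and dilations are illustrative, and padding='same' needs a recent PyTorch version:</p> <pre><code>import torch

mam = MultiAtrousModule(in_chans=1024, out_chans=1024, dilations=[6, 12, 18])
x = torch.randn(2, 1024, 32, 32)   # feature map fed to the local branch
print(mam(x).shape)                # torch.Size([2, 1024, 32, 32])
</code></pre>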
Google Landmark Recognition 2021
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>To answer your two questions: </p> <p>1) Yes, we uploaded a dict mapping for all 4M full train image ids -&gt; non-landmark score. During inference, we retrieved the top K private train images first, and for each image we found its score from the mapping and subtracted that from its cosine.</p> <p>2) Yes, all 4M against the 115K 2019 test non-landmark images. You can use a modern GPU library such as faiss to compute the distances, and return only the top 3 best matches instead of the full 100K. Then take the average of the top 3 cosines. It should take around 3 mins on GPU without memory issues, I guess. </p> <p>I only used two 8xV100 machines to train my models - B5 and B6. </p> <p>Thanks! It is similar to what the top 3 used last year, but instead of multiplying we used addition. And our top 1 classification adjustment is new. </p> <h2>3rd place solution</h2> <p>Congrats to the winners, and thanks a lot to Kaggle &amp; Google for organising this wonderful Landmark Recognition competition yearly. This was great teamwork for us, and we will briefly discuss our solution here. </p> <p>We are probably the team with the most submissions (200+) on the LB, the reason being that we were exploring different approaches quite independently in the beginning and only merged in the final week right before the deadline. We explored quite a few different ideas from the start, but our final ensemble pipeline differed a lot from what we tried in the beginning. </p> <p>Probably like most others, we wasted quite a few submissions before realising that 1) there are at least 20K private test images provided during inference, instead of 10K as in the test folder, and 2) there are landmark ids other than the cleaned 81313 version existing in the private train set. </p> <h3>Solution Overview</h3> <p>Our final models include two sets - <strong>retrieval models</strong> and <strong>classification models</strong>. Our solution can be summarised as follows: <br> <code>Concat &amp; Retrieval -&gt; classification logits adjustment -&gt; distractor score adjustment -&gt; top 1 classification score</code></p> <h3>Dive into details for each part</h3> <h4>1) Concat &amp; Retrieval</h4> <p>We have 7 retrieval models in our final pipeline: </p> <p>OrangeX: Swin-L (384), Swin-L (384, different training), EfficientNetV2 (800), EfficientNet B7 (800), CvT (384)<br> Weimin: EfficientNet B5 (768)</p> <p>We found that training models only on the 3M (81313 landmarks) dataset gives better or at least equal retrieval performance compared to finetuning on the full 4M dataset. Our single best retrieval model - Swin-L - gives 0.4 for its pure retrieval performance on the public LB. All the others like B7, V2 and B5 are a lot weaker in terms of individual retrieval performance (around 0.29 ~ 0.33). </p> <p>All our models give 512-d output features, and in our final pipeline we simply concat them to form a final embedding vector (512 x 6) for retrieval of the top K matched training images from the private train set. We used K = 7. </p> <p>We found that using dynamic margins (or adaptive margins) actually helps improve training, thanks to Qishen's great solution last year! 
</p> <h4>2) classification logits adjustment</h4> <p>We found that it is crucial to use classification logits to support predictions. From the beginning, I managed to split the full 4M dataset into training and validation folds, and once I realised the private train could contain un-cleaned landmark ids, the validation mean_gap could be reliably used as a measurement of classification performance, which is also consistent with the LB. </p> <p>It turned out that this was important. With a representative validation set, it is relatively easy to finetune models back on the full 200k landmark ids training set (4M images). In the end, we found the top 1 pure classification accuracy of my B5 is around 0.39+ on LB, whereas for Swin it is around 0.34. We decided to go with all 4 EfficientNets that I trained on my side (B5 512 &amp; 768, B6 512 &amp; 768) as our main classification models for stage 2) here as well as 4) later on. </p> <p>At this stage 2), we have the top 7 retrieved training images from stage 1). So for each of the 7 images, we look up its classification logits from all 4 chosen models (B5 512&amp;768, B6 512&amp;768), and simply add the averaged logit to its corresponding cosine score as an adjustment. </p> <h4>3) distractor score adjustment</h4> <p>Similar to what Dieter did last year, we use the 2019 test set's nonlandmark images as the index, and for each training image (4M), we found its top 3 matched scores and simply took their average as its distractor score. We generated the mapping from each of the 4M training image ids to its score in a dict, and uploaded it to Kaggle to use in submission. </p> <p>We then subtract the distractor score from each adjusted cosine score from stage 2). <br> In short, stages 1) - 3) can be viewed as: </p> <pre><code>Cosine score + classification logit - distractor score </code></pre> <h4>4) top 1 classification score</h4> <p>We found that using the top 1 classification logits from our best classification models (i.e. EffNet B5 and B6) gives another boost. </p> <p>At stage 3) above, we should have all top 7 indexed images with their scores adjusted, so we simply aggregate them by their corresponding landmark ids. We will, however, add another pair of <code>(top 1 classification landmark id, top 1 classification logit)</code> into the aggregation step. The classification logit used is just the raw top 1 logit, after averaging all classification models' 200k prediction logits. </p> <p>We can't penalise the classification pair as it is not tied to any image like the top 7 matches are, and therefore it does not have a distractor score. But this turns out not to be a problem, as we found that the top 1 classification score naturally acts as a penalty for non-landmarks. See the distribution of landmark &amp; non-landmark images' top1 scores below: </p> <p><a href="https://drive.google.com/file/d/127O__NgWGIW8E73XX2ZqrK4X9m9asZY_/view?usp=sharing" target="_blank">https://drive.google.com/file/d/127O__NgWGIW8E73XX2ZqrK4X9m9asZY_/view?usp=sharing</a></p> <p>Our final selection of landmark id and score for each test image is just the aggregation result from stages 1) - 4). </p> <p>Using stage 1) - stage 4) with a single B5 only (not fully trained yet) achieves 0.445/0.476 on LB. We achieved our final rankings using all the models mentioned above. </p>
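<p>To make the flow of stages 1) - 4) concrete, here is a minimal NumPy sketch of the per-query scoring. It assumes all embeddings are already L2-normalised and that the per-image classification logits and distractor scores have been pre-computed; the function and variable names are illustrative, not the team's actual code.</p> <pre><code>import numpy as np

def predict_landmark(query_emb, train_embs, train_ids, clf_logit_lookup,
                     distractor_score, top1_lid, top1_logit, k=7):
    """Hypothetical sketch of the stage 1)-4) scoring pipeline.

    query_emb        : (D,) L2-normalised concatenated query embedding
    train_embs       : (N, D) L2-normalised private-train embeddings
    train_ids        : (N,) landmark id of each train image
    clf_logit_lookup : (N,) averaged classification logit of each train
                       image for its own landmark id (pre-computed)
    distractor_score : (N,) mean top-3 cosine vs. 2019 non-landmarks
    top1_lid, top1_logit : top-1 prediction of the classification ensemble
    """
    # stage 1) retrieval: cosine similarity is a dot product of unit vectors
    cos = train_embs @ query_emb
    top = np.argsort(-cos)[:k]

    # stages 2) + 3): cosine + classification logit - distractor score
    adjusted = cos[top] + clf_logit_lookup[top] - distractor_score[top]

    # stage 4): aggregate per landmark id, then add the classification pair
    scores = {}
    for idx, s in zip(top, adjusted):
        lid = train_ids[idx]
        scores[lid] = scores.get(lid, 0.0) + s
    scores[top1_lid] = scores.get(top1_lid, 0.0) + top1_logit

    best = max(scores, key=scores.get)
    return best, scores[best]
</code></pre>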
Google Landmark Recognition 2021
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my longtime teammate <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> and new teammate <a href="https://www.kaggle.com/hongweizhang" target="_blank">@hongweizhang</a>. It was again great teamwork. We got 3rd place in the retrieval and 4th place in the recognition competition. I'm posting the solutions in both forums for easy future reference, even though there is large overlap.</p> <h2>Approach</h2> <p>Qishen and I were part of the 3rd place team (together with <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> <a href="https://www.kaggle.com/alexanderliao" target="_blank">@alexanderliao</a>) in recognition 2020. Last year we developed a strong model, i.e., sub-center ArcFace with dynamic margins. </p> <p>This year the two competitions ran simultaneously with tighter deadlines, so we largely re-used last year's pipeline, with newer architectures and better post-processing. We trained models the same way for both the retrieval and recognition competitions, but selected different model combinations and different post-processing for the final submissions.</p> <h2>Training recipe</h2> <p>We mostly follow our solution from last year (see <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/187757" target="_blank">here</a> for details). The recipes are</p> <ul> <li>sub-center ArcFace with dynamic margins</li> <li>progressive training with increasing image sizes</li> <li>the indispensable <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a> library</li> <li>cosine learning schedule with Adam/AdamW optimizer</li> <li>multi-GPU training with DistributedDataParallel</li> </ul> <h2>Model choices</h2> <p>There are quite a few new SOTA image models published in the past year, notably EfficientNet v2, NFNet, ViT, and Swin transformers. We trained all these new models using last year's recipe. For transformers, we used a 384 image size. For CNN models, we used progressively larger image sizes (256 -&gt; 512 -&gt; 640/768). </p> <p>We found that the CNNs and transformers perform equally well on local GAP scores (recognition metric), but transformers have significantly higher local mAP (retrieval metric) even with the smaller 384 image size. We think this is because transformers are patch-based and can capture more local features and their interactions, which are more important for retrieval tasks.</p> <p>Besides the new models, we also reused some of our last year's models by finetuning, mostly EfficientNets.</p> <h2>The ensemble</h2> <p>Our final ensemble consists of 11 models: 3 transformers, 3 new CNNs and 5 old CNNs. Their specifications and local scores are below. Old CNNs' CV scores are not shown because they are leaky (last year's fold splits were different).</p> <p>Our best retrieval ensemble has 4 fewer CNNs and only 7 models in total (3 transformers + 4 CNNs), because the additional CNNs hurt the retrieval score. 
This is because transformers are more important for retrieval, and adding more CNNs would reduce the transformers' weights.</p> <table> <thead> <tr> <th>Model</th> <th>Image size</th> <th>Total epochs</th> <th>Finetune 2020</th> <th>cv GAP (recognition)</th> <th>cv mAP@100 (retrieval)</th> </tr> </thead> <tbody> <tr> <td>Swin base</td> <td>384</td> <td>60</td> <td></td> <td>0.7049</td> <td>0.4442</td> </tr> <tr> <td>Swin large</td> <td>384</td> <td>60</td> <td></td> <td>0.6775</td> <td>0.5161</td> </tr> <tr> <td>ViT large</td> <td>384</td> <td>50</td> <td></td> <td>0.6589</td> <td>0.5633</td> </tr> <tr> <td>ECA NFNet L2</td> <td>640</td> <td>38</td> <td></td> <td>0.7053</td> <td>0.3600</td> </tr> <tr> <td>EfficientNet v2m</td> <td>640</td> <td>45</td> <td></td> <td>0.7136</td> <td>0.3811</td> </tr> <tr> <td>EfficientNet v2l</td> <td>512</td> <td>30</td> <td></td> <td>0.7146</td> <td>0.4117</td> </tr> <tr> <td>EfficientNet B4</td> <td>768</td> <td>10</td> <td>✓</td> <td></td> <td></td> </tr> <tr> <td>EfficientNet B5</td> <td>768</td> <td>20</td> <td>✓</td> <td></td> <td></td> </tr> <tr> <td>EfficientNet B6</td> <td>512</td> <td>20</td> <td>✓</td> <td></td> <td></td> </tr> <tr> <td>EfficientNet B7</td> <td>672</td> <td>20</td> <td>✓</td> <td></td> <td></td> </tr> <tr> <td>ReXNet 2.0</td> <td>768</td> <td>10</td> <td>✓</td> <td></td> <td></td> </tr> </tbody> </table> <h2>Post-processing</h2> <p>Post-processing was shown to be very effective in last year's top solutions. We incorporated ideas from the top 3 solutions:</p> <ol> <li>Combine cosine similarity and classification probabilities (our <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/187757" target="_blank">3rd place solution</a>)</li> <li>Penalize similarity with non-landmark images (<a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/187821" target="_blank">1st place solution</a>)</li> <li>Set similarity=0 if top5 similarities with non-landmarks &gt; 0.5 (<a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/188299" target="_blank">2nd place solution</a>)</li> <li>Penalize CosSim(query, index) by the index class's count in the train set (ours, new)</li> </ol> <p>Thanks. For Swin Large at image size = 384, I used 8 V100 GPUs (32G) with batch size = 9 per GPU. It takes about 8.1 hours/epoch, or about 20 days for 60 epochs. </p> <p>The training schedule is a cosine schedule (see our last <a href="https://github.com/haqishen/Google-Landmark-Recognition-2020-3rd-Place-Solution" target="_blank">solution</a>) with initial learning rate = 2.5e6</p> <p>The optimizer is AdamW.</p> <p>On a single RTX 3090, you probably want to use Swin base, and train it on cGLD2 only, for fewer epochs. Just for fun though, it would take you only 20*(32*8/24) = 213 days to train 60 epochs on all data. If you start now, you can definitely make it for next year's landmark competitions 😉</p>
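<p>For readers who have not seen the training recipe before: ArcFace adds an angular margin m to the target-class logit, cos(θ_y + m), before scaling and cross-entropy, and "dynamic margins" make m a per-class value that is larger for rare classes. The PyTorch sketch below illustrates that idea; the power-law mapping from class counts to margins and all hyper-parameters are assumptions for illustration, and the exact parameterisation (plus the sub-center extension, which keeps several weight vectors per class and takes the max cosine) is described in the linked 2020 solution and repository.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMarginArcFace(nn.Module):
    """ArcFace head with per-class margins derived from class frequency.

    The count-to-margin mapping is an illustrative assumption, not the
    authors' exact formula."""
    def __init__(self, in_dim, n_classes, class_counts, s=30.0,
                 m_min=0.05, m_max=0.45):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(n_classes, in_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s = s
        # rare classes get a larger margin, frequent classes a smaller one
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        inv = counts.clamp(min=1).pow(-0.25)
        inv = (inv - inv.min()) / (inv.max() - inv.min() + 1e-8)
        self.register_buffer("margins", m_min + (m_max - m_min) * inv)

    def forward(self, features, labels):
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        m = self.margins[labels].unsqueeze(1)              # (B, 1)
        one_hot = F.one_hot(labels, cos.size(1)).float()
        logits = torch.cos(theta + one_hot * m) * self.s   # margin on target class only
        return F.cross_entropy(logits, labels)
</code></pre>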
Google Landmark Recognition 2021
5th Place Solution Sharing: A Learnable Pooling Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/tmxxuan" target="_blank">@tmxxuan</a> Yes, the 100k images in the private training set should come from the 200k landmarks instead of the 81313 ones. My notebook threw an exception when I applied a landmark_id mask covering only the 81313 classes. While debugging the issue, I concluded that "the total public training set" actually means the V2Full. The description is unfortunately not very clear. </p> <p><a href="https://www.kaggle.com/keremt" target="_blank">@keremt</a> </p> <p>(not sure why the reply is not working)</p> <p>I mainly used the same inference process as last year's winning solution: find the 3 nearest neighbors from the filtered V2Full pretrained embeddings and penalize by the similarity to nonlandmarks (A_ij - B_j as mentioned in the post). Finally, downrank the nonlandmarks with scores &gt; 0.3. Nothing really new and interesting. I am sure you can learn more from the other top solutions about the postprocessing part.</p> <p><a href="https://www.kaggle.com/apretrue" target="_blank">@apretrue</a> the test set is not a subset of V2C but of V2Full. I encountered the same problem at the beginning (you can refer to my previous post in this competition).</p> <p>To be more precise, using a subset of the V2Full during inference is important. During inference, I upload the pre-calculated embeddings for V2Full and simply filter out the 'invalid' landmarks which didn't appear in the private index (training) set. </p> <p>Congratulations to the winners and thanks Google and Kaggle for hosting such an interesting competition! </p> <p>This is my first time participating in the landmark recognition/retrieval competitions. After reading the previous winning solutions, I also felt anxious about the limited computation resources I had: a single 3090 at the very beginning. So I decided to find a better (affordable) model architecture for the task, instead of following the common network+gem+arc_face approach.</p> <p>Based on the observation that larger input images would lead to better accuracy, I tried to use the intermediate feature maps from the backbones. With my experience in Youtube8M video classification, I found that NeXtVLAD (<a href="https://arxiv.org/pdf/1811.05014.pdf" target="_blank">https://arxiv.org/pdf/1811.05014.pdf</a>) performs pretty well at aggregating and merging features from these different feature maps. </p> <h3>Model Architecture</h3> <p><img src="https://i.postimg.cc/SRwWXn7s/swin-nextvlad-drawio-1.png"></p> <p>As you can see, each feature map is aggregated into one high-dimensional 1D feature with NeXtVLAD, then those features are concatenated and fed into a simple linear projection layer with output_dim=512. In my implementation, I also add more non-linearity with another SE-gating layer (which was actually found to be not really helpful). 
Finally, we use the ArcFace product and dynamic margins for the loss.</p> <p>Training setup:<br> V2X: 3.2M images with 81313 landmarks<br> V2C: 1.6M images with 81313 landmarks<br> V2Full: 4.1M images with 203094 landmarks + nonlandmark</p> <p>1) Fix the weights in the backbone and only train the learnable pooling layers with V2X for 10 epochs, which surprisingly gives very high local validation accuracy (around 90%). [AdamW, 0.001 initial, 0.8 decay]</p> <p>2) Finetune the whole model with V2C for 10 epochs (around 94% local validation accuracy). [AdamW, 0.0001 initial, 0.9 decay]</p> <p>3) Change only the arc_face_product layer to 203095 classes (using the nonlandmarks from the 2019 test set is really helpful). Again, fix the weights of the backbone and only tune the learnable pooling layers with V2Full (around 95-96% accuracy after 10 epochs in local validation). [AdamW, 0.001 initial, 0.8 decay]</p> <p>The advantages of the learnable pooling layers are: </p> <ol> <li>The training is super fast as you only need to train the learnable pooling layers at step 1 and step 3.</li> <li>I use an image input size of 256x256 at step 1 and step 2, then increase the image size only at step 3, when the backbone is fixed (so I never need to train an efficientnet with large input images).</li> </ol> <p>For instance, training a swin-base-224 only takes around 3 days using a single 3090 GPU with this setup. </p> <p>I achieved top10 around early Sep with just one 3090. Then my teammate <a href="https://www.kaggle.com/marbury" target="_blank">@marbury</a> provided me with another 2xV100 so that I could just scale to different backbones. My final solution is an ensemble of</p> <ul> <li>swin-base-224</li> <li>swin-base-384</li> <li>swin-large-224</li> <li>resnest101 (512 input at step 3)</li> <li>efficientv2-m (512 input at step 3)</li> </ul> <p>The final LB score is highly correlated with the accuracy of the backbone models on ImageNet, which I suppose shows that this learnable pooling approach transfers well. </p> <p>It would be interesting to see what this architecture can achieve with more complex backbones, including efficientb5-b7, efficientv2-large and swin-large-384. There is still plenty of room to improve, and hopefully this inspires more ideas &amp; work using the learnable pooling module for next year's competition. Will share my code once it is cleaned up.</p> <h3>Other Insights</h3> <ul> <li>Using the whole V2Full as the index set during inference is important to stay in the top 10.</li> </ul>
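<p>To give a sense of the shape of the learnable pooling head described above, here is a compact PyTorch sketch. It uses a simplified NetVLAD-style pooling as a stand-in for NeXtVLAD (the real module adds channel expansion, grouping and an attention gate; see the linked paper), and the layer sizes and class names are illustrative rather than taken from the author's code.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleVLADPool(nn.Module):
    """Simplified NetVLAD-style pooling (a stand-in for NeXtVLAD).
    Turns a (B, C, H, W) feature map into one (B, K*C) descriptor."""
    def __init__(self, channels, clusters=16):
        super().__init__()
        self.assign = nn.Conv2d(channels, clusters, kernel_size=1)
        self.centroids = nn.Parameter(torch.randn(clusters, channels) * 0.01)

    def forward(self, fmap):
        a = self.assign(fmap).flatten(2).softmax(dim=1)        # (B, K, HW)
        x = fmap.flatten(2)                                    # (B, C, HW)
        # sum of soft-assigned residuals to each cluster centroid
        vlad = torch.bmm(a, x.transpose(1, 2))                 # (B, K, C)
        vlad = vlad - a.sum(-1).unsqueeze(-1) * self.centroids
        vlad = F.normalize(vlad, dim=-1).flatten(1)
        return F.normalize(vlad, dim=-1)

class MultiLevelPoolingHead(nn.Module):
    """Aggregate several intermediate feature maps of a (frozen) backbone
    and project the concatenation to a 512-d embedding."""
    def __init__(self, channel_list, clusters=16, out_dim=512):
        super().__init__()
        self.pools = nn.ModuleList([SimpleVLADPool(c, clusters) for c in channel_list])
        self.proj = nn.Linear(sum(clusters * c for c in channel_list), out_dim)

    def forward(self, feature_maps):
        pooled = [p(f) for p, f in zip(self.pools, feature_maps)]
        return self.proj(torch.cat(pooled, dim=1))
</code></pre>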
Google Landmark Recognition 2021
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Again, thanks to Kaggle and the hosts for hosting an interesting competition. Congratulations to all the winners. Special thanks to all my teammates <a href="https://www.kaggle.com/takuok" target="_blank">@takuok</a> and <a href="https://www.kaggle.com/tascj0" target="_blank">@tascj0</a>. Fortunately, we got a gold medal in the Recognition competition, too. We show the details of our models in the <a href="https://www.kaggle.com/c/landmark-retrieval-2021/discussion/276632" target="_blank">Retrieval solution post</a>.<br> Here, we show the parts unique to the Recognition challenge.</p> <h1>Tips in Recognition</h1> <p>We used the same two strategies as last year's 1st place team. While submitting, we used pre-computed embeddings of the 4.1M train data and penalized those embeddings that are similar to the non-landmark images. These methods are again very robust this year and were critical for the gold medal.<br> This approach boosted the score for a single model from <strong>public/private 0.37843/0.36045</strong> to <strong>0.45724/0.43252</strong>.<br> The final score is an <strong>ensemble of beit, swin, b4, b6, xcit, v2m, b5</strong>. See the details about the models in the <a href="https://www.kaggle.com/c/landmark-retrieval-2021/discussion/276632" target="_blank">Retrieval post</a>.</p> <h1>Acknowledgements</h1> <p>takuoko is a member of the Z by HP &amp; NVIDIA Data Science Global Ambassadors.<br> Special thanks to Z by HP &amp; NVIDIA for sponsoring me a Z8G4 Workstation with dual RTX6000 GPUs and a ZBook with an RTX5000 GPU.<br> This competition has a big dataset, so I tried PyTorch's DDP parallel training on my dual RTX6000 GPUs and it helped a lot.</p>
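<p>As a rough illustration of the non-landmark penalty described above (the details below are assumptions based on this description and last year's 1st place write-up, not the team's code): for every train image we pre-compute the mean of its top-n cosine similarities to the non-landmark set and subtract it from its similarity to the query, so train images that also match distractors are down-weighted. In practice the 4M x 115K similarity matrix would be computed in chunks or on GPU rather than in one shot.</p> <pre><code>import numpy as np

def penalized_similarity(query, train_embs, nonlm_embs, top_n=5):
    """Sketch of the non-landmark penalty; all embeddings L2-normalised."""
    # similarity of every train image to every known non-landmark image
    sims_nonlm = train_embs @ nonlm_embs.T                      # (N_train, N_nonlm)
    # mean of the top_n highest similarities = per-image distractor score
    penalty = np.sort(sims_nonlm, axis=1)[:, -top_n:].mean(axis=1)
    # query similarity minus the distractor score
    return train_embs @ query - penalty                         # (N_train,)
</code></pre>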
Google Landmark Recognition Challenge
My single-(ok, 1.5-)model solution and source code [19th place]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Ah, I forgot to mention that cropping at prediction was quite effective: I used 22 crops per image, which increased the GAP by about 10% (and was much cheaper than training additional models). </p> <p>My solution consists of two stages. The first stage is a finetuned Xception model with a sigmoid activation layer, trained on landmark and non-landmark images to more naturally reject non-landmark images. I used a generalized average pooling layer, which has been shown to improve landmark recognition performance (<a href="https://arxiv.org/pdf/1711.02512.pdf">https://arxiv.org/pdf/1711.02512.pdf</a>). Also, I implemented a slightly reweighted loss, but I am not sure if this makes a difference (at least it doesn't seem to hurt, see the explanations in the repo for details).</p> <p>This NN classifier alone resulted in a (public/private) GAP of 0.145/0.149. (On an easy dev set comparable to the training data without non-landmark images, this model has a GAP of about 0.96.)</p> <p>As a second step I use Google DELF features (<a href="https://arxiv.org/abs/1612.06321">https://arxiv.org/abs/1612.06321</a>) to rescore every image by comparing it to (up to) 32 landmark images of the proposed category (max num. of inliers in one image). </p> <p>The total confidence for the prediction is then computed by a weighted average of the NN and DELF confidences, so that the NN and the DELF confidences each contribute roughly half for images with typical DELF-based scores (whereas very high DELF scores dominate the average).</p> <p>The full model led to a GAP of (public/private) 0.211/0.192.</p> <p>I uploaded the source code (as jupyter-notebooks) on github: <a href="https://github.com/jandaldrop/landmark-recognition-challenge/">https://github.com/jandaldrop/landmark-recognition-challenge/</a> and I'd be happy to receive some comments ;)</p> <p>Hi soywu, I created a google cloud account and used the free credits. The NN was trained on an NVIDIA K80 (12GB); for the geometric verification of DELF features I used 12 CPU cores (i.e. I ran the script in the git repository 4 times in parallel). Unfortunately, there are no credits left for another challenge of this kind ... ;)</p> <p>Hi Eric, yes, I used quite a bit of image augmentation: random shifts, shearing, zoom in and out, channel shift, and I also reflected images both at training and prediction. I also used random crops from the high resolution images (which thus contain additional information).</p> <p>No, I did not take care of them. I had once tried to also predict on rotated images (and use the max conf.), but that decreased the overall performance (most likely due to the large amount of non-landmark images).</p> <p>Without the delf features the NN resulted in a score of (public/private) 0.145/0.149; with the delf features I obtained 0.211/0.192 (as written above). So it helped quite a bit, even though the NN was pretty strong with all the little tweaks. The overfitting with the delf features is likely due to the fact that I had tested several merge procedures of the two scores on the public test set.</p> <p>btw. 
I would have really liked to train 2 or 3 more models of this kind and ensemble average, but time and resources were limited ;) (I had also tried to ensemble with earlier versions during training, but the above described single-model predictions gave the overall highest score).</p> <p>Anyway, thank you for your interest :) and I hope I could clarify a few things by this post. You can also have a look at the jupyter-notebook in the github repository, there are some (markdown) explanations (with a bit more detail than the above post).</p>
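<p>The generalized average pooling (GeM) layer mentioned above replaces global average pooling with (mean over spatial positions of x^p)^(1/p), where the exponent p can be learned and interpolates between average pooling (p = 1) and max pooling (large p). The author's implementation is in Keras (see the linked repository); the snippet below is only a generic PyTorch sketch of the operation, with an assumed initial p = 3.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-mean pooling: a learnable exponent p interpolates
    between average pooling (p=1) and max pooling (large p)."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):                      # x: (B, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.adaptive_avg_pool2d(x, 1)        # mean over H and W
        return x.pow(1.0 / self.p).flatten(1)  # (B, C)
</code></pre>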
Google Landmark Recognition Challenge
Our 10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, thank you! Images were resized to 224x224, and the embedding of our CNN had a dimension of 1024. </p> <p>At training time we just added an additional non-landmark class, and ordinary softmax worked quite well for this class.</p> <blockquote> <p>Am I correct?</p> </blockquote> <p>Yes, you are!</p> <p>Hi! We used 4 1080 Ti GPUs.</p> <p>Our solution consists of 4 main steps:</p> <p><strong>1. CNN</strong> We experimented with several types of network architectures: ResNet, Wide ResNet, Inception-v3, DenseNet. We tried both models pretrained on ImageNet and models trained from scratch. The best result on validation was shown by ResNet-101 pretrained on ImageNet. </p> <p>As training-time augmentation we used random resized crops, color jittering, and horizontal flipping. It took us some time to find the parameters for random resizing because they are quite important. CNNs were trained on 224x224 crops. </p> <p><strong>2. Loss</strong> We attacked the landmark recognition problem from a metric learning point of view. Therefore, the most important part of our algorithm is a loss function suitable for metric learning. After some experiments we chose the loss from <a href="https://arxiv.org/abs/1801.07698">ArcFace</a> as the most appropriate for our approach. We trained the networks from the previous section with a combination of this loss and softmax.</p> <p><strong>3. Inference</strong> For inference we calculated a centroid for each landmark class on embeddings from the last layer of the CNN by the following steps:</p> <ol> <li>Pick 100 random elements of this class from the training set</li> <li>Calculate the mean embedding vector of the picked elements </li> <li>Normalize the mean vector </li> </ol> <p>Additionally, at the inference step we used test-time augmentation by averaging ten crops. TTA gave us an improvement of about 0.1 GAP.</p> <p>At the inference step the classification decision is made on the basis of the following steps:</p> <ol> <li>The input image is forwarded through the network</li> <li>The resulting embedding is compared by cosine distance with all class centroids</li> <li>The class of the nearest centroid by this distance is taken as the result</li> <li>If the not-a-landmark class softmax value &gt; 0.25 then we consider the input as not a landmark </li> </ol> <p><strong>4. Ensembles</strong> We trained this pipeline for several types of networks listed in section 1. Additionally, the training set was split into 5 folds, each model was trained across the folds, and the result was obtained by averaging results over these 5 folds. </p> <p>For ensembling all these models we used weighted voting across the models' results. Weights were picked according to the success of each single model on the LB.</p> <p>Unfortunately the ensemble did not improve our best single model solution by as much as we expected. Our best ResNet-101 model achieves GAP 0.205 (private) while our final result with the ensemble is 0.228 (private). </p> <p>We also experimented with kNN on embeddings of our best model, but it did not significantly improve the score.</p>
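<p>A minimal NumPy sketch of the centroid construction and nearest-centroid decision described above (function names and data layout are illustrative; the 0.25 threshold on the non-landmark softmax is the one quoted in the write-up, and the non-landmark probability is assumed to come from the CNN's softmax head):</p> <pre><code>import numpy as np

def build_centroids(embeddings_by_class, sample_size=100, rng=None):
    """One centroid per landmark class: mean of up to 100 random
    embeddings of that class, then L2-normalised (steps 1-3)."""
    if rng is None:
        rng = np.random.default_rng(0)
    centroids = {}
    for cls, embs in embeddings_by_class.items():
        idx = rng.choice(len(embs), size=min(sample_size, len(embs)), replace=False)
        c = embs[idx].mean(axis=0)
        centroids[cls] = c / np.linalg.norm(c)
    return centroids

def classify(embedding, centroids, non_landmark_prob, threshold=0.25):
    """Nearest centroid by cosine similarity; reject the image if the
    softmax probability of the extra non-landmark class is too high."""
    if non_landmark_prob &gt; threshold:
        return None, 0.0
    e = embedding / np.linalg.norm(embedding)
    cls_ids = list(centroids)
    sims = np.array([centroids[c] @ e for c in cls_ids])
    best = int(np.argmax(sims))
    return cls_ids[best], float(sims[best])
</code></pre>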
Google Landmark Recognition Challenge
Our solution [4th place]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It is a common practice to use cosine similarity in this case. Another approach is to use Euclidean distance; however, it usually works worse. </p> <p><strong>Step 1: Training a usual CNN</strong></p> <p>We just trained a lot of different models: ResNet34, ResNet50, ResNet101, ResNet152, ResNeXt101, InceptionResNetV2, DenseNet201.</p> <p>Some of them were trained with 15k classes. Random 224x224 crops, resizes, scales, shifts, rotates, and flips were used for augmentation.</p> <p>Pavel Pleskov first used 128x128 crops and then increased the crop size up to 256x256. CNNs were trained using fast.ai until validation accuracy reached 0.975. He did not use classes with fewer than 10 samples (the number of remaining classes was eight thousand). </p> <p>However, in this competition it is not enough to train a lot of CNNs and merge their predictions. There are a lot of challenges:</p> <ul> <li>Classes with one or two samples in the train dataset</li> <li>Images from the Google Landmark Retrieval Challenge in the test dataset</li> <li>Non-landmark images in the test dataset</li> </ul> <p><strong>Step 2: Recognizing landmarks from retrieval challenge</strong></p> <p>We used a separate CNN classifier (based on ResNet50) to separate images with landmarks from this competition and from the retrieval challenge. A simple heuristic improved our score: if is_from_another_competition_probability &gt; 0.95: confidence *= 0.1;</p> <p><strong>Step 3: Recognizing images with no landmark</strong></p> <p>Another separate binary classifier (based on ResNet50) was trained on Open Images &amp; train data. The heuristic was the same as in step 2.</p> <p><strong>Step 4: Few-shot learning</strong></p> <p>We found a simple technique to improve the score of any of our neural networks (see the sketch after this write-up):</p> <ol> <li>Extract features from a hidden layer</li> <li>For each image from the test set, find the K closest images from the train set (K=5)</li> <li>For each class_id we computed: scores[class_id] = sum(cos(query_image, index) for index in K_closest_images)</li> <li>For each class_id we normalized its score: scores[class_id] /= min(K, number of samples in train dataset with class=class_id)</li> <li>label = argmax(scores), confidence = scores[label]</li> </ol> <p><strong>Step 5: kNN with features from local crops</strong></p> <p>We extracted 100 augmented local crops from each image. Crops with no landmarks were removed (using the CNN from step 3). Then, we used the technique described in step 4 with K=20 independently for each local crop from the query image (an image from the test dataset). After that procedure, for each image from the test set we got a set of local crops (up to 100) and a list of pairs (label, confidence) for each crop. For each class_id we computed its score as the sum of (5 - class rank) over crops. Class rank means the position of this class_id in the sort order of pairs (label, confidence). If at least one class rank was 5 or more, then we reset the entire sum. 
Thus, based on this technique we were able to choose approximately one thousand examples with high confidence in our prediction.</p> <p><strong>Merging:</strong></p> <p>One of the biggest challenges in this competition was how to merge predictions from different models and how to set the confidence score correctly. We used several heuristics:</p> <ol> <li>Compute the confidence score for each label using predictions from steps 1-4 as follows: score[label] = label_count / models_count + sum(label_confidence for each model) / models_count. Here label_count is the number of models where the prediction with max confidence is equal to the label.</li> <li>We also used each prediction from step 5 with confidence = 1 + confidence_from_step_5 / 100</li> </ol> <p><strong>Slight leakage:</strong></p> <p>The test set has a slight leak: many of the images have a name starting with "2016-" or "2017-", for example 2017-02-23.jpg. There are no such names in the train dataset, so our assumption was that these images contain no landmark. We tested our hypothesis but the score became worse. Then, we found that a simple heuristic could slightly improve our score: if confidence &lt; 1 and name is leaky: confidence *= 0.1</p> <p>Yeah, it helps as well as on the public LB</p> <p>We used approximately 8 GPUs, and 1-4 for training each model</p> <p>Not too much in the final ensemble. About 0.003 on both public and private LB.</p>
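<p>Here is a minimal NumPy sketch of the few-shot re-scoring described in step 4, assuming features are L2-normalised so cosine similarity reduces to a dot product; the function name and arguments are illustrative.</p> <pre><code>import numpy as np
from collections import defaultdict

def few_shot_predict(query_feat, train_feats, train_labels, class_counts, k=5):
    """Few-shot kNN re-scoring (step 4): sum of cosine similarities per
    class among the K nearest train images, normalised by class size."""
    sims = train_feats @ query_feat                # (N,)
    top = np.argsort(-sims)[:k]                    # K closest train images
    scores = defaultdict(float)
    for i in top:
        scores[train_labels[i]] += sims[i]
    for cls in scores:
        scores[cls] /= min(k, class_counts[cls])   # normalise by class size
    label = max(scores, key=scores.get)
    return label, scores[label]
</code></pre>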
Google Landmark Recognition Challenge
Our solution and source code (0.22 Public 0.17 Private)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Overall approach</p> <p><strong>CNN Training</strong></p> <p>One CNN was based on Xception implemented in Keras with an extra layer with 16384 features followed by a Hadamard classifier (to avoid huge FC weights and push all expression power to the CNN). </p> <pre><code>_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= image (InputLayer) (None, 256, 256, 3) 0 _________________________________________________________________ axception (Model) (None, 16384) 42938920 _________________________________________________________________ logits (HadamardClassifier) [(None, 14951), (None, 14 14952 _________________________________________________________________ predictions (Activation) (None, 14951) 0 ================================================================= Total params: 42,953,872 Trainable params: 42,870,672 Non-trainable params: 83,200 </code></pre> <p>The training regime was as follows:</p> <p>Trained with all 14591 classes duplicating images from landmarks with a single image. Augmentation kicks in after the net has seen an image at least once Training started at small crops and gradually increased crop size up to 256x256.</p> <p>This net was trained for 16 epochs until val_acc was <code>0.923113</code>. </p> <p>Once net was trained, we froze weights and added a distractor detection head, which took the the unscaled 14591 logits of the previous net as inputs as well as a VGG16 net trained on Places1365Hybrid followed by a few FC layers and a sigmoid output:</p> <pre><code>__________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== image (InputLayer) (None, 256, 256, 3) 0 __________________________________________________________________________________________________ axception (Model) (None, 16384) 42938920 image[0][0] __________________________________________________________________________________________________ logits (HadamardClassifier) [(None, 14951), (Non 14952 axception[1][0] __________________________________________________________________________________________________ vgg16 (Model) (None, 512) 14714688 image[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 15463) 0 logits[0][1] vgg16[1][0] __________________________________________________________________________________________________ d_fc1024 (Dense) (None, 1024) 15835136 concatenate_1[0][0] __________________________________________________________________________________________________ bn_m1024 (BatchNormalization) (None, 1024) 4096 d_fc1024[0][0] __________________________________________________________________________________________________ act_mrelu1024 (Activation) (None, 1024) 0 bn_m1024[0][0] __________________________________________________________________________________________________ d_fc512 (Dense) (None, 512) 524800 
act_mrelu1024[0][0] __________________________________________________________________________________________________ bn_m512 (BatchNormalization) (None, 512) 2048 d_fc512[0][0] __________________________________________________________________________________________________ act_mrelu512 (Activation) (None, 512) 0 bn_m512[0][0] __________________________________________________________________________________________________ d_fc256 (Dense) (None, 256) 131328 act_mrelu512[0][0] __________________________________________________________________________________________________ bn_m256 (BatchNormalization) (None, 256) 1024 d_fc256[0][0] __________________________________________________________________________________________________ act_mrelu256 (Activation) (None, 256) 0 bn_m256[0][0] __________________________________________________________________________________________________ d_fc128 (Dense) (None, 128) 32896 act_mrelu256[0][0] __________________________________________________________________________________________________ bn_m128 (BatchNormalization) (None, 128) 512 d_fc128[0][0] __________________________________________________________________________________________________ act_mrelu128 (Activation) (None, 128) 0 bn_m128[0][0] __________________________________________________________________________________________________ predictions (Activation) (None, 14951) 0 logits[0][0] __________________________________________________________________________________________________ distractors (Dense) (None, 1) 129 act_mrelu128[0][0] ================================================================================================== Total params: 74,200,529 Trainable params: 16,528,129 Non-trainable params: 57,672,400 __________________________________________________________________________________________________ </code></pre> <p>This was trained on the recognition train data and a) yelp restaurant photo classification, b) open images and c) landmark retrieval index as distractors. It achieved 0.88 validation accuracy on distractor detection.</p> <p>The code for the above part is all in <code>train.py</code> </p> <p>The other two CNNs predictions were based on fine-tuned SE-resnet 50, the out of distribution images are all set to a label 15000. The validation accuracy is about 94% while the public leaderboard is only about around 0.7 for probabilities from a single model. Simple additions of the model with different input resolution can achieve about 0.11-0.12(public leaderboard). </p> <p><strong>Ensembling</strong></p> <p>We ensembled by voting the predictions for the 3 nets above. Each of them yielded 0.125 LB, 0.7LB and 0.93LB and the ensemble yielded 0.156 LB.</p> <ol> <li>Indoor images detection We detected and re-scored indoor images in test dataset using the labels of the top5 predicted place categories from VGG16Places365 to vote if the given image is indoor or outdoor.</li> </ol> <p><strong>NN Search</strong> </p> <p>We extracted fc1 layer 4096-dimensional feature vectors from VGG16Places365 and computed cosine distances for all train-test image pairs. Brute force search on GPU using tensorflow took a couple of days. After that we increased the scores of images if CNN predicted label and top1 neighbour’s label agreed. Then we repeated this procedure for the same class average features. 
Finally we ended up using fast but less accurate faiss for NNs search on 512-dimensional feature vectors.</p> <p><strong>Source code:</strong></p> <p>Available @ <a href="https://github.com/antorsae/landmark-recognition-challenge">https://github.com/antorsae/landmark-recognition-challenge</a></p> <p>We also tried Annoy <a href="https://github.com/antorsae/landmark-recognition-challenge/pull/5/commits/87ceccf68f69d234024ada7e8f8eed058296caeb">https://github.com/antorsae/landmark-recognition-challenge/pull/5/commits/87ceccf68f69d234024ada7e8f8eed058296caeb</a> - and brute force.</p>
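<p>To illustrate the neighbour-agreement re-scoring used in the NN Search step (the exact boosting rule in the repository may differ; the boost factor below is illustrative), here is a small faiss sketch: build an inner-product index over L2-normalised train features, look up the nearest train neighbour of each test feature, and increase the confidence when the CNN prediction agrees with that neighbour's label.</p> <pre><code>import faiss
import numpy as np

def knn_agreement_boost(test_feats, test_preds, test_confs,
                        train_feats, train_labels, boost=1.5):
    """Boost confidences when the CNN label matches the nearest
    train neighbour (cosine similarity on L2-normalised features)."""
    d = train_feats.shape[1]
    index = faiss.IndexFlatIP(d)                  # inner product == cosine here
    index.add(np.ascontiguousarray(train_feats, dtype=np.float32))
    _, nn_idx = index.search(
        np.ascontiguousarray(test_feats, dtype=np.float32), 1)
    agree = train_labels[nn_idx[:, 0]] == test_preds
    return np.where(agree, test_confs * boost, test_confs)
</code></pre>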