Predicting Molecular Properties
24th (-2.633) - haha nice - pairs of pairs of ...
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Hey everyone,</h1> <h2>this was our initial approach:</h2> <p>For each connection we would like to find features that describe the influence other connections have on said connection. To do this we use a neural network that looks at pairs of connections. Its inputs are simply the connection types and all six Euclidean distances. We do this for all connections with shared weights, resulting in a tensor of size (1, channels, size, size-1). We efficiently implement the network using <strong>1x1 2D-convolutions</strong> to process an entire molecule at once. Once we have calculated these features, we sum them up (see picture), concatenate with distances and connection types of single connections and use another neural network, implemented as <strong>1D-convolutions</strong> with a kernel size of one, to get the scc predictions. It is probably easier to understand with this picture:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3356694%2Fba61a244d36f37fbe8b0ddf5e05f94e5%2Fimplementation.png?generation=1567077987502246&amp;alt=media" alt=""></p> <p>Our solution is invariant to:</p> <ul> <li>the order of connections fed to the network</li> <li>translations and orthogonal transformations</li> </ul> <p>and</p> <ul> <li>it does not require any padding</li> <li>works with differently sized molecules.</li> </ul> <p>Summing and weight sharing enforce additivity of pair features. If there are two connections in the vicinity of a connection that have the same physical influence (e.g. symmetric connections), their pair features should be the same! At least we hope so!</p> <p>Details of training:</p> <ul> <li>L2 loss</li> <li>Adam</li> <li>no regularization/no dropout/no batchnorm</li> <li>learning rate initially 1e-4, reduced to 1e-6 in multiple steps</li> <li>100 to 200 epochs</li> </ul> <p>Connection types are one-hot encoded, distances are standardized and scc are standardized per type. We used up to 1024 hidden neurons in up to 5 layers and ReLU activations in both networks. Training took ~2 weeks on colab (mostly on a single K80) and gave us a score of <strong>-2.126</strong>. </p> <p>We believe that this approach has a fundamental flaw. We only look at pairs and not at higher order interactions. What can we do? Looking at triplets is too expensive.</p> <h2>Improved approach:</h2> <p>Similarly to how we build a matrix of pairs from single connections, we use the pair features (after summing) of the previously explained network to build another matrix! Pairs of pairs. We also concatenate the original features to the matrix (types and distances). We again use <strong>1x1 2D-convolutions</strong> and sum it up. The final NN is given both the summed features of the pairs and the pairs-of-pairs network. Training was done similarly to our initial approach and resulted in a score of <strong>-2.473</strong>.</p> <p>The cool thing about this architecture is that you can repeat the process of building matrices over and over again, increasing the effective field of view. We did it one more time. Sadly we ran out of time in the end. 
2 weeks of training on kaggle kernels were insufficient.</p> <p>Our best submission is a blend of submissions (<strong>-2.633</strong>).</p> <p>We did very little hyper parameter and architecture search and are quite surprised that it worked quite well. There are many improvements to be made.</p>
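<p>To make the pairs-of-connections idea above concrete, here is a minimal PyTorch sketch (layer sizes, channel counts and the toy shapes are illustrative assumptions, not the team's actual code):</p>

```python
import torch
import torch.nn as nn

class PairNet(nn.Module):
    """Sketch of the pairs-of-connections network: 1x1 2D convolutions over a
    (1, channels, n_conn, n_conn - 1) tensor of connection pairs, summed over
    the partner axis and concatenated with per-connection features."""
    def __init__(self, pair_in=14, single_in=7, hidden=128):
        super().__init__()
        # shared weights over all pairs via 1x1 2D convolutions
        self.pair_net = nn.Sequential(
            nn.Conv2d(pair_in, hidden, kernel_size=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=1), nn.ReLU(),
        )
        # per-connection head via 1D convolutions with kernel size one
        self.head = nn.Sequential(
            nn.Conv1d(hidden + single_in, hidden, kernel_size=1), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),
        )

    def forward(self, pair_feats, single_feats):
        # pair_feats:   (1, pair_in, n_conn, n_conn - 1)  types + 6 distances per pair
        # single_feats: (1, single_in, n_conn)             type + distance per connection
        h = self.pair_net(pair_feats)      # (1, hidden, n_conn, n_conn - 1)
        h = h.sum(dim=-1)                  # sum over partner connections -> (1, hidden, n_conn)
        h = torch.cat([h, single_feats], dim=1)
        return self.head(h).squeeze(1)     # one scc prediction per connection

n = 12                                     # toy molecule with 12 connections
out = PairNet()(torch.randn(1, 14, n, n - 1), torch.randn(1, 7, n))
print(out.shape)                           # torch.Size([1, 12])
```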
Predicting Molecular Properties
35th Place Solution - Basic MPNN & First Medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to thank everyone who participated in this competition and the competition organizers. </p> <p>This was only my 2nd competition and I am really happy to have achieved my first ever medal and my first ever silver medal! </p> <p><strong>Overview</strong> - Every coupling was treated as its own graph - For the same molecule, graphs of 2 different couplings were different from each other. - Used the MPNN from the Gilmer paper <a href="https://arxiv.org/abs/1704.01212">https://arxiv.org/abs/1704.01212</a> - Used basic chemical features like atomic number and basic geometric features like angles and distances. - Had same features for all types but different connectivity for 1JHX, 2JHX and 3JHX - Most important part was not the model but how the molecular graph was connected together - All geometric features were relative to the atoms at atom index 0 and 1 and 1 or 2 other atoms which I found.</p> <p><strong>Molecular Graph Representation</strong> In the Gilmer Paper, a molecule is represented as a fully connected graph i.e. there are the default bonds (real bonds) and on top of that each atom is connected to each atom through a fake bond. In the paper, the point is to predict properties that belong to the whole graph and not to a particular edge or a node. So, in order to adapt to the nature of this competition, I used the following representation:</p> <ul> <li>Each coupling was a data point i.e. each coupling was its own molecular graph</li> <li>If a molecule had N number of couplings, then all N graphs are different from each other</li> </ul> <p><em>Type 1JHX</em> - Connected each atom to the 2 target atoms (atom index 0 and 1) on top of the default real bonds (note how this is not the same as the Gilmer paper where the graph is fully connected) - All geometric features were calculated as relative to the 2 target atoms.</p> <p><em>Type 2JHX</em> - Found the atom on the shortest path between the 2 target atoms. So there were now 3 target atoms (atom index 0, atom index 1, atom on shortest path) - Connected each atom to the 3 target atoms on top of the default real bonds. - Features were calculated relative to all 3 target atoms e.g. distance &amp; angle to atom index 0, atom index 1 and the atom on shortest path.</p> <p><em>Type 3JHX</em> - Found the 2 atoms on the shortest path between the 2 target atoms. So there were now 4 target atoms (atom index 0, atom index 1, 1st atom on shortest path, 2nd atom on shortest path) - Connected each atom to the 4 target atoms on top of the default real bonds. - Features were calculated relative to all 4 target atoms.</p> <p>Also, I made all the graphs fully bidirectional. Using a fully bidirectional graph gave me a significant improvement over a one-directional graph which was used in the paper.</p> <p><strong>Model</strong> - The model was really basic with some additional layers and slightly larger dimensions, very similar to what is written here <a href="https://github.com/rusty1s/pytorch_geometric/blob/master/examples/qm9_nn_conv.py">https://github.com/rusty1s/pytorch_geometric/blob/master/examples/qm9_nn_conv.py</a>. 
- I added very little Dropout and BatchNorm in the initial linear transformation layer which actually led to the model performing better. - I experimented with adding Dropout in the MLP used by the NNConv and it showed promising results but they were too unstable so I decided to not go through with it. - I tried adding an attention mechanism over the messages passed by the network but did not see an improvement in score (most likely implemented it incorrectly) - I also tried using the node vectors of the target atoms only to predict the scc but this actually performed way worse (probably because the way I am representing my molecules does not translate well to using just the node vectors of a subset of nodes) - I only trained a single model for each type (8 models total) so did not do any ensembling</p> <p><strong>Train only data</strong> Unfortunately, towards the end of the competition I was busy with some other work so could not get a chance to play around the fc, pso etc features. </p>
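<p>To illustrate the coupling-specific connectivity described above, here is a small sketch (the helper name and inputs are hypothetical, not the author's code) that builds a bidirectional edge list from the real bonds plus connections of every atom to the target atoms on the shortest bond path:</p>

```python
import itertools
import networkx as nx

def coupling_edges(bonds, n_atoms, a0, a1):
    """Bidirectional edges for one coupling: the real bonds, plus every atom
    connected to the target atoms (atom_index_0, atom_index_1 and any atoms
    on the shortest bond path between them)."""
    g = nx.Graph(bonds)
    path = nx.shortest_path(g, a0, a1)     # e.g. [a0, mid, a1] for a 2J coupling
    targets = set(path)                    # 2, 3 or 4 target atoms for 1J/2J/3J
    edges = set()
    for i, j in bonds:                     # keep the default real bonds
        edges.update({(i, j), (j, i)})
    for t, k in itertools.product(targets, range(n_atoms)):
        if t != k:                         # connect every atom to every target atom
            edges.update({(t, k), (k, t)})
    return sorted(edges)

# toy example: a 4-atom chain 0-2-3-1 with a 3J coupling between atoms 0 and 1
print(coupling_edges([(0, 2), (2, 3), (3, 1)], n_atoms=4, a0=0, a1=1))
```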
Predicting Molecular Properties
3rd solution - BERT in chemistry - End to End is all you need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for kind words, everyone!</p> <p>Bravo!! Thanks :)</p> <p>Hope this post is informative to you!</p> <p>I've always liked your solutions :) Thanks for kind words!</p> <p>Good to hear that this post is easy to read for you. We will follow the host and kaggle's guidelines :)</p> <p>Many thanks to our team <a href="/limerobot">@limerobot</a> <a href="/sunghwanchoi">@sunghwanchoi</a> <a href="/songwonho">@songwonho</a> <a href="/yslee04">@yslee04</a> </p> <p>Our story</p> <p>As you know, we need to find the appropriate representation for the data. That's why we were struggling with feature engineering.</p> <p>As CPMP mentioned, two members of our team have domain knowledge relevant to this competition. Sunghwan Choi has a Ph.D. in quantum chemistry and works in the field of quantum chemistry and chemical applications of machine learning. </p> <p>I'm in a Ph.D. course in chemical engineering and have some experience with quantum calculations and handling molecules. More importantly, I'm a kaggler :).</p> <p>We thought that our problem could be solved by conventional graph models whose edge features are distance-derived properties. Those models are quite conventional in machine learning applications on chemical systems. Hence, we tried to figure out appropriate hyperparameters and edge features for the models.</p> <p>As many kagglers did, we also tested message passing neural networks by adopting many useful kernels. We thank many kagglers, especially Heng Cher Keng :). By modifying their solutions, we reached a silver position 1 month ago. But the gap between the leading group and us was getting large. </p> <p>We had to find a breakthrough. We brainstormed a lot. At that time limerobot, who is an expert in natural language processing, suggested using the raw xyz coordinates. Sunghwan and I didn't agree with that, because if we use xyz coordinates instead of distances, translational and rotational invariance is not satisfied. A model which does not preserve those invariances seemed ridiculous.</p> <p>But limerobot went his own way, and he showed that his model beat my GNN. His model was based on the BERT model. You can see his big success in the Toxic competition on his profile. </p> <p>Maybe insights from the Toxic competition saved us :).</p> <p>Anyway, he doesn't have any domain knowledge. He just wanted to make a model that learns the complex representation using xyz coordinates (end-to-end).</p> <p>He just input the xyz coordinates of atom1 and atom2, the coupling type, the distance and the difference of the xyz coordinates (he thought the model could learn the distance formula based on that... amazing).</p> <p>Because we had only one month, we decided to do everything based on the transformer. 
After that, we did a number of experiments for hyperparameter tuning.</p> <p>Because BERT is a very large model, the performance differed according to the number of hidden layers, the type of embedding and the learning schedule.</p> <p>One week before the end of the competition, we settled on our own scheduling and parameters and a tiny modification of the readout layer (I will explain the model later). </p> <p>We made multiple models. Based on their ensemble, we got 3rd place :).</p> <p>Thanks for reading our story. We want to share the specific magic below. Please keep going :)</p> <h1>Overall architecture</h1> <p>Here is the overall architecture of our model. As you can see, our input sequence is transformed into an output sequence by the BERT encoder.</p> <p>• The number of encoder layers: 8 • The number of attention heads: 8 • Dropout ratio is 0.1, which is a conventional choice for BERT models. • For each type, a different readout network with the same architecture is used</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1155353%2F9d825e9c577db8b951a18cb9827e683b%2Ffig_1.png?generation=1567130786795298&amp;alt=media" alt=""></p> <h1>Input features and embedding layer for them</h1> <p>Our float sequences, composed of multiple embedding results, are the magic behind the achievement.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1155353%2Fac01c25bcce0d2e5e1638b6f6bd2c952%2Ffig_2.png?generation=1567130820868426&amp;alt=media" alt=""></p> <p>The multi-head attention layer itself preserves permutational invariance, so the order of couplings does not change the results. However, invariance when atomic_index_0 and atomic_index_1 are swapped is not preserved, since we use a feature vector that is the concatenation of the embedding results of the atomic charge, xyz coordinates (position) and atomic number of the two atoms. </p> <p>• Size of embedding for atomic charge: 32 • Size of embedding for position: 256 • Size of embedding for atomic number: 64 • Size of embedding for distance: 64 • Size of embedding for type: 64</p> <p>The total feature size for a single feature vector is (32+256+64)*2+64+64=832.</p> <h1>Augmentation</h1> <p>In order to impose pseudo-invariance on our model, we used rotational and translational noise when augmenting the data.</p> <p>• Translational noise: for each axis, Gaussian noise (mean: 0, std: 2) was added • Rotational noise: a rotational transformation whose axis is the translational noise vector and whose angle is drawn from Gaussian noise (mean: 0, std: 3.14/2)</p> <h1>Regression layers for predicting scalar coupling constants</h1> <p>• As you know, the scalar coupling (sc) value can be decomposed into four different terms (fc, sd, pso, dso) • After optimizing the architecture and various losses, we found that auxiliary targets using the contributions gave a high boost. • During training, the model minimizes loss1 + loss2 with the AdamW algorithm. • There are 8 regression layers to cover the 8 different coupling types</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1155353%2F939a6ff0e4a6848477a655a2bfc6ad54%2Ffig_3.png?generation=1567130847675121&amp;alt=media" alt=""></p> <h1>Specific learning rate</h1> <p>• We always used linear learning rate decay. • We think there might be improvement with other learning schedules such as cyclic LR. But we didn't have time because BERT is very large… (about 75M parameters. 
It took 1~2 days to get a model using 2~4 V100 machines)</p> <h1>Pseudo labeling</h1> <p>• To get better results, we needed some magic. With the insight that there was little probability of overfitting (Sunghwan Choi's insight) and an experimental result from limerobot, we adopted pseudo-labeling. • After predicting on the test set, we used the pseudo-labeled test dataset for training. • The model showed more than -3.4 CV. So we trained the model further with only train data to mitigate overfitting (finally we got ~-3.11 LB single models). • The overall learning process is illustrated below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1155353%2F92bc1322f07e876a7e5f1c60684a095d%2Ffig_4.png?generation=1567130868749389&amp;alt=media" alt=""></p> <h1>Final submission</h1> <p>• We made 14 models with various seeds and hidden layers (most have 8 layers, the others have 6 layers) • After a weighted average according to the CV score per type, we got -3.16. • After multiple rounds of pseudo labeling, we had 8 models. • With a simple average, we got -3.19 (our final score :) 2 hours before the end of the competition.</p> <h1>What we’ve learned</h1> <p>• End-to-end works!!! Amazing BERT. • The learning schedule is very important for modelling molecular properties.</p> <p>Thanks for kind words :)</p> <p>You did an amazing job as well :) congrats!</p> <p>Thanks! And congratulations!</p> <p>Limerobot is amazing! </p> <p>Maybe BERT can be applied in other applications!</p> <p>I was shocked by the performance of BERT :)</p> <p>Hi, thanks for kind words :)</p> <ol> <li>We trained the model from scratch.</li> <li>We got a model which showed -3.06 LB without pseudo labels. With 4 V100s, about 2 days.</li> </ol> <p>I'm really lucky. Our teamwork was wonderful :) Thanks for kind words!</p> <p>Thanks! Hope this post is useful to you!</p> <p>As you said, limerobot is amazing!</p> <p>Hope this post is easy to read!</p> <p>We were also surprised by the performance of using raw coordinates. I don't think there is any leak in how the structures of the molecules were made. The molecules might be optimized using QM calculations.</p> <p>Anyway, we used rotation matrices which did not rotate the molecules too much. So the rotated molecules showed very similar coupling constants compared to those of the original structures.</p> <p>We just embedded the continuous value into a size of 32 or 64, not using binning.</p> <p>The size of 32 was arbitrary. We wanted to optimize the size, but there was not enough time.</p> <p>We can expect that there are optimal embedding sizes for the various inputs.</p> <p>Hi, I really want to thank you! You showed the possibilities to our team. It is very interesting that the 1st, 2nd and 3rd teams used the transformer :) You did an amazing job!</p> <p>Thanks for kind words :) Hope this post is helpful to you!</p> <p>Many discussions brought us amazing ideas :) Thanks!!</p> <p>My teammates are amazing!</p> <p>Hope our explanation is easy to read for you! Thanks!</p> <p>We did it because we're a team :) Thanks!</p> <p>Our team is amazing! I'm a lucky man.</p> <p>I will keep going until GM. We will always meet :)</p> <p>Thanks for kind words! 
As illustrated in the description, we will follow the policy of CHAMPS.</p> <p>Thanks!! Very happy now :)</p> <p>We just embedded the xyz coordinates (3 dimensions) into 256 dimensions!</p> <p>Hope this post is easy to read!</p> <p>Hi, Saad. Hope this solution is easy to read for you :)</p> <p>Thanks for kind words! You also did a great job :) I've read your kernels, which showed what's going on in the competition.</p> <p>Thanks for reading!</p> <p>Hope this solution inspires you!</p> <p>Thanks! Hi, we used pytorch-transformers (<a href="https://github.com/huggingface/pytorch-transformers">https://github.com/huggingface/pytorch-transformers</a>).</p>
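<p>As a rough illustration of the input features described above, here is a minimal PyTorch sketch of a per-coupling token built by projecting the continuous inputs and embedding the discrete ones, then concatenating to the stated 832 dimensions; the layer choices, vocabulary sizes and the omission of the xyz-difference term are assumptions, not the team's code:</p>

```python
import torch
import torch.nn as nn

class CouplingToken(nn.Module):
    """One 832-dim token per coupling: (32 + 256 + 64) * 2 + 64 + 64 = 832."""
    def __init__(self):
        super().__init__()
        self.charge = nn.Linear(1, 32)       # atomic charge (continuous, no binning)
        self.pos    = nn.Linear(3, 256)      # raw xyz coordinates
        self.atom   = nn.Embedding(10, 64)   # atomic number
        self.dist   = nn.Linear(1, 64)       # atom_0 - atom_1 distance
        self.ctype  = nn.Embedding(8, 64)    # 8 coupling types

    def atom_part(self, q, xyz, z):
        return torch.cat([self.charge(q), self.pos(xyz), self.atom(z)], dim=-1)

    def forward(self, q0, xyz0, z0, q1, xyz1, z1, d, t):
        return torch.cat([self.atom_part(q0, xyz0, z0),
                          self.atom_part(q1, xyz1, z1),
                          self.dist(d), self.ctype(t)], dim=-1)

tok = CouplingToken()
x = tok(torch.randn(5, 1), torch.randn(5, 3), torch.randint(0, 10, (5,)),
        torch.randn(5, 1), torch.randn(5, 3), torch.randint(0, 10, (5,)),
        torch.randn(5, 1), torch.randint(0, 8, (5,)))
print(x.shape)   # torch.Size([5, 832]) -- one token per coupling in a molecule
```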
Predicting Molecular Properties
42nd Solution, explanations and apologies
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Update Kernel</strong> I added all the information about my features.</p> <p><strong>Updated kernel</strong> Our single model LGBM has private score -2.242 <a href="https://www.kaggle.com/jesucristo/single-lgbm-2-242-top54">Single LGBM -2.242 = Top54 </a></p> <p><a href="/antorsae">@antorsae</a> Congratulations! Looking forward to your solution/paper :) It's the first time I've seen a Spaniard so high up on Kaggle; I'm eagerly waiting to see how you did it :)</p> <p>First of all, I'd like to apologize for what happened last week, the leaked -2.163 submission ... also thank my mates and <a href="/inversion">@inversion</a> for understanding the situation and reacting quickly.</p> <p>It's surreal, I know. Days later I read the posts and many people didn't understand why it had happened or what reasons I would have to do it. It's very surreal; I am surely the first kaggler to leak a top-50 solution while in the bathroom (not a joke). Now when I remember it I laugh, but at that moment I felt really really bad. I submit, go to the bathroom, come back and see my teammates are calling me, desperate...</p> <p>I write this post in order to prove our solution is legit, explain it and show our code.</p> <p>I didn't hide my mistake and communicated it as quickly as I could to the community and administrators, and I pointed out who had used it and forked it.</p> <p>I don't know what happened that night. I think I edited a kernel that was already public, or because I am used to sharing kernels I didn't read the message when I invited my mates and clicked to accept... I don't know.</p> <p>A lot of people shared different opinions, and I thought about it... I remember the post <a href="https://www.kaggle.com/product-feedback/83351">Disallow Deleting Topics</a> by <a href="/brandenkmurray">@brandenkmurray</a>; there are a lot of deleted posts in these forums, and using posts you can share almost the same information as kernels (code, pictures, descriptions etc) ... (it's just an opinion, I think it's also a big problem and I thought about it after reading all the messages). I resent the fact that many people think I leaked it on purpose or that it's a case of private sharing, because I hate private sharing and believe in fair competitions, but at the same time I understand those people because this is surreal.</p> <p>I just wanted to apologize to the community, the hosts and my mates... and share my point of view. I hope you have obtained great results, or at least results in line with the time you have dedicated to the competition, and I also hope that you have not been affected by the leak.</p> <p>best, Marcos.</p> <hr> <h3>Base</h3> <p>The -2.163 submission was a simple median of 4 files; the names of the files were self-descriptive, like <em>lgbm1945.csv</em>. The models were:</p> <p>| model | LB |
|---|---|
| lgbm | -1.945 |
| mpnn | -1.884 |
| semGCN | -1.830 |</p> <p><strong>Why didn't we open source the solution?</strong> Team decision. I would have liked to, but we had 3 big problems: 1. 
We couldn't reproduce results</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2779868%2F2818de1aed80ee79651f428d131e7c53%2FScreenshot%20from%202019-08-29%2000-11-39.png?generation=1567030811963445&amp;alt=media" alt=""></p> <p>The people who forked the kernel can check that we used those subs for stacking, but we made those subs 1 month ago... (I wasn't even a member of the team) and we couldn't reproduce those results; semgcn is a python script (no checkpoints, no github, just a local run). 2. I read about that idea on Friday (the next day, in the afternoon) and I was very busy, I couldn't prepare all the code + continue the competition... and then all the problems with kaggle kernels + we couldn't reproduce results. 3. If we open sourced the solution, people would use the submission without running the code and use the excuse: no no, we generated this file using the open source solution (at least that's what we thought).</p> <h3>Features</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2779868%2Fc7a27bb40ebfeb48f2f68ddd76e57f97%2FScreenshot_from_2019-08-25_09-42-58.png?generation=1567037089996198&amp;alt=media" alt=""></p> <p>I read the posts and papers, watched some youtube videos and used my knowledge of biochemistry and bioinformatics. - Distances and Angles (bond angles and dihedral angles). The C-C bond distance is very important. - Bond type - Number of atoms, number of neighbors of each ... - Hybridization (aromatization, atom__type, electronegativity, valences); in theory these are equivalent. &gt; C3 = carbon sp3 = more electronegative than carbon2</p> <p>Important <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/104241#latest-606224">Is 1JHC really one class</a> &gt; Giba: The two groups are easily split by setting a threshold on the 1J coupling distance at 1.065.</p> <p>The reason is this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2779868%2F4cef58f29e1062c9271e1b17e9bc80f9%2FScreenshot_from_2019-08-25_09-43-22.png?generation=1567037089834080&amp;alt=media" alt=""></p> <p>So we started to count those substituents using <a href="http://rdkit.org/docs_temp/source/rdkit.Chem.Fragments.html">rdkit.Chem.Fragments</a> (number of aromatic nitrogens, number of carboxylic acids etc)</p> <h3>Things that didn't work</h3> <ul> <li><p><strong>SchNet</strong> I improved this amazing baseline and we could get -1.61 with only 25 epochs, but we faced 2 problems:</p> <ol><li>Time: it needs long epochs, and we only had 1 kaggle gpu (MPNN was running on GCP already)</li> <li>Initialization, random_state, newbie error... the results vary greatly depending on the initial configuration and we forgot to set the random seed.</li></ol></li> <li><p><strong>Karplus Equation</strong></p></li> </ul> <p>Bothner-By equation: J(HH) = 7 - cos Θ + 5 cos 2Θ</p> <p>where Θ is the torsion angle ... The problem wasn't obtaining those angles; the problem was that I checked openbabel and RDKit and the angles were not well calculated! For example we have CH4 (the 1st molecule). Even for bond angles, I tried with water (H2O) and instead of 104.5° I obtained random results like 122°, 97°... I read about it <a href="https://www.rdkit.org/docs/GettingStartedInPython.html">here</a>, and in the case of RDKit, it uses an algorithm based on distance geometry for generating 3D (xyz) conformations of molecules, so probably that's the reason :(</p> <p>&gt; Note that the conformations that result from this procedure tend to be fairly ugly. 
They should be cleaned up using a force field. This can be done within the RDKit using its implementation of the Universal Force Field (UFF).</p> <h3>Final submission and postprocessing</h3> <p>| model | LB |
|---|---|
| lgbm | -2.103 |
| mpnn | -2.24 |
| NN | -2.11 |</p> <p>btw, we just realized this (private scores):</p> <p>| single model | LB private |
|---|---|
| lgbm | -2.252 |
| NN | -2.134 |</p> <p>Median stacking + <strong>Lasso</strong> regression in order to smooth/round the predictions</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2779868%2F642260d3a87b56078ba639a7a6cbf216%2FScreenshot%20from%202019-08-29%2002-11-15.png?generation=1567037508316091&amp;alt=media" alt=""></p> <h3>Models and Code</h3> <ul> <li>We used the thread <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/93972#latest-602628">Which graph CNN is the best (with starter kit at LB -1.469)?</a> as a baseline for the GNN, all credits to <a href="/hengck23">@hengck23</a> (<strong>note:</strong> we made several changes).</li> <li>NN based on <a href="https://www.kaggle.com/xwxw2929/keras-neural-net-and-distance-features">Keras Neural Net and Distance Features</a> by <a href="/xwxw2929">@xwxw2929</a> </li> <li>LGBM based on <a href="https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481">Distance - is all you need. LB -1.481</a> by <a href="/criskiev">@criskiev</a> </li> </ul> <p>Other important kernels: - <a href="https://www.kaggle.com/toshik/schnet-starter-kit">SchNet Starter Kit</a> by <a href="/toshik">@toshik</a> - <a href="https://www.kaggle.com/sunhwan/using-rdkit-for-atomic-feature-and-visualization">Using RDKit for Atomic Feature and Visualization</a> by <a href="/sunhwan">@sunhwan</a> - <a href="https://www.kaggle.com/jmtest/molecule-with-openbabel">Molecule with OpenBabel</a> by <a href="/jmtest">@jmtest</a> </p> <p><strong>Code and datasets</strong></p> <ul> <li><p><a href="https://github.com/mv-lab/CHAMPS-Kaggle19">github</a></p></li> <li><p>Our single model LGBM has private score -2.242 <a href="https://www.kaggle.com/jesucristo/single-lgbm-2-242-top54">Single LGBM -2.242 = Top54 </a></p></li> <li><p><a href="https://www.kaggle.com/jesucristo/mol-features">dataset with features</a></p></li> <li><p><a href="https://www.kaggle.com/jesucristo/molsubs">dataset with submissions</a></p></li> </ul> <p>The worst part is that I know, and my mates know, that it's true. It's a good anecdote from my career as a Data Scientist 😂. Thank you <a href="/inversion">@inversion</a> :) </p>
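<p>For reference, a torsion (dihedral) angle can be computed directly from the xyz coordinates without any conformer generation, which sidesteps the RDKit/OpenBabel issue mentioned above. A small numpy sketch (function names are illustrative), together with the quoted Bothner-By relation:</p>

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle (degrees) of the chain p0-p1-p2-p3, straight from xyz."""
    b0, b1, b2 = p0 - p1, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # components of b0 and b2 perpendicular to the central bond b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

def bothner_by(theta_deg):
    """Bothner-By estimate of the 3J(HH) coupling from the torsion angle."""
    t = np.radians(theta_deg)
    return 7 - np.cos(t) + 5 * np.cos(2 * t)

# toy example: four points with a 60 degree torsion around the z-axis
theta = dihedral(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]),
                 np.array([0.0, 0.0, 1.0]), np.array([0.5, 0.866, 1.0]))
print(round(theta, 1), round(bothner_by(theta), 2))   # 60.0 4.0
```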
Predicting Molecular Properties
43 place solution of 2 Experts and the farmer
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! Now <a href="/yoichi7yamakawa">@yoichi7yamakawa</a> is like a domain expert in this field :D</p> <p>Thanks!!!</p> <p>Thank you for your comment, I'm happy to share our ideas with you, even if only a little bit :D</p> <p>Thank you! I will read your kernel and learn from it!</p> <p>Thanks!!!</p> <p>Thank you for your information! One reason for using networkx to calculate rings is to understand the structure of the molecule. The libraries are easy to use but are a bit of a black box.</p> <p>We use a subset of the LGBM features for the MPNN (e.g. openbabel features for node and edge features).</p> <p>First of all, we would like to thank everyone who participated in this competition, the host members and the Kaggle Team! I'm happy to finish this competition with a really stable LB, no shake-down :D</p> <h2>Modeling part</h2> <p>Mainly our team used LightGBM with various features (around 500), and adopted an NN model (MPNN), customized based on <a href="/hengck23">@hengck23</a>'s MPNN model, to bring model diversity and enhance the blending. Also, we understood that seed averaging is very useful in this competition, so we utilized a large amount of seed averaging.</p> <p>For details, please see the following slide.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F291723%2F70b9d209795661e831c5216aea880947%2Fsolution.png?generation=1567041478975554&amp;alt=media" alt="solution"></p> <h1>Features part</h1> <p>We struggled to find features describing the global/local environment of atoms. We made a lot of features using chemical libraries (Openbabel, Dscribe, RDkit). These libraries are very helpful because we had little domain knowledge at the starting point.</p> <p>Some effective features are below.</p> <h3>distance</h3> <p>Some distances are essential features. We experimented with many distances between - index0 and index1 atoms - index0/1 and atoms in α,β,γ-substituents - index0/1 and atoms of each type (‘H’,‘C’,‘N’,‘O’) - nearest neighbors of index0/1</p> <p>We also used features from this brilliant notebook (<a href="https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481">https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481</a>) by <a href="/criskiev">@criskiev</a></p> <h3>Angle</h3> <p>Angle features also have very strong effects. We tried making a lot of hand-crafted features. - bond angle - plain angle - dihedral angle with index0/1 - stats of dihedral angles with index0 and atoms 3 atoms away from index0</p> <h3>atom type</h3> <p>We used the atom type (‘H’,‘C’,‘N’,‘O’) as a feature instead of the electronegativity. Using the atom type not only of index0/1 but also of some substituents and neighbor atoms, we tried to describe the structures around the atoms involved in the coupling.</p> <h3>partial charge</h3> <p>Partial charge features played an important role. We mainly made them using <code>GetPartialCharges</code> in Openbabel. Some important features are below. 
- the partial charge values themselves for index0/1 - the diff/ratio of the partial charges of index0/1</p> <h3>characteristics of substituents</h3> <p>According to this document (<a href="https://www.ucl.ac.uk/nmr/NMR_lecture_notes/L3_3_97_web.pdf">https://www.ucl.ac.uk/nmr/NMR_lecture_notes/L3_3_97_web.pdf</a>), it seems effective to make features describing α,β,γ-substituents.</p> <h3>the hybridization of atoms</h3> <p>This feature is also from Openbabel.</p> <h3>ACSF features</h3> <p>This local descriptor works well. These features derive from the chemical library Dscribe (<a href="https://singroup.github.io/dscribe/tutorials/acsf.html">https://singroup.github.io/dscribe/tutorials/acsf.html</a>). We didn't tune g2,4_params because we didn't have enough time, so these features may have more room for improvement.</p> <h3>Fingerprint</h3> <p>To describe the characteristics of the molecules, chemists seem to use molecular fingerprints. We made some features using the Morgan fingerprint (a very large number of bits) and the MACCS fingerprint (167 bits). They are global descriptors, and they worked a little.</p> <h3>topological data analysis</h3> <p>Maximum and minimum radius of persistent homology with <a href="https://ripser.scikit-tda.org/">ripser</a>.</p> <h3>Bond ring feature</h3> <p>Ring size of the bond rings, computed with networkx.</p> <h3>PCA feature</h3> <p>Explained-variance PCA features calculated from the xyz positions, aiming to represent the global shape of a molecule.</p> <h1>Team member</h1> <p><a href="/yoichi7yamakawa">@yoichi7yamakawa</a> &amp; <a href="/kenmatsu4">@kenmatsu4</a> </p> <p>Thanks!!!</p>
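<p>As a small illustration of the bond ring feature mentioned above, here is a networkx sketch (the helper name and the ring convention are assumptions) that assigns each bond the size of a ring it belongs to:</p>

```python
import networkx as nx

def bond_ring_sizes(bonds):
    """For every bond, the size of the smallest ring (from a cycle basis) that
    contains it, or 0 if the bond is not part of any ring."""
    g = nx.Graph(bonds)
    sizes = {tuple(sorted(b)): 0 for b in bonds}
    for cycle in nx.cycle_basis(g):                      # one ring per independent cycle
        for edge in zip(cycle, cycle[1:] + cycle[:1]):   # consecutive atoms close the ring
            key = tuple(sorted(edge))
            current = sizes.get(key, 0)
            sizes[key] = len(cycle) if current == 0 else min(current, len(cycle))
    return sizes

# toy example: a benzene-like 6-ring with one extra substituent bond (5, 6)
bonds = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (5, 6)]
print(bond_ring_sizes(bonds))   # ring bonds map to 6, the (5, 6) bond stays 0
```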
Predicting Molecular Properties
5th place solution : DL guys
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Thanks</h1> <p>First of all, a big Thank You to the organizers for this outstanding competition. This competition was very challenging and interesting from many points of view.</p> <p>Secondly, congratulations to all teams which completed this competition, whether you are in the medals or not. We learned a lot from your solutions and your discussions. We didn't have any expertise in either chemistry or graph neural nets before this competition; this experience has been very enlightening for the 3 of us.</p> <p>Also big kudos to the top 4 teams, your usage of Transformers is quite eye opening (honestly we did think about it but were not audacious/confident enough to test it out). Maybe this competition will open a new paradigm of Deep Learning for molecular properties - Transformer is all you need :D</p> <p>A last thank you to my teammates and coworkers Lam Dang and Thanh Tu Nguyen :) It was very fun competing with you.</p> <h1>Solution</h1> <p>Without further ado, here is a highlight of our solution: - On a macro level our best submission is a 2-layer stacking: - The base level consists of different variants of the general Graph Neural Net with edge, node and global representations, with some variations (cf. Architecture below) - It was implemented with pytorch and pytorch_geometric. - The 2nd level is a set of meta-models trained on our validation set of 5000 molecules: 1 linear stacking model and 1 LGBM (cf. Stacking section below) - The final submission is a blend of the 2 meta-models</p> <h2>Architecture:</h2> <p>The final architecture is based on the paper <a href="https://arxiv.org/abs/1812.05055">https://arxiv.org/abs/1812.05055</a>. We tried different variations to improve this architecture, here is a summary of what worked and what didn't work: - Normalization: We found that LayerNorm worked better than BatchNorm for this data and helps improve convergence - Softplus vs ReLU: Softplus did provide a ~ 0.1 boost of logMAE for our models vs a ReLU baseline - Edge to node message gating: We found that adding some gating mechanism to the edge representation before the scatter_mean (see torch_geometric) for the node update helps - Edge to edge convolution: Guillaume implemented something that seemed to work very well. He noticed after a feature importance test that the most important one by far was the angle between an edge and the edge with the closest atom to the first edge. To integrate this angle feature for more than the closest edge, we updated each edge with a convolution of the edge in question and its neighboring edges in the graph (more specifically the neighboring edges that chemically connect two atoms), putting the angle of the edge vectors into this convolution. This architecture tweak made our architecture 5 times slower but gave us a 0.15 improvement compared to the best model without it. - 1 prediction tail per type: All types share a GNN “body”, but we found that having a different MLP for each type helps. - In some variants, before feeding into the output MLP layers, we pool all the edges and nodes in the chemical bond path from atom_0 to atom_1. 
It seems to have helped in the beginning of the competition but our best model did not use it. - For our architectures, we found that having a representation of the link between atom_0 and atom_1 is important. Also, including the global representation as an input to the top layers is important.</p> <h2>Stacking:</h2> <ul> <li>Our single best model is the one with edge to edge convolution, which gives us -2.9. But we have various models around -2.7 which are variants of it. By stacking all of them (20 models) we got -3.13 on LB. </li> <li>Another thing we found out on the last day, which helped improve our score from -3.13 to -3.15, is adding checkpoints of our models to the stacking pipeline. So finally, we had 50 predictions to stack.</li> </ul> <p>Our final result is a blend of LGBM and HuberRegressor. - LGBM: 20 GroupKFold on all bond types together - HuberRegressor: 20 GroupKFold on every bond type separately. </p> <h2>Computation:</h2> <p>We had: - 1 GTX 1080ti x 2 months + 1 RTX 2080ti x 1 month - 1 RTX 2080ti x 3 months - 1 V100 x 2 months (rented) + 4 V100 x 2 weeks (rented)</p>
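<p>A minimal sketch of the second-level stacking described above, using scikit-learn's GroupKFold (grouped by molecule) and a HuberRegressor on the base-model predictions; the column layout, toy data and helper name are assumptions:</p>

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import GroupKFold

def stack_oof(base_preds, y, molecule_ids, n_splits=20):
    """Out-of-fold predictions of a robust linear meta-model trained on the
    base-model predictions, with folds grouped by molecule so no molecule
    leaks between the train and validation parts of a fold."""
    oof = np.zeros(len(y))
    for tr, va in GroupKFold(n_splits=n_splits).split(base_preds, y, groups=molecule_ids):
        meta = HuberRegressor().fit(base_preds[tr], y[tr])
        oof[va] = meta.predict(base_preds[va])
    return oof

# toy data: 3 base models, 1000 couplings spread over 60 molecules
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.1, size=1000)
groups = rng.integers(0, 60, size=1000)
print(stack_oof(X, y, groups).shape)   # (1000,)
```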
Predicting Molecular Properties
62nd Place Solution.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Code Specification</h1> <p>I'll try to briefly summarize different components of my solution.</p> <h2>Model Structure</h2> <p>I created 3 models. ( 4th one was a <a href="https://www.kaggle.com/ioohooi/chemistry-of-best-models-1-839">public kernel </a> ) 1. Catboost (LB: -1.95) 2. LightGBM (LB: -1.87) 3. GNN (LB: -2.05)</p> <h3>Features for Catboost and LightGBM</h3> <p><code>Bond detection</code> Initially I inferred bonds(<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/find_edges.py">Code</a>). However, I later used <a href="http://openbabel.org/wiki/Main_Page">openbabel</a> for bond detection. </p> <p><code>Neighbor detection</code> I defined neighbors in 5 ways: a. Using the <strong>edge data</strong>, I was able to define 1st neighbor atoms, 2nd neighbor atoms and so on. I was also able to figure out intermediate atoms in path from atom_index_0 to atom_index_1. (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/compute_intermediate_atoms.py#L121">Code</a>) b. Using distance between atoms and standard bond length, neighbors were inferred which were close enough to be considered to be bonded (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/neighbor_features_distance.py#L104">Code</a>) c. Based on atom_index. For example, neighbors of atom_index 5 will be 3,4,6,7. (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/neighbor_features_atom_index.py">Code</a>) d. k-nearest atoms were considered as neighbors. <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/edge_features.py#L182">Code</a> e. I divided the region around center of the bond by creating right circular cones with different base angles. Cone's axis was the bond. It gave me neighbors which were in <strong>different angular neighborhood</strong> ([0°-60°], (60°-120°) and so on). <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/conical_segmented_features.py">Code</a></p> <p><code>Neighbor based features</code> I created aggregate of Electronegativity, Valency, Lone pairs, bond angles,mass and distance for neighbors generated from above 5 ways.</p> <p><code>Angle based features</code> a. Dihedral angle and some other angles were also computed for 2,3 bond apart atom pairs. (3JHH,3JHN,3JHC,2JHH,2JHN,2JHC) b. Aggregate of angles for neighbors. </p> <p><code>Estimation of partial charge</code> Using electronegativity information and co-ordinates of atoms, I computed electronegativity vectors normalized by distance between the atoms. Doing a vector sum of these electronegativity vectors over bond neighbors yielded me an estimate of partial charge on each atom. <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/edge_features.py#L95">Code</a>. 
Later in the competition, I started to use the sum of electronegativity differences normalized by distance as a measure of partial charge (no vector addition).</p> <p><code>Potential based features</code> Used Yukawa and Coulomb potentials computed with the estimated charge. <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/atom_potentials.py#L14">Code</a>.</p> <p><code>Other features</code> I used some of the atom features generated by openbabel. Using edge data, I was also able to infer cycle length and other related features (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/cycle_features.py#L23">Code</a>). I also used bond energies as a feature for the bonds of the molecule (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/bond_features.py#L31">Code</a>). Carbon hybridization was used as a feature. For atom pairs 3 bonds apart which had 2 sp2-hybridized carbon atoms as intermediates, the cis/trans configuration was used as a feature (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/intermediate_atom_features.py#L144">Code</a>). I also tried to estimate how much of an electron 'donor' tendency pi bonds have (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/pi_donor.py">Code</a>).</p> <h3>Features for GNN</h3> <p>I had to recompute the edge features as they were to be computed between each possible atom pair in a molecule. I ended up coding a numpy-heavy implementation of the top features I had computed for the tree-based models. Earlier, features were computed mostly using pandas.</p> <h1>Journey</h1> <p>I started this competition after completing a <a href="https://www.kaggle.com/c/competitive-data-science-predict-future-sales">playground competition</a>. In the beginning I started working on it without looking at discussions and public kernels. Such arrogance :D. With time, however, improvements started saturating. Then I skimmed through the discussions ;). <img src="https://i.chzbgr.com/full/6493143296/hA923C6DD/" alt=""> Again, after some time, progress plateaued and with all humility I started understanding the different ideas present in the public domain. <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/96436#latest-565548">https://www.kaggle.com/c/champs-scalar-coupling/discussion/96436#latest-565548</a> and <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/93928#latest-588773">https://www.kaggle.com/c/champs-scalar-coupling/discussion/93928#latest-588773</a> were among the most helpful to me. <code>When you know that something will most probably work, it is much easier to code. ;)</code> At some point, I saw a comment stating that one should use mse as the loss instead of mae. Initially, I had tried both of them but had not seen much difference. Now, however, I got a 0.2 jump!! At this point I was around -1.5 LB. Then I got wind of GNNs from <a href="https://www.kaggle.com/fnands/1-mpnn">https://www.kaggle.com/fnands/1-mpnn</a>. It was a nice experience reading the GNN paper and understanding the code of the kernel. The collaborative nature of kaggle is just amazing. One can learn so much if only one is willing to look around and work. In my case, memory was the main issue with the GNN. My features were not fitting into memory. I changed all features to float16 and selected the top few features. 
Then I started getting somewhere. 5-fold CV helped a lot to remove the jitter from the predictions. I did about 40-50 epochs in each fold with hand-curated learning rate scheduling. On average, starting from 1e-3, I reduced the LR every 8 iterations by a factor of 0.5. </p> <p>Looking back, I loved this journey and will definitely participate in more kaggle competitions.</p> <h1>Technical Takeaways from the competition</h1> <ol> <li>I learnt to use <strong>numpy</strong> much better and learnt to avoid pandas <strong>groupby, stack, unstack</strong> operations as much as possible. I was able to reduce my feature generation time by about 10-50 times in different cases. When you think about it, you will be surprised how much can be done using matrix multiplications. <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/gnn_common_utils_molecule_properties.py#L75">Distances</a>, <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/numpy_angle_computation.py">angles</a>, aggregated features for neighbors and much more. It started when I looked at this amazing kernel <a href="https://www.kaggle.com/cpmpml/ultra-fast-distance-matrix-computation">https://www.kaggle.com/cpmpml/ultra-fast-distance-matrix-computation</a> </li> <li>I had a few redundant features, due to which unnecessary time was spent on their computation, the codebase got quite big and I was running into memory issues. In future, I'll try to <strong>add one feature at a time</strong></li> <li>By looking at the brief summaries of the top solutions, I see that there was <strong>much less feature engineering</strong> for the neural network based solutions. In future, I'll start on NN based models quite early in the competition.</li> <li>It is worth the time to do KFold CV. Results improve significantly.</li> </ol> <p>It was a good competition. The experience taught me so many things and I'm grateful to kaggle and the competition organizers for this. I'll update this with notebooks soon.</p>
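<p>As a small illustration of the electronegativity-based partial charge estimate described in the features section above, here is a numpy sketch following the later, non-vector variant (the function and the toy geometry are hypothetical):</p>

```python
import numpy as np

# Pauling electronegativities for the elements present in this dataset
ELECTRONEG = {'H': 2.20, 'C': 2.55, 'N': 3.04, 'O': 3.44, 'F': 3.98}

def partial_charge_estimate(symbols, xyz, bonds):
    """For each atom, sum the electronegativity differences to its bonded
    neighbors, normalized by the bond distance, as a crude partial-charge proxy."""
    charge = np.zeros(len(symbols))
    for i, j in bonds:
        d = np.linalg.norm(xyz[i] - xyz[j])
        diff = (ELECTRONEG[symbols[j]] - ELECTRONEG[symbols[i]]) / d
        charge[i] += diff    # atom i shifts positive if neighbor j is more electronegative
        charge[j] -= diff
    return charge

# toy example: a C-O-H fragment with rough bond lengths in angstroms
symbols = ['C', 'O', 'H']
xyz = np.array([[0.0, 0.0, 0.0], [1.43, 0.0, 0.0], [1.80, 0.90, 0.0]])
print(partial_charge_estimate(symbols, xyz, bonds=[(0, 1), (1, 2)]))
```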
Predicting Molecular Properties
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Zhenlan. I only used a hidden coupling edge state in the MPNN model. So I have only looked at applying multi-head attention layers on atom/node states. </p> <p>Thanks! 1. Similarly to the hidden states for atoms, I introduced a hidden state for the edges between atoms for which we had to predict the scalar coupling constant. In the MPNN model the hidden states for atoms are iteratively updated by means of an update function, and I used a similar mechanism to iteratively update these hidden scalar coupling edge states. 2. I think your understanding of graph distance is correct. It represents the number of atoms on the shortest path from atom_A -&gt; atom_B. I computed it using the RDkit GetDistanceMatrix function (documentation here <a href="http://rdkit.org/docs/source/rdkit.Chem.rdmolops.html">http://rdkit.org/docs/source/rdkit.Chem.rdmolops.html</a>). 3. These are all the scalar coupling edge features I used: 'sc_type', 'eucl_dist', 'eucl_dist_min_atom_radius', 'dist_electroneg_adj', 'normed_dist', 'diangle', 'cos_angle', 'cos_angle_nn0', 'cos_angle_nn1'. The 'sc_type' is one hot encoded. The distance features 'eucl_dist_min_atom_radius' and 'dist_electroneg_adj' were taken from Giba's kernel.</p> <p>First off, I would like to thank Kaggle and the hosts for a well organised competition and for giving us such an interesting scientific problem to work on. I learned a great deal about training and developing neural nets (and a little bit about chemistry ;)). I hope that the final solutions are useful to the organizers as well.</p> <p>That being said, I'm excited to share my final model. In short: I used a custom GNN architecture that combines the message passing elements from the MPNN model (<a href="https://arxiv.org/pdf/1704.01212.pdf">https://arxiv.org/pdf/1704.01212.pdf</a>) with the heavy use of multi-head attention layers as in the Transformer Encoder (<a href="https://arxiv.org/pdf/1706.03762.pdf">https://arxiv.org/pdf/1706.03762.pdf</a>). On its own an 8-fold submission of this model scored -3.039 on private LB (single folds scored between -2.83 and -2.88 on their validation sets).</p> <p><em>Journey to the final model:</em> Quite quickly upon entering this competition I found several discussion topics on Graph CNNs, which led me to the MPNN paper. I started implementing this model from scratch in Pytorch and, given initial promising results and later also finding Cheng's MPNN starter kit, I was convinced I was on the right track.</p> <p>As many others have noted, the MPNN model doesn't necessarily work well as a plug and go solution for this problem. But with some customisation, I was able to get an 8-fold MPNN model to produce a -2.873 score on private LB.</p> <p>The most important things I changed or added to the MPNN architecture to get there: • Add a separate message passing function for scalar-coupling atom pairs. • Incorporate additional edge, atom and molecule level features. Most significantly, angle based features for scalar coupling edges (i.e. dihedrals for 3J, cosine angles for 2J and nearest neighbour cosine angles for all types). 
• For the Message Passing Function applied to bond connections, I added an attention mechanism over incoming messages based on cosine angles. • Replace set2set with a Gaussian attention layer based on Euclidean distance. • Introduced a scalar coupling edge state that is updated alongside the atom states. • The head of the model used dense skip connections to the final node states of the update process and the raw features. • The write head also included a residual block that was specialised per scalar coupling type. • Before the final prediction I first predicted each of the four scalar coupling contributions separately. These four terms plus a learned residual are finally added to predict the scalar coupling constant. The four contribution predictions were added to the loss function. • Increased hidden state dimension (up to 300).</p> <p>As the long list suggests, the model grew increasingly complex and the additions started to feel increasingly hacky. Also, I wasn't happy with how the model scaled with hidden state dimension. Anything above 300 no longer fit in the Kaggle kernel GPU memory. This mostly stems from the edge network, whose final layer has a weight parameter whose size grows at a cubic rate (i.e. 300^3 = 27M params). I wasn't able to make any compromise on the flexibility of this edge network without drastically reducing performance.</p> <p><em>Final Model:</em> With this in mind I started thinking of a different architecture where message passing layers and attention layers (which seemed to be the most promising elements from my MPNN) could be neatly stacked. The general architecture of the Transformer Encoder provided the basic building blocks for this: stacked encoder blocks and sublayers connected through residual connections with layer norm. In this case however the encoder blocks are built up of two message passing layers, followed by three different types of multi-head attention layers with a final point-wise feed-forward network.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2862251%2Fbfb9329eff94b50b6d393bed498f6188%2FMolTransformer.png?generation=1567079164494797&amp;alt=media" alt=""></p> <p>The message passing layers (one for bond connections and one for virtual scalar coupling edges) are largely as described above for the MPNN. Note that unlike the attention layers, the message passing layers' parameters are tied across encoder blocks. To allow for a much larger node dimension I changed the full matrix multiplication for message passing into a convolution with a fixed kernel size of 128. In the MPNN model this significantly decreased performance, but for the Transformer model the gain from increased hidden state size easily offset the loss due to less flexible message passing.</p> <p>The three multi-head attention layers are: 1. Euclidean distance based Gaussian attention. Similar to the Gaussian attention module in my MPNN. 2. Graph distance based attention. Computes attention through an embedding of the graph distance matrix. 3. Scaled dot product self-attention. Exactly the same as in the Transformer Encoder.</p> <p>Although the final layers in the block resemble the encoder blocks of the Transformer model, there are several additional layers designed specifically to capture the structure and relationships among atoms in a molecule. The final write head is the same as described for the MPNN.</p> <p><em>Training:</em> Training was facilitated by the fastai library. 
The final model was trained for 100 epochs on all folds using the one_cycle learning rate policy, a max learning rate of 5e-4 and weight decay of 1e-2. I also experimented with snapshot ensembling over an additional 40 epochs using a warm restart learning rate schedule, but the gains from this technique were marginal (~ -0.01). Up to the final submission I had done nearly all training and experimenting on Kaggle kernels. But for the final submission I used a model with a hidden node dimension of 650, 10 encoder blocks and 10 attention heads. This model was pretty big (at least by my standards) and implied I had to do the final training on the GCP platform, using distributed training on 2 V100 GPUs. Total training time on this setup was roughly 1.5 days per fold. My top submission also included the 8-fold submission of the MPNN model. This gave me about -0.05 of a boost.</p> <p>I want to specially thank <a href="/hengck23">@hengck23</a> for sharing the MPNN starter kit, which really helped me with how to set up training and load data for these GNN models, and the people who submitted public kernels. Nearly all my features either came from or were heavily inspired by these kernels.</p> <p>I'll try to get a kernel published in the next few days that trains a smaller version of the transformer model.</p> <p>Thanks for reading and I'll happily answer any questions.</p> <p><em>Update:</em> - github repo: <a href="https://github.com/robinniesert/kaggle-champs">https://github.com/robinniesert/kaggle-champs</a> - kaggle kernel of smaller model w 15 epochs of training: <a href="https://www.kaggle.com/robin111/molecule-transformer-w-message-passing">https://www.kaggle.com/robin111/molecule-transformer-w-message-passing</a></p> <p>Thanks Ahmet! I had some prior experience with smaller neural nets (mostly tabular models), but nothing with models of this sort of size.</p> <p>Thanks Michel. Yes, I plan on cleaning up my code a bit and making the repository public over the weekend.</p> <p>Hi XY, glad to see the kernel is of use to you. <code>angle_in_df</code> and <code>angle_out_df</code> together contain all the angles between the chemical bonds in the molecule and the index of the bond the angle corresponds to. Take for example a molecule with the following bonds: <code>(a0, a1), (a0, a2), (a0, a3), (a1, a4)</code>; then <code>angle_in_df</code> would contain the following angles and angle indices: <code>(angle_201, 0), (angle_201, 1), (angle_301, 2), (angle_014, 3)</code> and <code>angle_out_df</code> would only have <code>(angle_014, 0)</code>. So "angle out" refers to the fact that it contains angles centred around the second atom in a bond pair. Check the github repo for the code used to create these inputs.</p>
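<p>For readers curious what a Euclidean-distance-based Gaussian attention layer might look like, here is a minimal PyTorch sketch (the per-head learned width, the dimensions and the overall parameterization are assumptions, not the author's exact module):</p>

```python
import torch
import torch.nn as nn

class GaussianDistanceAttention(nn.Module):
    """Attention weights decay with the squared Euclidean distance between
    atoms, with one learned length scale per head."""
    def __init__(self, dim=650, heads=10):
        super().__init__()
        self.heads, self.dh = heads, dim // heads
        self.log_sigma = nn.Parameter(torch.zeros(heads))   # learned width per head
        self.value = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x, dist):
        # x: (n_atoms, dim) node states, dist: (n_atoms, n_atoms) Euclidean distances
        n = x.size(0)
        sigma = self.log_sigma.exp().view(self.heads, 1, 1)
        attn = torch.softmax(-dist.unsqueeze(0) ** 2 / (2 * sigma ** 2), dim=-1)  # (heads, n, n)
        v = self.value(x).view(n, self.heads, self.dh).transpose(0, 1)            # (heads, n, dh)
        mixed = torch.einsum('hij,hjd->hid', attn, v)                             # (heads, n, dh)
        return self.out(mixed.transpose(0, 1).reshape(n, -1))                     # (n, dim)

atoms = torch.randn(9, 650)                  # toy molecule with 9 atoms
coords = torch.randn(9, 3)
d = torch.cdist(coords, coords)
print(GaussianDistanceAttention()(atoms, d).shape)   # torch.Size([9, 650])
```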
Predicting Molecular Properties
8th place solution - a densely connected GNN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Good question. I don't see why not. </p> <p>Thanks to the organizers for such an interesting competition. It was a lot of fun exploring the current state of the art of graph neural networks and trying to improve it. I also want to thank <a href="/artgor">@artgor</a>, <a href="/tunguz">@tunguz</a>, <a href="/borisdee">@borisdee</a> and <a href="/psilogram">@psilogram</a> for being amazing team mates. In short, our solution is a custom Graph Neural Network with a SchNet interaction at its core, which alone scores ~ -2.95 on the leaderboard. It is a single model for all bonding types.</p> <p>Since my team already had good engineered features for their lgb (with an LB score of -2.0) when I joined, I could fully concentrate on what I love: finding a strong deep learning based solution. </p> <p><strong>The journey was as follows</strong></p> <p>I started by refactoring Heng's MPNN starter kit (around -1.4 LB) for speed, so I could iterate on ideas faster. After 2 days I got it down from 15 min to 90 sec per epoch on my GTX1080Ti, without losing any accuracy. That enabled me to quickly explore and implement different ideas to improve the MPNN. However, the leaderboard score was still worse than our lgb. </p> <p>So I continued to read a lot of papers related to GNNs:</p> <ul> <li>papers referenced in the forum, </li> <li>newer papers referencing those papers</li> <li>papers and git repos found by stalking top LB positions</li> <li>papers shared by <a href="/artgor">@artgor</a>, who posted new ones in our slack before I had read the previous ones</li> </ul> <p>I implemented most of them in a hacky way to see if they fit this competition. Among others I implemented the papers listed below. Some had usable git repositories; for others I needed to implement them more or less from scratch because they used a different input format or were coded in tensorflow/keras, whereas I had set up everything in pytorch. All of them are meant to predict properties on the molecular level, so I adjusted them where necessary to work on the atom-pair level:</p> <ul> <li><a href="https://arxiv.org/abs/1704.01212">MPNN</a></li> <li><a href="https://github.com/SeongokRyu/Graph-neural-networks/blob/master/tutorials/gat.ipynb">GAT</a></li> <li><a href="https://github.com/txie-93/cgcnn">Crystal GNN</a></li> <li><a href="https://arxiv.org/abs/1706.08566">SchNet</a></li> <li><a href="https://github.com/materialsvirtuallab/megnet">MegNet</a></li> <li><a href="https://github.com/blackmints/3DGCN">3DGNN</a></li> </ul> <p>For me SchNet not only had the best code base but also got decent results and was relatively fast, so I started building on that one. In general the model consists of 3 parts:</p> <ol> <li>encoder for nodes and edges</li> <li>consecutive blocks of interaction between encoded nodes and edges</li> <li>regression head </li> </ol> <p>As mentioned in other solutions, SchNet lacks several things, but the core interaction between atoms and edges is good and fast. 
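<p><em>Illustration (not the team's code):</em> for readers who have not seen SchNet, the core interaction referred to here is a continuous-filter style update in which the edge features generate a filter that modulates the neighbouring atom states before they are summed. A minimal PyTorch sketch, with invented names and shapes, follows.</p>
<pre><code>import torch
import torch.nn as nn

class Interaction(nn.Module):
    """Sketch of a SchNet-style interaction: edge features generate a filter
    that modulates neighbouring atom states, which are then aggregated per atom."""
    def __init__(self, atom_dim, edge_dim):
        super().__init__()
        self.filter_net = nn.Sequential(          # filter generator from edge features
            nn.Linear(edge_dim, atom_dim), nn.Softplus(),
            nn.Linear(atom_dim, atom_dim))
        self.atom_in = nn.Linear(atom_dim, atom_dim)
        self.atom_out = nn.Sequential(
            nn.Linear(atom_dim, atom_dim), nn.Softplus(),
            nn.Linear(atom_dim, atom_dim))

    def forward(self, h, edge_attr, edge_index):
        # h: (n_atoms, atom_dim); edge_attr: (n_edges, edge_dim)
        # edge_index: (2, n_edges) with rows (source, target)
        src, dst = edge_index
        w = self.filter_net(edge_attr)                       # (n_edges, atom_dim)
        msg = self.atom_in(h)[src] * w                       # modulate neighbour states
        agg = torch.zeros_like(h).index_add_(0, dst, msg)    # sum messages per atom
        return h + self.atom_out(agg)                        # residual update
</code></pre>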
So I spent some weeks improving that baseline step by step, adding new things I had read in other papers or found useful during computer vision competitions. Of every 10 ideas I implemented, at most one worked, and it felt like digging for treasure. In the end not much was left of the original SchNet apart from the core interaction function between nodes and edges.</p> <p><strong>The main architectural enhancements were the following</strong></p> <ul> <li>additionally using edge features like bond type and angles</li> <li>additionally updating edges (as in MegNet)</li> <li><a href="https://arxiv.org/pdf/1608.06993.pdf">densenet-like connections</a> between interaction blocks</li> <li><a href="https://arxiv.org/abs/1709.01507">squeeze and excitation block</a> at the end of an interaction block</li> <li>also encoding and using molecule-level features as in Heng's starter</li> <li>using edge-level hidden units for the final regression</li> <li>using molecule-level hidden units for the final regression</li> <li>using a deep architecture (12 interaction blocks)</li> </ul> <p><strong>some details on training</strong></p> <ul> <li>preprocessing: save graph to disk (40 min) </li> <li>bs: 64</li> <li>optimizer: Adam</li> <li>loss: weighted lmae (weights by inverse frequency of coupling type counts)</li> <li>lr_schedule: cyclic cosine annealing with an initial lr of 0.0005 and cycles of 200 epochs, which makes it easy to continue training by just adding a new cycle.</li> <li>validation: groupkfold (10-fold) per molecule</li> <li>epochs: 1200 (and still underfitting)</li> </ul> <p><strong>our final submission:</strong></p> <p>We basically just bagged versions of the above model trained with different hyperparameters, which took some resources. So we split training between <a href="/borisdee">@borisdee</a>, <a href="/tunguz">@tunguz</a> and myself. A version of MegNet scoring around -2.5 LB also made it into the final blend, but with a very minor contribution. <a href="/psilogram">@psilogram</a> then used a simple mae optimizer to blend the different models on the bonding-type level. -&gt; private leaderboard score of -3.001 </p> <p><strong>things that did not work</strong> - additional task of autoencoding edge or node features - metric learning - and thousands of others.</p> <p>Happy to answer any questions and thanks for reading.</p> <p>I must add that a 6-layer version of our model with about half the parameters still scores -2.91 LB and is significantly faster to train. We trained that in kaggle kernels.</p> <p>working on some illustrations:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1424766%2F08edcad3e8ea6a2446a8b8d0c69af278%2FNN%20champs.001.jpeg?generation=1567405111654062&amp;alt=media" alt=""></p> <p>Interaction Block:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1424766%2Fe41816914a4d02650cfe75908d271b28%2FNN%20champs.003.jpeg?generation=1567406675179912&amp;alt=media" alt=""></p> <p>The molecule features add only small value to the lmae (&lt; 0.05). The biggest contributions are </p> <ul> <li>updating the bond hidden state (~ -0.15)</li> <li>adding the last bond hidden state to the regression head (~ -0.15)</li> <li>dense connections between interaction blocks (~ -0.1)</li> <li>squeeze and excitation for the atom hidden state (~ -0.05)</li> </ul>
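<p><em>Illustration (my own sketch, not the team's code):</em> the "squeeze and excitation for the atom hidden state" idea in the list above could look roughly like this when transferred from CNN channels to per-atom features. The pooling per molecule and the reduction factor are assumptions.</p>
<pre><code>import torch
import torch.nn as nn

class AtomSE(nn.Module):
    """Squeeze-and-excitation over the channels of atom hidden states."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(),
            nn.Linear(dim // reduction, dim), nn.Sigmoid())

    def forward(self, h, batch):
        # h: (n_atoms, dim); batch: (n_atoms,) molecule id per atom
        n_mol = int(batch.max()) + 1
        # "squeeze": mean atom state per molecule
        pooled = torch.zeros(n_mol, h.size(1), device=h.device).index_add_(0, batch, h)
        counts = torch.bincount(batch, minlength=n_mol).clamp(min=1).unsqueeze(1)
        pooled = pooled / counts
        # "excite": per-channel gates broadcast back to the atoms of each molecule
        return h * self.gate(pooled)[batch]
</code></pre>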
Predicting Molecular Properties
NN and LGB solution for top 5% (and some cool graphs)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, <a href="/veeralakrishna">@veeralakrishna</a> </p> <p>Hi guys, I created two kernels explaining how we used LGB and NN models to get a silver medal in this competition. I know it is not as cool as all the graph NNs, but it might be useful for someone :)</p> <p>1) <a href="https://www.kaggle.com/felipemello/features-for-top-5-lb-with-nn-or-lgb">Features for top 5% LB with NN or LGB</a> shows how we calculated the features we used. The main one that wasn't shared publicly is the calculation of force resultants along the axis from atom0 to atom1. Features based on this principle were the most important to my model. Here is one:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2689207%2F7af99ef7b86015faae42eca6866673ac%2Fforce_resultant.png?generation=1567102497246402&amp;alt=media" alt=""></p> <p>2) <a href="https://www.kaggle.com/felipemello/nn-and-lgb-tricks-and-pipeline-for-top-5-lb">NN and LGB tricks and pipeline for top 5% LB </a> describes the pipeline used. There were a few tricks that greatly boosted our score, like using <a href="https://www.kaggle.com/speedwagon/permutation-importance">permutation importance</a> to select features and transforming distances for LGB into Yukawa distances (exp(-dist)/dist).</p> <p>We also added some features extracted from a NN layer as features to the LGB, and they look awesome. It always impresses me how beautiful neural networks are. Here is an example:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2689207%2F75927fece7206a6c97c0279d8e938dcb%2Ffeat_nn_6.png?generation=1567102931116829&amp;alt=media" alt=""></p> <p>It took a while to upload everything and detail the code, so if you enjoy reading it, consider upvoting as a reward for my time :)</p> <p>Hope you guys enjoy it. Happy Kaggling.</p> <p>Glad you liked it. To be honest, it was also my first time with permutation importance and feature extraction. Every competition is a great opportunity to learn great new stuff :)</p>
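<p><em>Illustration (my own sketch, not the kernels' code):</em> the Yukawa-style transform mentioned above is just exp(-d)/d applied to each pairwise distance column. The dataframe and column names below are invented for illustration.</p>
<pre><code>import numpy as np
import pandas as pd

def add_yukawa_features(df, dist_cols):
    """Add exp(-d)/d versions of raw distance columns (screened-potential-like decay)."""
    for col in dist_cols:
        d = df[col].to_numpy()
        df[f"yukawa_{col}"] = np.where(d > 0, np.exp(-d) / np.clip(d, 1e-9, None), 0.0)
    return df

# Hypothetical usage with made-up distance columns
df = pd.DataFrame({"dist_0": [1.1, 2.3, 0.0], "dist_1": [1.8, 3.0, 2.2]})
df = add_yukawa_features(df, ["dist_0", "dist_1"])
print(df.filter(like="yukawa"))
</code></pre>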
Predicting Molecular Properties
Newcomer paying attention - 26th place, suggestions?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you.</p> <p>Thank you for your kind words.😄 </p> <p>First of all, <strong>tons of thank you</strong> to the Kaggle community, the Kaggle team and the competition hosts for the event. I am just starting out in the field (doing my bachelors now), so if I made any mistake here or you have any suggestions, please let me know. I took part in the competition to learn and get comfortable with Pytorch with a hands-on approach. I did lots of tensor manipulation. I did not have much expectation (it is intimidating to see so many data science experts, and I also joined the competition late). And then I got a medal... weird.</p> <p><strong>Model details</strong> Too much talk, the real deal is as follows: I trained <strong>one model responsible for calculating J-coupling</strong> (referred to as JC next) <strong>of all types</strong>. The model <strong>produced 8 outputs, one per type, of which 7 were in a don't-care condition for any given input</strong>. I directly optimized the evaluation metric, using it as the loss. Like many teams I used an <strong>attention based NN model</strong> (Fast.ai's multi-head attention from the text module was a very important piece) to generate features, and concatenated them with other engineered features, like those from the 'distance is all you need' kernel. Wikipedia helped me generate a few more features. Overall, it is just a <strong>variant of BERT.</strong> The atoms of the molecule containing the pair whose JC was to be found were fed to the model one by one as input steps, zero-padded to 30 input steps. I added a classification token (well, actually a series of numbers representing a learned token) as the first input step (time step sounded weird and atom step weirder). I did not know what would be a suitable token, so I trained a small NN to produce it from simple features like the input type. Every other input step consisted of features concerning a particular atom. Each input step had a binary feature telling the model whether that atom was one of the pair for which the JC was to be computed. Features were collected from the attention network by concatenating the output of the first (special) input step, the maximum and average of the outputs over all input steps, and the outputs of the steps of the atoms whose JC was to be calculated, in total 5x the length of the encoded output of a single input step. These, together with the hand-engineered features (after going through a few residual layers), were passed through another residual layer and then a feed-forward layer to generate the 8 outputs; the whole model was trained with the Adam optimizer using a cyclic learning rate.</p> <p><strong>Training details</strong> At first I trained models on the Kaggle kernel itself, and the best I was able to reach was a -2.015 LB score in those 9 hours, using a batch size of 1024. I trained the final model on GCP (thank you for those free credits) for around <strong>30 hours with a batch size of 2048 on a V100 GPU</strong>; gradients were noisy with smaller batches. I did not do oversampling, the loss function took care of it. But I did scale the embedding gradients based on their frequencies. 
I could not train it any more, the credits were gone, and the competition was ending too. But 30 hours of training gave an LB score of <strong>-2.607 from a single model.</strong> Later, since time was up, I fine-tuned the model for the first three types on a Kaggle kernel by adjusting the loss weights for the respective JC types accordingly. It gave me a boost of around -0.015, not that it would have changed my final rank😁. </p> <p>I learned a lot, read research papers and blogs, and watched Jeremy Howard's lecture videos. It was fun.</p> <p><strong>Again, thanks to everyone.</strong></p> <p>Also, if you have any suggestions, or anything at all, leave a remark.</p> <p>PS: wish me luck for my upcoming mid-semester examinations, I will need it.</p>
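<p><em>Illustration (my own sketch, not the author's code):</em> the "8 outputs with 7 in a don't-care condition" idea, combined with directly optimising the evaluation metric, can be written as a masked loss: keep only the one relevant output per sample and average the log MAE per type over the batch. The function below is a hedged, batch-level approximation of that metric.</p>
<pre><code>import torch

def masked_log_mae(preds, target, type_id, n_types=8, eps=1e-9):
    """preds: (batch, n_types); target, type_id: (batch,).
    Pick the column matching each sample's type, then average log MAE per type."""
    pred = preds.gather(1, type_id.unsqueeze(1)).squeeze(1)   # ignore the 7 don't-care outputs
    abs_err = (pred - target).abs()
    loss, present = 0.0, 0
    for t in range(n_types):
        m = type_id == t
        if m.any():
            loss = loss + torch.log(abs_err[m].mean() + eps)
            present += 1
    return loss / max(present, 1)

# Hypothetical usage inside a training loop
preds = torch.randn(16, 8, requires_grad=True)
target = torch.randn(16)
type_id = torch.randint(0, 8, (16,))
masked_log_mae(preds, target, type_id).backward()
</code></pre>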
Predicting Molecular Properties
Solution 7th (CPMP view)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, and congrats for your GM title, well deserved!</p> <blockquote> <p>If I use fully connected layer, then each atom are dependent.</p> </blockquote> <p>The real issue in that case is that you build a model that depends on the order in which atoms are listed in the molecule. </p> <p>I used the same features as the standalone lgb, plus the NN prediction. I think this works because the features used for lgb are very different from what the NN learned. Think of it as stacking (using the NN prediction as a feature), but also using the original features.</p> <p>I have 2 1080 Ti that were mostly used to train Ahmet's model, and partly to develop mine. Ahmet only has 2 K40. And Outrunner has 2 1080 Ti as well.</p> <p>First of all, I'd like to thank my team mates, without whom I would not have landed where we landed, and would not even have entered the competition. Let me also say that I'm very happy to see Ahmet become competition GM. And Outrunner is only one gold away from it. I'm sure he'll become GM in his next competition. I was very lucky to team with them. It proves 1 + 1 + 1 &gt; 3.</p> <p>I also want to thank <a href="/inversion">@inversion</a>, Kaggle staff in general, and CHAMPS for setting up this very challenging, unflawed contest. It is refreshing to not have any leak. Using test data wasn't really helpful either. And the way <a href="/inversion">@inversion</a> managed the two crises (QM9 use, and the unplanned kernel share) was very effective.</p> <p>I was very reluctant to enter this competition, for two reasons. First, I was convinced that people with domain knowledge would have an unfair advantage. I was right, as the top of the LB shows a high proportion of teams with chemists on board. Second, it was clear from early discussions that graph NNs would be key, and I had absolutely no experience with them. This second reason is a bit weak given most of us here are in the same situation!</p> <p>Then Ahmet asked me if I would join, because we almost teamed in a previous competition and said we would team ASAP. It motivated me to complete an initial effort using lightgbm. I developed a core set of features, then added features specific to each type. The features are based on the graph defined by bonds computed using open babel (thanks <a href="/borisd">@borisd</a> for the starter code). Angles for 2-bond paths, and dihedral angles for 3-bond paths, were key. I then added features based on the number of 1-, 2- and 3-bond paths starting on every atom on the path, using aggregates like number by bond type, angle averages, distance averages, etc. Depending on the type, the number of features was between 500 and 2000. I trained one lgb per type. With this I got to 2.0 on LB with one fold the day I entered. Once trained on all folds, this model got 2.1 on LB. </p> <p>There are a few things I did to improve the model. </p> <p>First, I use a very large number of leaves, 2**10 - 1. I am not sure why this is good, but it was. Other parameter tuning wasn't very important.</p> <p>Second, I duplicated train and test rows with types 2JHH and 3JHH by swapping the two atoms, before feature engineering, because scalar coupling is symmetric. 
The final prediction for one row in the original train or test was the average of the two rows' predictions after expansion. This was mentioned early in the forum I think.</p> <p>Third, for 3Jxx bond types, there is often more than one 3-bond path between the two atoms. Here again I duplicated, using one row for each 3-bond path between the two atoms, then averaging their predictions back for the final prediction. I did not see this discussed anywhere.</p> <p>Row duplication improves the lgb score by about 0.002, not much, but still useful.</p> <p>I then teamed with Ahmet, and given that he had a NN with much better performance than my lgb model I gave up on my base model. I'll let Ahmet describe his NN (edit: the description is available <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106421#611963">here</a>). Let me just say that with some improvements, like using logcosh as the loss and expanding its size, we got to -2.64 on LB using a single 4-fold CV run. I was very happy given this moved us way above where lgb alone could take us. </p> <p>Lgb was still useful, this time as a post-processing step. As I did in a previous competition (Web Traffic Forecasting), I used lgb to predict the residuals of the NN, i.e. predict the difference between the NN prediction and the target. Tuning lgb with huber loss and a decaying learning rate (yes, this common deep learning practice can also be used for lgb) yielded a -0.15 boost on LB. This plus bagging several runs eventually led to a score better than -2.8. </p> <p>To make sure we got a gold medal we knew we would need another strong model. I started developing a graph NN from scratch in keras, and was getting CV of -1.9 and LB -2.08 with just 4 features (atom and bond type, mulliken charge, and dihedral angle for 3-bond types). I stopped working on it when we teamed with Outrunner. Indeed, Outrunner's NN skills clearly outperform mine, by far. He had a set of 13 NN models that yielded almost -2.9 LB once bagged. He further improved them to yield -2.92 or so soon after merging. I will let him describe these models if he wants to. </p> <p>The first thing we did after teaming was to run my lgb postprocessing on his validation prediction. He was using a single train/val split with only 3% or so of validation data. Yet, even with this limited data lgb could extract about 0.025 more. Blending the result with what we had added another 0.03 or so, which moved us better than -2.95. </p> <p>The last progress we made was based on the NN approach used. A lot of people have used graph NNs where one molecule is a sample, and all couplings for that molecule can be predicted together. Outrunner and Ahmet used another way, with one sample per train and test row. They have features that capture the relative position of every atom compared to the two for which the coupling is computed. For instance distances to these two atoms, an angle, and more. But the distance to the two atoms and an angle don't uniquely identify atom positions. There is a circular symmetry. A way to remove this symmetry is to consider other atoms. Ahmet was doing it but not Outrunner. I suggested using middle atoms. For 2JHx, a0 and a1 are both bonded to a middle atom am. For 3JHx, a0 is bonded to am0, which is bonded to am1, which is bonded to a1. Outrunner added the distance to am, or to am0/am1, depending on the type. This boosted performance to where we finally ended. Unfortunately, given we teamed late, he could only retrain 3 of his models with the new feature. Bagging these led to -2.99 LB. 
Running lgb on top of it and blending with the Ahmet-CPMP blend yielded -3.032. I guess we could have landed higher with more models retrained, but we lacked time and/or GPUs.</p> <p>The bottom line is I am happy this very challenging competition ended. I probably should have entered earlier, my first sub is from July 22, only 5 weeks ago. But teaming with two talented people more than made up for the lack of time.</p> <p>Edit: you'll find Ahmet's NN description <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106421#611963">here</a>. Outrunner told me his NN is similar to Ahmet's, hence he won't provide a separate description.</p> <p>Thanks Ahmet. It was great teaming with you and Outrunner.</p> <p>I tried to find it and couldn't. Here is an example.</p> <p>Assume 100 samples with target 0. Assume your current lgb model prediction is 0.5. The gradient of mae is 100, and if you use lr = 0.01 then you get an additional tree that moves the prediction to -0.5. Then you oscillate between 0.5 and -0.5 each time you add a tree. You could choose a smaller learning rate, but then convergence would be extremely slow at the beginning. That's why starting with a large lr then decreasing will help.</p> <p>Someone explained it early in the forum, I don't have the link handy.</p> <p>There is an offline option for xgboost (and maybe lightgbm too).</p> <blockquote> <p>This is like a NN gradient decent!!</p> </blockquote> <p>Indeed, gradient boosted machines like lgb, xgb or catboost perform a gradient descent.</p> <blockquote> <p>How much does dihedral angle for 3 bonds gain?</p> </blockquote> <p>I haven't tested it in the final models, but I think that when I added it, it was at least 0.05</p> <p>Thanks.</p> <blockquote> <p>how and why you apply learning rate decay with LGBM?</p> </blockquote> <p>How: using the <code>learning_rates</code> parameter of lgb.train()</p> <p>Why: because of MAE. lr decay helps convergence.</p>
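<p><em>Illustration (not the team's code):</em> the residual-fitting trick and the decaying learning rate can be sketched as below. The synthetic data and hyperparameters are made up; the write-up mentions the <code>learning_rates</code> argument of <code>lgb.train()</code>, while recent LightGBM versions expose the same behaviour through the <code>reset_parameter</code> callback used here.</p>
<pre><code>import lightgbm as lgb
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                 # stand-in for the engineered features
y = 3 * X[:, 0] + rng.normal(size=1000)         # stand-in for the scalar coupling target
nn_pred = y + rng.normal(scale=0.3, size=1000)  # stand-in for out-of-fold NN predictions

# Fit lgb on the residual of the NN with huber loss and a decaying learning rate
residual = y - nn_pred
params = {"objective": "huber", "learning_rate": 0.1, "num_leaves": 1023, "verbose": -1}
booster = lgb.train(
    params,
    lgb.Dataset(X, label=residual),
    num_boost_round=300,
    callbacks=[lgb.reset_parameter(learning_rate=lambda it: 0.1 * 0.995 ** it)],
)
final_pred = nn_pred + booster.predict(X)       # NN prediction corrected by the residual model
</code></pre>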
Predicting Molecular Properties
chimps solution (#37)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>oh yes, sorry! -2.1. I wish it was -3.1 :P</p> <p>Hi, </p> <p>37ish is not too glamorous but we worked very hard and I wanted to share our approach :) Our solution was based mainly on a GNN which scored -2.317, provided by our teammate <a href="https://www.kaggle.com/jessky">https://www.kaggle.com/jessky</a>, and an lgb ensemble that scored a bit below -2.1. Our best nnet (bagged) scored almost -1.9. For the lgb and nnet models we benefited a lot from OOF predictions of FC (the Fermi contact contribution). We kept adding different predictions and the score improved by ~ -0.4 just from adding those. Besides some ad-hoc FE, we obviously made use of all the amazing distances and calculations available in kernels. Mainly: acsf descriptors and the sergii distances ( <a href="https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481">https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481</a>) </p> <p>Our final ensemble can be seen here: <a href="https://www.kaggle.com/chechir/champs-ensemble">https://www.kaggle.com/chechir/champs-ensemble</a> Thanks to the organisers, and all the awesome kernel authors in this competition!</p>
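<p><em>Illustration (my own sketch, not the team's code):</em> using out-of-fold predictions of a scalar coupling contribution as a feature is standard stacking: predict the contribution with a grouped CV so the feature is leak-free, then append it to the feature matrix. The model choice and toy data below are assumptions.</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold

def oof_fc_feature(X, fc, groups, n_splits=5):
    """Out-of-fold predictions of the fc contribution, usable as a leak-free feature."""
    oof = np.zeros(len(fc))
    for tr, va in GroupKFold(n_splits).split(X, fc, groups):
        model = Ridge().fit(X[tr], fc[tr])
        oof[va] = model.predict(X[va])
    return oof

# Hypothetical usage with random stand-in data (groups = molecule ids)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
fc = 2 * X[:, 0] + rng.normal(size=500)
groups = rng.integers(0, 50, size=500)
X_with_fc = np.column_stack([X, oof_fc_feature(X, fc, groups)])
</code></pre>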
Predicting Red Hat Business Value
#2 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Red Hat Business Value <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Fancy_helloworld;136985] I'd like to ask what is the strategy of combining the three models? average , stacking or something else? <br> [/quote]</p> <p>Simple weighted average of predicted probabilities. The formula is: 0.4*(linear model) + 0.25*(kNN) + 0.25*(public script) + 0.1*0.5.</p> <p>[quote=zhanli;136503] when you mentioned &quot;the predictions (target probabilities) for the group 7&quot;, What is the group 7? From group_1 or activity_category? [/quote]</p> <p>That's group_1. The number 7 is the value of the group_1 feature.</p> <p>[quote=Scirpus;136477] Congrats and thanks for sharing - could you possible tell me the difference in AUC score between 2 and 3 - thanks in advance [/quote]</p> <p>The difference (for the best submission) is approximately 0.002. But this doesn't mean that without 3 my final score would be 0.002 lower. It would be only approximately 0.001 lower due to "fair" overfitting (done unintentionally).</p> <p>The solution consists of three parts.</p> <ol> <li>For the groups present in the training sample I constructed a probabilistic interpolation model. It may be explained by the following illustration. The next plot shows the predictions (target probabilities) for group 7 over the dates. <img src="https://www.kaggle.com/blobs/download/forum-message-attachment-files/4893/f1.gif" alt="enter image description here"></li> </ol> <p>For the dates that occur in the training sample the probability is 1; for other dates the probability decreases.</p> <p>The next plot shows the same predictions, but for the whole date range. <img src="https://www.kaggle.com/blobs/download/forum-message-attachment-files/4894/f2.gif" alt="enter image description here"></p> <ol start="2"> <li>We have 34224 different groups in the data. This number is the actual size of the sample, because the groups are really the objects from a statistical point of view.</li> </ol> <p>The only problem is that the features take different values inside the groups. For each group and for each feature I calculated a histogram. The bins of the histograms are the new features. This may be called a "fuzzy" version of binary encoding.</p> <p>The prediction is a mixture of three models:</p> <p>A) logistic regression;</p> <p>B) kNN;</p> <p>C) XGBoost based public scripts.</p> <ol start="3"> <li>The last part is the leaderboard feedback.</li> </ol> <p>[quote=zhanli;136509] One more question for the second graph: Is the second graph based on the training data in group_1= 7 ? [/quote]</p> <p>Both graphs show the same curve, but at different scales. The plots have the same meaning as in <a href="https://www.kaggle.com/dmi3kno/predicting-red-hat-business-value/redhat-hack-in-plain-english-eda/discussion">this topic</a>.</p>
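<p><em>Illustration (my own sketch, not the author's code):</em> the per-group histograms can be built with a normalized crosstab; each bin (one per attribute value) becomes a feature of the group. The column names and toy table are invented.</p>
<pre><code>import pandas as pd

# Toy activity table: one row per activity, with its group and one categorical feature
acts = pd.DataFrame({
    "group_1": [7, 7, 7, 9, 9],
    "char_1":  ["a", "a", "b", "b", "c"],
})

# For each group, the histogram of char_1 values; the bins become features of the group
hist = (pd.crosstab(acts["group_1"], acts["char_1"], normalize="index")
          .add_prefix("char_1_bin_"))
print(hist)
#          char_1_bin_a  char_1_bin_b  char_1_bin_c
# group_1
# 7            0.666667      0.333333      0.000000
# 9            0.000000      0.500000      0.500000
</code></pre>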
Predicting Red Hat Business Value
#3 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Red Hat Business Value <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I'd like to thank Loiso for the LB 0.987, group_1 and date trick kernel, and raddar for the 0.98 xgboost on sparse matrix kernel, as these sparked my interest in the unique challenges of this competition. My model was just a simple xgboost, but it was in the data where I discovered some insights that helped push me to 3rd place. Here are my insights:</p> <ol> <li>Expanded upon the group_1 and date trick. I added more categories to Loiso's script. Because AUC only cares about the order of predictions, the actual numbers I assign are arbitrary as long as they are in the correct order. I used numbers closer to 0 or 1 when I was more confident.</li> </ol> <p>0/1 - Assign a perfect score only if the group_1 and date were the same between train and test. Based on the training data, it's impossible for the outcome to vary within a group and date combination.</p> <p>0.05/0.95 - Used when the train dates surrounding the test example were either both 1 or both 0. Although rare, there is some slight chance that the outcome could change twice, causing a wrong answer.</p> <p>0.1/0.9 - Used when the test example is before or after all train examples. This is less confident than 0.05/0.95 because only 1 outcome change is required for a wrong answer.</p> <p>0.025/0.975 - Same as 0.05/0.95 except the outcome already changed twice elsewhere in the group. It's unlikely that an outcome changes more than twice.</p> <p>0.075/0.925 - Same as 0.1/0.9 except the outcome already changed twice elsewhere in the group. It's unlikely that an outcome changes more than twice.</p> <p>0.5 - Used when the train dates surrounding the test example do not agree. These are the hardest to predict because the outcome could easily go either way. Luckily, these aren't very common.</p> <p>0.497 - Used for the test groups that don't exist in the training data. The strange 0.497 number was left over from an old process for comparing predictions to the 0.5 group.</p> <ol start="2"> <li><p>Originally I was weighting the above scores with an xgboost prediction, but it wasn't until I switched the process around that I really started to get competitive submissions. I mimicked what the leak would have looked like on the training data by looping through every person in the training data and finding what that person's above score would have been had they been in the testing data. I then used this score as a predictor in my xgboost model.</p></li> <li><p>char_10 is a tricky variable to use as one-hot encoding because it has so many levels. I found the leave-one-out technique to be better, but there was a risk of overfitting. I did a modified leave-one-out technique by leaving out a whole person and calculating the average outcome for the rest of the people in each char_10. This ended up being a pretty good predictor.</p></li> <li><p>I found group_1 to be a better predictor when left as continuous. 
Low values of group_1 are more likely to be 1.</p></li> </ol> <p>(Attached figure: Group by Outcome.png)</p> <ol start="5"> <li><p>My model was at the activity level, but activities within the same group are likely to have the same outcome. I found an adjustment after making my predictions to address this, and it helped improve my score. I found the most extreme prediction (farthest from 0.5) within a group. I then gave 90% weight to this prediction at the group level and 10% weight to the raw prediction at the activity level.</p></li> <li><p>And yes, I did the rule-based overrides to correct for ML model shortcomings that raddar mentioned doing in this post: <a href="https://www.kaggle.com/c/predicting-red-hat-business-value/forums/t/23786/long-story-of-1-solution">https://www.kaggle.com/c/predicting-red-hat-business-value/forums/t/23786/long-story-of-1-solution</a></p></li> </ol> <p>[quote=Sameh Faidi;136318]</p> <p>Can you explain this point more?</p> <ol start="2"> <li>Originally I was weighting the above scores with an xgboost prediction, but it wasn&#8217;t until I switched the process around that I really started to get competitive submissions. I mimicked what the leak would have looked like on the training data by looping thru every person in training data and finding what that person&#8217;s above score would be had they been in the testing data. I then used this score as a predictor in my xgboost mode</li> </ol> <p>[/quote]</p> <p>Let's say this example is from the same group in the training data:</p> <pre><code>people_id  date        outcome
1          2022-01-01  1
1          2022-01-02  1
2          2022-01-03  1
1          2022-01-04  1
2          2022-01-05  0
</code></pre> <p>I calculate what the #1 result would be if I didn't know what the person 1 outcome was, but knew what the person 2 outcome was. I then did the same thing if I didn't know the person 2 outcome but knew the person 1 outcome. If you do the exercise you should get this: 0.9, 0.9, 0.95, 0.5, 0.9. This is then used as a feature in my model.</p> <p>If the group and date combination is the same, I assign 0 if the training outcome is 0 and 1 if the training outcome is 1. For the other cases, when I wrote x/y, the first value is what to do when the training outcome is 0 and the second is what to do when the training outcome is 1.</p>
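<p><em>Illustration (my own sketch, not the author's code):</em> the modified leave-one-out encoding of char_10 (leave out the whole person, then average the outcome of everyone else with the same char_10 value) can be written with two groupbys. The column names are the competition's, the helper name is invented, and the degenerate case where a person is the only one with a value simply falls back to 0 here.</p>
<pre><code>import pandas as pd

def leave_person_out_mean(df, cat="char_10", person="people_id", target="outcome"):
    """Mean outcome of each category value, excluding every row of the current person."""
    tot = df.groupby(cat)[target].agg(["sum", "count"]).rename(
        columns={"sum": "t_sum", "count": "t_cnt"})
    per = df.groupby([cat, person])[target].agg(["sum", "count"]).rename(
        columns={"sum": "p_sum", "count": "p_cnt"})
    m = df.join(tot, on=cat).join(per, on=[cat, person])
    return (m["t_sum"] - m["p_sum"]) / (m["t_cnt"] - m["p_cnt"]).clip(lower=1)

# Toy example: two people share char_10 == "x"
df = pd.DataFrame({
    "people_id": ["p1", "p1", "p2", "p3"],
    "char_10":   ["x",  "x",  "x",  "y"],
    "outcome":   [1,    1,    0,    1],
})
df["char_10_lpo"] = leave_person_out_mean(df)
print(df)
</code></pre>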
Predicting Red Hat Business Value
Competition thoughts/source code (#19)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Red Hat Business Value <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I experimented with an external stacker on top of the (unstacked) XGB model and my group code - turns out I probably made the right call, on the third post-deadline submission attempt I got to .993029/.993169 - slightly lower but still good for 19th. </p> <p>(I didn't make any actual model changes, since I wanted the submissions to reflect what I would have gotten if I went that path yesterday.)</p> <p>(Source up as ps-stacker.ipynb and ps-xgb.ipynb anyhow)</p> <p>If 17304 isn't removed, it can cause problems with CV.</p> <p>I made people_per_group a feature, and the key thing was that it made my score jump <em>a lot</em> immediately... ;)</p> <p>...and I used XGB's feature importance feature and some other code that didn't get uploaded to github because I only had it in a branch I didn't submit... perhaps this weekend I'll break that out separately, it's quite useful.</p> <p>Thanks :) It's a bit of a misnomer, basically a second layer to the leak where groups with all one value in the train set had no dates before or after left uncovered in the test set - and hence were assigned all 0's or 1's by the leak processing code. So those groups and everyone in them could be completely ignored.</p> <p>The biggest thing was that one big group with 30% of the training data, all 0's... enough to skew the mean outcome by ~17%!</p> <p>I only worked on this for two weeks - starting after the Talking Data leak/messup. I wound up building on what I learned about Pandas from DuneDweller's script, and was able to get to the top 10 with a week to go... and then I sputtered, possibly because of a (viral?) thing that sapped my thinking last week. Still my first silver medal in a while - and tied with Springleaf for my best score.</p> <p>The leak was quite interesting and reshaped the data set a lot. But even if a time split had been used, there still would have been a lot of quasi-leaks, as there would still have been many all 0/1 groups in the test set. It's not like TalkingData where one looks at the leak and thinks "what so-called randomizer did they use?" - it evolved out of the dataset itself.</p> <p>The main ideas were to cross-verify the leak and then, after I had some reproducibility issues, to create a stratified split on people_id. With those I was able to get to .9927xx... and not much further.</p> <p>Interestingly, one group (17304) contains 30% of the dataset and is totally 0 outcome. I think it's a group of random non-customers on the Internet or something.</p> <p>group_1 itself was a perfectly valid feature, from my CV and XGB analysis. There was some overfitting on it, but nothing major.</p> <p>edit 21 Sep: I forgot to mention that a key feature (worth ~.0005) was the # of people per group. xgb heavily and accurately keyed in on single-person groups.</p> <p>Also, there are tons of duplicates; they tended to be 0 outcome, but I was unable to implement that in a model in time. (I bet that was used by some of the top 10.)</p> <p>My last idea (which I started with less than a day to go!) 
was to build a stackable model that determined whether a group was likely to be all 0's, 1's, or mixed. I got it down to a multiclass logloss of .735, and it was a very powerful feature - in fact I fuzzed and binned it in the final XGB model that used it! I was able to get from .992914 to .993118 using it. Trying to do that with only three submissions left was very nerve-wracking, but my last two got progressively better at least.</p> <p>And my very final thing before the competition deadline was to go through and copy and very slightly (but not enough!) edit the actual jupyter notebooks I used. I ran them all under kaggle's Python Docker container on three different machines (i7 2600, dual w5590, and dual e5605).</p> <p>No, one really <em>didn't</em> need all that aging-yet-still-very-useful hardware for this competition, but I found that xgboost works better with sparse models... but is <em>very</em> variable. So the copy that ran on the 5605 produced a better model, and I submitted that.</p> <p>And I got bit by the xgb-liking-sparse-matrices thing badly - an entire line of models I was working on was slightly worse on the LB, even after feature engineering. Between that and my (viral?) thing, that got me stuck for a bit...</p> <p>Potentially embarrassing code quality aside (I find Jupyter notebooks the equivalent of working on a breadboard), you can look at the notebooks at <a href="https://github.com/happycube/chadslab/tree/master/kaggle/redhat">my github</a> now. There's some nifty use of Pandas, I think.</p> <p>Now... when's the next xgb/pandas-friendly competition? ;) Guess it's time to learn streaming methods...</p> <hr> <p>PS... Congrats to all the winners! I'm looking forward to hearing how raddar got to .995!</p> <p>PS2 - Getting Bosch's data into Pandas is much easier than I expected ;)</p> <p>You're (all) welcome, and thanks!</p> <p>I discarded all the larger 0- and 1-only groups, then removed duplicates and leaks. Leaks were discarded using data from a 10-fold process, rerunning the leak code on each fold's test set. The end result (especially, ironically, in the version I didn't use in the end) got very close to the test set's leak proportions.</p> <p>The 3-way model is group3d.ipynb (ersatz file-name version numbering, even though I was using a local git repo ;) ) in my source code set. The output of that model is a pickled Pandas dataframe with group_id and the 0/1/both variables, which is merged into the main dataframe in the xgb model and used as features.</p> <p>A neat bit of pandas-fu I ran into the other day is that dictionaries with equal-sized lists/arrays turn into nicely labeled DataFrames without having to massage columns after creation.</p> <p>If you're interested in figuring out Pandas and haven't seen it already, go to DuneDweller's TalkingData script. Something clicked for me after I went through that and understood how it all worked.</p>
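<p><em>Illustration (my own sketch, not the notebook's code):</em> the 0's / 1's / mixed group model is essentially a small stacking step: aggregate to one row per group, predict the three classes out of fold, and merge the class probabilities back onto the activity rows as features. Everything below (aggregations, model, toy data) is invented for illustration.</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Toy activity-level frame: 40 groups of 5 activities each
rng = np.random.default_rng(0)
acts = pd.DataFrame({
    "group_1": np.repeat(np.arange(40), 5),
    "feat": rng.normal(size=200),
    "outcome": np.r_[np.zeros(50, int), np.ones(50, int), np.tile([0, 1, 0, 1, 0], 20)],
})

# One row per group: simple aggregates plus the all-0 / all-1 / mixed label
grp = acts.groupby("group_1").agg(feat_mean=("feat", "mean"),
                                  out_mean=("outcome", "mean")).reset_index()
grp["label"] = np.select([grp["out_mean"] == 0, grp["out_mean"] == 1], [0, 1], default=2)

# Out-of-fold class probabilities become leak-free group-level features
proba = cross_val_predict(LogisticRegression(max_iter=1000),
                          grp[["feat_mean"]], grp["label"], cv=5, method="predict_proba")
grp[["p_all0", "p_all1", "p_mixed"]] = proba

# Merge back to the activity level for the final XGB-style model
acts = acts.merge(grp[["group_1", "p_all0", "p_all1", "p_mixed"]], on="group_1", how="left")
</code></pre>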
Predicting Red Hat Business Value
Congratulation to winners and Solution Sharing.!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Red Hat Business Value <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Willie Liao;136126]</p> <p>The &quot;leakage&quot; turned it into more of a math puzzle and not the usual throw data at xgboost and tune. Kind of refreshing.</p> <p>[/quote]</p> <p>Agreed.! </p> <p>This has been a fun competition with the leakage and the guys at the top of the LB have come up with phenomenal performance.!</p> <p>Congrats @raddar, @Victor, @Joshua Havelka.! I bet you have something exceptional which others like us struggle to find. </p> <p>Please let us know more about your findings. features, algorithms and if possible source codes.!</p> <p>Would love to hear the approaches of others as well.</p> <p>Thanks in advance :)</p>
Predicting Red Hat Business Value
long story of #1 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Red Hat Business Value <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You are right, the people counter normalizer is unnecessary. I'm just a little more comfortable working with fractions and was trying some SVM models, for which such scaling probably is important. Thank you for your remarks :)</p> <p>[quote=Silogram;136184]</p> <p>Congratulations raddar! A well-deserved victory. Just out of curiosity, what was your LB score without 'manual exploitation'? (confession: we tried using that technique at the end too but found only one group where it helped)</p> <p>[/quote]</p> <p>Didn't have much time to check it, but it's about 0.9935 I think</p> <p>[quote=ihadanny;136182]</p> <p>Hey, great job! Can someone please explain this:</p> <blockquote> <p>tf-idf was especially useful (in short, for each varriables' attribute calculation was done: #of people with same attribute in a group/#of people with same attribute in population)</p> </blockquote> <p>If we have a separate variable per attribute value - e.g. a variable for field_char_10_val_777 and another variable field_char_10_val_333, I understand why should we count the number of times each value appears in the group, I don't understand why does it matter to scale down each of them according to the popularity of 777 and 333 in other groups? It would be like multiplying those 2 variables each in a different constant scalar, which doesn't mean anything for xgboost and for random-forest... </p> <p>[/quote]</p> <p>Let me explain how it's done:</p> <p>Let's say you have the variable char_10 and its attribute type_30. This specific type_30 appears in group_1 10 times, and in the overall population type_30 appears 100 times, so you get the ratio 10/100 = 0.1. Now this number itself is very useful information for any model, as it can be compared among different groups, e.g. group_2, which has a ratio of only 0.05. So the model then tries to answer the question "is this attribute's higher ratio, compared to other groups, helpful for predicting the outcome?".</p> <p>Hope it helps.</p> <p>[quote=utility;136176]</p> <p>@raddar, you did an amazing job. You managed to find something that most of participants failed to notice, imho thats the most important thing in ML. Respect.</p> <p>Although there is one question i'm very tempted to ask. If i were you, i would have been prepared for that:) What is the difference between manually setting right answers based on LB feedback and hand-labeling?</p> <p>[/quote]</p> <p>Yes, I agree there is a thin line there. However, as this has been done before in previous competitions, no matter what form it ended up taking, I felt a little bit comfortable about it (while actually feeling sick that I had to do it to win). My overrides, by the way, are not plain 0/1. They have a ranking within themselves too, like 0.939, 0.940, 0.941, 0.942, 
so actually I'm not hand-labeling a specific activity with its outcome, but rather making a customized prediction based on information I had at hand and which was in some form easy to obtain.</p> <p>As I said, I don't feel very good about the overrides, but at one point I felt very sad seeing that the top guys, with probably not as good models, had been doing it for a while.</p> <p>@SK, I mostly agree with your points.</p> <p>In industry it might be common to re-estimate the model every few days/weeks, so if there was no leak present, date features would not have been that serious an issue (still an issue, but at a much smaller scale). So if you call it a signal, it's even better </p> <p>I think the real problem RedHat is facing is how to cluster a new customer with only a few activities which do not follow previous patterns. I am quite certain that the group_1 variable is a snapshot of the current clustering of their customers; so a new customer comes in, does a few activities and is assigned to a cluster where he is the only customer. Now the problem is - does he bring value? What if a certain period has to pass to determine whether a new customer's first activities are valuable or not? I believe my 1st-level models, or versions of them, could partially solve this problem.</p> <p>If I were RedHat I would have made this a time series problem, so that both existing clients, which show a strong signal, and new clients would be evaluated. Anyway, we can only wonder what the actual problem they are facing is :)</p> <p>[quote=Scirpus;136164]</p> <p>Utterly brilliant - I agree with you that it would have been great to produce a production ready model - not only did you win but you also produced the most popular script!</p> <p>[/quote]</p> <p>hehe, the sad thing is that I did not use public script ideas in my models; I just experimented with sparse matrices and wanted to share :)</p> <p>Hello guys,</p> <p>I'm going to write a story of how I did it, because knowing why I did it is hard to tell in a few words :)</p> <p>This competition started as a goal to reach the top 10 for the grandmaster title, and ended up in a stressful race for the top-1 finale. Luckily, I made some good decisions on the last day which guaranteed me the top-1 spot. For me this competition was not something I enjoyed very much, but I am still glad everything worked out in the end. My final model is quite simple, and if no leak was present, it might have been production friendly. First, a proper cross-validation set was very important. Tricks I did to have a representative CV set:</p> <p>1) Remove observations of group_1=17304, both from the train set and the test set; that corresponds to 30% of the training data set, and this group has all outcome=0 (which was used as an override rule when making prediction files)</p> <p>2) Use a distinct operator for group_1's which have 3000+ rows (this step was very important to remove potential auc bias for several of my CV folds)</p> <p>3) Create a random unstratified 5-fold cv set based on the people file </p> <p>My modeling concept was rather simple - reduce the original problem to a few smaller ones, and combine them in a 2nd-level model. 
I have built several models using the principles below:</p> <p>a) Select an activity in each group_1's timeline (I used the first/last activity in a timeline)</p> <p>b) Collect all other activities within the group which have the same outcome label</p> <p>c) Aggregate features - tf-idf was especially useful (in short, for each variable's attribute the following calculation was done: # of people with the same attribute in the group / # of people with the same attribute in the population)</p> <p>d) Add other simple and not so simple features (e.g. group_1 id value, # activities in a group, # people in a group, min/max dates, etc.); I did not use any feature interactions or likelihood features.</p> <p>e) Build a classifier on the dataset - I only used xgboost; I was able to reach ~0.84 AUC (bear in mind, no leakage had been used up to this point!), which in my mind was a fantastic result. If I were RedHat, I would have made this the target to model, but oh, never mind.</p> <p>To do it properly I had to think of a somewhat novel cross-validation approach: as my split was based on people_id, on this aggregated data level my CV had to be based on some kind of aggregated CV split scheme as well. My approach worked well, but along the way I had to build ~15 xgboost models to make it work (this was necessary to make the 2nd-level model work). Such a proper CV scheme was important, but I'm not going into the details of how it works for now. </p> <p>So at this point I had 4 very well performing 1st-level models (2 models which perform very well on the public LB, and 2 similar versions which yield the best CV score in the 2nd-level model). The 2nd-layer model required very careful scripting, as I implemented the leakage solution in a cross-validated way, so that the predictions of outcome changes within a group would be learned in an ML way (the simple rules in public scripts are not that good!). So in fact the 2nd-level model solves 2 problems - predict probabilities imputing observations affected by the leakage, and predict probabilities for observations not affected by the leakage. The model itself is simple, but some smart features were included as well to capture time trends within the group/population.</p> <p>So at this point I had a very decent model, and in the middle of the competition I saw myself in #4 place and the top-3 guys improving their scores day by day. So I put some time into thinking about it &amp; discovered that they had been doing some manual exploitation using the public LB to increase the leakage. As the public/private split was random (it took some time to discover this), one can use hand-crafted submissions to test groups not affected by the leakage, get the auc result for that specific group, and determine what the probable outcome of the whole group is. Do this for as many submissions as you can, taking the largest group_1's, and you might detect some groups that the ML model misclassifies badly; with that in mind, manually create some overrides based on it. So in my submissions I started using rule-based overrides to correct ML model shortcomings.</p> <p>For my final submission I used a simple average of my best performing LB model and the best performing model on CV - this was my last effort to overtake Victor, and to my surprise it provided a very high uplift to my score. I want to thank Victor for not making this easy for me in the last few days, as I got myself a little too relaxed at one point :)</p> <p>p.s. if I had not used any overrides I think I still would have finished within the top 5. 
Sorry to the NoHat team, which seems to have made the best ML model there.</p> <p>A split by people would not have blocked such activities anyway. I started probing using groups which had at least 2 people_ids... after ~10 submissions, that is how I realized it was a random split.</p> <p>I actually asked Kaggle how they feel about probing the leaderboard, and I quote:</p> <p><code>Kaggle has no choice but to allow the act of public leaderboard probing because we have no way to detect it or enforce a rule against it.</code></p> <p>Cheers.</p>
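<p><em>Illustration (my own sketch, not raddar's code):</em> the tf-idf-like ratio described in point c) is easy to reproduce in pandas: count the people with a given attribute value inside the group and divide by the count in the whole population. The toy table and column values are invented.</p>
<pre><code>import pandas as pd

# Toy people table: one row per person with their group and one categorical attribute
people = pd.DataFrame({
    "people_id": ["p1", "p2", "p3", "p4", "p5"],
    "group_1":   ["g1", "g1", "g1", "g2", "g2"],
    "char_10":   ["type_30", "type_30", "type_5", "type_30", "type_5"],
})

# number of people with this attribute value in the group ...
in_group = people.groupby(["group_1", "char_10"])["people_id"].transform("count")
# ... divided by the number of people with this attribute value in the whole population
in_pop = people.groupby("char_10")["people_id"].transform("count")
people["char_10_tfidf"] = in_group / in_pop
print(people)
</code></pre>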
Predicting a Biological Response
The code of my best submission
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting a Biological Response <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to all the participants in this thread for their useful comments. Nevertheless I would like to invite all the participants of the competition to use this thread for posting code and discussing the posted code. It is my opinion that when it comes to scientific programming "the devil is in the details", so even though I appreciate the discussion about methods - as in every thread of the Kaggle competitions - this time I warmly welcome reproducible results, i.e. code that can be run, discussed, dissected and modified by everybody. And even criticised.</p> <p>I understand that there could be different feelings among the participants about publicly posting their own code. But I am sure that many of you like to share code as much as ideas, suggestions, references etc. In the end, is it so different?</p> <p>Here you can find the code of my best submission (17th):</p> <p><a href="https://github.com/emanuele/kaggle_pbr">https://github.com/emanuele/kaggle_pbr</a></p> <p>It is a simple blending of Random Forests, Extremely Randomized Trees and Gradient Boosting. A trick to get a better score was linearly stretching the prediction to fill [0,1]. Unexpectedly it did better than Platt calibration.</p> <p>The code is based on the excellent scikit-learn Python library.</p> <p>I'm publishing my code to invite other participants to do the same.</p>
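<p><em>Illustration (my own sketch, not the repository's code):</em> the "linearly stretching the prediction to fill [0,1]" trick is a min-max rescale of the blended probabilities. The blend weights and random stand-in predictions below are assumptions for illustration only.</p>
<pre><code>import numpy as np

def stretch_to_unit_interval(p):
    """Linearly rescale predictions so the smallest maps to 0 and the largest to 1."""
    p = np.asarray(p, dtype=float)
    lo, hi = p.min(), p.max()
    return (p - lo) / (hi - lo) if hi > lo else p

# Illustrative equal-weight blend of three model outputs (random stand-ins here)
rng = np.random.default_rng(0)
pred_rf, pred_et, pred_gbm = (rng.uniform(0.2, 0.8, 100) for _ in range(3))
blend = (pred_rf + pred_et + pred_gbm) / 3.0
print(stretch_to_unit_interval(blend)[:5])
</code></pre>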
Prostate cANcer graDe Assessment (PANDA) Challenge
12th Place Solution - Overview with code files
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="/mpsampat">@mpsampat</a>!! Do not hesitate to ask any questions about it.</p> <p>Thank you very much <a href="/cdeotte">@cdeotte</a>, it is an honor to read your congratulations here!</p> <p>Congratulations to you and thanks a lot <a href="/micheomaano">@micheomaano</a> for sharing your notebooks during the competition; your code is the soul of the code files that are in this solution.</p> <p>First of all, I would like to thank all the people and organizations that have made this Competition possible. THANK YOU, in capital letters, to all the TEAMS whose dedication and effort will, I hope, contribute to improving the diagnosis of prostate cancer and thereby improve people's lives. Indeed, my most sincere congratulations to the WINNERS.</p> <p>I am very happy, as you can imagine. In a few lines I share with you a quick overview of my time in this Challenge.</p> <h3>Kaggle Learning</h3> <p>Here I want to cover what is usually included in the acknowledgments, but I reserve this special section to highlight the work of those competitors who made my final solution better, either 1) because they brought abilities that were not part of my initial knowledge, or 2) because their contributions improved together with my own experience. I mean, in no order of priority,</p> <ul> <li><strong>(Salman)</strong> <a href="https://www.kaggle.com/micheomaano" target="_blank">@micheomaano</a>:</li> </ul> <ol> <li><a href="https://www.kaggle.com/micheomaano/tf-record-256-256-48" target="_blank">Dataset tf-record-256-56-48</a></li> <li><a href="https://www.kaggle.com/micheomaano/tpu-training-tensorflow-iafoos-method-42x256x256x3" target="_blank">TPU Training Tensorflow Iafoos Method 42x256x256x3</a></li> <li><a href="https://www.kaggle.com/micheomaano/pandas-42x256x256x3-inference" target="_blank">Pandas 42x256x256x3 Inference</a></li> </ol> <ul> <li><strong>(Qishen Ha)</strong> <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a>:</li> </ul> <ol> <li><a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87" target="_blank">Train EfficientNet-B0 w/ 36 tiles_256 [LB0.87]</a></li> <li><a href="https://www.kaggle.com/haqishen/panda-inference-w-36-tiles-256" target="_blank">PANDA Inference w/ 36 tiles_256</a></li> </ol> <ul> <li><strong>(RAHUL SINGH INDA)</strong> <a href="https://www.kaggle.com/rsinda" target="_blank">@rsinda</a>:</li> </ul> <ol> <li><a href="https://www.kaggle.com/rsinda/panda-inference-efficientnet-b1" target="_blank">Panda Inference EfficientNet-b1</a></li> </ol> <ul> <li><strong>(Iafoss)</strong> <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a>: The Best Accelerator in the Competition, ahead of TPUs.</li> </ul> <h3>Submission Notebook</h3> <p>I have shared an original copy of my inference kernel without additional cleaning, as well as a dataset that includes the necessary weights of each of the models used to obtain the final submission,</p> <ul> <li><a href="https://www.kaggle.com/coreacasa/12th-place-solution-quick-save-inference" target="_blank">Quick Save Inference</a></li> <li><a 
href="https://www.kaggle.com/coreacasa/pandaenetb042x256x256x3" target="_blank">Dataset Model Weights for Inference</a></li> </ul> <h3>[TPU] Kaggle/Google(Colaboratory)</h3> <p>For all my trainings I used the free TPU resources offered by Kaggle / Google (Colaboratory). Thank you very much.</p> <h3>Training One: My only approach to validation</h3> <p>Very closed to Salman's training kernel I just re-ran its code to complete cross validation. I ran a fold up to 60 epochs to see the evolution of the loss and the rest down to 40 epochs.</p> <p>Individually the behavior of the folds was more or less similar in final loss values (mse) and in the number of times in which it stopped improving. The issue is that its merge did not improve the individual performance of some of them over LB and their performance was also uneven when they were introduced into an external ensemble.</p> <p>The noise of the labels is a probable cause as already discussed in the discussions or perhaps the sensitivity of the qwk metric to even small variations in mse when its jump to LB.</p> <ul> <li><strong><a href="https://www.kaggle.com/coreacasa/code-base-training-one" target="_blank">code-base-training-one</a></strong> file, training topics:</li> </ul> <p><code>Size Image</code> 256<br> <code>Size Tiles</code> 256<br> <code>Tiles</code> 42<br> <code>Augmentation</code> horizonal p=0.5 and vertical p=0.5 flips<br> <code>Validation</code> StratifiedKFold 5 on isup grade classes<br> <code>Arch</code> EfficientNetB0<br> <code>Convolutional Base's Weight</code> Imagenet trainable<br> <code>On Top</code> GlobalAveragePooling2D, Dropout(0.5), Dense(1024)<br> <code>Output</code> Dense(1) regression objective<br> <code>Loss</code> mean_squared_error<br> <code>Optimizer</code> Adam<br> <code>Leaning Rate</code> 5e-04 init<br> <code>Reduce LR</code> decreasing 0.5 with patience 3 epochs<br> <code>Save</code> weights only with best validation loss epochs<br> <code>Batch Size</code> 64</p> <h3>Training Two: Art(Instinct) Validation</h3> <p>I never tried detecting noisy labels to remove them from training data. In general I am not in favor of losing any existing information, although in principle it could be harmful by elevating the non-regular part of a data generating process. I would rather transform data than remove it.</p> <p>I didn't try either any transformation so I thought about training the models with full dataset in order to prevent the possible existence of more noise in some folds than in others, which probably would be increasing the variability in the inference results.</p> <p>Art Validation appears here and it is when the art of the data scientist enters and it is his instinct that determines the goodness of fit and stability of performance in generalization against new observations. Yes, this is Alchemy.</p> <ul> <li><strong><a href="https://www.kaggle.com/coreacasa/code-base-training-two-enets" target="_blank">code-base-training-two-enets</a></strong> file, from which I trained 3 members of the EfficientNet family. 
Changes on training one training topics:</li> </ul> <p><code>Tiles</code> 48<br> <code>Validation</code> Art Validation on instinct<br> <code>Arch</code> EfficientNetB0, EfficientNetB1 and EfficientNetB2<br> <code>Convolutional Base's Weight</code> Noisy Student trainable<br> <code>Output</code> Dense(5,activation='sigmoid) ordinal regression objective<br> <code>Loss</code> sigmoid_cross_entropy_with_logits<br> <code>Leaning Rate</code> custom with 5up, 3sustain, 0.8decay<br> <code>Limits LR</code> 1e-05min, 4e-04max<br> <code>Save</code> weights only with best loss epochs<br> <code>Batch Size</code> 32<br> <code>Epochs</code> 60</p> <ul> <li><strong><a href="https://www.kaggle.com/coreacasa/code-base-training-two-densenet" target="_blank">code-base-training-two-densenet</a></strong> file, from which I trained 1 member of the DenseNet family. Changes on training topics of the previous net family:</li> </ul> <p><code>Arch</code> Densenet121<br> <code>Convolutional Base's Weight</code> Imagenet trainable<br> <code>Epochs</code> 40</p> <h3>Inference: Diversity of Archs, nTiles and TTAs</h3> <p>Of the 2 training processes shown above, the following models were available,</p> <ol> <li>EfficientNetB0 (5 skf), 42x256x256x3</li> <li>EfficientNetB0 (1), 48x256x256x3 </li> <li>EfficientNetB1 (1), 48x256x256x3 </li> <li>EfficientNetB2 (1), 48x256x256x3 </li> <li>DenseNet121 (1), 48x256x256x3 </li> </ol> <p>Having re-run the Salman kernel, from the public notebooks referenced at the beginning I had,</p> <ol> <li>EfficientNetB0 (1 skf), 36x256x256x3 (Qishen Ha) </li> <li>EfficientNetB1 (1 skf), 36x256x256x3 (RAHUL SINGH INDA)</li> </ol> <ul> <li><p><strong>Test Time Augmentation</strong><br> <code>Type A: 5xTTA deterministic</code> <br> 1xoriginal, 1xTranspose, 1xVerticalFlip, 1xHorizontalFlip, 1xTranspose-&gt;VerticalFlip-&gt;HorizontalFlip<br> <code>Type B: 4xTTA pseudo deterministic</code> <br> 1xoriginal, 1xVerticalFlip, 2xHorizontalFlip(p=0.5)-&gt;VerticalFlip(p=0.5)<br> <code>Type C: 2xTTA random </code> <br> 2xHorizontalFlip(p=0.5)-&gt;VerticalFlip(p=0)</p></li> <li><p><strong>White Padding Tile Extraction (Qishen modes)</strong><br> 1x add zero pad and 1x add 256 pad, that is, 2 different extractions for ALL the images.</p></li> </ul> <h3>Model Selection and Final Ensemble</h3> <pre><code>(3/10)*Public-Quishen [TTA Type A] (3/10)*Public-RAHUL SINGH INDA [TTA Type A] (1/30)*EfficientNetB0-Fold0-Training One [TTA Type C] (1/30)*EfficientNetB0-Fold2-Training One [TTA Type C] (1/30)*EfficientNetB0-Fold4-Training One [TTA Type C] (1/15)*EfficientNetB0-Training Two [TTA Type C] (1/15)*EfficientNetB1-Training Two [TTA Type C] (1/15)*EfficientNetB2-Training Two [TTA Type C] (1/10)*DenseNet121-Training Two [TTA Type B] </code></pre> <p>The random component of the TTAs was not seed (I'll be lucky) and the reproducibility of the results may vary with it. I have just re-run my inference kernel and the results are Private Score 0.92983 (0.92960 original) and Public Score 0.89443 (089352 original).</p> <p>With this models structure I was only able to test the last day of the competition. 
For example, this other ensemble got Private Score 0.93047 and Public Score 0.88889, with no random component in the TTA.</p> <pre><code>(3.5/10)*Public-Quishen [TTA Type A] (3.5/10)*Public-RAHUL SINGH INDA [TTA Type A] (1/15)*EfficientNetB0-Training Two [TTA Type A] (1/15)*EfficientNetB1-Training Two [TTA Type A] (1/15)*EfficientNetB2-Training Two [TTA Type A] (1/10)*DenseNet121-Training Two [TTA Type A] </code></pre> <p>One more: my last submission, which finished just after the deadline, got Private Score 0.93052 and Public Score 0.89110,</p> <pre><code>(3.5/10)*Public-Quishen [TTA Type A] (3.5/10)*Public-RAHUL SINGH INDA [TTA Type A] (1/30)*EfficientNetB0-Fold0-Training One [TTA Type C] (1/30)*EfficientNetB0-Fold2-Training One [TTA Type C] (1/30)*EfficientNetB0-Fold4-Training One [TTA Type C] (1/15)*EfficientNetB0-Training Two [TTA Type C] (1/15)*EfficientNetB1-Training Two [TTA Type C] (1/15)*EfficientNetB2-Training Two [TTA Type C] </code></pre> <h3>That is all, thanks a lot!</h3> <p>By the way, I still tremble with fear<br> Update: No longer!</p> <p>There is no math behind it if that is what you mean!</p> <p>Hi <a href="https://www.kaggle.com/gaur128" target="_blank">@gaur128</a>,<br> They are weights for the contribution of each TTA type within a model, and of each model within the final ensemble. They are based on the performance observed on the leaderboard during the competition.</p>
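<p>The "ordinal regression objective" used in Training Two (a Dense(5, sigmoid) output) is worth spelling out. Below is a minimal, framework-agnostic NumPy sketch of how such targets are commonly encoded and decoded; the helper names and the 0.5 decoding threshold are my own illustrative choices, not the author's exact code.</p> <pre><code>import numpy as np

def isup_to_ordinal(grade, num_classes=6):
    # ISUP grade 0-5 becomes 5 cumulative binary targets, e.g. 2 -&gt; [1, 1, 0, 0, 0]
    return (np.arange(num_classes - 1) &lt; grade).astype(np.float32)

def ordinal_to_isup(sigmoid_outputs, threshold=0.5):
    # decode by counting how many bins the model switches on
    return int((sigmoid_outputs &gt; threshold).sum())

print(isup_to_ordinal(2))                                    # [1. 1. 0. 0. 0.]
print(ordinal_to_isup(np.array([0.9, 0.8, 0.7, 0.2, 0.1])))  # 3
</code></pre> <p>Because adjacent grades share most of their cumulative targets, the five sigmoid outputs behave as an ordinal objective rather than a plain multi-class one.</p>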
Prostate cANcer graDe Assessment (PANDA) Challenge
16th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>For one card with mixed precision it was a batch size of at least 3 (64x256x256) for b4, plus 2-8 accumulation steps, so the virtual batch was always around 24</p> <p>Sure</p> <p><strong>Thank you very much to the organizers, the participants and my teammates</strong> <a href="/ryunosukeishizaki">@ryunosukeishizaki</a> <a href="/rinnqd">@rinnqd</a> for such a competition.</p> <p>You could call our solution <strong>"zero public LB to hero private LB"</strong>. On the public LB we could not even get a bronze, while on the private LB we are in the top 20 teams. It is not one lucky submission, because we have a lot of them at this level and the gains are real.</p> <ol> <li><p><strong>Removing noise</strong> (marks, duplicates) based on <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/151323">this discussion by Zac Dannelly</a> and <a href="https://www.kaggle.com/iamleonie/panda-eda-visualizations-suspicious-data">this notebook by Leonie</a>, plus my own manual clean-up</p></li> <li><p>Training efficientnet-b0, b2, b4 and mixnet-xl on <strong>different tile counts</strong>: 36x256x256 (level 1) =&gt; 49x256x256 (level 1) =&gt; 64x256x256 (level 1), without regularization and with high dropout (0.4)</p></li> <li><p><strong>Combining training on the cleaned dataset with training on the raw data</strong></p></li> <li><p>Blending based on local <strong>CV weights</strong>, w = w / np.sum(w) (a short sketch is given at the end of this write-up)</p></li> </ol> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1192776%2Fe5ab54359ade13256e9c62dbfdf5a80f%2F.PNG?generation=1595481733380627&amp;alt=media" alt=""></p> <p>5. <strong>Trust your local CV</strong> and train stable models!</p>
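<p>Here is that sketch of blending with normalised local CV weights (w = w / np.sum(w)). The CV values and predictions are hypothetical placeholders; only the weighting scheme itself comes from the write-up.</p> <pre><code>import numpy as np

# hypothetical local CV (QWK) scores for the four architectures mentioned above
cv = np.array([0.89, 0.90, 0.91, 0.88])     # b0, b2, b4, mixnet-xl
w = cv / np.sum(cv)                         # normalise CV scores into blend weights

# preds[i, j] = continuous ISUP prediction of model i for image j (dummy values)
preds = np.array([[1.2, 4.6, 0.3],
                  [1.0, 4.9, 0.1],
                  [1.4, 4.4, 0.4],
                  [0.9, 5.1, 0.2]])

blend = w @ preds                           # weighted average per image
isup = np.clip(np.rint(blend), 0, 5).astype(int)
print(np.round(w, 3), isup)
</code></pre> <p>With CV scores this close together the weights stay near uniform, so the blend mostly reflects model diversity rather than a strong preference for any single architecture.</p>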
Prostate cANcer graDe Assessment (PANDA) Challenge
16th place solution (all you need to know)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to thank kaggle and the organizers for hosting such interesting competition. And i like to thank my teammates <a href="/mpware">@mpware</a> , <a href="/tikutiku">@tikutiku</a>, <a href="/phoenix9032">@phoenix9032</a> and <a href="/virajbagal">@virajbagal</a> for such an interesting competition journey. And congratulations to all the winners! It's been a great competition, and my team has spend a lot of time in this competition and finally glad to share that all the hard work paid off.</p> <h1>What worked:</h1> <h2>Image Preprocessing</h2> <ul> <li>We experiment with a lot of tiling strategies including publicly shared by <a href="/iafoss">@iafoss</a> and <a href="/akensert">@akensert</a> and end up using both in our training pipeline. (25x256x256) worked best for us.</li> <li>We also randomly replaced white tiles (tiles with color avg &gt; 240 ) with other tiles and it gives boost in our CV.</li> <li><p>Removing duplicates (or adding them to only train set), and removing few noisy images as mentioned <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/151323">here</a> helped a little.</p> <h2>Augmentations</h2></li> <li><p>Our models are trained on a wide range of augmentations. Few of them are shown below: ``` transforms_train = albumentations.Compose([ albumentations.HorizontalFlip(p=0.3), albumentations.VerticalFlip(p=0.3), albumentations.Transpose(p=0.3), albumentations.RandomGridShuffle(), albumentations.OneOf([ albumentations.GaussianBlur(blur_limit=1), albumentations.MotionBlur(), ], p=0.01), ])</p> <h2>use this after initial training for 40 epochs</h2></li> </ul> <p>transforms_train_hard = albumentations.Compose([ albumentations.HorizontalFlip(p=0.5), albumentations.VerticalFlip(p=0.5), albumentations.Transpose(p=0.3), albumentations.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2, rotate_limit=5, border_mode=cv2.BORDER_CONSTANT, p=0.1), albumentations.RandomGridShuffle(), ]) ```</p> <h2>Ensemble</h2> <p>Our final solution is a simple average of the following models. For Karolinska: <code> 1. ResNet34 2. EfficientNet-B0 3. EfficientNet-B1 </code> For Radboud: <code> 1. EfficientNet-B0 2. ResNet34 3. EfficientNet-B1 4. UNet with EfficientNet-B1 backbone, segmentation head + classification head </code></p> <h1>The short story about UNet models (MPWARE speaking):</h1> <p>Main part of the team was working on CNN approaches without masks so I decided to focus only on models involving masks to try to add more diversity to our ensemble. The idea was that segmentation should help the model to discover what does matter for our target. As masks were different between Radboud (0 to 5) and Karolinska (0 to 2), Unet models per provider were trained separately. Input was a 25x128x128 tile-based image from half medium resolution (mask built accordingly). Around 100 WSI had missing masks so it was a chance to create, with some other additional WSI, an hold-out balanced dataset to follow correlation between CV, Hold-Out and LB. If all correlate then our models could be considered stable and safe for private dataset. 
After a few trainings we managed to get CV and hold-out correlated, and we noticed that LB was always better. We also noticed that CV was quite bad on Karolinska and quite good on Radboud. After some investigation and attempts to improve the Karolinska masks, it became obvious that the segmentation approach would not work for Karolinska. However, it had the potential to bring a boost on Radboud, so more models were trained by applying random-density (between 0.2 and 0.9) tile selection to cover as many tile configurations as possible. This procedure provided a nice boost of +0.01 but required around 128 epochs. 4 folds of such UNet/Radboud models were integrated into the ensemble (averaging only their classification heads). Finally, they seem to have contributed to stabilizing our ensemble.</p> <h1>What did not work:</h1> <ol> <li>Removing gray backgrounds from Radboud images.</li> <li>Training models with Multiple Sample Dropout.</li> <li>Using H&amp;E normalizations and augmentations.</li> <li>Relabelling bad labels.</li> <li>Models with a heavy backbone like EfficientNet-B3, B4, B5, etc.</li> <li>Various RegNet-Y and RegNet-X variants, 080 till 32.</li> <li>Adding data_provider as a feature.</li> <li>Adding an extra head to predict <code>isup_grade</code> from <code>gleason_scores</code>.</li> <li>ArcFace loss</li> <li>Multitask learning with more weight on the Gleason-score-based loss</li> </ol>
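<p>The white-tile replacement trick from the preprocessing section (randomly swapping tiles whose colour average is above 240 for other tiles) can be sketched roughly as follows. This is a minimal NumPy version under my own assumptions (uint8 RGB tiles, replacements drawn from the tissue tiles of the same slide, replacement probability 0.5); the team's exact implementation may differ.</p> <pre><code>import numpy as np

def replace_white_tiles(tiles, white_thresh=240, p=0.5, rng=None):
    # tiles: array of shape (n_tiles, H, W, 3), dtype uint8, all from one slide
    rng = rng or np.random.default_rng()
    mean_per_tile = tiles.reshape(len(tiles), -1).mean(axis=1)
    white_idx = np.where(mean_per_tile &gt; white_thresh)[0]
    tissue_idx = np.where(mean_per_tile &lt;= white_thresh)[0]
    if len(tissue_idx) == 0:
        return tiles
    for i in white_idx:
        if rng.random() &lt; p:                 # replace only some of the white tiles
            tiles[i] = tiles[rng.choice(tissue_idx)]   # in-place replacement
    return tiles

# usage: tiles = replace_white_tiles(tiles_from_one_slide)
</code></pre>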
Prostate cANcer graDe Assessment (PANDA) Challenge
18th place solution: DenseNet + RNN based
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="/amshoreline">@amshoreline</a> Sure.</p> <p>The idea is to prevent similar images from being in train and validation set. If such images appear, there will be information leak from training set to validation, and thus validation metrics will be biased (also <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/155954">discussed here</a>).</p> <p>For instance consider these two images: | 6226ebfc1f9b743a8b02db4eb7145738 | 3c659b2837afab3af6b952fcbaa6a515| | --- | --- | | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F615797%2Fdde60e6fe94f5fa52ab84020f32dac4d%2F6226ebfc1f9b743a8b02db4eb7145738.png?generation=1595925176175386&amp;alt=media" alt=""> | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F615797%2F9a2d686c4a616542bdadc7f810c31030%2F3c659b2837afab3af6b952fcbaa6a515.png?generation=1595925212356794&amp;alt=media" alt=""> |</p> <p>My decision was to put similar images like above either in training set or in validation, but prevent images of similar series from getting into the both.</p> <p>I calculated image hashes using <a href="https://pypi.org/project/ImageHash/">ImageHash</a> to be able to evaluate how similar the train images are. Then I calculated distance (hamming distance) between each image pair. And merged the images that are close into the clusters (recursive approach: if any of the images of two clusters have distance smaller than a threshold, these two clusters merge into single one)</p> <p>Then I took "representative" from each of the cluster and did a training/validation k-fold split using these "representatives"</p> <p><a href="/yukkyo">@yukkyo</a> , Thank you!</p> <p>I sorted the tiles by the brightness descending (I worked with negative image: 255 - original Image). The brightest one goes first.</p> <p>But there was an important step before the ordering. I left only those tiles, which had more green than red. This is to filter out pen or marker marks which often look like large white straps in the negative image. If I did not do it, the white pen mark tiles went first in the brightness sorted sequence. And that was an issue.</p> <p>I coerced the training sequence to the needed length. If there were too few tiles originally, I repeated (cycled) the sequence to match the needed length. It there were too many tiles, I trimmed the sequence (discarded the "tail").</p> <p>The training was very sensitive to the input order, when I did order shuffling the model did not train at all. I guess that is because the tiles with relative information could be simply trimmed out for the cases when original sequence was too long, and the training signal was completely missing.</p> <p>First stage (frozen DenseNet backbone with imagenet weights) is for preventing imagenet features from being completely wiped by large error gradient originating from random initialized later layers. Thus transfer learning is utilized.</p> <p>2nd and 3rd are split due to GPU resources limitation. 
I would have used end-to-end training (with the whole network unfrozen and a long sequence for the GRU units) but it did not fit into the GPU memory.</p> <p>Thus I split it. The 2nd phase (whole network unfrozen, shorter sequence) tunes the visual feature extractor (DenseNet weights) for this particular application.</p> <p>The 3rd phase is aimed at tuning the GRU units with long enough sequences. It can also be seen as a variation of <a href="https://arxiv.org/pdf/1706.04983.pdf">FreezeOut</a>.</p> <p>First of all, thanks to the organizers, the Kaggle team and all of the participating kagglers!</p> <p>This is the first time I have placed so high! I'm really glad that all the work I put in over these 3 months was rewarded. And as usual I learned a lot during this challenge!</p> <p>My solution is quite different from the mainstream approach based on concat tile pooling (by Iafoss).</p> <p>It is:</p> <p>0) image similarity clustering via image hashing (splitting clusters into train/val sets, not individual images)<br> 1) rotation of the whole (middle-resolution) image to an arbitrary angle with crop prevention<br> 2) extraction of tissue tiles (256x256)<br> 3) Global Contrast Normalization (across all of the tiles extracted from a single image)<br> 4) DenseNet121 backbone (ImageNet pretrained) -&gt; Dense feature extractor -&gt; 2 GRU layers -&gt; single-head ISUP grade regression (logcosh loss)<br> 5) multiple generations of discarding the "hard or wrongly labelled" images by an MAE &gt; 2.5 threshold<br> 6) 5-fold CV during training, keeping the gleason_score frequencies balanced while splitting train and validation<br> 7) 3-stage training:<br> - backbone frozen, long tile sequence of 64 tiles<br> - all unfrozen, shorter tile sequence of 16 tiles<br> - backbone frozen, long tile sequence of 64 tiles again<br> 8) a short training batch size of 2 (for its regularizing effect)</p>
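<p>The image-hash clustering described at the top of this write-up (pairwise hamming distances, recursively merging anything below a threshold into one cluster) can be sketched as a simple single-linkage clustering with a union-find structure. This assumes the <code>imagehash</code> and <code>Pillow</code> packages; the distance threshold here is an arbitrary placeholder, not the author's value.</p> <pre><code>import imagehash
from PIL import Image

def hash_clusters(image_paths, max_dist=6):
    hashes = [imagehash.phash(Image.open(p)) for p in image_paths]
    parent = list(range(len(image_paths)))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(hashes)):
        for j in range(i + 1, len(hashes)):
            if hashes[i] - hashes[j] &lt;= max_dist:   # hamming distance
                parent[find(i)] = find(j)           # merge the two clusters

    clusters = {}
    for i, p in enumerate(image_paths):
        clusters.setdefault(find(i), []).append(p)
    return list(clusters.values())

# each returned cluster then goes entirely to train or entirely to validation
</code></pre> <p>The pairwise comparison is quadratic in the number of images, which is still manageable at this dataset's size since the hashes themselves are tiny.</p>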
Prostate cANcer graDe Assessment (PANDA) Challenge
1st Place Solution [PND]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="/iafoss">@iafoss</a> I treated 2,121 images as <code>duplicates</code> (script is below). <a href="https://www.kaggle.com/yukkyo/imagehash-to-detect-duplicate-images-and-grouping">https://www.kaggle.com/yukkyo/imagehash-to-detect-duplicate-images-and-grouping</a></p> <p>Of course, this includes False Positive (not true duplicate image) and False Negative (true duplicate image, but I've missed it). And we can change this rate by imghash threshold.</p> <p>If you look at example of above kernel, you can see that there are many False Positive and this threshold value(0.9) seems a bit low. But I chose this value because I wanted to avoid putting the same image in different folds any more than that.</p> <p><a href="/mawanda">@mawanda</a> thx! I only used my own machine (with TitanRTX x 2). Each training requires 1 GPU.</p> <p>However, I don't know about my other teammates' machines.</p> <p><a href="/cateek">@cateek</a> I'll add a few things that I remembered.</p> <p>As you pointed out, the CV was unstable and I was looking at Public LB to make some adjustments. Also at that time I continued to use a light model (EfficientNet-B1) to avoid overfitting to <code>noise that remained in local after denoising</code> and <code>Public LB</code>.</p> <p><a href="/cateek">@cateek</a> thx and congrats 2nd place ! Very interesting. Indeed, my CV was unstable in my case as well.</p> <p>I also considered stain normalization, but I didn't adopt it because I was afraid of test run time.</p> <p><a href="/tattaka">@tattaka</a> thx ! I convert each label value to bin (ex. 2 -&gt; <code>[1, 1, 0, 0, 0]</code>) and using sigmoid for predicting. So there are no negative value on my case.</p> <p><a href="/jakobw">@jakobw</a> thx! This point is very useful. When using this method, I think it's important that how to split Train/Valid. How did you split it?</p> <p><a href="/songwonho">@songwonho</a> thx ! On my single model, before and after removing the noise, the following is what it looks like</p> <ul> <li>before: Public: 0.892, Private: 0.916</li> <li>after: Public: 0.901, Private: 0.932</li> </ul> <p><a href="/piantic">@piantic</a> thx ! I'll see you at another competition soon!</p> <p>Freezing bn is to use the pre-train parameters without updating the BN parameters. If you use PyTorch, you can implement Freeze BN by replacing <code>model.train()</code> to <code>model.eval()</code>. 
(However, be careful when using things like <code>nn.Dropout()</code>)</p> <p>Congratulations to everyone and thanks for the hosts for preparing this competition!</p> <h4>I published <a href="https://docs.google.com/presentation/d/1Ies4vnyVtW5U3XNDr_fom43ZJDIodu1SV6DSK8di6fs/edit?usp=sharing" target="_blank">slide</a>!</h4> <h4>Our code is <a href="https://github.com/kentaroy47/Kaggle-PANDA-1st-place-solution" target="_blank">here</a>!</h4> <h1>Proposed Denoising Method</h1> <p>We're very suprised that we finished 1st, and our simple label-denoising method (suprisingly) boosted up PB.</p> <p>The competition was all about handling noisy labels, so we worked hard on finding good ways to denoising.</p> <p>Here is our simple denoising method by <a href="https://www.kaggle.com/kyoshioka47" target="_blank">@kyoshioka47</a>:</p> <h2>Getting cleaned labels</h2> <ul> <li>train k-folds with effnet-b1 (Almost identical to Qishen's kernel) <ul> <li>Model specifics in fam_taro( <a href="https://www.kaggle.com/yukkyo" target="_blank">@yukkyo</a> ) part</li></ul></li> <li>Predict hold-out sets with the trained model. We get <code>pred</code> with this step.</li> <li>Remove the training data which has a high disparity between ground truth and pred. The filtered labels will be called cleaned labels.</li> </ul> <p>We calculate <code>disparity</code> by the absolute difference of ISUP between GT and pred. Data with disparity larger than 1.6 was simply removed. </p> <p>Here is the psuedo codes. probs_raw is the raw prediciton results (ISUP)</p> <pre><code># Base arutema method def remove_noisy(df, thresh): gap = np.abs(df["isup_grade"] - df["probs_raw"]) df_removed = df[gap &gt; thresh].reset_index(drop=True) df_keep = df[gap &lt;= thresh].reset_index(drop=True) return df_keep, df_removed df_keep, df_remove = remove_noisy(df, thresh=1.6) show_keep_remove(df, df_keep, df_remove) </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2F6f6bbe0dfdb2bd5ba10057a1ba32f040%2Farutema.png?generation=1595474367096439&amp;alt=media" alt=""></p> <h2>Retraining</h2> <p>Retrain model with using the denoised labels. <br> We get CV 0.94 LB 0.90 PB 0.934 with a simple Qishen Eff-b0 model with k-folds.<br> Ensambling with different models further boosted to 1st place.</p> <p>We tried CleanLab too, but that did not perform well in CV/LB so we sticked with this.</p> <h1>1. Our final submission</h1> <ul> <li>Select 1 (public LB 0.910, private LB 0.922)<ul> <li>Resnext50_32x4d(poteman)</li></ul></li> <li>Select 2 (public LB 0.904, private LB 0.940)<ul> <li>Effnet-B0(arutema47) + Effnet-B1(fam_taro)<ul> <li>Simple average (<code>1 : 1</code>)</li></ul></li></ul></li> </ul> <p>Suprisingly, even with several weight patterns, the PB was 0.940.</p> <h1>2. Resnext50_32x4d( <a href="https://www.kaggle.com/poteman" target="_blank">@poteman</a> ), public 0.910, private 0.922</h1> <p>This was our best LB model.</p> <ul> <li>Split kfold: stratified kfold with imghash(threshold 0.90)</li> <li>iafoss tile method<ul> <li>tile size 256, tile num 64</li></ul></li> <li>model:resnext50_32x4d</li> <li>head: 3 * reg_head + 1 * softmax head</li> </ul> <h1>3. 
Effnet-B1(fam_taro), public 0.901, private 0.932</h1> <ul> <li>Split kfold<ul> <li>stratified 5 kfold with gleason-score and imghash similarity (threshold 0.90)<ul> <li>convert <code>negative</code> to <code>0+0</code></li> <li>how to grouping by imghash similarity<ul> <li>This is based on <a href="https://www.kaggle.com/appian" target="_blank">@appian</a> 's kernel<ul> <li><a href="https://www.kaggle.com/appian/panda-imagehash-to-detect-duplicate-images" target="_blank">https://www.kaggle.com/appian/panda-imagehash-to-detect-duplicate-images</a></li></ul></li> <li><a href="https://www.kaggle.com/yukkyo/imagehash-to-detect-duplicate-images-and-grouping" target="_blank">https://www.kaggle.com/yukkyo/imagehash-to-detect-duplicate-images-and-grouping</a></li></ul></li></ul></li> <li>In my opinion, split method is import point for our denoise method.<ul> <li>Because we use prediction of out of fold</li> <li>If you put the duplicate images in a different fold, I don't think denoise will work for them</li></ul></li></ul></li> <li>Data<ul> <li>iafoss tile method</li> <li>tile size 192, tile num 64</li></ul></li> <li>Model: Effnet-B1 + GeM<ul> <li>label: isup-grade and first score of gleason(10 dim bin)</li></ul></li> <li>Make final sub by 3 steps<ul> <li>Local train &amp; predict</li> <li>Remove noisy label<ul> <li>extended <a href="https://www.kaggle.com/kyoshioka47" target="_blank">@kyoshioka47</a> method</li> <li>Change threshold for each isup-grade and data-provider</li></ul></li> <li>Re-train</li></ul></li> <li>Not work for me<ul> <li>Remove noisy by confident-learning</li> <li>Cycle GAN augmentation(karolinska radboud)</li> <li>test with AdaBN &amp; Freezing BN at train</li> <li>CutMix, Mixup (before denoising)</li></ul></li> </ul> <pre><code>def remove_noisy2(df, thresholds): gap = np.abs(df["isup_grade"] - df["probs_raw"]) df_keeps = list() df_removes = list() for label, thresh in enumerate(thresholds): df_tmp = df[df.isup_grade == label].reset_index(drop=True) gap_tmp = gap[df.isup_grade == label].reset_index(drop=True) df_remove_tmp = df_tmp[gap_tmp &gt; thresh].reset_index(drop=True) df_keep_tmp = df_tmp[gap_tmp &lt;= thresh].reset_index(drop=True) df_removes.append(df_remove_tmp) df_keeps.append(df_keep_tmp) df_keep = pd.concat(df_keeps, axis=0) df_removed = pd.concat(df_removes, axis=0) return df_keep, df_removed def remove_noisy3(df, thresholds_rad, thresholds_ka): df_r = df[df.data_provider == "radboud"].reset_index(drop=True) df_k = df[df.data_provider != "radboud"].reset_index(drop=True) dfs = [df_r, df_k] thresholds = [thresholds_rad, thresholds_ka] df_keeps = list() df_removes = list() for df_tmp, thresholds_tmp in zip(dfs, thresholds): df_keep_tmp, df_remove_tmp = remove_noisy2(df_tmp, thresholds_tmp) df_keeps.append(df_keep_tmp) df_removes.append(df_remove_tmp) df_keep = pd.concat(df_keeps, axis=0) df_removed = pd.concat(df_removes, axis=0) return df_keep, df_removed # Change thresh each label each dataprovider thresholds_rad=[1.3, 0.8, 0.8, 0.8, 0.8, 1.3] thresholds_ka=[1.5, 1.0, 1.0, 1.0, 1.0, 1.5] df_keep, df_removed = remove_noisy3(df, thresholds_rad=thresholds_rad, thresholds_ka=thresholds_ka) show_keep_remove(df, df_keep, df_removed) </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2F8f930b3a7b3e30fe877ab049d0ed3b13%2F2020-07-23%2017.14.29.png?generation=1595492134245391&amp;alt=media" alt=""></p> <p><a href="/jakobw">@jakobw</a> If you apply just a stratfied kfold, the duplicate image would go into a 
different fold. In that case I don't think this denoising method is very good for the duplicate images. And I think that affects the Score.</p> <p>We used imghash to put the duplicate images in the same fold. How do you handle duplicate images?</p> <p><a href="/yassinealouini">@yassinealouini</a> thx! This idea came to me by @kentaroy47. However, it's simple, but it came out of many experiments.</p> <p>The point of this idea is that my model may be more accurate than the Original Label.</p> <p>Specifically, prior to this idea I found that updating all of Radboud's labels to my out of fold predictions and re-training them would raise the public LB. This led me to think that the accuracy of the model might be better than Original Label.</p> <p>However, this denoising method has many problems (ex. breaking LocalCV, discarding to hard examples). Be careful if you use this method.</p>
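<p>A small sketch of the splitting principle stressed above: images in the same image-hash group must never be separated across folds, otherwise out-of-fold predictions used for denoising leak information from near-duplicates. This assumes scikit-learn 1.0+ for <code>StratifiedGroupKFold</code> and a precomputed <code>hash_group</code> column (the file name is hypothetical); it illustrates the idea rather than reproducing the team's exact split code.</p> <pre><code>import pandas as pd
from sklearn.model_selection import StratifiedGroupKFold

# df needs: image_id, isup_grade, and hash_group (cluster id from image hashing)
df = pd.read_csv("train_with_hash_groups.csv")   # hypothetical file name

sgkf = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=42)
df["fold"] = -1
for fold, (_, val_idx) in enumerate(
        sgkf.split(df, y=df["isup_grade"], groups=df["hash_group"])):
    df.loc[val_idx, "fold"] = fold

# every image in the same hash_group now shares a fold, so the out-of-fold
# predictions used for label denoising are never taken from a near-duplicate
</code></pre>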
Prostate cANcer graDe Assessment (PANDA) Challenge
25th place solution [Kaggle_gaggle]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all Thank you very much to organizers and thanks to <a href="/kyunghoonhur">@kyunghoonhur</a> for collaborating with me!</p> <p>We are happy to get unexpected medal(Our PB rank is 124th). Individually, this is my first medal on kaggle and this medal made me even more into kaggle.</p> <p>Based on <a href="/haqishen">@haqishen</a> train &amp; inference notebook, we will mention some of things we tried and a comment about how those works affect on our final result.</p> <h2>Tile Size Selection</h2> <ul> <li>256x256x36 tiles : best results</li> <li>128x128x16 tiles : lower result than 256x256x36 tiles. It seemed necessary to raise the image resolution.</li> <li>256x256x16 tiles : better result than 128x128x16 tiles, but not satisfactory.</li> <li>256x256x36 tiles with little white as possible : Since 256x256x36 tiles have lots of white spaces, we tried to remove white spaces based on <a href="/rftexas">@rftexas</a>’s <a href="https://www.kaggle.com/rftexas/better-image-tiles-removing-white-spaces">notebook</a>. It achieves lower train loss than simple 256x256x36 tiles but quadratic weighted kappa score did not improved.</li> </ul> <h2>Augmentation</h2> <p>Several different augmentation were tested (Transpose, VerticalFlip, HorizontalFlip, RandomRotate, Blur, etc), but not much performance improvement was seen. Just taking basic augmenation configuration based on <a href="/haqishen">@haqishen</a> ’s notebook. Albumentation library</p> <blockquote> <p>Transpose(p=0.5) VerticalFlip(p=0.5) HorizontalFlip(p=0.5) All the augmentation were made at 2 levels: tile level + after the tile concatenated</p> </blockquote> <h2>Model</h2> <p>Similar to other competition (Deep learning for image classification), the most popular model architecture (Resnet, efficientnet) we tried. Among many several Resnet model structure, SE_Resnext50 was shown the highest score (except more than 50 model because our GPU limitation). Efficientnet showed stable and high score at CV. We couldn't get high level of efficientnet model due to our GPU unfortunately , but some discussion let us know that deep and heavy size model will lead to overfit (Effnet b6) So we focus on Efficientnet B0 and B1, between them not much difference shown.</p> <h2>Optimizer &amp; schedular</h2> <p>Adam optimzer Adam + GradualWarmupScheduler + CosineAnnealingLR</p> <h2>Inference</h2> <p>a) TTA(Test Time Augmentation)</p> <p>Based on tile generation method from Quishen Ha kernel, slight augmentation was added when conducting tile extraction That code is at mode=0 or mode=1 option of PANDA dataset generation class. Difference between mode =0 and mode1 is the sequence of tile into the concatenated input (36 x tile). So, when inferencing model, mode1 tile and mode 2 tile were considered as augmented data for test time augmentation(TTA). Additionally, we added transform augmentation in the same way of train (2 levels, tile + concatenated input). From several experiments, TTA showed quite positive effects on our public score when increasing the number of augmentation data. 
However, considering this competition is a code competition that limits submission time to under 9 hours, we used a moderate number of TTA runs rather than something like 100+ TTA, to avoid exceeding the regular submission time.</p> <blockquote> <p>16TTA(mode=0) + 16TTA(mode=1) Transpose(p=0.5) VerticalFlip(p=0.5) HorizontalFlip(p=0.5)</p> </blockquote> <p>b) Model Ensemble</p> <p>The hardest part of this competition was how to account for overfitting on our training data and how to anticipate the shake-up from the private data. We carefully watched our CV score and LB score and continuously compared them. In the end, from the comparison of CV and LB for each fold, we picked the fold whose CV score was most similar to its LB score.</p> <p>The ensemble of EfficientNet-B0 (fold 0) and EfficientNet-B1 (fold 0 and fold 1) showed the best score on both the public and the final (private) leaderboard.</p>
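<p>The flip/transpose TTA used at inference amounts to averaging predictions over a small set of deterministic transforms of the concatenated tile image. Below is a minimal PyTorch sketch under my own assumptions (a model taking a single (C, H, W) image and returning ordinal sigmoid logits); it is not the team's exact TTA code, which additionally re-orders tiles between mode 0 and mode 1.</p> <pre><code>import torch

def tta_predict(model, image):
    # image: float tensor of shape (C, H, W); transforms act on the H and W axes
    views = [
        image,
        torch.flip(image, dims=[1]),            # vertical flip
        torch.flip(image, dims=[2]),            # horizontal flip
        image.transpose(1, 2),                  # transpose (requires H == W)
    ]
    with torch.no_grad():
        logits = torch.stack([model(v.unsqueeze(0)) for v in views])
    return torch.sigmoid(logits).mean(dim=0)    # average over the TTA views

# isup = tta_predict(model, image).sum()        # ordinal head: sum of sigmoids
</code></pre>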
Prostate cANcer graDe Assessment (PANDA) Challenge
2nd Place Solution [Save the Prostate]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you very much! Yes I am planing to share code :) </p> <p>for me 11 min. For <a href="/rguo97">@rguo97</a> on high resolution 2nd level takes 30–40 min . For rest of the team between 7-20 min. </p> <p>I train first using 49 Tiles. Save The Weights of the model. Change Number of Tiles to 81, Load the saved weights and continue training. Both 49 and 82 Tiles use one cycle learning rate schedule. </p> <p>1) Yes 50, 50 2) <code>nn.MSELoss()</code> </p> <p>First of all, Thank you very much to organizers. Second I would like to thanks my team. We had such a positive, encouraging working environment. Our team contribution generates most of the ideas which you will read below and are shared by members. </p> <h1>Simple Resnet34 (DrHB)</h1> <h1>Image Preprocessing</h1> <p>I used medium resolution, the only preprocessing I did was to remove the white background and store medium resolution on SSD drive: </p> <p>```</p> <h1>function taken from R Guo</h1> <p>def crop_white(image, value: int = 255): assert image.shape[2] == 3 assert image.dtype == np.uint8 ys, = (image.min((1, 2)) &lt; value).nonzero() xs, = (image.min(0).min(1) &lt; value).nonzero() if len(xs) == 0 or len(ys) == 0: return image return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]</p> <p>```</p> <h1>Cleaning data</h1> <p>Like in APTOS competition, it was essential to clean images from pen marks, etc. I have used excellent work from this post: <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/151323">https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/151323</a> This also reduced the gap between CV and LB</p> <h1>Image Augmenatiosn</h1> <p>Augmentation occurred at two levels. (Slide and Tile): </p> <h3>1) Full slide</h3> <p>After the biopsy slide is open, we do random padding and applying one of the following transformations (similar to R Guo). </p> <p><code> def get_transforms_train(): transforms=A.Compose( [ A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.05, rotate_limit=10, border_mode=cv2.BORDER_CONSTANT, p=0.5,value=(255,255,255)), A.OneOf([ A.Flip(p=0.5), A.RandomRotate90(p=0.5), ], p=0.3 ) ] ) return transforms </code></p> <h3>2) Tile</h3> <p>For each tile I used standard fastai <code>GPU</code> augmentations: <code>rotate=(-10, 10</code>),<code>flip vertically (p=0.5)</code>. For the padding I used <code>reflection</code> it gave a slight boost on CV </p> <h1>Model</h1> <p>I decided to use a very simple model resnet34 but throughout competitions ended up doing few modifications </p> <h3>1) Making square features:</h3> <p>The main idea is builds up on <a href="/iafoss">@iafoss</a>. Aftter resnet enccoder we reshape features to look like a square in a following way: <code>x = x.view(x.shape[0], x.shape[1], x.shape[2]//int(np.sqrt(N)), -1)</code> . Here <code>N</code> represents Number of Tiles. After this we pass all the features to <code>SqueezeExcite</code> Block</p> 2) SqueezeExcite block <p>After reshaping features, we added 1 SE block to enable the network to learn features for individual slides based on tiles. 
</p> <p>experiment done by <a href="/cateek">@cateek</a> ```</p> <h1>code adopted</h1> <h1><a href="https://github.com/rwightman/pytorch-image-models/tree/master/timm/models">https://github.com/rwightman/pytorch-image-models/tree/master/timm/models</a></h1> <p>def make_divisible(v, divisor=8, min_value=None): min_value = min_value or divisor new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) # Make sure that round down does not go down by more than 10%. if new_v &lt; 0.9 * v: new_v += divisor return new_v def sigmoid(x, inplace: bool = False): return x.sigmoid_() if inplace else x.sigmoid() class SqueezeExcite(nn.Module): def <strong>init</strong>(self, in_chs, se_ratio=0.25, reduced_base_chs=None, act_layer=nn.ReLU, gate_fn=sigmoid, divisor=1, **_): super(SqueezeExcite, self).<strong>init</strong>() self.gate_fn = gate_fn reduced_chs = make_divisible((reduced_base_chs or in_chs) * se_ratio, divisor) self.avg_pool = nn.AdaptiveAvgPool2d(1) self.conv_reduce = nn.Conv2d(in_chs, reduced_chs, 1, bias=True) self.act1 = act_layer(inplace=True) self.conv_expand = nn.Conv2d(reduced_chs, in_chs, 1, bias=True) def forward(self, x): x_se = self.avg_pool(x) x_se = self.conv_reduce(x_se) x_se = self.act1(x_se) x_se = self.conv_expand(x_se) x = x * self.gate_fn(x_se) return x ```</p> <h3>3) Pooling Layer</h3> <p>Once the feature passed thru SqueezeExcite Layer, I did Normal pooling. Our experiment showed that the batch normalization layer was messing with the last layer's features, so we removed it and saw a slight jump on local cv. </p> <p><code> self.pool = nn.Sequential(AdaptiveConcatPool2d(), Flatten(), nn.Linear(2*nc,512), nn.ReLU(inplace=True), nn.Dropout(0.4), nn.Linear(512,7), </code></p> <h3>4) Final Head</h3> <p>I used two heads. One head was for classification second was for regression. I noticed that training with two looses makes training much smoother (with sigmoid trick below) and yields higher local CV (0.88 -&gt; 0.90). In the final prediction, I use output only for the regression head. </p> <p>One small modification that I did before calculating loss is that the regression head used sigmoid to scale outputs between (-1. 6.). This enables much smoother training without bumps and faster convergence.</p> <p>```</p> <h1>idea taken from fastai</h1> <p>def sigmoid_range(x, low, high): return torch.sigmoid(x) * (high - low) + low ```</p> <h1>Training</h1> <p>I trained in two phases. In the First phase was trained with 49 tiles and later finetuned with 81 tiles. Both phases were using standard one cycle.</p> <h1>Final Model</h1> <p>I trained 5 fold wich resulted on the CV of 0.911 and PB: 0.922.</p> <p>Our Best Ensemble was simple average. Of 4 models. 
</p> <p><code> @drhb resnet34 5 FOLD (CV -0.911) + <a href="/rguo97">@rguo97</a> 5 FOLD (two stage attention model CV 0.92 ) + <a href="/xiejialun">@xiejialun</a> FOLD (EFNET) (CV 0.915-0.917) + <a href="/cateek">@cateek</a> Se 1 FOLD (CV -0.91) Final Standing </code></p> <p><a href="/xiejialun">@xiejialun</a> <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169303">https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169303</a> <a href="/rguo97">@rguo97</a> <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169108#940504">https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169108#940504</a></p> <p><code>LB: 0.914 PB: 0.937</code></p> <p>yes ensembling helped us a lot =) Congratulations on 1st place =)</p> <p>In my case is it was 60=) For other it was between (10-40).</p> <p>Thank you for the questions. If I understand your question correctly, we have not tried to do diffrent seeds me and <a href="/cateek">@cateek</a> had split which we created when we merged and we used this split to run all our experiments... </p> <p>By the way. I just want to say again thank you very much for your wonderful kernels and insight during the competition. </p>
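<p>To make the two-head setup above concrete, here is a rough PyTorch sketch of a shared pooled feature feeding a classification head and a regression head whose output is squashed into (-1, 6) with the sigmoid_range trick. The equal 0.5/0.5 loss weighting echoes the "50, 50" answer near the top of this write-up, but the layer sizes and loss wiring are illustrative assumptions, not a verbatim copy of the author's model.</p> <pre><code>import torch
import torch.nn as nn

def sigmoid_range(x, low, high):
    # squash raw outputs into (low, high), as in the fastai trick mentioned above
    return torch.sigmoid(x) * (high - low) + low

class TwoHead(nn.Module):
    def __init__(self, n_features, n_classes=6):
        super().__init__()
        self.cls_head = nn.Linear(n_features, n_classes)    # classification head
        self.reg_head = nn.Linear(n_features, 1)             # regression head

    def forward(self, pooled):
        cls_logits = self.cls_head(pooled)
        reg = sigmoid_range(self.reg_head(pooled), -1.0, 6.0).squeeze(1)
        return cls_logits, reg

def two_head_loss(cls_logits, reg, isup):
    # equally weighted classification + regression terms (assumed 0.5 / 0.5)
    ce = nn.functional.cross_entropy(cls_logits, isup)
    mse = nn.functional.mse_loss(reg, isup.float())
    return 0.5 * ce + 0.5 * mse
</code></pre> <p>At inference only the regression output is used, as stated in the write-up; the classification head mainly smooths and speeds up training.</p>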
Prostate cANcer graDe Assessment (PANDA) Challenge
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to everyone and to the host of the competition! I am surprised that I got 3rd place on the private leaderboard. I was just lucky 🙂. I stopped improving my solution a month ago, because I had problems with my GPU and didn't want to spend more credits on AWS, since I was not seeing any improvement on LB or CV.</p> <p>I experimented with different networks, my own custom tile cropping and hard augmentations. But the best results came from a simple solution based on <a href="/haqishen">@haqishen</a>'s <a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">kernel</a>. I trained 2 EfficientNet-B0 models and rounded the logits before averaging them (sketched below); this approach gave me 0.880 on public and 0.934 on private.</p> <p>Github with my experiments: <a href="https://github.com/Dipet/kaggle_panda">https://github.com/Dipet/kaggle_panda</a></p>
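<p>Here is that sketch of rounding before averaging, with made-up values: each model's continuous prediction is rounded to an integer grade first, then the rounded predictions are averaged and rounded once more for the final ISUP grade. This is my reading of the one-line description above, not code taken from the linked repository.</p> <pre><code>import numpy as np

pred_a = np.array([1.4, 3.6, 0.2, 4.5])   # model 1 raw regression outputs (dummy)
pred_b = np.array([2.6, 3.4, 0.4, 4.4])   # model 2 raw regression outputs (dummy)

rounded = np.rint([pred_a, pred_b])        # round each model's outputs first
isup = np.clip(np.rint(rounded.mean(axis=0)), 0, 5).astype(int)
print(isup)                                # [2 4 0 4]
</code></pre>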
Prostate cANcer graDe Assessment (PANDA) Challenge
3rd public/20th private solution-segmentation + simple tiles and multiheaded attention
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to the organizers <a href="/wouterbulten">@wouterbulten</a> as i understand it is not easy to collect a dataset like this. Second, thanks to my teammates <a href="/rvslight">@rvslight</a> <a href="/aksell7">@aksell7</a> <a href="/ruozha001">@ruozha001</a> who worked hard with me. Third, congrats to <a href="/iafoss">@iafoss</a> for his solo gold and thanks to him for sharing the incredible tile idea and also all the participants who worked hard on this competition.</p> <p>We suffered in the shakeup, dropping from 3rd to 20th place, but i think our approach is quite interesting, and our selected sub was pretty good and balanced at both public (0.921) and private (0.927) with just 4 models.</p> <p>First I will briefly describe important details of my method using simple tiles which can generate a 0.927 single model single fold private score. My main idea is to keep things simple, apply attention, and use enough augmentation to avoid overfitting to label noise. My pure pytorch code is released on github at <a href="https://github.com/Shujun-He/PANDA">https://github.com/Shujun-He/PANDA</a> (see folder layer1test4maxmeanwuncertainty for the pure pytorch pipeline and I will clearn up and update later). Later, I will detail the segmentation part of our solution. </p> <p>Our best private score (not selected) was achieved by ensembling 5 models (2 simple tiles and 3 segmented tiles) and using median avg (middle 3). Best simple tile (given by iafoss' tile function) setting was 36x256x256, and any number above 36 also works.</p> <h1>Model architecture</h1> <p>Since iafoss released his tile idea, I immediately thought of using attention so the network can learn importance of different tiles and make predictions based on the set of tiles for each WSI. Here sometimes a full blown transformer encoder layer is used and sometimes just nn.MultiheadAttention + Mish activation. Also, resnext50 proved to be much better in this competition. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3355848%2F6845534ba574bd21bfa2006e0f7471b1%2Farch.PNG?generation=1595536578888614&amp;alt=media" alt="">. </p> <p>Mathematically, each tile becomes a feature vector after passed through the backbone, and the transformer encoder layer just operates on the set of feature vectors. 2D positional encoding can be added here but I did not think it was important based on reading about prostate cancer diagnosis. 
Here we usually used model=512 and nhead=8 same as the default setting of original transformer paper.</p> <p>I actually used multitasking learning by adding multiple attention classifiers on top the backbone:</p> <p>```python class MultiheadAttentionClassifier(nn.Module): def <strong>init</strong>(self,num_classes,out_features,ninp,nhead,dropout,attention_dropout=0.1): super(MultiheadAttentionClassifier, self).<strong>init</strong>() self.attention=nn.MultiheadAttention(ninp, nhead, dropout=attention_dropout) self.classifier=nn.Linear(ninp*2,num_classes) self.dropout=nn.Dropout(dropout) self.mish=Mish()</p> <pre><code>def forward(self,x): x=x.permute(1,0,2) x,_=self.attention(x,x,x) x=self.mish(x) x=x.permute(1,0,2) max_x,_=torch.max(x,dim=1) x=torch.cat([torch.mean(x,dim=1),max_x],dim=-1) x=self.dropout(x) x=self.classifier(x) return x </code></pre> <p>```</p> <p>This always resulted in much better CV convergence than just using isup grade, and lb was alway higher than CV so I stuck with the multitasking learning.</p> <h1>Augmentation</h1> <p>Augmentation wise I use cutout (replacing cutout region with just white pixels) 50% of the time and the other 50% I change the gamma. The tiles always have 50% chance of being rotated/ flipped/transposed. In our N=64 runs, I used a new augmentation which I call whiteout, where I simply turn some tiles white so the model can learn to be invariant to white tiles. </p> <p><code>python def whiteout(tensor,n=6): sample_shape=tensor.shape to_drop=np.random.choice(tensor.shape[1],size=n,replace=False) tensor[:,to_drop]=1 return tensor </code></p> <p>Later I found that after whiteout, even when using masked pooling (blocking white tiles), the model gives almost identical results, indicating that our model is invariant to white tiles.</p> <p>```python MultiheadAttentionClassifier with masked pooling and masked attention: class MultiheadAttentionClassifier(nn.Module): def <strong>init</strong>(self,num_classes,out_features,ninp,nhead,dropout,nlayers=1,attention_dropout=0.1): super(MultiheadAttentionClassifier, self).<strong>init</strong>() encoder_layers = nn.TransformerEncoderLayer(ninp, nhead, ninp*2, attention_dropout) self.attention = nn.TransformerEncoder(encoder_layers, nlayers) self.classifier=nn.Linear(ninp*2,num_classes) self.dropout=nn.Dropout(dropout)</p> <pre><code>def forward(self,x,mask): x=self.dropout(x) x=x.permute(1,0,2) src_key_padding_mask=mask==0 x=self.attention(x,src_key_padding_mask=src_key_padding_mask) x=x.permute(1,0,2) max_x,_=torch.max(x+src_key_padding_mask.unsqueeze(-1)*(-1e-9),dim=1) mean_x=torch.sum(x*mask.unsqueeze(-1),dim=1) tile_count=torch.sum(mask,dim=1).unsqueeze(-1) mean_x=mean_x/tile_count x=torch.cat([torch.mean(x,dim=1),max_x],dim=-1) x=self.dropout(x) x=self.classifier(x) return x </code></pre> <p>```</p> <h1>Progressive upsampling</h1> <p>One thing that really sped up my training was the usage of progressive upsampling. Training is usually 45 epochs with first ten epochs on half resolution tiles (downsized with cv2.resize). At 25 and 36 epochs, learning rate is reduced 10 times. This is a cool idea for people with limited computing power and for people who have a lot, it speed up training even more.</p> <h1>Segmentation model</h1> <p>To be updated. But to put it simply, we basically used masks on lowest resolution images to train a segmentation model distinguishing if the particular tile has cancer in it or not. 
Subsequently, tiles were selected according to how likely they were to contain cancer, as judged by the trained segmentation model. This method should be better at predicting classes 2, 3, 4 and 5, which was the case in CV at least. Somehow it did not work so well on the private test set; however, ensembling it with my models that use simple tiles still gave a boost.</p> <p>Combining the segmentation tiles with simple tiles worked well in public and also in private (just not as much as the boost in private given by denoising). Based on LB and CV, we thought that segmentation tiles would perform better on classes 2, 3, 4 and 5 while simple tiles would be better on 0 and 1, so the combination logically made sense.</p> <p>I had some worries that this method might be too biased towards predicting cancer, which is probably the reason it did not work well in private (judging from the single-model scores of the segmentation-tile models). Surprisingly, on the private test set, when we made a mistake and fed simple tiles to a model trained on segmentation tiles, that submission received a higher private LB score than the one using segmentation tiles; of course we did not select it, since we had identified the error.</p> <h1>Conclusion</h1> <p>In the end, we had multiple moments where we had selected a 0.932 run, which would have resulted in a gold medal rather than a high silver. However, we changed it based on reasoning that I still don't think was wrong. So, just unlucky.</p> <p>Looking at the top solutions, I see most of them using some type of denoising method or simply getting lucky with some public kernels. Of course, using a large ensemble (~10 models) helps as well. What is really surprising to me is that denoising did not bring any recognizable improvement on the public LB. I cannot help but think that there is some unintended difference between the public and private test sets, because there is no reason denoising shouldn't also work for the public LB. In fact, I tried to do some denoising, but the results were not convincing and I stopped, which I do not consider a mistake, because there was no way to validate that anyone's denoising method was indeed working properly.</p> <p>Thank you and no problem!</p>
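<p>The "median avg (middle 3)" ensembling mentioned near the top of this write-up is a trimmed mean over the five model predictions: for each image, drop the highest and lowest prediction and average the remaining three. A minimal NumPy sketch with made-up numbers follows.</p> <pre><code>import numpy as np

# preds[i, j] = prediction of model i for image j (5 models, dummy values)
preds = np.array([[1.1, 4.8, 0.2],
                  [1.4, 4.5, 0.0],
                  [0.9, 5.0, 0.3],
                  [1.2, 4.6, 0.1],
                  [2.0, 3.9, 0.2]])

middle3 = np.sort(preds, axis=0)[1:-1]        # drop the per-image min and max
isup = np.clip(np.rint(middle3.mean(axis=0)), 0, 5).astype(int)
print(isup)                                   # [1 5 0]
</code></pre> <p>Compared with a plain average, the trimmed mean is less sensitive to a single outlier model, which is useful when one of the five behaves differently on a given slide.</p>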
Prostate cANcer graDe Assessment (PANDA) Challenge
4th place solution [NS Pathology]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Using hardvoting improved pubLB about 0.004. The effect on privateLB seems to be a bit less, but at least it didn't have a negative effect. I think this is probably because using hardvoting is less susceptible to the negative effects of lower-performing models than using avg.</p> <h2>Robustness strategy for shake</h2> <p>We were afraid to shake in this competition from the beginning. So we were very careful in our choice of methods. First of all, we worked hard to create a good validation, but no matter what we did, we couldn't create a validation that would work with LB. We were able to get our local CV score up to over 0.95. We figured this was due to the fact that there are many similar images in the train data. And there are so many of them that I've given up on removing them. So we decided to trust publicLB. It's easy to create an over-fitted submission to the publicLB by submitting a similar approach over and over again, but that's not the true ability of that approach. We have tried many techniques such as tiling methods, data cleansing, use of masks, per-provider learning, loss, custom architectures, etc., but none of them clearly improved the publicLB. However, I noticed that using a model larger than seresnext50, I was able to consistently exceed 0.90 at publicLB. So we finally thought that we could achieve consistently high scores in public and private LB by using multiple large models. I also noticed that using hardvoting instead of avg at the end can improve the stability of publicLB. This simple idea seemed to be correct. We also took the lessons learned from the APTOS shake down and made sure that the tiles are distributed evenly without tissues being cut off.</p> <p>First of all Thank you very much to organizers and thanks to <a href="/sinpcw">@sinpcw</a> for fighting with me! Our solution is simple. We ended up with the following models in our final ensemble submission.</p> <p>&gt; efficientnet_b5 x 4 &gt; seresnext101-64x4d x2 &gt; seresnext101-32x4d x2 &gt; resnest101e x2 &gt; gem+efficientnet-b3 x 1</p> <p>We used hard voting for the ensemble method, not soft voting. This definitely improved our score! We also used the average if the one getting the most votes in our hard voting did not get more than 1/3 of the total votes. But this method only worked on publicLB.</p> <p>In training, We use the technique of tiling the images in the following links. This allows us to ensure that the tissues are evenly distributed across all tiles.It is also able to perform Data Augmentation by changing the scaling factor. We use 512x512x16 from middle layer. We've also tried using 1024x1024x16 from highest resolution layer, but there was no improvement. <a href="https://www.kaggle.com/hirune924/image-loader-test">https://www.kaggle.com/hirune924/image-loader-test</a></p> <p>We also use syncBN. This was important when training the larger models. blue line is normal BN, brown line is syncBN. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1626653%2F3adeecd96e2f4fa58855db72e7a93999%2FsyncBN.png?generation=1595467440685526&amp;alt=media" alt=""></p> <h3>O2U-Net</h3> <p>(This method seemed to work in the privateLB, but we didn't use it in the end because we couldn't see the effect in the publicLB) We also tried using O2U-Net to remove the data noise, but didn't work in publicLB. But data cleansing of radboud only seemed to work for privateLB. seresnext50 trained on noise removed dataset for only radboud achieves 0.933 in privateLB.(if without data cleansing privateLB 0.915) <a href="https://openaccess.thecvf.com/content_ICCV_2019/papers/Huang_O2U-Net_A_Simple_Noisy_Label_Detection_Approach_for_Deep_Neural_ICCV_2019_paper.pdf">https://openaccess.thecvf.com/content_ICCV_2019/papers/Huang_O2U-Net_A_Simple_Noisy_Label_Detection_Approach_for_Deep_Neural_ICCV_2019_paper.pdf</a></p> <p>I share a notebook that calculates the noise level based on the recorded loss by O2UNet. <a href="https://www.kaggle.com/hirune924/o2unet-loss-aggregate">https://www.kaggle.com/hirune924/o2unet-loss-aggregate</a> The effect of data cleansing on private LB is also described in the 1st place solution. <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169143">https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169143</a></p> <h3>Usefull tools</h3> <p>PyTorch Lightning <a href="https://github.com/PyTorchLightning/pytorch-lightning">https://github.com/PyTorchLightning/pytorch-lightning</a> Hydra <a href="https://hydra.cc/">https://hydra.cc/</a> Neptune ai <a href="https://neptune.ai/">https://neptune.ai/</a> KAMONOHASHI <a href="https://github.com/KAMONOHASHI">https://github.com/KAMONOHASHI</a></p> <p>We trained with mini batch size=2x4 using 4 GPUs. In this case, it seems that normal BN can not accurately estimate the statistics of the batches</p> <p>I added about O2UNet details.</p> <p>This is code snipet for Train Val spilit ``` kf = sklearn.model_selection.StratifiedKFold(n_splits=10, shuffle=true, random_state=2020)</p> <p>for fold, (train_index, val_index) in enumerate(kf.split(df.values, df["isup_grade"].astype(str) + df["data_provider"],)):   df.loc[val_index, "fold"] = int(fold) df["fold"] = df["fold"].astype(int)</p> <p>train_df = df[df["fold"] != 1] valid_df = df[df["fold"] == 1] <code> We also calculated many types of validations by trimming the validation dataset, which helped us to estimate the performance of the model. </code> avg_val_loss (simple val loss) val_acc (simple val accuracy) val_qwk (simple val qwk) karolinska_qwk (val qwk using only karolinska) radboud_qwk (val qwk using only radboud) sample_qwk (val qwk using only isup&gt;1 ) val_qwk_o (observed of simple val qwk) val_qwk_e (expected of simple val qwk) public_sim_qwk (val qwk using only ((data_provider == 'karolinska') &amp; (isup &gt; 2.5)) | ((data_provider == 'radboud') &amp; (isup &lt; 2.5))) private_sim_qwk (val qwk using only ((data_provider == 'radboud') &amp; (isup &gt; 2.5)) | ((data_provider == 'karolinska') &amp; (isup &lt; 2.5))) ```</p>
Prostate cANcer graDe Assessment (PANDA) Challenge
5th place approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I've learned a bunch from this contest - big thanks to the organizers and all who took a shot at solving the problem, and congrats to the winners! My approach is nothing novel but here goes.</p> <h3>Models</h3> <p>My solution was an ensemble of semi-supervised ImageNet models based on @Iafoss' <a href="https://www.kaggle.com/iafoss/panda-concat-tile-pooling-starter-0-79-lb">concat tile pooling</a>:</p> <ul> <li>resnext50_32x4d_ssl: input 192x192, 256x256</li> <li>resnext50_32x4d_swsl: input size 384x384</li> </ul> <p>The only thing I changed was removing the final dropout layer and training the head for a few epochs before unfreezing the model. And of course @haqishen's genius <a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">BCE loss</a>.</p> <h3>Data</h3> <p>I generated tile sizes 256 and 384 from the medium resolution based on @akensert's <a href="https://www.kaggle.com/akensert/panda-optimized-tiling-tf-data-dataset">optimized tiling</a>.</p> <p>There was a performance trade-off between selecting more tiles and a larger batch size, so I settled on randomly sampling <em>k</em> tiles from the top <em>N</em> tiles for each epoch (a small sketch of this sampling is given at the end of this post).</p> <p>| model | input size | k | N | bs |
| ----- | --- | -- | -- | -- |
| resnext50_32x4d_ssl | 192 x 192 | 28 | 40 | 10 |
| resnext50_32x4d_ssl | 256 x 256 | 32 | 40 | 6 |
| resnext50_32x4d_swsl | 384 x 384 | 14 | 24 | 6 |</p> <p>Training with a smaller tile size (128) seemed to overfit, while the larger size (512) was unstable because I had to lower the batch size.</p> <h3>Augmentations</h3> <p>Hue/saturation augmentations didn't improve CV so I stuck to affine transforms - rotations, flips, zoom, warp - all from the default fastai transforms. Randomly shuffling the tiles every other epoch also seemed to help.</p>
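<p>A minimal sketch of the k-from-top-N tile sampling mentioned above; the ranking criterion (mean pixel value as a proxy for tissue content) and the names are assumptions rather than the exact implementation:</p> <pre><code>import numpy as np

def sample_tiles(tiles, k, n_top):
    """tiles: list of HxWx3 uint8 tiles. Keep the n_top most tissue-rich tiles
    (lowest mean pixel value, since background is white), then draw k of them
    at random each epoch."""
    scores = np.array([t.mean() for t in tiles])          # lower = more tissue
    top_idx = np.argsort(scores)[:n_top]                  # top-N candidates
    chosen = np.random.choice(top_idx, size=k, replace=len(top_idx) &lt; k)
    return [tiles[i] for i in chosen]

# e.g. the 256x256 setting from the table above: k=32 out of the top N=40 tiles
# batch_tiles = sample_tiles(all_tiles, k=32, n_top=40)
</code></pre>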
Prostate cANcer graDe Assessment (PANDA) Challenge
65th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You are welcome :D</p> <p>Thanks to the organizer! Congratulations to the winners! I created this topic just to share my solution (0.920 private LB). I know my solution is not as great as the top solutions, but I want to record something here and hope you can give some advice.😄 </p> <ul> <li><p>Image Pre-processing</p> <ul><li>Filter polluted tiles (precompute and save the useful tile ids, i.e. the ones that pass filtering, for each image). <ul><li>Convert each tile from RGB to HSV; by observing some polluted tiles you will find the thresholds needed to filter them out :D</li> <li>How to check each tile (already in HSV)? If you apply the thresholds above at the tile level, you will waste many useful tiles that still contain tissue in about half of the tile. So I randomly sample 20 squares (side length = tile_size // 10) from the tile; if the number of squares that are not filtered out by the thresholds above reaches 20 x pass_ratio, I treat the tile as useful and keep it.</li> <li>If the number of tiles from an image is &lt; 36 (I choose 36 tiles of size 224x224), I use np.random.choice to supplement until reaching 36.</li></ul></li> <li>Patch image (game changer 1) <ul><li>Like what <a href="https://www.kaggle.com/iafoss/panda-16x128x128-tiles">PANDA 16x128x128 tiles</a> did. Thanks <a href="/iafoss">@iafoss</a> !!!</li></ul></li></ul></li> <li><p>Models</p> <ul><li>efn-b0 x3 (I will call them b0-1, b0-2, b0-3 below) + efn-b4</li></ul></li> <li>Optimizer <ul><li>AdamW (default parameters)</li></ul></li> <li>Scheduler <ul><li>OneCycle(epochs=30, steps_per_epoch=int(np.ceil(len(train_dl)/acc_grad_step))) <ul><li>About pct_start, I tried 0.1, 1/30 and 2/30, and 0.1 worked best.</li></ul></li></ul></li> <li>Loss (it's game changer 2) <ul><li>BCEWithLogitsLoss (thanks <a href="/haqishen">@haqishen</a> !!!), details in [Train EfficientNet-B0 w/ 36 tiles_256 [LB0.87]] (<a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87</a>)</li></ul></li> <li>Training details <ul><li>efn-b0 (uses tiles) <ul><li>batch_size=4 with gradient accumulation (also 4), so the batch size can be treated as 4x4 (not exactly equivalent to a real 4x4 batch, of course)</li></ul></li> <li>efn-b4 (uses the full image) <ul><li>batch_size=16, resize the lv1 tiff to 448x448</li></ul></li> <li>apex amp is too hard to install properly on Kaggle, so I use GradScaler from torch.cuda.amp (the torch version should be &gt;= 1.5, I forgot the specific version).</li></ul></li> <li>Ensemble <ul><li>0.35 x (b0-1 + b0-2) + 0.25 x b0-3 + 0.05 x b4</li></ul></li> <li>Model details <ul><li>b0-1 (local score: 0.885) and b0-2 (local score: 0.873) are the 2 highest-scoring folds of my 5 stratified folds; mixing in the other folds (local score all &lt; 0.87) hurt the LB, so I only kept the 2 highest.</li> <li>b0-3 (local score: 0.872) was trained on re-sampled data, and I only kept 5% as the validation set so the model could see more data.</li> <li>b4 (local score: 0.794). 
It was trained on the full image (lv1 tiff resized to 448x448; the data is the same as the best local-score fold of the 5 stratified folds). The reason why I trained a full-image model is that I couldn't find an efficient way to make the model learn the contextual relationships when using tiles, so I trained a separate model for that.</li></ul></li> <li>TTA <ul><li>Only use different patch modes, same as in [Train EfficientNet-B0 w/ 36 tiles_256 [LB0.87]] (<a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87</a>)</li></ul></li> </ul> <hr> <p>As for converting the BCE outputs to a label: first apply pred.sigmoid(), then count how many values are &gt; 0.5, and treat that count as the predicted grade. Pseudo code: x, = np.where(pred[i] &gt; 0.5); label[i] = len(x). The reason I supplement with a b0 and a b4 is that I listened to the iWildCam 2020 1st place solution from Megvii, who shared their approach. Thanks Megvii !!! I never dreamed about getting 0.920 and a ~70-place shake-up (the main reason is maybe that the test set is too small for qwk and I was lucky enough :) ). What's more, thanks <a href="/lopuhin">@lopuhin</a>: I used <a href="https://www.kaggle.com/lopuhin/panda-2020-level-1-2">PANDA: Level 1 and 2 images</a>, which he created, on Colab :D Feel free to discuss and post your opinion about my solution. Thanks for reading :D</p> <p>I use the intermediate image, and since I use <a href="https://www.kaggle.com/lopuhin/panda-2020-level-1-2">this dataset</a> I often call it the lv1 image 😄 </p> <p>Thank you! I also read your team's solution, the experiments and results are quite robust :D I'm ready to meet your team in the next CV competition✋</p> <p>Of course :D 1. About the thresholds: they are thresholds on the h, s, v channels respectively, used to filter the polluted tiles (the main colors of the pollution are green and blue). If we convert an image from RGB space to HSV space, we can filter out blue and green by focusing on the h channel alone, and with the auxiliary s and v channels I filter out a big part of the pollution :D 2. About the ensemble weights: because the test data is so small, I turned to trusting the public LB, so the weights I finally chose came from trying some combinations (not the best combination; after the competition ended I tried some different weights and got a better private LB result but a worse public LB result). But I didn't try many experiments because of the limited submissions during the competition.</p> <p>Hope it can help you :D</p>
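<p>A runnable version of the BCE-to-label pseudo code above, assuming 5 sigmoid outputs per sample as in the referenced kernel (tensor names are illustrative):</p> <pre><code>import torch

def decode_isup(logits):
    """logits: (batch, 5) raw outputs trained with BCEWithLogitsLoss against
    cumulative targets, e.g. ISUP 3 -&gt; [1, 1, 1, 0, 0]. The predicted grade is
    simply the number of sigmoid outputs above 0.5."""
    probs = torch.sigmoid(logits)
    return (probs &gt; 0.5).sum(dim=1)   # integer ISUP grade in [0, 5]

logits = torch.tensor([[4.2, 3.1, 0.8, -2.0, -3.5]])
print(decode_isup(logits))  # tensor([3])
</code></pre>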
Prostate cANcer graDe Assessment (PANDA) Challenge
6th place solution : noise robust learning [BarelyBears]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you. I’m currently cleaning our git repo, one moment please :)</p> <p>First of all, we would like to thank the organizers for such an interesting and realistic problem. Implementations are available: <a href="https://github.com/analokmaus/kaggle-panda-challenge-public">https://github.com/analokmaus/kaggle-panda-challenge-public</a></p> <h1>TL;DR</h1> <p>Label noise is the biggest challenge in this competition. We used <strong>online uncertainty sample mining (OUSM)</strong> and <strong>mixup</strong> to robustly fit CNN models, and blended 4 models with different settings to stabilize the results. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1973217%2F7c9a14d7f3d3de7bf65ce0b4c363fe69%2Fkaggle-panda-challenge.001.png?generation=1595831415189859&amp;alt=media&amp;width=500" alt=""></p> <h1>Tile-based multi instance learning model</h1> <p>The first challenge in this comp was how to deal with those extremely large images. Thanks to <a href="/iafoss">@iafoss</a>’s great notebook, we used an almost identical model with various backbones and tile sizes. We modified the classifier part to ordinal regression (<a href="https://arxiv.org/abs/1901.07884">CORAL loss</a>).</p> <h1>Preprocessing and data augmentation</h1> <p>Data augmentation for a tile-based CNN model can be applied in two ways: slide level and tile level. Slide-level augmentations are applied to whole-slide images before tiles are extracted. Since the point is to create slightly different tile sets, we used shift, scale and rotate. Tile-level augmentations aim to improve feature extractor performance, and we used shift, scale, rotate, flip, and random dropout. The idea of random dropout is to randomly fill a tile with the mean pixel value and regularize the model.</p> <h1>Postprocessing</h1> <p>We used 4x TTA during inference and optimized thresholds to maximize the QWK value.</p> <h1>Validation strategy</h1> <p>As written in the task description, the label quality in the train data differs a lot from that in the test data. So from the very beginning we assumed this part would be critical in this comp. Roughly speaking: the train data is noisy and big, the public test data is clean but small, and the private test data is clean but small. So our strategy was <strong>IGNORE CV, CARE ABOUT PUBLIC LB, AND TRUST METHODOLOGY.</strong> For us, the results were unstable due to the small size of the test data, but not quite a ‘lottery’.</p> <h1>Handling noisy labels</h1> <p>We read tens of papers about handling noisy labels, and implemented some of them such as:</p> <ul> <li>loss functions (DMI loss, DAC loss, Symmetric loss, <strong>OUSM loss</strong>, etc.)</li> <li>training procedures (CleanNet, iterative self-training, <strong>mixup</strong>, etc.)</li> </ul> <p>The common idea among them is that noisy samples should have different features from correct samples, and thus noisy ones should have a bigger loss. OUSM (Online Uncertainty Sample Mining) is an approach in which samples with high loss are excluded from each mini-batch. 
According to <a href="https://arxiv.org/abs/1901.07759">previous research</a>, this method works with skin lesion classification problem where similar kind of label noise exists. In PANDA competition, it gave us stable boost from around 0.87 to 0.90 on public LB. Then we trained models with different random seeds, and collected samples which are often judged as noise(with big loss). We excluded 10% of ‘most likely to be noisy’ samples from each label because due to the imbalance in label distribution, grade &gt;= 2 samples are more likely to be judged as noise. This new datasets should be less noisy than the original one, and models trained on this new dataset achieve 0.91 on public LB. Apart from OUSM, mixup also showed good performance on public LB. This is consistent with <a href="https://arxiv.org/abs/1710.09412">original paper</a> which reported performance improvement with label corruption.</p> <h1>Pipeline overview</h1> <p>Our pipeline is simple average of the following models</p> <ul> <li>5 fold 224x64Tile-based model, se-resnext50 (OUSM)</li> <li>5 fold 224x64 Tile-based model, se-resnext50 (OUSM with different params)</li> <li>5 fold 224x64 Tile-based model, se-resnext101 (OUSM)</li> <li>5 fold 256x36 Tile-based model, efficientnet-b0 (mixup) This model scored 0.903 on public LB, and 0.932 on private LB. Compared to <a href="/haqishen">@haqishen</a> 's model with no denoising, our final model showed +0.018 on public and +0.017 on private.</li> </ul> <p><a href="/mobassir">@mobassir</a> OUSM implementation is here: <a href="https://github.com/analokmaus/kaggle-panda-challenge-public/blob/208caf4c83a5ab9d181e66eee447cd2e475d58dc/models/noisy_loss.py#L97">https://github.com/analokmaus/kaggle-panda-challenge-public/blob/208caf4c83a5ab9d181e66eee447cd2e475d58dc/models/noisy_loss.py#L97</a> It is very simple!</p>
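<p>A minimal sketch of the OUSM idea (drop the k highest-loss samples from every mini-batch before averaging); this illustrates the principle with assumed parameter names rather than reproducing the linked implementation:</p> <pre><code>import torch
import torch.nn as nn

class OUSMLoss(nn.Module):
    """Online Uncertainty Sample Mining: ignore the k samples with the largest
    per-sample loss in every mini-batch, assuming they are the noisy ones."""
    def __init__(self, base_loss=None, k=2):
        super().__init__()
        self.base_loss = base_loss or nn.MSELoss(reduction='none')
        self.k = k

    def forward(self, preds, targets):
        losses = self.base_loss(preds, targets)
        if losses.dim() &gt; 1:                       # reduce to one loss per sample
            losses = losses.mean(dim=1)
        n_keep = max(losses.size(0) - self.k, 1)
        kept, _ = torch.topk(losses, n_keep, largest=False)   # drop the k largest
        return kept.mean()

criterion = OUSMLoss(k=2)
loss = criterion(torch.randn(8, 1), torch.randn(8, 1))
</code></pre>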
Prostate cANcer graDe Assessment (PANDA) Challenge
7th Place Solution(simple but messy)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you very much to the organizers.</p> <p>This challenge is very similar to APTOS-2019, which I worked on for months as a course assignment. So I simply used the pipeline of my course assignment (based on <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/107947">Lex Toumbourou‘s solution</a>, thanks a lot) with some revised details. Also thanks a lot to <a href="https://www.kaggle.com/iafoss/panda-16x128x128-tiles">Iafoss</a> and <a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">Qishen Ha</a> for their useful notebooks.</p> <p>Our model is simple but messy.</p> <h2>Tiles</h2> <p>We tried 256x256x32, 192x192x64 and 154x154x100, but they didn't show much difference on the public LB.</p> <p>We also proposed a new tiling approach. It can contain more pathological parts without destroying the shape features: the large tiles ensure that the shape features will not be lost, while the small tiles ensure that the blank area is not that large.</p> <pre><code>def get_tiles_combine(img, mode=0):
    images = np.ones((1536, 1536, 3))*255
    h, w, c = img.shape
    result_all = []
    pad_h = (256 - h % 256) % 256 + ((256 * mode) // 2)
    pad_w = (256 - w % 256) % 256 + ((256 * mode) // 2)
    #print(pad_h,pad_w,c)
    img2 = np.pad(img, [[pad_h // 2, pad_h - pad_h // 2], [pad_w // 2, pad_w - pad_w // 2], [0, 0]],
                  'constant', constant_values=255)
    windows = [256, 256, 256, 256, 192, 192, 128]
    x_start = 0
    for i in range(len(windows)):
        result = []
        window_size = windows[i]
        for x in range((h + pad_h) // window_size):
            for y in range((w + pad_w) // window_size):
                tile = img2[x*window_size:(x+1)*window_size, y*window_size:(y+1)*window_size]
                result.append([x, y, tile.sum()])
        #print(len(result))
        result.sort(key=lambda ele: ele[2])
        result = result[:1536 // window_size]
        #print(len(result),result)
        for y in range(min(1536 // window_size, len(result))):
            xx = result[y][0]
            yy = result[y][1]
            result_all.append([xx, yy])
            images[x_start:x_start+window_size, y*window_size:(y+1)*window_size] = \
                img2[xx*window_size:(xx+1)*window_size, yy*window_size:(yy+1)*window_size].copy()
            img2[xx*window_size:(xx+1)*window_size, yy*window_size:(yy+1)*window_size] = 255
        x_start = x_start + windows[i]
    return images
</code></pre> <h2>Models</h2> <p>We simply used Efficientnet-B0. We tried B1-B3, Densenet and Resnext, but they didn't show much difference on the public LB and needed more GPU memory.</p> <p>Following APTOS-2019, we used GeM pooling:</p> <pre><code>def gem(x, p=3, eps=1e-6):
    return F.avg_pool2d(x.clamp(min=eps).pow(p), (x.size(-2), x.size(-1))).pow(1./p)

class GeM(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super(GeM, self).__init__()
        self.p = Parameter(torch.ones(1)*p)
        self.eps = eps
    def forward(self, x):
        return gem(x, p=self.p, eps=self.eps)
    def __repr__(self):
        return self.__class__.__name__ + '(' + 'p=' + '{:.4f}'.format(self.p.data.tolist()[0]) + ', ' + 'eps=' + str(self.eps) + ')'
</code></pre> <p>We also designed a more complex model based on B0. It had a lower public LB (average about 0.85) so we didn't add it to our final models, but it got the highest single-model private LB (max 0.926, average about 0.920 - what a pity!). 
We will do some more experiments on this model.</p> <h2>Loss and label</h2> <p>BCE loss with label smoothing, e.g. 3 -&gt; [0.95, 0.95, 0.95, 0.95, 0.05, 0.05]. A small sketch of this encoding is given at the end of this post.</p> <p>We also tried regression with MSE loss and smooth L1 loss, but they didn't show any improvement.</p> <h2>Ensemble</h2> <p>8 models with 6x TTA:</p> <blockquote> <p>1: fold_1 b0 256-tile, Public LB: 0.879, Private LB: 0.904.</p> <p>2: fold_3 b0 256-tile, Public LB: 0.879, Private LB: 0.899.</p> <p>3: fold_4 b0 256-tile, Public LB: 0.886, Private LB: 0.883.</p> <p>4: fold_4 b0 combine-tile, Public LB: 0.879, Private LB: 0.910.</p> <p>5: fold_4 b0 256-tile, Public LB: 0.880, Private LB: 0.909.</p> <p>6: fold_4 b0 256-tile, Public LB: 0.891, Private LB: 0.920.</p> <p>7: fold_4 b0 combine-tile, Public LB: 0.881, Private LB: 0.917.</p> <p>8: fold_0 b0 256-tile, Public LB: 0.872, Private LB: 0.906.</p> </blockquote> <p>The final ensemble has Public LB: 0.894, Private LB: 0.932.</p> <p>We split the dataset into 5 folds. Fold_x above means we use the xth fold as the held-out dataset and the other folds as the training dataset. I don't know why your label smoothing doesn't work either😂. I think it is not strange that we have different performance between the public LB and the private LB, since the dataset is noisy (which led to the shake-up). Maybe label smoothing is not the real factor. I used label smoothing just because there was a deviation between CV and LB, and I wanted to avoid overfitting.</p>
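<p>A small sketch of the smoothed binned target described above; the 6-bin layout follows the 3 -&gt; [0.95, 0.95, 0.95, 0.95, 0.05, 0.05] example given in this post, and the function name is illustrative:</p> <pre><code>import numpy as np

def smooth_binned_label(isup, n_bins=6, smooth=0.05):
    """ISUP grade -&gt; smoothed cumulative target, matching the example above:
    3 -&gt; [0.95, 0.95, 0.95, 0.95, 0.05, 0.05]."""
    target = np.full(n_bins, smooth, dtype=np.float32)
    target[:isup + 1] = 1.0 - smooth
    return target

print(smooth_binned_label(3))  # [0.95 0.95 0.95 0.95 0.05 0.05]
</code></pre>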
Prostate cANcer graDe Assessment (PANDA) Challenge
8Th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>No, use as large a batch as you can while training at least 32 tiles per case. I also add a lot of randomness (random tile selection weighted by "darkness", augmented datasets, etc.) to help the attention learn to find the best tiles.</p> <p><a href="/yukkyo">@yukkyo</a> We just used a kernel someone shared that uses imagehash. For duplicates we used a lower threshold than suggested and got quite a few false positives, but we don't think it matters much since all it did was force those slides to be in the same splits. For suspicious slides, someone released a list at some point of weird cases like blank slides, no mask, pen marks, etc. I assumed these were likely not representative of the test set, so I removed them from the validation splits.</p> <p>That being said, it is not clear this strategy really paid off. It just seemed like a good idea at the time, and we didn't really have a way of testing it because we believed the LB to be a poor predictor of the private set.</p> <p>Despite some evidence of randomness, I'd like to share the ideas we used:</p> <ul> <li>10 model ensemble based on local CV and decent LB.</li> <li>Different prediction targets between models (regression, bins and ordinal regression).</li> <li>Some of them used bags of tiles and others stacked the tiles into squares.</li> <li>EfficientNets (I trained only B0 but my partner had a few B4s).</li> <li>My models were trained in two steps. First a model with an attention layer is made (which I shared in some thread). Then this attention layer and model are reused to predict weights for tiles, and a model is retrained with a lower number of tiles (9 or 16). I have some 9-tile models that were both fast and reaching 0.90+ CV. On top of that, it allowed us to inspect a larger number of tiles during inference (128 tiles) and just select the best 9 or 16 (see the sketch at the end of this post).</li> <li>My partner used a NetVlad layer in his model, which maybe he'll talk about in this thread.</li> <li>Ensembling with mean + round was better than majority voting on the LB (and is what we used), but actually our best solution uses majority voting (which we didn't select).</li> <li>We also built a CV without duplicates and without "suspicious slides".</li> </ul> <p>In the last weeks, after forming the team and seeing how obvious it was that the shake-up would be big, I started to mistrust the LB and tried to bring diversity to the ensemble. As long as a model was at LB &gt; 0.88, that was good enough if its CV was among the best ones.</p> <p>Learned a lot during this competition. Thanks to the organizers.</p> <p>Note: we also have a solution at 0.936 that we didn't select :(</p> <p>Sorry about that, fixed.</p> <p>Additionally, my code is available on GitHub (in a messy state :)); push/pulling was the easiest way for me when using GCP to train bigger models: <a href="https://github.com/arroqc/pandacancer_kaggle">https://github.com/arroqc/pandacancer_kaggle</a></p>
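<p>A rough sketch of the two-step idea described in the list above: a previously trained attention layer scores a large pool of tile embeddings at inference and only the best few are kept. The module layout and names are assumptions, not the exact shared code:</p> <pre><code>import torch
import torch.nn as nn

class TileAttention(nn.Module):
    """Scores each tile embedding; a higher score means a more relevant tile."""
    def __init__(self, dim=512):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))

    def forward(self, feats):                 # feats: (n_tiles, dim)
        return self.score(feats).squeeze(-1)  # (n_tiles,)

def select_best_tiles(feats, attention, n_keep=9):
    """Inspect a large pool of tile features (e.g. 128) and keep only the
    n_keep tiles the attention layer considers most informative."""
    with torch.no_grad():
        scores = attention(feats)
    keep = torch.topk(scores, n_keep).indices
    return feats[keep]

attention = TileAttention(dim=512)
pool = torch.randn(128, 512)                           # embeddings of 128 candidate tiles
best = select_best_tiles(pool, attention, n_keep=9)    # (9, 512)
</code></pre>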
Prostate cANcer graDe Assessment (PANDA) Challenge
9th Place Solution [ResNeSt might be the key]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you very much to the organizers.</p> <p>My solution is simple. I ended up with the following models in my final ensemble submission:</p> <ul> <li>Label Smoothing + Ordinal regression + efficientnet_b0 x 2</li> <li>Label Smoothing + Ordinal regression + GeM + ResNeSt50 x 2</li> </ul> <p>I was not confident in efficientnet_b0 since there was a difference of about .02-.03 between the CV (.90XX) and public LB (.87XX). It seemed to me like it was overfitting the LB despite using label smoothing, and it was less stable. In contrast, ResNeSt50 (Split-Attention Network) gave me a stable difference between the public LB (.85XX) and CV (.86XX), along with label smoothing.</p> <p>We all knew that qwk was not a stable metric for the given amount of public test images. We also knew that the training data seemed to have an ample amount of noise. So, my intuition was to select one lower-scoring but more stable model and one best public-LB-scoring model for ensembling. <br> I manually weighted the models during ensembling. I observed that giving more weight to efficientnet_b0 (the best public-LB-scoring model) led to a decrease in public LB, therefore I selected the ensemble with equal weights (simple average).</p> <p>After the private LB was released, the same ResNeSt50 gave a private LB score of .90XX.</p> <p>I was pretty sure I would at least land in the bronze zone, but the shake-up turned out to be very rough, eventually landing me in the gold zone (completely unexpected).</p> <p>For training, I used both techniques for tiling the images (a list of tiles and a single large image of tiles). I used 36x256x256 (level 1) tiles along with simple augmentations like hflip, flip, transpose, rotation...</p> <p><strong>What more could have been done?</strong></p> <p>Since I was using Kaggle and Colab for training, I was constrained to a 30 hr weekly limit and a 12 hr session limit respectively. It is clear that I didn't have enough GPU compute to do more experiments.</p> <p>But the following seemed to be a much more effective approach:</p> <p>I also tried training seresnext50 + AdaptiveConcatPool + classification (pretty sure that ordinal regression would have scored better) on the single-large-image-of-tiles dataset with noisy labels removed (images having pen marks), along with <strong>progressive resizing</strong>, i.e. training in the following sequence:</p> <p>16x128x128 (Level-2) --&gt; 16x160x160 (Level-2) --&gt; 20x224x224 (Level-1) and so on (see the sketch at the end of this post).</p> <p>Make sure to load the weights from the previous stage while training the next stage.</p> <p>Due to the limited compute power I wasn't able to train seresnext50 on a larger image size; my final input image size was 20x224x224, which gave me a CV of .81XX and a public LB of .82XX (pretty stable). When the private LB came, the resnext50 model trained in the very same way mentioned above gave a private LB of .89XX. Since I wasn't able to train it on larger images, I wasn't able to use it in the final ensemble.</p> <p>I am pretty sure it would have given a decent boost to my final score.</p>
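<p>A minimal sketch of the progressive resizing idea described above, reloading the previous stage's weights before moving to larger tiles; the stand-in model and the stage list are assumptions, and the actual training loop is omitted:</p> <pre><code>import torch
import torch.nn as nn

def make_model():
    # stand-in backbone; in the write-up this would be seresnext50 + AdaptiveConcatPool
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 6))

# hypothetical schedule: small level-2 tiles first, then larger level-1 tiles
stages = [(16, 128), (16, 160), (20, 224)]
prev_state = None

for n_tiles, tile_size in stages:
    model = make_model()
    if prev_state is not None:
        model.load_state_dict(prev_state)   # warm-start from the previous stage
    # ... training loop on (n_tiles x tile_size x tile_size) inputs goes here ...
    prev_state = model.state_dict()
</code></pre>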
Prostate cANcer graDe Assessment (PANDA) Challenge
Lightgmb crop-wise solution (public 0.9)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Crop-wise model</h2> <p>I think my approach is different from the approaches presented in public kernels, because I did not use an end-to-end model that makes predictions over the whole slide. Instead I trained a network on single 512x512 crops from level 0 to predict the ISUP grade. Of course, it is impossible to accurately predict the class using only one crop from a slide, but this way you can get a model that produces good crop embeddings without any trouble with batch size or resolution. For training I used effnetb5 with softmax activation and crops with a cancer area of more than 10% (any type of cancer).</p> <h2>Aggregation part</h2> <p>After training I split the entire slide into crops (keeping only those in which the proportion of tissue was above 10%) and separately predicted embeddings and probabilities. I used the embeddings and probabilities to aggregate features for a lightgbm model:</p> <ul> <li>element-wise statistics on embeddings (min, max, std, median)</li> <li>statistics on probas</li> <li>the number of crops with a certain class</li> <li>attention-like features: obviously, not all crops are equally important for predicting the ISUP grade. So, instead of the mean, I use a weighted sum where the weight is the sum of the ISUP probabilities (which I have from my effnetb5 crop-wise model). Therefore, crops containing only healthy tissue will have a low weight, while crops containing a lot of cancerous tissue will have a high weight.</li> </ul> <p>I used an out-of-the-box lightgbm classification model to predict the final slide-level probabilities. Since I trained the crop-wise model on the whole train set, there was a leak in the features, but since strong augmentations were used, it was insignificant. Instead of regression, the final prediction was sum(i*p_i) with [0.5, 1.5, 2.5, 3.5, 4.5] thresholds (i is the i'th ISUP class, p_i is the lightgbm probability of this class); a small sketch of this step is given at the end of this post. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F744554%2F25836385b85525212c3acd86ba8e8459%2Farch.png?generation=1595457060891999&amp;alt=media" alt=""></p> <h2>Training dataset modifications</h2> <p>As far as I know, there were some mistakes in the training dataset, so I tried to fix them: if I have, for example, a slide with 4+5, I can add embeddings from another (4+5) slide, and the new sub-slide will have the same 4+5 class. Furthermore, 4+5 means that class 4 has a higher area than class 5, but pathologists could be wrong in determining the areas, so I can add a 4+4 class from another slide to make my current slide more specific. Also, I can add 0+0 tissue to every slide and it will not be harmful. I used these augmentation-like techniques to expand the dataset to 200,000 rows (instead of the default 10,000). <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F744554%2F489fd9a4595a8973800f1ba367aaf59d%2Fdata_aug.png?generation=1595457693350028&amp;alt=media" alt=""></p> <h2>Embedding TTA</h2> <p>During inference I can make predictions 100 times (taking, for example, a random 90% of the embeddings) and average the probabilities. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F744554%2F3de0730d0fff89bac6f719e4094f6cab%2Ftta.png?generation=1595458209610605&amp;alt=media" alt=""></p>
Prostate cANcer graDe Assessment (PANDA) Challenge
Lightweight siamese network solution (ResNet18 -> PB 0.8966)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In this post I present the ideas for the <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/overview">PANDA</a> Kaggle competition. Please refer to the description of the competition for more insights. <a href="https://github.com/chiamonni/">@chiamonni</a> and <a href="https://github.com/mawanda-jun/">@mawanda-jun</a> worked on this project (<a href="https://github.com/chiamonni/PANDA_Kaggle_competition">repo</a>).</p> <h1>Contents</h1> <ul> <li>Problem overview</li> <li>Dataset approach</li> <li>Network architecture</li> <li>Results</li> </ul> <h1>Problem overview</h1> <p>The Prostate cANcer graDe Assessment (PANDA) Challenge requires participants to recognize 5 severity levels of prostate cancer in prostate biopsies, plus its absence (6 classes).</p> <p><img src="https://storage.googleapis.com/kaggle-media/competitions/PANDA/Screen%20Shot%202020-04-08%20at%202.03.53%20PM.png" alt="Illustration of the biopsy grading assignment"></p> <p>Therefore, this is a classification task.</p> <p>The main challenges Kagglers faced were related to:</p> <ul> <li><strong>dimensionality</strong>: images were quite large and sparse (~50K x ~50K px);</li> <li><strong>uncertainty</strong>: labels were given by experts, who sometimes interpreted the cancer severity in different ways.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2Ffe60ff9e1ab7555ba3c4f3f1866631b2%2F192863a82b5a954ba0fa56b910574e1a.jpeg?generation=1596380531799397&amp;alt=media" alt="cancer image"></p> <h1>Dataset approach</h1> <p>I decided to analyze each image and extract relevant "crops" to be stored directly on disk, in order to reduce the compute time spent reading the images from disk. Therefore, I used the 4x reduced images (level 1 of the original dataset) and extracted squared patches of 256px with the "akensert" <a href="https://www.kaggle.com/akensert/panda-optimized-tiling-tf-data-dataset">method</a>. Then, I stored the crops together as a single image containing the sequence of crops.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2Fcc3ab4fed6daa78c122b54d8a80a9d31%2F0b6e34bf65ee0810c1a4bf702b667c88.jpeg?generation=1596380612573127&amp;alt=media" alt="akensert crops"></p> <p>Each image came with a different number of crops. So, I made a binned histogram counting how many times each number of crops occurred. The "akensert" method is the first mentioned; the "cropped" one is a simple "strided" crop, in which I kept each square that was covered by at least 20% non-zero pixels.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2F1aee3408042bed534ee52feb5148860a%2Fnumber_crops_personal_akensert.png?generation=1596380665002903&amp;alt=media" alt="number of crops"></p> <p>From the graph it is clear that the "akensert" method is more reliable (the curve is tighter) than the "cropped" one I explored first. 
In addition, I decided to select 26 randomly chosen crops from each image:</p> <ul> <li>in case there were fewer than 13, I doubled them and filled the remainder with empty squares;</li> <li>in case there were more, I randomly selected 26.</li> </ul> <p>I thought of this method as a regularization. In fact, the labels could have been assigned wrongly, and selecting only a part of the crops could lead to a better generalization capability of my model. In addition, I forced my model to understand the severity of the cancer from only a part of the whole image in about 40% of the dataset, which I think helped it generalize the problem better.</p> <h2>Dataset augmentation</h2> <p>I found out that modifying the color of the images (with random contrast/saturation/etc. augmentations) was not giving me any particular advantage. On the other hand, simple flipping/rotation really helped me close the gap between CV and LB. I also added a random occlusion augmentation, which covered each crop with a rectangle of size ranging in [0, 224), and it really helped the model generalize w.r.t. the LB. As a side note, I think that those augmentations really helped my model perform so well on the private leaderboard (I gained +3% accuracy).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2Fe3889ccbe4626e95b3a1bc5eeabf6946%2Ftest.jpeg?generation=1596380705355303&amp;alt=media" alt="test"></p> <p>An example of the resulting augmentations, with 8x8 crops.</p> <h1>Network architecture</h1> <p>For the network architecture I took inspiration from the method used by experts, that is: 1. look closely at the tissue; 2. characterize each tissue part with the most present severity of cancer patterns; 3. take the two most present ones and declare the cancer class.</p> <p>Therefore, I created a siamese network which received one crop at a time, with shared weights. The output of each siamese branch was then <strong>averaged</strong> with the others as a sort of pooling, and then fed to the <a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">binned</a> output (see the sketch at the end of this post). See the image below for further insight.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2Fc96851ac147d872029fa0cd09de4daab%2Fnetwork_architecture.png?generation=1596380747042850&amp;alt=media" alt="network architecture"></p> <p>Since my computing resources were limited in memory (8GB VRAM, Nvidia 2070s), I was only able to train this network with a <a href="https://github.com/facebookresearch/semi-supervised-ImageNet1K-models">semi-weakly pretrained ResNet18</a> model.</p> <h1>Cross-validation</h1> <p>Since my model was performing so consistently between CV and LB, I decided not to do any cross-validation. In fact, I simply trained the model with a 70/30 train/validation split of the whole training set.</p> <h1>Hyper parameters selection</h1> <p>The best hyper parameters I selected, together with the trained weights, are under the folder <code>good_experiments</code>.</p> <h1>Results</h1> <p>The aforementioned architecture resulted in:</p> <ul> <li>CV: 0.8504</li> <li>LB: 0.8503</li> <li>PB: 0.8966</li> </ul> <p>Those results are quite interesting, since most of the competition participants used an EfficientNetB0, which is far bigger and more accurate in most benchmarks. I would have liked to train this particular architecture on a bigger machine, with more interesting backbones, hopefully with even better results.</p>
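<p>A condensed sketch of the shared-weight ("siamese") architecture described above: every crop goes through the same ResNet18 encoder, the per-crop outputs are averaged, and a binned head produces the prediction. The layer sizes and the 5-bin head are assumptions following the linked binned-output approach, not the exact repository code:</p> <pre><code>import torch
import torch.nn as nn
import torchvision

class SiameseCropNet(nn.Module):
    def __init__(self, n_bins=5):
        super().__init__()
        backbone = torchvision.models.resnet18(pretrained=False)
        backbone.fc = nn.Identity()              # 512-d embedding per crop
        self.encoder = backbone
        self.head = nn.Linear(512, n_bins)       # binned (ordinal) output

    def forward(self, x):                        # x: (batch, n_crops, 3, 256, 256)
        b, n, c, h, w = x.shape
        feats = self.encoder(x.view(b * n, c, h, w))   # shared weights over crops
        feats = feats.view(b, n, -1).mean(dim=1)       # average as a soft pooling
        return self.head(feats)

model = SiameseCropNet()
out = model(torch.randn(2, 26, 3, 256, 256))     # 26 crops per slide -&gt; (2, 5)
</code></pre>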
Prostate cANcer graDe Assessment (PANDA) Challenge
Silver medal solution -> 24 place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="/yousof9">@yousof9</a>. It is my profession, this is what I do where I work.</p> <p><a href="/kurianbenoy">@kurianbenoy</a> My bad, I corrected the title now. Thanks for the observation. Computer vision is my best area of expertise, but I sometimes participate in other competitions as well if they seem interesting. Any new thing that I can learn is a useful thing.</p> <p>Thank you <a href="/spidyweb">@spidyweb</a> </p> <p>Hello to all participants, organizers and people who are here to learn from the successful solutions or to exchange opinions.</p> <p>This was an interesting competition with a lot of challenge even for experienced computer vision programmers. The main challenge was the fact that the provided data was limited and noisy. I am glad with the result, this being my 3rd consecutive Kaggle medal (after gold at Bengali and bronze at M5). I will mention some of the things I tried here, with a comment about how each technique worked for me.</p> <p><strong>Pre-processing techniques</strong></p> <p>a) Tile design</p> <ul> <li>256x256x36 tiles <em>(best results)</em></li> <li>256x256x49 tiles <em>(a lot of white space, and the image was too big to be sustained by the number of provided training samples)</em></li> <li>128x128x144 tiles <em>(lower results than the 256x256x36 tiles, probably some patterns are interrupted by making the tiles smaller)</em></li> <li>512x512x9 tiles <em>(also lower results than 256x256x36; the tiles being so big, they had a lot of white space when selecting them)</em></li> </ul> <p>Additional comment: <em>256x256x36 seemed to be the sweet spot between making the tiles too big, where they will have a lot of empty pixels, and 128x128x144, where the tile patterns are interrupted.</em></p> <p>b) Data selection</p> <p>Due to the noisy labels I designed a system to eliminate data with the highest probability of being labeled wrong. 
I used the best 5-fold ensemble to predict on the training data, averaged the predictions and eliminated data where the difference between the prediction and the real label was bigger than a specific threshold (a small sketch of this step is given at the end of this post). The thresholds tested were:</p> <ul> <li>2 (137 samples eliminated)</li> <li>3 (39 samples eliminated)</li> <li>4 (7 samples eliminated)</li> </ul> <p>Results comment: <em>The best CV results were obtained when I eliminated data where abs(prediction - true label) &gt; 3, i.e. 39 samples were eliminated.</em></p> <p><strong>Model architectures</strong></p> <ul> <li>Efficientnet B0</li> <li>Efficientnet B1</li> <li>Efficientnet B2</li> <li>Efficientnet B3</li> <li>Efficientnet B4</li> <li>SE_Resnext50</li> </ul> <p>Result comment: <em>The best result was with the B2 architecture; B3-B4 led to overfitting and the rest simply did not work for me.</em></p> <p><strong>Data Augmentation</strong></p> <ul> <li>Transpose</li> <li>Vertical Flip</li> <li>Horizontal Flip</li> <li>RandomRotate90</li> <li>Rotate at a random angle</li> <li>Shift the starting pad position when making tiles</li> </ul> <p>Result comment: <em>All the augmentations were applied at 2 levels: tile level + after assembling the tiles. In the best results I used: Transpose + Vertical Flip + Horizontal Flip.</em></p> <p><strong>Optimizer + scheduler</strong></p> <ul> <li>Adam + GradualWarmupScheduler + CosineAnnealingLR</li> <li>RangerLars + GradualWarmupScheduler + CosineAnnealingLR</li> <li>RangerLars + OneCycleLR</li> <li>Adam + GradualWarmupScheduler + ReduceLROnPlateau</li> </ul> <p>Result comment: Although I tried RangerLars in a variety of scheduler combinations, the best result was obtained with good old Adam (Adam + GradualWarmupScheduler + CosineAnnealingLR).</p> <p><strong>TTA</strong></p> <p>TTA composed of:</p> <ul> <li>Transpose</li> <li>Vertical Flip</li> <li>Horizontal Flip</li> <li>Shift the starting pad position when making tiles</li> </ul> <p>Result comment: <em>Usually I recommend using TTA techniques similar to those used for training augmentation, otherwise at prediction time the model will see images unlike anything it was trained on. This is why I used: Transpose + Vertical + Horizontal Flip.</em></p> <p>Congratulations to all participants and to the organizers !!!</p> <p>See you in the next computer vision competition!</p> <p>My pleasure, good luck with the experiments!</p> <p><a href="/cnzengshiyuan">@cnzengshiyuan</a> After you train a model with OneCycle or cosine annealing, the final lr is very small. If you want to retrain this model after you finish the first training, you have 2 possibilities: 1) use the same lr as at the end of the first training session, or even smaller; it should work but it will take some time to see improvements (small lr -&gt; very small steps); 2) use a bigger lr, but this is very risky: it can drag you out of the local minimum you have found instead of further exploring it, and the loss will increase. Another possibility is to increase the lr a little bit and retrain with cosine annealing, to somehow simulate CosineAnnealingWarmRestarts.</p>
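<p>A minimal sketch of the data-selection step described above, dropping samples whose out-of-fold ensemble prediction disagrees with the label by more than the chosen threshold (the column names are assumptions):</p> <pre><code>import pandas as pd

def drop_noisy_samples(df, threshold=3):
    """df must contain the true 'isup_grade' and the averaged out-of-fold
    ensemble prediction 'oof_pred'. Samples that the 5-fold ensemble misses by
    more than `threshold` grades are treated as probable label noise."""
    diff = (df["oof_pred"] - df["isup_grade"]).abs()
    removed = df[diff &gt; threshold]
    kept = df[diff &lt;= threshold]
    print(f"removed {len(removed)} suspicious samples")
    return kept

# with threshold=3 the write-up reports 39 samples removed
# clean_df = drop_noisy_samples(train_df, threshold=3)
</code></pre>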
Prostate cANcer graDe Assessment (PANDA) Challenge
This is a complete fiasco - N place solution [17th Public]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Despite the final standing, I'd like to share this post anyway..</strong> <em>Everything below was written before this collapse was revealed lol</em></p> <p>First, thanks to the hosts for making this competition, the most unpredictable and intriguing one in my experience.</p> <p>Additionally, congratulations to the winners and thanks to all kagglers who competed hard and made me worry about staying in the public top 10 all the time. Unfortunately, in the last 2 weeks before the competition ended all new ideas stopped working for me (but not for many others), so my concerns about the top 10 became reality.. So, it was tough but fun!</p> <p>One more thanks goes to the Dutch service provider HOSTKEY (<a href="https://www.hostkey.com/gpu-servers">https://www.hostkey.com/gpu-servers</a>), which granted me free GPU server access during this challenge. This really helped me to investigate and to experiment more broadly.</p> <h2><strong>Details:</strong></h2> <p>In the end my solution is quite a simple one and is based only on classification, without using the masks. My 2 final solutions are the average and voting ensembles of several models, all trained mostly on a single fold (in fact I added only 2 other models from different folds to increase the stability a bit) with D4 TTA.</p> <p><em><strong>Data:</strong></em> a) Remove several suspicious samples (taken from the discussions), rearrange similar samples into the same folds (didn't bring too much) &amp; 5 folds. b) I used only the medium resolution and 256x256 tiles (I was really upset when the 36x256x256 combination was announced on the forum lol). c) I used a slightly different tile sampling strategy: if the number of tiles in the image is less than 36 -&gt; add missing tiles by random selection of existing ones. Otherwise, select the 36x1.3 ≈ 46 most informative tiles (to engage more of the available information), always take the first 24 of them, and always take a random 12 from the rest. It was considered an additional "data augmentation / generalisation" method. <strong><em>Augmentations:</em></strong> Hflip, Vflip, RandRotate, RandBrightnessContrast, ShiftScaleRotate. <strong><em>Networks architecture:</em></strong> a) At some point I found that heavy and fancy backbones had almost no impact (I was also quite limited in resources to train with a big batch size), so I stayed mostly with ResNet18/34 with AdaptiveAvgPool2d (I included one good ResNeXt50 model in the ensemble). b) I extended the baseline networks with an additional convolutional block and spatial attention (this consistently improved my CV but made convergence slower). Before this block, tensors were reshaped back to the "tile level" as [BS, C, H*sqrt(N_tiles), W*sqrt(N_tiles)]. c) A Gated Attention module after the Avgpool: the most important customisation at the architecture level, and it always helped (a sketch of such a module is given at the end of this post). 
<strong><em>Training:</em></strong> a) a big enough batch: 24 or 32; b) Adam + ReduceOnPlateau or MultiStep; c) CrossEntropy / Focal Loss; d) baseline networks ResNet-18 / 34 / ResNext50_32x4d_swsl; e) warmup; f) apex O1.</p> <h3><strong>What didn't work (completely, almost or was the same):</strong></h3> <ul> <li>Regression</li> <li>Optimising the Gleason scores directly (N+N) - 10 classes</li> <li>Effnets / RegNets / Inception</li> <li>MaxPool / GeM</li> <li>Patch sizes of 128/224/384/512</li> <li>Fancy optimisers such as Radam/Ralamb/Ranger</li> <li>Custom Loss / Combination of Losses / OHEM / LabelSmoothing / HybridCappaLoss</li> <li>Stain normalisation (tried to process all training data with Vahadane and Macenko from staintools)</li> <li>The highest resolution didn't improve the performance</li> <li>Tiles merged into a single big image</li> <li>Multi-task learning (tried: 1 - split classes 0/1 vs 2/3/4/5 -&gt; binary + multiclass on each; 2 - split -&gt; binary + regression on each)</li> <li>Sequence models such as an LSTM extension on the features like <a href="https://www.nature.com/articles/s41598-020-58467-9">here</a> or <a href="https://www.researchgate.net/publication/323591215_Differentiation_among_prostate_cancer_patients_with_Gleason_score_of_7_using_histopathology_image_and_genomic_data">here</a>. However, one such model was included in the final ensemble.</li> <li>I also tried to cut 4 sets of tiles with a vertical and horizontal shift of 1/2 of the tile size. I observed no improvement either by using them during training (tried to select the most informative ones / to increase the number of tiles / to periodically switch between such different sets across epochs) or at inference. I also tried to cut 4 different sets of tiles for each image, create 4 samples from each one on the fly and feed them as independent samples - no improvement</li> <li>Using only non-empty tiles / balancing all samples as 90% non-empty + 10% always empty to standardize the input type</li> <li>Inference and averaging on the 36 "standard" tiles + the 18 most important ones with 2 variants of shift worked quite well for many of the models on CV but didn't help on LB</li> <li>In some experiments I noticed that 36 is not always the optimal number of tiles, but it didn't always work</li> <li>Using the segmentation masks (tried to use only the original masks, didn't try to pseudo-label karolinska with radboud-like masks - maybe that was the key)</li> <li>I could have forgotten to mention something else that I tried, but for sure I just didn't find the golden seed!</li> </ul> <p>What was particularly annoying is that common ensembling didn't really work, and often the inclusion of strong single models didn't improve either CV or LB, but who would expect that with QWK!</p> <p>And what is surprising, my best private submission (<strong>0.932</strong>) is a single model that wasn't even included in the ensemble.. </p> <p>At least I'm happy with a stable and robust solution that consistently gives 0.91 on local CV, public and private LB.</p> <p>Cheers, peace, bisou!</p>
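<p>A compact sketch of a gated-attention pooling block of the kind mentioned above; the formulation follows the common attention-based MIL recipe (tanh and sigmoid gates followed by a softmax over tiles), and the dimensions and names are assumptions, not the author's exact module:</p> <pre><code>import torch
import torch.nn as nn

class GatedAttentionPool(nn.Module):
    """Pools a bag of tile features into one vector with learned gated weights."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.v = nn.Linear(dim, hidden)
        self.u = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, feats):                            # feats: (batch, n_tiles, dim)
        gate = torch.tanh(self.v(feats)) * torch.sigmoid(self.u(feats))
        weights = torch.softmax(self.w(gate), dim=1)     # (batch, n_tiles, 1)
        return (weights * feats).sum(dim=1)              # (batch, dim)

pool = GatedAttentionPool(dim=512)
bag = torch.randn(4, 36, 512)                            # 36 tile features per slide
pooled = pool(bag)                                       # (4, 512)
</code></pre>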
Prostate cANcer graDe Assessment (PANDA) Challenge
[11th place solution] I have survived in this storm
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you so much</p> <p>You are welcome.</p> <p>Thanks</p> <p>Thank you</p> <p>Thank you so much, you are very welcome.</p> <p>Thanks</p> <p>Thank you</p> <p>Thanks</p> <p>You're very welcome</p> <h2>Summary</h2> <ul> <li>Tile extraction is based on my <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/146855" target="_blank">public pipeline</a> with <strong>128x128x128</strong> tiles from the intermediate resolution layer</li> <li>Label noise removal gives a <strong>~0.005 public and 0.01+ private LB boost</strong></li> <li>tile cutout + tile selection augmentations</li> <li><a href="https://arxiv.org/pdf/1509.07107v2.pdf" target="_blank">kappa loss</a></li> <li>majority voting ensemble of 8 <strong>ResNeXt50</strong> based models (<strong>0.917 public and 0.930 private LB</strong>)</li> <li>more advanced tile selection could give a <strong>~0.004 boost</strong> on the private LB on average (and a maximum private LB score of <strong>0.941</strong>)</li> </ul> <h2>Introduction</h2> <p>To begin with, I would really like to express my gratitude to the organizers and the kaggle team for making this competition possible. I really enjoyed working on it and learned many new things. Sharing some of my ideas in this competition, such as the <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/146855" target="_blank">tile pooling based pipeline</a> used by many participants, gave me 3 kernel gold medals, so I have reached the kernel grandmaster rank. And I have received my first solo competition gold medal. Also, I would like to say congratulations to all winners and people who received medals.</p> <p>However, this day is quite sad for many participants, especially the ones who worked very hard throughout the entire competition and dropped down on the private LB. The <strong>choice of the metric by the organizers could have been made more wisely</strong>: a 500+500 test set is definitely not enough for QWK. It is not really normal that the LB score changes by 0.005+ when a different seed is used. Things deteriorated when the third digit became available for the LB score: many people got seduced into overfitting the LB noise.</p> <h2>Main challenges</h2> <p>This competition to a large extent was about dealing with noisy data and train/test bias: as reported by the organizers, the Radboud train data has only about <strong>0.853</strong> QWK, and I expect that the Karolinska train data has 0.95-0.96 QWK. Beyond this, since the Radboud data is graded by students, and the Karolinska data is graded by only a single expert, while the test data is graded by 3 experts, there could be train/test bias because of the subjective opinion of the people grading the train set. Therefore, <strong>solely relying on CV was not a really good strategy in this competition</strong>: at some point I saw a consistent decrease (~10 different models) of the LB score when I ran training for longer, while CV was increasing. 
It confirms the hypothesis about the bias, and the trick was to train models only for a limited number of epochs (even if CV could still be increased), 32-48 depending on the setup, to <strong>prevent learning the bias</strong>.</p> <p>Meanwhile, the LB was also not the best thing to trust because of severe noise, but some ppl tried to fit the random seed as a hyperparameter 😄. The right thing, in my opinion, in this competition was to find the balance between CV and LB, and <strong>trust your intuition and the experience gained in previous competitions</strong>.</p> <h2>Noise</h2> <p>It is the most important part of this competition, in my opinion. After the organizers disclosed that there is a substantial level of noise, especially in the Radboud train data, I explored a number of techniques to deal with the noise: progressive label distillation, JoCoR (Joint Training with Co-Regularization), Co-teaching, negative learning, excluding hard examples from the batch, etc. However, most of them didn’t really work well here. The additional challenge is the bias between train and test and the unstable LB. The thing I found to be the best for this data is the removal of uncertain examples from the training set based on the out-of-fold predictions. I excluded ~1400 Radboud and 300 Karolinska images, so my clean training set contains about 8700 items. Relabeling the excluded images didn’t improve the performance. At the end of the competition <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/161909" target="_blank">some ppl discovered this trick as well</a>, so I got nervous about my LB position 😬 <br> <strong>This trick gave a ~0.005 public LB boost and a 0.01+ private LB boost.</strong></p> <h2>Pipeline</h2> <p>The method I have used is mainly based on my <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/146855" target="_blank">tile pooling pipeline</a> with several additional tricks:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2Fe6fe32d759a28480343001aa3c661723%2FTILE.png?generation=1588094975239255&amp;alt=media" alt=""></p> <p>Based on my public kernel, one could reach ~0.90 public and 0.91 private LB averaged (over different submissions) using the 36x256x256 tile setup and the kappa loss (see below) without any other changes.</p> <p><a href="https://arxiv.org/pdf/1509.07107v2.pdf" target="_blank"><strong>kappa loss</strong></a>: I have used one minus<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2F99945116e2c9228e352645ee5f0bdfcc%2F2.png?generation=1595472932908419&amp;alt=media" alt=""></p> <p>(both predictions and labels are centered based on the mean value of the labels; a sketch of this loss is given at the end of this post). In my experiments I found that kappa loss &gt; sorted <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/155424" target="_blank">binning loss</a> &gt; binning loss &gt; MSE &gt; CE. The only issue with the loss is that the batch size should be sufficiently large: I needed to pretrain models on low resolution and then continue training on intermediate resolution with bs = 6-8 (<strong>progressive resizing</strong>), while at bs 1-2 I couldn't get convergence. The predicted value is limited within [-0.5,5.5] as <code>yp = 6*sigmoid(p) - 0.5</code>. 
In addition, I have a CE auxiliary loss for prediction of the Gleason score with 0.08 weight.</p> <p><strong>tile cutout</strong>: <strong>Instead of using all tiles, why not randomly select a part of them</strong> (let's say 96 out of 128). This way I can use a large batch size and the model is regularized in the same way as if cutout were used. It gave me a quite good boost for CV and a fair boost at LB.</p> <p><strong>128x128x128 tiles from intermediate resolution</strong>: It appeared that many smaller tiles work better than 36x256x256. I think that it helps to select the tissue areas more effectively and at the same time prevents overfitting. </p> <p><strong>tile selection augmentation</strong>: The idea is quite simple: instead of <a href="https://www.kaggle.com/iafoss/panda-16x128x128-tiles" target="_blank">generating a single tile set</a>, I can generate 4 by adding sz/2 padding to x, y, or both before cutting the image into tiles and selecting the ones having the most tissue. So each tile in these 4 datasets will be different, but it is important not to mix tiles from them. During training I select the dataset at random, so effectively I have x4 data. It is an approximation of tile selection with a random offset each time, which would be even more effective (based on my experience in the Severstal competition), though too slow to be used with intermediate res images. I also tried TTA based on tile selection (as well as selection of the tile set with the largest tissue area out of 4), but I couldn't get any statistically significant improvement.</p> <p><strong>The above tricks gave a ~0.005 boost over my baseline if I consider multiple submissions</strong>. Though, the score from submission to submission could change quite a bit.</p> <p><strong>Advanced tile selection</strong>: In addition to my main pipeline I also tried to use the method proposed by <a href="https://www.kaggle.com/akensert" target="_blank">@akensert</a> <a href="https://www.kaggle.com/akensert/panda-optimized-tiling-tf-data-dataset" target="_blank">here</a> with 128x128 tiles (but didn't use it for my final sub). It gave ~0.934 private LB single model performance on average for 8 different single 4-fold model subs (<strong>and a maximum of 0.941 private LB</strong>) and ~0.910 average public LB (~0.912 maximum). Too bad that I didn't create an ensemble based on this method. The trick that I used in the model for training with such tiles is <strong>n-pooling</strong>: at test time I apply the pooling only to nonempty tiles (n for a particular image, while the batch may contain some extra empty tiles for padding), and training is done with random selection of 96 tiles with repetitions (so I don't consider white tiles, which can change the mean statistics at pooling).</p> <p><strong>High resolution</strong>: I tried to train several models on high res/2 resolution, 128x256x256 tiles. With tile cutout I could use batches of size 4 (and include 64 random tiles). However, the results were slightly worse than the ones for 128x128x128 tiles from the intermediate resolution layer. It indicates that <strong>going to higher resolution would likely provide only a minor boost</strong>, even if I try to optimize my pipeline for training with small batches. One idea I had is based on having two conv parts for intermediate and high res tiles. A first pass through the low res model selects tiles having the highest uncertainty. Next, the selected tiles (but in high res) are passed through the second conv part. 
The produced feature maps are downscaled twice and replace the low res feature maps that had high uncertainty. Finally, pooling and the head are applied to produce the final prediction. This method would allow keeping the overall statistics of the tiles while only correcting the ones the model is not confident about. However, the large level of noise in the training set, the noisy LB inconsistent with the train labeling, and the small potential gain, which would likely be overshadowed by the noise, prevented me from going in this direction. Also, a more complicated pipeline is more likely to break in such a competition, where there is no certain way to evaluate the performance.</p> <p><strong>Augmentation</strong>: I have used Albumentations with the following parameters:</p> <pre><code>Compose([
    HorizontalFlip(),
    VerticalFlip(),
    RandomRotate90(),
    ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.3, rotate_limit=15,
                     p=0.9, border_mode=cv2.BORDER_CONSTANT),
    OneOf([  # off in most cases
        MotionBlur(blur_limit=3, p=0.1),
        MedianBlur(blur_limit=3, p=0.1),
        Blur(blur_limit=3, p=0.1),
    ], p=0.2),
    OneOf([  # off in most cases
        OpticalDistortion(p=0.3),
        GridDistortion(p=.1),
        IAAPiecewiseAffine(p=0.3),
    ], p=0.3),
    OneOf([
        HueSaturationValue(10,15,10),
        CLAHE(clip_limit=2),
        RandomBrightnessContrast(),
    ], p=0.3),
], p=1)
</code></pre> <p><strong>Model</strong>: All my models are based on <strong>ResNeXt50</strong>, similar to my public kernel, with the batch norm in the head replaced with Group-norm. The optimizer, best model selection based on CV, and other things are similar to my public kernel, and I was using 32-48 epochs, depending on the setup. In addition, I tried ResNet34, ResNeXt101, and EfficientNet, while all of them performed worse. I think ResNet34 may not be capable enough for this task, while ResNeXt101 is too large to train on my computer with a sufficient batch size. However, I would say that <strong>the model is the minor thing in this competition, and the main role is played by considering the noise and by optimizing the pipeline: there is no magic model, but there is hard work and a solid understanding of the task and the data</strong>.</p> <h2>Final ensemble</h2> <p>The submission that gave me the 11th place (<strong>0.930 private LB/0.917 public LB</strong>) is based on a majority voting ensemble of 8 models (4 fold) with 6 TTA. They are trained with different train/val splits and other modifications in the training procedure. On average, each of the models trained in such a manner gave <strong>~0.930 private and ~0.910 public LB</strong> single model 4-fold performance (with a <strong>maximum of 0.938 and 0.916</strong>, respectively). So I got quite a fair score, neither good nor bad luck (and my LB position almost hasn't changed). However, the large number of models was a way to survive in this storm. Another ensemble of mine, with 11 models, not selected as a final one, got 0.934 private LB. And as I mentioned above, more advanced tiling gives about a <strong>0.004 boost</strong> on private LB (with a similar public score to my main approach based on 128x128x128 tiles), with an average of <strong>~0.934</strong> and a maximum of <strong>0.941 private LB</strong>, but unfortunately, I haven't built an ensemble based on them for my final submissions.</p> <p><strong>the code snippets are available at:</strong> <a href="https://github.com/iafoss/PANDA" target="_blank">https://github.com/iafoss/PANDA</a></p> <p>And I would like to congratulate all participants and wish the best of luck in the next competitions. 
I hope some of my tricks will be useful to you.</p> <p>Thank you so much, I'm happy about it.</p> <p>You are welcome. 1) You are right, I have used 1 - k, I fixed the typo. 2) When I checked tile selection TTA, I got a slightly lower CV for several models (and a lower LB as well). It was not very rigorous, but I wouldn't expect such TTA to be important in the competition (I just put more models in the ensemble to make it more stable). Meanwhile, I was using tile selection for training because it looks to be a natural augmentation for this data. 3) I tried several things. The one that worked best for me is based on the following. I ran progressive label distillation d1 and d2, and generated soft adjusted labels as <code>l_a = (4*l_true + l_d1 + l_d2)/6</code> and <code>l_a = (6*l_true + l_d1 + l_d2)/8</code> for Radboud and Karolinska data respectively (the weights are selected to have about ~1200 and ~200 different labels for those datasets). After that I drop images with <code>abs(l_true - l_a) &gt; 0.5</code>. In addition, I dropped Radboud data with <code>abs(l_d2-l_true) &gt; 0.75</code> and the images suggested <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/148060">here</a>. I'm not sure right now about the exact numbers, but my training set included about 8700 images. The one I expected to work best, but which in reality gave lower results for the 4 4-fold models I submitted to LB (and a lower private LB on average as well), is the following. About a month before the end of the competition I had 10+ models trained with the noise removal procedure from above. I computed the mean and std of the models' predictions for each image and dropped images with <code>abs(l_avr - l_true)/std &gt; 10</code>. The idea is that <strong>even if the difference is relatively small, when the models are confident about their predictions (small std), I must exclude the image</strong>. However, <strong>if the difference is large while the models are not sure about the prediction, I'd expect it to be a hard example rather than an incorrect label</strong>. It excluded 990 Radboud and 236 Karolinska data points with the same criterion applied to both datasets. Moreover, the CV for models trained with such exclusion was ~0.93/~0.94 for the different providers (quite close, which I expected to be good), unlike the above method giving ~0.93/0.96. However, I couldn't get anything good at LB on average, not exactly sure why this method failed. 4) I started with running experiments on low res layer tiles, so single fold training takes only ~30 min and 4 folds ~2 hours. Therefore, I could run a number of them quite quickly. When, let's say, after several trials for a particular method I couldn't get CV above 0.80 (while the CV for simple training is 0.84), I just quit the method. It probably took about a day to understand and implement a new method and get preliminary results. If I got a CV comparable with the baseline method, I gave it a try for training on intermediate resolution tiles, but it's more time consuming and takes 1-2 days for 4 folds to train depending on the method. 5) It's quite an interesting observation. I've never tried to downsize large images by 4 times instead of using the intermediate layer. Also, I would rather expect that the more information is provided, the better the model performance, unless there are some other things related to bs, loss behavior, etc.</p> <p>Thank you.</p> <p>Thank you so much, I really appreciate it.</p> <p>Thank you. I'm really sorry that you dropped so much at private LB. 
Getting competition GM is a little bit far away for me, but I made one of the required steps - solo gold (it was my main objective in this competition). </p> <p>You are welcome, I'm really glad that my idea about tiling worked so well in this competition.</p> <p>You are very welcome. Thanks, it happens, with such an unstable LB and train/test mismatch even staying in the gold range was not that simple. </p> <p>For 4-fold CV on low res I have 0.872 for the sorted binning loss vs 0.880 for kappa (pay attention that I exclude noisy images). For a single fold at intermediate res I got 0.950 (0.952 with individual threshold adjustment, but I'd expect it's just overfitting the val during adjustment) vs 0.950. In terms of LB performance I had only a single fold sub for intermediate res images: 0.92922/0.90120 vs. 0.93299/0.91343 (for the same config but the kappa loss). So the difference at private is not that huge, and the results are likely affected a lot by noise, but the low public score demotivated me from looking more into the binning loss when it was posted. My expectation is that those two losses give nearly the same result, while the kappa loss is slightly better.</p> <p>x is the output from the model. Try bs 64; in my procedure I first did pretraining on low res and then used intermediate res with bs = 8. Though, even with bs = 16 there should be convergence, not like you are showing. Did you try just to do regular training with lr ~1e-3? Also, if you use sigmoid, the predicted labels in the metric evaluation should be computed as lp = (Ng*torch.sigmoid(x.float()).view(-1)).long().</p> <p>The model outputs a single value per image if you want to use this loss (regression), so you get the same number of elements in your output and target. For the code above, the target is just isup_grades.</p> <p>Thanks</p> <p>I have used the following one:</p> <pre><code>y_shift = df.isup_grade.mean()
Ng = 6

def Kloss(x, target):
    x = Ng*torch.sigmoid(x.float()).view(-1) - 0.5
    target = target.float()
    return 1.0 - (2.0*((x-y_shift)*(target-y_shift)).sum() - 1e-3)/\
        (((x-y_shift)**2).sum() + ((target-y_shift)**2).sum() + 1e-3)
</code></pre> <p>Though, without sigmoid it should also work. What is your bs?</p> <p>Thank you.</p> <p>It seems that you are referring to a <strong>different kappa loss</strong> that I haven't tried. If you want to try mine, make sure that you are using centered yp and y in the loss based on the mean value of the labels (see my writeup) and <code>yp = 6*sigmoid(p) - 0.5</code>. I expect you are interested in a <strong>run without noisy labels removal</strong>, otherwise the CV is different. I ran such training only <strong>with a nearly basic setup ~3 months ago</strong>. 
In my run below the things are nearly identical to my public kernel, but bs = 64, and the loss is kappa + 0.1*aux (Gleason score), [fold 0 in my kernel]: <code> 0 0.619686 0.726757 0.441646 01:10 1 0.503786 0.420728 0.734977 01:05 2 0.446397 0.388014 0.749429 01:04 3 0.421686 0.371897 0.765389 01:05 4 0.424101 0.440600 0.702009 01:06 5 0.388839 0.349748 0.775638 01:06 6 0.387164 0.497251 0.642067 01:06 7 0.364439 0.363708 0.763292 01:06 8 0.367793 0.471586 0.694158 01:06 9 0.365068 0.383838 0.752568 01:06 10 0.359331 0.413165 0.738522 01:06 11 0.339751 0.343124 0.783672 01:06 12 0.325875 0.324556 0.789526 01:06 13 0.335336 0.349040 0.776613 01:06 14 0.335342 0.606504 0.568188 01:06 15 0.333862 0.304566 0.805746 01:06 16 0.315612 0.302613 0.808751 01:06 17 0.298632 0.285233 0.826465 01:07 18 0.293765 0.318087 0.801963 01:06 19 0.287576 0.295604 0.811252 01:06 20 0.275854 0.303913 0.806941 01:06 21 0.266260 0.291326 0.812348 01:07 22 0.257481 0.347336 0.778186 01:07 23 0.241969 0.262429 0.836433 01:07 24 0.237443 0.264275 0.835410 01:07 25 0.221909 0.263148 0.837879 01:07 26 0.215041 0.251062 0.846085 01:07 27 0.206626 0.257341 0.840549 01:07 28 0.193016 0.248696 0.847726 01:08 29 0.188344 0.258596 0.844116 01:07 30 0.178238 0.251107 0.846781 01:08 31 0.173396 0.246989 0.850495 01:08 32 0.164201 0.245519 0.848950 01:08 33 0.161503 0.242258 0.852881 01:08 34 0.153965 0.240758 0.853131 01:08 35 0.157219 0.242864 0.853106 01:08 </code> Next I took the produced model and continued training it on 36x256x256 setup with bs = 6 and max lr of (1e-4,1e-3) [It's nearly my first run on intermediate res without any optimization]: <code> epoch train_loss valid_loss d_kappa_score kappa_k kappa_r time 0 0.344915 0.283919 0.815302 0.791614 0.782375 09:47 1 0.335626 0.285945 0.819567 0.763080 0.806630 09:46 2 0.379465 0.268116 0.829591 0.842391 0.774673 09:48 3 0.316176 0.233314 0.854984 0.860656 0.807655 09:50 4 0.292933 0.361592 0.745009 0.664846 0.725402 09:49 5 0.332429 0.244246 0.846598 0.859488 0.799022 09:54 6 0.281213 0.257858 0.830744 0.812808 0.802393 09:52 7 0.283063 0.226978 0.860622 0.841165 0.833361 09:52 8 0.244150 0.212531 0.871141 0.873798 0.833969 09:55 9 0.252861 0.231499 0.852559 0.857308 0.810779 09:52 10 0.303027 0.253684 0.839478 0.855612 0.786137 09:53 11 0.279593 0.214638 0.868501 0.869391 0.834018 09:55 12 0.250773 0.203354 0.878194 0.886358 0.837234 09:56 13 0.239266 0.203964 0.877210 0.887096 0.836048 09:58 14 0.259900 0.193430 0.881883 0.889351 0.847430 09:58 15 0.190317 0.215813 0.863405 0.861846 0.830002 10:01 16 0.268994 0.207127 0.875795 0.881434 0.832446 10:03 17 0.157652 0.210765 0.866115 0.870783 0.833130 10:05 18 0.220389 0.195599 0.884750 0.882381 0.855319 10:07 19 0.207806 0.190755 0.883505 0.888639 0.850455 10:08 20 0.188866 0.204237 0.874564 0.883008 0.833871 10:10 21 0.186228 0.202038 0.878626 0.887706 0.838257 10:10 22 0.226118 0.236399 0.848630 0.874477 0.778278 10:10 23 0.174489 0.184914 0.888356 0.896993 0.849679 10:09 24 0.198199 0.202828 0.877724 0.870652 0.852208 10:09 25 0.174334 0.176265 0.893822 0.906256 0.857446 10:10 26 0.175599 0.179044 0.892619 0.901621 0.858356 10:02 27 0.165271 0.177503 0.895425 0.904506 0.860751 09:58 28 0.145186 0.181107 0.888282 0.895656 0.853604 09:55 29 0.158865 0.179561 0.892518 0.895663 0.862738 09:57 30 0.192933 0.182072 0.894250 0.905655 0.857467 09:56 31 0.166352 0.171575 0.899262 0.900083 0.873222 10:00 32 0.133393 0.185716 0.886050 0.896071 0.850738 09:59 33 0.145026 0.190942 0.880710 0.891096 0.844466 10:00 34 0.133870 
0.170243 0.897480 0.907618 0.863825 10:02 35 0.122654 0.169633 0.901467 0.911328 0.868964 10:04 36 0.130831 0.178497 0.893576 0.904467 0.856378 10:07 37 0.135886 0.172254 0.898089 0.908669 0.864243 09:53 38 0.140824 0.167994 0.900024 0.912449 0.864562 09:55 39 0.120919 0.168626 0.898917 0.911365 0.864052 09:56 40 0.111123 0.166505 0.900856 0.908967 0.869388 09:58 41 0.140773 0.167067 0.899353 0.908505 0.866439 10:03 42 0.122937 0.168974 0.900271 0.911877 0.866887 10:03 43 0.127722 0.170843 0.900197 0.909060 0.867776 10:06 44 0.109837 0.168787 0.901611 0.908684 0.871180 09:51 45 0.110626 0.165956 0.902472 0.912478 0.869869 16:13 46 0.117968 0.167239 0.901686 0.910909 0.869625 09:55 47 0.123976 0.166429 0.902506 0.913288 0.869211 09:56 </code></p> <p>You are very welcome</p> <p>You are welcome</p> <p>Read the description provided to the repo, it includes the links.</p> <p>You can check the <a href="https://github.com/iafoss/PANDA" target="_blank">my github repo</a> that provides the code I used for training my final models and links to different inference kernels.</p>
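<p>To make the tile extraction and the tile-selection augmentation described above more concrete, below is a minimal sketch (not the exact code from the repository) of how four alternative tile sets can be produced by shifting the tiling grid by sz/2 in x, y, or both, and keeping the N tiles with the most tissue from each grid. Function names and defaults here are illustrative only.</p> <pre><code>import numpy as np

def tile_image(img, sz=128, N=128, off_x=0, off_y=0):
    # cut one slide into sz x sz tiles and keep the N tiles with the most tissue;
    # off_x/off_y shift the tiling grid (0 or sz//2) - this is the augmentation
    H, W, C = img.shape
    pad_h = (sz - (H + off_y) % sz) % sz
    pad_w = (sz - (W + off_x) % sz) % sz
    img = np.pad(img, [[off_y, pad_h], [off_x, pad_w], [0, 0]],
                 constant_values=255)                 # white background
    tiles = img.reshape(img.shape[0] // sz, sz, img.shape[1] // sz, sz, C)
    tiles = tiles.transpose(0, 2, 1, 3, 4).reshape(-1, sz, sz, C)
    if tiles.shape[0] &lt; N:                            # small slide: pad with white tiles
        extra = np.full((N - tiles.shape[0], sz, sz, C), 255, dtype=tiles.dtype)
        tiles = np.concatenate([tiles, extra], 0)
    # darker tiles contain more tissue, so keep the N tiles with the lowest mean intensity
    idx = np.argsort(tiles.reshape(tiles.shape[0], -1).mean(-1))[:N]
    return tiles[idx]

def tile_sets(img, sz=128, N=128):
    # four alternative tile sets; during training one of them is picked at random
    return [tile_image(img, sz, N, dx, dy)
            for dy in (0, sz // 2) for dx in (0, sz // 2)]
</code></pre>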
Prudential Life Insurance Assessment
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prudential Life Insurance Assessment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Only the mean and the minimum of the 48 keyword_value_meantarget values belonging to each observation, but there was a different target for each of the 7 classifiers (see first part of Modeling in the description).</p> <p>[quote=idle_speculation;108393]</p> <p>Congrats!</p> <p>If you don't mind me asking, what made you consider the variable split calibrations?</p> <p>[/quote]</p> <p>While creating a regression-like summed prediction for Response with the XGBClassifier combo is good for subsequent calibration, two observations with similar predicted values would be calibrated together, even though they might have different Response probability distributions, and have different 'most probable Response' values.</p> <p>What I wanted to achieve with split calibration was to use a binary variable where the two variable values had different Response distribution (especialy in the middle Response values) to try and calibrate separately some of the observations with similar summed predictions but different Response probabilities.</p> <p>Alternately, it can be viewed as a method of generating ensembled predictions by doing several different split calibrations on the same raw preds and combining them, but (apart from doing a couple of simple median combinations) I did not pursue this line of thought very far.</p> <p>Hello all, here's my writeup. Hope you find it insightful (I certainly did learn a lot in the course of the competition)!</p> <p><strong>Feature engineering:</strong></p> <ul> <li>create dummy vars for Product_Info_2 (keep everything else as numeric)</li> <li>calculate sum of all Medical_Keyword columns</li> <li>for each binary keyword-value pair, calculate the mean of the target variable, then for each observation take the mean and the minimum of the keyword-value-meantargets</li> </ul> <p><strong>Modeling:</strong></p> <ul> <li>for i in 1 to 7: build an XGBClassifier to predict the probability that the observation has a Response value higher than i (for each of the seven iterations, the keyword-value-meantarget variables were calculated for that target variable)</li> <li>for each observation, take the sum of these seven predicted probabilities as the overall prediction</li> <li>this yields quite a bit better correlation with the target variable (and thus good raw material for calibration) than using an XGB regressor</li> </ul> <p><strong>Calibration:</strong></p> <ul> <li>the aim is to find the boundaries that maximize the kappa score</li> <li>boundaries are initialized according to the original Response distribution of the training dataset</li> <li>then in a step, for all boundaries, possible boundary values are examined in a small range around the current boundary value and the boundary is set to the value which gives the most improvement in kappa (independently of the other boundaries - this was surprising that it worked so well)</li> <li>steps are repeated until none of the boundaries are changed during a step</li> <li>it is a quite naive algorithm, but it turned out to be fairly robust and efficient</li> <li>this was done on predictions generated by repeated crossvalidation using the 
XGBClassifier combo</li> </ul> <p><strong>Variable split calibration:</strong></p> <ul> <li>the difference here is that the crossvalidated preds are split into two subsets, based on some binary variable value (eg. a Medical_Keyword variable) of the observations</li> <li>calibration then takes place for the two subsets separately (but with a kappa objective calculated over the entire set), in the manner described above</li> <li>I didn't find an exact rule for picking a good splitting variable (strong correlation with Response seems to be necessary, but does not guarantee a good split), so I tried several (some of which were better than non-splitting calibration, others were worse)</li> <li>for example, some good ones were: Medical_History_23, Medical_History_4, InsuredInfo6</li> <li>also tried splitting into more than 2 subsets, without much success</li> </ul> <p><strong>Ensembling:</strong></p> <ul> <li>disregarding the combination of the 7 XGBClassifiers, the only ensembling I did was creating some combined solutions by taking the median predictions of a small number of other solutions</li> </ul> <p><strong>Evaluating calibrations:</strong></p> <ul> <li>K-fold crossvalidation, but with an important twist: each test fold was &quot;cross-validated&quot; again to imitate public/private test set split (the inner crossvalidation had a k of 3 to approximate the 30-70 leaderboard split)</li> <li>this yielded a very interesting insight: given two calibrations with roughly equal average performance (over all folds), if calibration A does better on the public test set, calibration B is very likely to outperform A on the private set (this appears to be a quirk of the kappa metric)</li> <li>accordingly, I picked the solutions which ranked #2 and #5 on the public leaderboard, since these both had very strong average performance in crossvalidation but slightly underperformed on the public leaderboard</li> </ul> <p><strong>Final results:</strong></p> <ul> <li>as it turned out, despite having the right idea about public/private error, I underestimated some solutions which had relatively weak average performance in crossvalidation but ended up doing extremely well on private</li> <li>I did not select my best private submission for the final two (highest private score was 0.68002)</li> <li>out of my 11 'high-tech' (that is, using all the modeling and calibration techniques listed above) submissions, 5 were good enough for 1st place on the private board, 4 would place 2nd, one would reach 6th, and the worst would yield 7th place (at least I can say that I had no intention of picking any of the latter two)</li> <li>if my calculations are right, randomly selecting two out of the 11 would have resulted in 1st place with a probability of ~72.7 %</li> </ul> <p>G&#225;bor</p>
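<p>For readers who want to reproduce the calibration idea, here is a minimal sketch of the naive boundary search described above, assuming a <code>quadratic_weighted_kappa(y_true, y_pred)</code> implementation is available and <code>preds</code> are the summed probabilities of the seven classifiers. The step size, search radius, and iteration cap are illustrative, not the values used in the winning solution.</p> <pre><code>import numpy as np

def to_classes(preds, bounds):
    # class = 1 + number of boundaries lying below the prediction
    return 1 + (preds[:, None] &gt; np.asarray(bounds)[None, :]).sum(axis=1)

def calibrate(preds, y_true, kappa, step=0.02, radius=10, max_iter=50):
    # initialize the 7 boundaries from the Response distribution of the training set
    cum = np.cumsum(np.bincount(y_true, minlength=9)[1:8]) / float(len(y_true))
    bounds = np.percentile(preds, 100 * cum)
    for _ in range(max_iter):
        changed = False
        for j in range(len(bounds)):
            best, best_score = bounds[j], kappa(y_true, to_classes(preds, bounds))
            # examine a small range of values around the current boundary
            for b in bounds[j] + step * np.arange(-radius, radius + 1):
                trial = bounds.copy()
                trial[j] = b
                score = kappa(y_true, to_classes(preds, trial))
                if score &gt; best_score:
                    best, best_score = b, score
            if best != bounds[j]:          # each boundary is updated independently of the others
                bounds[j] = best
                changed = True
        if not changed:                    # stop when a full pass changes nothing
            break
    return bounds
</code></pre> <p>Variable split calibration then amounts to running the same search separately on the two subsets defined by a binary feature, while the kappa objective is still computed over the entire set.</p>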
Prudential Life Insurance Assessment
3rd place solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prudential Life Insurance Assessment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@Gabor S &amp; pythonomic</p> <p>I'm sorry to disappoint, but my cross-validation strategy is quite ordinary. I broke the training set into 17 folds, trained on 16 and used the out-of-sample prediction on the remaining fold. Perhaps my conviction that the public leaderboard was unreliable made the difference.</p> <p>Some monte-carlo simulations of the variance in the demominator of Quadratic Weighted Kappa were what convinced me. I tested sampling at size similar public leaderboard and found the standard deviation in the results was around 1%. That may sound small, but it's the difference between 1st place and 139th place in the final standings.</p> <p>Since there were obvious differences between the train and test set, I can't really fault anyone for trying to fit the leaderboard. I know that I spent quite a bit of time convincing myself that the training set provided no evidence that the distribution of response in the test set was any different.</p> <p>First off, luck was a big factor in this competition so let me try and alleviate some guilt:</p> <p>Qingchen, I'm sorry for edging you out of 3rd place so narrowly. From your post, clearly you put serious effort into building out your ensemble. I know how frustrating it is to wish you'd just built one more tree into a model. Hang in there. Next time the dice could roll in your favor.</p> <p>As for my solution, there are two logical parts. The first part computes class probabilities for each of the eight response levels. The second part maximizes the expected value of the Quadratic Weighted Kappa (QWK) for the given set of probabilities.</p> <p><strong>Class Probabilities:</strong> These were computed through stacking, so let me describe from the bottom up.</p> <p><strong>attributes</strong></p> <hr> <ol> <li>base attributes, number of keywords, a few other things suggested on the forum</li> <li>2D tnse embedding</li> <li>the 2D embedding generated by a 4096-256-16-2-16-256-4096 autoencoder</li> <li>The first 30-dimensions of a SVD decomposition of the categorical features</li> <li>kmeans clustering with 64 centers</li> <li>quadratic interactions selected by lasso mse regression</li> <li>nodes of a 256-tree 50-node random forest selected by lasso mse regression</li> </ol> <p><strong>level 1 models</strong></p> <hr> <ul> <li>tree based models: 8 xgboost models minimizing: mse, possion, multinomial, mae*, tukey*, or QWK* loss. </li> <li>knn: 8 k-nearest neighbor models with k from 50-1000</li> <li>neural nets: 6 neural networks minimizing: mse, mae, multinomial, or QWK* loss</li> <li>linear: 1 lasso mse regression</li> </ul> <p><strong>level 2 models</strong></p> <hr> <ul> <li>both use the level 1 models as inputs</li> <li>multinomial xgb</li> <li>multinomial neural net</li> </ul> <p><strong>QWK optimization:</strong> Uses the average of the level 2 models as the class probabilities.</p> <ol> <li>randomly select a category 1-8 for every test example</li> <li>iterate through the test examples one by one and change each class to whatever maximizes the expected value of the QWK based on the probabilites of the level 2 models.</li> <li>repeat step 2. 
over the entire test set until convergence is reached</li> <li>submit result</li> </ol> <p><strong>*Note:</strong> Some of the loss functions are custom implementations. For xgboost these are implemented in C++ and for the neural nets they are implemented in either Keras or Lasagne. </p>
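<p>The QWK-optimization step can be written down compactly. Below is a hedged sketch (my own reconstruction, not the author's code) of the coordinate-ascent procedure: start from random labels, then repeatedly set each test example to the class that maximizes the expected value of QWK under the averaged level-2 class probabilities <code>P</code>. Classes are 0-indexed here, so 1 is added before submission; <code>n_rounds</code> is an illustrative cap.</p> <pre><code>import numpy as np

K = 8
W = (np.arange(K)[:, None] - np.arange(K)[None, :]) ** 2 / float((K - 1) ** 2)  # QWK weight matrix

def maximize_expected_qwk(P, n_rounds=20):
    # P: (N, K) class probabilities from the averaged level-2 models
    N = P.shape[0]
    labels = np.random.randint(K, size=N)        # 1. random initial assignment
    h_true = P.sum(axis=0)                       # expected histogram of the true labels
    for _ in range(n_rounds):
        changed = 0
        h_pred = np.bincount(labels, minlength=K).astype(float)
        num = sum(W[labels[i]] @ P[i] for i in range(N))   # expected weighted disagreement
        for i in range(N):                       # 2. sweep through the test examples one by one
            c_old = labels[i]
            nums = num - W[c_old] @ P[i] + W @ P[i]        # numerator for each candidate class
            h_trial = np.repeat(h_pred[None, :], K, axis=0)
            h_trial[:, c_old] -= 1
            h_trial[np.arange(K), np.arange(K)] += 1
            dens = (h_trial @ W @ h_true) / N              # expected-chance denominator
            c_new = int(np.argmin(nums / dens))            # expected QWK = 1 - num/den
            if c_new != c_old:
                num = nums[c_new]
                h_pred[c_old] -= 1
                h_pred[c_new] += 1
                labels[i] = c_new
                changed += 1
        if changed == 0:                         # 3. repeat until convergence
            break
    return labels + 1                            # map back to Response values 1..8
</code></pre>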
Prudential Life Insurance Assessment
Solution -- 2nd place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prudential Life Insurance Assessment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The solution can be found here <a href="https://github.com/zhurak/kaggle-prudential">https://github.com/zhurak/kaggle-prudential</a></p> <p>Of course :). In one or two days.</p> <p>I tuned them manually. Usual I start from eta = 0.01, subsample = 0.5, colsample_bytree = 0.5 and then try to improve the validation score according to intuition :). So it is simple few steps to get a more or less stable CV score.</p> <p>Hello to everyone! It was an amazing experience! Never thought that a simple Linear Regression could be the best :)</p> <p><strong>Main insight: this dataset has no respect to your efforts and overfits whatever you do. So it is better to concentrate on robustness then accuracy.</strong></p> <p>If shortly my final model was the following:</p> <p><strong>1. Initial Feature Engineering (taken from public scripts):</strong></p> <pre><code>all_data['Product_Info_2_char'] = all_data.Product_Info_2.str[0] all_data['Product_Info_2_num'] = all_data.Product_Info_2.str[1] all_data['Product_Info_2'] = pd.factorize(all_data['Product_Info_2'])[0] all_data['Product_Info_2_char'] = pd.factorize(all_data['Product_Info_2_char'])[0] all_data['Product_Info_2_num'] = pd.factorize(all_data['Product_Info_2_num'])[0] all_data['BMI_Age'] = all_data['BMI'] * all_data['Ins_Age'] med_keyword_columns = all_data.columns[all_data.columns.str.startswith('Medical_Keyword_')] all_data['Med_Keywords_Count'] = all_data[med_keyword_columns].sum(axis=1) all_data.apply(lambda x: sum(x.isnull()),1) all_data['countna'] = all_data.apply(lambda x: sum(x.isnull()),1) </code></pre> <p><strong>2. Next step was to calculate prediction for <code>y=1,2,3,4,5,6,7,8</code> and <code>y&lt;3,&lt;4,&lt;5,&lt;6,&lt;7</code>.</strong> </p> <p>To calculate these probabilities I used an ensemble from one <code>binary:logistic</code> Xgboost, one <code>multi:softprob Xgboost</code>, one <code>Random Forest</code> and one <code>LogisticRegression</code> (I tried a lot of ensembles but they didn't help). So overall it added me 13 additional features.</p> <p><strong>3. Linear Regression!</strong></p> <p>Here was the place when mystery began! I tried a huge amount of regressors and ensembles of them but they never beat the simplest Linear Regression! That puzzle really got me and made me confused for a few days until I realized that all &quot;work&quot; is done during probability calculations. So I decided to concentrate on them hopefully that my model at least won't overfit due to linear model.</p> <p><strong>4. The function to search for cutoffs.</strong> Until last day I used the cutoff function which suggested someone on the forum. 
It's the one with these code:</p> <pre><code>def apply_offset(data, bin_offset, sv, scorer=eval_wrapper): data[1, data[0].astype(int)==sv] = data[0, data[0].astype(int)==sv] + bin_offset score = scorer(data[1], data[2]) return score offsets = np.ones(num_classes) * 1.3 offset_train_preds = np.vstack((train_preds, train_preds, X_train['Response'].values)) for j in range(num_classes): train_offset = lambda x: -apply_offset(offset_train_preds, x, j) offsets[j] = minim(train_offset, offsets[j], disp = False) </code></pre> <p>But then I noticed that it is not monotonic so I changed the function to the simple one:</p> <pre><code>def digit((x1,x2,x3,x4,x5,x6,x7)): res = [] for y in list(train_preds): if y &lt; x1: res.append(1) elif y &lt; x2: res.append(2) elif y &lt; x3: res.append(3) elif y &lt; x4: res.append(4) elif y &lt; x5: res.append(5) elif y &lt; x6: res.append(6) elif y &lt; x7: res.append(7) else: res.append(8) return res def train_offset((x1,x2,x3,x4,x5,x6,x7)): res = digit((x1,x2,x3,x4,x5,x6,x7)) return -quadratic_weighted_kappa(y_train, res) x0 = (1.5,2.9,3.1,4.5,5.5,6.1,7.1) offsets = fmin_powell(train_offset, x0, disp = False) </code></pre> <p>In this case the one can get the monotonic cutoffs which is not so overfitted as apply_offset.</p> <p>Actually thats all. A plenty of work but eventually the simplest the best :)</p> <p>P.S. Don't hesistate to ask me any question.</p> <p>[quote=Andreu;108313]</p> <p>Great, congratulations and many thanks for sharing! Just want little doubt: may you possibly elaborate a little more why it was so important for the cutoffs to be monotonic, please?</p> <p>[/quote]</p> <p>It's just essential for them to be monotonic since target variable is ordinal. I mean that you don't want to search for your 8s in your [1,2] interval output, for example. You would want to search it in [6,8] interval at least.</p> <p>[quote=SkyLibrary;108315]</p> <p>Congrats @Bohdan Zhurakovskyi </p> <p>Two questions, for step 2, are you performing a two stage learning with 5-cross validate out of bag prediction of y=1,2,3,4,5,6,7,8 and y&lt;3,&lt;4,&lt;5,&lt;6,&lt;7 on training data? Are you stacking probabilities together with original features to pass to linear regression in step 3 or only trained on these 13 probabilities? </p> <p>[/quote]</p> <p>I added these 13 probabilities to other features and feeded it all to linear regression. Using only 13 probabilities gives me worse result.</p> <p>[quote=skcary;108312]</p> <p>Did you train your models on the entire train set and then use those predicted probabilities? For example in R it might look something like this:</p> <pre><code>model &lt;- xgboost(data=train[,feature.names], label=train$Response,objective=&quot;binary:logistic&quot;) train$binaryLogistic &lt;- predict(model, train[,feature.names]) </code></pre> <p>Or how did you split up the train data? For example split the data in half and use one half to predict the other?</p> <p>[/quote]</p> <p>Hi, skcary. I splited train into 10 pieces and used 9 of them to predict the last. For the test dataset I used all train.</p>
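<p>As an illustration of step 2, here is a minimal sketch of how the 13 extra probability features could be produced out of fold with scikit-learn. The winning solution used an ensemble of XGBoost, Random Forest and LogisticRegression for these probabilities and a manual 10-piece split; the single LogisticRegression and <code>cross_val_predict</code> below are simplifications for clarity.</p> <pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def ordinal_probability_features(X, y, n_folds=10):
    # 8 columns: out-of-fold P(y = k) for k = 1..8 from a multiclass model
    feats = [cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                               cv=n_folds, method='predict_proba')]
    # 5 columns: out-of-fold P(y &lt; t) for t = 3..7 from binary models
    for t in range(3, 8):
        p = cross_val_predict(LogisticRegression(max_iter=1000), X,
                              (y &lt; t).astype(int), cv=n_folds,
                              method='predict_proba')[:, 1]
        feats.append(p[:, None])
    return np.hstack(feats)   # shape (n_samples, 13)
</code></pre> <p>These 13 columns are then concatenated with the original features and passed to the final Linear Regression, whose output goes through the monotonic cutoff search shown above.</p>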
Prudential Life Insurance Assessment
Solution and experience sharing - 4th place (should have been 2nd)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prudential Life Insurance Assessment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition has been full of learning experiences. For me, it was a mistake made on the second to last submission resulting in a coin-flip decision for my last submission and the subsequent loss of 2 positions plus $10000. Like a lot of other people here, I started this competition about 10 days before the end of the Homesite competition and ramped up the intensity over time. Below I describe my solution and some learning experiences. </p> <p><strong>1. The public LB was severely overfit</strong></p> <p>Everyone should have known this, but I will go into some more detail. First of all, the complete test set was 19765 observations, and it's further exacerbated by the fact that only about 30% of that (~5900 observations) were used to calculate the public LB. What made the overfitting worse is that the Kappa score heavily punishes extreme errors (e.g., prediction risk = 8 for when actual risk = 1) and this competition is filled with a ton of them. Finally, when the best public LB test script was scoring ~0.004 better than my best reg:linear xgboost model but was ~0.004 worse in my 5-fold CV score, it was clear that the public LB could not be trusted, and that even a huge gain in the public LB when there's no gain in CV should not be used.</p> <p><strong>2. Relying solely on the CV score was not necessarily ideal either</strong></p> <p>I only found this out as I was checking submission scores to compile this post. Some of the models that performed better in my CV actually performed worse in the private LB and vice versa. I used 5-fold CV by simply cutting out five similarly sized chunks from the training data. Would 10-fold and/or stratified CV have been more accurate? </p> <p><strong>3. Ensembling models gets you very, very, far</strong></p> <p>My final solution consists of 16 different xgboost models which uses 4 different methods crossed with 2 different feature engineering methods (taken directly from the public scripts) and 2 different post-processing methods (optimizing offsets and optimizing cutoffs). The methods are:</p> <ul> <li>reg:linear objective - linear regression works quite well.</li> <li>count:poisson objective - also works quite well since response variable is count from 1 to 8.</li> <li>multi:softmax objective, then multiplied the probabilities by the category values (1, 2, 3, ..., 8) to get a continuous value.</li> <li>personally developed method (I won't discuss this as it's mostly intuition and heuristics). 
This was my best performing method.</li> </ul> <p>Here are the private LB scores and expected ranks for each of the models:</p> <ul> <li>poisson offset: 0.67242 - 150th</li> <li>poisson offset second: 0.66988 - 343rd</li> <li>poisson cutoff: 0.67198 - 171st</li> <li>poisson cutoff second: 0.67207 - 168th</li> <li>linear offset: 0.66936 - 391st</li> <li>linear offset second: 0.66714 - 880th</li> <li>linear cutoff: 0.67134 - 219th</li> <li>linear cutoff second: 0.66854 - 505th</li> <li>multi offset: 0.67223 - 159th</li> <li>multi offset second: 0.67250 - 143th</li> <li>multi cutoff: 0.67203 - 169th</li> <li>multi cutoff second: 0.67268 - 130th</li> <li>personally developed offset: 0.67604 - 31st</li> <li>personally developed offset second: 0.67611 - 28th</li> <li>personally developed cutoff: 0.67584 - 32nd</li> <li>personally developed cutoff second: 0.67586 - 32nd</li> </ul> <p>As far as I knew, there were two different ways to ensemble results. One is to use offset/cutoff and rounding on each model and then ensemble the results, and the other is to ensemble the raw predictions and then use offset/cutoff and rounding on the ensemble. The first method worked better for me on the public LB (I know, poor practice) so that's what I stuck with.</p> <p>Here are the ensemble scores:</p> <ul> <li>poisson only ensemble: 0.67225 - 158th</li> <li>linear only ensemble: 0.67201 - 170th</li> <li>multi only ensemble: 0.67306 - 118th</li> <li>poisson x linear ensemble: 0.67316 - 111st</li> <li>multi + poisson + linear ensemble: 0.67622 - 25th </li> <li>multi + poisson + linear + personally developed ensemble: 0.67730 - 14th</li> <li>weighted multi + poisson + linear + personally developed ensemble: 0.67817 - 6th</li> </ul> <p>However, to be safe, I also included a solution with the second ensemble into my full ensemble, it improved me to my final private LB score.</p> <p><strong>4. Unique models can be great</strong></p> <p>Scirpus' <a href="https://www.kaggle.com/scirpus/prudential-life-insurance-assessment/genetic-programming-ii-lb-0-662/code">genetic programming model</a> was amazing. I was foolish to not have investigated it earlier, and it went into costing me 2nd place and $10000. My plan was to use my final two submissions as the selected ones, with one of them being my final ensemble and the other one including the GP model into my ensemble. Unfortunately I made a mistake with my second to last submission so I had to scratch my plan and choose one of the two options. Ultimately I chose not to include the GP model because I hadn't tested it in my CV and was afraid that it was overfit like the public scripts. That turned out to be a mistake because including it would have resulted in a private LB score of 0.67936, good for 2nd place. Moral of the story, thoroughly investigate ideas to minimize the guessing!</p> <p>[quote=Dmitriy Guller;108297] Did you optimize all 7 cutoffs at once? I tried optimizing greedily, one cutoff at a time. It actually worked very well, sometimes better than optim. It took two passes for the cutoffs to converge, and I could do it at 0.02 level of precision in a couple of minutes. [/quote]</p> <p>Yeah, I optimized all 7 cutoffs at once. In hindsight it probably would've been better to optimize them one at a time, but because the offsets were optimized one at a time I thought that it might've been better to optimize the cutoffs all at once. 
It turned out great for my CV and public LB results as my optimized cutoffs were performing around 0.001 better than the offsets, but that didn't show up in the private LB though. </p> <p>[quote=Michael Maguire;108278]</p> <p>Thanks Qingchen! I'd love to hear more about your private offset/cutoff method (whatever you are willing to share). I also spent a lot of time playing with ensembles and my best one would have finished around 12th, but I didn't choose it. I got the best results using geometric means and simple rounding (I tried to optimize the cutoffs after taking the geo mean but the CV got worse on the couple I tried so I stopped doing that). I began to distrust my geometric mean CVs because once I started getting .67-.68 CVs the public LB results became so random...I guess my lesson learned is trust my CV!! While not <em>completely</em> trustworthy on this one, I think CV increases = private LB increases much more often than not!</p> <p>[/quote]</p> <p>Geometric means would've been a cool idea to try, I hadn't thought about that. My offsets were the same as from the scripts, and for cutoff I wrote a variation of brute-force search which was basically searching at the 0.1 level precision for the optimal cutoffs. A single run of cutoff optimization took under 2 hours at that level of precision. </p> <p>[quote=Stoik;108274]</p> <p>My question on the side is: how do you know all your private LB scores already? All I can see right now seem to be the public LB scores only, even the post-deadline submissions.</p> <p>[/quote]</p> <p>Make a submission to the leaderboard now and you can see the private LB score of that submission.</p> <p>[quote=xaviercapdepon;108357]</p> <p>@ Quingchen: I have a question about the weighted avg (I may be stupid): how do you determine that the mean or weighted mean is best result over all ? Are you keeping an out of fold ? </p> <p>[/quote]</p> <p>I simply assigned greater weights to the models that had better CV performance, nothing special there. I also tested the ensemble weights a bit using my CV evaluations and it showed that only a little bit of extra weight for the better models was sufficient.</p> <p>[quote=ucisee;108327]</p> <p>How did you ensemble the different models? Did you take the mean of the different models?</p> <p>[/quote]</p> <p>I took a weighted mean, where the better performing models got a slightly higher weight, but overall very basic.</p>
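<p>Two of the building blocks above are easy to sketch: turning the multi-class probabilities into a continuous value by multiplying them by the category values, and the weighted mean of the raw predictions before the offset/cutoff step. This is an illustrative reconstruction, not the author's code, and all names are placeholders.</p> <pre><code>import numpy as np

def expected_value_score(model, X, classes=np.arange(1, 9)):
    # model: e.g. a fitted xgboost.XGBClassifier trained on the 8 Response classes
    proba = model.predict_proba(X)        # shape (n_samples, 8)
    return proba.dot(classes)             # sum over k of P(class k) * k, a continuous prediction

def weighted_blend(list_of_preds, weights):
    # weighted mean of raw continuous predictions; better-CV models get slightly larger weights
    P = np.column_stack(list_of_preds)
    w = np.asarray(weights, dtype=float)
    return P.dot(w / w.sum())
</code></pre>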
Psychopathy Prediction Based on Twitter Usage
My solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Psychopathy Prediction Based on Twitter Usage <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey everyone, though I didn't do that great, I thought I'd share my own solution and attempts. (Sometimes knowing what doesn't work can be more valuable!)</p> <p>&nbsp;</p> <p><a href="http://camdp.com/blogs/predicting-psychopathy-using-twitter-data">http://camdp.com/blogs/predicting-psychopathy-using-twitter-data</a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>btw my website is like brand new, so mind any bugs.&nbsp;</p>
Quick, Draw! Doodle Recognition Challenge
11th place solution with limited hardware resources up to 2xP40
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Good question, I think that it depends not only on engineering skills but also on hardware resources. From my point of view, I picked our best models based on input size, batch size, model architecture, and training tricks. Also, there is a good paper helping to choose a model architecture: <a href="https://arxiv.org/pdf/1810.00736.pdf">https://arxiv.org/pdf/1810.00736.pdf</a></p> <p>Yeah, even though I'm Vietnamese, he has been my idol for almost 20 years.</p> <p>It's like class_weight in Keras' fit_generator function, where class_weight was the inverted class distribution of the training dataset.</p> <p>I wish one day we could take part in a challenge as a team!</p> <p>Our solution is pretty simple. <strong>1. CNN</strong> - We tried some relatively small models with 100k/class and 64x64, 128x128 and 224x224 input size first. - Then we retrained with full data + weighted loss on se-resnext101 (.944), se-resnext50 (.943), se-resnet50 (.942), resnet50 (.942), densenet169 (.939), xception (.938), densenet121 (.934) with batch size &gt;= 400, 128x128 input size. - It took 4 weeks.</p> <p><strong>2. RNN</strong> - We tried some public kernels and modified them (deeper, bigger and stronger, replacing LSTM by attention/GRU, using the timestamp from the raw data) with 100k/class first. - Then we trained the best of them with full data and got best results around 0.93x. - It took nearly 2 weeks.</p> <p><strong>3. Inference</strong> - TTA (hflip) + Ensemble (0.8 * CNN + 0.2 * RNN) + optimization (Secret Sauce + magic wand). - Private LB without optimization: 0.94701</p> <p><strong>4. Our weakness</strong> - As the title says, we were limited in hardware, so we were not able to try bigger input sizes with a big enough batch size. - We did not find a suitable way of filtering the unrecognized images out of the training dataset. - Relying only on the optimization made our model somewhat overfit. However, the best model on the public LB was our best model on private LB, too. Big thanks to God for that!</p>
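<p>The inference step above is essentially a weighted average of the CNN and RNN class probabilities followed by a top-3 (MAP@3) submission. A minimal sketch is given below; the array names are illustrative, and the hflip TTA would already be averaged into <code>cnn_probs</code> before this step.</p> <pre><code>import numpy as np
import pandas as pd

def blend_and_top3(cnn_probs, rnn_probs, class_names, key_ids, w_cnn=0.8, w_rnn=0.2):
    # weighted ensemble of CNN and RNN probabilities, shape (n_samples, 340)
    probs = w_cnn * cnn_probs + w_rnn * rnn_probs
    # MAP@3 submission: the three most probable classes per drawing, space separated
    top3 = np.argsort(-probs, axis=1)[:, :3]
    words = [' '.join(class_names[c] for c in row) for row in top3]
    return pd.DataFrame({'key_id': key_ids, 'word': words})
</code></pre>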
Quick, Draw! Doodle Recognition Challenge
12th place code release
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone, we are releasing our solution to the Doodle challenge. It's located under the <a href="https://github.com/liaopeiyuan/ml-arsenal-public">ml-arsenal-public</a> repository, where in the future you may find the code to all of my high scoring-solutions. The solution journal will be released in the future, but in short, we used various models, encoding methods, input sizes, and a careful ensemble (done by @ShenTao) to stabilize the score.</p>
Quick, Draw! Doodle Recognition Challenge
14th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, I used validation MAP@3 to choose the learning rate.</p> <p>Because I joined this competition with just two weeks left, I decided to train only a basic classifier. So it was really lucky to have achieved a relatively high score.</p> <p>After reading the discussions (especially <a href="https://www.kaggle.com/hengck23">Heng</a>'s posts), I decided to use the following settings.</p> <h2>Dataset</h2> <ul> <li><em>Train</em>: All images, including unrecognized ones</li> <li><em>Validation</em>: 500 images per class</li> <li>3 channels, containing 1/3, 2/3, 3/3 of the total strokes respectively.</li> <li>224 x 224</li> </ul> <h2>Training</h2> <ul> <li>cross entropy loss</li> <li>batch size: 256 for se_resnext50, 128 for se_resnext101 and xception.</li> <li>adam optimizer</li> <li>learning rate 0.00025</li> <li>reduce the learning rate by half when MAP@3 has stopped improving</li> <li>no augmentation</li> </ul> <h2>Inference</h2> <ul> <li>average the last ten weights (I saved checkpoints every 5000 steps)</li> <li>horizontal flip tta with weight 0.5</li> </ul> <p>In total I trained 3 models:</p> <ul> <li>se_resnext50, se_resnext101, xception</li> </ul> <p>All of them have similar Public LB scores, 0.947x. After ensembling all of them: Public LB 0.950x.</p> <p>Congratulations to the winners and thanks to all the participants!</p> <ul> <li>I used resnet34 with 128x128 input to explore a proper learning rate. I tried 0.001, 0.0005, 0.00025, 0.0001. After training for 1 hour each, the best one was selected.</li> <li>I chose the largest batch size available. I have 4 Titan X, but the available GPUs varied because of my work. I didn't try gradient accumulation.</li> <li><a href="https://github.com/Cadene/pretrained-models.pytorch">Cadene/pretrained-models.pytorch</a> is used.</li> </ul>
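<p>The "average the last ten weights" trick amounts to element-wise averaging of the saved checkpoints before inference (similar in spirit to SWA). A minimal PyTorch sketch, with illustrative names, could look like this:</p> <pre><code>import torch

def average_checkpoints(paths):
    # paths: the last ten saved state_dict files
    avg = None
    for path in paths:
        state = torch.load(path, map_location='cpu')
        if avg is None:
            avg = {k: v.clone() for k, v in state.items()}
        else:
            for k, v in state.items():
                if v.is_floating_point():
                    avg[k] += v   # integer buffers (e.g. num_batches_tracked) stay as in the first checkpoint
    for k, v in avg.items():
        if v.is_floating_point():
            avg[k] = v / len(paths)
    return avg

# model.load_state_dict(average_checkpoints(last_ten_checkpoint_paths))
</code></pre> <p>The horizontal-flip TTA with weight 0.5 then presumably means adding the prediction on the flipped image with half the weight, i.e. <code>probs = p(img) + 0.5 * p(hflip(img))</code>, before taking the top 3.</p>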
Quick, Draw! Doodle Recognition Challenge
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>PNASNet5Large, 128 size on 8 GPUs</p> <p>it turned out that model was not sensitive to the parameters, only feature generation added to the quality slightly</p> <p>each model predicts 10 labels only one of which is correct, thus, we have a binary problem with 10 examples</p> <p>as usual :)</p> <p>we are glad that it was usefull</p> <ol> <li>we used different variations of Beluga's pre-processing, no silver bullets, just wanted to increase diversity</li> <li>all models went to lgbm ensembling</li> <li>Pavel put all data into RAM for pytorch, I used chunks for keras as it was done in public kernels</li> </ol> <p>first we used several boosters (lgbm, xgb, cat) to ensemble CNN and RNN models, then post-processed each solution to balance classes, then blended all submits together</p> <p>0.94996 on private, you are close :)</p> <p>Big thanks to Google for hosting this flawless competition and collecting such a great dataset. I'm also very excited to become top-5 in overall user ranking and even more excited for my teammate <a href="https://www.kaggle.com/pavelost">Pavel Ostyakov</a> who got his second 1st place in a row! </p> <h2>CNN</h2> <p>First of all, Pavel did what he does best - trained a bunch of pytorch classification models. Here is the list of architectures: resnet18, resnet34, resnet50, resnet101, resnet152, resnext50, resnext101, densenet121, densenet201, vgg11, pnasnet, incresnet, polynet, nasnetmobile, senet154, seresnet50, seresnext50, seresnext101. </p> <p>One and three channels preprocessing were used as well as different image sizes starting from 112 and up to 256. The best model got 0.946 score, in total there were around 40 models. However, the gold could be achieved with a single model.</p> <h2>RNN</h2> <p>I trained a couple of LSTM models based on the <a href="https://www.kaggle.com/huyenvyvy/bidirectional-lstm-using-data-generator-lb-0-825">best public kernel</a>. Tweaked the architecture a bit, got rid of dropouts and achieved 0.893 score. Would love to hear in the comments how you got better results.</p> <h2>LightGBM</h2> <p>How do you ensemble models with too many classes? This issue has been already resolved during <a href="https://www.kaggle.com/c/cdiscount-image-classification-challenge/discussion/45733">Cdiscount’s Image Classification Challenge</a>. The idea is the following: for each sample and for each model you collect top 10 probabilities with the labels, then convert them into 10 samples with the binary outcome - whether this is a correct label or not (9 negative examples + 1 positive). It's easy to feed such a dataset to any booster because the number of features will be small (equal to the number of models). On top of that, I also added some time-specific features. The most significant was maximum timestamp from the raw representations of the strokes.</p> <h2>Secret sauce (aka "щепотка табака")</h2> <p>As it was mentioned by <a href="https://www.kaggle.com/hengck23">Heng CherKeng</a> a month ago <a href="https://www.kaggle.com/c/quickdraw-doodle-recognition/discussion/70540#416772">classes in a test set were equally distributed</a>. 
It was a very important clue which seemed to be lost in the depths of the forum. I also did not see this comment but arrived at the same conclusion by noting that (112199+1)/340=330 (number of samples in the test set plus one is divisible by the number of classes). Knowing the structure of the test set gave us an average boost of 0.7% for every model. </p> <p>The algorithm behind postprocessing is the following: for the most popular class decrease all the probabilities iteratively by the same small value until it is no longer the most popular, repeat this procedure until all classes become equal. This technique was also used in <a href="https://www.kaggle.com/c/sp-society-camera-model-identification/discussion/49334">one of the previous competitions</a> (see github link for the code).</p> <h2>Blending</h2> <p>After struggling for a week and producing 17 different balanced submits Pavel left me with the 5 last attempts to improve our public score of 0.956. I used <a href="https://www.kaggle.com/paulorzp/ensemble-weighted-voting">this public ensembling kernel</a> and scored 0.957 after the first attempt. Changing weights from 5-i to 1/(i+1) gave us a slight additional boost (it mimics map3 weights) and the 1st place. </p> <h2>Data</h2> <p>We used 34000 random samples as the overall holdout set and 1 mln samples for building second layer models. All first layer models were trained on 49 mln simplified samples. Raw data features were only added to LightGBM model.</p> <h2>Key takeaways</h2> <ul> <li>Read forum carefully, especially when <a href="https://www.kaggle.com/hengck23">Heng CherKeng</a> is present</li> <li>Study past solutions from similar competitions</li> </ul> <p>in order not to discourage other participants let's say we had much more gpus than needed to win this competition</p> <p>coined by famous GM Vladimir Iglovikov</p> <p>thanks for sharing!</p> <p>great catch, thank you</p> <p>I believe Pavel had 256 gb and used pytorch version 0.4.1 without any issues of this kind</p> <p>due to the noise in the data, it is highly unlikely</p> <p>on our own of course</p>
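<p>A rough sketch of the balancing post-processing described above is given below. It is a reconstruction under stated assumptions (each of the 340 classes should end up with about 330 predicted samples; the step size and the stopping rule are illustrative), not the exact implementation referenced in the post.</p> <pre><code>import numpy as np

def balance_predictions(probs, step=1e-4, n_per_class=330, max_iter=1000000):
    # probs: (112199, 340) ensemble probabilities for the test set
    probs = probs.copy()
    n_classes = probs.shape[1]
    for _ in range(max_iter):
        counts = np.bincount(probs.argmax(axis=1), minlength=n_classes)
        top = counts.argmax()
        if counts[top] &lt;= n_per_class:   # every class is at (or below) its expected share
            break
        probs[:, top] -= step            # make the currently over-predicted class a bit less likely
    return probs                          # the top-3 of the balanced probs is then submitted
</code></pre>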
Quick, Draw! Doodle Recognition Challenge
21st place solution [LB 0.948] on simplified data only
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks everybody for the competition! The task was really interesting and had huge amounts of data. Thanks very much to my teammate who writes really perfect code on Python. You should understand me, he writes <strong>comments</strong> to functions and uses <strong>types</strong> in Python! Also thanks to <a href="https://www.kaggle.com/scitator"></a><a href="/scitator">@scitator</a> for his perfect training ML framework <a href="https://github.com/Scitator/catalyst">Catalyst</a>, all models were trained with the help of it.</p> <p>Now I’d like to tell you about my and <a href="https://www.kaggle.com/artyomp">Artyom Palvelev</a> solution using simplified data only.</p> <h1>data preprocessing</h1> <p>Pictures of size 128x128 gave the best combination of score and training time. I don’t have time information, so I encoded the following data in three channels:</p> <ol> <li>The index of line (linearly from 10 for the first line to 255 for the last one)</li> <li>The number of strokes in the line</li> <li>Just constant 255 <h1>Our models</h1></li> </ol> <p>Firstly, I tried models from forum like MobileNetV2, but they showed poor performance, so I trained something deeper. My first good model was SE_ResNext50 0.942 LB. I used CosineAnnealingLR and averaged top4 checkpoints. Then we merged with Artyom and tried different model architectures. Blend with his models gave 0.944 LB. In the final submission we had:</p> <ol> <li>SE_ResNext50 (~0.942LB)</li> <li>SE_ResNext101 (~0.944LB)</li> <li>NASNet-A-Large (~0.944LB)</li> <li>SENet154 (~0.945LB)</li> <li>CBAM_ResNet50 (~0.941LB)</li> </ol> <p>We also used <em>gradient accumulation</em> to increase batch size to 1024 because data was noisy and bigger batch size gave better performance.</p> <h1>LGBM</h1> <p>We decided to use LGBM to ensemble our models because it is fast and usually gives better results than other methods. We used same idea as <a href="https://www.kaggle.com/pavelost">Pavel Ostyakov</a> described in his <a href="https://www.kaggle.com/c/cdiscount-image-classification-challenge/discussion/45733">5th place Cdiscount solution</a>. We predicted top10 classes by our best network, concatenated probabilities of other networks, added class_id feature and gave binary label: whether the class_id is correct or not. This method resulted in validation score 0.001 higher than a simple average. So, if you have enough time, always try LGBM to ensemble. Final submit with LGBM -&gt; 0.948 LB</p> <h1>What we tried and didn’t work</h1> <ul> <li>CatBoost, RF, XGBoost, ensemble of 3rd level</li> <li>Tuning models on clean data. We predicted the train dataset with our best models and dropped out pictures with small probability for the correct class (about 1M samples). It gave small boost on validation, but we didn’t have enough time, so we only trained a couple of models for about 2-3 epochs. 
So, I suppose, it is also a good idea for noisy data.</li> </ul> <h1>What we didn’t try but it worked</h1> <ol> <li>We didn’t notice that the test set is balanced (we could have used the same technique as in <a href="https://www.kaggle.com/c/sp-society-camera-model-identification">Camera Model Identification</a> and got gold). Others say it gives a ~0.7% boost to any submission.</li> <li>TTA with deleting 20% of strokes (people on the forum said it also improved the score). I tried just light augmentations like flips and shift_scale_rotate, but they made the validation score even worse.</li> </ol> <p>Hi! Sorry for such a long answer. At first, I took my best network and predicted the validation set with it. Then, I took only the top-10 classes by predicted probability. I concatenated the predictions of the other networks and added <strong>class_id</strong> as a feature. The target was <strong>whether class_id is the true class</strong>. So I got a binary classification problem, and for every sample from validation and test I got 10 rows (see the sketch below). In the inference stage I just took the top-3 classes by my LGBM predictions.</p> <p>Hi! I used the SGD optimizer, starting from LR 0.1. Scheduler params were: T_max=7, eta_min=0.001. I divided the whole dataset into 70 epochs for the annealing stage.</p>
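<p>A minimal sketch of the LGBM re-ranking described above, assuming each <code>probs_*</code> array holds the (n_samples, 340) softmax outputs of one network and <code>y_true</code> the integer labels; all names and LightGBM parameters are illustrative, not the author's exact code.</p> <pre><code>import numpy as np
import lightgbm as lgb

def make_rerank_rows(probs_main, probs_others, y_true=None, topk=10):
    """One row per (sample, candidate class): the candidate's probability under
    every model plus the class id; binary label = 'is this candidate the true class'."""
    cand = np.argsort(-probs_main, axis=1)[:, :topk]
    rows, labels = [], []
    for i in range(len(probs_main)):
        for c in cand[i]:
            rows.append([probs_main[i, c]] + [p[i, c] for p in probs_others] + [c])
            if y_true is not None:
                labels.append(int(y_true[i] == c))
    return np.array(rows), (np.array(labels) if y_true is not None else None), cand

# Hypothetical usage: fit on held-out data, then re-rank the test candidates.
# X, y, _ = make_rerank_rows(val_a, [val_b, val_c], y_val)
# clf = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05).fit(X, y)
# Xt, _, cand = make_rerank_rows(test_a, [test_b, test_c])
# scores = clf.predict_proba(Xt)[:, 1].reshape(-1, 10)
# top3 = np.take_along_axis(cand, np.argsort(-scores, axis=1)[:, :3], axis=1)
</code></pre>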
Quick, Draw! Doodle Recognition Challenge
24th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry, but I cannot give you advice because I have not experienced the same error as you.</p> <p>You may not be interested, but our 24th solution is below.</p> <p>- data generation</p> <p>- model</p> <p>- optimizer, loss, etc.</p> <p>- ensemble</p> <ol> <li><p>Data generation</p> <ul><li>Using the simplified file</li></ul></li> </ol> <p>def draw_cv2_color(raw_strokes, size=256, lw=6, time_color=True):</p> <pre><code>img = np.zeros((BASE_SIZE, BASE_SIZE, 3), np.uint8)
for t, stroke in enumerate(raw_strokes):
    inertia_x = 0
    inertia_y = 0
    for i in range(len(stroke[0]) - 1):
        color = int(255 - 245*(float(t)/len(raw_strokes))) if time_color else 255  ## strokes order
        print(color)
        sx = stroke[0][i]
        sy = stroke[1][i]
        ex = stroke[0][i + 1]
        ey = stroke[1][i + 1]
        color_v = (np.sqrt((sx-ex)*(sx-ex) + (sy-ey)*(sy-ey)) / np.sqrt(size*size)) * 255  ## strokes distance like speed (1~0)
        color_a = (np.sqrt((inertia_x-ex)*(inertia_x-ex) + (inertia_y-ey)*(inertia_y-ey)) / np.sqrt(size*size*4)) * 255  ## strokes distance like acceleration (1~0)
        _ = cv2.line(img, (sx, sy), (ex, ey), (color, color_v, color_a), lw)
        inertia_x = 2*ex - sx
        inertia_y = 2*ey - sy
if size != BASE_SIZE:
    return cv2.resize(img, (size, size))
else:
    return img
</code></pre> <ul> <li>Using the raw file</li> </ul> <p>def draw_cv2_color_new(raw_strokes, size=256, lw=6, time_color=True, last_drop_r = 0.0):</p> <pre><code>stx_min, sty_min = 99999, 99999
stx_max, sty_max = 0, 0
ett = 0  # How fast to complete: less than 20 seconds
for t, stroke in enumerate(raw_strokes):
    if t == len(raw_strokes) - 1:
        ett = int(stroke[2][-1])
    for i in range(len(stroke[0])):
        stx_min = min(stx_min, int(stroke[0][i]))
        stx_max = max(stx_max, int(stroke[0][i]))
        sty_min = min(sty_min, int(stroke[1][i]))
        sty_max = max(sty_max, int(stroke[1][i]))
limit_ett = 20*1000
ofs = 15
if int(sty_max-sty_min+2*ofs) > 6000 or int(stx_max-stx_min+2*ofs) > 6000:
    img = np.zeros((6000, 6000, 3), np.uint8)
else:
    img = np.zeros((int(sty_max-sty_min+2*ofs), int(stx_max-stx_min+2*ofs), 3), np.uint8)
for t, stroke in enumerate(raw_strokes):
    inertia_x = 0
    inertia_y = 0
    pre_st_t = 0
    for i in range(len(stroke[0]) - 1):
        color = int(255 - 245*float(t)/len(raw_strokes)) if time_color else 255  ## stroke order
        sx = int(stroke[0][i]) - stx_min + ofs
        sy = int(stroke[1][i]) - sty_min + ofs
        st = stroke[2][i]
        ex = int(stroke[0][i + 1]) - stx_min + ofs
        ey = int(stroke[1][i + 1]) - sty_min + ofs
        et = stroke[2][i+1]
        time = et - st
        if time == 0:
            time = 1
        color_v = min(int((np.sqrt((sx-ex)*(sx-ex) + (sy-ey)*(sy-ey)) / time)*255.0), 255)  ## speed
        color_a = min(int((np.sqrt((inertia_x-ex)*(inertia_x-ex) + (inertia_y-ey)*(inertia_y-ey)) / np.sqrt(time*time))*255.0), 255)  ## acceleration (1~0)
        _ = cv2.line(img, (sx, sy), (ex, ey), (color, color_v, color_a), lw)
        if i == 0:
            color_inter = int((float(et-pre_st_t)/limit_ett)*245) + 10
            _ = cv2.circle(img, (sx, sy), lw, (0, 0, color_inter), -1)  ## interval time
        if i == len(stroke[0])-2 and t == len(raw_strokes) - 1:
            color_end = int((float(ett)/(limit_ett)*245)) + 10
            _ = cv2.circle(img, (sx, sy), lw, (0, color_end, 0), -1)  ## end time
        inertia_x = 2*ex - sx
        inertia_y = 2*ey - sy
        pre_st_t = et
return cv2.resize(img, (size, size))  # lw reflects how big the picture is drawn; the aspect ratio is also reflected
</code></pre> <p>2. Model Structure</p> <ul> <li><p>Best single model</p> <p>: InceptionResNetV2 with (139,139,3) input size; the local validation score is 0.9516.</p> <p>: using the raw file, using 'imagenet' weights, batch size 180</p></li> </ul> <p>base_model = InceptionResNetV2(input_shape=input_shape, weights='imagenet', include_top=False)</p> <p>x = base_model.output</p> <p>x = GlobalAveragePooling2D()(x)</p> <p>x = Dense(1024, activation='relu')(x)</p> <p>x = Dropout(0.3)(x)</p> <p>predictions = Dense(340, activation='softmax', name='lastfc')(x)</p> <p>model = Model(inputs=base_model.input, outputs=predictions)</p> <p>3. Optimizer, Loss, etc.</p> <ul> <li><p>In the first training run (about 1 epoch over the 50M set), I used Adam with learning rate 0.002 and </p> <p>categorical cross-entropy loss.</p></li> <li><p>In the second run, I used Adam with 500-iteration gradient accumulation and learning rate 0.002, </p> <p>with 10% categorical cross-entropy and 90% top-3 loss.</p></li> <li><p>I did not have enough time to train to saturation.</p></li> </ul> <h1><a href="https://github.com/keras-team/keras/issues/3556">https://github.com/keras-team/keras/issues/3556</a></h1> <p>import keras.backend as K</p> <p>from keras.legacy import interfaces</p> <p>from keras.optimizers import Optimizer</p> <p>class AdamAccumulate(Optimizer):</p> <pre><code>def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None,
             decay=0., amsgrad=False, accum_iters=1, **kwargs):
    if accum_iters < 1:
        raise ValueError('accum_iters must be >= 1')
    super(AdamAccumulate, self).__init__(**kwargs)
    with K.name_scope(self.__class__.__name__):
        self.iterations = K.variable(0, dtype='int64', name='iterations')
        self.lr = K.variable(lr, name='lr')
        self.beta_1 = K.variable(beta_1, name='beta_1')
        self.beta_2 = K.variable(beta_2, name='beta_2')
        self.decay = K.variable(decay, name='decay')
    if epsilon is None:
        epsilon = K.epsilon()
    self.epsilon = epsilon
    self.initial_decay = decay
    self.amsgrad = amsgrad
    self.accum_iters = K.variable(accum_iters, K.dtype(self.iterations))
    self.accum_iters_float = K.cast(self.accum_iters, K.floatx())

@interfaces.legacy_get_updates_support
def get_updates(self, loss, params):
    grads = self.get_gradients(loss, params)
    self.updates = [K.update_add(self.iterations, 1)]
    lr = self.lr
    completed_updates = K.cast(K.tf.floordiv(self.iterations, self.accum_iters), K.floatx())
    if self.initial_decay > 0:
        lr = lr * (1. / (1. + self.decay * completed_updates))
    t = completed_updates + 1
    lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) / (1. - K.pow(self.beta_1, t)))
    update_switch = K.equal((self.iterations + 1) % self.accum_iters, 0)
    update_switch = K.cast(update_switch, K.floatx())
    ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
    vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
    gs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
    if self.amsgrad:
        vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
    else:
        vhats = [K.zeros(1) for _ in params]
    self.weights = [self.iterations] + ms + vs + vhats
    for p, g, m, v, vhat, tg in zip(params, grads, ms, vs, vhats, gs):
        sum_grad = tg + g
        avg_grad = sum_grad / self.accum_iters_float
        m_t = (self.beta_1 * m) + (1. - self.beta_1) * avg_grad
        v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(avg_grad)
        if self.amsgrad:
            vhat_t = K.maximum(vhat, v_t)
            p_t = p - lr_t * m_t / (K.sqrt(vhat_t) + self.epsilon)
            self.updates.append(K.update(vhat, (1 - update_switch) * vhat + update_switch * vhat_t))
        else:
            p_t = p - lr_t * m_t / (K.sqrt(v_t) + self.epsilon)
        self.updates.append(K.update(m, (1 - update_switch) * m + update_switch * m_t))
        self.updates.append(K.update(v, (1 - update_switch) * v + update_switch * v_t))
        self.updates.append(K.update(tg, (1 - update_switch) * sum_grad))
        new_p = p_t
        # Apply constraints.
        if getattr(p, 'constraint', None) is not None:
            new_p = p.constraint(new_p)
        self.updates.append(K.update(p, (1 - update_switch) * p + update_switch * new_p))
    return self.updates

def get_config(self):
    config = {'lr': float(K.get_value(self.lr)),
              'beta_1': float(K.get_value(self.beta_1)),
              'beta_2': float(K.get_value(self.beta_2)),
              'decay': float(K.get_value(self.decay)),
              'epsilon': self.epsilon,
              'amsgrad': self.amsgrad}
    base_config = super(AdamAccumulate, self).get_config()
    return dict(list(base_config.items()) + list(config.items()))
</code></pre> <p>4. Ensemble</p> <ul> <li><p>I used a weighted-average ensemble based on local validation score and argmax correlation.</p></li> <li><p>I used InceptionResNetV2 (raw, simple), Xception (raw), ResNet50 (simple).</p></li> <li><p>I did not have time to train both (raw, simple) versions for every model.</p></li> <li><p>The average weight calculation is the code below (a usage sketch follows at the end of this post).</p></li> </ul> <p>def get_score_w(local_score):</p> <pre><code>ls = np.array(local_score)
sub = ls - ls.min()
div = sub/sub.max()
add = div + 0.1
nor = add/add.max()
sqr = nor*nor
print(sqr)
return sqr
</code></pre> <p>def get_corr_w(clsnp):</p> <pre><code>corxlist = []
for idx1, cls1 in enumerate(clsnp):
    corylist = []
    for idx2, cls2 in enumerate(clsnp):
        cor_max = np.corrcoef(np.argmax(clsnp[idx1], axis=1), np.argmax(clsnp[idx2], axis=1))[0][1]
        corylist.append(cor_max)
    corxlist.append(corylist)
df = pd.DataFrame(corxlist, columns=names, index=names)
#print(df)
corr_w = []
for i in range(df.shape[0]):
    count = 0
    thr = 0.95
    for v in df.values[i]:
        if v > thr:
            count += 1
    corr_w.append(1.0/count)
return np.array(corr_w), df
</code></pre> <p>def getensemble_w(clsnp, local_score):</p> <pre><code>score_w = get_score_w(local_score)
for i in range(score_w.shape[0]):
    if score_w[i] == 1.0:
        score_w[i] = 1.2  # max score: add 20%
corr_w = get_corr_w(clsnp)[0]
ensemble_w = score_w*corr_w
return ensemble_w
</code></pre> <p>Everybody enjoy the competition~!!</p> <p>Stroke generation result using the raw set</p> <p>Stroke generation result using the simple set</p> <p>I enjoyed being with you in this competition. Let's get together next time~~.</p> <p>Using the simple data set: Red channel : stroke order; Green channel : single-stroke distance; Blue channel : distance to the point specified by the previous step's inertia</p> <p><a href="https://storage.googleapis.com/kaggle-forum-message-attachments/433305/10829/simple%20set%20generater%20info.PNG">https://storage.googleapis.com/kaggle-forum-message-attachments/433305/10829/simple%20set%20generater%20info.PNG</a></p>
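<p>Referring back to the ensemble section above, a hypothetical usage sketch of the weighting helpers: <code>clsnp</code> stacks the per-model softmax outputs, <code>local_score</code> holds their local validation scores, and <code>names</code> is the model-label list that <code>get_corr_w</code> reads as a global (it also assumes numpy/pandas are imported as in the original kernel). The file names and all score values except 0.9516 are illustrative.</p> <pre><code>import numpy as np

names = ['irv2_raw', 'irv2_simple', 'xception_raw', 'resnet50_simple']   # model labels
local_score = [0.9516, 0.9488, 0.9473, 0.9402]                           # local valid scores (only the first is from the post)
clsnp = np.array([np.load(f'preds_{n}.npy') for n in names])             # (n_models, n_test, 340), hypothetical files

w = getensemble_w(clsnp, local_score)                  # score weight * correlation weight
blend = np.average(clsnp, axis=0, weights=w)           # weighted-average ensemble
top3 = np.argsort(-blend, axis=1)[:, :3]               # classes for the MAP@3 submission
</code></pre>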
Quick, Draw! Doodle Recognition Challenge
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>No, one image contains multiple strokes. So one file contains an array of strokes for one image. It simplifies the reading process, because you can shuffle ids and then by id it is easy to load any random image. It also solved memory problem, because it was pretty hard to fit all strokes in memory.</p> <p>Yes, sequence number</p> <p>First of all, congrats everyone with the end of the competition. It was an exciting experience, and we want to share our approach.</p> <p><strong>Handling the data</strong></p> <p>Both simplified and raw data were used. To be able to read any random image, all strokes from CSV files were separated to one image per binary file. It took about 400GB on SSD, but it allowed to start different experiments very fast.</p> <p><strong>Our models</strong></p> <p>In total we trained 3 main models:</p> <ol> <li>Se-Resnext50</li> <li>DPN-92</li> <li>Se-Resnext101</li> </ol> <p>All of them were pretrained on imagenet. We used different image sizes, 128 -&gt; 192 -&gt; 224 -&gt; 256. It was clear almost from the beginning - the bigger image size - the bigger score in both local validation and public lb. It was hard to train 256px due to limited GPU resources. Using fit predict and 128px image it was pretty straightforward to get 0.944 public LB.</p> <p>In the middle of the competition after merging with @firenero, we had ~ 0.948 public lb score. To move forward, it was essential to use time information which only exists in the full dataset. We encoded each stroke using 3 channels:</p> <ul> <li>Delay value scaled to 0-255.</li> <li>Draw time per stroke scaled to 0-255</li> <li>Number of strokes scaled to 0-255</li> </ul> <p>It gave a significant boost in local validation and gave us ~0.951 public LB. Another important thing is batch size; we tuned all our models with huge batch size increasing it with each snapshot up to 10K.</p> <p>@firenero is going to tell more about the final phase and how we achieved 0.953.</p>
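<p>A minimal sketch of the three time channels described above (delay, drawing time, stroke count, each scaled to 0-255), assuming the raw strokes are given as <code>[x, y, t]</code> lists with times in milliseconds and coordinates already scaled to a 256x256 canvas; the scaling constants are assumptions, not the team's exact code.</p> <pre><code>import numpy as np
import cv2

def draw_time_encoded(raw_strokes, size=256, lw=3, max_ms=20000):
    """Rasterize raw strokes with three time channels: delay before the stroke,
    drawing time of the stroke, and stroke count, each scaled to 0-255."""
    img = np.zeros((256, 256, 3), np.uint8)
    n = len(raw_strokes)
    prev_end = 0
    for s, (xs, ys, ts) in enumerate(raw_strokes):
        delay = min(255, int(255 * (ts[0] - prev_end) / max_ms))      # pause before this stroke
        draw_time = min(255, int(255 * (ts[-1] - ts[0]) / max_ms))    # time spent drawing it
        count = min(255, int(255 * (s + 1) / n))                      # stroke index / count
        for i in range(len(xs) - 1):
            cv2.line(img, (int(xs[i]), int(ys[i])), (int(xs[i + 1]), int(ys[i + 1])),
                     (delay, draw_time, count), lw)
        prev_end = ts[-1]
    return cv2.resize(img, (size, size)) if size != 256 else img
</code></pre>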
Quick, Draw! Doodle Recognition Challenge
8th place novel solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Please pardon the quality of the following, but I’ve rushed it out. I will plan to release a detailed blog post/paper at a later date. My approach is novel, complex, and a challenge to communicate, but I wanted to get this information out while everyone was still interested!</p> <h1>Key Insight</h1> <p>I leveraged a principal lesson in Deep Learning: Let the network learn the features rather than hand crafting them. The standard approach is to rasterize the drawings, crudely encode time as colour, and then pass to a pre-trained RGB ResNet. Instead, I implement a differentiable, trainable module to do this. Furthermore, I replace the standard nn.Conv2d(3, 64, kernel_size=7, stride=2, ...) and 2x2 MaxPool at the start of a ResNet with the same module.</p> <p>High Level Visual: <img src="http://alekseynp.com/images/quickdraw_diagram1.png" alt="enter image description here"></p> <p>Key benefits:</p> <ul> <li><p>Deep features are computed from the time component rather than some RGB hack</p></li> <li><p>Deep features also computed from stroke data, leveraging the native format before “discarding” it and switching to images. Access to connectedness information.</p></li> <li><p>Uses efficiency of convolutions on grids where a pure sequence or point cloud model fails to</p></li> </ul> <h1>Implementation</h1> <p>Enormous amount of details that I will cover later.</p> <p>Detailed visual #1: <img src="http://alekseynp.com/images/quickdraw_diagram2.png" alt="enter image description here"></p> <ul> <li><p>Begin with strokes as defined by a series of points.</p></li> <li><p>Difference the x’s and y’s to get dx, dy segment vectors. Average the ts.</p></li> <li><p>Now we have strokes as defined by a series of segments.</p></li> <li><p>Process strokes with a sequence module to generate 32 features.</p></li> <li><p>Unroll those features with a window 2 convolution to generate 64 features per original point.</p></li> </ul> <p>Detailed visual #2: <img src="http://alekseynp.com/images/quickdraw_diagram3.png" alt="enter image description here"></p> <ul> <li><p>Draw points into a 32x32x64 image as per the diagram.</p></li> <li><p>When points collide their feature vectors are averaged. 
I think I would have max-pooled if I could have implemented it.</p></li> <li><p>At this stage in the network a 32x32 image could be thought of as equivalent to having started a normal image ResNet at 256x256.</p></li> </ul> <h1>Sequence Module</h1> <pre><code>Conv1d(3, 32, kernel_size=3, stride=1, padding=1, dilation=1) BatchNorm1d(32) ReLU(inplace=True) Conv1d(32, 32, kernel_size=3, stride=1, padding=2, dilation=2) BatchNorm1d(32) ReLU(inplace=True) Conv1d(32, 32, kernel_size=3, stride=1, padding=4, dilation=4) BatchNorm1d(32) ReLU(inplace=True) Conv1d(32, 32, kernel_size=3, stride=1, padding=8, dilation=8) BatchNorm1d(32) ReLU(inplace=True) Conv1d(32, 64, kernel_size=2, stride=1, padding=(1,0)) </code></pre> <h1>Rasterization Module</h1> <pre><code>from apex import amp import torch from torch.autograd import Function class PointsToImage(Function): @staticmethod @amp.float_function def forward(ctx, i, v): device = i.device batch_size, _, num_input_points = i.size() feature_size = v.size()[2] batch_idx = torch.arange(batch_size, device=device).view(-1, 1).repeat(1, num_input_points).view(-1) idx_full = torch.cat([batch_idx.unsqueeze(0), i.permute(1, 0, 2).contiguous().view(2, -1)], dim=0) v_full = v.contiguous().view(batch_size * num_input_points, feature_size) mat_sparse = torch.cuda.sparse.FloatTensor(idx_full, v_full) mat_dense = mat_sparse.to_dense() ones_full = torch.ones(v_full.size(), device=device) mat_sparse_count = torch.sparse.FloatTensor(idx_full, ones_full) mat_dense_count = mat_sparse_count.to_dense() ctx.save_for_backward(idx_full, mat_dense_count) return mat_dense / torch.clamp(mat_dense_count, 1, 1e4) @staticmethod @amp.float_function def backward(ctx, grad_output): idx_full, mat_dense_count = ctx.saved_tensors grad_i = grad_v = None batch_size, _, _, feature_size = grad_output.size() if ctx.needs_input_grad[0]: raise Exception("Indices aren't differentiable.") if ctx.needs_input_grad[1]: grad = grad_output[idx_full[0], idx_full[1], idx_full[2]] coef = mat_dense_count[idx_full[0], idx_full[1], idx_full[2]] grad_v = grad / coef grad_v = grad_v.view(batch_size, -1, feature_size) if isinstance(grad_output, torch.cuda.FloatTensor): return grad_i, grad_v else: return grad_i, grad_v.half() points_to_image = PointsToImage.apply </code></pre> <p>Various other details:</p> <ul> <li><p>Packing strokes of varying length into tensors of fixed size in order to do 1D CNNs is a non-trivial thing to do and beyond the scope of this post</p></li> <li><p>Pytorch 0.4.1</p></li> <li><p>Used NVIDIA’s apex amp (<a href="https://github.com/NVIDIA/apex/tree/master/apex/amp">https://github.com/NVIDIA/apex/tree/master/apex/amp</a>) to train exclusively at half precision</p></li> <li><p>Trained on all the data</p></li> <li><p>Raw not simplified</p></li> <li><p>LMDB for memory mapped data</p></li> <li><p>Adam optimizer</p></li> <li><p>Models typically took about 2.5-3 days to converge on a system with a 1080Ti and a Titan</p></li> <li><p>Used pre-trained imagenet weights</p></li> <li><p>Froze those weights and only trained my additional modules for the first 1k-5k iterations</p></li> <li><p>Held out 50k examples for validation during training</p></li> <li><p>Held out 1 million examples for blending</p></li> <li><p>Probably never completed even 2 complete passes through all the data during training. 
Convergence came first.</p></li> <li><p>Used gradient accumulation via multiple backwards calls in Pytorch to finish training at huge batch sizes</p></li> </ul> <h1>Results</h1> <p>SEResNeXt50 32x4d at core. Local validation:</p> <ul> <li><p>Acc@1: 0.8648</p></li> <li><p>Mapk3: 0.9073</p></li> <li><p>CE Loss: 0.5092</p></li> <li><p>Public LB: 0.94781</p></li> <li><p>Private LB: 0.94915</p></li> </ul> <p>Best ensemble:</p> <p>Six best models as measured by local CE Loss.</p> <ul> <li><p>2 x SEResNeXt50 at core</p></li> <li><p>3 x SEResNeXt101 at core</p></li> <li><p>1 x ResNet34 at core</p></li> <li><p>Weighted arithmetic mean of probabilities. Weights = (1/loss)**24</p></li> </ul> <p>Local validation:</p> <ul> <li><p>Acc#1: 0.8683</p></li> <li><p>Mapk3: 0.9101</p></li> <li><p>CE Loss: 0.4937</p></li> <li><p>Public LB: 0.95142</p></li> <li><p>Private LB: 0.95101</p></li> </ul> <p>Very slight optimization bias in local validation score. I used my validation set to select “six best” and the 24 exponent. Zero overfit to public LB. hence why I moved up from 12 to 8 in the shakeup.</p> <p>Other comments:</p> <ul> <li><p>No RNNs in my ensemble</p></li> <li><p>No Image CNNs in my ensemble</p></li> <li><p>Kaggle competitions are a hell of an environment to try to do something novel in. I spent weeks messing around with Deep Residual PointNet++ networks, but never surpassed public LB 0.916</p></li> </ul> <p>Finally:</p> <ul> <li><p>GG to everyone</p></li> <li><p>Super proud of Guanshuo Xu for solid solo performance and disciplined execution enabling a move from 4th to 2nd into the money in the Private LB shakeup!</p></li> </ul> <p>Yes I did. I don't know if it creates as much consistency as I would like. Within any one drawing the segments are close to the same length, but not necessarily between drawings. I didn't study the raw data close enough to know. How consistent are lengths? By forcing "long" drawings and "short" drawings all to have 256 points, segment lengths may actually vary more after <code>resample_to</code></p> <p>Oops! Thanks</p> <p>I just put code up here: <a href="https://github.com/alekseynp/kaggle-quickdraw">https://github.com/alekseynp/kaggle-quickdraw</a></p>
Quick, Draw! Doodle Recognition Challenge
A noob's journey to the 258th(Top 20%) place.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all congratulations to all of the contestants who participated and were able to get the results they were expecting and to all the medal winners. Thanks to Google AI and Kaggle for this great competition. Also, thanks to <a href="https://www.kaggle.com/gaborfodor"></a><a href="/beluga">@beluga</a> for his awesome shuffle csv and mobilenet kernels, <a href="https://www.kaggle.com/hengck23">@hengck</a>, and others who have provided valuable information in the discussions that helped other contestants like me to get to the final scores. <br></p> <p>I was able to get to the top 12% before I exhausted all of my google credits and so does my hope for getting any higher in this competition.<br></p> <ul> <li><p>I started with my own CNN+LSTM approach and <a href="https://www.kaggle.com/gaborfodor"></a><a href="/beluga">@beluga</a> shuffle csv's approach which consists of 3 conv2d+maxpooling layers on the top and two bidirectional LSTM layers in the end, which give me a LB score of 0.85 as I was training on Kaggle kernels I can't train this framework long enough to get any higher. <br></p></li> <li><p>After this, I started with beluga's Mobilenet kernel and trained it on 64X64 images for as long as possible, saved weights and used this weights to initialize training for next attempt until I started getting diminishing returns. I then used those weights to train and initialize till the images of size [128X128]. I was still training only on 30k images per class which helped me in getting to 0.91 on LB. I can't get any further with 30k images on Kaggle kernels as I have used ReduceLR, CyclicalLr and other approaches to reach till this point.<br></p></li> <li><p>Now, I had some remaining credits in my GCP account from TGS competition where I was able to get to the Top 14%. At this point I realized that I don't have enough credits to get to the point where I can get any medal and rather than using Mobilenet, I thought of learning and trying something new. So, I started with PyTorch and SEResNeXt and trained that on all of the images for 1 epoch before exhausting my credits. It helped me in getting to 0.919.<br></p></li> <li><p>I then blended my Mobilenet and SEResNeXt submission for a final score of 0.922 before finishing the competition.<br></p></li> <li><p>In the end, I learned to work with two frameworks(Keras and Pytorch), learned about RNN/LSTM that helped me in getting to Top 1% in Quora competition and I am ending with acquiring more knowledge than I started with. <br></p></li> </ul> <p>I am ending my write-up with a tip for all the beginners like me who come to Kaggle but never start Competing because of the fear that they are not good enough. I also didn't have any prior programming experience or studied any high-level mathematics before starting my DS journey and still, I was just able to learn a little but whatever I learned is hard to forget now. All you have to do is try. The kernel sections are full of helpful kernels that will get you started and discussions are full of good people that will help you when you will get stuck. There is no better way to learn than learning by doing. 
So jump into competitions and start butting your head against the problems. <br></p> <p><strong>Stay Calm and Keep Kaggling!!</strong></p>
Quick, Draw! Doodle Recognition Challenge
First Kaggle Competition Experience (Team: rm-rf / | Private LB: 385)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks and that's a very motivating analogy :)</p> <p>Thanks. I was aware of that but my approach relied on the "draw" function to first draw the images, which in itself would take a lot of hours to generate and followed by training which would be even for longer duration on my GTX-1070. Could you share your approach? How did you train on the 100% of the data?</p> <p>Thanks! And yes, I'd definitely love to team up in the future.</p> <p>Hi Everyone, This was my First ever kaggle competition and these are a few notes for a few beginners like me. (Dear Experts, please excuse this thread if it doesn't have many interesting insights.) </p> <p>Thanks to GoogleAI team for hosting the competition. Congratulations to Team Pablos, <a href="/wowfattie">@wowfattie</a> and Team mgchbot for the Top 3 finishes. </p> <p>Special Thanks to Master <a href="/radek1">@radek1</a>, Grandmaster <a href="/hengck23">@hengck23</a> and Grandmaster <a href="/gaborfodor">@gaborfodor</a> for the amazing starter pack, the amazing discussions and the great starter kernel!</p> <p><strong>My Team Rank:</strong> 385 Private, 391 Public. </p> <p><strong>Team Name:</strong> "rm-rf /", with <a href="/init927">@init927</a> my buisness partner. </p> <p>My First kaggle competition felt like a 100 Mile sprint where you are competing against people on Supercars (GrandMasters with a LOT of experience) while I was running barefoot.</p> <p>Our approach relied on the starter kernel shared by Master Radek and the Kernel shared by GrandMaster Beluga. The best submission was a "blend" of a MobileNet from the Kernel and a ResNet 152 Model trained using fastai library</p> <p>We had jumped into the competition past mid-way since its launch and personally it was fascinating and completely overwhelming to keep up with the overflow of the ideas-I was surprised that even the Top performers are generous with their ideas and share them publically. </p> <p>As a First competition, We'd work our way to a submission each night, wake up to have lost the submission by 20 ranks, rework towards a better submission and repeat!</p> <p>Ideas that worked:</p> <ul> <li><p>Training on 1% of the data with 256 image size, fine-tuning to 5% of the data with 128 image size, fine-tuning to 20% of the data with 64 image size. </p> <ul><li>ResNet 18 &lt; ResNet 34 &lt; ResNet 50 &lt; ResNet 152 showed a consistent increase in performance (Pre-Trained models, fine-tuned using fastai)</li></ul></li> </ul> <p>Compute: Starting out, I wasn't sure if I had enough compute for the competition, turns we had more than sufficient-20 (10+10) kaggle kernels for small experiements, a 1070 based "laptop" for bigger experiments and p3 instances on AWS for extra experiementation, while documenting our approach with Google Sheets.</p> <p>Personally, It was an amazing learning experience, I learnt much more than I had ever learnt via a MOOC. I'd definitely love to participate in more competition and slowly work my way upwards. </p> <p>In the end, Our best submission was limited by our experience to make a better submission with our hardware setup. 
If anyone is still not sure about taking part in a kaggle comp, I'd say just jump in. Make your first submission, see it fall down on the LB and try to keep up!</p> <p>PS: If anyone has any suggestions towards our approach or any comments on how to have better approached the competition, I'd be very thankful.</p>
Quick, Draw! Doodle Recognition Challenge
My First Medal Callback: From Novice to 106th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quick, Draw! Doodle Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for sharing your experience! Before we run a full-dataset experiment, we usually use a small dataset with a small batch size to "pretrain" the model, and then we fine-tune, gradually increasing the image size and batch size until convergence.</p> <p>Thanks! And I wish you good grades very soon!</p> <p>Thank you for your congratulations! Image-based competitions do require GPU hardware, so I recommend using the Kaggle Kernel GPU together with the model.save trick to train your model continuously. You can also ask whether anyone wants to form a team.</p> <p>In addition, there are lots of machine-learning competitions on Kaggle where a GPU is not strictly needed.</p> <p>Thanks! That sounds like a good idea! But mechanical drawing has to follow strict rules, unlike doodling. :-&gt;</p> <p>Thanks! And the same to you, too. </p> <p>Congrats on your great job, too :-D . We can share experience with each other in future Kaggle competitions!</p> <p>I am an amateur whose major is Mechanical Engineering, and this is the first time I have entered a Featured Kaggle Competition. Here is some experience to share with you guys (good for Kaggle beginners):</p> <ol> <li><p>Save your model frequently. The Quick, Draw! doodle dataset is large (about 50,000,000 samples), and you will spend several days training on it. For me, 128*128 size with ResNet50, 2 epochs took 3-4 days. One day, I planned to train for 12 hours. However, the server shut down accidentally, which left me disappointed and exhausted. So save your model frequently: model.save('model.h5') and model.save_weights('weights.h5')</p></li> <li><p>Nohup your jupyter notebook. Network connection errors will shut down your jupyter notebook accidentally if you don't use the nohup command. If you use nohup, your program keeps running even if you close the network connection.</p></li> <li><p>Get a GPU from Kaggle Kernels. In the discussion, most of you say that you have 1-4 GPUs. If you want to try different neural networks, your limited GPU resources will be an obstacle. In fact you can use the Kaggle Kernel GPU; with the model.save('model.h5') and model.save_weights('weights.h5') method stated in 1, you can train more networks.</p></li> <li><p>Vote for and check notebooks forked from others. At first, I voted for and forked beluga's notebook. Then I tried to change the neural network from MobileNet to ResNet50, DenseNet121, etc. When the public LB was stuck at about 0.915 for several days, I decided to do data augmentation. When I checked beluga's shuffle-csv notebook, I found pd.read_csv(..., nrows = 34000)! I had only used 1/5 of the dataset! Previously, I had only focused on improving the neural network. I switched to the total dataset, and the accuracy improved a lot. So after you fork someone else's notebook, check everything from the beginning!</p></li> </ol> <p>For my final result, I used 128*128 size images, about 1-2 epochs on the full dataset, and trained 2 models: DenseNet121 (batch size: 170) and ResNet50 (batch size: 250). 
The validation set size was 0.3 * the full dataset size. The best ResNet50 model: 0.931 Public LB. The best DenseNet121 models: 0.934 and 0.932 Public LB. Ensemble: 0.934_densenet (weight = 2), 0.932_densenet (weight = 1.8), 0.931_resnet (weight = 1.65), result = 0.937 Public LB. It seems that ensembling several different neural network models gives higher accuracy. TTA: symmetric deformation of test_simplified.csv, but it did not help accuracy.</p> <p>At last, I am so happy to both win a medal and get help from you guys. :-)</p>
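<p>A minimal sketch of the save-and-resume workflow from points 1 and 3 above, shown here with tf.keras; the tiny CNN, the file names and the commented training call are placeholders, not the actual ResNet50/DenseNet121 setup.</p> <pre><code>import os
from tensorflow import keras

def get_model():
    # Hypothetical small CNN standing in for the ResNet50/DenseNet121 of the post.
    return keras.Sequential([
        keras.layers.Conv2D(32, 3, activation='relu', input_shape=(128, 128, 1)),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(340, activation='softmax'),
    ])

model = get_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

if os.path.exists('weights.h5'):
    model.load_weights('weights.h5')          # resume from the previous kernel session

ckpt = keras.callbacks.ModelCheckpoint('weights.h5', save_weights_only=True)
# model.fit(train_gen, epochs=1, callbacks=[ckpt])   # train_gen is hypothetical
model.save('model.h5')                               # full model, as suggested in point 1
</code></pre>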
Quora Insincere Questions Classification
10th place solution - Meta embedding, EMA, Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here are all my submissions. 1: Keras, 2-4: PyTorch</p> <ol> <li><p><a href="https://www.kaggle.com/tks0123456789/projection-meta-embedding-and-ema?scriptVersionId=7916644">Projection meta embedding and EMA(version 1/4)</a>, 0.69480(Public)</p></li> <li><p><a href="https://www.kaggle.com/tks0123456789/pme-ema-6-x-8-pochs?scriptVersionId=10163202">PME_EMA 6 x 8 pochs(version 2/14)</a>, 0.69568</p></li> <li><p><a href="https://www.kaggle.com/tks0123456789/pme-ema-6-x-8-pochs?scriptVersionId=10224276">PME_EMA 6 x 8 pochs(version 10/14)</a>, 0.70551, 0.70964(Private)</p></li> <li><p><a href="https://www.kaggle.com/tks0123456789/pme-ema-6-x-8-pochs?scriptVersionId=10275816">PME_EMA 6 x 8 pochs(version 14/14)</a>, 0.70061, 0.70921</p></li> </ol> <h2>Preprocessing</h2> <p>Separating punctuation only. Spell correction didn't work for me.</p> <h2>Model structure</h2> <p><strong>Average</strong> ensemble of 6 models of the same network.</p> <pre><code>Embedding(max_features, 600)
Linear(in_features=600, out_features=128, bias=True)
ReLU()
GRU(128, 128, batch_first=True, bidirectional=True)
GlobalMaxPooling1D()
Linear(in_features=256, out_features=256, bias=True)
ReLU()
Linear(in_features=256, out_features=1, bias=True)
</code></pre> <h2>Projection Meta Embedding(PME)</h2> <p>Meta embedding is a method for combining multiple pretrained embeddings, discussed in <a href="https://www.kaggle.com/c/quora-insincere-questions-classification/discussion/71778">3 Methods to combine embeddings</a>. PME is <strong>Unweighted DME ([4]) + ReLU</strong>. It concatenates several frozen pretrained embeddings and projects them to a lower-dimensional space with a linear layer followed by a ReLU activation.</p> <h2>Exponential Moving Averaging of weights(EMA)</h2> <p>It calculates an exponential moving average of the weights during training. It is usually done at the minibatch level; I chose 10 updates per epoch for speed. [1], [2], [3] use EMA.</p> <h2>Tuning for ensemble</h2> <h3>n_embed: # of pretrained embeddings in a single model.</h3> <p>I tried n_embed=1, 2, 3, 4, and 2 is best. Using 4 embeddings is better for single-model F1, but it lacks model diversity, which causes worse ensemble performance.</p> <h3>epoch</h3> <p>The following are mean F1 scores of 10-fold CV. The best epoch count is 5 for a single model and 8 for the ensemble. <img src="https://raw.githubusercontent.com/tks0123456789/kaggle-Quora/master/cv10.png" alt="Image"></p> <p>[1] <a href="https://arxiv.org/abs/1703.01780">A. Tarvainen and H. Valpola (2017) Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results.</a></p> <p>[2] <a href="https://arxiv.org/abs/1804.09541">Yu, A. W., Dohan, D., Luong, M.-T., Zhao, R., Chen, K., Norouzi, M., and Le, Q. V. (2018) QANet: Combining local convolution with global self-attention for reading comprehension.</a></p> <p>[3] <a href="https://arxiv.org/abs/1611.01603">Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi (2016) Bidirectional attention flow for machine comprehension.</a></p> <p>[4] <a href="https://arxiv.org/abs/1804.07983">Douwe Kiela, Changhan Wang, Kyunghyun Cho (2018) Dynamic Meta-Embeddings for Improved Sentence Representations.</a></p> <p>0.0015-0.002 for EMA. I can't quantify the improvement from PME because I didn't try other embedding methods. Code and results are <a href="https://github.com/tks0123456789/kaggle-Quora">here</a>.</p>
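<p>A minimal PyTorch sketch of the weight EMA described above; the class name is hypothetical, buffers (e.g. BatchNorm statistics) are ignored, and interpreting the 0.0015-0.002 value as the per-update mixing factor is an assumption.</p> <pre><code>import copy
import torch

class WeightEMA:
    """Exponential moving average of model weights; mu is the fraction of the
    live weights mixed in at each update.  Called a few times per epoch
    (10 in the write-up) rather than every minibatch."""
    def __init__(self, model, mu=0.002):
        self.mu = mu
        self.shadow = copy.deepcopy(model).eval()     # holds the averaged weights
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        for s, p in zip(self.shadow.parameters(), model.parameters()):
            s.mul_(1.0 - self.mu).add_(p, alpha=self.mu)

# Hypothetical training loop: call ema.update(model) ~10 times per epoch,
# then validate and predict with ema.shadow instead of model.
</code></pre>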
Quora Insincere Questions Classification
117th solution. Achieve 0.701 PB and 0.703 LB in 4000 seconds.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks @Neuron Engineer. Glad that you also did well. I had a great time learning new things.</p> <p>Why should it be downvoted?</p> <p>Here is my solution: a simple one-directional GRU with some effective preprocessing for OOV words. No spell correction or anything. I used 5-fold cross-validation with an exponential moving average (which is not necessary). The code is a bit clumsy, and Keras lovers may not be happy as it is in pure TensorFlow :-) . If you have any doubts, please comment. Hope it will be useful.</p> <p><a href="https://www.kaggle.com/s4sarath/cudnngru-best-final">https://www.kaggle.com/s4sarath/cudnngru-best-final</a></p> <p>1.) I <code>preprocessed on a word basis</code>. Normally, I split by space and check whether a word is present in the vocab. If present, I add it to my train embedding; otherwise I preprocess only that word and cache it in a dictionary. For example, <code>goood</code> might not be in GloVe or fastText, so I preprocess it (there are a lot of small sub-functions), which gives me <code>good</code>, which is in GloVe. So I add it to mapping_dict = {<code>goood</code> : <code>good</code>}. Then, the next time <code>goood</code> appears, I don't have to preprocess it; I first look at <code>mapping_dict</code>, and so on. My final vocab size was about <code>192000</code>.</p> <p>2.) I used a one-directional <code>GRU</code> with <code>256</code>-dimensional embeddings. <code>Cross-validation</code> of <code>5</code> folds, where each fold has a different number of epochs; I used <code>[5,5,5,4,4]</code>. Used the <code>Adam optimizer</code> for minimizing the objective.</p> <p>3.) I used <code>meta embeddings</code> (<code>Glove+Paragram</code>) as per <code>@shijuan's</code> kernel.</p> <p>4.) Used TensorFlow's <code>CudnnGRU</code>; it is very fast, and the max len is <code>150</code> for a sentence.</p> <p>Pretty much that's it. </p> <p>??</p>
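<p>A minimal sketch of the word-level OOV preprocessing with caching described in point 1; <code>clean_word</code> is a hypothetical stand-in for the author's 'lot of sub and small functions'.</p> <pre><code>import re

def clean_word(w):
    """Hypothetical minimal cleaner: strip punctuation, lowercase, and collapse
    3+ repeated letters ('goood' -> 'good')."""
    w = re.sub(r"[^\w']", '', w).lower()
    return re.sub(r'(.)\1{2,}', r'\1\1', w)

def build_mapping(texts, embeddings_index):
    """Word-level preprocessing with caching: words already in the pretrained
    vocab are kept as-is; the rest are cleaned once and cached in a dict."""
    mapping = {}
    for sentence in texts:
        for w in sentence.split():
            if w in embeddings_index or w in mapping:
                continue
            fixed = clean_word(w)
            if fixed in embeddings_index:
                mapping[w] = fixed            # e.g. {'goood': 'good'}
    return mapping
</code></pre>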
Quora Insincere Questions Classification
125th solution. 12folds average
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is our solution based on Benjamin's great kernel.</p> <p><a href="https://www.kaggle.com/bminixhofer/a-validation-framework-impact-of-the-random-seed">https://www.kaggle.com/bminixhofer/a-validation-framework-impact-of-the-random-seed</a></p> <p><a href="https://www.kaggle.com/decoflight/tranc-submit?scriptVersionId=10104245">https://www.kaggle.com/decoflight/tranc-submit?scriptVersionId=10104245</a></p> <p>These are the main differences.</p> <ul> <li>Use maxlen = 35, n_folds = 12</li> <li>Make 2 datasets by truncating like this.</li> </ul> <p><code> x_train_pre = pad_sequences(x_train, maxlen=maxlen, truncating='pre') </code> <code> x_train_post = pad_sequences(x_train, maxlen=maxlen, truncating='post') </code></p> <ul> <li>At fitting, we use 1 of 2 alternately. </li> <li>At predicting, we use both of them and average the predictions.</li> </ul> <p>CV: 0.694 -&gt; 0.701</p> <p>Public Leaderboard: 0.696 -&gt; 0.703</p>
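<p>A small usage sketch of the pre/post truncation trick above at inference time, assuming <code>fold_models</code> are the trained fold models (12 folds in the post) and <code>x_test</code> the tokenized test questions; the function wrapper itself is illustrative.</p> <pre><code>import numpy as np
from keras.preprocessing.sequence import pad_sequences

def predict_both_truncations(fold_models, x_test, maxlen=35, batch_size=1024):
    """Predict with both the 'pre' and 'post' truncated views of the test set
    and average all fold predictions."""
    views = [pad_sequences(x_test, maxlen=maxlen, truncating=t) for t in ('pre', 'post')]
    preds = [m.predict(v, batch_size=batch_size, verbose=0)
             for m in fold_models for v in views]
    return np.mean(preds, axis=0)
</code></pre>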
Quora Insincere Questions Classification
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks</p> <p>Thanks</p> <p>Thanks :)</p> <p>Thank u</p> <p>Thanks!</p> <p>Hi, Kagglers<br> I have published the 13th place kernel</p> <p><a href="https://www.kaggle.com/canming/ensemble-mean-iii-64-36">https://www.kaggle.com/canming/ensemble-mean-iii-64-36</a> <br> A summary of my solution:</p> <ul> <li>LaTeX and https cleaning.</li> <li>Punctuation splitting.</li> <li>Embedding: Glove*0.64 + Params*0.36 (see the sketch below)</li> <li>3 models to ensemble</li> <li>Tuned architecture params and AdamW weight decay</li> </ul>
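<p>A minimal sketch of the weighted embedding blend (0.64 / 0.36) from the summary above, assuming <code>emb_a</code>/<code>emb_b</code> are word-to-vector dicts for the two pretrained embeddings; the fallback for words found in only one source is an assumption.</p> <pre><code>import numpy as np

def blend_embeddings(word_index, emb_a, emb_b, dim=300, w_a=0.64, w_b=0.36):
    """Build the embedding matrix as a weighted sum of two pretrained embeddings;
    a word present in only one source falls back to that source."""
    matrix = np.zeros((len(word_index) + 1, dim), dtype=np.float32)
    for word, i in word_index.items():
        a, b = emb_a.get(word), emb_b.get(word)
        if a is not None and b is not None:
            matrix[i] = w_a * a + w_b * b
        elif a is not None:
            matrix[i] = a
        elif b is not None:
            matrix[i] = b
    return matrix
</code></pre>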
Quora Insincere Questions Classification
15th Solution - focus on models
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! </p> <p>Thank you. If we remove any of these models, the final performance drops dramatically, not just for RCNN. And we do simple splits, not CV, because of the time limit.</p> <p>We did a lot of experiments on it; simple weights are the best choice. The weights are based on our observations and were set by hand.</p> <p>It is my first competition, and our team focused on models.</p> <p>We have now released our model at <a href="https://www.kaggle.com/xiaobai1123q/15th-place-solution">https://www.kaggle.com/xiaobai1123q/15th-place-solution</a> </p> <p>We ran our model again (because the submitted kernel is just one version) and achieved a better result than on the leaderboard. I hope we can bring you some help.</p> <p>In the text preprocessing stage, we don't have any original work; all of it comes from public kernels.</p> <p>Our main contribution lies in the four models we ensemble. Below, I will briefly explain them (a sketch of the second model follows at the end of this post).</p> <p>The first model is an RCNN. The second model is LSTM(128) + GRU(96) + maxpooling1D + dropout(0.1). The third model is LSTM(128) + GRU(64) + Conv1D + maxpooling_concatenate. The fourth model is LSTM(128) + GRU(64) + Conv1D + Attention.</p> <p>We used word vectors formed by concatenating GloVe and fastText. We set max_features = None and max_len = 57.</p> <p>Questions, advice and suggestions are all welcome.</p>
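<p>A hedged Keras sketch of the second ensemble member described above; only the layer sizes (LSTM 128, GRU 96, dropout 0.1, max_len 57) come from the post, while bidirectionality, the dense head and the compile settings are assumptions.</p> <pre><code>from keras.models import Model
from keras.layers import (Input, Embedding, Bidirectional, CuDNNLSTM, CuDNNGRU,
                          GlobalMaxPooling1D, Dropout, Dense)

def build_model_2(embedding_matrix, maxlen=57):
    """Sketch of model 2: LSTM(128) + GRU(96) + max pooling + dropout(0.1)."""
    inp = Input(shape=(maxlen,))
    x = Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                  weights=[embedding_matrix], trainable=False)(inp)
    x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(x)
    x = Bidirectional(CuDNNGRU(96, return_sequences=True))(x)
    x = GlobalMaxPooling1D()(x)
    x = Dropout(0.1)(x)
    out = Dense(1, activation='sigmoid')(x)
    model = Model(inp, out)
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model
</code></pre>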
Quora Insincere Questions Classification
18th place solution from 300-th at Public LB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We just selected two submissions: one uses hard typo correction and the other doesn't. At first this was only because of the time limit (fixing typos takes a lot of time), but correcting typos turned out to give a lower score.</p> <p>haha thanks a lot and have a nice partner ;)</p> <p>Thanks!</p> <p>Thanks!</p> <p>Thanks Quora for a great competition and thanks to all who participated and contributed here. We were surprised by this result since we were in the bronze range on the public LB. Our (me and <a href="https://www.kaggle.com/hattan0523">@hattan0523</a> ) kernel is here: <a href="https://www.kaggle.com/kentaronakanishi/18th-place-solution">https://www.kaggle.com/kentaronakanishi/18th-place-solution</a> The local CV of our model is around 0.696-0.698.</p> <p>Our main points are below:</p> <ul> <li>simple 2-layer RNN model with units=96, 5-fold</li> <li>semantic Bernoulli dropout for the embedding layer</li> <li>remove all filters when using the Keras tokenizer</li> <li>batch size control to get a larger batch size in later epochs</li> <li>cut the data length at the max length within each batch (see the sketch below)</li> <li>learn embedding weights only in the last epoch</li> <li>lots of luck from using my wedding anniversary as the seed</li> </ul> <p>Things we did not use in our case:</p> <ul> <li>capsules: contributed, but took too much time</li> <li>attention: only affected val_loss, with no change in CV score</li> <li>CNN-based models: they worked, but increasing the RNN width was better for us</li> <li>GBDT models (XGBoost, LightGBM): they didn't achieve our desired CV score and needed a lot of time to run, so we didn't use an ensemble or stacking method. </li> <li>word typos: a kernel (<a href="https://www.kaggle.com/sunnymarkliu/more-text-cleaning-to-increase-word-coverage">https://www.kaggle.com/sunnymarkliu/more-text-cleaning-to-increase-word-coverage</a>) shows many typos of the dataset at In[18] of the kernel. We rechecked and modified them. This was good for the public LB, however not good for the private LB.</li> </ul> <p>All questions and suggestions are welcome.</p> <p>Thanks!!</p>
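<p>A minimal sketch of two of the points above: cutting the data length at the max length within each batch, and growing the batch size in later epochs. <code>x</code> is a list of token-id lists, <code>y</code> the labels; the starting batch size and the growth schedule are assumptions.</p> <pre><code>import numpy as np
from keras.utils import Sequence
from keras.preprocessing.sequence import pad_sequences

class PerBatchPadding(Sequence):
    """Pad every batch only up to the longest question it contains."""
    def __init__(self, x, y, batch_size=512):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        xb = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        yb = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        return pad_sequences(xb, maxlen=max(len(s) for s in xb)), np.asarray(yb)

# Batch size control: refit with a fresh, larger-batch generator per stage.
# for bs, epochs in [(512, 2), (1024, 2), (2048, 1)]:      # schedule is an assumption
#     model.fit_generator(PerBatchPadding(x_train, y_train, bs), epochs=epochs)
</code></pre>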
Quora Insincere Questions Classification
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We used Keras in this competition. I shortly tested Pytorch and did not see any speed gains.</p> <p>Exactly and thanks :)</p> <p><a href="/learnmower">@learnmower</a> For half of the epochs you increase the LR, and for half of the epochs you decrease it. So basically step_size is half of the epochs and you only do one full cycle.</p> <p>Same model fit several times.</p> <p>Thanks! Hyperparameter tuning based on CV.</p> <p>Regarding rank prediction: Assume you have predicted probabilities for a single model, you then transform them into ranks (e.g., rankdata in numpy). Then you average the ranks when combining individual models and divide by the length so your final predictions again end up between zero and one. For then finding a fixed threshold a simple strategy is to just take the mean best threshold on multiple CV runs or similar simulations. However, there are still outliers in both directions depending on the split so if you are really unlucky your fixed threshold is "far" away from the optimal. So what we tried is testing various fixed thresholds and evaluate how far the resulting F1 score is compared to if you would take the optimal threshold for this fold. We then finally chose that threshold that had the least average deviation from the optimal. Of course, you can still get unlucky, but to a lesser extend. Does that make it a bit clearer? I will try to add an image example to the top post.</p> <p>Yeah, 10 times the same model.</p> <p>As written, we actually tried much larger batch sizes up to 10240 allowing us to fit close to 20 models. Results looked good as well, but we saw convergence after combining around 10 models or so and would have needed to further fine-tune the higher batch size models which is why we submitted the 512 batch size models. But as you say, the higher the batch size to choose, the larger the max sequence length is, ending up in exactly what you observed. </p> <p>I can't say for sure how much as we decided on using both quite early in the competition, but according to CV it was definitely worth it. We tried a few other things like concatenating etc. which led to similar results, but worse runtime. In the end, this is another aspect of the overfit/underfit discussion above and utilizing other things might lead to the need for doing the combination of embeddings a bit differently.</p> <p>First of all, we want to thank Kaggle for hosting the competition and Quora for providing such a large dataset. Last 3 months were quite exhausting for us with a steep learning curve and tons of the ideas we wanted to try out. In the following we try to summarize some of the main points of our solution.</p> <p><strong>Model Structure</strong> We played around with a variety of different model structures, but in the end resorted to a quite simple one that is very similar to those posted here <a href="https://www.kaggle.com/c/quora-insincere-questions-classification/discussion/79824">https://www.kaggle.com/c/quora-insincere-questions-classification/discussion/79824</a>. 
It’s basically a Single Bi-LSTM 128 followed by a Conv1D with kernel size 1 only and GlobalMaxPooling afterwards plus additional dropout layers with minimal dropout. We additionally use a few statistical features.</p> <p><img src="https://i.imgur.com/zUY9tVN.png" alt="enter image description here"></p> <p><strong>Embeddings</strong> First, we use all tokens from both train and test data for our vocabulary. We do the simple pre-cleaning that was posted in a kernel at the start of the competition and split by space afterwards (spacy and nltk resulted in similar performance). We do not lowercase, but keep uppercase, and do not limit the vocab at all. For embeddings we use glove and para where we weight glove a bit higher. The most important thing now is to find as many embeddings as possible for our vocabulary. We had a few steps to achieve this, like checking singular and plural of the word, checking lowercase embeddings, removing special tokens, etc. For public test data we had around 50k of vocab tokens we did not find in the embeddings afterwards. Even though we tried a few different strategies for handling the OOV tokens, we resorted to a single OOV token with a single random embedding vector. </p> <p><strong>Threshold</strong> We spent a lot of time trying to figure out good strategies for choosing a good threshold for classification. Over time, we saw that estimating the threshold on validation data and then applying it on test data does not really work. There is a large variation on optimal thresholds. So what we did instead is to try to find a fixed threshold on CV that produces the least deviation for the f1 score from the optimal threshold. We saw that we can get more stable results when we produce ranks on the predicted probability and average the ranks instead of averaging probabilities. For final submission we then chose the best CV threshold. This also allowed us to fit the model on the complete data without the need to rely on a random split and less training data. The visualization below shows that in action (not necessarily our final eval). On the x-axis we plot the different fixed thresholds and on the y-axis we see the deviation from the optimal F1 score across folds using this fixed threshold (see CV chapter below). The blue line is the mean, green is median, purple is minimum, red is maximum, and bars are stds. So for example here, if we choose a threshold in the range of 0.927 we expect the F1 score to be not much worse (around 0.001) compared to choosing the optimal threshold (which we can't do for test data). In practise, this might of course deviate further and we could also see larger deviations on PLB. For further elaboration, please check the comments.</p> <p><img src="https://i.imgur.com/2NPwBIR.png" alt="enter image description here"></p> <p><strong>Runtime tricks</strong> We aimed at combining as many models as possible. To do this, we needed to improve runtime and the most important thing to achieve this was the following. We do not pad sequences to the same length based on the whole data, but just on a batch level. That means we conduct padding and truncation on the data generator level for each batch separately, so that length of the sentences in a batch can vary in size. Additionally, we further improved this by not truncating based on the length of the longest sequence in the batch, but based on the 95% percentile of lengths within the sequence. 
This improved runtime heavily and kept accuracy quite robust on single model level, and improved it by being able to average more models.</p> <p><strong>Fitting</strong> We use a one cycle policy with Nadam optimizer (you can do this with the typical CyclicLearningRate implementations by just changing the step size to half your total iterations). We chose a batch size of 512. We could achieve similar results by even taking 10 or 20 times higher batch sizes, which goes hand in hand with recent research on fast convergence. With these larger batch sizes we could even fit close to 20 models, but results stabilized close to 10 models which is why we chose to go with the smaller batch size in the end. However, there might still be some room left here if one properly tunes this.</p> <p><strong>Multiple models</strong> In the end, we managed to fit more than 10 models on the complete training dataset with help of the runtime tricks mentioned before. Our best final private score even had only a runtime of 6000 seconds (I think they used a bit better hardware for running), so there would be space for 1-2 more models. With larger batch sizes even much more might be feasible. As mentioned, we then average the rank predictions of each model and use our specified threshold for prediction.</p> <p><strong>Embrace the randomness</strong> As it was necessary to utilize CUDNN Layers in this competition, there was some randomness involved that could be quite frustrating from time to time. I saw many people trying to fix seeds etc. and some claiming they could completely remove the randomness by using Pytorch (I still don’t believe this BTW as CUDNN has atomic operations). However, as mentioned before, a well working strategy in this competition was to combine multiple models and to end up with a good ensemble, those models should be a bit different to each other. So having different random initializations etc. can be helpful. Seeing people setting the seed as a hyperparameter is weird.</p> <p><strong>CV Evaluation</strong> What I saw many people doing wrongly in this competition, and we also only figured this out after a while, is to trust their single out-of-fold evaluation. However, in this competition, it is crucial to combine (average) multiple models (in our case the same model). That means that our CV evaluation looks like the following. We do a k-fold split (mostly 10-fold) and fit the same model up to v-times on the same training split and then successively evaluate it on the single out of fold. So for the first split, we first fit one model and evaluate it, then a second one and evaluate the average and so forth. We repeat this for all 10 folds, landing us with e.g., 100 model fits overall, and then we can take a look at the median or mean over all folds for v-model-ensembles. The reason for doing this is that f1 scores are very different on the split you have. For one 10% split you might end up with a maximum of 0.72 and for the other you might end up at 0.705 or similar. So repeating the split 10 times, fitting the same model v-times for each split, and then looking at the grand picture gave us the best overall evaluation. This routine helped us to compare individual solutions with each other. BTW our final scores are exactly what we would expect from our CV evaluation, but again this might be lucky :)</p> <p><strong>Robustness and over/underfitting</strong> Around 2 weeks before final submission, our results became so stable that changing things did not alter results much. 
Things like finding more OOV embedding vectors resulted in the same results, using slightly different layers ended up being similar, and so on. This was a bit frustrating, but in the end things worked out. In the end, it was important to find a good balance between over- and underfitting (as always). Underfitting too much led to good single model performances, but was worse for combining models, and the other way around. For example, if your model overfits, there can be many different solutions to tackle this, e.g., add dropouts, or reduce the vocab size, or reduce model complexity, etc. So if someone says on kaggle that one thing works for them, that does not necessarily mean that it will work for you, as you might already be doing something similar that has similar effects (a good example is the Gaussian noise discussion).</p> <p><strong>What did not work for us</strong> Mostly you only read what worked, but here is an incomplete shortlist of what did not work for us. This does not mean that it doesn’t work at all, but rather that it was worse for our specific solution.</p> <ul> <li>Different optimizers (focal loss was similar though)</li> <li>Label smoothing</li> <li>Auxiliary learning / multitask learning</li> <li>Snapshot learning</li> <li>Pseudo labeling</li> <li>Fitting own embeddings with gensim</li> <li>Spelling correction</li> <li>Taking median/percentile of predictions instead of average</li> <li>More complex layers and architectures (Attention, QRNN, Capsule, larger/multiple LSTM layers, larger CNN kernel sizes, LGBM or bag of words)</li> <li>Word collocations - Several words put together can bear a completely new meaning, which is not captured by embeddings. Glove turned out to have quite a lot of such collocations with words put together using "-" sign. So we replaced examples like "ethnical cleansing" with "ethnical-cleansing", which is then captured by a more appropriate glove embedding. It showed no improvement on CV.</li> <li>Extra statistical features - Presence of statistical features added a little bit to the accuracy based on CV, but we saw no improvement with other extra features, like sentiment or bag-of-words based variables.</li> <li>Replacement of words with synonyms - An idea of replacing all nationalities (or e.g. political party) with the same word did not work at all.</li> <li>Order the train data by the length of the sentences - This approach gave a dramatic improvement in the fitting time because each batch contained only sentences with similar sizes, but it hurt the accuracy of the model too much: the model then focuses only on certain sequence lengths in each batch and loses diversity.</li> </ul>
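<p>As a small illustration of the batch-level padding/truncation trick from the runtime section above, here is a minimal sketch of a generator that pads each batch only up to the 95th percentile of its own sequence lengths. This is plain NumPy and all names are mine, not taken from the original kernel.</p>
<pre><code>import numpy as np

def batch_iterator(sequences, labels, batch_size=512, pct=95, shuffle=True):
    """Yield (padded_batch, batch_labels); the padding/truncation length is
    chosen per batch from the 95th percentile of that batch's lengths."""
    idx = np.arange(len(sequences))
    if shuffle:
        np.random.shuffle(idx)
    labels = np.asarray(labels)
    for start in range(0, len(idx), batch_size):
        batch_idx = idx[start:start + batch_size]
        batch = [sequences[i] for i in batch_idx]
        # per-batch length cap: 95th percentile instead of the global maximum
        maxlen = max(1, int(np.percentile([len(s) for s in batch], pct)))
        padded = np.zeros((len(batch), maxlen), dtype=np.int64)
        for row, seq in enumerate(batch):
            trunc = seq[:maxlen]              # truncate the few very long outliers
            padded[row, :len(trunc)] = trunc  # post-pad the rest with zeros
        yield padded, labels[batch_idx]
</code></pre>
<p>Padding per batch instead of per dataset is what buys most of the speed-up; the percentile cap only trims the rare very long questions.</p>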
Quora Insincere Questions Classification
1st place solution (public LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Motivation</h2> <p>The purpose of this competition wasn't really clear to me. So many SOTA NLP models were introduced in 2018 and none of them were allowed during the competition. It sounds like a joke to approach a complicated NLP task when even ULMFiT (kudos to <a href="/jhoward">@jhoward</a>) is prohibited. So I decided to have some fun together with <a href="/evgeny000">@evgeny000</a> and <a href="/mathurinache">@mathurinache</a>.</p> <p><a href="https://i.imgur.com/IH4fZyx.gif?1"><img src="https://i.imgur.com/IH4fZyx.gif?1"></a></p> <h2>Getting true labels</h2> <p>Many of you were wondering about 0.782 score. The solution is quite straightforward: we just <strong>scraped the answers</strong>. The process was the following:</p> <ol> <li>It's easy to notice that Quora links are very much similar to the actual question asked. By applying several heuristics you can back reverse the link from the question. For example, <a href="https://www.quora.com/Why-did-you-quit-your-job-at-Amazon">https://www.quora.com/Why-did-you-quit-your-job-at-Amazon</a></li> <li>Insincere questions can be then detected by the tag <em>QuestionRestrictedInsincerePrompt</em> in the HTML code which can be obtained by python requests library. However, there is even more elegant solution to that.</li> <li>At some point, we've noticed that if you add <em>/log</em> to any Quora question link than you get the full history of page changes including topics, comment, users, etc. Try it <a href="https://www.quora.com/unanswered/Are-Muslims-really-ashamed-of-their-religion/log">here</a>. As you may see there is a log line "Question marked as possibly insincere by Quora Content Review". That is exactly what we were looking for. </li> </ol> <p><a href="https://imgflip.com/i/2tqg7u"><img src="https://i.imgflip.com/2tqg7u.jpg"></a></p> <h2>Second stage labels</h2> <p>Of course, that's not the end of the story. Scraping 1 stage answers would be useless without getting 2 stage labels. How can we guess them? First, we tried using data from the <a href="https://www.kaggle.com/c/quora-question-pairs">previous Quora competition</a>. We collected around 2000 insincere questions from there. The second option was using Related Questions but none of them were insincere. </p> <p>Our last resort was <strong>toxic users</strong>, i.e. non-anonymous users with many insincere questions. For example, <a href="https://www.quora.com/profile/Jim-Lunde/questions">this guy</a>. We collected a list of such users based on train/test datasets and then scraped all of their questions. Some of these questions were not in 1 stage datasets which seemed to be very promising - they should have been in the 2 stage data. Unfortunately, as you can see by our private score we were wrong (for the best, probably).</p> <p><a href="http://www.quickmeme.com/img/98/98e8d07eb0ee43919ffe3526f0037200e832438fe2d9a89f71bdb209f321df01.jpg"><img src="http://www.quickmeme.com/img/98/98e8d07eb0ee43919ffe3526f0037200e832438fe2d9a89f71bdb209f321df01.jpg"></a></p> <h2>Small technical challenges</h2> <ul> <li>Notebooks and scripts on Kaggle are limited by the size of 1Mb. 
How could we squeeze several thousand questions into the code? We converted an array of questions into txt file, then compressed it with tar.gz, then converted the archive to base64 code and inserted it into the script. 10K questions were stored using only 500Kb of space. </li> <li>We used <a href="https://www.ip-adress.com/proxy-list">list of 50 proxies</a> and 100 threads to parallel our scraping process. At some point, my own IP address was completely banned from requests to Quora. The access was restored after a couple of days, though.</li> <li>Part of code that shows user questions was written in JS so some advanced knowledge of Selenium framework was required.</li> </ul> <p><a href="http://www.quickmeme.com/img/8c/8c7e8d075b2177dd011ad4ba4657b5164b80eff9af8ec16105c8856b23ffe240.jpg"><img src="http://www.quickmeme.com/img/8c/8c7e8d075b2177dd011ad4ba4657b5164b80eff9af8ec16105c8856b23ffe240.jpg"></a></p> <h2>Takeaways</h2> <ul> <li>There is always a room for a creative approach to a Kaggle competition</li> <li>You never fail if you learn something new</li> <li>Whatever you do - make it fun</li> </ul> <p>it seems like annotation was a bit different from Quora Content Review (which we all were trying to improve)</p> <p>who would've thought</p> <p>cores before hoes</p>
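<p>As a hedged sketch of the compression trick above, here is roughly how a list of questions can be squeezed into a script as a base64 string, using gzip on a single newline-joined text blob instead of the tar.gz archive the team describes (simpler for one text file; variable names are mine):</p>
<pre><code>import base64
import gzip

questions = [
    "Why did you quit your job at Amazon?",
    "Are Muslims really ashamed of their religion?",
]  # placeholder data

# Pack the questions into a compact ASCII string that fits inside a kernel script.
blob = base64.b64encode(gzip.compress("\n".join(questions).encode("utf-8"))).decode("ascii")

# Paste `blob` into the kernel as a string literal, then restore it at runtime:
restored = gzip.decompress(base64.b64decode(blob)).decode("utf-8").split("\n")
assert restored == questions
</code></pre>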
Quora Insincere Questions Classification
20th solution - 2 models, various embeds, mixed loss
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was shocked when I saw the final standing. We never passed the 0.7 baseline on public LB and it was really frustrating. I basically gave up and just prayed this competition was about CV instead of public LB. It turned out to be true. I teamed up with <a href="https://www.kaggle.com/kukicap">YangHe</a> in the last week and we decided to make two submissions: one focusing on CV and one focusing on public LB. I was responsible for the CV one. My final submission reached CV 0.698 and public LB 0.699.</p> <p>Anyway, this is my <a href="https://www.kaggle.com/jihangz/20th-solution-4-folds-2-models-mixed-loss">kernel</a>.</p> <p><strong>Pre-processing:</strong></p> <p>Basically the public kernel, with some bugs fixed (order of punc clean/contraction clean) and more contraction cleaning. I also used multiprocessing to speed things up. I met a bug when using the Keras Tokenizer with a PyTorch model: I couldn't set num words=None in the Tokenizer. It would run into some CUDA error during the training phase. So I fitted the Tokenizer locally and set num words = len(tokenizer.word_index) in the kernel.</p> <p><strong>Model:</strong></p> <p>I built two models:</p> <ol> <li><p>concat(GloVe, FastText) embedding + LSTM + TextCNN with kernel size [1, 2, 3, 4] + 2 dense layers, with some batch normalizations and dropout layers</p></li> <li><p>mean(GloVe, Para) embedding + LSTM + GRU + concat(GlobalAvgPool, GlobalMaxPool) + 2 dense layers, with some dropout layers</p></li> </ol> <p>We noticed the <a href="https://www.kaggle.com/c/quora-insincere-questions-classification/discussion/79911">bug</a> in the embedding dropout after the submission deadline.</p> <p><strong>Training:</strong></p> <p>I split the training data into 4 folds.</p> <p>Loss: BCE + soft F1 loss. I changed from BCE loss to this mixed loss on the last day and it gave me a 0.003 boost on public LB and a 0.002 boost on CV. It gave a stabler threshold v. F1 curve at the optimal point and I believe this granted us the 20th position. I also tried BCE + Lovasz, BCE pretrain and Lovasz fine-tune, BCE pretrain and soft F1 fine-tune, etc. Some of them didn't improve the model, others didn't converge at all. The model didn't converge when I was using pure soft F1 loss. This might be due to the label imbalance. Oversampling might be needed when using pure soft F1 loss, but I didn't have the time to try. </p> <p>I used a cosine schedule with max LR = 0.003, and trained each model for 4 epochs. I think the cosine schedule is better than a step schedule and it is my favorite scheduler of all time. Notice that overfitting the training set a little gives a stabler threshold v. F1 curve; that's why all 0.7 public kernels overfit. I also tried AdamW with weight_decay = 0.0001, and it indeed gave a better result. I didn't use it since it took more time to run.</p> <p><strong>Post-processing:</strong></p> <p>I averaged all 8 classifiers and set the threshold based on the oof predictions. I had made an all-positive submission to figure out that there were 3376 insincere questions in the public test data.
I noticed that a lot of solutions to the past competition set the threshold so that the ratio of predicted label in test set is the same as training set. However, I didn't do that because I felt that it would be dangerous to use the same strategy in a binary classification problem.</p> <p><strong>Some Takeaways:</strong></p> <ol> <li><p>Stability is the key. You want threshold as insensitive as possible. </p></li> <li><p>Model is not the most important thing. The major variation is in the embedding layer.</p></li> <li><p>Read discussion, read public kernels, read solutions to similar past competitions, read solutions to different past competitions.</p></li> <li><p>When you fork someone's code, read it! It might not be bug-free!</p></li> <li><p>Don't give up! The shakeup is REAL!</p></li> </ol>
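<p>The write-up above does not include the loss code, so here is one common formulation of a differentiable (soft) F1 term mixed with BCE in PyTorch. The <code>alpha</code> mixing weight and the function names are my own placeholders, not the author's values.</p>
<pre><code>import torch
import torch.nn.functional as F

def soft_f1_loss(logits, targets, eps=1e-7):
    """Differentiable F1: use predicted probabilities in place of hard 0/1 predictions."""
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    soft_f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1 - soft_f1                      # minimize (1 - soft F1)

def mixed_loss(logits, targets, alpha=0.5):
    """BCE plus a soft-F1 term; targets are float tensors of 0.0 / 1.0."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    return bce + alpha * soft_f1_loss(logits, targets)
</code></pre>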
Quora Insincere Questions Classification
22nd Solution - 6 Models and POS Tagging
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to everyone who participated in the awesome kernels and discussions that happened during this competition as well as my brilliant teammates. Always great to have people to bounce ideas off of. </p> <p>Here is the link to our final solution: <a href="https://www.kaggle.com/ryches/22nd-place-solution-6-models-pos-tagging">https://www.kaggle.com/ryches/22nd-place-solution-6-models-pos-tagging</a></p> <p>The guts of our solution were largely architected the same way as the kernels we made public. </p> <ol> <li><a href="https://www.kaggle.com/christofhenkel/how-to-preprocessing-when-using-embeddings">https://www.kaggle.com/christofhenkel/how-to-preprocessing-when-using-embeddings</a></li> <li><a href="https://www.kaggle.com/ryches/parts-of-speech-disambiguation-error-analysis">https://www.kaggle.com/ryches/parts-of-speech-disambiguation-error-analysis</a></li> <li><a href="https://www.kaggle.com/mihaskalic/lstm-is-all-you-need-well-maybe-embeddings-also">https://www.kaggle.com/mihaskalic/lstm-is-all-you-need-well-maybe-embeddings-also</a></li> <li><a href="https://www.kaggle.com/christofhenkel/inceptioncnn-with-flip">https://www.kaggle.com/christofhenkel/inceptioncnn-with-flip</a></li> <li><a href="https://www.kaggle.com/christofhenkel/keras-starter">https://www.kaggle.com/christofhenkel/keras-starter</a></li> <li><a href="https://www.kaggle.com/ryches/parts-of-speech-disambiguation-error-analysis">https://www.kaggle.com/ryches/parts-of-speech-disambiguation-error-analysis</a></li> </ol> <p>I have written a relatively comprehensive description of our entire solution in the link above, but to give a summary:</p> <p>In this competition we were able to train a total of 6 models for 74 epochs in all. How did we fit so many epochs into our 2 hour limit? We filtered out the easy examples. <a href="/christofhenkel">@christofhenkel</a> figured out by looking at the histogram of our predictions that within a few epochs our models had already confidently classified over 70 percent of our training samples. We trained a model really quickly in order to filter these easy questions. Once we threw those samples away we were able to train models just as accurately only using the 30 percent that remained (a small sketch of this filtering appears at the end of this write-up). </p> <p>Now that we had this additional time we trained 5 models paired with different embeddings based on how they performed in our offline ensembling. Our hillclimbing found that the best combination with 5 models was:</p> <ul> <li>DPCNN with reversing and glove embeddings</li> <li>A bidirectional gru into an lstm with the glove embeddings. 
(this was very similar to what we used for the toxic comment challenge and was our strongest individual model here as well)</li> <li>a parallel lstm and gru model w/ glove embeddings</li> <li>parts of speech bidirectional lstm and gru model w/ paragram embeddings</li> <li>parts of speech parallel lstm and gru model w/ news embeddings</li> </ul> <p>These choices actually seemed to make some sense given that we have a CNN model, our strongest LSTM/GRU models, use our strongest embedding 3 times and use POS tagging as an augmentor/differentiator to our weaker embeddings. </p> <p>The POS models ended up doing worse individually but when ensembled they significantly boosted our score. Our second submission used 8 models in total and still got a worse score than our 6 models with two of them being POS. If we had not done the filtering trick then we would not have had enough time to do the POS tagging, as it is relatively slow. I have a more detailed write-up of the POS models in the parts of speech disambiguation kernel I shared. </p> <p>Thank you. </p> <p>Yeah. I detailed this in the final kernel writeup along with the comparison between our 8 model and 6 model with POS. Looks like it isn't finished running though. We had to take a leap of faith that everyone was overfitting the leaderboard. </p> <p>Yes. You are correct</p> <p>They are initialized the same and then trained. I go into more detail in my pos disambiguation kernel</p>
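<p>One way to express the easy-example filtering described above is to keep only the samples a quick first model does not already classify confidently and correctly, based on its out-of-fold probabilities (the team kept roughly the hardest 30 percent). The margin below is an illustrative guess, not the team's actual cut-off, and the names are mine.</p>
<pre><code>import numpy as np

def hard_example_mask(oof_probs, labels, margin=0.2):
    """True for the examples the quick model is not confidently right about."""
    probs = np.asarray(oof_probs, dtype=np.float64)
    labels = np.asarray(labels, dtype=np.float64)
    residual = np.abs(probs - labels)            # small residual = confidently correct
    return np.greater_equal(residual, margin)    # keep only the hard examples

# keep = hard_example_mask(oof_probs, y_train)
# X_hard, y_hard = X_train[keep], y_train[keep]
</code></pre>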
Quora Insincere Questions Classification
25th place solution - unfreeze and tune embeddings!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Fixed it. Sorry for the broken link. </p> <p>_edit: creating and padding sequences within a dplyr-pipe might be the problem! The docker image version changed and runtime seems really bad - will dig into this later.</p> <p>Time and memory was limited. Using reticulate + python reduced memory and was faster. Let me know if there is a "better" way :)</p> <p>thank you! </p> <p>Thanks - and congratulations on winning this competition!</p> <p>Hi all - I tackled this competition in R &amp; keras. Right after stage 1 docker-images got updated and I had a really bad feeling. I am relieved now that all worked out!</p> <h2>Preprocessing:</h2> <p>I did some basic preprocessing (replacing common typos and separating special characters) – nothing special here (<a href="https://www.kaggle.com/springmanndaniel/preprocessing-in-r">link to preproc kernel</a>)</p> <h2>Embeddings</h2> <p>I combined R and Python (Reticulate) to load and merge (GloVe + Para) pretrained embeddings. This way I could save some time. (<a href="https://www.kaggle.com/springmanndaniel/combine-r-and-python-to-load-embeddings">link to embedding kernel</a>)<br> Vocabulary was built on training-data only (196090 words).<br> All words that did not appear in GloVe/ Para were replaced by zeroes. Towards the end of each models (last epoch) training phase I turned the embedding layer to trainable (see <strong>Boosting</strong>). This way each model overfitted a little bit + the model created some representation for missing words. </p> <h2>Keras Model</h2> <p>I used a single model architecture and trained it on 6 folds. The final ensemble was a simple average of the six runs. <br> The model was a mix of LSTM, Convolutions and fully connected Layers. 
(<a href="https://www.kaggle.com/springmanndaniel/25th-place-solution?scriptVersionId=10255060">link to model kernel</a>)</p> <ul> <li>Epochs: <strong>4</strong></li> <li>Learning-rate: <strong>0.003, 0.003, 0.003, 0.001</strong></li> <li>Batch-size: <strong>512x2</strong> on epoch 1-3 and <strong>512x1.5</strong> on epoch 4</li> <li>Input sequence length: <strong>60</strong></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F606532%2F11198a9e0b5f678a58ec8246948a3902%2Fmodel_arch.png?generation=1609753363950890&amp;alt=media" alt=""></p> <p><strong>CV SCORE: ~0.6977</strong> <br> <strong>Fold: 1</strong> Val F1 Score: <strong>0.695</strong> Val Loss: 0.0954 best thresh: 0.4 (unknown words = 51432)</p> <p><strong>Fold: 2</strong> Val F1 Score: <strong>0.698</strong> Val Loss: 0.0942 best thresh: 0.38 (unknown words = 7871)</p> <p><strong>Fold: 3</strong> Val F1 Score: <strong>0.695</strong> Val Loss: 0.0952 best thresh: 0.38 (unknown words = 20)</p> <p><strong>Fold: 4</strong> Val F1 Score: <strong>0.695</strong> Val Loss: 0.093 best thresh: 0.36 (unknown words = 20)</p> <p><strong>Fold: 5</strong> Val F1 Score: <strong>0.701</strong> Val Loss: 0.0917 best thresh: 0.41 (unknown words = 20)</p> <p><strong>Fold: 6</strong> Val F1 Score: <strong>0.700</strong> Val Loss: 0.0943 best thresh: 0.36 (unknown words = 20)</p> <h2>Threshold</h2> <p>I calculated the threshold based on validation data. </p> <h2>Boost</h2> <p>What boosted my model most was unfreezing embeddings towards the end of each run and updating unknown words by their newly learned representations so that subsequent models could utilize more words for training.<br> This helped because each of the 6 models overfitted slightly - which added more diversity to the final ensemble &amp; helped the model to deal with unknown words.</p> <p>It looks like this:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F606532%2F3dc4bde4aadf4c928bd9662cba72826f%2Funfreeze.png?generation=1609751408734952&amp;alt=media" alt=""></p> <p>Cheers<br> dan</p> <hr> <p>edit:</p> <ul> <li>fixed typos</li> <li>added keras network</li> </ul>
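<p>The unfreeze-and-propagate boost above was implemented in R keras; as a hedged illustration, here is roughly what the same idea looks like in Python Keras. The layer name, optimizer settings and the <code>oov_rows</code> bookkeeping are assumptions of mine, not taken from the original code.</p>
<pre><code>import numpy as np
from tensorflow import keras

def finetune_embeddings_last_epoch(model, X, y, emb_matrix, oov_rows):
    """Train one extra epoch with the embedding layer unfrozen, then copy the
    newly learned vectors for unknown words back into the shared matrix so the
    next model in the ensemble can start from them."""
    emb_layer = model.get_layer("embedding")          # assumed layer name
    emb_layer.trainable = True
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy")         # recompile after unfreezing
    model.fit(X, y, batch_size=512, epochs=1)
    learned = emb_layer.get_weights()[0]
    emb_matrix[oov_rows] = learned[oov_rows]          # propagate learned OOV vectors
    return emb_matrix
</code></pre>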
Quora Insincere Questions Classification
27th kernel
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! congrats to you too</p> <p>Hi 5 :)</p> <p>welcome!</p> <p>Lots of failed runs, lots of failed experiments and finally a great big shakeup. Nothing amazing, but we made our kernel public at <a href="https://www.kaggle.com/dicksonchin93/kfold-tfidf-trial">https://www.kaggle.com/dicksonchin93/kfold-tfidf-trial</a> just to share. Public CV before the second stage was 0.683 and private after the second stage is 0.69721, something like a 1k+ position before the second stage (with another kernel; 0.683 should be around the 4k position LOL) that shot to 27th after, a pleasant surprise. The solution is an average blend of 5 models which together use all 4 embeddings, with a local CV of 0.7028. Some notable differences: we did lemmatization and lower-/upper-casing of words to find OOV words in each embedding; made a Keras sparse model with TF-IDF and some additional feature engineering from kernels and previous competitions; and made a boosting RNN model on top of a TF-IDF-to-Ridge model, which pushed the optimal threshold into the 0.4-0.5 range in local CV. We used a little parallel computation to make all the processing run in time and tested quite a few setups; we found it is best to run the PyTorch GPU models in the main interface as-is and run the other thread asynchronously, since initializing PyTorch GPU variables in a thread in Kaggle kernels fails for some reason. Also, we had to opt for a 'weaker' set of RNN models (RNN architectures with fewer hidden cells, etc.) so that the kernel could run in time, although stronger RNN models gave us a higher local CV. We found that concatenating the embeddings helps the NN models converge faster and uses all the embedding information more optimally, with regard to time and resources, than running individual embedding models. Last but not least, we opted out of CNN models since they take a lot more time to run. We also did a mock test in the end by sampling the test dataframe at 6x its size, just to make sure the kernel would run in the second stage. I once thought collaboration in a big team is hard, and should be even harder in a kernels competition! But now I know it's easy, you just have to work independently from the other half of the team :) Special thanks to <a href="/learnmower">@learnmower</a> for great contributions to the kernel, and also thanks to <a href="/wrosinski">@wrosinski</a> for optimizing some parts of the code. Also thanks to <a href="/mchahhou">@mchahhou</a> for his kernel in Mercari.</p>
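<p>As a small sketch of the OOV handling mentioned above (lower-/upper-casing and lemmatization before giving up on a word): the function and its fallback order are my own illustration, and the NLTK lemmatizer needs the WordNet data downloaded.</p>
<pre><code>from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def lookup_vector(word, embeddings_index):
    """Try several variants of a word before declaring it out-of-vocabulary."""
    candidates = (word, word.lower(), word.upper(), word.capitalize(),
                  lemmatizer.lemmatize(word.lower()))
    for candidate in candidates:
        if candidate in embeddings_index:
            return embeddings_index[candidate]
    return None  # truly OOV
</code></pre>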
Quora Insincere Questions Classification
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Summary</h2> <p>I used a single NN with 1-layer Bi-GRU and dense layers for statistical features. I did seed averaging for ensemble. I used PyTorch to write NN.</p> <p>Key factors of my solution are</p> <ul> <li>Tune hyperparameters based on solid CV </li> <li>Train word embeddings on the competition dataset. I guess most participants didn't?</li> <li>Faster training techniques to train more models. It's adaptive lengths of sequences to input RNN for each batch</li> </ul> <h2>Preprocessing</h2> <p>I inserted spaces around characters except alphabets and numbers. Then, I used keras tokenizer, which splits by only space.</p> <p>After tokenization, I applied spell correction to OOV words. The rough idea of the spell correction algorithm is to find words with 0 or 1 levenshtein distance while ignoring cases. Precisely, there are a few heuristics.</p> <p>In my case, devising preprocessing including the above spell correction did not change CV score so much.</p> <h2>Model architecture</h2> <p>The main part is 1-layer Bi-GRU with hidden size 128 followed by the concatenation of max pooling, average pooling and first/last positional outputs. Another part is dense layers for statistical features. The outputs of 2 network parts are concatenated, then fed to dense layers.</p> <pre>QuoraModel( (embedding): Embedding(222910, 668, padding_idx=0) (text): RNNBlock( (rnn): GRU(668, 128, batch_first=True, bidirectional=True) ) (features_dense): Sequential( (0): Linear(in_features=92, out_features=32, bias=True) (1): ReLU(inplace) (2): Linear(in_features=32, out_features=16, bias=True) (3): ReLU(inplace) ) (dense): Sequential( (0): BatchNorm1d(1040, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Dropout(p=0.25) (3): Linear(in_features=1040, out_features=64, bias=True) (4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace) (6): Dropout(p=0.1) (7): Linear(in_features=64, out_features=1, bias=True) ) ) </pre> <h2>Embedding</h2> <p>I used glove and wiki-news pretrained word embeddings. I also trained 64 dimensional word embeddings on the competition dataset (train+test) with fastText. In addition to them, I added 4 binary features to word embeddings (all upper chars?, first char upper?, only first char upper?, OOV word?). Finally, I concatenated all of them. The parameters of word embeddings are freezed during training.</p> <h2>Statistical features</h2> <ul> <li>the number of words</li> <li>the number of unique words</li> <li>the number of characters</li> <li>the number of upper characters</li> <li>Bag of characters: Implemented by <code>CountVectorizer(ngram_range=(1, 1), min_df=1e-4, token_pattern=r'\w+',analyzer='char')</code></li> </ul> <h2>Length of sequences to input RNN</h2> <p>For the faster training, I adjusted the lengths of sequences for each batch. When training, I used the maximum length of sequences in the batch or 55 length by applying pre-truncation if the maximum length over 55. 
When predicting for test, the truncation is applied if the length is over 70 instead of 55.</p> <p>Thanks to this trick, I was able to train 6 models on kernel compared 5 models without this trick.</p> <h2>Training</h2> <p>I used Adam with learning rate 0.001. The learning rate is multiplied by 0.8 after each epoch.</p> <p>I got the best CV score with batch size 256. But, batch size has the trade-off between score and training time. As I increase batch size, CV score gets worse and training gets faster. I chose batch size 320 by checking CV score and training time on kernel.</p> <h2>Ensemble</h2> <p>I did seed averaging of 6 models. I trained 6 models with different seeds. 5 epochs are spent for each model. Each model is trained on the full train dataset, in other words, I didn't use k-fold split to train different models.</p> <p>I averaged the predictions of 6 models. Then, I made the final binary predictions with threshold 0.36.</p> <h2>Local validation</h2> <p>I did 5 fold CV for the local validation. For each fold, I used predictions after ensemble rather than predictions by 1 model for more stable CV, closer CV score to LB score and more optimal hyperparameter search when ensemble.</p> <h2>CV score</h2> <p>I show CV scores of my model used for private LB and several models without some feature. 0.70974 is the CV score for the model used for private LB.</p> <pre>|Removed feature |score | |----------------------------------|-------| |no removal |0.70974| |4 binary embedding feature |0.70957| |spell correction |0.70953| |statistical features |0.70877| |word embeddings trained on dataset|0.70794| </pre> <p>Thanks!</p> <p>About threshold selection, there is no fancy thing. It was chosen based on local CV and hard-coded. After obtaining ensembled predictions, I checked CV scores with different thresholds and chose roughly a good threshold.</p> <p>The CV scores with different thresholds of the model used for private LB are</p> <pre>|threshold|CV score| |---------|--------| |0.350 |0.70961 | |0.355 |0.70969 | |0.360 |0.70974 | |0.365 |0.70976 | |0.370 |0.70969 | |0.375 |0.70959 | |0.380 |0.70946 | </pre> <p>The best threshold is 0.365 for the predictions in this case. But, I didn't care about the difference of CV scores with threshold 0.36 and 0.365 because it's small and the best threshold varies around 0.36~0.365 when training models with the completely same hyperparameter setting.</p>
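<p>As an illustration of the four binary word features the 2nd place write-up above appends to each word vector (all upper chars?, first char upper?, only first char upper?, OOV word?), here is a minimal sketch; the helper name and flag order are mine.</p>
<pre><code>import numpy as np

def word_flags(word, embeddings_index):
    """Four binary flags appended to the concatenated pretrained word vectors."""
    return np.array([
        float(word.isupper()),                 # all upper chars?
        float(word[:1].isupper()),             # first char upper?
        float(word.istitle()),                 # only first char upper?
        float(word not in embeddings_index),   # OOV word?
    ], dtype=np.float32)

# row = np.concatenate([glove_vec, wiki_vec, small_w2v_vec, word_flags(word, glove)])
</code></pre>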
Quora Insincere Questions Classification
33rd place solution- FastText embedding
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Let me start by thanking everyone here who participated and contributed. We learned a lot of tips and tricks from community shared kernels and discussions. The competition was challenging in terms of finding correlated local validation, running the solution in 2 hours, and producing reproducible results, to name a few. We found a well-correlated validation set about a week before the end of the competition. We used simple averaging where each trained NN model (PyTorch) was different either in terms of learning rate, pre-processing, embedding or architecture, to maintain model diversity. One important thing we realised was: given a small learning rate and a large number of epochs, single FastText embedding based models were beating our GloVe+Paragram models. So we focused on tuning FastText based models, which could provide a considerable score within 5-6 epochs. We also added Gaussian noise to some models after the embeddings to reduce the overdependence of the RNN on specific keywords.</p> <p><strong>Solution summary:</strong></p> <ul> <li><strong>Runtime</strong>: 6352.2 secs</li> <li><strong>Preprocessing</strong>: Cleaning special characters, number pre-processing, misspell cleaning (for some models we changed the preprocessing sequence to add diversity)</li> <li><strong>Embedding</strong>: GloVe, FastText, Paragram embeddings </li> <li><strong>Neural Network architecture</strong> (trained for 5-6 epochs with no fold): <ul><li>Stacked LSTM-GRU-128 hidden units, with GloVe+Paragram embedding</li> <li>Stacked LSTM-GRU 60 hidden units with attention and capsule, and GloVe+Paragram embedding</li> <li>Stacked LSTM-GRU-60 hidden units with GloVe+Paragram embedding</li> <li>Stacked LSTM-GRU-60 hidden units with FastText embeddings</li> <li>Stacked LSTM-GRU-80 hidden units with FastText embeddings and a different preprocessing sequence </li></ul></li> <li><strong>Blending</strong>: Averaging the predictions of each model with linear regression coefficients </li> </ul> <p><strong>Things that did not work</strong></p> <ul> <li>We tried pseudo labelling in different ways, but it didn't provide a major boost considering its running time, so we dropped it in the end.</li> <li>We tried a variety of preprocessing techniques to no avail. All of them tended to decrease the LB score with only a slight improvement in CV. Fearing overfitting the pre-processing to the training data, we kept it to a minimum.</li> <li>One trick that we tried was weight saving and retraining. For example, we trained the model and saved its weights before the model reached its optimum. Then for the next model we loaded the weights for the LSTM and GRU and did not pass gradients through them. This forced the new parts of the model, like extra CNN layers or linear units, to cover up for this. This saved time, as the new model reached its optimum within 2 epochs. But it did not add considerable benefit to the ensemble. In my opinion, the majority of the information in the text was captured by the RNN units, leaving little information to be captured by the newly added layers. 
Do let me know your thoughts on these experiments and their results (a small PyTorch sketch of the freezing idea follows below).</li> </ul> <p>Special thanks to <a href="http://www.kaggle.com/shujian">Shujian</a>, <a href="http://www.kaggle.com/bminixhofer">Benjamin Minixhofer</a>, <a href="http://www.kaggle.com/christofhenkel">Dieter</a>, <a href="http://www.kaggle.com/ryches">Ryches</a>, <a href="http://www.kaggle.com/tunguz">Bojan</a>, to name a few, for great kernels and discussions!</p> <p>This wouldn’t have been possible without awesome teammates, <a href="http://www.kaggle.com/ashish2123">Ashish</a> and <a href="http://www.kaggle.com/rsrade">Rahul</a>, who put in a lot of effort and made this competition a great learning experience.</p> <p>Thanks for reading, I am planning to release the code in a few days after cleaning it. </p> <p>Edit: Github repository: <a href="https://github.com/soham97/Quora-Insincere-Questions-Classification-Challenge-NLP">https://github.com/soham97/Quora-Insincere-Questions-Classification-Challenge-NLP</a></p>
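<p>For the weight-saving-and-retraining experiment in the list above, here is a minimal PyTorch sketch of loading earlier LSTM/GRU weights and freezing them so that only the newly added layers keep training. The parameter-name prefixes are assumptions about how the model's modules are named.</p>
<pre><code>import torch

def load_and_freeze_rnn(new_model, pretrained_state, rnn_prefixes=("lstm", "gru")):
    """Copy RNN weights from a previous run and stop gradients through them."""
    new_model.load_state_dict(pretrained_state, strict=False)   # ignore the new layers
    for name, param in new_model.named_parameters():
        if name.startswith(rnn_prefixes):
            param.requires_grad = False                         # freeze LSTM/GRU weights
    trainable = [p for p in new_model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=1e-3)                 # optimize only the new layers
</code></pre>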
Quora Insincere Questions Classification
38th solution, data driven to find embedding weights
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Model structure</h2> <p><img src="https://upload-images.jianshu.io/upload_images/6673934-43708f05b93e32fe.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240" alt=""></p> <h2>Embedding weight</h2> <p>The learned embedding weights were roughly 0.5 * glove + 0.4 * para + 0.1 * wiki. The weights are obtained as follows: first, the sentence is looked up in all three embeddings to get a tensor <code>x</code> of shape (batch_size, maxlen, embed_size, 3). Then I create a weight vector of shape (3, ), pass it through a softmax layer and multiply it with the tensor <code>x</code>. I split the data into 5 folds and average the embedding weights learned in each fold to get the final embedding weights. In other words, the embedding weights are obtained in a data-driven way.</p> <h2>Word embedding</h2> <p>First I tried to preserve the case of each word and look it up directly in the pre-trained word vectors. If there is no match, the word is lowercased / uppercased / capitalized to find a corresponding word vector, and if it is still not found, the vector is randomly generated. But this common practice was not effective. My guess at the main reason: there are some weird spellings like "tHe" and "ThE" which, looking at the text, really just mean "the", yet GloVe still contains embedding vectors for them. These words appear relatively few times, so their vectors have low confidence in the GloVe training, even though they mean the same thing. So I changed them all into lowercase.</p> <h2>CV evaluation</h2> <p>In the discussions, everyone was worried about whether or not to trust local CV. But a lot of people only trust single-model local CV, not ensemble-model local CV. The former assesses a single model, the latter the ensemble. Since an ensemble is what is used at the end of the competition, it should be evaluated with ensemble-model local CV. The method is as follows: we divide the training set into k folds, then train all the models that need to be blended and blend their prediction results, so that we get a blended prediction per fold, and so on. In addition, I use the correlation to check the similarity of the models trained per fold; I want the models to be good and different. My approach is still somewhat dangerous: after all, the fold split is random, and the more random seeds there are, the less stable the final result.</p> <h2>Loss Function</h2> <p>I tried to use focal loss as the loss function; the single-model local CV was very good, around 0.712. But don't forget what I just said: this is only single-model local CV, not ensemble-model local CV. In the end, due to time, I did not evaluate its ensemble local CV, but I think with tuned hyperparameters it might be better than binary cross-entropy.</p> <p>My English may be poor, so I also wrote this up as a blog post on <a href="https://zhuanlan.zhihu.com/p/57550039">知乎</a> (Zhihu), the Chinese Quora. Thanks to everyone who shared solutions and kernels. I hope that in the future I will share more interesting and effective technical knowledge like everyone else.</p>
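<p>Here is a hedged PyTorch sketch of the data-driven embedding weighting described above: one learnable weight per pretrained embedding, normalized with a softmax and used to blend the three lookups. Class and variable names are mine.</p>
<pre><code>import torch
import torch.nn as nn

class WeightedEmbeddings(nn.Module):
    """Blend several pretrained embedding matrices with learnable softmax weights."""
    def __init__(self, embedding_matrices):   # list of (vocab, embed_size) float tensors
        super().__init__()
        self.embeddings = nn.ModuleList(
            [nn.Embedding.from_pretrained(m, freeze=True) for m in embedding_matrices]
        )
        self.mix = nn.Parameter(torch.zeros(len(embedding_matrices)))

    def forward(self, token_ids):
        # shape: (batch_size, maxlen, embed_size, n_embeddings)
        stacked = torch.stack([emb(token_ids) for emb in self.embeddings], dim=-1)
        weights = torch.softmax(self.mix, dim=0)   # learned, e.g. roughly 0.5 / 0.4 / 0.1
        return (stacked * weights).sum(dim=-1)
</code></pre>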
Quora Insincere Questions Classification
3rd place kernel
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's a common ensemble method. See <a href="http://cs231n.github.io/neural-networks-3/#ensemble">http://cs231n.github.io/neural-networks-3/#ensemble</a></p> <p>No post-processing after model predictions. Just a weighted average of predicted probabilities and a threshold</p> <p>Sorry, I'm unable to provide exact numbers about the improvement. You can give it a try in the kernel?</p> <p>Just routine 5-fold CV. In the submission kernel I train on all data due to the time limit </p> <p>Yes, I expected this to work before I coded it, and I confirmed the effectiveness by experiments. Finding as many word vectors as possible is one of the keys for this competition because 1) word vectors were obtained from external data, which means we train on much larger data (legally), 2) word vectors tend to cluster similar words, which means better generalization to unseen words.</p> <p>No, I'm not. I forgot to delete that part</p> <p><a href="/canming">@canming</a> Do you have any idea why? I don't see anything keras can do but spacy cannot</p> <p>hmm.. Depends on how you define data cleaning. Note that I did use stemmers, spell correctors and so on in processing. Your example is more like manual cleaning and may not cover as much. And I don't think fixes like changing "aren't" to "are not" relate to our target.</p> <p>I did not try the keras tokenizer, so no direct comparison. But I did read the code of the keras tokenizer, and there's nothing fancy about it. So I believed the spaCy tokenizer should be at least as good as keras's, if not a lot better</p> <p>Hi, I have published the 3rd place kernel. <a href="https://www.kaggle.com/wowfattie/3rd-place">https://www.kaggle.com/wowfattie/3rd-place</a></p> <p>I used a lot of others' work. The key factors of my method are: - spaCy tokenizer - No truncation of tokens - Try stemmer, lemmatizer, spell corrector, etc. to find word vectors - 2 layers of global max pooling - Checkpoint ensemble - Solid local CV to tune all the hyperparameters</p> <p>Questions, advice and suggestions are all welcome.</p> <p>EDIT: I forgot to mention that all the punctuation is included. "if token.pos_ is not "PUNCT"" has no actual effect</p>
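<p>Checkpoint ensembling, one of the key factors listed above, simply averages test predictions taken at several points of a single training run instead of using only the final weights. A framework-agnostic sketch, where <code>train_fn</code> and <code>predict_fn</code> are placeholder callables of mine:</p>
<pre><code>import numpy as np

def checkpoint_ensemble(model, train_fn, predict_fn, X_test, n_checkpoints=4):
    """Average predictions made at successive checkpoints of one training run."""
    preds = []
    for _ in range(n_checkpoints):
        train_fn(model)                          # e.g. train one more epoch / LR cycle
        preds.append(predict_fn(model, X_test))  # predict with the current weights
    return np.mean(preds, axis=0)
</code></pre>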
Quora Insincere Questions Classification
44th solution: Add all the randomness or how to improve your ensemble when all your models suck
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.memecreator.org/static/images/memes/4987185.jpg" alt="add all the randomness"></p> <p>I mentioned my ideas on how to decrease the correlation between my models in a previous post. Here is the code with some comments. Hope it helps someone.</p> <p><a href="https://www.kaggle.com/mschumacher/44th-place-add-all-the-randomness">https://www.kaggle.com/mschumacher/44th-place-add-all-the-randomness</a></p>
Quora Insincere Questions Classification
4th place solution (with github)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In my case, it didn't improve my score, but I think it might work depending on the task or implementation of the attention module.</p> <p>Hi guys, <br> It is a little bit late, but I published my solution as below: <br> <a href="https://github.com/k-fujikawa/Kaggle-Quora-Insincere-Questions-Classification">https://github.com/k-fujikawa/Kaggle-Quora-Insincere-Questions-Classification</a> <br> <a href="https://www.kaggle.com/kfujikawa/4th-place">https://www.kaggle.com/kfujikawa/4th-place</a> <br> Here I will try to summarize some of the main points of my solution.</p> <h1>Summary</h1> <p>The key factors of my solution are:</p> <ul> <li>Word2Vec fine-tuning</li> <li>400dim random sampling from 600dim word embedding per CV</li> <li>Simple 2layer BiLSTM model with maxpooling</li> <li>5-fold CV and averaging model outputs</li> </ul> <p><img src="https://raw.githubusercontent.com/k-fujikawa/Kaggle-Quora-Insincere-Questions-Classification/master/overview.png" alt="overview"></p> <h1>Details</h1> <h2>Preprocessing</h2> <p>I refered to the public kernel (<a href="https://www.kaggle.com/hengzheng/pytorch-starter">https://www.kaggle.com/hengzheng/pytorch-starter</a> ) for the most part, and I made slight modifications as below:</p> <ul> <li>Exclude filter of punctuations that <a href="https://github.com/keras-team/keras-preprocessing/blob/master/keras_preprocessing/text.py#L169">Keras Tokenizer has by default</a></li> <li>Apply misspell corrections before punctuation spacing</li> <li>Insert spaces around characters except alphabets and numbers</li> </ul> <h2>Embedding</h2> <p>In order to improve the word embeddings which are frequent in Quora dataset but not included in pretrained vectors (Glove and Paragram), I fine-tuned the word embeddings on the competition dataset (train+test) with Word2Vec (CBOW). I show the results of preliminary experiments to confirm whether these word embeddings are improved or not. <br> <a href="https://www.kaggle.com/kfujikawa/word2vec-fine-tuning">https://www.kaggle.com/kfujikawa/word2vec-fine-tuning</a></p> <p>I attempted to use word vectors obtained by concatenating before and after fine-tuning, but it was difficult due to the problem of calculation cost. Therefore, I decided to obtain word embeddings from 600 to 400 dimensions randomly for each CV. This approach was effective not only to reduce computational cost but also to increase model diversity among CVs, so contributed to improve the score of the Public LB, although the score of the local CV has decreased.</p> <h2>Model architecture</h2> <p>I adopted simple 2layer BiLSTM model with maxpooling. 
Model details are shown as below:</p> <pre><code>BinaryClassifier( (embedding): Embedding( (module): Embedding(212418, 402) (dropout1d): Dropout(p=0.2) ) (encoder): Encoder( (module): LSTMEncoder( (rnns): ModuleList( (0): LSTM(402, 128, batch_first=True, bidirectional=True) (1): LSTM(256, 128, batch_first=True, bidirectional=True) ) ) ) (aggregator): Aggregator( (module): MaxPoolingAggregator() ) (mlp): MLP( (layers): Sequential( (0): Linear(in_features=262, out_features=128, bias=True) (1): ReLU(inplace) (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): Linear(in_features=128, out_features=128, bias=True) (4): ReLU(inplace) (5): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (out): Linear(in_features=128, out_features=1, bias=True) (lossfunc): BCEWithLogitsLoss() ) </code></pre> <h2>Statistical features for words</h2> <ul> <li>Whether or not the word is included in pretrained embedding</li> <li>IDF score</li> </ul> <h2>Statistical features for sentences</h2> <ul> <li>the number of characters</li> <li>the number of upper characters</li> <li>the rate of upper characters</li> <li>the number of words</li> <li>the number of unique words</li> <li>the rate of unique words</li> </ul>
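<p>A minimal sketch of the 400-of-600 dimension sampling described above, assuming the 600-dimensional matrix is the concatenation of the vectors before and after Word2Vec fine-tuning; using a different seed per CV fold provides the diversity the author mentions. Names are mine.</p>
<pre><code>import numpy as np

def sample_embedding_dims(concat_matrix, n_keep=400, seed=0):
    """concat_matrix: (vocab_size, 600) array; keep a random subset of 400 columns."""
    rng = np.random.RandomState(seed)
    dims = rng.choice(concat_matrix.shape[1], size=n_keep, replace=False)
    return concat_matrix[:, np.sort(dims)]

# fold_matrix = sample_embedding_dims(full_600d_matrix, seed=fold_id)
</code></pre>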
Quora Insincere Questions Classification
63th single model 5-fold
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I learned a lot from others.</p> <p>1. Common preprocessing from the discussions</p> <p>2. Optimizer: Adam + CLR fine-tuning, step_size=500 (batch_size=1024)</p> <p>3. Model: concatenation of CNN features, RNN k-max pooling features and the attention output</p> <p>4. 5-fold CV with 3 epochs per fold</p> <p><img src="https://www.kaggle.com/c/quora-insincere-questions-classification/discussion/80970/pic3.png" alt="model structure"></p>
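<p>A hedged PyTorch sketch of the Adam + CLR setup in point 2; only step_size=500 comes from the write-up, the base and max learning rates are placeholders of mine.</p>
<pre><code>import torch

model = torch.nn.Linear(300, 1)   # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=1e-4,
    max_lr=3e-3,
    step_size_up=500,        # 500 batches per half-cycle, as in the write-up
    cycle_momentum=False,    # required with Adam, which has no `momentum` parameter
)

# inside the training loop, step the scheduler once per batch:
# for batch in loader:
#     ...
#     optimizer.step()
#     scheduler.step()
</code></pre>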
Quora Insincere Questions Classification
70th position. Pytorchtext model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My simple model: a mix of 3 embeddings, built with torchtext. <a href="https://www.kaggle.com/leighplt/glove-wiki-gnews-full-set?scriptVersionId=10248742">https://www.kaggle.com/leighplt/glove-wiki-gnews-full-set?scriptVersionId=10248742</a></p>
Quora Insincere Questions Classification
7th place solution - bucketing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>Hi,</p> <p>Here I explain my solution.</p> <p>I think bucketing and checkpoint-ensembling are the key factors of my solution, since my preprocessing and my model are quite basic.</p> <h1>Preprocess</h1> <p>The core part here is using NLTK TweetTokenizer.</p> <ol> <li>Split each question_text by " " (space).</li> <li>Replace words that have "*" with "FWORD", since NLTK TweetTokenizer will split by "*", but I want to use words like "f**k" as a single token rather than ["f", "*", "*", "k"].</li> <li>Join by " ", then apply NLTK TweetTokenizer.</li> <li>Split each word by ' (single quote) and -. e.g., ["it's", "nice"] -&gt; ["it", "'s", "nice"]</li> <li>Load pretrained embeddings. For "FWORD", use the average of the embeddings of ["fuck", "shit", "*"]. For OOV, use the average of embeddings.</li> </ol> <p>I think that the preprocessing other than applying TweetTokenizer doesn't make a big difference, since applying such "*"-replacement or not doesn't change the local CV score. The only reason why I submitted this version is just that I couldn't ignore the time I spent on preprocessing. XP</p> <h1>Model</h1> <ol> <li>Embedding layer. Simple average of Glove and Paragram embeddings (thus dim=300). Keep fixed.</li> <li>Dropout (keep_prob=0.6)</li> <li>Bi-LSTM (each cell_size=128)</li> <li>Bi-LSTM (each cell_size=128)</li> <li>Concatenation of the average-pooling of the first Bi-LSTM, the max-pooling of the second Bi-LSTM and attention of the second Bi-LSTM. (thus dim=3*256)</li> <li>Dense with tanh (dim=32)</li> <li>Output with sigmoid</li> </ol> <h1>Training</h1> <h2>Use bucketing.</h2> <p>Bucketing means making a minibatch from instances that have similar lengths to alleviate the cost of padding. This makes the training speed more than 3x faster and thus I can run 9 epochs for each split of the 5-fold.</p> <p>I must have seen the TensorFlow tutorial page that describes bucketing (it should be the tutorial of the "sequence-to-sequence model"), however, somehow I can't find that page now.</p> <p>For other training details,</p> <ul> <li>Objective function: vanilla sigmoid_cross_entropy</li> <li>Optimizer: Adam with default parameters</li> <li>Batch size: 512</li> <li>Maximum sequence length of the input: 400</li> </ul> <h1>Postprocess</h1> <p>For each 5-fold model, apply checkpoint-ensembling to maximize each validation score. Without checkpoint-ensembling, the average validation score is about 0.694. After checkpoint-ensembling, it is about 0.700.</p> <p>After checkpoint-ensembling, ensemble the 5 models by averaging output probabilities and thresholds, then submit.</p> <p>Finally, thanks to everyone who worked on this competition! I really enjoyed this competition with such big data!</p> <p>Thank you!</p>
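<p>A minimal sketch of the bucketing idea above: sort examples by length, cut the sorted order into minibatches, and shuffle the order of the minibatches rather than their contents. The <code>pad_to_longest</code> helper in the usage comment is hypothetical.</p>
<pre><code>import numpy as np

def bucketed_batches(lengths, batch_size=512, shuffle=True):
    """Group examples of similar length into the same minibatch to minimize padding."""
    order = np.argsort(lengths)                  # indices sorted by sequence length
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    if shuffle:
        np.random.shuffle(batches)               # shuffle batch order, not batch contents
    return batches

# for batch_idx in bucketed_batches([len(s) for s in sequences]):
#     batch = pad_to_longest([sequences[i] for i in batch_idx])  # hypothetical helper
</code></pre>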
Quora Insincere Questions Classification
From 400-ish public to 26 private
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Insincere Questions Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello, and congratz to everybody who made it 'till the end! Special thanks to people who shared stuff during the challenge, I learnt a lot.</p> <p>I'll give you a brief overview of my model that made it top the 26th:</p> <ul> <li><p>Preprocessing : Some special characters cleaning, number processing, contractions &amp; mispells replacement and latex tags cleaning. No lowering though.</p></li> <li><p>Embeddings : Concatenation of glove, fasttext and paragram.</p></li> <li><p>Some features : Toxic words ratio, Total length, word vs unique words, ratio of capital letters.</p></li> </ul> <p><strong>Model:</strong> </p> <ul> <li>I used PyTorch</li> <li>Single model , 5 folds, 4 epochs : <ul><li>Embedding layer + some noise</li> <li>LSTM, 64 Units (unidirectional)</li> <li>GRU, 32 Units (unidirectional)</li> <li>Attention, maxpool &amp; average pool on the outputs of both rnns</li> <li>Concatenating them with features</li> <li>32 units dense + reLu + Batchnorm + Dropout</li> <li>And the final layer</li></ul></li> </ul> <p>CV : 0.688, Public LB : 0.700 </p> <p>This model was not my best one on the LB, but it had a good CV and an average LB which made me trust it more than the others.</p> <p>Thanks for reading, feel free to ask me any question! I'll probably make my code public, but it needs some cleaning first. </p> <p>Thanks ! I expected/hoped to jump over people that forked public kernels and did not put much effort in the competition, but surely did not expect to end up 26th !</p> <p>Performances are roughly the same, but Attention works way better on an unidirectional layer.</p>
Quora Question Pairs
0.29900 Final Solution! :)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello All,</p> <p>It's been a while since the competition ended, but I have finally got around to summarizing the reasoning &amp; codes behind each feature used in our final model. Since this was the first time our team has ever addressed an NLP problem &amp; entering a Kaggle competition, we were super happy with just making it into the top 30%!!! :)</p> <p>I just thought I'd share the location of the article I wrote summarizing the features we used and why, and hopefully this will also assist new Kagglers getting ready to challenge an NLP problem (a lot of features were heavily inspired by our fellow Kagglers posts on the discussion board!):</p> <p><a href="https://www.linkedin.com/pulse/kaggle-quora-question-pairs-mar-2017-may-priscilla-li?lipi=urn%3Ali%3Apage%3Ad_flagship3_profile_view_base%3BokgKfjOZRXCS%2BP2r1eCsow%3D%3D">https://www.linkedin.com/pulse/kaggle-quora-question-pairs-mar-2017-may-priscilla-li?lipi=urn%3Ali%3Apage%3Ad_flagship3_profile_view_base%3BokgKfjOZRXCS%2BP2r1eCsow%3D%3D</a></p> <p>Big Thanks to the Kaggle Community!</p> <p>Priscilla</p>
Quora Question Pairs
14th Place Solution - Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://github.com/Wrosinski/Kaggle-Quora">https://github.com/Wrosinski/Kaggle-Quora</a> Here's the code for my part of our solution, which placed 14th. </p> <p>It's supposed to be an overview of used methods, like I wrote in the repo. Again, big thanks to all the authors of Kernels and various posts, which we've found very useful. I tried to mention most, but I'm sure to have forgotten about some. Sorry for that in advance!</p> <p>Congratulations to all the competitors, even if you haven't won anything, then you sure have learned a great lot like we did :).</p> <p>Thank you! I agree, until the very end :).</p> <p>Congratulations to you too!</p>
Quora Question Pairs
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot! <br> I'll be glad to see you there! :D</p> <p>Hi everyone ! </p> <p>First of all, thanks to Kaggle and Quora for this tough and exciting competition, it has been a pleasure for us to work on it, we learnt a lot of things, thank you ! </p> <p>We also thank our wives/GFs for their patience while we were coding on sunny week-ends :) </p> <p>We also want to deeply congratulate all competitors, especially Depp Learning team, who really scared us until the last moment ! </p> <p>Among us, Maximilien is a PhD student in a Chair of research (Data Analytics &amp; Models in Insurance) between BNP Paribas Cardif and Lyon University, and the rest of us are colleagues at the Datalab of Cardif. Being all based in Paris surely helped for efficient team work. </p> <hr> <h1>1/ Features</h1> <p>We distinguish three kind of features : embedding features, classical text mining features and structural features. <br> <strong>Embedding features</strong> </p> <ul> <li>Word embeddings (Word2Vec)</li> <li>Sentence embeddings (Doc2Vec, Sent2Vec)</li> <li>Encoded question pair using dense layer from ESIM model trained on SNLI</li> </ul> <p><strong>Remark:</strong> Sentence embeddings were challenged but were not that much informative compared to Word2Vec </p> <p><strong>Classical text mining features</strong> </p> <ul> <li>Similarity measures on LDA and LSI embeddings.</li> <li>Similarity measures on bag of character n-grams ( TFIDF reweighted or not) from 1 to 8 grams.</li> <li>Abhishek's and owl’s kindly shared features.</li> <li>Edit and sequence matching distances, percentage of common tokens up to 1, 2, ..., 6 when question ends the same, or starts the same</li> <li>Length of questions, diff of length</li> <li>Number of capital letters, question marks etc...</li> <li>Indicators for Question 1/2 starting with "Are", "Can", "How" etc... and all mathematical engineering corresponding</li> </ul> <p>We also used stanford corenlp to tokenizer, postagger and ner to preprocessing text input for some deep learning models. </p> <p><strong>Structural features (i.e. from graph)</strong> </p> <ul> <li>We built density features from the graph built from the edges between pairs of questions inside train and test datasets concatenated. We had counts of neighbors of question 1, question 2, the min, the max, intersections, unions, shortest path length when main edge cut....</li> <li>We went further and built density features to count the neighbors of the questions neighbors… and questions neighbors neighbors .. (inception). We also counted neighbors of higher order which also were neighbors of lower order (loops). </li> <li>We tried different graph structures : we built undirected and directed graphs (edges directed from question 1 to question 2), we also tried to separate the density features of question 1 from the features of question 2 to generate non commutative features in addition to commutative ones. 
</li> <li>We built features describing the connex subgraph the pair belonged to : Number of edges, number of nodes, % of edge in train</li> <li>We also computed the same features on sub graphs built only from the edges of questions which both appear more than once. What we wanted was to remove fake questions which we thought were damaging the graph features by changing its structure.</li> <li>Finally as other teams, we weighted our graphs with some of our initial models. We tried logit and rescaled prediction but raw prediction worked best. We also weighted the graphs with one of our similarity features.</li> </ul> <hr> <h1>2/ Models</h1> <p>We worked on two main architectures for our NNets : <strong>Siamese</strong> and <strong>Attention</strong> Neural Networks. </p> <ul> <li>Siamese LSTM with pretrained Glove embedding</li> <li>Decomposable attention (<a href="https://arxiv.org/abs/1606.01933">https://arxiv.org/abs/1606.01933</a>) with pretrained FastText embedding. This model achieve ~0.3 on cv</li> <li>ESIM (<a href="https://arxiv.org/abs/1609.06038">https://arxiv.org/abs/1609.06038</a>) with pretrained FastText embedding. This is our best pure Deep Learning NLP model, it achieves ~0.27 on CV. However this model take too long to run, we only add it once in the first stacking layer</li> <li>We noticed that DL complex architecture contributed in the first stacking layer but did not do better than simple MLP on second layer</li> </ul> <p>One of the key issue was to select and incorporate some of our traditional features into these networks. <br> We used FastText and Glove pre-trained embeddings with trainable=False, since our attempts to fine-tune them didn't lead to any improvement. <br> Eventually, neural networks trained on both text sequences and our graph / text mining features proved to be our best single models. <br> In the end, we also tried to train siamese models on a character level to provide further diversity to our stacking, but it is hard to tell whether it was really helpful. <br> We then tried more classical algorithms to exploit graphical features, such as XGB / LGBM which worked pretty well as usual. </p> <hr> <h1>3/ Rescaling</h1> <p>To balance with the difference of target distribution between train and test, we also looked a bit closer on the analysis of sweezyjeezy (thanks again for your contribution which helped almost all the participants) posted here : </p> <p><a href="https://www.kaggle.com/c/quora-question-pairs/discussion/31179">https://www.kaggle.com/c/quora-question-pairs/discussion/31179</a></p> <p>We figured we could reduce the log loss by optimizing the rescale. We did not found a better hypothesis to modelize the distribution of the data in the test dataset, but we made it more accurate by using it on local subsamples of the data. </p> <p>We found that the train/test biais is very different on 3 perimeters: </p> <ul> <li>Perimeter 1: qid1_count = qid2_count = 1</li> <li>Perimeter 2: min_qid_count = 1 &amp; max_qid_count &gt; 1</li> <li>Perimeter 3: min_qid_count &gt; 1</li> </ul> <p>We tried the public rescale and the same rescale but by perimeter. It works well for the first layer models but as we go deeper in our stacking, we found that the public rescale is not strong enough while the rescale by perimeter is too strong. We optimized our rescale so that it falls in the middle between these 2 methods and it helped to gain ~0.001 comparing to public rescale. 
</p> <hr> <h1>4/ Stacking</h1> <p>We made a 4 layers stacking : </p> <ul> <li>Layer 1 : Around 300 models, Paul and Lam’s neural nets, and classical algorithms like XGB, LGBM, which worked pretty well, and a lot of Scikit-learn classification algorithms (ET, RF, KNN, etc.)</li> <li>Layer 2 : Around 150 models using: <ul><li>All the inputs features</li> <li>Predictions of aAll the algorithms above</li> <li>We added hidden layers of the best L1 pure text ESIM model</li></ul></li> <li>Layer 3 : 2 Linear models <ul><li>Ridge by perimeter (3 perimeters were created, based on min/max degrees) on 3 least Spearman correlated L2 predictions</li> <li>Lasso with logit preprocessing of all L1 and L2 predictions</li></ul></li> <li>Layer 4 : Blend <ul><li>55/45, based on public LB score (final and best submission)</li></ul></li> </ul> <p>We used the Python library networkx (if you are a R user I believe you can do the same stuff with igraph). <br> The input of graph construction was just the couple of questions in a row (so each row represents an edge, and each question a node). I think you might grab more information on other topics dedicated on this. This will probably be explained more exhaustively ;-) </p> <p>Hi Jason, </p> <p>Our CV/LB score were quite consistent since both scores decreased hand to hand. So when we said 'based on public LB' it also means 'based on our CV score's faith'. </p> <p>In our 2 L3 models, Lasso performed slightly better than our Ridge, that's why we chose 55% Lasso and 45% Ridge. <br> And this blend was our final submission (11.27 public LB, 11.5 private LB), which performed better than our two L3 models (11.35 and 11.45 public LB respectively). <br> I don't know if this answer your first question, feel free to tell me if you need more details. </p> <p>Perimeters were defined on our section on the rescaling. This corresponds to a partition of train and test, based on min and max degrees (from the graph). And on the train set, we saw that the population didn't behave similarly each other. <br> That's the reason why we made 3 Ridges, one for each perimeter. What conforted us in our reflexion was the coefficients of the ridge, which were really different for each perimeter, so we judged that it was good to predict each perimeter independantly.</p> <p>The idea was to select some groups of features, especially without our golden features. That would force each model to grab the maximum signal with less important features. <br> For example, if you let all your magic features, XGB will see it really quickly and will not explore orthogonal signal. </p> <p>So selecting groups of features (~10% of our total pool of features) reduce the total computation time quite a lot on the one hand, grabbing additional signal (less important, but additional anyway), si that's quite vertuous. </p> <p>On the other hand, except for KNN who took us aroud 24 hours to predict, all the algorithms we used were quite fast. We didn't use SVM for example. </p> <p>Same idea for Layer 2.</p> <p>@BrunoGuilherme @RVK You right I cannot find anymore what I saw on the discussion, si I'll try to be clear here^^ <br> A graph is an ensemble of vertices and edges, you can find whatever you want here : <a href="https://en.wikipedia.org/wiki/Graph_theory">https://en.wikipedia.org/wiki/Graph_theory</a> <br> To construct a graph, you need at least 2 columns, i.e. and ensemble [(V1_1, V2_1), ..., (V1_n, V2_n)]. Here (V1_i, V2_i) represents the i-th edge of the graph connecting V1_i and V2_i. 
</p> <p>Note that for i != j, you may have V1_i == V1_j, and so on with V2 etc. And that is what interests us here, because you can draw a lot of properties when two vertices are connected together, and when a vertex is connected to several others. </p> <p>In this competition, each vertex represents a question, and each edge represents a row in train + test concatenated. </p> <p>Then you simply explore all what you can do with networkx API, such as collect each vertex's degree, neighbors' degrees and so on.</p>
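<p>As a concrete starting point, this kind of graph feature extraction can be done with networkx as in the sketch below; the column names and the exact feature set are illustrative, not the winning team's code:</p> <pre><code>import networkx as nx
import pandas as pd

def graph_features(df):
    """df holds the concatenated train + test pairs with columns 'question1' and 'question2'."""
    g = nx.Graph()
    g.add_edges_from(zip(df["question1"], df["question2"]))    # one edge per row, one node per question

    feats = pd.DataFrame(index=df.index)
    degree = dict(g.degree())
    feats["q1_degree"] = df["question1"].map(degree)            # how often each question appears
    feats["q2_degree"] = df["question2"].map(degree)

    def neighbor_counts(row):
        n1 = set(g.neighbors(row["question1"]))
        n2 = set(g.neighbors(row["question2"]))
        return len(n1.intersection(n2)), len(n1.union(n2))

    counts = df.apply(neighbor_counts, axis=1, result_type="expand")
    feats["n_common_neighbors"] = counts[0]
    feats["n_union_neighbors"] = counts[1]

    component_size = {}
    for component in nx.connected_components(g):                # connected subgraph each pair belongs to
        for node in component:
            component_size[node] = len(component)
    feats["component_size"] = df["question1"].map(component_size)
    return feats
</code></pre>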
Quora Question Pairs
24th Place Solution Repo
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I think 24th solution is not worth to share but I am sharing it because I and some people think that it is relatively lightweight. Here is the repo: <a href="https://github.com/aerdem4/kaggle-quora-dup">https://github.com/aerdem4/kaggle-quora-dup</a>. I am also open to any kind of feedback. This was my first serious Kaggle competition and it made me addicted but maybe I need to join a team for a gold medal next time:) I also want to thank @qianqian and @lystdo who have shared nice kernels and @Krzysztof for his feature ideas.</p> <p>What made my model successful? BETTER GENERALIZATION</p> <ul> <li><p>All the features are question order independent. When you swap the first and the second question, the feature matrix does not change. For example, instead of using question1_frequency and question2_frequency, I have used min_frequency and max_frequency.</p></li> <li><p>Feature values are bounded when necessary. For example, number of neighbors are set to 5 for everything above 5, because I did not want to overfit on a particular pair with specific number of neighbor 76 etc.</p></li> <li><p>Features generated by LSTM is also question order independent. They share the same LSTM layer. After the LSTM layer, output of question1 and question2 merged with commutative operations which are square of difference and summation.</p></li> <li><p>I think a good preprocessing on the questions also leads to better generalization.</p></li> <li>Replacing the rare words with a placeholder before LSTM is another thing that I did for better generalization.</li> <li>The neural network is not so big and has reasonable amount of dropouts and gaussian noises.</li> <li>Different NN ppredictions are ensembled at the end.</li> </ul> <p>Thank you very much for forking it and testing it. I am really sorry that I did not test the postprocess part after I have refactored it together with my ensembling script. So I have missed the numpy array conversion. Now I have changed it on Github. Many thanks to you.</p> <p>As you say I didn't use any XGB/LGB. My LB score is just an ensembling of this flow with different parameters and sometimes some difference on the functions. Some examples:</p> <ul> <li>Change the value of MIN_WORD_OCCURRENCE. </li> <li>Switch the order of lemmatize functions in "cutter" function: WNL.lemmatize(WNL.lemmatize(word, "n"), "v")</li> <li>Change some parameters in the feature extraction like NB_CORES.</li> <li>Change some parameters in the postprocessing like DUP_THRESHOLD.</li> </ul> <p>I have published the model with the best parameters that I thought but ensembling it with the slightly different versions helps. If you variate the model with such changes and run the whole pipeline 3-4 times and then average it, you should get +/- my score.</p> <p>Thanks for the question. Please check <a href="https://www.kaggle.com/c/quora-question-pairs/discussion/31179">https://www.kaggle.com/c/quora-question-pairs/discussion/31179</a>. You can also weight the samples during the training instead of using this lambda function at the end. 
I didn't see too much difference between weighting the samples and calibrating the predictions at the end. Calibrating them at the end has only one advantage, which is that you can easily adjust them without running the whole model again. So as far as I remember, 0.16 gave better results than 0.174 experimentally.</p> <p>Using the class_weight param of the fit function. If class 0 occurs 10 times as often as class 1, you can give a dict: {0:1, 1:10} to balance it. <a href="https://keras.io/models/sequential/">https://keras.io/models/sequential/</a></p> <p>Thanks for using my model. Can you give more details? Did you train on the Quora dataset and test on your dataset?</p> <p>It may not work well on a small dataset since it utilizes some features like frequency of occurrence etc. These features were used by everyone because the Quora dataset was biased. If you want to make it a pure NLP model, you may remove these features and also the postprocessing step while training and testing.</p> <p>Anyway, Quora questions and GitHub issues may have different behavior. So I suggest you create a training set with GitHub issues if possible.</p> <p>When it comes to &gt;0.5 separation, my model does not guarantee that since it optimizes log_loss on an unbalanced dataset. If you want to have this property, balance the classes by weighting during the training phase.</p>
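<p>To illustrate the order-independence ideas from this write-up (min/max instead of per-question values, counts bounded at 5, and a commutative merge of the shared-LSTM outputs by summation and squared difference), here is a small sketch; the function names are hypothetical and the details are illustrative:</p> <pre><code>import numpy as np
import torch

def symmetric_frequency_features(freq1, freq2, cap=5):
    """Order-independent frequency features: bound the raw counts (everything above 5
    is treated as 5, as in the write-up) and keep only min and max."""
    f1 = np.minimum(freq1, cap)
    f2 = np.minimum(freq2, cap)
    return np.stack([np.minimum(f1, f2), np.maximum(f1, f2)], axis=1)

def commutative_merge(h_q1, h_q2):
    """Merge the two question encodings from the shared LSTM with commutative
    operations (sum and squared difference), so swapping q1 and q2 changes nothing."""
    return torch.cat([h_q1 + h_q2, (h_q1 - h_q2) ** 2], dim=1)
</code></pre>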
Quora Question Pairs
5th Place Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p><strong>qianqian wrote</strong></p> <blockquote> <p>Congratulations! I have a question about how much it gains by adding oof prediction reweighted edge features cause we don't have this kind of feature in our dataset. Anyway maybe we can put a question like "What does it feel like to be addicted to Kaggle" on quora:)</p> </blockquote> </blockquote> <p>I just reran a fairly late model, here the results: </p> <p><strong>with oof-preds graph features</strong> <br> public LB: <em>0.11525</em> <br> private LB: <em>0.11907</em></p> <p><strong>without oof-preds graph features</strong> <br> public LB: <em>0.11747</em> <br> private LB: <em>0.12127</em></p> <p><strong>log-loss diff</strong> <br> public LB: <em>-0.00221</em> <br> private LB: <em>-0.00220</em></p> <blockquote> <p><strong>DataDugong wrote</strong></p> <blockquote> <p>How many n rows did you validate on? After validating on the n-rows, and picking your best model params etc, did you then go and retrain on the whole dataset for a final submission?</p> </blockquote> </blockquote> <p>We validated on 234290 samples and yep we trained on the whole set afterwards.</p> <p>Hey there,</p> <p>congratz to the winners and especially to the <strong>"DL guys"</strong> for an impressive victory. Well done! Thanks goes to my teammates Kaza and Qingchen for another nice "kaggle experience" and a special one goes to the guys from <a href="https://www.h2o.ai/">H₂O</a> (they know why ^^).</p> <p>That one was a tough and fast paced competition with quite some twists and lots of things to discover, which made it really interesting but also a bit exhausting. ^^</p> <p>Our final model is an XGB with ~600 features, of these ~25 are oof-model predictions (LightGBMs, NNs/LSTMs and some SGDs) or likelihoods and the others are "normal" features. We used an oversampled training set with prior ~0.13 to train our submission models and we validated on the last <em>n</em> rows. Out-of-fold predictions have been created via 5 stratified folds on non-oversampled data.</p> <p>For the NLP part, it turnt out the be very useful to create features based on differently pre-processed questions texts (raw, interrogative forms, stemmed, cleaned, stopwords only, stopwords removed,..) and token bags (shared &amp; non-shared tokens). For instance, count vectorization applied to those bags yielded very predictive features. Another source of NLP-like feature extraction were the longest common subsequences (lcs) of both questions in a row computed via dynamic programming. This was also useful to enhance the token matching for questions pairs with grammar or punctation errors in only one of them or just to detect those and creating stats upon. Especially, the mutal information of lcs and orginal question texts was useful.</p> <p>Beside some silly-looking features like min(qid1, qid2), qid-deltas and indices-deltas from a sorted list of all questions texts, it was predictve to categorize each row based on the train-test-appearances of both questions like "both questions appear in train &amp; test". 
In general, train-test-split information carried alot of target information but was very prone to overfitting. </p> <p>As generally known, the book of spells has been the underlying graph structure of the question comparisons. Beside the stats like |common neighbors|, |unique neighbors|, |paths of length <em>n</em> between q1 &amp; q2|, max. clique size, component size, etc. pp. we put each of our features as weights <em>w</em> to the edges and computed stats based on that (for instance <em>mean(w)</em> of common neighbors). <br> We also used out-of-fold predictions as edge-weights to get something ouf of the transitive relation y(q1, q3) = a = y(q2, q3) =&gt; y(q1, q2) = a (which is inconsistent given the ground truth). Those features provided a significant gain. Last but not least, we treated strongly connected components in the graph as markov chains (state transitions could be an oof-prediction or an feature) and re-weighted given features with the estimated steady state distributions of those MCs. I don't know yet, if that added anything, but I found it interesting enough to try. </p> <p><a href="https://www.quora.com/What-does-it-feel-like-to-be-addicted-to-Quora"><strong>???</strong></a></p> <p>cheers, <br> Faron</p>
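<p>As a sketch of the edge-weighting trick above (putting a feature or an out-of-fold prediction as weight w on each edge and computing stats such as mean(w) over common neighbours), something like the following works with networkx; the column names are hypothetical and this is not the team's code:</p> <pre><code>import networkx as nx
import numpy as np

def weighted_common_neighbor_stats(df, edge_weight):
    """edge_weight: one value per row, e.g. an OOF prediction or any other feature."""
    g = nx.Graph()
    for q1, q2, w in zip(df["question1"], df["question2"], edge_weight):
        g.add_edge(q1, q2, w=w)      # if a pair occurs twice, the last weight wins

    means, maxima = [], []
    for q1, q2 in zip(df["question1"], df["question2"]):
        common = set(g.neighbors(q1)).intersection(g.neighbors(q2))
        ws = [g[q1][c]["w"] for c in common] + [g[q2][c]["w"] for c in common]
        means.append(np.mean(ws) if ws else -1.0)
        maxima.append(np.max(ws) if ws else -1.0)
    return np.array(means), np.array(maxima)
</code></pre>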
Quora Question Pairs
7-th solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi there! The competition has ended, and I can finally get on with my life. However, after studying the solutions that the top teams have kindly shared, I found out that my approach was considerably different in a number of ways. So, for the sake of learning, I decided to give a summary of the key parts.</p> <p><strong>DNN features</strong> First of all, I really don't like feature engineering, particularly in NLP tasks. So from the very beginning I decided to devote most of my time (and computational resources) to building and training a diverse array of deep neural networks to do the boring stuff for me. The resulting features were very predictive, some exceeding even the 'magic' features by a large margin in terms of information gain. Here is a ranked list of networks I used to prepare the final submission:</p> <ol> <li>Bidirectional LSTM enhanced with Recurrent Highway Layers (see <a href="https://arxiv.org/abs/1607.03474">this paper</a>) and a custom attention mechanism. This network has a standard 'siamese' architecture, finely described in a <a href="https://engineering.quora.com/Semantic-Question-Matching-with-Deep-Learning">blog post</a> from Quora itself. The whole difference came from the attention mechanism, which boosted the performance of said network significantly. The addition of RHN layers allowed to put more dropout on the recurrent connections of the LSTM, which turned out to be a very effective regularisation strategy. I used glove 840B vectors for the word embedding layer, which was frozen during training. After quite a lot of hyperparameter optimization, I found a configuration which yielded 89.1% accuracy on validation (with cv). More details on that approach will probably be released soon.</li> <li>Basically the same as 1, but character-level. Somewhat surprisingly, this one was only 1% acc behind the word-level one, and it provided very good, de-correlated features. </li> <li>Same as 1, character-trigram level. Really nothing else to say there</li> <li>1-D convolutional net with <a href="https://arxiv.org/abs/1606.01933">decomposable attention</a>. This one was blazing fast as compared to the first one (20 minutes vs 2 hours to convergence). I had three of those networks, with different kernel size sets (I used <a href="https://arxiv.org/abs/1606.01933">3</a>, [2,3], [2,3,5]).</li> <li>Networks 1-4, but without the attention mechanism</li> </ol> <p>I used 'contrastive loss' as an objective function for all networks, since it turned out to be somewhat better than binary crossentropy in the long run. My guess is that crossentropy does not behave very well when the labels are noisy, which was certainly the case. Minimal preprocessing was used, no stop words were removed because it harmed performance. The predictions of the networks (after sigmoid) were used as features for the XGB. To get out-of-fold predictions on the whole train set, 10-fold cv was used.</p> <p><strong>Unsupervised NLP features</strong> Most of these came from publicly available kernels by Abhishek, Mephistopheles and the1owl. 
I also did compute some by myself, including</p> <ul> <li><p>WordNet similarity (based on a script i found on <a href="https://github.com/sujitpal/nltk-examples/blob/master/src/semantic/short_sentence_similarity.py">github</a>)</p></li> <li><p>Word mover distances, using glove840B word embeddings</p></li> <li><p>NER-based features. I got the tags with Stanford CoreNLP</p></li> <li><p>POS-based features. Same thing</p></li> </ul> <p>Probably there was something else, I don't remember now.</p> <p><strong>Magic features</strong> My use of magic features was pretty consistent with what I have read from the top teams. In addition, I used another 'pseudo label' feature which was constructed as follows:</p> <ol> <li><p>Using the best available model, predict the test set.</p></li> <li><p>Build a sparse square matrix of size len(train+test) x len(train+test)</p></li> <li><p>For all pairs with duplicate probability higher than threshold (say 0.3), put a '1' in two corresponding cells of said matrix</p></li> <li><p>For each pair from train and test, compute cosine similarity between corresponding rows.</p></li> </ol> <p>Interestingly, this feature turned out to be very helpful, and did not lead to overfitting.</p> <p><strong>Post-processing</strong> Again, mostly similar to the published solutions. In addition, I did prediction clipping, which means that I set very confident predictions to 1e-5 and 1-1e-5, respectively.</p> <hr> <p>Overall, this was a great competition, which taught me a lot about the ways of NLP, DL, ensembling and a bunch of other stuff. For me, Kaggle worked brilliantly by providing motivation to dive deeper into all these areas, which I wouldn't have the energy to explore otherwise. </p> <p>Finally, many thanks to participants that shared their thoughts and approaches, some of which were very helpful. See you next time!</p> <p>This might happen as part of the upcoming paper, which is expected in ~1 month or so.</p> <p>Thanks, much appreciated.</p>
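<p>The 'pseudo label' feature built in steps 1-4 above can be sketched with a sparse adjacency matrix and row-wise cosine similarity. The 0.3 threshold comes from the post; the integer id encoding and everything else are assumptions:</p> <pre><code>import numpy as np
from scipy import sparse
from sklearn.metrics.pairwise import cosine_similarity

def pseudo_label_similarity(pairs, preds, n_questions, threshold=0.3):
    """pairs: integer array of shape (n_rows, 2) with question ids for train + test;
    preds: duplicate probabilities from the current best model."""
    confident = np.greater(preds, threshold)          # keep only confident duplicate predictions
    rows = np.concatenate([pairs[confident, 0], pairs[confident, 1]])
    cols = np.concatenate([pairs[confident, 1], pairs[confident, 0]])
    data = np.ones(len(rows), dtype=np.float32)
    adj = sparse.csr_matrix((data, (rows, cols)), shape=(n_questions, n_questions))

    feats = np.zeros(len(pairs), dtype=np.float32)
    for i, (q1, q2) in enumerate(pairs):
        # cosine similarity between the two questions' rows of the sparse matrix
        feats[i] = cosine_similarity(adj[q1], adj[q2])[0, 0]
    return feats
</code></pre>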
Quora Question Pairs
8th solution with part of source code(under construction)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I didn't sent all my base model features to @Fengari but just some core features. We trained our metafeatures seperately. We also trained some linear model with some trick: Raw tfidf features + weighted gbdt base features, which use clique features as weight. This boosted our score a lot and even more gain than rf+et. And also we have several deep models(raw lstm,attention,siamese etc.) Deep models have boosted about 0.002~0.003.</p> <p>Our best single xgb scores 0.12813 on the public leaderboard.</p> <p>Finally we have more than 1000 features in all.</p> <p>Ok lets get a little bit more technical:</p> <h1>Overview of solution</h1> <h2>Preprocessing</h2> <p>1.remove punctuations</p> <p>2.using porter stemmer</p> <p>3.generate unigram bigram phrases of stemed courpus</p> <p>4.generate distinct unigram bigram phrases of stemed courpus</p> <h2>qian's features</h2> <p>1.count/ratio of words/char of questions </p> <p>2.count/ratio of common words</p> <p>3.jaccard/dice distiance</p> <p>4.count/ratio of digits or punctuations in questions</p> <p>5.tfidf of raw corupus with nrange=(1,2)</p> <p>6.tfidf of unigram/bigram</p> <p>7.tfidf of distinct words' unigram/bigram</p> <p>8.tfidf of cooccurence of (distinct) words unigram/bigram</p> <p>9.gensim tfidf similarity</p> <p>10.similarity of self/pre-trained wored2vec weighted average embedding vectors(idf as weight)</p> <p>11.similarity of self/pre-trained glove weighted average embedding vectors(idf as weight)</p> <p>12.tfidf decomposition by NMF,SVD,LDA using sklearn</p> <p>13.similarity of distinct word pairs in q1 and q2 using self/pre-trained wored2vec/glove,aggregated</p> <p>14.num of nodes belong to cliques</p> <p>15.sklearn tfidf similarity</p> <p>16.deepwalk embedding of question as nodes</p> <p>17.using label to encode cooccurence distinct words and aggregation by mean max min std</p> <p>18.fuzz_feature</p> <p>19.NER by spacy</p> <p>20.simhash of unigram/bigram</p> <p>21.decomposition of adjacency matrix</p> <p>22.glove weighted average embedding vectors(idf as weight)</p> <p>23.aggregation of size of cliques of each node</p> <p>24.average neighbour degree</p> <p>25.aggregated distinct words by wordnet</p> <p>26.(distinct words)entropy based question representations</p> <h2>fengari's features</h2> <p>1.decomposition features of ngrams : nmf + svd + lsi +lda</p> <p>2.decomposition featues of diff ngrams :nmf + svd +lsi + lda</p> <p>3.similarities and distances of decomposition features above</p> <p>4.maxclique features of edges</p> <p>5.maxclique features of nodes</p> <p>6.bfs (depth =2) cnts of graph</p> <p>7.duplicated features ( with ranking )</p> <p>8.number diff feature among question pairs</p> <p>9.pagerank (directed/undirected)</p> <p>10.tsne of all leak features</p> <p>11.doc2vec and doc2vec sim features</p> <h2>hhy's features</h2> <p>1.similarity and distance of pre-trained glove weighted average embedding vectors</p> <p>2.decomposition features of w2v</p> <p>3.duplicate feature</p> <p>4.nlp stats features(contains token log prob,brown cluster,pos tag,dependency,entity,subject,verb,object)using spacy</p> <p>5.dependency tree feature using stanford nlp 
utils</p> <p>6.wordnet similarity feature</p> <p>7.stop words basic stats feature and char distribution (with tf)</p> <p>8.word move distance</p> <p>9.ngram extra features(contains BLEU metric,indicator,pos_link,postion change,and pos tag compare)</p> <p>10.decomposition features of ngram extra features : nmf + svd</p> <p>11.neighbor basic feature</p> <p>12.neighbor semantics similarity</p> <p>13.neighbor distance compare :long match + edit + jaccard + dice + word move distance</p> <p>14.neighrbor combined with nlp basic features compare</p> <p>15.deeplearning model: siamse + siamse_match + bimpm</p> <h2>qian's base model and metafeature</h2> <p>base model type 1: lgb,xgb,et,rf,mlp with basic feature + decomposition features</p> <p>base model type 2: lr,linear svc with clique weighted basic feature + tfidf features</p> <p>base model type 3: lstm,attentive lstm siamese</p> <h2>final stacking</h2> <p>We using the same 5 fold stacking</p> <p>stack level 1 : lgb、xgb、mlp with dense feature、mlp with sparse tfidf weighted feature、et、rf and so on.</p> <p>stack level 2 : we use lgb、xgb、mlp、rf and et, and our final submit was a simple avg of those models.</p> <p>PS. We have included our features and stacking models. More deep models will be released.</p> <p><a href="https://github.com/qqgeogor/kaggle-quora-solution-8th">https://github.com/qqgeogor/kaggle-quora-solution-8th</a></p>
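<p>As a generic illustration of the 5-fold stacking described in the final section, the sketch below produces the out-of-fold train predictions and fold-averaged test predictions for one base model; level-2 models are then trained on the stacked OOF columns. It is a simplified template, not the team's pipeline:</p> <pre><code>import numpy as np
from sklearn.base import clone
from sklearn.model_selection import StratifiedKFold

def get_oof_predictions(model, X, y, X_test, n_folds=5, seed=2017):
    """Out-of-fold train predictions plus fold-averaged test predictions for one base model."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    oof_train = np.zeros(len(X))
    test_preds = np.zeros((n_folds, len(X_test)))
    for fold, (tr_idx, va_idx) in enumerate(skf.split(X, y)):
        m = clone(model)
        m.fit(X[tr_idx], y[tr_idx])
        oof_train[va_idx] = m.predict_proba(X[va_idx])[:, 1]
        test_preds[fold] = m.predict_proba(X_test)[:, 1]
    return oof_train, test_preds.mean(axis=0)

# Level 2: concatenate the OOF columns of all base models and train the next-level model on them.
</code></pre>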
Quora Question Pairs
Explanations about why stacking LSTMs often leads to worse LB and one question about Stacking procedures
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for sharing! I joined this competition and wanted to try something like this, but eventually returned to the old path of feature engineering and ensembling : (</p> <p>Only using magic features cannot achieve 0.18+, so the LSTM part is still working.</p> <p>Hi kagglers! Firstly, congratulations to all the winners and all the competitors who have learned much from this competition! And also I want to thank my teammates, it was a really nice experience and I’ve learned a lot from them!</p> <p>Here I want to share some explanations about why many people find that using LSTMs as base models when stacking often leads to worse LB, and post a question about the stacking procedure, which has confused me for a long time.</p> <p><strong>(1) About LSTMs when stacking</strong></p> <p>As many competitors may have noticed, LSTM suffers from some sort of overfitting, i.e., the validation loss is 0.26+ while the LB scores are often 0.30+. Moreover, many have tried using LSTMs as base models in stacking, but often get only worse results.</p> <p>I was confused a lot and struggled trying to solve the problems without success, until the magic features were published. Now things are clear. We already know that the prior probabilities of different questions are quite different, due to data preparation by Quora and Kaggle. Since DNN models have a large capacity, they will capture these different prior probabilities in a strange manner, i.e., they match semantic information to prior probabilities. This pattern generalizes terribly as there are a lot of questions that don't show up in the training set. So <strong>LSTM is overfitting to this strange pattern when magic features are absent</strong>.</p> <p>After the magic features are introduced, the model can learn prior probabilities with magic features, and the LSTM part will learn the semantic similarity. Now the LB score can be quite close to (a little better than) the validation loss. If we use LSTMs as base models, things will be fine. And if possible, it’s better to use bagging of LSTMs to further reduce the variance.</p> <p>At least this teaches me a lesson: It's not always true that dropping features can prevent overfitting, and although people call DNNs "black box", we still need to have a good understanding of the data before deploying DNNs.</p> <p><strong>(2) One question about Stacking: Do we really need to fix the fold partition of base models?</strong></p> <p>Here comes the question that has confused me for a long time. I’ve read some explanations about information leakage, but I still feel that not fixing the fold partition and using a bagging strategy for every one of the base models might be a better choice?</p> <p>Let’s check the following two solutions.</p> <p>a) We fix the folds of different base models, and use a 5-fold cross prediction for the training set.</p> <p>b) We use a 1000000-fold cross prediction for the training set, and use a bagging strategy for each base model, i.e., for each cross, we train multiple models with different subsets of the training data, and average the results.</p> <p>Intuitively, I think solution b) is better. Now it comes to solution c).</p> <p>c) For every one of the base models, we don’t fix the fold partition. We split the data into 5 folds randomly and do a cross prediction. Then we repeat this procedure several times, and use the average predictions as the base model’s output.</p> <p>Clearly solution c) is largely the same as solution b), but it’s not the recommended procedure in textbooks.</p> <p>Has anyone done experiments on this, or can anyone analyze it theoretically? Thanks a lot!</p> <p>I bet that the data leakage in this competition can become one of the most serious ones ever, as many results and conclusions were overturned after the magic features were introduced.</p>
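<p>For concreteness, solution c), re-drawing the fold partition several times and averaging the out-of-fold predictions, could be implemented roughly as in the sketch below; it is only meant to make the question precise, not code from the competition:</p> <pre><code>import numpy as np
from sklearn.base import clone
from sklearn.model_selection import StratifiedKFold

def repeated_oof(model, X, y, n_folds=5, n_repeats=3):
    """Solution c): re-draw the 5-fold partition several times, build OOF predictions
    for each partition, and average them to get the base model's output."""
    oof = np.zeros((n_repeats, len(X)))
    for repeat in range(n_repeats):
        skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=repeat)
        for tr_idx, va_idx in skf.split(X, y):
            m = clone(model)
            m.fit(X[tr_idx], y[tr_idx])
            oof[repeat, va_idx] = m.predict_proba(X[va_idx])[:, 1]
    return oof.mean(axis=0)
</code></pre>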
Quora Question Pairs
My solution (top 6%)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>So I finally got around to releasing the code I ran for this competition. </p> <p><a href="https://github.com/rbauld/kaggle/tree/master/quora_question_pair">https://github.com/rbauld/kaggle/tree/master/quora_question_pair</a></p> <p>Better late than never I suppose! It is not the most orderly/organised thing, but it worked decently well.</p>
Quora Question Pairs
Overview Of 3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I also tried adjusting by test question frequency and intersection count (for one submission only). I actually suspect there may be a better way to exploit it, but I discovered this pretty late in the comp and didn't want to waste too many submissions searching for it. I basically did two approaches - I used some submissions to get the fractions in the public lb for each frequency, by submitting a 0.5 prediction for all instances and then varying the prediction for question frequencies between all 0, and then another submission for all 1. The original motivation was to get the number of real questions in each frequency bin, because for instance ~30% of the dataset was unique questions, which are pure NLP in train, and I wanted to know if this held true in test. Once I did this though, I noticed that only 3% of the unique questions in the public lb were duplicates, compared to like 25% in train. This was one reason NLP wasn't as important for this comp. I mainly just used trial and error with public lb feedback to do the adjustment. Minor tweaks to the adjustment didn't matter that much.</p> <p>One thing that led me to think there was a systematic difference was that including test frequency as a feature led to gains on cv, but significant losses on the lb.</p> <p>I'll update this post with more detail and the code when I have documented it better, but I figure I'll quickly describe my part of our solution. I'll focus on my parts; Sean provided very good NLP, nn and graphical features, so I'll let him describe these.</p> <p>Overall we used around 1300 features for the 1st level models, which were nn, lightgbm and xgb models. Lightgbm worked really well for this competition, being up to 5x faster than xgb and only a little less accurate. These features were primarily NLP features and what I'll call meta-features. The meta-features were frequency features and graph features, such as intersection count, the frequency of intersecting questions, question frequency, question frequency just for q1, question frequency just for q2 and so on. NLP features were things like matching words, whether the word before a "?" matched, frequencies of matching words, frequencies of non-matching words, how similar matching and non-matching words were, etc.</p> <p>These models, roughly 15 of them (some nn were bagged, I'm counting that as 1 model) were then stacked together. Before stacking we introduced features such as q1_hash_by_mean_pred, q1_hash_by_min_pred, sum of the question hashes etc. Our best single model was an xgb model and got around 0.185 on cv; stacking got down to 0.157 on cv. It was important to include meta features when stacking, as some models were built just on NLP and this allowed them to still be useful. Sean also found a feature in the last couple of days which related the OOF predictions of NLP models to the intersections somehow, and this provided significant gains.</p> <p>Edit: Another interesting find by Sean was averaging the stacked models with the individual models (so the final ensemble was 0.75 * stacked + 0.25 * (base models)). This led to gains of ~0.001 on the public lb, but harmed cv. So doing this somehow ensured better generalization; I'm still not sure exactly why, but it did also translate to the private lb.</p> <p>One idea which probably a lot of teams did not use was to selectively adjust our predictions based on question frequency. There was a systematic difference between lb and cv, and it was somehow related to question frequency. For instance, we used the link function to adjust the odds down by a factor of 4.7 when the question frequency sum was 2 (both questions unique) and scaled this gradually down with increasing question frequency.</p> <p>That is all for now; there are a bunch of other things we did and I'll try to post the code when it is cleaned and documented. Thanks for a fun competition, even if it was about more than just NLP.</p>
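<p>The frequency-based adjustment described above can be sketched as follows. Only the 4.7 factor for two unique questions comes from the write-up; the factors for the other frequency bins are placeholders for the 'scaled gradually down' part:</p> <pre><code>import numpy as np

def adjust_by_frequency(preds, q1_freq, q2_freq, eps=1e-7):
    """Scale the predicted odds down by a factor that depends on the question frequency sum.
    4.7 for two unique questions comes from the write-up; the other values are placeholders."""
    freq_sum = q1_freq + q2_freq
    factor = np.select(
        [freq_sum == 2, freq_sum == 3, freq_sum == 4],
        [4.7, 3.0, 2.0],
        default=1.0,
    )
    p = np.clip(preds, eps, 1 - eps)
    odds = p / (1 - p) / factor          # adjust the odds down
    return odds / (1 + odds)
</code></pre>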
Quora Question Pairs
Overview of 2nd-Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm not sure about Stanislav's and Dmitry's sparse vectors, but in my case I used binary tf and set a cutoff of the top 2,000 1-grams and 2-grams. Then I summed the q1 and q2 vectors so there were three possible values for each ngram -- 0 if the ngram appeared in neither question, 1 if it appeared in only one of the questions, and 2 if it appeared in both. This turned out be a very effective strategy.</p> <p>Actually, we were neck-and-neck with the #1 team over the last two weeks, but we were using a trick to hide our actual LB score so as not to motivate you guys to work even harder ;-). As it turned out, we were too clever for our own good. When we finally revealed our actual LB score in the final hours, it turned out to be a bit worse than we had expected.</p> <p>First, congratulations to everyone who participated, expecially the winners, and thanks to the organizers. Below is a very brief description of our solution and some personal observations I had regarding this competition. I'll let Stanislav and Dmitry add their own thoughts.</p> <p>Our final solution was a simple weighted ensemble of 7 models -- 6 lightGBM and one NN. After producing the ensembled predictions, we had a post-processing phase where we recalibrated the probabilities based on some graphicial properties, similar to what Jared described.</p> <p>Each of us independently created both graphical and NLP features, which we then shared. We had one LGB model that included all useful features (many thousands, including sparse ngram vectors). The others included different subsets of the features and used different LGB architectures. Our single superset model scored about 0.116-0.117 on the LB. </p> <p>One thing that definitely helped on the NLP side was to process the text in many different ways -- lowercase and unchanged, punctuation replaced in different ways, stop words included and excluded, stemmed and not stemmed, etc. -- and to build features from all of these different representations. Each representation excelled at picking up different types of text similarities so mixing them all together in a single model was very beneficial.</p> <p>While this was an interesting and challenging competition, the importance of the question distrbutions and the difference between the train and test sets were rather perplexing. The graphical features (e.g., common neighbors) were very significant and interacted with the NLP features in complex ways that we'll never understand. If, as others have suggested, these peculiarities are artifacts of the way the questions for the train/test set were selected, and not general properties of the full Quora database, then we have modeled a very artificial data set that has little relation to the real world.</p> <p>Another interesting aspect of the questions is that a disproportionate number seem to be from/about India. This caused all of the NLP features (e.g., tf and tfidf) to skew towards words relevant to India questions. Again, I'm doubtful that the resulting features will generalize well to questions from/about different regions.</p> <p>Finally, the labeling was rather noisy. 
When I looked at the question pairs where our models failed worst, there were many cases where I felt that our model was correct and the labeling was wrong. If I were to make just one recommendation to Quora, it would be that they use the winning models to identify these questionably-labeled pairs and then review them. Improving the labeling would allow the models to perform much better.</p> <p>Oh no! We didn't want to scare you. We wanted to lull you into a false sense of omnipotence ;-). Once again, congratulations! It was a tough battle out there.</p>
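<p>The 0/1/2 n-gram encoding described at the top of this post (binary term frequency over the top 2,000 1-grams and 2-grams, with the q1 and q2 vectors summed) can be sketched like this; the dataframe and column names are hypothetical:</p> <pre><code>from sklearn.feature_extraction.text import CountVectorizer

def summed_binary_ngrams(df):
    """df has 'question1' and 'question2' columns (hypothetical names).
    Returns a sparse matrix whose entries are 0 (n-gram in neither question),
    1 (in exactly one) or 2 (in both)."""
    vectorizer = CountVectorizer(ngram_range=(1, 2), max_features=2000, binary=True)
    vectorizer.fit(list(df["question1"]) + list(df["question2"]))
    return vectorizer.transform(df["question1"]) + vectorizer.transform(df["question2"])
</code></pre>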
Quora Question Pairs
Overview of 4th-Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, congratulations to every participant and thanks a lot to the organizers. I'll give an overview of our solution. </p> <p>We can simply divide the solution into different parts: Pre-processing, Feature Engineering, Modeling and Post-processing.</p> <strong>Pre-processing</strong> <p>We made several different versions of the original data (train.csv &amp; test.csv).</p> <ol> <li><code>Text-cleaning</code>: spell correction, symbol processing, acronym restoration, ...</li> <li><code>Word-stemming</code>: SnowballStemmer, ...</li> <li><code>Shared-word-removing</code>: delete the words that appear on both sides</li> </ol> <strong>Feature Engineering</strong> <p>There were around 1400+ features in the feature pool, which were combined in different ways. These features can be classified into the following categories.</p> <ol> <li><code>Statistic</code>: rate of shared words, length of sentences, number of words, ...</li> <li><code>NLP</code>: analysis of grammar tree, negative word count, ...</li> <li><code>Graph</code>: pagerank, hits, shortest path, clique size, ...</li> </ol> <strong>Modeling</strong> <p>We used <code>DL Models</code>, <code>XGB</code>, <code>LGB</code> and <code>LR</code>. The best single model scored about 0.122~0.124 on the LB. We built a multi-layer stacking system to ensemble different models together (about 140+ model results); this method gave a gain of ~0.007 on the public LB.</p> <strong>Post-processing</strong> <p>As we all know, the distributions of the training data and test data were quite different. We cut the data into different parts according to clique size and rescaled the results in each part; this method gave a gain of ~0.001 on the public LB. </p> <p>What's more, we developed a lightweight framework 'FeatWheel' to help us with ML jobs, such as feature extraction, feature merging and so on; you may enjoy it.</p>
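<p>To illustrate the post-processing step, here is a sketch of rescaling predictions separately per clique-size partition with the standard prior-shift correction; the partition labels and prior values are placeholders, not the team's actual numbers:</p> <pre><code>import numpy as np

def prior_rescale(p, train_prior, test_prior, eps=1e-7):
    """Standard prior-shift correction from the train positive rate to an assumed test rate."""
    p = np.clip(p, eps, 1 - eps)
    a = test_prior / train_prior
    b = (1 - test_prior) / (1 - train_prior)
    return a * p / (a * p + b * (1 - p))

def rescale_by_partition(preds, partition, priors):
    """partition: one label per row (e.g. a clique-size bucket);
    priors: {label: (train_prior, assumed_test_prior)} with placeholder values."""
    out = preds.copy()
    for label, (train_prior, test_prior) in priors.items():
        mask = partition == label
        out[mask] = prior_rescale(preds[mask], train_prior, test_prior)
    return out
</code></pre>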
Quora Question Pairs
Solution #12 overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Quora Question Pairs <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, and congrats to you as well!</p> <p>Thanks!</p> <p>Thanks!</p> <p>Your LB dropped with pairwise features? Maybe your singleton features were capturing some of the magic. First that comes to mind is the disymmetry of qid difference likelihoood. </p> <p>What made the huge gain? Some additional NLP features, and the use of target instead of predictions when computing our fourth magic. And yes, we worked like hell to submit with that improvement as early as possible the last day. We wanted all to be puzzled by the jump when waking up ;)</p> <p>Here is an overview of our solution as seen by me. I hope my fellow teammates will correct/augment what I say here. First of all, let me thank them for great ideas and code, hard work, and great team spirit, including when we were losing ground on LB. I also want to thank the numerous kagglers that shared useful content. I name few of them below but there are way more to thank. And thanks to kaggle team and the organisers for this very interesting dataset. Of course, we all have quesitons about the sampling strategy used to construct that dataset, and any feedback from Quora team would be much appreciated.</p> <p>This competition mean lots of learning for the four of us. I personally learned a lot about NLP as it was a first time. I didn’t know what word embeddings were when I started for instance. Fortunately, the official external data thread plus posts by various people contained great starter lists. </p> <p>I will focus mostly on the xgb/lgb side of our solution, and stacking, and will let our DL experts, kenchen and yifanxie describe the NNs side if they feel inclined to do so.</p> <p>We used 4 magic features families, 3 of them having been shared publicly one way of another.</p> <p>Graph node features. Looking at the graph where questions are nodes and rows are edges, we computed node degree (Jared’s frequency), connected component size, biconnected component sizes, average neighbor degree, etc</p> <p>Graph edge features. Using same graph, computed features based on common neighbors, eg number of common neighbors, number of common neighbors divided by sum of number of neighbors for both questions, etc</p> <p>Temporal pattern. We did not spent much time on that, but we have one feature that is the abs difference of each question rank in order of appearance. Ranks are computed for each side (question1, and question2) independently. We observed that the order of questions has an effect but did not model it.</p> <p>Transitivity of is duplicate. If q1 is duplicate of q2 and q2 is a duplicate of q3 then q1 and q3 are most probably duplicates. We say most probably because this is not 100% true on the train dataset. A variant is: i If q1 is duplicate of q2 and q2 is not a duplicate of q3 then q1 and q3 are most probably not duplicates. One way to capture this is to to say that if q1 and q2 have similar properties with respect to their common neighbors, then they are probably duplicates. 
We therefore compute, for each level 1 prediction p, and for each row (q1, q2): the series of predictions p(q1,q) and the series of predictions p(q2,q) for all q common neighbours of q1 and q2. We then compute distances between these series, and add it as a feature for level 2. We improved this in the last day of the competition by using the target value instead of train predictions in an out of fold manner. </p> <p>While the first three feature families may be artifacts of the sampling method used for constructing the dataset (aka leaks), the last one is not. It is exploiting a mathematical property of the is_duplicate relation.</p> <p>We also used a lot of NLP features, starting from ones shared by @anokas and @abishek, and also drawn from previous competition winners, especialy @ChenglongChen github repo. We computed features on raw text, text cleaned in various ways, and stemmed/lemmatized. We used various word vectors: pretrained Word2vec, Glove, Fasttext, but also computed ones using gensim dbow2vec, and lsi. For each embedding we computed distances similar to those shared by abishek, and also 10 component pca of the abs difference of question vectors. Using pca has the advantage of not basing any feature on the order in which the vector components are presented.</p> <p>Most of the NLP features are used in their pairwise form. For instance, if l1 = len(q1) and l2 = len(q2) are the length of each question in a row, we use min(l1, l2) and max(l1,l2) as features instead of l1 and l2. We also took the ration of the smallest to largest in some cases. Goal is to not have features that depend on the order in which questions appear in a row. Another way is to duplicate rows in each fold: if (q1, q2) is a row, then add the row (q2, q1). This avoids overfitting on very popular questions whatever the other question they appear with.</p> <p>We used class weight or output scaling to improve LB When using output scaling there is a trap that may explain why stacking failed for many teams. If you input rescaled level 1 test predictions to your level 2 learning, then you must also rescale the level1 out of fold train predictions. In general, whatever you do to compute your test predictions must be applied to the train predictions. </p> <p>We generated about 100 level 1 models over the course of the competition, mostly using xgboost and lightgbm on the one end, and NNs on the other end. In the last couple of weeks we also added xgb gblinear, random forest (H2O and sklearn), extra tree classifiers, logistic regression, and few others, to add variety to the ensemble. Our best submitted model is a xgb with 600+ features, with LB 0.131x. We may have built better models after that but we didn’t submit them individually.</p> <p>We built 2 ensemble, one made of models that did not use any of the 4 magic feature families, and one where these features were allowed. For both ensemble we used xgb, lgb, logistic regression, and NNs. Transitivity post processing of level1 predictions was used for xgb/lgb. Level 3 classifier was a logistic regression. We tried to apply transitivity again at level 3 but that led to overfit. Stacking yield about 0.01 LB improvement overall.</p> <p>Work was split that way between us: Cpmp: feature engineering, xgb, stacking Ss: feature engineering, lgb, additional algorithms run KenChen: NNs, feature engineering, Yifanxie: NNs, feature engineering, xgb and other algorithms runs</p> <p>We also learned about teaming as we never worked together before. 
It has been a very tiring, but very rewarding experience. Our 0.003 improvement the last day of the competition was the best ending we could dream of.</p> <p>Thanks.</p> <p>An example of how to build graph features is given by @qianqian here: <a href="https://www.kaggle.com/c/quora-question-pairs/discussion/34283#190117">https://www.kaggle.com/c/quora-question-pairs/discussion/34283#190117</a></p> <p>Thanks! I must admit I wouldn't have understood this write up either if I had not participated in the competition ;)</p> <p>We also found that removing NN models from our ensemble was losing almost 0.01 on LB. Diversity was more important than individual model accuracy.</p>
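<p>The transitivity feature described above (comparing, for each row (q1, q2), the series of level-1 predictions p(q1, q) and p(q2, q) over the common neighbours q) can be sketched as follows; the column names and the choice of distance are placeholders, not our actual code:</p> <pre><code>import networkx as nx
import numpy as np

def transitivity_feature(df, level1_preds):
    """For each row (q1, q2), compare the level-1 predictions p(q1, q) and p(q2, q)
    over all common neighbours q and summarise the difference."""
    g = nx.Graph()
    for q1, q2, p in zip(df["question1"], df["question2"], level1_preds):
        g.add_edge(q1, q2, p=p)

    feats = []
    for q1, q2 in zip(df["question1"], df["question2"]):
        common = set(g.neighbors(q1)).intersection(g.neighbors(q2))
        if not common:
            feats.append(-1.0)
            continue
        s1 = np.array([g[q1][q]["p"] for q in common])
        s2 = np.array([g[q2][q]["p"] for q in common])
        feats.append(np.abs(s1 - s2).mean())   # one possible distance between the two series
    return np.array(feats)
</code></pre>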
RANZCR CLiP - Catheter and Line Position Challenge
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you. You can also be a GM sooner or later😃</p> <p>First of all, I would like to thank Kaggle and the organizers for hosting this great competition. Also, I would like to thank my teammates - <a href="https://www.kaggle.com/ludovick" target="_blank">@ludovick</a> and <a href="https://www.kaggle.com/woshifym" target="_blank">@woshifym</a> . Due to the time difference, we could work 24 hours a day.</p> <h1>Solutions</h1> <p>Our solution can be divided into 2 parts, a PyTorch part and a TensorFlow part.</p> <h2>PyTorch part</h2> <p>For the PyTorch part I would first like to thank <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> for the 3-step method. All our models are based on the 3-step method. We trained models with different backbones and different image sizes to get better results.</p> <ul> <li>resnet200d + image size: 600 CV: 0.9578</li> <li>ecaresnet269d + image size: 600 CV: 0.9584</li> <li>resnest200e + image size: 640 CV: 0.9625</li> </ul> <p>After that we found the <a href="https://github.com/alipay/cvpr2020-plant-pathology" target="_blank">soft label method</a> performs great in this competition. So we trained all the models above using this method to form our 4-step method.</p> <h2>TensorFlow part</h2> <p>My teammate <a href="https://www.kaggle.com/ludovick" target="_blank">@ludovick</a> did a great job using TensorFlow. I would like to invite him to introduce his method in detail later.<br> As a brief summary:</p> <ul> <li>EfficientNet B7 + 1024 image size</li> <li>EfficientNet L2 + 768 image size</li> <li>Soft label method</li> <li>Pseudo label</li> </ul> <h2>Ensemble</h2> <p>For the ensemble, we used the OOF files to train a linear regression model. Then we used this linear model to get the weight for each model. We selected our best CV and best public score in our final submissions. Unfortunately, we just missed our best ensemble, which was the second best in public score.</p> <p>Final submission one <br> CV: 0.9692 Public LB: 0.971 Private LB: 0.973</p> <ul> <li>resnet200d + 600</li> <li>ecaresnet269d + 600</li> <li>resnest200e + 640</li> <li>EfficientNet B7 + 1024 + soft label + PL</li> <li>EfficientNet L2 + 768 + soft label</li> </ul> <p>Final submission two<br> CV: 0.9689 Public LB: 0.971 Private LB: 0.974</p> <ul> <li>resnet200d + 600</li> <li>resnest200e + 640</li> <li>EfficientNet B7 + 1024 + soft label + PL</li> <li>EfficientNet B7 + 1024 + PL</li> <li>EfficientNet L2 + 768 + soft label</li> </ul> <h2>Tricks</h2> <ul> <li>Larger models</li> <li>Higher resolution</li> <li>Soft label</li> <li>Pseudo label</li> </ul> <h2>Acknowledgements</h2> <p>Thanks to the TFRC team for providing us free TPUs.</p> <p>Thanks. We didn't try a segmentation method in our solution. Soft label and pseudo label are the most important tricks in our solution.</p>
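<p>For the ensembling step (fitting a linear regression on the OOF files to obtain per-model weights), a minimal sketch could look like the code below; the non-negativity constraint and the normalisation are assumptions, not necessarily what the team did:</p> <pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

def blend_weights_from_oof(oof_preds, y_true):
    """oof_preds: array of shape (n_samples, n_models) with each model's OOF predictions
    for one target column; returns non-negative weights that sum to one."""
    reg = LinearRegression(positive=True, fit_intercept=False)
    reg.fit(oof_preds, y_true)
    weights = np.clip(reg.coef_, 0.0, None)
    return weights / weights.sum()

# At inference time the test predictions are blended with the same weights:
# blended = test_preds.dot(weights)
</code></pre>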
RANZCR CLiP - Catheter and Line Position Challenge
11th Place Solution - Utilizing High resolution, Annotations, and Unlabeled data
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Actually I noticed this paper today, too. Planning on reading it :)</p> <p>Thanks a lot! Rooting for your last gold for GM ;)</p> <p>You asked for some references for pseudo training. Actually, if you search with the keyword <code>pseudo labeling</code>, you can find a lot of resources here on Kaggle or at Google. The previous ImageNet SOTA, Noisy Student (<a href="https://arxiv.org/abs/1911.04252" target="_blank">https://arxiv.org/abs/1911.04252</a>), also uses pseudo labeling.<br> Thank you for the congratulations :)</p> <p>I had access to 4 x RTX3090, which I really appreciate ;)</p> <p>Thanks. Congratulations on your impressive finish too</p> <p>Thanks ;) Thank you, haha</p> <p>Thanks a lot! Thank you 😀</p> <p>Congratulations to the winners. It was an honor to compete with brilliant minds.</p> <p>I'll first summarize the history of my public scores.</p> <ol> <li>efficientnet-b5 baseline: <strong>0.956</strong></li> <li>downconv: <strong>0.959</strong></li> <li>segmentation pretrain: <strong>0.963</strong></li> <li>other optimizations: <strong>0.966</strong></li> <li>5fold: <strong>0.970</strong></li> <li>pseudo training (<em>For brevity, I'll call <code>pseudo-labeling-&gt;training</code> <code>pseudo training</code></em>): <strong>0.972</strong></li> <li>5fold &amp; ensemble with b4, b6: <strong>0.972</strong></li> </ol> <p>This is the model I used.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1796795%2Fb1fac286781793dc0ee4674cb04f3859%2F2%20(1).JPG?generation=1657688615932601&amp;alt=media" alt=""></p> <p>Now I'll go into details of the 3 main problems that I faced and the solutions for each of them.</p> <h2>1. How to utilize high resolution? - Downconv</h2> <p>We are given &gt;2048x2048 resolution images, and we lose information when we downsample. So, as many observed, increasing the resolution boosts the score quite a lot. </p> <p>I wanted to use 2048x2048 resolution, but it was too large to fit in GPU memory. So I used one convolutional layer to downsample the image from 2048x2048 to 1024x1024, then used a regular CNN. GPU memory usage was almost identical to using a plain 1024x1024 image as input. It was inspired by <a href="https://www.kaggle.com/ekydna" target="_blank">@ekydna</a> 's method at <a href="https://www.kaggle.com/c/understanding_cloud_organization/discussion/118255" target="_blank">https://www.kaggle.com/c/understanding_cloud_organization/discussion/118255</a></p> <p>To be precise, I used the following pseudo code. I concatenated the avgpool-ed input with the downconv-ed input.</p> <pre><code>def __init__(self):
    self.avgpool = nn.AvgPool2d(2)
    # learnable downsampling: 1-channel 2048x2048 input -> 7-channel 1024x1024 features
    self.downconv = nn.Sequential(
        nn.Conv2d(1, 7, kernel_size=5, stride=2, padding=2, bias=False),
        nn.BatchNorm2d(7),
        nn.ReLU()
    )

def forward(self, x):
    # 1 avg-pooled channel + 7 learned channels = 8-channel 1024x1024 input
    x = torch.cat((self.avgpool(x), self.downconv(x)), dim=1)
    features = CNN(x)  # the regular backbone
</code></pre> <h2>2. How to utilize catheter position annotations? - Pre-training</h2> <p>We are given additional catheter position annotations for some of the images, so it was natural to find a way to leverage this information.
Segmentation models came to mind first, so from the start I processed the annotations into segmentation masks and used a UNet architecture.</p> <p>At first, I tried to use multi-task learning, but I couldn't make it work. I suspect that it is difficult for the model to squeeze out the classification loss when there is also a segmentation loss.</p> <p>Then, I tried a pre-training approach, which worked. I pretrained a UNet with the segmentation loss only, then used the pretrained UNet encoder to train for classification. You need to increase the pos_weight of the segmentation BCE loss to make it work.</p> <p>Also, I tried concatenating the mask predictions to the image input, but couldn't make it increase CV.</p> <h2>3. How to utilize unlabeled data? - Pseudo-training</h2> <p>There are a lot of external datasets, like those listed in <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/220873" target="_blank">https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/220873</a> , which we could use for the competition (<a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/222644#1231865)" target="_blank">https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/222644#1231865)</a>. But the thing is, they are not labeled. So the question becomes how to utilize unlabeled data. I tried two approaches: pre-training and pseudo-training.</p> <p>Recently, there has been a lot of research on self-supervised learning to make use of large unlabeled data, especially contrastive learning in computer vision. I experimented with SwAV and SimSiam. I spent 2 weeks on these contrastive pretraining approaches, but unfortunately they didn't boost the score.</p> <p>On the other hand, pseudo-training worked. I trained 5-fold models on the labeled dataset, used them to predict the unlabeled dataset, selected images that have max prob &gt; 0.5, appended them to the original dataset, and retrained the model (a small sketch of this selection step is given at the end of this post).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1796795%2F67cd710a790111a4324a39672d2de966%2F3%20(1).JPG?generation=1657688656901232&amp;alt=media" alt=""></p> <h2>Other points</h2> <h3>Extensive augmentations</h3> <p>Hard augmentation prevents the model from overfitting.
I used the albumentations library for augmentations.</p> <pre><code>transforms = albu.Compose([
    albu.RandomResizedCrop(cfg.resolution, cfg.resolution, scale=(0.9, 1), p=1),
    albu.OneOf([
        albu.MotionBlur(blur_limit=(3, 5)),
        albu.MedianBlur(blur_limit=5),
        albu.GaussianBlur(blur_limit=(3, 5)),
        albu.GaussNoise(var_limit=(5.0, 30.0)),
    ], p=0.7),
    albu.OneOf([
        albu.OpticalDistortion(distort_limit=1.0),
        albu.GridDistortion(num_steps=5, distort_limit=1.),
        albu.ElasticTransform(alpha=3),
    ], p=0.7),
    albu.CLAHE(clip_limit=4.0, p=0.7),
    albu.IAAPiecewiseAffine(p=0.2),
    albu.IAASharpen(p=0.2),
    albu.RandomGamma(gamma_limit=(70, 130), p=0.3),
    albu.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.75),
    albu.OneOf([
        albu.ImageCompression(),
        albu.Downscale(scale_min=0.7, scale_max=0.95),
    ], p=0.2),
    albu.CoarseDropout(max_holes=8, max_height=int(cfg.resolution * 0.1), max_width=int(cfg.resolution * 0.1), p=0.5),
    albu.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=15, border_mode=0, p=0.85),
    albu.Normalize(mean=0.482288, std=0.22085)
])
</code></pre> <p>I didn't use HorizontalFlip since I thought the position of the catheter endpoint matters, and it didn't improve CV.</p> <h3>Ensemble</h3> <p>When training only with the original dataset, the 5-fold ensemble boosted the score quite a lot. However, when training with the pseudo-labeled dataset, the 5-fold ensemble didn't boost the score much. I suspect the reasons are: 1. with the additional data, the data the 5-fold models see overlaps more; 2. soft pseudo labels, which carry more implicit information than hard labels, force the models to converge to a certain point, so the models lose diversity.</p> <p>Ensembling different model architectures also didn't help much in the pseudo training stage.</p> <p>Yeah, downconv is easily implemented, so I also think it doesn't hurt to try it. Thanks!</p> <p>Yeah, I probably wouldn't have thought of it either, if I hadn't seen Andrey Kiryasov's solution. Thanks!</p> <p>I don't know how to do it with TensorFlow. Maybe there is such code at <a href="https://github.com/qubvel/segmentation_models" target="_blank">https://github.com/qubvel/segmentation_models</a></p> <p>You just need to replace the first conv layer. For PyTorch code, you can refer to <a href="https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/encoders/_utils.py#L5" target="_blank">https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/encoders/_utils.py#L5</a> .</p>
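<p>A minimal sketch of the pseudo-training selection step described above (average the fold models' predictions on the unlabeled images and keep only the confident ones as soft labels); the variable and column names are illustrative, not the author's actual code:</p> <pre><code>import numpy as np
import pandas as pd

# preds: (n_unlabeled, 11) mean of the fold models' sigmoid outputs on external images
# paths: list of the corresponding image paths
def make_pseudo_labels(preds, paths, threshold=0.5):
    keep = preds.max(axis=1) > threshold   # keep images where at least one class is confident
    pseudo = pd.DataFrame(preds[keep], columns=[f"label_{i}" for i in range(preds.shape[1])])
    pseudo["path"] = np.asarray(paths)[keep]
    return pseudo  # soft labels, to be appended to the labeled training set

# usage: train_df = pd.concat([train_df, make_pseudo_labels(preds, paths)], ignore_index=True)
</code></pre>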
RANZCR CLiP - Catheter and Line Position Challenge
12th on Public, 28th on Private Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>1. Image pre-processing:</h1> <p>All our classification models in the final solution were trained at 600x600 resolution and run at 640x640 resolution for inference. <br> All images were cropped based on their black borders. We also discovered that some images are inverted, so we inverted them back. <br> Augmentations: random resized crop, horizontal flip, random gamma, brightness, shift, scale, rotate, contrast, different blurs, elastic/optical/grid distortions, cutout.</p> <h1>2. Model design:</h1> <p>As input for most of our models we used a 4-channel input: <br> 1-channel grayscale X-ray image<br> 3-channel mask, where ETT, CVC, NGT, SwanGanz were encoded as (0, 0, 255), (0, 255, 0), (255, 0, 0) and (255, 255, 255) respectively. To get such masks we trained a simple segmentation network, a Unet with an EffNet-b0 backbone.</p> <p>Best performing classification backbones: EfficientNet-b6, EfficientNet-b7, ResNet200d, SeResNet152d</p> <h1>3. Training:</h1> <p>Group KFold 5-fold CV<br> Batch size: 14 with 2-batch gradient accumulation (on 4 GPUs)<br> Adam for 30 epochs.<br> LR schedule: 0.001 SGDR<br> The top 5 checkpoints were used to apply SWA</p> <h1>4. Loss:</h1> <p>BCE or <a href="https://github.com/BloodAxe/Kaggle-2020-Alaska2/blob/3c1f5e8e564c9f04423beef69244fc74168f88ca/alaska2/loss.py#L305" target="_blank">ROCAUCLoss</a>+BCE</p> <h1>5. Ensembling all together:</h1> <p>The simple mean of the predictions appeared to work the best. We've tried different techniques to merge predictions but didn't succeed. <br> A single model trained in this way (5 folds) without any post-processing gets <strong>0.971</strong> Public LB and <strong>0.970</strong> Private LB. </p> <p><strong>Framework:</strong> PyTorch</p> <h1>What didn't work:</h1> <ul> <li>Chest14</li> <li>Pseudo Labels</li> </ul>
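<p>A minimal sketch of assembling such a 4-channel input (one grayscale channel plus the colour-coded tube mask); the colour encoding follows the write-up, everything else (names, threshold) is illustrative:</p> <pre><code>import numpy as np

# colour codes from the write-up: ETT, CVC, NGT, SwanGanz
TUBE_COLORS = {"ETT": (0, 0, 255), "CVC": (0, 255, 0), "NGT": (255, 0, 0), "SwanGanz": (255, 255, 255)}

def build_4ch_input(gray, tube_probs, threshold=0.5):
    """gray: (H, W) uint8 X-ray; tube_probs: dict tube name -> (H, W) probability map from the Unet."""
    mask = np.zeros((*gray.shape, 3), dtype=np.uint8)
    for name, color in TUBE_COLORS.items():
        mask[tube_probs[name] > threshold] = color  # later tubes overwrite earlier ones where they overlap
    # 1 grayscale channel + 3 mask channels -> (H, W, 4)
    return np.concatenate([gray[..., None], mask], axis=2)
</code></pre>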
RANZCR CLiP - Catheter and Line Position Challenge
140th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is the code for converting the submissions to an image-like tensor.</p> <pre><code>import numpy as np

submissions = [sub1, sub2, sub3, sub4]
# keep only the 11 label columns (drop the StudyInstanceUID column)
submissions = [sub.iloc[:, 1:].values for sub in submissions]
x = np.array(submissions)        # 4xNx11
x = np.transpose(x, (1, 0, 2))   # Nx4x11
x = np.expand_dims(x, axis=1)    # Nx1x4x11
</code></pre> <p><a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a>'s pretrained weights were the key to getting CV &gt; 0.96; they boosted both the CV and the LB.</p> <p>Thank you👍🏻</p> <p>Thank you:)</p> <p>First of all, thanks to the <strong>Royal Australian &amp; NZ College of Radiologists</strong> and <strong>kaggle</strong> for organizing this competition. </p> <h1>Summary</h1> <p>Private LB: 0.970</p> <p><img src="https://user-images.githubusercontent.com/66665933/111384982-7ef09800-86ed-11eb-89cd-5848b0b41f8c.png" alt="input_pipeline (1)"></p> <table> <thead> <tr> <th>Model</th> <th>Note</th> <th>CV</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>ResNet200D</td> <td></td> <td>0.9566</td> <td>0.965</td> </tr> <tr> <td>Multi-Head ResNet200D</td> <td><a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a>'s multi-head approach</td> <td>0.9639</td> <td>0.963</td> </tr> <tr> <td>EfficientNet B5</td> <td></td> <td>0.9570</td> <td>0.96</td> </tr> <tr> <td>Resnet200D</td> <td>pseudo labelling test</td> <td>0.9628</td> <td>0.965</td> </tr> <tr> <td>Weight Average of these models</td> <td></td> <td>0.9673</td> <td>0.967</td> </tr> <tr> <td>Stacking(sub1)</td> <td>powered average with weights</td> <td>0.9681</td> <td>0.968</td> </tr> <tr> <td>Stacking(sub2)</td> <td>powered average with weights</td> <td>0.9676</td> <td>0.968</td> </tr> </tbody> </table> <ul> <li>StratifiedGroupKFolds(k=5)</li> <li><a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a>'s pretrained weights </li> <li>heavy augmentation</li> <li>Adam + OneCycleLR</li> <li>2×TTA (horizontal flip)</li> <li>weight average</li> <li>stacking of 4 models</li> </ul> <p>Let me know if you have any questions. Thank you:). </p>
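<p>A minimal sketch of the 2×TTA (horizontal flip) listed in the summary, averaging predictions over the original and flipped images; the model and tensor names are illustrative:</p> <pre><code>import torch

@torch.no_grad()
def predict_tta(model, images):
    """images: (B, C, H, W) tensor, already resized and normalized."""
    model.eval()
    probs = model(images).sigmoid()
    probs_flipped = model(torch.flip(images, dims=[3])).sigmoid()  # flip along the width axis
    return (probs + probs_flipped) / 2
</code></pre>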
RANZCR CLiP - Catheter and Line Position Challenge
16th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you </p> <p>I think it was a collaborative effort</p> <p>Sorry, there is a change in the architecture:<br> the Multi Head Attention models were also trained in 3 stages.</p> <p>Thanks </p> <p><strong>Thanks to Kaggle and the hosts for this very interesting competition with annotations. This has been a great collaborative effort, so please also give your upvotes to <a href="https://www.kaggle.com/syxuming" target="_blank">@syxuming</a>, <a href="https://www.kaggle.com/fanwenping" target="_blank">@fanwenping</a>, <a href="https://www.kaggle.com/chanyanyuese" target="_blank">@chanyanyuese</a>. Congrats to the winners!</strong></p> <p>Funny thing: we had a higher CV than the 1st place winner but a lower LB 😂😂</p> <h3>TLDR</h3> <p>We had a simple approach. Here is the basic diagram:<br> <img src="https://i.ibb.co/f2j1XwB/Blank-diagram-6.png" alt=""></p> <p>First of all I would like to thank <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> and <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> for the starting points and notebooks. <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a>, your model scored CV 0.9767 / LB 0.972 (which is our best single model). We also used the staged training proposed by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> and the kernel provided by <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>, followed by finetuning with multiple datasets.</p> <p><strong>CV strategy</strong>: we each had a different CV split but used the same splitting algorithm, as proposed by <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a>.</p> <h3>Models</h3> <p>We basically used 3 backbones with 4 heads (but only 2 of the heads were experimented with) in our submissions:</p> <ul> <li>Backbones<br> -- Resnet200d<br> -- EfficientNetB7<br> -- Resnet50d</li> <li>Heads<br> -- Multi Head Attention<br> -- GeM<br> -- Simple Global Pooling<br> -- AdaptiveConcatPooling</li> </ul> <h3>Our Strategy</h3> <p>We started with the ideas proposed by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> and <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> as our starting point. <br> We train the model in 3 stages as <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> proposed. We then trained Resnet200d with GeM on soft labels (created by the stage-3 model) on the competition set itself (we thought of it as knowledge distillation) to pretrain the model. This model showed a CV of 0.97; the LB wasn't tested. We found this soft labelling helps a lot, and we continued to do this on the NIH, PadChest and VinBigData external datasets (a small sketch of soft-label training is given at the end of this post). <br> Doing this gave CV: 0.971 / Public LB: 0.970 / Private LB: 0.971 for the 3-staged model<br> and CV: 0.9767 / Public LB: 0.970 / Private LB: 0.972 for Multi Head Attention.<br> We first soft-labelled only NIH, pretrained and finetuned, and then pretrained and finetuned on PadChest. We trained every staged model with 5 folds, and only 1 fold for Multi Stage.
We also used <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a>'s public high-scoring weights, pretrained and finetuned them, and added the result to the ensemble.</p> <h3>Ensembling</h3> <p>We used simple averaging.</p> <h3>Other Points</h3> <h4>Augmentation</h4> <pre><code># albumentations transforms (passed to Compose)
RandomResizedCrop(CFG.img_size, CFG.img_size, scale=(0.9, 1), p=1),
HorizontalFlip(p=0.5),
ShiftScaleRotate(p=0.5),
HueSaturationValue(hue_shift_limit=10, sat_shift_limit=10, val_shift_limit=10, p=0.7),
RandomBrightnessContrast(brightness_limit=(-0.2,0.2), contrast_limit=(-0.2, 0.2), p=0.7),
CLAHE(clip_limit=(1,4), p=0.5),
OneOf([
    OpticalDistortion(distort_limit=1.0),
    GridDistortion(num_steps=5, distort_limit=1.),
    ElasticTransform(alpha=3),
], p=0.2),
OneOf([
    GaussNoise(var_limit=[10, 50]),
    GaussianBlur(),
    MotionBlur(),
    MedianBlur(),
], p=0.2),
Resize(CFG.img_size, CFG.img_size),
OneOf([
    JpegCompression(),
    Downscale(scale_min=0.1, scale_max=0.15),
], p=0.2),
IAAPiecewiseAffine()
</code></pre> <p><strong>Logging</strong>: Neptune.ai</p> <h3>Things we didn't have time to do</h3> <ul> <li>Segmentation-based learning (that is what set us apart from the other top candidates)</li> <li>Retrieval-based learning</li> <li>GeM Pooling and AdaptiveConcatPool (we prepared them but didn't experiment with them)</li> </ul> <h3>Things that didn't work</h3> <ul> <li>Multi-staged ensembling (because it was hurting the score too much)</li> <li>Dynamic temperature for pseudo labelling</li> </ul> <p>Thanks to <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> for his <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220563" target="_blank">writeup</a>, where I learned how to write a solution writeup (since this is my first writeup).</p> <p>Thanks to all, we learned a lot. Team <strong>HotWater</strong></p> <p>Thanks </p> <p>Thanks, and congratulations on your achievement</p> <p>Sure. We had 8x V100 GPUs (cloud) and 3x 3090 GPUs</p> <p>Thanks for your MultiHead Attention<br> And congratulations on your score!</p>
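<p>A minimal sketch of training on soft labels as described in Our Strategy above (using a teacher model's sigmoid outputs as the targets of a BCE loss); this illustrates the general technique and is not the team's actual code:</p> <pre><code>import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # accepts soft targets in [0, 1]

@torch.no_grad()
def make_soft_labels(teacher, images):
    teacher.eval()
    return torch.sigmoid(teacher(images))  # per-class probabilities used as targets

def train_step(student, teacher, images, optimizer):
    targets = make_soft_labels(teacher, images)
    optimizer.zero_grad()
    loss = bce(student(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
</code></pre>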
RANZCR CLiP - Catheter and Line Position Challenge
17th place with 16 submissions :)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I joined late and needed to go through the important ideas only…</p> <p>I will definitely hit a plateau at some point, I believe…</p> <p>Thanks! You do amazing work!</p> <p>Thanks <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> </p> <p>Here are my submission and experiment records (CV/public LB/private LB):</p> <ul> <li>efficientnetb5-640 with tube prediction: 0.96136/0.964/0.968</li> <li>resnet200d-640 without tube prediction: 0.9614/0.959/0.967</li> <li>efficientnetb3-512 with tube prediction: 0.955/0.959/0.964</li> </ul> <p>2 models (efficientnetb5-640 and resnet200d-640) have almost the same CV but a 0.005 difference in LB, maybe because there are only 14k*0.25=3.5k images on the public LB… Still better to trust the CV if the CV split makes sense :)</p> <p>The 2 efficientnet models seem to work on my side, maybe because I used tube predictions that guide the training process. I did not submit the efficientnet model without tube predictions, so I could not compare from my side. </p> <p><strong>Thanks to the competition host and Kaggle for this competition! Congrats to all winners!</strong></p> <h4><strong>[Solution Overview]</strong></h4> <p><strong>1. Validation Split:</strong> </p> <ul> <li>Used patient id to split the train/validation sets for both the train and annotation csv files, to create CV folds for the later steps. </li> </ul> <p><strong>2. Make use of the partially annotated images:</strong></p> <ul> <li>I trained an efficientnet-b4 segmentation model to predict the location of 4 kinds of tubes + background at <strong>pixel level</strong>: ETT, NGT, CVC, and Swan Ganz Catheter</li> </ul> <p><strong>3. Train diverse classification models</strong></p> <ul> <li>Models: resnet200d, seresnet200d, efficientnet-b5</li> <li>Use the out-of-fold predictions from step 2 as 5 extra channels, merged with the original image to form a 6-channel input (+0.005 CV; see the sketch at the end of this post)</li> <li>Use the multihead approach from <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> (+0.002 CV)</li> <li>Use the good pretrained starting points from <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> (+0.005 CV)</li> <li>Use img size 640</li> </ul> <p><strong>4. Simple average</strong></p> <h4><strong>[Base Model Performances (cv/public lb/private lb)]</strong></h4> <ul> <li>resnet200d-640 with tube predictions, heavy aug: 0.96386/0.969/0.970</li> <li>seresnet152d-640 with tube predictions, heavy aug: 0.96158/0.967/0.969</li> <li>efficientnetb5-640 with tube predictions, heavy aug: 0.96136/0.964/0.968</li> <li>resnet200d-640 without tube predictions, light aug: 0.9614/0.959/0.967</li> <li>blend of the 4 models above: 0.968/0.971/0.973</li> </ul> <h4><strong>[What did not work]</strong></h4> <ul> <li>Predicting 11 labels + background in the segmentation model and adding them to step 3 as a multichannel input</li> </ul>
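<p>A minimal sketch of feeding such a 6-channel input (1 image channel + 5 predicted mask channels) to a standard backbone; here timm's <code>in_chans</code> argument replaces the first conv layer, which is one common way to do this (an illustration, not necessarily the author's exact code):</p> <pre><code>import torch
import timm

# backbone that accepts 6 input channels: 1 grayscale image + 5 predicted tube/background masks
model = timm.create_model("resnet200d", pretrained=True, in_chans=6, num_classes=11)

image = torch.randn(2, 1, 640, 640)  # grayscale X-ray batch
masks = torch.randn(2, 5, 640, 640)  # OOF segmentation outputs (ETT, NGT, CVC, Swan Ganz, background)
logits = model(torch.cat([image, masks], dim=1))  # (2, 11)
</code></pre>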
RANZCR CLiP - Catheter and Line Position Challenge
18th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! All my models use a 640x640 image size.</p> <p>Exactly! We need more GPUs…</p> <p>Thank you!</p> <p>I really appreciate your effort and contribution. You are my hero.</p> <p>Thank you very much!</p> <p>Exactly! I added a 20-second job to my submission notebook, and then a runtime error happened…</p> <p>Thank you! I have had trouble avoiding the runtime error…</p> <p>Thank you for the kind comment and for competing with me. It was really exciting.</p> <p>Sorry for the confusion. The 2nd and 3rd models have a "multi-head" classifier, one head per class (see the sketch at the end of this post). Technically it is one model, but it can be considered as 11 models. </p> <p>Thank you for your comment!</p> <p>Hi all! Thank you for hosting and organizing such a great competition. I learned many things from this competition.</p> <h1>Summary</h1> <ul> <li>As a starting point, I used <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a>'s and <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a>'s pretrained models. Thank you for sharing!!</li> <li>3-Stage Training Strategy:<ul> <li>1st: base model + a classifier for all classes with light augmentations</li> <li>2nd: base model + a classifier for each class with light augmentations</li> <li>3rd: base model + a classifier for each class with heavy augmentations</li></ul></li> <li>As a heavy augmentation, I tried to use "annotation-based mixcut"<ul> <li>To increase data variation, randomly erase the annotated area and its label, and fill the hole with an image region that has no label</li></ul></li> <li>To avoid the notebook runtime error, I measured the runtime many times; I think I achieved an 8 hours 59 minutes runtime by giving up TTA for fold 10 of Model 3.<ul> <li>I was so lucky to avoid errors!</li></ul></li> </ul> <p>I believe I could have tried many other ideas, like digging deeper into the external datasets, but I competed solo and did not have enough time and GPU resources.</p> <p>Finally I got public 0.970 (22nd) and private 0.973 (18th). </p> <p><img src="https://pbs.twimg.com/media/EwrSCJZUYAEQCW3?format=jpg&amp;name=large" alt=""></p>
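<p>A minimal sketch of a per-class "multi-head" classifier of the kind described above (a shared backbone with one small head per label); the backbone choice and head shape here are illustrative assumptions:</p> <pre><code>import torch
import torch.nn as nn
import timm

class MultiHeadClassifier(nn.Module):
    def __init__(self, backbone="resnet200d", num_labels=11):
        super().__init__()
        self.backbone = timm.create_model(backbone, pretrained=True, num_classes=0)  # pooled features only
        feat_dim = self.backbone.num_features
        # one independent head per label
        self.heads = nn.ModuleList([nn.Linear(feat_dim, 1) for _ in range(num_labels)])

    def forward(self, x):
        feats = self.backbone(x)  # (B, feat_dim)
        return torch.cat([head(feats) for head in self.heads], dim=1)  # (B, num_labels) logits
</code></pre>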
RANZCR CLiP - Catheter and Line Position Challenge
19th Place Solution - First Time With NFNets
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I think they will be, in time. <a href="https://www.kaggle.com/rwightman" target="_blank">@rwightman</a> is still trying to make them more practical in his great library:</p> <p><a href="https://twitter.com/wightmanr/status/1372292319634882560" target="_blank">https://twitter.com/wightmanr/status/1372292319634882560</a></p> <p>Thanks Chris, congrats to you and your team too, I really liked your team's solution of predicting/finding similar images from NIH!</p> <p>I think so, it's +0.005 with nfnet and +0.003 without nfnet if I remember correctly.</p> <p>Hey all, congrats to everyone for their hard work. Before I start I'd like to thank my teammates <a href="https://www.kaggle.com/pukkinming" target="_blank">@pukkinming</a> , <a href="https://www.kaggle.com/frtgnn" target="_blank">@frtgnn</a> and <a href="https://www.kaggle.com/divyansh22" target="_blank">@divyansh22</a> for their hard work and commitment. The competition was really interesting for us and we learned a lot along the way.</p> <p>I would also like to thank all the competitors who shared their code, ideas, discussions, datasets, etc.</p> <p>Along the way we tried many different ideas, both simple and complex, but I'm going to share just the key points to keep things simple:</p> <h1>Preprocessing</h1> <ul> <li>We went with a 640x640 image size; we tried a couple of other resolutions but didn't get the expected scores, or couldn't go with them for performance or timing reasons.</li> <li>For augmentations we went with a fairly moderate approach.</li> <li>We tried removing the black borders around the images, but it didn't give us better results.</li> </ul> <h1>Models</h1> <p>We tried several architectures but we got CV success mainly on two of them:</p> <ul> <li>NfNet F3</li> <li>ResNet 200d</li> </ul> <h1>Training</h1> <p>After getting a stable CV/LB correlation we tried to get our CV as high as possible. First we trained ImageNet-pretrained models on the training images themselves to test their limits with different loss, scheduler and optimizer combinations.</p> <ul> <li>We found out that the Ranger optimizer converges faster while the SAM optimizer generalizes better. In light of that, we decided to use a kind of distillation: train the model with Ranger first, then fine-tune with the SAM optimizer to find a better minimum (a small SAM sketch is given at the end of this post).</li> </ul> <h1>Pseudo Labeling</h1> <ul> <li>With the findings from the distillation, we decided to train the model on a bigger dataset and transfer that learning back to the main model itself, so we started working on external datasets like NIH. </li> <li>We pseudo-labeled the NIH dataset with our best single models and then retrained these models with the created labels.</li> <li>This method increased our CV considerably, so we kept trying different approaches for increasing the CV with this method.</li> </ul> <h1>Ensemble</h1> <ul> <li>After getting satisfactory results with the CVs, we went with ensembling our best CV models.
</li> <li>The weights were based on the CVs with simple averaging, with the NfNet model taking a huge weight, followed by ResNet200d.</li> </ul> <h1>Post Processing</h1> <ul> <li>We tried several approaches based on the train/test distributions, but to no avail.</li> </ul> <h1>Trusting your CV</h1> <p>In the last hours we were deciding which submission would be the best selection. In the end we went with the highest-CV ensemble and found out it was actually our best private submission.</p> <h1>Side Note About NfNets:</h1> <p>I'd say our NfNet F3 model gave really robust results and helped us survive the shakeup. At some point we almost gave up on that architecture and moved on because of the performance/time trade-off. I want to thank my teammates again for their patience with my obsession with the model and for enduring my endless improvement ideas about it :)</p> <p>I actually shared one of the earlier versions as a public notebook; our private model was a slightly modified/pretrained version with a heavier approach. You can check the code here:</p> <p><a href="https://www.kaggle.com/datafan07/ranzcr-nfnets-tutorial-single-fold-training" target="_blank">https://www.kaggle.com/datafan07/ranzcr-nfnets-tutorial-single-fold-training</a></p>
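<p>A minimal sketch of the SAM fine-tuning stage mentioned in the Training section. It assumes the widely used community SAM implementation (e.g. the <code>sam.py</code> file from github.com/davda54/sam, copied locally) with its two-step update API; the backbone, loss and learning rate are illustrative, not the team's actual configuration:</p> <pre><code>import torch
import timm
from sam import SAM  # assumed: sam.py copied from https://github.com/davda54/sam

model = timm.create_model("nfnet_f0", pretrained=False, in_chans=1, num_classes=11)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = SAM(model.parameters(), base_optimizer=torch.optim.AdamW, lr=1e-4)

def sam_step(images, targets):
    # first forward/backward pass: perturb the weights towards the locally sharpest direction
    criterion(model(images), targets).backward()
    optimizer.first_step(zero_grad=True)
    # second forward/backward pass at the perturbed weights, then the actual update
    criterion(model(images), targets).backward()
    optimizer.second_step(zero_grad=True)
</code></pre>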
RANZCR CLiP - Catheter and Line Position Challenge
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, thank you.</p> <p>No, we didn't use ONNX or pruning. We put all the models on the GPU, then used one dataloader to iterate over all the test data only once. We maximized the 9 hour limit by squeezing in as many models as possible without timing out.</p> <p>Thank you. Oh, you noticed 😏</p> <p>Congrats to all the winners. Thank you for the great collaboration, my long-time teammates <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> and <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a>. I had a blast once again working with such talented teammates.</p> <h2>TL;DR</h2> <p>A 2-stage segmentation and 2-stage classification pipeline, using pseudo labels in both segmentation and classification.</p> <h2>Dataset definitions</h2> <p>Different subsets of the NIH ChestX dataset (112k images) are illustrated in this Venn diagram. </p> <ul> <li>The official dataset (30k) contains 9k images with tube segmentation ground truth. </li> <li>We also used Dr. Konya's <a href="https://www.kaggle.com/sandorkonya" target="_blank">@sandorkonya</a> trachea bifurcation annotation <a href="https://www.kaggle.com/sandorkonya/5k-trachea-bifurcation-on-chest-xray" target="_blank">dataset</a>, which has 5k images. Thank you Doctor!</li> <li>For pseudo labeling, we identified 28k external images outside the Train set which contain tubes: (1) first, run segmentation model inference on all the 112k images to filter out all the images without tubes; (2) use imagehash to de-duplicate against the 30k images that are already in the train set; (3) link patient IDs to make sure the same patient in the external data and the original data falls into the same fold.<br> <img src="https://i.imgur.com/KJQumVO.png" alt=""></li> </ul> <h2>Pre-processing</h2> <p>For images, we applied <a href="https://www.kaggle.com/ratthachat/aptos-eye-preprocessing-in-diabetic-retinopathy#2.-Try-Ben-Graham's-preprocessing-method." target="_blank">“Ben’s pre-processing”</a> with different parameters, in order to train diverse models.</p> <p>For the official segmentation annotations, we make 2-channel masks by drawing lines representing the tube and big dots indicating the tips of the tubes. See the pictures below.</p> <p>For the trachea bifurcation annotations, we make 1-channel masks by drawing big dots.</p> <h2>Segmentation Stage 1</h2> <p><img src="https://i.imgur.com/xdP22R1.png" alt=""></p> <ul> <li>Model 1: Mask is tube and tips - 2-channel output<ul> <li>Train and validate on the 9k images with tube annotations</li> <li>Pseudo label the 28k + (30k - 9k) data without tube annotations</li> <li>10-model ensemble with a mixture of Unet and Unet++, with backbones B3-B8 and different preprocessing parameters, at image sizes 1024x1024 to 1536x1536.</li></ul></li> <li>Model 2: Trachea bifurcation (TB) - 1-channel output<ul> <li>Train and validate on the 5k images with TB annotations</li> <li>Pseudo label the 28k + (30k - 5k) data without TB annotations</li> <li>A similar ensemble to model 1.
But TBs are easier to segment, so the image sizes are 384x384 to 1024x1024</li></ul></li> </ul> <h2>Segmentation Stage 2</h2> <p><img src="https://i.imgur.com/r4fI8Iz.png" alt=""></p> <ul> <li>Tubes, tips and TB – 3-channel output<ul> <li>Train on 30k + 28k images (with a combination of GT and pseudo labels)</li> <li>Validate tubes and tips on the 9k data</li> <li>Validate TB separately on the 5k data</li> <li>Predict out-of-fold on all 30k + 28k images, to be used by classification</li></ul></li> <li>Stage 2 needs to run in the inference kernel, so there are only 5 models in the ensemble:<ul> <li>Unet++ B3 at 1536</li> <li>Unet B4 at 1536</li> <li>Unet++ B5 at 1024</li> <li>Unet++ B6 at 1024</li> <li>Unet B7 at 1024</li></ul></li> </ul> <p>Locally we trained 5 folds for each model, in order to get an OOF CV score. In inference, only one fold from each model is used.</p> <h2>Classification Stage 1</h2> <ul> <li>Input is 6 channels (3-ch original image + 3-ch predicted masks)</li> <li>Output is 12 classes: the original 11 plus no_ETT, defined as whether all 3 ETT classes are 0</li> <li>Loss is a weighted average of the CE loss for the 4 ETT classes and the BCE loss for the other 8 classes, with the weights being 1:7</li> <li>Train on the 30k data; make pseudo labels on the 28k external data</li> <li>20-model ensemble, a mixture of EfficientNets, ResNets, ResNexts, ViTs at sizes 384 to 512, with various pre-processing parameters</li> <li>CV = <strong>0.97553</strong> with rank ensemble</li> </ul> <h2>Classification Stage 2</h2> <ul> <li>Same input, output, loss as Stage 1</li> <li>Training with the 30k+28k data (combination of GT and pseudo labels)</li> <li>Since Stage 2 models need to go into the inference kernel, the overall model sizes are smaller than in Stage 1</li> <li>31-model ensemble, a mix of EfficientNets, ResNets, SEResNexts, ResNexts, RegNet, Inception, RexNet, DenseNet, ViTs etc. at sizes 384 to 512, with various pre-processing parameters</li> <li>CV = <strong>0.97606</strong> with rank ensemble</li> </ul> <p>We trained 5 folds locally, but only squeezed 67 folds into the inference kernel. </p> <h2>Post-processing</h2> <p>For the AUC metric, it is sometimes better to rank the probabilities of each model before ensembling. For this competition, we found that this is true only for these 5 columns (a small sketch of this rank-averaging is given at the end of this post):</p> <pre><code>'ETT - Abnormal',
'NGT - Borderline',
'NGT - Incompletely Imaged',
'CVC - Normal',
'Swan Ganz Catheter Present'
</code></pre> <p>This boosts the CV score by about 0.00032.</p> <h2>Update</h2> <p>We have released simplified training and inference code:<br> <a href="https://www.kaggle.com/haqishen/ranzcr-1st-place-soluiton-seg-model-small-ver" target="_blank">Segmentation training</a>, <a href="https://www.kaggle.com/haqishen/ranzcr-1st-place-soluiton-cls-model-small-ver" target="_blank">Classification training</a>, <a href="https://www.kaggle.com/haqishen/ranzcr-1st-place-soluiton-inference-small-ver" target="_blank">Inference</a></p> <p>Thanks.</p> <p>I have an NVIDIA DGX Station with V100 GPUs. I think my teammates have an HP Z8G4 Workstation with NVIDIA RTX6000 GPUs and an HP ZBook with an NVIDIA RTX5000 GPU.</p> <p>Thank you.</p> <p>Just a simple average of the soft labels, for both the segmentation ensemble and the classification ensemble. No weighting. No thresholding.</p> <p>Thanks. </p> <p>Q1: We have tube annotations for the 9k images, so we only need to make pseudo labels for the other 28k+30k-9k images.
The goal for stage 1 is to make pseudo labels, to enable stage 2 to train on all 28k+30k data.</p> <p>Then in stage 2, we have labels for all 28k+30k data (among which 9k are GT labels, the rest being pseudo labels). So we train on all of them.</p> <p>Q2. For segmentation: HorizontalFlip, RandomBrightness, ShiftScaleRotate, Cutout<br> For classification: all the above, plus RandomContrast, OpticalDistortion, GridDistortion, HueSaturationValue</p> <p>Thanks. We used soft pseudo labels for both segmentation and classification. They work better than hard ones as they contain more information.</p> <p>Thanks.</p> <p>Yeah, we also do this by hand. The AutoCV is still under development. 😉</p> <p>For the optimizer, we like Adam. For the image size, you can start with a small one like 256 or 384 depending on your hardware, then gradually increase to 512 or even larger after you tune your model on small images.</p> <p>The best learning rate varies somewhat across model architectures, so it needs to be tuned for each model. But you only need to run one fold per model for this. And it doesn't have to be exact. A slightly "off" learning rate for a single model won't affect the ensemble's score by much.</p> <p>You're right. When we load the original image as a 3-channel numpy array, the 3 channels are identical.</p> <p>It's just that when we started experiments, we were using a 3-channel input (without masks); later, after adding the 3-channel mask, we just kept it as 3+3 channels. But I believe 1+3 channels would work equally well, as you suggested.</p> <p>Hi, for this purpose, we just used a single model (Unet++, B5, 1024) trained on the official annotations. </p> <p>Hi, for some reason, Kaggle disabled attachments in the forum.</p> <p>I uploaded the 28k images' IDs to a dataset: <a href="https://www.kaggle.com/boliu0/ranzcr-external-data-id" target="_blank">https://www.kaggle.com/boliu0/ranzcr-external-data-id</a></p>
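<p>A minimal sketch of the rank-before-average post-processing described in the Post-processing section (rank each model's probabilities per column, then average; AUC only depends on the ordering). The column indices are illustrative placeholders for the 5 listed columns:</p> <pre><code>import numpy as np
from scipy.stats import rankdata

RANK_COLUMNS = {0, 2, 3, 7, 10}  # indices of the 5 columns that benefit from ranking (illustrative)

def rank_ensemble(preds):
    """preds: list of (n_samples, n_classes) probability arrays, one per model."""
    out = np.zeros_like(preds[0], dtype=float)
    for c in range(preds[0].shape[1]):
        if c in RANK_COLUMNS:
            # convert each model's column to ranks before averaging
            out[:, c] = np.mean([rankdata(p[:, c]) for p in preds], axis=0)
        else:
            out[:, c] = np.mean([p[:, c] for p in preds], axis=0)
    return out
</code></pre>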