Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
77th -> 58th place brief write up
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I started this competition from <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> 's great notebook.<br>
ref: <a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn</a></p>
<p>I tuned the models, epochs, and data augmentation, then checked CV scores. Finally I chose 8 submissions based on CV scores and simply took the average of them. Averaging predictions from quite different models didn't work for me, so I gave up on using other types of NN models.</p>
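<p>(For illustration, here is a minimal sketch of that blending step, assuming 8 submission CSVs in the competition format with hypothetical file names; it is not the author's exact code.)</p>
<pre><code>import pandas as pd

# Hypothetical file names for the 8 chosen submissions.
files = [f"submission_{i}.csv" for i in range(8)]
subs = [pd.read_csv(f) for f in files]

# Average every prediction column; the id column is assumed to be 'id_seqpos'.
pred_cols = [c for c in subs[0].columns if c != "id_seqpos"]
blend = subs[0].copy()
blend[pred_cols] = sum(s[pred_cols].values for s in subs) / len(subs)
blend.to_csv("blend.csv", index=False)
</code></pre>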
<p>I had a scaled-down submission which would have scored 1st (if chosen), but I couldn't select it because I had no solid theory behind it.<br>
ref: <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189182" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189182</a></p>
<p>Anyway, I enjoyed this short-term competition and I hope our work can contribute to the research. Thank you to all the organizers and participants.</p>
<p>After the private leaderboard was finalized, my rank changed from 77th to 58th. I have published some of my work:</p>
<ul>
<li><a href="https://www.kaggle.com/sishihara/covid-ae-pretrain-gnn-attn-cnn" target="_blank">Model and epoch tuning</a></li>
<li>Data augmentation (<a href="https://www.kaggle.com/sishihara/fold-0-of-data-aug-of-covid-ae-pretrain" target="_blank">Train</a>, <a href="https://www.kaggle.com/sishihara/inference-of-data-aug43k-of-covid-ae" target="_blank">Predict</a>)<ul>
<li>Note: This version was not used for the final submission. Instead I used a better one which was trained and predicted on GCP.</li></ul></li>
</ul>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
79th Place Solution: Distance Attention with Dijkstra's Algorithm
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Basic Idea</h1>
<p>We represented the RNA structure as a graph and computed shortest-distance matrices using Dijkstra's algorithm. We got an additional boost by using the second-best structure and its corresponding loop type as features. Our best model was distance attention + multi-head attention + an LSTM/GRU head, with 7 types of distance matrices and some engineered features. We started with the distance attention notebook by <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/183865" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/183865</a>.</p>
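<p>(As a rough illustration of the idea, and not the team's exact code, here is a minimal sketch that builds the RNA graph from a dot-bracket <code>structure</code> string, backbone edges between consecutive bases plus base-pair edges, and computes the all-pairs shortest-distance matrix with SciPy's Dijkstra implementation.)</p>
<pre><code>import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def shortest_distance_matrix(structure):
    """All-pairs shortest path lengths over the RNA graph: backbone edges
    between consecutive bases plus base-pair edges from the dot-bracket string."""
    n = len(structure)
    adj = np.zeros((n, n), dtype=np.float32)
    for i in range(n - 1):               # backbone edges
        adj[i, i + 1] = adj[i + 1, i] = 1
    stack = []
    for i, ch in enumerate(structure):   # base-pair edges from matching brackets
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            j = stack.pop()
            adj[i, j] = adj[j, i] = 1
    return dijkstra(csr_matrix(adj), directed=False)

dist = shortest_distance_matrix(".((...))")
# Example of the aggregation features mentioned below:
neighbours_within_3 = (dist <= 3).sum(axis=1) - 1   # exclude the node itself
</code></pre>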
<h1>Features</h1>
<h2>Sequential Features</h2>
<ul>
<li>Sequence, Structure, Predicted loop type in one hot encoded form</li>
<li>BPPS sum, BPPS Max, BPPS NB</li>
<li>Dijkstra based aggregation features</li>
</ul>
<pre><code>No. of neighbours within distance 3 and within distance 7
No. of each loop type within distances 3 & 7
</code></pre>
<ul>
<li>Alternate Structure, Alternate loop type in one hot encoded form</li>
<li>Log Gamma for alternate structure</li>
</ul>
<h2>Distance Matrices for distance attention</h2>
<ul>
<li>BPPS Array</li>
<li>Best Structure Matrix (if bond exists b/w i & j: then 1 else 0) (Given Structure)</li>
<li>Dijkstra matrix for the best structure</li>
<li>Distance Matrices (Power 1, 2)</li>
<li>Alternate Structure Matrix </li>
</ul>
<h1>Sample Rejection</h1>
<ul>
<li>Samples with SN_filter 0 were rejected for validation</li>
<li>Samples with Signal/Noise < 1 were rejected for training</li>
<li>Weights: log(1.02+Signal/Noise)</li>
</ul>
<h1>Validation Strategy</h1>
<p>Fold i -> Rank(Signal/Noise) % 5 == i</p>
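<p>(A minimal sketch of this fold assignment, assuming a pandas DataFrame with a <code>signal_to_noise</code> column; the toy frame below is hypothetical.)</p>
<pre><code>import pandas as pd

# Toy frame; in practice this is the competition train DataFrame.
train = pd.DataFrame({"signal_to_noise": [0.5, 4.1, 1.3, 9.8, 2.2, 0.9]})
# Fold i = samples whose signal/noise rank is congruent to i modulo 5.
train["fold"] = train["signal_to_noise"].rank(method="first").astype(int) % 5
</code></pre>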
<h1>Training</h1>
<p>Training on all targets; learning rate scheduling & checkpointing based on the 3 scored targets.</p>
<h1>Generating Model Variants</h1>
<ul>
<li>Randomly dropped 1-2 features in each model, and the architecture (mostly the head) was slightly varied. Heads used: LSTM, GRU, WaveNet, and combinations of them.</li>
</ul>
<h1>Models</h1>
<ul>
<li>Distance Attention + Multihead Attention + LSTM/GRU/Wavenet head</li>
<li>LSTM, GRU with all sequential features<ul>
<li>GCN with all sequential features and 2 matrices</li></ul></li>
</ul>
<h1>Ensemble</h1>
<ul>
<li>Weighted Blending</li>
<li>Weights optimised on OOF </li>
<li>First All </li>
</ul>
<h1>Private scores of Individual Model Types</h1>
<ul>
<li>Distance Attention (0.40787)</li>
<li>LSTM, GRU Models (0.41723)</li>
<li>GCN (0.40956)</li>
</ul>
<h1>OOFs of Individual Model Types</h1>
<ul>
<li>Distance Attention (0.1962)</li>
<li>LSTM, GRU Models (0.208)</li>
<li>GCN (0.209)</li>
</ul>
<h1>Comment</h1>
<p>We had some solutions in the gold range, but this wasn't reflected in CV. Even a simple mean of our distance attention and GCN models passes the gold barrier, but when we compare CV, there is a great difference between the scores of Distance Attention & GNN!</p>
<h1>Selected Submission</h1>
<ol>
<li>Only Distance Attention</li>
<li>Ensemble of all 3 types of models </li>
</ol>
<h1>Mentions (My Teammates)</h1>
<p><a href="https://www.kaggle.com/prateekagnihotri" target="_blank">@prateekagnihotri</a> <br>
<a href="https://www.kaggle.com/akshaypatel05" target="_blank">@akshaypatel05</a></p>
<h1>Edit</h1>
<p>With the new private scores revealed, the scores of my individual models are now consistent:</p>
<ul>
<li>Distance Attention (0.35298)</li>
<li>LSTM, GRU Models (0.36429)</li>
<li>GCN (0.36120)</li>
</ul>
<p>New Final Score: 0.35173 (Ensemble of all 3)</p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
83rd Place - Feature Eng & Key Takeaways
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank the hosts for organizing a great contest that helps us contribute to the fight against Covid. Hopefully the work done here will help accelerate vaccine research!</p>
<p>This post is to show some key findings and discuss the feature engineering that we did for our final model. I hope that people in the field can use this for various other open issues:</p>
<ol>
<li>Our Graph Neural Net ensemble gave us our best result; the ensemble that included an RNN performed worse (the RNN-included ensemble was not scaled, which is probably why)</li>
<li>The WaveNet model gave the worst individual performance, and however much I tried feature engineering and hyperparameter tuning, it did not help</li>
<li>We used Vienna RNA and its Perl scripts to generate additional features. In some cases we got worse scores on the public LB, but now I can see that adding them did help on the private LB. <a href="https://www.kaggle.com/nasirkhalid24/61st-pos-feature-engineering-entropy-mtn-plot" target="_blank">I share my notebook here</a>, which shows the various features we extracted, including entropy, mountain plot, etc. (a small sketch of two of them follows this list).</li>
</ol>
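<p>As a rough illustration (my own sketch, not the Vienna RNA Perl scripts), two of these features can be approximated directly from the provided data: the mountain plot from the dot-bracket structure, and a per-position Shannon entropy from the bpps matrix.</p>
<pre><code>import numpy as np

def mountain_plot(structure):
    """Mountain representation: cumulative count of open base pairs
    at each position of a dot-bracket string."""
    steps = [1 if c == "(" else -1 if c == ")" else 0 for c in structure]
    return np.cumsum(steps)

def positional_entropy(bpps, eps=1e-12):
    """Shannon entropy per position from the base-pairing probability matrix,
    including the probability of being unpaired."""
    unpaired = np.clip(1.0 - bpps.sum(axis=1), eps, 1.0)
    probs = np.concatenate([bpps, unpaired[:, None]], axis=1)
    return -(probs * np.log(probs + eps)).sum(axis=1)
</code></pre>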
<p>I will leave it to my teammate <a href="https://www.kaggle.com/arpitrf" target="_blank">@arpitrf</a> to discuss his excellent work on our winning submission; he may make a post soon.</p>
<p>Will update this post if I discover something else.</p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
89-th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, Kagglers!</p>
<p>First of all, I would like to say <strong>thanks</strong> to <a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a> and <a href="https://www.kaggle.com/lzcabc123456" target="_blank">@lzcabc123456</a>. It was a dream team. None of us had any domain knowledge, but we managed to do some domain research (mostly <a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a>) and apply our deep/machine learning skills to such an essential task nowadays. Also, great thanks to the Kaggle team and the organizers for such an important competition.</p>
<p>So let's get to our modelling and inference insights!</p>
<h2>Data Preparation</h2>
<p>We had several approaches to data preparation:</p>
<ul>
<li>Some of our models include filtered data (<code>signal_to_noise > 1</code>)</li>
<li>Some include all train data</li>
<li>We also tried adding augmented data from this <a href="https://www.kaggle.com/its7171/how-to-generate-augmentation-data" target="_blank">notebook</a>, but mostly it boosted neither the CV nor the public LB score</li>
</ul>
<h2>Feature extraction</h2>
<p>At first we used only <code>sequence</code>, <code>structure</code>, and <code>predicted_loop_type</code>.</p>
<p>Then we did some feature engineering. Features that worked well (a small sketch of how they can be computed follows this list):</p>
<ul>
<li><code>structure_bpps</code> - the bpps row from the paired element; the paired element is taken from the structure</li>
<li><code>distance2pair</code> - distance to the paired element, normalized by sequence length</li>
<li><code>bpps_sum</code> - sum of bpps over columns for each row</li>
<li><code>bpps_max</code> - max of bpps over columns for each row</li>
</ul>
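<p>(Here is a minimal sketch, under my own interpretation rather than the team's code, of how these features could be computed from the dot-bracket structure and the bpps matrix; in particular, <code>structure_bpps</code> is interpreted here as the bpps entry between a position and its paired partner.)</p>
<pre><code>import numpy as np

def pair_index(structure):
    """Index of the paired base for each position; -1 if unpaired."""
    pair = np.full(len(structure), -1)
    stack = []
    for i, ch in enumerate(structure):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            j = stack.pop()
            pair[i], pair[j] = j, i
    return pair

def pairing_features(structure, bpps):
    n = len(structure)
    pair = pair_index(structure)
    paired = pair >= 0
    safe_pair = np.where(paired, pair, 0)
    # structure_bpps (interpretation): bpps value between i and its paired base.
    structure_bpps = np.where(paired, bpps[np.arange(n), safe_pair], 0.0)
    # distance2pair: distance to the paired element, normalised by sequence length.
    distance2pair = np.where(paired, np.abs(pair - np.arange(n)) / n, 0.0)
    bpps_sum = bpps.sum(axis=1)   # sum of bpps over columns for each row
    bpps_max = bpps.max(axis=1)   # max of bpps over columns for each row
    return structure_bpps, distance2pair, bpps_sum, bpps_max
</code></pre>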
<h2>Validation Scheme</h2>
<p>We used two validation schemes:</p>
<ol>
<li>First - we computed the median reactivity and made a StratifiedKFold - <a href="https://www.kaggle.com/vladimirsydor/basic-visualization-and-stratification-strategy" target="_blank">https://www.kaggle.com/vladimirsydor/basic-visualization-and-stratification-strategy</a></li>
<li>Second - we stratified by median reactivity and by the errors in the target</li>
</ol>
<p>We used both of them; they correlated well with each other and with the public LB.</p>
<h2>Model</h2>
<p>We mostly used a GRU-based model:</p>
<p>Input, Features -> Embedding, Feature Prenet -> Shared Prenet -> CustomBPPSAttention -> SeqEncoder (taken from this <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">notebook</a>) -> 2-4 GRU blocks -> Linear multilayer classifier</p>
<p>GRU block - GRU -> CustomBPPSAttention -> Linear -> BatchNorm -> Mish -> Dropout</p>
<p>Linear multilayer classifier - (Linear -> BatchNorm -> Mish -> Dropout) x2 -> Linear</p>
<p>CustomBPPSAttention - use the <code>bpp</code> matrix as an attention matrix over the sequence and concatenate the attention result to the input sequence (a sketch follows below)</p>
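<p>(A minimal PyTorch sketch of what such a block might look like under the description above; the row-normalisation of the bpp matrix is my assumption, and the team's implementation may differ.)</p>
<pre><code>import torch
import torch.nn as nn

class CustomBPPSAttention(nn.Module):
    """Uses the (B, L, L) bpp matrix as a fixed attention matrix over the
    sequence and concatenates the attended result to the input features."""
    def forward(self, x, bpps):
        # x: (B, L, D) sequence features, bpps: (B, L, L) pairing probabilities.
        attn = bpps / (bpps.sum(dim=-1, keepdim=True) + 1e-8)  # row-normalise (assumption)
        attended = torch.bmm(attn, x)                          # (B, L, D)
        return torch.cat([x, attended], dim=-1)                # (B, L, 2D)

# Usage: out = CustomBPPSAttention()(features, bpps)  # doubles the feature dim
</code></pre>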
<p>We also tried using ResNets on the bpps matrices, or on stacked [bpps, structure, distance] matrices. It did not boost the score, but it gave us some different models for the final ensemble.</p>
<p>We also tried some recurrent decoders based on GRUs. They did not boost the score either, but they added diversity to the ensemble.</p>
<p>One more good model used SeqEncoder with attention at the end.</p>
<p>All in all:</p>
<ul>
<li>Replacing ReLU with Mish boosted performance </li>
<li>Adding BatchNorms boosted performance </li>
<li>Adding prenets and Linear layers before GRUs boosted performance </li>
<li>Adding SeqEncoder boosted performance</li>
</ul>
<h2>Training Scheme</h2>
<ul>
<li>We used Ranger (RAdam + Lookahead) and it greatly outperformed Adam</li>
<li>Tuning ReduceLROnPlateau also boosted the score</li>
<li>Smaller batch sizes (8, 16, 32) performed better than bigger ones</li>
<li>We also used denoising pretraining in some of our models, as proposed in this <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">notebook</a></li>
<li>We used crop augmentation - we fed a random part of the input sequence and predicted the corresponding part of the target sequence. This performed worse on CV and the public LB but boosted the private LB score greatly</li>
<li>We also tried adding white noise to the target, which also made the models more stable</li>
</ul>
<h2>Choose Final Blend</h2>
<p>We ended up with models from each teammate:</p>
<ul>
<li><a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a> gave 59 models to his blend part!!!!</li>
<li><a href="https://www.kaggle.com/lzcabc123456" target="_blank">@lzcabc123456</a> gave his 2 best models (one was our best on the public LB)</li>
<li>I gave 10 diverse models</li>
</ul>
<p>And we made a BLEND OF BLENDS:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1690820%2Fa70492c1ad41ccba1780c2dc81b2dffd%2FScreenshot%20from%202020-10-07%2012-03-10.png?generation=1602061416100633&alt=media" alt=""></p>
<h2>Training/Inference Example</h2>
<p><a href="https://www.kaggle.com/yakuben/openvaccine-9th-place-single-model" target="_blank">https://www.kaggle.com/yakuben/openvaccine-9th-place-single-model</a></p>
<h2>Framework</h2>
<p>For training and experiment monitoring, <a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a> and I used the PyTorch and Catalyst frameworks. Great thanks to <a href="https://www.kaggle.com/scitator" target="_blank">@scitator</a> and the Catalyst team! And <a href="https://www.kaggle.com/lzcabc123456" target="_blank">@lzcabc123456</a> used TF2 and Keras.</p>
<h2>P.S</h2>
<p>Thanks to all participants and organizers!</p>
<p>And happy Kaggling!</p>
<p>Soon, <a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a> will share a detailed notebook with the training and inference of our model.</p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
8th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Input</h2>
<ul>
<li>sequence, structure, predicted_loop_type</li>
<li>sum, max of bpp (vienna(T=37, 50), nupack(T=37, 50), rnastructure(T=37, 50), rnasoft(T=37), eternafold(T=37), contrafold(T=37)) (<a href="https://github.com/DasLab/arnie" target="_blank">ARNIE</a>)</li>
<li>shannon entropy (<a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/187560" target="_blank">see this thread</a>)</li>
</ul>
<h2>Model</h2>
<ul>
<li>LSTM, GRU (with / without AE pretrain)</li>
<li>GNN + Transformer (with / without AE pretrain)</li>
</ul>
<p><strong>My models were based on these great public notebooks</strong>.</p>
<ul>
<li><a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">OpenVaccine: Simple GRU Model</a></li>
<li><a href="https://www.kaggle.com/symyksr/openvaccine-deepergcn" target="_blank">OpenVaccine - DeeperGCN</a></li>
<li><a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">[covid] AE pretrain + GNN + Attn + CNN</a></li>
<li><a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain?scriptVersionId=43764127" target="_blank">Open Waccine: Pytorch AE pretrain</a></li>
</ul>
<h2>Validation</h2>
<ul>
<li>Train without the SN_filter (sample_weight = 0.5*log(signal_to_noise + 1.1)) and validate only on samples passing the SN_filter (see the sketch after this list).</li>
<li>10-fold split</li>
<li>KFold, StratifiedKFold (binned reactivity), GroupKFold (clusters of sequences)</li>
<li>For all models, I averaged the predictions made by the above 3 validations.</li>
</ul>
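<p>(A minimal sketch of the weighting and validation filter described above, assuming the competition train file and its <code>signal_to_noise</code> / <code>SN_filter</code> columns; paths and details are illustrative.)</p>
<pre><code>import numpy as np
import pandas as pd

train = pd.read_json("train.json", lines=True)   # path assumed

# Train on everything, but down-weight noisy samples
# (negative signal_to_noise values, if any, may need clipping first).
train["sample_weight"] = 0.5 * np.log(train["signal_to_noise"] + 1.1)

# Validate only on clean samples.
valid_mask = train["SN_filter"] == 1
</code></pre>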
<h2>Ensemble</h2>
<ul>
<li>Weights were determined based on public LB scores.</li>
<li>Local CV, Public LB and Private LB were all correlated.</li>
</ul>
<h2>NOTE</h2>
<ul>
<li>Because all of the data in this competition are predicted values except for the sequence itself, I think it was effective to use an ensemble of bpps predictions from various algorithms.</li>
<li>(It would have been nice to regenerate structure and predicted_loop_type from the various bpps predictions as well, but I was short on time.)</li>
<li>At high temperature: since the secondary structure of RNA is temperature-dependent, I think it was effective to use bpps computed at different temperatures T as input.</li>
<li>At high pH: alkaline hydrolysis of RNA can (probably) occur at any position in the sequence, so that prediction itself is difficult.</li>
</ul>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
9th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>1. Introduction</h1>
<p>The COVID-19 RNA vaccine related competition was held on Kaggle from 9/11 to 10/8, for about a month.<br>
This competition was led by Rhiju Das, a professor of biochemistry at Stanford University: <a href="https://profiles.stanford.edu/rhiju-das" target="_blank">https://profiles.stanford.edu/rhiju-das</a>.</p>
<p>There is a global effort to create a COVID-19 vaccine.<br>
There are several vaccine candidates, but it is said that there has been a problem with degradation of their activity.</p>
<p>In this competition, given each RNA vaccine sequence, the goal was to predict the reactivity of each part of the sequence under various environmental conditions (pH, temperature).</p>
<h1>2. Data</h1>
<p>For each sample, the data gives a sequence consisting of A, G, U, C, structure information, and the predicted loop type, as shown below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2Fe4301e4b373db920f23f2de72acae141%2Ffig1.png?generation=1602209693538402&alt=media" alt=""></p>
<h1>3. Methodology</h1>
<p>A brief overview of the solving process is shown in the figure below.<br>
We needed to optimize inputs, encoders, and heads within a month 😊</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F3d0008a25b8d4ad33d29fd23a6510cbf%2Ffig2.png?generation=1602209716128348&alt=media" alt=""></p>
<h1>4. Solutions</h1>
<p>First, I needed to convert the input sequences to numbers before passing them to the encoder's embedding layer. Since sequence, structure, and loop type are different ways of expressing the same RNA, there are two possible encodings.</p>
<p>a. Convert them together<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F8ba7489d06ba62c6ea4d17d97c4b589d%2Ffig3.png?generation=1602209732298186&alt=media" alt=""></p>
<p>b. Convert them individually<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F8e97cd0c828c25b56a961cf640b5a246%2Ffig4.png?generation=1602209750766042&alt=media" alt=""></p>
<p>At first I used (a), but since (b) is another valid way of expressing the same thing, I tested it too.<br>
(b) was better on the CV score, so I chose method (b).</p>
<p>These encoded sequences form a (batch size, sequence length, 3) tensor, and become a (batch size, sequence length, embedding dimension) tensor after the embedding layer.</p>
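<p>(A minimal sketch of method (b), assuming the standard token sets; each of the three channels gets its own integer vocabulary and, later, its own embedding table, giving an (L, 3) array per sample. Method (a) would instead map each (sequence, structure, loop) triplet to a single token.)</p>
<pre><code>import numpy as np

seq_vocab    = {c: i for i, c in enumerate("AGUC")}
struct_vocab = {c: i for i, c in enumerate("().")}
loop_vocab   = {c: i for i, c in enumerate("SMIBHEX")}  # assumed bpRNA loop-type codes

def encode_separately(sequence, structure, loop_type):
    """Method (b): each channel is tokenised with its own vocabulary."""
    return np.stack([
        [seq_vocab[c] for c in sequence],
        [struct_vocab[c] for c in structure],
        [loop_vocab[c] for c in loop_type],
    ], axis=-1)  # shape (L, 3)
</code></pre>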
<h3>my solution</h3>
<p>Besides the sequence, I added other features. Each pair of positions in an RNA sequence has a probability of being linked, and the organizers provided this as an (N, N) matrix called bpps. Since it describes how likely each token is to be paired with the other tokens and form structures, it could be a very important feature, and many Kagglers put this information in the input.<br>
<a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model</a><br>
<a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation</a><br>
<a href="https://www.kaggle.com/hidehisaarai1213/openvaccine-checkout-bpps" target="_blank">https://www.kaggle.com/hidehisaarai1213/openvaccine-checkout-bpps</a></p>
<p>In addition, the statistical features that the sequence has are also included.<br>
As a result, it becomes:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F965e6b6aab8450cfbcb6696d4dcc892b%2Ffig5.png?generation=1602209787702488&alt=media" alt=""></p>
<p>The possibility of using GNNs was discussed from the beginning of the competition, and people started to use them. At first I didn't use a GNN; I adopted one in the middle of the competition.</p>
<p>In my GNN, node and edge features were created, and additional statistical features were also used.<br>
All these baselines come from other Kagglers. Thanks 😊<br>
You can feed this as a multi-input.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2Fb66c0f42373ebe726f06380d82cb9786%2Ffig6.png?generation=1602209813885439&alt=media" alt=""></p>
<p>Now I needed encoders that could handle these embedded features well.<br>
Almost every architecture was proposed in this competition: RNNs (GRU, LSTM), transformers, GNNs, WaveNet, etc.</p>
<p>In particular, a GNN based on self-supervised learning was shared, and this approach gave a very strong score. I think most of the gold-zone teams used it.</p>
<p><a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn</a></p>
<p>I used all of the architectures above. I opened, studied, and customized all of the baselines shared by many Kagglers. These are the model structures I used.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2Fce2b04e3497574d9db6a65d17514b0f3%2Ffig7.png?generation=1602209839803512&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F1fd00b23b4e4bbc89b027ce6325661fc%2Ffig8.png?generation=1602209854399900&alt=media" alt=""></p>
<p>(1) is an approach made to learn sequence data.<br>
(2), (3), (4) are the approaches created to learn graph data.</p>
<h1>5. Discussion</h1>
<p>My major was computational chemistry. I took 3rd place in the molecular property prediction competition last year, and my previous roommate was doing RNA experiments, so I started this competition with a lot of background knowledge.</p>
<p>The big issue of this contest is that the train and test RNA sequence lengths are different, and the data contains noise. The RNA sequence length used for training was 107, while the final ranking was computed on sequences of length 130.</p>
<p>I came up with the following strategy to be robust against this noise and the longer sequences.<br>
a. Make as many models as possible and ensemble them<br>
b. Feature engineering<br>
c. Use GNNs much more<br>
d. Diversify the training data:<br>
Dataset (all data), Dataset (SN_filter == 1), Dataset (signal_to_noise > 1)</p>
<p>I created the (N, N) matrices for fixed attention in various ways. bpps was created using several packages, and the distance matrix was further expanded using Gaussian kernels:</p>
<pre><code>Dss = []  # list of Gaussian-expanded copies of the distance matrix Ds
for gamma in np.arange(10):
    Dss.append(np.exp(-np.power(Ds, 2) * gamma))
Ds = np.stack(Dss, axis=3)
</code></pre>
<p>As a result, the above strategy seems to have worked well.<br>
I was 10th on public, 2nd on the non-filtered data (the 1st private scoring), and 9th on the filtered data (the 2nd private scoring), so my solution achieved good generalization, which is a very important issue in machine learning.</p>
<p>The ensemble is shown below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F0650b58e841d94c50bf10e7b19ecada2%2Ffig9.png?generation=1602209880738874&alt=media" alt=""></p>
<h1>6. Acknowledgement</h1>
<p>First of all, I dedicate my solution to those who suffer from COVID and to the many who struggle to overcome it.</p>
<p>I think it is a solution that many Kagglers made together. Thanks to all Kagglers.</p>
<p>Also, I sincerely thank my beloved wife who provided food and eye medicine to me. (I worked on this competition in my room on weekdays and weekends right after our honeymoon. Bad husband…)</p>
<h1>7. Final words.</h1>
<p>I became the 195th grandmaster in the world by winning a solo gold medal in this competition. (nakama became a grandmaster on the same day as me. Congratulations.)<br>
Yesterday, after the competition, I watched a movie (Thor: Ragnarok) with my wife, and there was a Grandmaster in it too.<br>
I felt very good. I first learned about Kaggle three years ago and have been kaggling ever since, shouting "keep going". Finally GM!!!!!!</p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
BERT Solution ~Part of the heaven and hell team (single 0.35085) ~
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our team had other models and we used a stacking ensemble as the final submission. The other models were not good on private, but the reason why is unknown for now…</p>
<p><a href="https://www.kaggle.com/sishihara" target="_blank">@sishihara</a> Sorry, the notebook above is only the MLM pretraining code.</p>
<p>Thanks!!!<br>
This kind of comment makes my model feel worthwhile.</p>
<p>Thanks.<br>
We used stacking (Ridge and LGBM) in our final prediction, and it did not give a good result.</p>
<p>Congrats to the prize, gold medal, and good-result winners.<br>
And thank you to the Kagglers who shared many great kernels and discussions.</p>
<p>Unfortunately our team experienced heaven and hell (<a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189561)" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189561)</a>, but I think one of my models may be different from the others, and its single-model score is not bad (0.35085).<br>
Therefore I will share a little about my model.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1305588%2Fd16310913457b197fb43b96fb18b87ce%2Fsolution.png?generation=1602117903910861&alt=media" alt=""></p>
<p>I regarded the sequence data as text and used a BERT encoder pretrained with MLM (Masked Language Modeling).</p>
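<p>(A minimal sketch of the MLM-style corruption this implies, assuming integer-encoded sequences and a dedicated mask token id; the linked notebook below contains the actual pretraining code.)</p>
<pre><code>import torch

def mlm_mask(tokens, mask_token_id, vocab_size, mask_prob=0.15):
    """Standard MLM corruption: pick ~15% of positions as prediction targets,
    replace most of them with the mask token and some with random tokens."""
    tokens = tokens.clone()
    labels = torch.full_like(tokens, -100)        # -100 is ignored by CrossEntropyLoss
    selected = torch.rand(tokens.shape) < mask_prob
    labels[selected] = tokens[selected]
    replace_mask = selected & (torch.rand(tokens.shape) < 0.8)
    tokens[replace_mask] = mask_token_id
    random_mask = selected & ~replace_mask & (torch.rand(tokens.shape) < 0.5)
    tokens[random_mask] = torch.randint(vocab_size, tokens.shape)[random_mask]
    return tokens, labels
</code></pre>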
<p>Other information</p>
<ul>
<li>After pretraining, I attached a CNN+LSTM+BppAttention head (this is based on <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s and <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a>'s great kernels and my team member <a href="https://www.kaggle.com/yiemon" target="_blank">@yiemon</a>'s great idea)</li>
<li>Loss: <a href="https://www.kaggle.com/xhlulu" target="_blank">@xhlulu</a>'s loss with custom weights (target error and signal-to-noise) (<a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/183211" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/183211</a>)</li>
<li>Data:<ul>
<li>Augmentation data generated by <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s method (used in BERT pretraining, normal training, and TTA)</li>
<li>No filtering on SN_filter = 1; instead noise was reduced by the weighted loss</li></ul></li>
</ul>
<p>I also shared my MLM notebook.<br>
<a href="https://www.kaggle.com/go5kuramubon/bert-mlm-training-with-augment" target="_blank">https://www.kaggle.com/go5kuramubon/bert-mlm-training-with-augment</a></p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
Brief Summary of 5th Place
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank Kaggle and the host team for organizing this interesting contest.<br>
How great is it that I might be able to contribute to COVID-19 vaccine research by participating in this contest?<br>
I'm very excited to be a part of this meaningful contest.</p>
<h2>models</h2>
<p>I extended the following models.</p>
<ul>
<li>AE model based on <a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">AE pretrain + GNN + Attn + CNN</a>.</li>
<li>GRU+LSTM model mainly based on <a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">OpenVaccine: Simple GRU Model</a>.</li>
</ul>
<p>The final model is a weighted average of variations of these 2 models.</p>
<p>I have already shared some techniques <a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">here</a>.</p>
<ul>
<li>augmentation</li>
<li>sample_weight with all data</li>
<li>some feature extraction</li>
<li>CV strategy with clustering</li>
</ul>
<p>I mainly focused on data preprocessing.</p>
<h2>additional augmentation</h2>
<p>I used EternaFold, Vienna, NUPACK, CONTRAfold, and RNAsoft to extract structure and loop_type.<br>
These backend engines were used to extract additional bpps too.<br>
EternaFold and CONTRAfold worked especially well.</p>
<h2>features</h2>
<p>I extracted the following features.<br>
For detailed information, please refer to the <a href="https://www.kaggle.com/its7171/feature-extraction" target="_blank">source code</a>.</p>
<ul>
<li>bpps_sum</li>
<li>bpps_max</li>
<li>bpps_sum-max</li>
<li>The bpps value of the pair (the strength of the pair)</li>
<li>The type of the pair (CG, GU, AU, or None)</li>
<li>Information on the neighbors of the pair</li>
<li>entropy</li>
</ul>
<h2>Some experiments</h2>
<ul>
<li>130-length sequence training: I added a dummy 39-length segment to the training sequences. I expected this model to improve private scores, but it did not make a significant improvement.</li>
<li>Reversed sequences: I added reversed sequence data as augmentation data (see the sketch below), and it did not make a significant improvement either. But these 2 models helped the ensemble a bit. I tried these 2 models only for the GRU+LSTM model.</li>
</ul>
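<p>(A minimal sketch of the reversed-sequence augmentation, assuming string inputs; note that reversing the dot-bracket structure also requires swapping opening and closing brackets, and the target columns would need the corresponding treatment.)</p>
<pre><code>def reverse_sample(sequence, structure, loop_type):
    """Reverse an RNA sample; brackets must be swapped so that the
    reversed dot-bracket string stays valid."""
    swap = {"(": ")", ")": "("}
    rev_structure = "".join(swap.get(c, c) for c in reversed(structure))
    return sequence[::-1], rev_structure, loop_type[::-1]

print(reverse_sample("GGAAACC", "((...))", "SSHHHSS"))
# -> ('CCAAAGG', '((...))', 'SSHHHSS')
</code></pre>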
<h2>Results</h2>
<table>
<thead>
<tr>
<th>model</th>
<th>private</th>
<th>public</th>
</tr>
</thead>
<tbody>
<tr>
<td>ensemble of AE pretrain + GNN + Attn + CNN</td>
<td>0.34799</td>
<td>0.23260</td>
</tr>
<tr>
<td>ensemble of GRU+LSTM</td>
<td>0.35477</td>
<td>0.24222</td>
</tr>
<tr>
<td>ensemble of all models</td>
<td>0.34471</td>
<td>0.23025</td>
</tr>
</tbody>
</table>
<p>I was able to get 5th place with private score of 0.3447.<br>
Needless to say, I couldn't have done this without the input from the Kaggle community.<br>
I'd like to thank the Kaggle community, especially @xhlulu, @mrkmakr and @hengck23!!!</p>
<p>Thank you,<br>
About 60 models were split into 8 groups by their CV scores and characteristics.<br>
The 8 group weights were then decided by CV score.<br>
It is a very rough and heuristic way, but I thought it was enough considering this data (small data size and the difference between train and private test).</p>
<p>Thank you,<br>
I'm expecting next re-scoring and moving up :)</p>
<p>Thank you Allohvk!<br>
Yes, it helped for ensemble and I could not get gold medal without it.</p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
Congrats, and my 20 min lottery tickets (154th place/bronze medal)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The original idea for this topic is from <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324#latest-552728" target="_blank">https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324#latest-552728</a>, and I have another discussion topic about the shake-up: <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189124" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189124</a><br>
I'll show the whole process behind my final result.</p>
<p>My previous model was at 0.24120 public LB and 0.41489 private LB, in 370th place.<br>
This 0.24120 model was forked from the notebook <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain</a>.<br>
Reading through that kernel took me several days. Then I re-ran the model with some changed parameters and got 3 results from the same model with different parameters. Averaging these 3 models gave a 0.24120 public LB solution.</p>
<p>This notebook shows a lot of things about the test set: <a href="https://www.kaggle.com/scaomath/private-leaderboard-shake-up/data" target="_blank">https://www.kaggle.com/scaomath/private-leaderboard-shake-up/data</a><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1860617%2Ff5b09cf8980ee673be909243f05a3d61%2Fresult.png?generation=1602047821073873&alt=media" alt=""><br>
There are lots of figures showing that all model predictions have the same, correct trend, so you can trust your single model with the best public LB.</p>
<p>Then, using the idea of <a href="https://www.kaggle.com/sushize" target="_blank">@sushize</a>, I multiplied my best single model's predictions by 0.92 and by 1.1, and used <a href="https://www.kaggle.com/scaomath" target="_blank">@scaomath</a>'s notebook code to visualise these two new results. It took me 20 minutes in total. In the end, the 0.92 result gave a better fit, which helped me win a bronze medal.</p>
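<p>(For illustration, a minimal sketch of that scaling step, assuming a submission CSV with an <code>id_seqpos</code> id column; the file name is hypothetical.)</p>
<pre><code>import pandas as pd

sub = pd.read_csv("best_single_model.csv")        # hypothetical file name
pred_cols = [c for c in sub.columns if c != "id_seqpos"]

for factor in (0.92, 1.1):
    scaled = sub.copy()
    scaled[pred_cols] = scaled[pred_cols] * factor
    scaled.to_csv(f"submission_x{factor}.csv", index=False)
</code></pre>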
<p>Thank you all. If you found this process useful, don't forget to upvote the original authors' topics: <a href="https://www.kaggle.com/sushize" target="_blank">@sushize</a> <a href="https://www.kaggle.com/scaomath" target="_blank">@scaomath</a> <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a>. The links are below:<br>
<a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324#latest-552728" target="_blank">https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324#latest-552728</a><br>
<a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain</a><br>
<a href="https://www.kaggle.com/scaomath/private-leaderboard-shake-up/data" target="_blank">https://www.kaggle.com/scaomath/private-leaderboard-shake-up/data</a></p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
First solo gold, 7th place solution: transformers with convolution/deconvolution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the kind words!</p>
<p>Thanks! Will be up soon. Working on it rn</p>
<p>Thanks! Will post it soon</p>
<p>Before going into my solution, I would like to thank many people. First, I would like to thank <a href="https://www.kaggle.com/herbison" target="_blank">@herbison</a> for helping me clear my name when I was banned (wrongly) in the recent major compliance action mainly against angtk.ai. Then the kaggle compliance team <a href="https://www.kaggle.com/inversion" target="_blank">@inversion</a> for looking into my case and unbanning me. This would not have been possible without them. </p>
<p>Also thanks to the people who questioned the LB results, <a href="https://www.kaggle.com/cpmp" target="_blank">@cpmp</a>, <a href="https://www.kaggle.com/group16" target="_blank">@group16</a>, <a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a>, who turned out to be right, and thanks to the Stanford team for looking into the issue and rescoring everything. I had given up yesterday, thinking that maybe deep learning just did not work on this RNA problem.</p>
<p>Many thanks to the Stanford team (@dosoon, <a href="https://www.kaggle.com/rhijudas" target="_blank">@rhijudas</a>, <a href="https://www.kaggle.com/wayment" target="_blank">@wayment</a>, and <a href="https://www.kaggle.com/andywatkins" target="_blank">@andywatkins</a>) for having hosted an amazing and important competition. Special thanks to <a href="https://www.kaggle.com/andywatkins" target="_blank">@andywatkins</a> for explaining RNA structures and helping me understand them better.</p>
<p>Many of you are probably aware of the CHAMPS competition a year ago, where all the top solutions made use of transformers; they were my inspiration going into this competition.</p>
<p>Last but not least, thanks to my PI for allowing me to do this competition as part of my research work.</p>
<p>This competition was very short and I made many mistakes… I made some big improvements last Saturday and had to go into ensembling in a hurry, so with more time I could probably have come up with a better score. But this is probably the same for everyone in the competition, and I am happy with the final results either way.</p>
<h1>Core ideas</h1>
<p>I would say my solution is the simplest manifestation of my basic understanding of RNA and deep learning. I did not use any tricks and just focused on architectural design. There is no feature engineering, and minimal post or pre processing. Everything was executed simply and directly. Here I will talk about the main ideas, and let me know if something is not clearly explained. </p>
<ol>
<li>Self-attention to learn global dependencies</li>
<li>Convolutions to provide local connectivity</li>
<li>RNA should be treated as molecules (covalently and locally bonded nucleotides with the chance to form non-covalent bonds) instead of sequences (this means there should be no positional encoding as was present in bert and the original transformer)</li>
</ol>
<h1>Core architecture: ConvTransformerEncoder</h1>
<p>Most people know that transformers do not work on this problem if you directly input single-nucleotide encodings into the transformer encoder. What works instead is to aggregate single nucleotides locally using 1D convolutions before self-attention, rather than using self-attention to obtain a nucleotide-to-nucleotide attention map. You can say this is a region-to-region attention mapping (and hence no padding).</p>
<p>Early in the competition, I thought that adding the BPP matrix into the attention function as a bias must improve the results. This makes sense since I think that secondary structures stabilize RNA. I used the same idea (mathematically) used by CHAMPS top solution <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106575" target="_blank">https://www.kaggle.com/c/champs-scalar-coupling/discussion/106575</a>. This idea should be attributed to <a href="https://www.kaggle.com/zkolter" target="_blank">@zkolter</a>.</p>
<p><img src="https://raw.githubusercontent.com/Shujun-He/images/main/modified_self_attention.png" alt=""></p>
<p>Adding the BPP+distance matrix turned out to lead to significantly better results. Since I use no padding in all 1D convolution operations, the BPP matrix should also be processed into a region to region mapping, and 2D convolution using the same kernel size without padding results in direct correspondence to nucleotide region to region mapping on a 2D level and can be directly added to the attention function QK^T before softmax.</p>
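<p>(A minimal sketch of this kind of biased attention, not the exact ConvTransformerEncoder code: a processed BPP/distance matrix is added to the attention logits QK^T before the softmax.)</p>
<pre><code>import torch
import torch.nn.functional as F

def biased_self_attention(q, k, v, bias):
    """q, k, v: (B, H, L, d); bias: (B, H, L, L) processed BPP/distance matrix.
    The bias is added to the scaled QK^T logits before the softmax."""
    d = q.size(-1)
    logits = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5 + bias
    attn = F.softmax(logits, dim=-1)
    return torch.matmul(attn, v)
</code></pre>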
<p>Since convolutions without padding lead to reduced-dimensional encodings, this type of convolution + self-attention module cannot be stacked to form a deeper model using residual connections. Therefore, I use the opposite of convolution, namely deconvolution, to retrieve full-dimensional encodings. With residual connections, I now have a module that can theoretically be stacked infinitely to form models of arbitrary depth. This is also done for the BPP matrix, so a deeper transformation can be used to obtain a better bias that is fed into the self-attention function.<br>
<img src="https://raw.githubusercontent.com/Shujun-He/images/main/arch.PNG" alt=""></p>
<p>Aside from the typical hyperparameters for the transformer encoder, here I mainly introduce one more parameter k, which is the kernel size for all conv/deconv operations in the same module. I found that when stacking the ConvTransformerEncoders, the results were always better when the last module before outputting predictions does single nucleotide level self-attention (k=1), before layers where k>1. This makes sense because you would want nucleotides to be looked at by the transformer individually before outputting predictions for each one of them.</p>
<h1>Pretraining</h1>
<p>I used all available sequences to pretrain on randomly mutated or masked (with NULL token) sequence retrieval loss (basically just softmax to retrieve correct nucleotide/structure/loop). I think this resulted in larger variance between single models, and worked better during ensembling. Also, I thought it had to be better for the models to have seen longer sequences than not.</p>
<h1>Ensembling</h1>
<p>Basically just many models of the same core architecture with different depths and kernel sizes. I selected the training set based on signal-to-noise thresholds of [0.25, 0.5, 1]. Some models were trained without positional encoding, and a few without pretraining. During the last hour, I decided to make a submission using only models without positional encoding, which resulted in a shakeup for me (0.34701->0.34550). I also used median averaging (inspired by <a href="https://www.kaggle.com/zkolter" target="_blank">@zkolter</a>'s CHAMPS solution) using code written by <a href="https://www.kaggle.com/rvslight" target="_blank">@rvslight</a>, which came in handy:</p>
<pre><code>import numpy as np

def median_avg(predictions, beta=0.5, debug=False):
    '''
    predictions: 1D array with one prediction per model (length n_models)
    beta: fraction of the sorted predictions to keep; with beta=0.5
          the middle 50% of values are averaged
    '''
    sorted_predictions = np.sort(predictions)
    n_model = len(sorted_predictions)
    mid_point = n_model // 2 + 1
    n_avg = int(n_model * beta)
    if debug:
        print('sorted_list', sorted_predictions)
        print('after_cut', sorted_predictions[mid_point - n_avg // 2 - 1:mid_point + n_avg // 2])
    # Average only the central slice of the sorted predictions.
    to_avg = sorted_predictions[mid_point - n_avg // 2 - 1:mid_point + n_avg // 2]
    return np.mean(to_avg)
</code></pre>
<p>In my final ensemble, I used beta=0.7. This helps with outliers and such.</p>
<h1>Some things that did not work</h1>
<p>I tried to get some 3D distance matrices, but based on my discussion with <a href="https://www.kaggle.com/andywatkins" target="_blank">@andywatkins</a> it seemed impractical and I gave up on it. <a href="https://www.kaggle.com/heng" target="_blank">@heng</a> also tried to get 3D structures; I wonder if this worked for him?</p>
<p>Adding RNNs did not work (or gave minimal benefit with additional computational overhead). Since the positional distance matrix is added into the self-attention function, the self-attention already simulates an RNN, so this makes sense.</p>
<p>Single label models did not work. The labels are highly correlated so this makes sense as well.</p>
<p>Thanks! Congrats on 3rd place! </p>
<p>Thanks! Congrats on winning to you! Looking forward to your solution</p>
<p>Fixed now I think</p>
<p>How to embed images? I'm having some trouble…</p>
<p>Thanks and congrats to you too!</p>
<p>Thanks man! Could not have done it without you!</p>
<p>Thanks! It was unlucky you got second again, but congrats anyway and hope you break the 2nd place curse soon!</p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
From public kernels to silver in a few days (52nd place before and after rescore)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/sishihara" target="_blank">@sishihara</a> You're very much welcome, thanks a lot for the kind comment </p>
<p><a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> Thanks ! Did PL help your results as well ? I did only 2 subs with it so I'm curious if someone has a better performance assessment than mine.</p>
<p>It may not be needed aha</p>
<p>Indeed, guess I'll add temporary to the title </p>
<p>This time it's the other selected submission that gave me the 52nd place, which is quite funny as well </p>
<p>Private LB was always finalised for me ;)</p>
<p>Turns out that statement was true !<br>
But now it looks like you deserve congratulations more than I do :)</p>
<p>Congratz to the winners ! Even though my result is not really good, I felt like sharing anyway.</p>
<p>I started working on this competition 5 days before the end, but managed to get a fairly nice result in a short time. Here's a brief overview of what I did.</p>
<h3>Process</h3>
<p><strong>Day 1 :</strong> Study the top scoring public kernels </p>
<p>I submitted a first blend using a bunch of public notebooks as a benchmark to beat. At the time it was in the low bronze range (0.238~) if I remember correctly. I mostly focused on <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">the PyTorch AE one</a> since I believed it had the most potential.</p>
<p><strong>Day 2 :</strong> Improve the transformer model</p>
<p>With simple modifications, I was able to reach 0.238~ with a single model. Here's what I changed:</p>
<ul>
<li>Get rid of the checkpointing, which in my opinion is not a really good practice</li>
<li>Use a linear scheduling with warmup</li>
<li>Add the augmentations and TTA from <a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">the GRU-LSTM kernel</a></li>
<li>Tweak some parameters, though in the end I think the only thing I changed was the batch size, which I set to 32.</li>
<li>Use the group k-fold from <a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">the GRU-LSTM kernel</a> as a validation strategy</li>
</ul>
<p>Other things were kept as-is, I mostly did code refactoring. </p>
<p><strong>Day 3 :</strong> Build an ensemble</p>
<p>I trained a bunch of transformers with slightly different hyperparameters, to benefit from diversity and have a more robust submission.</p>
<p>I also looked into post-processing, and found that my CV score benefited from some linear transformations such as <code>y = x * a + b</code>. Using <code>a=0.97</code> and <code>b=0.02</code> gave a private boost of <code>0.0014</code> before the rescore; the boost is only <code>0.0004</code> after it.</p>
<p>At the end of the day, I was ~30th public by blending a bunch of transformers.</p>
<p><strong>Day 4 :</strong> Add diversity </p>
<p>I trained an RNN architecture to hopefully push my results a bit further. The gain on the overall ensemble was very small, but I still added an RNN blend to my previous best LB submission (with weighting 0.9 / 0.1).</p>
<p><strong>Day 5 :</strong> Surprise day </p>
<p>I looked into pseudo-labelling, the motivation was to make my models more robust to the longer molecules. It gave me a 0.001 boost on single models, but unfortunately my ensembles did not benefit much from it. During the last two days my public went up by 0.0001 I believe.</p>
<p><strong>Update :</strong> After the re-run, it turns out that my submission which only used models trained with pseudo-labelling was the best one.</p>
<ul>
<li><p>Private : 0.35092</p></li>
<li><p>Public : 0.23679</p>
<p>52nd, the same rank as the old blend got in the previous rescore, which is a funny coincidence. But with a different submission !</p></li>
</ul>
<p>One of my single models trained with PL scored <code>0.35186</code> on the private LB.</p>
<h3>Results</h3>
<p>I dropped 18 places overall, which in the end is not too bad considering that I did not really validate my models. I was in fact saved by the surprise day and the pseudo-labelling.</p>
<p>Many thanks to the people who shared, without whom I wouldn't have been able to get a decent score given that I procrastinated way too much before really working on this interesting challenge.<br>
Especially <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a>, <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>.</p>
<p>Also, I was lurking in the forum for a while, so I was already familiar with the competition; I did not really do everything in 5 days. But my early experiments scored so poorly that I kind of gave up.</p>
<p>If anyone is interested, an example of a trained model is available here: <a href="https://www.kaggle.com/theoviel/open-vaccine-pl" target="_blank">https://www.kaggle.com/theoviel/open-vaccine-pl</a><br>
It's really dirty and I don't give any credits, which I apologize for.</p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
Solution 20
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Psi and I worked on very similar models, Dieter on a different one.</p>
<p>The transformation is always applied. Last lines of the forward method are:</p>
<pre><code>preds = 3 * preds + preds * preds * preds
return preds
</code></pre>
<p>Great thanks to Kaggle and Stanford University for this challenge. It is great to see the Kaggle community help fight COVID. I also thank Psi (<a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>) and Dieter (<a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>) for joining me for the last week despite being exhausted after their GRL win. Moreover, they had to write a paper about it during that week. Last, I'd like to thank whatever god granted the extra day. Without it we would be around the 140th rank.</p>
<p>Before describing our solution, let me answer this: no, I did not reuse the distance attention that I shared early in this competition. The reason is that a node can have at most 3 edges: its preceding and following nodes in the sequence, and its paired node. A generic attention looked like overkill to me. After reading several top teams' write-ups and seeing that almost all of them reused my distance attention, either directly or via public notebooks, I coded it and compared it with our best. It is slightly worse, by 0.001-0.002 depending on variants. But it would probably have helped in an ensemble.</p>
<p>That said, the provided structure is only the most probable one given a bpps matrix, and using the bpps probabilities should help. Dieter had been working on it and got a first model about 2 hours before the end. We blended it with our best and got some boost. Psi and I worked on variants of the model I had when we teamed up, and our best model is the result of that collaboration. That's a simplification of course; we constantly exchanged ideas and information, and everybody contributed to all pieces of the solution.</p>
<p>Let me now describe our solution.</p>
<p><strong>Data</strong></p>
<p>We did not use extra sequences beyond the competition data. Like everyone, we created a sequence of nodes from this data, with these features for the i-th node:</p>
<ul>
<li>one hot encoding of sequence char, </li>
<li>one hot encoding of structure char</li>
<li>one hot encoding of loop prediction char. </li>
<li>a binary which is one if the node is paired. </li>
<li>an index which is the index of the paired node if the node is paired, and i otherwise.</li>
<li>pp and entropy computed with <a href="https://www.tbi.univie.ac.at/RNA/ViennaRNA/doc/html/examples_python.html" target="_blank">RNAlib</a> </li>
<li>unpaired probability (1 - sum bpps along one dimension)</li>
</ul>
<p><strong>Model</strong></p>
<p>Overall structure is a linear layer to create embeddings of size 512 from the node features, then 3 attention/convolution layers, then a head to output 5 targets.</p>
<p>The attention/convolution layers are similar to the transformer encoder structure: an attention layer followed by a convolution block with a skip connection.</p>
<p>The attention layer is a paired attention layer followed by a bidirectional GRU. Sure, a GRU is not attention, but here it plays the same role as an attention layer where each node attends to its two neighbors in the sequence.</p>
<p>The paired attention is where we could have used my NFL attention as discussed above. Instead we just concatenate the paired node embedding. To do so we permute the node sequence using the paired index, then we mask it with the paired binary. We then feed the concatenation to the GRU.</p>
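<p>(A minimal PyTorch sketch of that gather-and-concatenate step, assuming a (B, L) long tensor of paired indices and a (B, L) binary paired mask; it is an illustration, not the exact model code.)</p>
<pre><code>import torch

def concat_paired(x, pair_idx, paired):
    """x: (B, L, D) node embeddings; pair_idx: (B, L) index of the paired node
    (arbitrary where unpaired); paired: (B, L) 1.0 if paired else 0.0.
    Gathers the paired node's embedding, zeroes it for unpaired nodes,
    and concatenates it to the node's own embedding."""
    d = x.size(-1)
    gathered = torch.gather(x, 1, pair_idx.unsqueeze(-1).expand(-1, -1, d))
    gathered = gathered * paired.unsqueeze(-1)
    return torch.cat([x, gathered], dim=-1)   # (B, L, 2D)
</code></pre>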
<p>The convolution layer was designed after the EfficientNet convolution block.</p>
<p>With the above we got a public LB of 0.241 by bagging the model a large number of times.</p>
<p>On the last day, looking for inspiration, we looked at public kernels and decided to add convolutions with larger kernel sizes as in <a href="https://www.kaggle.com/thebigd8ta/open-vaccine-pytorch-v" target="_blank">this notebook</a>. I don't know who the original author of this code is, but I want to credit and thank him/her a lot. By stacking our conv block the same way, with larger and larger kernel sizes, we moved our public LB to 0.23646, private 0.34910, which would be 30th rank.</p>
<p>At one point we trained variants of the model by replacing one or several GRUs with an LSTM. Given we have three GRU layers, this can generate 8 variants. Bagging these 8 variants was our best sub until the last day. Unfortunately we did not have time to train the same 8 variants of our best model. I expect it would have given us a 0.001 boost.</p>
<p><strong>Training</strong></p>
<p>We weighted samples by their signal to noise clipped between 0 and 10. Without clipping this was unstable and led to nan losses from time to time. When bagging a large number of runs we also had nans. Post-competition experiments showed that clipping to (0.03, 10) was much more stable numerically.</p>
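<p>A minimal sketch of that weighting (the column name follows the competition data; the (0.03, 10) clip is the more stable variant mentioned above):</p>
<pre><code>import numpy as np

# per-sequence training weight derived from signal_to_noise
sample_weight = np.clip(train_df['signal_to_noise'].values, 0.03, 10)
</code></pre>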
<p>We tried pseudo labeling. It improved our CV but degraded public LB a bit, which stopped us. It is a pity, as the PL models had a noticeably better private score. We tried both leaky PL, where you use all fold models to predict on test then use that as pseudo labels, and non-leaky, fold-based PL, where you only use the model trained on the same data to pseudo label test. Public test data wasn't noisy, hence we used a sample weight of 1. Private test data is noisy, hence we used a much smaller weight like 0.03.</p>
<p>Our CV setup is similar to others': we did clustering on all node features and used GroupKFold to keep each cluster in a single fold. The CV value was extremely well correlated with public LB, except when we used pseudo labeling. It turned out that CV was well calibrated with private LB and that we should have trusted it for our PL models.</p>
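<p>A rough sketch of that CV setup (the clustering algorithm, cluster count and feature matrix shown here are illustrative choices, not necessarily our exact ones):</p>
<pre><code>from sklearn.cluster import KMeans
from sklearn.model_selection import GroupKFold

# seq_features: one row of aggregated node features per sequence (assumed name)
groups = KMeans(n_clusters=200, random_state=0).fit_predict(seq_features)
for fold, (trn_idx, val_idx) in enumerate(GroupKFold(n_splits=5).split(seq_features, groups=groups)):
    pass  # train on trn_idx, validate on val_idx; each cluster stays inside a single fold
</code></pre>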
<p>I used cosine annealing with warmup and restarts, 6 cycles of 10 epochs. Psi used a single cycle of 60 epochs. Results were similar but added diversity. Unfortunately we could only use one scheme for the last day subs because of lack of time.</p>
<p>We used Adam.</p>
<p>And, oh, yes, we all used Pytorch.</p>
<p><strong>Target Transform</strong></p>
<p>The target distribution is very narrow around 0. I looked for a way to make it smoother. In the end we turned the prediction y into 3y + y^3. Here is the reactivity distribution:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F75976%2F540d2f1eaa24872017b1d390623b2cc3%2Frecativity.png?generation=1602163509653828&alt=media" alt=""></p>
<p>Here is what the distribution looks like before the transformation:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F75976%2F97d419da8871c305d7e4ac10e3640e10%2Freact2.png?generation=1602163591182036&alt=media" alt=""></p>
<p>This gave us a boost of around 0.002. I was looking for a function that would spread values near zero. A cubic looked good. To find the coefficient values of the cubic function I used <a href="https://en.wikipedia.org/wiki/Cubic_equation" target="_blank">Cardano's formula</a>.</p>
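<p>One way to read this (a sketch; function names are made up): if the network-side value y is mapped to 3y + y^3, then the inverse mapping is the real root of the depressed cubic y^3 + 3y - t = 0, which is exactly what Cardano's formula gives:</p>
<pre><code>import numpy as np

def spread(y):
    # network-space value -> target-space value
    return 3 * y + y ** 3

def unspread(t):
    # target-space value -> network-space value: real root of y^3 + 3y - t = 0 (Cardano)
    d = np.sqrt(t ** 2 / 4 + 1.0)
    return np.cbrt(t / 2 + d) + np.cbrt(t / 2 - d)

# sanity check: spread(unspread(4.0)) == 4.0 (up to floating point)
</code></pre>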
<p><strong>Gamble</strong></p>
<p>We decided to submit a bagged version of our best model blended with Dieter's model. For the second sub we tried a little gamble: we applied an upscaling coefficient. The gamble backfired badly when the private LB was revealed, as downscaling was the way to go. Fortunately, once private test scoring was fixed, our gamble bought us 10 ranks.</p>
<p><strong>What did not work</strong></p>
<p>A lot of things. I may provide the list here later, but it was quite discouraging. The main thing is that pure GNNs or Transformers didn't work. What worked is a GNN-like model with a GRU backbone. </p>
<p><strong>What's best</strong></p>
<p>I used Hardswish activation, Dieter used PReLU, and Psi used ReLU. I used LayerNorm and Psi used BatchNorm. We used different schedulers. We all claimed ours was best for us. I guess it is a self-fulfilling prophecy: once you settle on things like activation functions, norms, etc., then you tune for them, and as a result switching to other ways degrades your tuned model.</p>
<p>Thanks for confirming my hunch.</p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
[38th solution]My attempt at simulating private leaderboard
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>UPDATE after regrading: it works! based on my submissions scores.</p>
<p><a href="https://www.kaggle.com/scaomath/covid-19-openvac-private-lb-simulation-baseline" target="_blank">https://www.kaggle.com/scaomath/covid-19-openvac-private-lb-simulation-baseline</a></p>
<p>Apparently it does not work because seq_len matters; using public to simulate private is fruitless. 😂</p>
|
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
|
[Updated with code] 13th place solution - Arnie Ensemble
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks <a href="https://www.kaggle.com/allohvk" target="_blank">@allohvk</a> . It was tough time to be honest but thankfully it went well at the end.</p>
<p>Thanks a lot brother for your kind words.</p>
<p><strong>Update</strong>: you can find our solution code <a href="https://github.com/karimmamer/Predicting-RNA-Vaccine-Degradation-using-Deep-Learning" target="_blank">here</a>.</p>
<p>First of all, congratulations to all winners and many thanks for the organizers. </p>
<p>It was a very confusing day for us after going from 95 public lb to 730 in the first private lb and finally landing on 13th place to win our first gold medal 😁.</p>
<p>The main idea behind our approach is generating bpp matrices from all the libraries included in Arnie, training a model from the output structures of each library and finally creating an ensemble of all the trained models.</p>
<p><strong>Model Architecture</strong></p>
<p>The main architecture consisted of: </p>
<ol>
<li>one 1D conv layer</li>
<li>one edge conv layer</li>
<li>two bi-directional lstm layers</li>
<li>one edge conv layer</li>
<li>one linear layer</li>
</ol>
<p>The edge conv layer processes each pair of connected nucleotides in the secondary structure (zeros added when a nucleotide isn't connected). I think it is similar to what <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a> described in his <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189571" target="_blank">post</a>, but I am adding an instance norm and relu after each 1D conv and edge conv layer separately.</p>
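<p>Roughly, the edge conv layer can be sketched like this (a minimal illustration; dimensions and names are not the exact ones from our code):</p>
<pre><code>import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    # concatenate each nucleotide with its paired partner (zeros when unpaired),
    # then a 1D conv followed by instance norm and ReLU
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.conv = nn.Conv1d(2 * in_dim, out_dim, kernel_size=1)
        self.norm = nn.InstanceNorm1d(out_dim)
        self.act = nn.ReLU()

    def forward(self, x, pair_idx, is_paired):
        # x: (batch, seq_len, dim); pair_idx: (batch, seq_len) long; is_paired: (batch, seq_len) float
        idx = pair_idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        partner = torch.gather(x, 1, idx) * is_paired.unsqueeze(-1)
        h = torch.cat([x, partner], dim=-1).transpose(1, 2)   # (batch, 2*dim, seq_len)
        return self.act(self.norm(self.conv(h))).transpose(1, 2)
</code></pre>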
<p><strong>Input Features</strong></p>
<ul>
<li>OHE of nucleotide</li>
<li>OHE of predicted loop type</li>
<li>OHE of secondary structure </li>
<li>bpp sum</li>
</ul>
<p><strong>Training configuration</strong></p>
<p>Each single model is trained as follows:</p>
<ul>
<li>5 folds</li>
<li>structure augmentation with 5 generated structures (log gamma 0 to 4) for each sequence and their corresponding predicted loop type</li>
<li>self supervised learning to predict randomly dropped nucleotides</li>
<li>snapshot ensemble</li>
<li>TTA with all the generated structures</li>
</ul>
<p><strong>Final submission</strong></p>
<p>My final submission is an average ensemble of the main architecture (along with other variations) trained with the outputs of all libraries in Arnie at T=37 and T = 50. </p>
<p>In order to make sure we are not overfitting on the short sequences in training, we didn't use Batchnorm nor distance features with any model and trusted only our CV score.</p>
<p>The post will be updated with a link to the code soon.</p>
<p>At the end, we want to thank <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> for their amazing kernels upon which we built our solution. </p>
<p>All the praises be to Allah.</p>
<p>Thanks a lot for your kind words <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> .</p>
<p>Thanks a lot for your kind words. </p>
<p>Indeed. I think we are the second most shaken team in the competition 😂</p>
<p>Thanks a lot Man-Chun.</p>
<p>Thanks a lot my friend.</p>
<p>Thanks a lot Mahmoud.</p>
<p>Thanks a lot Mohamed.</p>
|
Optiver - Trading at the Close
|
14th Place Solution for the Optiver - Trading at the Close Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver - Trading at the Close
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Updated with the private scores of the submissions we selected</p>
<p>We saved each time bucket from dates 436-480 into CSV files with the format 436_0.csv, 436_10.csv, 436_20.csv, etc.<br>
Then when doing CV, we load them one by one, doing feature generation and postprocessing bucket by bucket. It is very slow though; for us it took about 4 hours to get the CV for each experiment, and hence we couldn't perform it too often.</p>
<p>The CV was 5.8117 with no postprocessing, 5.8080 with zero-sum and 5.8072 with zero-mean (without taking into account refitting), using a simple average. Hold-out: last 45 days. We didn't use the purging fold split from the public notebooks.</p>
<p>This post outlines our approach which got us 14th on the private leaderboard</p>
<p>Firstly, thanks to <a href="https://www.kaggle.com/ravi20076" target="_blank">@ravi20076</a>, <a href="https://www.kaggle.com/mcpenguin" target="_blank">@mcpenguin</a>, <a href="https://www.kaggle.com/madarshbb" target="_blank">@madarshbb</a>, <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> for the collaboration, and Optiver for organizing </p>
<p>Also, thanks to <a href="https://www.kaggle.com/wenxuanxx" target="_blank">@wenxuanxx</a>, <a href="https://www.kaggle.com/zhangyue199" target="_blank">@zhangyue199</a>, <a href="https://www.kaggle.com/lblhandsome" target="_blank">@lblhandsome</a>, for upholding the Kaggle spirit and sharing your feature ideas and notebooks</p>
<h1><strong>Context</strong></h1>
<p>Business context: <a href="https://www.kaggle.com/competitions/optiver-trading-at-the-close" target="_blank">https://www.kaggle.com/competitions/optiver-trading-at-the-close</a><br>
Data context: <a href="https://www.kaggle.com/competitions/optiver-trading-at-the-close/data" target="_blank">https://www.kaggle.com/competitions/optiver-trading-at-the-close/data</a></p>
<h1><strong>Overview of the approach</strong></h1>
<p>To begin with, we fixed bugs in the public notebook, and made a small change to our local validation such that it does not have data leakage when handling global features</p>
<pre><code># before (as in the public notebook)
df = df(periods=)(lambda x: x > (- x < ))

# after (our change)
df = df()(periods=)(lambda x: x > (- x < ))
</code></pre>
<p>(Btw this fix improved CV but worsened public LB score. Idk why LOL)</p>
<p>Also, we wrote our own functions for RSI, MACD and bollinger bands indicators as the public one gave inaccurate results in inference</p>
<p><strong>What worked (impact on CV)</strong></p>
<ul>
<li>Features based on revealed targets. We used lags 1, 2, 3 of the target grouped by stock_id and seconds_in_bucket as features in our models (see the sketch after this list) (-0.005)</li>
<li>Signed representation of imbalance_size (-0.003)</li>
<li>Continuous model training with revealed targets. We refitted our LGB and CatBoost model at fixed intervals (elaborated later) (-0.006)</li>
<li>Performing CV in streaming fashion. To do this, we saved the data each time period in _.csv and delivered them one by one in chronological order when calculating CV. This took much longer but correlated better with the public LB</li>
<li>Zero sum post processing (-0.005) (but we are not sure in private LB so we only chose 1 submission with this)</li>
<li>Global Features (-0.004). We have to re-initialize these values this whenever we want to retrain the model to keep it up to date</li>
<li>Technical indicators RSI, MACD and bollinger bands. We had to rewrite them for good results (-0.002)</li>
</ul>
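<p>A sketch of the revealed-target lag features mentioned above (column names are illustrative; assumes the frame is sorted by date_id):</p>
<pre><code>import pandas as pd

def add_target_lags(df: pd.DataFrame) -> pd.DataFrame:
    # previous days' target for the same stock at the same second of the auction
    for lag in (1, 2, 3):
        df[f'target_lag_{lag}'] = (
            df.groupby(['stock_id', 'seconds_in_bucket'])['target'].shift(lag)
        )
    return df
</code></pre>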
<p><strong>What did not work (impact on CV)</strong></p>
<ul>
<li>Group by date_id and stock_id for rolling features instead of stock_id (+0.003). We didn’t do this in the end</li>
<li>Lagged features shift(x) where x is large. Made CV better but worsened LB</li>
<li>Rolling features over a window x where x is large. Made CV better but worsened LB</li>
<li>Sector features (+0.002)</li>
<li>Neural Networks</li>
<li>Triplet Imbalances (+0.001). We realised this feature gave very unstable values because of precision issues, so we decided to discard it even though it improved the public LB score (worsened from 5.3315 to 5.3327)</li>
<li>Dropping features based on feature importances</li>
<li>Zero mean postprocessing. It makes our ensembles and LB/CV correlation worse for some reason so we didn’t choose this. In fact the ensemble does worse with zero mean (5.333) compared to zero sum (5.3327)</li>
</ul>
<p>Some additional discussion points of the approach are outlined <a href="https://www.kaggle.com/competitions/optiver-trading-at-the-close/discussion/485985" target="_blank">here</a></p>
<h1><strong>Details of the submission</strong></h1>
<p>Overall, after much consideration, the submissions that we chose were these two. Thank you <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> for tuning the parameters and testing it with your huge 91 model script</p>
<p>170 features, no postprocessing. Public 5.3384. Private 5.4457. LGB + CatBoost<br>
Refitting strategy (assuming day X is the first day where currently_scored is True)</p>
<ul>
<li>Day X: Refit LGB, CAT</li>
<li>Day X+6: Refit LGB</li>
<li>Day X+12: Refit CAT</li>
<li>Day X+18: Refit LGB<br>
…</li>
<li>Day X+54: Refit LGB</li>
<li>Day X+60: Refit CAT</li>
</ul>
<p>193 features, zero sum postprocessing. Public 5.3327. Private 5.4458. LGB + CatBoost<br>
Refitting strategy (assuming day X is the first day where currently_scored is True)</p>
<ul>
<li>Day X: Refit LGB, CAT</li>
<li>Day X+9: Refit LGB</li>
<li>Day X+18: Refit CAT</li>
<li>Day X+27: Refit LGB<br>
…</li>
<li>Day X+54: Refit CAT</li>
</ul>
<p>At each time period, we use the latest LGB and latest CAT and submitted the average prediction of the 2 models</p>
<p>We tried to use the stock weights for a while but got submission scoring error when refitting so didn’t proceed 😢</p>
<p>Yeah, we restricted the window size to 450 or 470 dates to prevent this problem. </p>
<p>No, just feature generation and re-train, same as what we do with the original train data</p>
|
Optiver - Trading at the Close
|
15th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver - Trading at the Close
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used an ensemble of GBTs with a single online component. Published my training code <a href="https://github.com/osyuksel/kaggle-optiver-2024" target="_blank">here</a>.</p>
<h1>Details</h1>
<p><strong>Ensemble:</strong></p>
<ul>
<li>3 x LGB, offline</li>
<li>4 x XGB</li>
<li>1 x LGB, online (re-trained every 5 days with a window of 60 days)</li>
</ul>
<p><strong>Training:</strong></p>
<ul>
<li>Used n-fold cross validation by date_id with 5-day gaps between the folds.</li>
<li>Hyperparameter selection: used public notebooks as a reference, optuna + manual tweaks as the final pick.</li>
</ul>
<p><strong>Features that I haven't seen in public notebooks:</strong></p>
<ul>
<li><p>Revealed_target</p></li>
<li><p>Intraday revealed_target using the wap from previous time steps</p></li>
<li><p>Features based on <a href="https://www.kaggle.com/code/lognorm/de-anonymizing-stock-id" target="_blank">revealed stocks</a>:</p>
<ul>
<li>sector_id</li>
<li>embeddings based on historical open/close/high/low data</li></ul></li>
<li><p>Group features</p>
<ul>
<li>performance against the mean</li>
<li>performance against the sector</li></ul></li>
<li><p>At-the-money call price estimate with "expiry" at the end of auction</p></li>
<li><p>Inferred price based on tick size</p></li>
</ul>
<p><strong>Post processing:</strong></p>
<ul>
<li>Replaced "zero-sum" with subtraction by index-weighted mean targets.</li>
</ul>
<p><strong>Other:</strong></p>
<ul>
<li>Dropped stock_id and relied on the embeddings based on historical performance and sector to reduce the effects of delisting and other drastic changes.</li>
</ul>
<p><strong>Stuff that didn't work:</strong></p>
<ul>
<li>Most rolling features I introduced caused a drop in LB, so I omitted those such as:<ul>
<li>Rolling cross-correlation</li>
<li>Rolling z-score</li></ul></li>
<li>Clustering stock_ids based on correlated wap, target</li>
<li>Stock embeddings from neural networks</li>
</ul>
<p>I got the biggest public LB boost from online learning, post-processing and hyperparameter tuning.</p>
<h1>Acknowledgements</h1>
<p>I re-used feature engineering code from:<br>
<a href="https://www.kaggle.com/code/meli19/lgb-kf-baseline" target="_blank">https://www.kaggle.com/code/meli19/lgb-kf-baseline</a><br>
<a href="https://www.kaggle.com/code/zulqarnainali/explained-singel-model-optiver/notebook" target="_blank">https://www.kaggle.com/code/zulqarnainali/explained-singel-model-optiver/notebook</a><br>
<a href="https://www.kaggle.com/code/judith007/lb-5-3405-rapids-gpu-speeds-up-feature-engineer" target="_blank">https://www.kaggle.com/code/judith007/lb-5-3405-rapids-gpu-speeds-up-feature-engineer</a><br>
<a href="https://www.kaggle.com/code/verracodeguacas/fold-cv" target="_blank">https://www.kaggle.com/code/verracodeguacas/fold-cv</a></p>
<p>Thanks to Kaggle and Optiver for the interesting competition. This is the first time I saw a competition through and learned a lot during the process.</p>
|
Optiver - Trading at the Close
|
1st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver - Trading at the Close
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry, I don't plan to share my code.<br>
Yes, it's seq2seq model but not bidirection for avoiding label leak.</p>
<p>Right. ~~~~</p>
<p>The online training parameters was selected by offline validation set.</p>
<p>Just single model. Kfold is more reasonable but also more time-consuming.</p>
<p>based on CV score, 300 is better than 200.<br>
400 will have memory issue.</p>
<p>I love China. 😀</p>
<p>Just zero inited at the beginning of every day.</p>
<p>Thanks for your suggestion. I am not familiar with meta model. What does this mean? Stacking or what else?</p>
<p>Sorry, I don't plan to share my code.😂</p>
<p>I know you mean. It's very time-consuming to train a second-level model and doesn't help much in fact. So i just use weighted sum.</p>
<ol>
<li>This is polars.</li>
<li>polars over == pandas groupby().transform()</li>
</ol>
<ol>
<li>You're right. Will take the last timestamp when inference.</li>
<li>Yes.</li>
</ol>
<p>Thanks to Optiver and Kaggle for hosting this great financial competition. And thanks to the<br>
great notebooks and discussions, I learned a lot. I am so happy to win my second solo win! 😃😀😀</p>
<h1>Overview</h1>
<p>My final model (CV/Private LB of 5.8117/5.4030) was a combination of CatBoost (5.8240/5.4165), GRU (5.8481/5.4259), and Transformer (5.8619/5.4296), with respective weights of 0.5, 0.3, 0.2 searched on the validation set. These models share the same 300 features.</p>
<p>Besides, online learning(OL) and post-processing(PP) also play an important role in my final submission.</p>
<table>
<thead>
<tr>
<th>model name</th>
<th>validation set w/o PP</th>
<th>validation set w/ PP</th>
<th>test set w/o OL w/ PP</th>
<th>test set w/ OL one time w/ PP</th>
<th>test set w/ OL five times w/ PP</th>
</tr>
</thead>
<tbody>
<tr>
<td>CatBoost</td>
<td>5.8287</td>
<td>5.8240</td>
<td>5.4523</td>
<td>5.4291</td>
<td>5.4165</td>
</tr>
<tr>
<td>GRU</td>
<td>5.8519</td>
<td>5.8481</td>
<td>5.4690</td>
<td>5.4368</td>
<td>5.4259</td>
</tr>
<tr>
<td>Transformer</td>
<td>5.8614</td>
<td>5.8619</td>
<td>5.4678</td>
<td>5.4493</td>
<td>5.4296</td>
</tr>
<tr>
<td>GRU + Transformer</td>
<td>5.8233</td>
<td>5.8220</td>
<td>5.4550</td>
<td>5.4252</td>
<td>5.4109</td>
</tr>
<tr>
<td>CatBoost + GRU + Transformer</td>
<td>5.8142</td>
<td>5.8117</td>
<td>5.4438</td>
<td>5.4157</td>
<td>5.4030*(overtime)</td>
</tr>
</tbody>
</table>
<h1>Validation Strategy</h1>
<p>My validation strategy is pretty simple, train on first 400 days and choose last 81 days as my holdout validation set. The CV score aligns with leaderboard score very well which makes me believe that this competition wouldn't shake too much. So I just focus on improving CV in most of time.</p>
<h1>Magic Features</h1>
<p>My models have 300 features in the end. Most of these are commonly used, such as raw prices, mid price, imbalance features, rolling features and historical target features.<br>
I will introduce some features that were really helpful and that other teams haven't shared yet.<br>
1. agg features based on seconds_in_bucket_group</p>
<pre><code>pl(pl() < )()(pl() < )()()(pl.Float32)(),
</code></pre>
<pre><code> *[(pl.col(col).first() / pl.col(col)).([, , ]).cast(pl.Float32).(.format(col)) col base_features],
*[(pl.col(col).rolling_mean(, min_periods=) / pl.col(col)).([, , ]).cast(pl.Float32).(.format(col, )) col base_features]
</code></pre>
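<p>Spelled out, the idea is roughly the following (base features, bucket thresholds and aliases here are illustrative placeholders, not the exact originals):</p>
<pre><code>import polars as pl

base_features = ['wap', 'bid_size', 'ask_size', 'imbalance_size']

df = df.with_columns(
    pl.when(pl.col('seconds_in_bucket') < 300).then(pl.lit(0))
      .when(pl.col('seconds_in_bucket') < 480).then(pl.lit(1))
      .otherwise(pl.lit(2))
      .cast(pl.Float32)
      .alias('seconds_in_bucket_group')
).with_columns(
    [
        (pl.col(c).first() / pl.col(c))
            .over(['date_id', 'stock_id', 'seconds_in_bucket_group'])
            .cast(pl.Float32)
            .alias(f'{c}_group_first_ratio')
        for c in base_features
    ]
    + [
        (pl.col(c).rolling_mean(3, min_periods=1) / pl.col(c))
            .over(['date_id', 'stock_id', 'seconds_in_bucket_group'])
            .cast(pl.Float32)
            .alias(f'{c}_group_rolling_mean_ratio')
        for c in base_features
    ]
)
</code></pre>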
<p>2. rank features grouped by seconds_in_bucket</p>
<pre><code> *[(pl.col(col).mean() / pl.col(col)).over([, ]).cast(pl.Float32).alias(.format(col)) col base_features],
*[(pl.col(col).rank(descending=,='') / ..)...) ],
</code></pre>
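<p>And similarly for the rank features, roughly (again with illustrative names):</p>
<pre><code>import polars as pl

base_features = ['wap', 'bid_size', 'ask_size', 'imbalance_size']

df = df.with_columns(
    [
        (pl.col(c).mean() / pl.col(c))
            .over(['date_id', 'seconds_in_bucket'])
            .cast(pl.Float32)
            .alias(f'{c}_mean_ratio')
        for c in base_features
    ]
    + [
        (pl.col(c).rank(descending=True) / pl.col(c).count())
            .over(['date_id', 'seconds_in_bucket'])
            .cast(pl.Float32)
            .alias(f'{c}_rank')
        for c in base_features
    ]
)
</code></pre>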
<h1>Feature Selection</h1>
<p>Feature selection is important because we have to avoid memory error issue and run as many rounds of online training as possible. <br>
I just choose top 300 features by CatBoost model's feature importance.</p>
<h1>Model</h1>
<ol>
<li>Nothing to say about CatBoost as usual, just simply train and predict.</li>
<li>GRU input tensor's shape is (batch_size, 55 time steps, dense_feature_dim), followed by 4 layers GRU, output tensor's shape is (batch_size, 55 time steps).</li>
<li>Transformer input tensor's shape is (batch_size, 200 stocks, dense_feature_dim), followed by 4 layers transformer encoder layers, output tensor's shape is (batch_size, 200 stocks). A small trick that turns output into zero mean is helpful.</li>
</ol>
<pre><code>out = out - out.mean(dim=1, keepdim=True)  # zero-mean across the 200-stock axis (variable name reconstructed)
</code></pre>
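<p>For reference, a minimal sketch of such a stock-axis Transformer (layer sizes and names are illustrative, not the exact ones):</p>
<pre><code>import torch
import torch.nn as nn

class StockTransformer(nn.Module):
    def __init__(self, feat_dim, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                           # x: (batch, 200 stocks, feat_dim)
        h = self.encoder(self.proj(x))              # attention runs across the stock axis
        out = self.head(h).squeeze(-1)              # (batch, 200)
        return out - out.mean(dim=1, keepdim=True)  # zero-mean trick from above
</code></pre>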
<h1>Online Learning Strategy</h1>
<p>I retrain my model every 12 days, 5 times in total. <br>
I think most teams can only use up to 200 features when training GBDT if online training strategy is adopted. Because it requires double memory consumption when concat historical data with online data. <br>
The data loading trick can greatly increase this. For achieving this, you should save training data one file per day and also loading day by day.</p>
<p><em>data loading trick</em></p>
<pre><code>import h5py
import numpy as np

# NOTE: function name, file path and dataset keys are reconstructed placeholders
def load_features(meta_data, features):
    res = np.zeros((len(meta_data), len(features)), dtype=np.float32)
    all_date_id = sorted(meta_data['date_id'].unique())
    data_index = 0
    for date_id in all_date_id:
        with h5py.File('train_data/{}.h5'.format(date_id), 'r') as f:   # one file per day
            tmp = np.array(f['data'][:], dtype=np.float32)
        res[data_index:data_index + len(tmp), :] = tmp
        data_index += len(tmp)
    return res
</code></pre>
<p>Actually, my best submission is overtime at last update. I just skip online training if total inference time meets certain value.<br>
So there are 4 online training updates in total. I estimate that the best score would be around 5.400 if not overtime.<br>
Anyway, I am really lucky! </p>
<h1>Post Processing</h1>
<p>Subtract weighted mean is better than zero mean since metric already told.</p>
<pre><code># column names are reconstructed placeholders
test_df['weight'] = test_df['stock_id'].map(stock_weights)
test_df['target'] = test_df['target'] - (test_df['target'] * test_df['weight']).sum() / test_df['weight'].sum()
</code></pre>
<h1>What not worked for me</h1>
<ol>
<li>ensemble with 1dCNN or MLP. </li>
<li>multi-days input instead of singe day input when applying GRU models</li>
<li>larger transformer, e.g. deberta </li>
<li>predict target bucket mean by GBDT</li>
</ol>
<p>Thank you all! </p>
<p>You are right!</p>
<p>It means post processing.</p>
<p>No, It's torch.transformerencoder ~</p>
<p>Thanks~</p>
<ol>
<li>Yes</li>
<li>It means Gradient Boosting Decision Tree.</li>
</ol>
<p>It means retrain models with more test data in prediction phase.</p>
<ol>
<li>decoder only</li>
<li>We can have recent data in prediction phase.</li>
<li>I use whole data set to retrain my models.</li>
</ol>
<ol>
<li>We can receive historical target info in inference phase, you can refer to this <a href="https://www.kaggle.com/code/sohier/optiver-2023-basic-submission-demo" target="_blank">demo</a>.</li>
<li>I want transformer to learn info acrross different stocks, GRU to learn sequence info. You can combine these two modules in one model of course.</li>
</ol>
|
Optiver - Trading at the Close
|
275 Place Solution for the Optiver - Trading at the Close (Drop from silver on public to bronze on private)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver - Trading at the Close
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Overview of the Approach</p>
<p>It was an interesting timeseries competition. We tried many approaches; some worked well, while some didn't.</p>
<p>Details of the submission</p>
<p>Our team's solution approach was a blend of the following models:<br>
LGBM<br>
CatBoost<br>
LSTM</p>
<p>Most important feature was:</p>
<pre><code>index_df = df.groupby(['date_id', 'seconds_in_bucket'])['wap'].mean().reset_index()
index_df.rename(columns={'wap': 'index'}, inplace=True)

df = pd.merge(df, index_df, how='left', on=['date_id', 'seconds_in_bucket'])
df['delta'] = (df['wap'] - df['index']) / df['index'] * 10**4
</code></pre>
<p>Our main mistake was the absence of retraining during the online period.<br>
We started the online period in the silver medal zone but eventually ended up with bronze medals.</p>
|
Optiver - Trading at the Close
|
89th Place Solution for the Optiver - Trading at the Close Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver - Trading at the Close
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to express my gratitude to the hosts for organizing such an amazing competition.</p>
<p>Congratulations to all the winners! While my solution may not measure up to theirs, I still hope that sharing my solution can be helpful to someone.</p>
<h1>Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/optiver-trading-at-the-close/overview" target="_blank">https://www.kaggle.com/competitions/optiver-trading-at-the-close/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/optiver-trading-at-the-close/data" target="_blank">https://www.kaggle.com/competitions/optiver-trading-at-the-close/data</a></li>
</ul>
<h1>Overview of the approach</h1>
<p>I built a LightGBM and a Neural Network incorporating the structure of a Transformer Encoder, which I named Stock-wise Transformer. For each model, I took the average of 8 folds and then blended them. Around 400 features were fed into LightGBM, out of which the top 200 features in terms of importance were fed into the NN model. Additionally, no further training was conducted during the private period; in fact, inference time was tight, leaving no time for additional training.</p>
<h1>Details of the submission</h1>
<h2>Cross-Validation</h2>
<p>I conducted an 8-fold split without shuffling along the date_id axis. I chose this number of folds as the amount of data in the private test set was approximately 1/8 of the training data. Additionally, this approach did not employ TimeSeriesSplit, which could lead to strict temporal leaks. However, contrary to intuition, such a splitting method often does not pose a problem in financial time series. In fact, in this competition, there were no significant differences in metrics between the last fold and the others, and the CV mean showed a very strong correlation with the leaderboard.</p>
<h2>Features</h2>
<p>The features I fed into LightGBM were as follows:</p>
<ul>
<li>Bid-ask spread</li>
<li>Imbalances of each features</li>
<li>Aggregates per stock_id for each feature (including the target) from the previous day</li>
<li>Aggregates per seconds_in_bucket from the previous day (including the target)</li>
<li>Aggregates for all stocks from the previous day</li>
<li>Rolling aggregates for each stock within the same day</li>
<li>Aggregates along the column axis for the current timestamp</li>
<li>Day of the week ID</li>
<li>Encoding for stock_id composition weight</li>
<li>Global aggregates across all stock_id and date_id</li>
</ul>
<p>I conducted comprehensive aggregation using methods such as min, max, mean, median, std, skew, and quantile.<br>
After training LightGBM with all these features, I extracted 200 features based on feature importance for each fold in the CV and fed them into the NN model.</p>
<h2>Preprocessing</h2>
<h3>LightGBM</h3>
<p>In this competition, features related to price were already given in a normalized state within each trading day. However, size-related features were not normalized. Since financial time series data often exhibits non-stationarity, I initially assumed that normalizing or taking differences of such features would make them more manageable, as is common practice. Surprisingly, in the case of LightGBM training, leaving these features untouched yielded the best performance. Even applying logarithmic transformations to size-related features proved to be detrimental for LightGBM. Of course, I believe this result was also influenced by the robust problem design in this competition. However, a significant lesson learned from this competition is that blindly applying traditional methods of financial time series without consideration may not be advisable.</p>
<h3>Stock-wise Transformer</h3>
<p>On the other hand, it was crucial to normalize the features during the training of the NN model. To minimize the impact of domain shifts, we performed rank gauss transformations within each data sharing the same timestamp. By adopting this method, each feature was transformed into relative values within the timestamp, thus allowing us to ignore changes in feature distributions over time.</p>
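<p>A sketch of that per-timestamp rank gauss normalization (the dataframe and column names are illustrative):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.stats import norm, rankdata

def rank_gauss(x: pd.Series) -> pd.Series:
    # map within-group ranks to a standard normal
    ranks = rankdata(x, method='average')
    return pd.Series(norm.ppf((ranks - 0.5) / len(ranks)), index=x.index)

# only the cross-sectional ordering of the ~200 stocks at each timestamp is kept
normed = df.groupby(['date_id', 'seconds_in_bucket'])[feature_cols].transform(rank_gauss)
</code></pre>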
<h2>Models</h2>
<p>I computed the averages of LightGBM and Stock-wise Transformer over 8-folds respectively and generated the final predictions through blending these models. The optimal weight was determined to be 0.6 for LightGBM and 0.4 for the NN model. I was slightly surprised by the extent to which the weight of the NN model could be increased.</p>
<h3>LightGBM</h3>
<p>Hyperparameter tuning was crucial, and the optimal parameters led to an improvement of around +0.005 on the Public LB compared to the initial parameters. In many problem designs concerning financial time series, tuning often quickly leads to overfitting. However, in this competition, it seems to have been effective due to the robust problem design.</p>
<h3>Stock-wise Transformer</h3>
<p>I constructed a Neural Network based on the structure of Transformer Encoder. Specifically, I input a matrix of dimensions 200 stocks * 200 features belonging to the same timestamp at once, and applied Multi-head Attention along the stock dimension to consider the relationships between stocks for prediction. The output of this model is 200-dimensional, predicting the target variable for each stock simultaneously. <br>
Additionally, when outputting predictions (before calculating gradients), I devised a method to efficiently learn specific returns by subtracting the mean of all output values from each output value for centering.<br>
Furthermore, I performed augmentation using mixup for each mini-batch. Specifically, I extracted data points (features, target variable) belonging to two timestamps from the mini-batch and synthesized them into input data using a single random variable $\alpha (0 < \alpha < 1)$. During this process, for a given constant application rate $r (0 < r < 1)$, I randomly applied masks to stocks with a proportion of $1 - r$ to avoid performing mixup. $r$ is a hyperparameter. In this case, I adopted a value of 0.7.</p>
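<p>A sketch of that mixup variant (shapes and names are illustrative):</p>
<pre><code>import torch

def stockwise_mixup(x1, y1, x2, y2, r=0.7):
    # x1, x2: (n_stocks, n_features) from two timestamps; y1, y2: (n_stocks,)
    alpha = torch.rand(1)                                         # single mixing variable in (0, 1)
    use_mix = (torch.rand(x1.size(0)) < r).float().unsqueeze(-1)  # mixup applied to a fraction r of stocks
    lam = alpha * use_mix + (1.0 - use_mix)                       # non-selected stocks keep the first sample
    x = lam * x1 + (1.0 - lam) * x2
    y = lam.squeeze(-1) * y1 + (1.0 - lam.squeeze(-1)) * y2
    return x, y
</code></pre>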
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4849759%2Fb9ab53c014757cb71c907a6eb5d4b12f%2Fstock_wise_transformer.png?generation=1711184235093172&alt=media"></p>
<h2>Postprocessing</h2>
<p>I calculated the average of predicted values for each timestamp and then performed centering by subtracting the mean from each prediction. Additionally, I further improved both the CV and LB scores by approximately +0.0006 by multiplying the predicted values by a constant factor (around 1.1).</p>
<h2>What Didn't Work</h2>
<p>I'll also share some ideas that didn't work out despite experimentation:</p>
<ul>
<li>Rank gauss transformation of the target variable and optimization with RMSE:<ul>
<li>While the correlation coefficient improved with this transformed target variable, it didn't contribute to the original metric.</li></ul></li>
<li>Stacking</li>
<li>Target Encoding:<ul>
<li>Despite the robust problem design, it seemed to be a stretch.</li></ul></li>
<li>Decomposing the model into sign prediction and absolute value prediction:<ul>
<li>It's generally believed that features capable of predicting the sign and magnitude of returns differ. Although I trained LightGBM on two separate problem designs and combined them, the prediction tendencies didn't differ much from a single LightGBM model and didn't contribute to the ensemble.</li></ul></li>
<li>Building models for each stock:<ul>
<li>Overfitting occurred quickly, rendering this approach ineffective.</li></ul></li>
<li>Constructing a volatility prediction model to correct return predictions:<ul>
<li>Typically, the predicted distribution of returns has a much smaller variance compared to the original distribution. Thus, I tried building a two-stage model by training a model to predict the volatility of the target variable and then applying this predicted volatility to the return predictions. However, it didn't work out.</li></ul></li>
</ul>
|
Optiver - Trading at the Close
|
9th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver - Trading at the Close
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. haha</p>
<p>I have 128GB RAM so I didn't spend time reducing memory in training phase. Maybe you can use <code>reduce_mem_usage</code> function on the featured dataframe before going into Xgboost. Besides, you can also cast dtypes in <code>generate_features_no_hist_polars</code>(i.e. float64 -> float32) to save memory.</p>
<p>I want to create some simple and useful features according to cross validation and business knowledge.</p>
<p>Check the definition of target. Weighted sum of target for all stocks is 0.</p>
<p>A big thanks to Optiver and Kaggle for hosting this competition. This competition has a really stable correlation between local cv and lb. </p>
<p>Actually I entered this game a little late, about 30 days before its end, and I am not good at NN, so I only focused on Gradient Boosting tree models and their feature engineering. I noticed there are many top solutions using NN and it is really a good opportunity for me to learn NN.</p>
<h3>Model</h3>
<ul>
<li>Xgboost with 3 different seeds and same 157 features<ul>
<li>There is not much difference between Xgboost and Lightgbm in lb score. But GPU Xgboost trains faster than GPU Lightgbm.</li></ul></li>
</ul>
<h3>Feature Engineering</h3>
<ul>
<li>Firstly, create some "basic features" based on raw features(i.e. add, subtract, multiply, divide from raw features). Also, create some median-scaled raw size features.</li>
</ul>
<pre><code># column names are reconstructed placeholders
size_col = ['imbalance_size', 'matched_size', 'bid_size', 'ask_size']
for _col in size_col:
    train[f'{_col}_median_scaled'] = train[_col] / train.groupby(['stock_id'])[_col].transform('median')
</code></pre>
<ul>
<li>Secondly, do further feature engineering/aggregation on raw features and "basic features"<ul>
<li>imb1, imb2 features</li>
<li>market_urgency feateures I copied from public notebook</li>
<li>diff features on different time window</li>
<li>shift features on different time window</li>
<li>rolling_mean/std features on different time window</li>
<li>using historical wap to calculate the target from 6 seconds before, then doing some rolling_mean</li>
<li>some global date_id+seconds weighted features</li>
<li>MACD feateures</li>
<li>target rolling_mean over stock_id + seconds_in_bucket</li></ul></li>
</ul>
<h3>Feature Selection</h3>
<ul>
<li>Because we have limits on inference time and memory, it's essential to do some feature selection. I add features group by group and check whether the local cv improves. Each feature group usually has 10-30 features. If one group makes local cv improve, I add its features one by one and usually keep only the 5-10 most effective ones.</li>
<li>I keep 157 features in my final model.</li>
</ul>
<h3>Post-processing:</h3>
<ul>
<li>Subtract weighted sum. From the definition of target, we can know weighted sum of target for all stocks should be zero.</li>
</ul>
<pre><code># column names are reconstructed placeholders
test_df['pred'] = lgb_predictions
test_df['weighted_pred'] = test_df['pred'] * test_df['weight']
test_df['weighted_mean'] = test_df.groupby(['date_id', 'seconds_in_bucket'])['weighted_pred'].transform('sum') / test_df.groupby(['date_id', 'seconds_in_bucket'])['weight'].transform('sum')
test_df['pred'] = test_df['pred'] - test_df['weighted_mean']
</code></pre>
<h3>Others:</h3>
<ul>
<li>xgb mae objective</li>
<li>xgb sample_weight 1.5 weight for latest 45 days data </li>
<li>Online training. I only retrain model twice. one is N day (N is the start date of private lb), the other is N+30 day.</li>
<li>polars and the <code>reduce_mem_usage</code> function help a lot</li>
</ul>
<h3>Codes</h3>
<p>train: <a href="https://github.com/ChunhanLi/9th-kaggle-optiver-trading-close" target="_blank">https://github.com/ChunhanLi/9th-kaggle-optiver-trading-close</a><br>
inference: <a href="https://www.kaggle.com/code/hookman/9th-submission" target="_blank">https://www.kaggle.com/code/hookman/9th-submission</a></p>
<p>As far as I remembered, setting sample weight can improve round 0.001 in my local cv.</p>
|
Optiver - Trading at the Close
|
Private 50th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver - Trading at the Close
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to extend my gratitude to Optiver and Kaggle for organizing this competition. I also appreciate the great notebooks and discussions which I learned a lot from.<br>
I would like to congratulate all the winners. <br>
I will share my solution hoping it can be helpful for someone. Since this is first time post, please forgive me if it is difficult to read or understand.</p>
<h1>Summary</h1>
<p>I used 3 different feature sets. Each set has some common basic features, and I trained both LightGBM and CatBoost on each feature set and ensembled these models. For each feature set and each method (LightGBM or CatBoost), an “overall” model and an “every 10 seconds” model were built, and the better one was selected for each 10-second step, i.e., each seconds_in_bucket value.</p>
<h1>Pre-processing and Preparation</h1>
<p>I divided stock_id into 7 groups using the KMeans method. The total cluster number of 7 was determined by the elbow method.</p>
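<p>A minimal sketch of that grouping (the per-stock feature table used for clustering is an assumed name here):</p>
<pre><code>import pandas as pd
from sklearn.cluster import KMeans

# stock_profile: one row of aggregate features per stock_id (assumed)
kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(stock_profile)
stock_group = pd.Series(kmeans.labels_, index=stock_profile.index, name='stock_group')
</code></pre>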
<h1>Feature Engineering</h1>
<p>Main features I created are below:</p>
<ul>
<li><strong>Common features:</strong> raw features, doublet imbalance features, triplet imbalance features</li>
<li><strong>Set 1:</strong> imbalance momentum, price pressure, depth pressure, some statistics of each price features and size features, z-score in the same knn group, shifted value for z-score and size features, differences for price features and revealed target, day of week, seconds, minute</li>
<li><strong>Set 2:</strong> z-score and difference with median in the same seconds_in_bucket value for [bid_size, ask_size, ask_price, bid_price, wap, revealed target], differences and fractional change for price features and size features</li>
<li><strong>Set 3:</strong> moving average for price features and size features and revealed target</li>
</ul>
<p>Please see appendix at the end of this text for details.</p>
<h1>Validation Strategy</h1>
<p>Simple time-based split was used for validation. The first 435 days were used for training and the last 45 days were used for validation. All models were re-trained on all data with the iteration number determined by validation.</p>
<h1>Evaluation Metrics</h1>
<p>Mean absolute error (MAE), same as the competition metrics.</p>
<h1>Models</h1>
<p>I trained LightGBM and CatBoost on each feature set. For each feature set and each method (LightGBM or CatBoost), I trained two types of models. One was trained on all train data, on the other hand, the other was trained only on data with the same seconds_in_bucket value. Here the former is called "overall model" and latter is called "every 10 seconds model". For each all seconds_in_bucket value, I compared the validation MAE between the "overall model" and "every 10 seconds model" and selected better one (please see the table below for an example). Finally I averaged 6 models (3 feature sets X 2 model methods; each combination includes both "overall model" and "every 10 seconds model").</p>
<table>
<thead>
<tr>
<th>seconds_in_bucket</th>
<th>MAE for "overall"</th>
<th>MAE for "every 10 seconds"</th>
<th>use model</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>6.83671</td>
<td>6.80268</td>
<td>"every 10 seconds model"</td>
</tr>
<tr>
<td>10</td>
<td>6.23614</td>
<td>6.25976</td>
<td>"overall model"</td>
</tr>
<tr>
<td>20</td>
<td>6.11294</td>
<td>6.13191</td>
<td>"overall model"</td>
</tr>
<tr>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
</tr>
<tr>
<td>540</td>
<td>5.14656</td>
<td>5.10441</td>
<td>"every 10 seconds model"</td>
</tr>
</tbody>
</table>
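<p>In code, the per-bucket selection amounts to something like this (a sketch with illustrative names):</p>
<pre><code>import pandas as pd

def select_models(val_df: pd.DataFrame, pred_overall: pd.Series, pred_bucket: pd.Series) -> dict:
    # pick, for every seconds_in_bucket value, whichever model has the lower validation MAE
    chosen = {}
    for sec, grp in val_df.groupby('seconds_in_bucket'):
        mae_overall = (grp['target'] - pred_overall.loc[grp.index]).abs().mean()
        mae_bucket = (grp['target'] - pred_bucket.loc[grp.index]).abs().mean()
        chosen[sec] = 'every 10 seconds' if mae_bucket < mae_overall else 'overall'
    return chosen
</code></pre>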
<h1>Post-processing</h1>
<p>The mean of the predicted values was subtracted from each predicted value. Other post-processing such as clipping was not used at all.</p>
<h1>Results</h1>
<table>
<thead>
<tr>
<th>Stage</th>
<th>Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Public</td>
<td>5.3315</td>
</tr>
<tr>
<td>1st update</td>
<td>4.6337</td>
</tr>
<tr>
<td>2nd Update</td>
<td>4.7876</td>
</tr>
<tr>
<td>3rd Update</td>
<td>4.9845</td>
</tr>
<tr>
<td>4th update</td>
<td>5.2112</td>
</tr>
<tr>
<td><strong>Private (50th)</strong></td>
<td><strong>5.4621</strong></td>
</tr>
</tbody>
</table>
<h1>Appendix</h1>
<ul>
<li>Feature enginnering function for feature set1:</li>
</ul>
<pre><code>def imbalance_features(df):
prices = [, , , , , ]
sizes = [, , , ]
df[] = df.eval()
df[] = df.eval()
df[] = df.eval()
for c in combinations(prices, ):
df[f] = df.eval(f)
for c in combinations(sizes, ):
df[f] = df.eval(f)
for c in combinations([, , , ], ):
max_ = df[list(c)].max(axis=)
min_ = df[list(c)].min(axis=)
mid_ = df[list(c)].sum(axis=)-min_-max_
df[f] = np.where(min_==max_, np.nan, (max_-mid_)/(mid_-min_))
for c in combinations(sizes, ):
max_ = df[list(c)].max(axis=)
min_ = df[list(c)].min(axis=)
mid_ = df[list(c)].sum(axis=)-min_-max_
df[f] = np.where(min_==max_, np.nan, (max_-mid_)/(mid_-min_))
df[] = df.groupby([])[].diff(periods=) / df[]
df[] = df[] * (df[] - df[])
df[] = (df[] - df[]) * (df[] - df[])
for func in [, , , ]:
df[f] = df[prices].agg(func, axis=)
df[f] = df[sizes].agg(func, axis=)
z_scores = [, , , , ]
df = pd.merge(df, df_knn, on=, how=)
df_agg = df.groupby([, , ])[z_scores].agg([, ])
df_agg.columns = [.join(col) for col in df_agg.columns]
df_agg = df_agg.add_suffix()
df_agg = df_agg.reset_index()
df = pd.merge(df, df_agg, on=[, , ], how=)
for feat in z_scores:
df[feat+] = (df[feat] - df[feat+])/df[feat+]
for col in [feat+ for feat in z_scores]:
for window in [, , , , ]:
df[f] = df.groupby()[col].shift(window)
for col in [, , , ]:
for window in [, , , , ]:
df[f] = df.groupby()[col].shift(window)
df[f] = df.groupby()[col].pct_change(window)
for col in [, , , ]:
for window in [, , , , ]:
df[f] = df.groupby()[col].diff(window)
if in df.columns:
df_agg = df[[, , , ]]
df_agg[] +=
df_agg = df_agg.rename(columns={: })
df = pd.merge(df, df_agg, on=[, , ], how=)
for col in []:
for window in [, , , , ]:
df[f] = df.groupby()[col].diff(window)
return df.replace([np.inf, -np.inf], )
def other_features(df):
df[] = df[]
df[] = df[]
df[] = df[] //
for key, value in global_stock_id_feats.items():
df[f] = df[].map(value.to_dict())
return df
global_stock_id_feats = {
: df_train.groupby()[].median(),
: df_train.groupby()[].median(),
: df_train.groupby()[].median(),
: df_train.groupby()[].median(),
}
</code></pre>
<ul>
<li>Feature enginnering function for feature set2:</li>
</ul>
<pre><code>def imbalance_features(df):
prices = [, , , , , ]
sizes = [, , , ]
z_scores = [, , , , ]
diff_med = [, , , , ]
for c in combinations(prices, ):
df[f] = df.eval(f)
for c in combinations(sizes, ):
df[f] = df.eval(f)
for c in combinations([, , , ], ):
max_ = df[list(c)].max(axis=)
min_ = df[list(c)].min(axis=)
mid_ = df[list(c)].sum(axis=)-min_-max_
df[f] = np.where(min_==max_, np.nan, (max_-mid_)/(mid_-min_))
for c in combinations(sizes, ):
max_ = df[list(c)].max(axis=)
min_ = df[list(c)].min(axis=)
mid_ = df[list(c)].sum(axis=)-min_-max_
df[f] = np.where(min_==max_, np.nan, (max_-mid_)/(mid_-min_))
df_agg = df.groupby([, ])[z_scores].agg([, , ])
df_agg = df_agg.reset_index()
df_agg.columns = [, ] + [.join(col) for col in df_agg.columns[:]]
df = pd.merge(df, df_agg, on=[, ], how=)
for feat in z_scores:
df[feat+] = (df[feat] - df[feat+])/df[feat+]
for feat in diff_med:
df[feat+] = (df[feat] - df[feat+])
for col in [, , , ]:
for window in [,,,,]:
df[f] = df.groupby()[col].shift(window)
df[f] = df.groupby()[col].pct_change(window)
for col in [, , , ]:
for window in [,,,,]:
df[f] = df.groupby()[col].diff(window)
df[f] = df.groupby()[col].pct_change(window)
if in df.columns:
df_agg = df[[, , , ]]
df_agg[] +=
df_agg = df_agg.rename(columns={: })
df = pd.merge(df, df_agg, on=[, , ], how=)
df_agg = df.groupby([, ])[[]].agg([, , ])
df_agg = df_agg.reset_index()
df_agg.columns = [, ] + [.join(col) for col in df_agg.columns[:]]
df = pd.merge(df, df_agg, on=[, ], how=)
df[] = (df[] - df[])
df[] = (df[] - df[])/df[]
return df.replace([np.inf, -np.inf], )
def other_features(df):
for key, value in global_stock_id_feats.items():
df[f] = df[].map(value.to_dict())
return df
global_stock_id_feats = {
: df_train.groupby()[].median() + df_train.groupby()[].median(),
: df_train.groupby()[].median(),
: df_train.groupby()[].median(),
: df_train[df_train[]<=].groupby()[].median(),
: df_train[df_train[]<=].groupby()[].median(),
: df_train[df_train[]>].groupby()[].median(),
: df_train[df_train[]>].groupby()[].median(),
: df_train.groupby()[].median() + df_train.groupby()[].median(),
: df_train.groupby()[].median(),
: df_train.groupby()[].median(),
: df_train[df_train[]<=].groupby()[].median(),
: df_train[df_train[]<=].groupby()[].median(),
: df_train[df_train[]>].groupby()[].median(),
: df_train[df_train[]>].groupby()[].median(),
}
</code></pre>
<ul>
<li>Feature enginnering function for feature set3:</li>
</ul>
<pre><code>def imbalance_features(df):
prices = [, , , , , ]
sizes = [, , , ]
z_scores = [, , , , ]
diff_med = [, , , , ]
df[] = df.eval()
df[] = df.eval()
df[] = df.eval()
df[] = df.eval()
df[] = df.eval()
df[] = df.eval()
df[] = df.eval()
df[] = df.eval()
df[] = df.eval()
df[] = df.eval()
for c in combinations(prices, ):
df[f] = df.eval(f)
for c in combinations(sizes, ):
df[f] = df.eval(f)
for c in combinations([, , , ], ):
max_ = df[list(c)].max(axis=)
min_ = df[list(c)].min(axis=)
mid_ = df[list(c)].sum(axis=)-min_-max_
df[f] = np.where(min_==max_, np.nan, (max_-mid_)/(mid_-min_))
for c in combinations(sizes, ):
max_ = df[list(c)].max(axis=)
min_ = df[list(c)].min(axis=)
mid_ = df[list(c)].sum(axis=)-min_-max_
df[f] = np.where(min_==max_, np.nan, (max_-mid_)/(mid_-min_))
df_agg = df.groupby([, ])[z_scores].agg([, , ])
df_agg = df_agg.reset_index()
df_agg.columns = [, ] + [.join(col) for col in df_agg.columns[:]]
df = pd.merge(df, df_agg, on=[, ], how=)
for feat in z_scores:
df[feat+] = (df[feat] - df[feat+])/df[feat+]
for feat in diff_med:
df[feat+] = (df[feat] - df[feat+])
for func in [, ]:
df[f] = df[prices].agg(func, axis=)
df[f] = df[sizes].agg(func, axis=)
df[] = df.groupby([])[].diff(periods=) / df[]
for col in [feat+ for feat in z_scores]:
for window in [, , , , ]:
df[f] = df.groupby()[col].shift(window)
for col in [feat+ for feat in z_scores]:
for window in []:
df[f] = df.groupby()[col].pct_change(window)
for col in [feat+ for feat in z_scores]:
for window in [, , , , ]:
df[f] = df.groupby()[col].shift(window)
for col in [feat+ for feat in z_scores]:
for window in []:
df[f] = df.groupby()[col].pct_change(window)
for window in [, ]:
df[f] = df.groupby()[].diff(window)
df[f] = df.groupby()[].diff(window)
df[f] = df.groupby()[].diff(window)
df[f] = df.groupby()[].diff(window)
df[f] = df.groupby()[].diff(window)
for col in [, , , ]:
for window in [, , , , ]:
df[f] = df.groupby()[[col]].rolling(window).mean().reset_index().sort_values()[col].values
for col in [, , , ]:
for window in [, , , , ]:
df[f] = df.groupby()[[col]].rolling(window).mean().reset_index().sort_values()[col].values
if in df.columns:
df_agg = df[[, , , ]]
df_agg[] +=
df_agg = df_agg.rename(columns={: })
df = pd.merge(df, df_agg, on=[, , ], how=)
for window in [,]:
df[f] = df.groupby()[[]].rolling(window).mean().reset_index().sort_values()[].values
for window in [,]:
df[f] = df.groupby()[].shift(window)
return df.replace([np.inf, -np.inf], )
def other_features(df):
df[] = df[]
df[] = df[]
df[] = df[] //
for key, value in global_stock_id_feats.items():
df[f] = df[].map(value.to_dict())
return df
def generate_all_features(df):
cols = [c for c in df.columns if c not in [, ]]
df = df[cols]
df = imbalance_features(df)
df = other_features(df)
= gc.collect()
z_scores = [, , , , ]
diff_med = [, , , , ]
drop_feats = [] + [feat+ for feat in diff_med] + [feat+ for feat in diff_med] + [feat+ for feat in z_scores] + [feat+ for feat in z_scores] + [feat+ for feat in z_scores]
feature_name = [i for i in df.columns if i not in [, , , ]+drop_feats]
return df, feature_name
global_stock_id_feats = {
: df_train.groupby()[].median(),
: df_train.groupby()[].median(),
: df_train.groupby()[].median(),
: df_train.groupby()[].median(),
}
</code></pre>
|
Optiver Realized Volatility Prediction
|
15th Place, Interesting Features - No Phishing
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'm surprised to see someone dig into it so deeply. My apologies for the readability. To make up for it I'll do my best best to go over things here.</p>
<blockquote>
<p>You are not using LGBM predictions for ensembling. You just use it to select features for NN models.</p>
</blockquote>
<p>Yes, it’s not necessary to exclude the LGBM predictions, but any improvement it gave was very minute and I got tired of fiddling with the ensemble weight.</p>
<blockquote>
<p>Your submission is an ensemble of 2 NN models each trained on 4 folds. They are very similar structurally, the only difference is that one model takes all 190 features as input, whereas another one - only 40 most importance features from LGBM.</p>
</blockquote>
<p>The 40 feature model should have less regularization/ dropout or at least that was the plan originally.</p>
<blockquote>
<p>You are predicting target / current_rv, not target directly.</p>
</blockquote>
<p>Yes, note that the math works out such that the rmspe of the rescaled target is equal to the rmspe of the original target after scaling back the predictions. That is <code>RMSPE(pred, target_vol/realized_vol) == RMSPE(realized_vol*pred, target_vol)</code>. </p>
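<p>For completeness, that identity is just algebra on the definition of RMSPE (writing $v_i$ for realized_vol, $t_i$ for target_vol and $p_i$ for the rescaled prediction):</p>
<p>$$\mathrm{RMSPE}\left(p, \frac{t}{v}\right)=\sqrt{\frac{1}{N}\sum_i\left(\frac{t_i/v_i - p_i}{t_i/v_i}\right)^2}=\sqrt{\frac{1}{N}\sum_i\left(\frac{t_i - v_i p_i}{t_i}\right)^2}=\mathrm{RMSPE}(v\,p,\ t)$$</p>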
<p>There are a few reasons why I think rescaling might be useful:</p>
<ul>
<li>Target volatilities span a couple orders of magnitude which might make learning more difficult, volatility ratios are more compact. </li>
<li>Rescaling means I can use the log() of features without the NN having to approximate an exponential curve over many octaves. </li>
<li>The target ratio is much more correlated between stock ids than the raw target. This is likely important when combining stock ids.</li>
</ul>
<blockquote>
<p>Your model is multi-input, multi-output - it takes all 112 stocks as input and predict all 112 realized volatilities as output - unlike most public notebooks that predicts stocks one-by-one.<br>
Your model has ~700K learnable parameters with more than half of them in single dense layer that combines all stocks.<br>
Adding layers combining stock ids showed a significant improvement. I have a stock_id-wise 1D dropout before the large dense layer.</p>
<p>You are doing additional normalization of features separately for each stock inside batch.<br>
Additionally you concat features mean and std aggregated per stock over whole input batch.<br>
Your batch during prediction contains whole test dataset, thus mean and std feature aggregation will be aggregated over all the test data.</p>
</blockquote>
<p>I’m not sure why normalizing separately over stock id showed improvement, but I suppose that for most features how they differ from their typical values is more important than what their value actually is. In the case of log(vol1) this makes sense at least – volatilities outside their normal range are likely to revert towards the mean, so when predicting delta volatility, the normalized volatility is more useful. </p>
<p>Concatenating the mean’s and std’s avoids a loss of information. I suspect it might be more adaptable to market changes than relying on a stock_id embedding. </p>
<blockquote>
<p>Also could you explain what's going on in your swish activation - is it really swish? Does it return single value?<br>
</p>
<pre><code>def swish(x, beta=1, extrapolation=.667):
    q = (x ** 2 + 1) ** .5
    qq = q ** (1 - extrapolation)
    return tf.concat((x / qq * (x / q + 1) / 2, -x / qq * (-x / q + 1) / 2, x, qq / q), -1)
</code></pre>
</blockquote>
<p>Knowing there might be a shift in market regime in the future made me think about how my NN would extrapolate on new inputs, and how the activation function might affect the way that it does so. It occurred to me that an asymptotically linear function might tend to over-extrapolate. So I came up with a variation on swish which would approach <code>x^extrapolation_coef</code> asymptotically, where extrapolation_coef could be set somewhere between 0 and 1.</p>
<p>I replaced the tanh sigmoid with an algebraic sigmoid because the tanh sigmoid made the modified swish look too sharp to my eye. I concatenate s(x) and s(-x) in the manner of an antirectified relu. Concatenating a linear activation as well got me a decent improvement in performance, and concatenating a bell curve shaped activation showed a very slight additional improvement. </p>
<p>I also tried a custom loss of the form: <br>
<code>loss = mean( ( ( y_pred - log(y_true) ) / y_true**p )**2)**0.5 | 0 < p < 1</code> <br>
where y_true is the rescaled target. It performed pretty much exactly as well as RMSPE, but ensembling models trained on the alternative loss with models trained on RMSPE showed no benefit over just training on RMSPE so I dropped it. </p>
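<p>For reference, a minimal TensorFlow sketch of that alternative loss as I read the formula (the function name, the default <code>p</code>, and the assumption that the network outputs the log of the rescaled target are mine, not the author's exact code):</p>
<pre><code>import tensorflow as tf

def log_weighted_loss(y_true, y_pred, p=0.5):
    # y_true is the rescaled target (target_vol / realized_vol); y_pred is
    # compared against log(y_true), with per-sample weights 1 / y_true**p
    err = (y_pred - tf.math.log(y_true)) / tf.pow(y_true, p)
    return tf.sqrt(tf.reduce_mean(tf.square(err)))
</code></pre>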
<p>One thing to notice about the distribution of <code>log(target/vol1)</code> is that while it is approximately normal, it has significantly more outliers right of the mean. This is because while volatility can spike suddenly, such as in response to news, there is no reason for volatility to drop off all at once, so it tends to decay gradually. The choice of RMSPE as the competition loss means that volatility spikes are given very low weights and have fairly minimal impacts on scores, which would not be the case if the metric were instead RMSE[log(volatility)]. I've been wondering for a while if this was intentional on the part of Optiver, or entirely unintended. </p>
<p>There aren't too many more features worth mentioning but here's a couple:</p>
<h3>Volatility of Liquidity</h3>
<p>Surprisingly this was the most important feature for the cluster that had the largest error, but was not particularly important to other clusters.</p>
<h3>Volatility From New Book Orders</h3>
<p>Volatility attributable to orders being added to the book, with the volatility caused by orders being removed from the book excluded (<code>volatility( (liquidity[t] > liquidity[t-1]) * ( log_return[t] - log_return[t-1] ) )</code>). Very slight improvement.</p>
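<p>A rough numpy sketch of how such a feature could be computed (my own reading of the formula above; <code>log_return</code> and <code>liquidity</code> are per-second arrays for one time_id, and realized volatility is taken as the usual square root of the sum of squares):</p>
<pre><code>import numpy as np

def vol_from_new_orders(log_return, liquidity):
    # keep only the ticks where liquidity increased (orders were added)
    added = (liquidity[1:] > liquidity[:-1]).astype(float)
    masked = added * np.diff(log_return)
    return np.sqrt(np.sum(masked ** 2))
</code></pre>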
<p>Clustering starts with an idea I had some time ago about an oblique decision tree based learner. So far I haven't been able to get it perform on par with LGBM, but my oblique tree can do clustering when the target is set to be the same as the features. It has the advantage of being deterministic and putting about the same number of samples in each cluster. After making the clustering tree, I take the centroid of the samples in each leaf, and use these centroids to initialize gaussian mixture clustering as a means of fine tuning. </p>
<p>To get the features used for clustering, I choose a feature and take a PCA over time for that feature with the PCA features being the stock ids. The PCA components then serve as a sort of embedding for each stock id.</p>
<p>Log return appears to be the feature that produced the best clusters.</p>
<p>I haven't tried it without clustering recently. At the time I implemented clustering, I was using LGBM and I saw some improvement, but my NN models flatten one of the last layer's output across stock id, so they can probably discover clusters on their own. It might be less important in this version.</p>
<p>My thanks to Optiver for hosting a really interesting competition that I very much enjoyed digging into. I’d like to share some of the more compelling features I've found that haven’t been discussed. I've also made my 15th place notebook <a href="https://www.kaggle.com/jacobyjaeger/15th-place" target="_blank">public</a>, though I can't say much for its readability.</p>
<p>The features I will discuss include extensions of weighted average price that incorporate all book orders, measures of orderbook liquidity, and a feature derived from those that had a remarkably high correlation with volatility.</p>
<h2>Extending WAP</h2>
<p>First of all, the weighted average price definition we were given minimizes the following function: <code>f(x) = -bid_size*log(x-bid_price) - ask_size*log(ask_price-x)</code> over the range <code>[bid_price, ask_price]</code>. Given this fact, we can extrapolate to a WAP definition that includes multiple levels of market depth: <code>wap = argmin_x( sum_i[ -bid_size_i*log(x-bid_price_i) - ask_size_i*log(ask_price_i-x) ] )</code>.</p>
<p>I identified this by starting from the following assumptions:</p>
<ol>
<li>There exist two functions <code>g_ask(x)</code> and <code>g_bid(x)</code> such that <code>WAP=argmin_x[ bid_size*g_bid(x - bid_price) + ask_size*g_ask(x - ask_price)]</code></li>
<li><code>lim(g_ask(x), x->0)=inf</code> and <code>lim(g_bid(x), x->0)=inf</code></li>
</ol>
<p>The first assumption is made because it is necessary if I am going to have a hope of finding a useful function minimized by WAP and because it is true of a traditional weighted average (with <code>g_*(x)=x^2</code>).</p>
<p>The second is made because logically the WAP should never be greater than the lowest ask or lower than the highest bid, and for that to always be the case, g_*(x) must have a vertical asymptote at 0.</p>
<p>Of course, <code>log()</code> is not the only function with a vertical asymptote, and this leads to a family of alternative wap definitions: <code>wap_k = argmin_x( sum_i[ bid_size_i*(x-bid_price_i)**-k + ask_size_i*(ask_price_i - x)**-k ] )</code>. </p>
<p>Unfortunately features derived from these alternative waps did not provide me much of a performance benefit. However they did lead me to a very important other set of features: liquidity.</p>
<h2>Defining Liquidity</h2>
<p>In finance, liquidity describes how much of an asset can be purchased or sold immediately without moving the price significantly. A reasonable numerical definition of liquidity then should start from the following assumptions: </p>
<ol>
<li>moving bids closer to asks always increases liquidity</li>
<li>increasing order sizes always increases liquidity</li>
<li>adding additional orders to the book always increases liquidity</li>
</ol>
<p>It happens that if we take the minimum of any of our alternative WAP functions, that is to say the value of the function at its respective WAP, it will meet all of our conditions for a good liquidity measure. For example, the liquidity derived from wap_1 is:</p>
<p><code>liq_1 = sum_i[ bid_size_i/(wap_1 - bid_price_i) + ask_size_i/(ask_price_i - wap_1) ]</code></p>
<p>or for wap_2:</p>
<p><code>liq_2 = sum_i[ bid_size_i/(wap_2 - bid_price_i)**2 + ask_size_i/(ask_price_i - wap_2)**2 ]</code></p>
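<p>A minimal sketch of how <code>wap_k</code> and <code>liq_k</code> can be computed numerically for a single book snapshot (my own illustration, not the author's implementation): the bounded scalar minimizer finds the argmin strictly inside the bid/ask range, and the liquidity is simply the objective value at that point.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize_scalar

def wap_and_liq(bid_prices, bid_sizes, ask_prices, ask_sizes, k=1):
    bid_prices, ask_prices = np.asarray(bid_prices), np.asarray(ask_prices)
    bid_sizes, ask_sizes = np.asarray(bid_sizes), np.asarray(ask_sizes)

    def objective(x):
        return (np.sum(bid_sizes * (x - bid_prices) ** -k)
                + np.sum(ask_sizes * (ask_prices - x) ** -k))

    lo, hi = bid_prices.max(), ask_prices.min()
    eps = 1e-9 * (hi - lo)  # stay strictly inside (best bid, best ask)
    res = minimize_scalar(objective, bounds=(lo + eps, hi - eps), method='bounded')
    return res.x, res.fun  # wap_k, liq_k

wap2, liq2 = wap_and_liq([0.9998, 0.9999], [200, 100], [1.0001, 1.0003], [300, 150], k=2)
</code></pre>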
<h2>Trade Volume Per Liquidity (TVPL)</h2>
<p>Now, liquidity is a useful feature on its own, but what’s really compelling is trade volume relative to liquidity. Trade volume divided by liquidity has a remarkably high correlation with volatility, with <code>log(TVPL_2)</code> and <code>log(vol1)</code> having a correlation coefficient of about 0.88. I think it’s safe to say that trade volume per liquidity is most of what creates volatility in the short term. </p>
<p>Recognizing the relationship between TVPL and volatility, there are a few more ideas we can explore. First, of trade volume and liquidity, trade volume is the far noisier feature whereas liquidity is much more stable. So, for the sake of making projections, we should calculate trade volume over the full time window or at least most of it, but we can take liquidity over only a small portion at the latter end. Many of my most powerful features play on this, some also incorporating aggregations across stock_id's. </p>
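<p>For instance (my own sketch of that windowing idea, not the author's code), trade volume can be summed over the full 600 seconds while liquidity is averaged only over the tail of the window:</p>
<pre><code>import numpy as np

def tvpl(trade_sizes, liq_series, seconds_in_bucket, tail_start=500):
    total_volume = np.sum(trade_sizes)                               # full-window trade volume
    tail_liq = np.mean(liq_series[seconds_in_bucket >= tail_start])  # recent liquidity only
    return total_volume / tail_liq
</code></pre>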
<p>These have been the most interesting and helpful features I've found, but there are also other unique aspects to my notebook, so let me know if you are interested in having any more of them explained.</p>
<p>Cuts satisfy: <code>min(v, b)[ sum_ij [ || y_i - y_j ||^2 * ( x_i . v + b ) * ( x_j . v + b ) ] ]</code> where x is the features, y the target, v the unit-vector cut direction, and b is a scalar representing the offset of the cut plane from the origin. Essentially it tries to put samples that are distant in the target domain as far on opposite sides of the cutting plane as possible. It doesn't guarantee that clusters will have the same size, but it tends towards more balanced clusters than randomly initialized kmeans. </p>
<p>The solution can be found as an eigenvector of a matrix but I don't really want to go into details about how that matrix is constructed. You're welcome to try it for yourself. </p>
<p>As far as I know I am the first person to come up with this particular cutting strategy for oblique trees, so I don't have a reference for you.</p>
<blockquote>
<p>How much improvement did you see?</p>
</blockquote>
<p>I think I may have seen an LB increase in the range of +0.0005 to +0.001. I've been paying more attention to LB than CV in this competition because I suspect it can tell me more about my model's adaptability to market shifts. LB is particularly relevant in this case, because the change relates to how the model extrapolates. I tried rerunning my model with the old swish just now and the CV did not change significantly (- ~0.0001 to the full features model + ~0.0001 to the most important features only model.)</p>
<p>One reason this activation may have hurt your cv but not mine could be because I've scaled my features and target relative to vol1, meaning that there is less of a linear relationship between my features and target that my model has to maintain (my swish being asymptotically nonlinear.) It sounds like you are also using the default extrapolation whereas I am using a different extrapolation in each layer.</p>
<blockquote>
<p>What can be expected when having less slope? Will that result take longer time to convergence?</p>
</blockquote>
<p>I think it is more useful to think about this in terms of rate of asymptotic growth rather than slope. My swish has <code>x^extrapolation_coef</code> asymptotic growth compared to swish's linear growth. This will affect how hard the model extrapolates when it sees inputs outside the range it was shown during training. For a visualization, I have trained models to reproduce an image from x and y coordinates, and "zoomed out" by feeding them coordinates outside the range of the original image: <a href="https://postimg.cc/mhdM2hzC" target="_blank">https://postimg.cc/mhdM2hzC</a>. It can be seen that larger extrapolation coefficients produce larger values outside the scope of the original image (the original swish has the same asymptotic growth as extrapolation_coef=1.0).</p>
<blockquote>
<p>Why might anti-swish improve the score? Is there any literature on such activation functions?</p>
</blockquote>
<p>I think there may have been a paper that used anti-relus in a CNN, but I can't seem to find it. The most obvious benefit would be that it avoids a loss of information about each node's input. It could also be argued that it allows the model to build better composite activation functions that are linear combinations of concatenated functions.</p>
<blockquote>
<p>Where did you get the idea from? :)</p>
</blockquote>
<p>Do you mean the modified swish or the concatenated activations? Multiple activations has been discussed in a few places, and I've mentioned that the modified swish comes from the idea of controlling the model's extrapolation.</p>
<blockquote>
<p>How did you find the extrapolation parameter to be 0.67 ?</p>
</blockquote>
<p>0.67 is not actually used, it was just an initial guess at what might be a reasonable drop in extrapolation from the original swish. I use a different extrapolation in each layer starting with 0.75 in the earlier layers and incrementally dropping to 0.45. That of course was just trial and error.</p>
|
Optiver Realized Volatility Prediction
|
1st Place Solution - Nearest Neighbors
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/fritzcremer" target="_blank">@fritzcremer</a> <br>
While I haven't tried such CV, I have confirmed that changing the cross-validation to time-id GroupKFold in the final submission still yields the first place score.</p>
<p>However, since CV strategies are used to evaluate ideas throughout the competition, it is difficult to properly evaluate the impact of the choice of CV strategy afterward.</p>
<p><a href="https://www.kaggle.com/raphael1123" target="_blank">@raphael1123</a> <br>
Thanks! I think there is no magic in using optuna: I explored important hyperparameters like lr and batch size (only 3~8 parameters were explored at the same time), and visualized them with <code>plot_contour</code> to see the sensitivity of parameters.</p>
<p>You can see my code here :)<br>
<a href="https://github.com/nyanp/optiver-realized-volatility-prediction/blob/master/notebook/optuna_optiver.ipynb" target="_blank">https://github.com/nyanp/optiver-realized-volatility-prediction/blob/master/notebook/optuna_optiver.ipynb</a></p>
<p>Yes, 10% means the size of the red rectangles.</p>
<p>For LightGBM, I used lgb.cv with early stopping and then retrained model on 100% of the data with (1.1 x rounds in CV).</p>
<p>In NN, I used the model of last fold in CV directly for inference. This was because the training of my NN model was a bit unstable, there was a risk in using 100% of the data without monitoring the validation score.</p>
<p><a href="https://www.kaggle.com/regipublic" target="_blank">@regipublic</a> </p>
<blockquote>
<p>Would simply rearranging CV like this improve a typical LB score right off the bat?<br>
e.g. if I took the same model I had (0.223 score, normal 5 fold CV) and reordered the data to time_id and simply changed the CV to this method instead , I'd likely gain significant LB score increase?</p>
</blockquote>
<p>Maybe not. Adopting a better cv strategy is like getting a better compass: A good cv strategy can be a reliable criterion when deciding whether you should adopt an idea or not.</p>
<p>It does not mean that the current model will immediately improve. By repeating decisions based on a good cv strategy throughout the competition, you will eventually get better results.</p>
<p><a href="https://www.kaggle.com/regipublic" target="_blank">@regipublic</a> <br>
“1.1”is the same topic as the one discussed <a href="https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/273523#1520582" target="_blank">here</a>. </p>
<p><a href="https://www.kaggle.com/stassl" target="_blank">@stassl</a> </p>
<blockquote>
<p>I ended up using another dimensionality reduction method, because for some reason 1d TSNE didn't work for me very well.</p>
</blockquote>
<p>I’m looking forward to see your solution!</p>
<p>BTW my twitter follower told me today that this reverse-engineering problem can be formulated as <a href="https://en.m.wikipedia.org/wiki/Travelling_salesman_problem" target="_blank">TSP</a>. I think it’s a very clever idea.</p>
<p><a href="https://www.kaggle.com/carlolepelaars" target="_blank">@carlolepelaars</a> <br>
I tried features in that article, but unfortunately none of them worked.</p>
<p><a href="https://www.kaggle.com/calebyung" target="_blank">@calebyung</a> <br>
Yes, NN features can be calculated perfectly without recovering time-id order.</p>
<p><a href="https://www.kaggle.com/nrcjea001" target="_blank">@nrcjea001</a> <br>
Thanks for the comment! There are several questions, and very good points, which I will address in turn.</p>
<blockquote>
<p>Do you not face the "using the future to predict the past" risk when doing this?</p>
</blockquote>
<p>These kinds of features need to be handled with care, but I believe that aggregation using NN is safer than features that use the time-id order directly.</p>
<blockquote>
<p>What if in the forecasting phase, time periods are predicted one at a time so that you cannot use other time periods.</p>
</blockquote>
<p>If I understand correctly, this will not be the case since the private test data will be processed in batch.</p>
<blockquote>
<p>Or what if there are significant gaps between time periods, so that you cannot establish a suitable distance metric to extract similar properties?</p>
</blockquote>
<p>There will probably be a 6-month gap between train-private tests. I created the same situation between train-validation to make sure that the NN features have no negative impact on the score in that situation. I think the model is robust to such long gaps because short-term NN features below 10-neighbors are dominant in the model.</p>
<p>However, I don't know if the test data will also be sampled 9 time-ids/day. A change in this density would certainly have an impact on my model, but based on the forecasting timeline and the number of test data, I assumed that it would not change significantly from the train data.</p>
<p><em>2022-01-11 Updated the title.</em></p>
<p>First of all, I would like to say a big thank you to Optiver and Kaggle for organizing a very interesting competition. I have never analyzed financial data before, but thanks to the very helpful tutorial notebook, I was able to fully immerse myself in the competition.</p>
<p>I don't know where I rank on the final leaderboard, but the points of my public 2nd place solution are as follows:</p>
<ul>
<li>Time-series cross-validation by reverse engineering of time-id order</li>
<li>Nearest neighbor aggregation features (boost from 0.21 to 0.19)</li>
<li>Blend of LightGBM, MLP, and MoA’s 1D-CNN</li>
</ul>
<p><img src="https://raw.githubusercontent.com/nyanp/optiver-realized-volatility-prediction/master/img/pipeline.png" alt=""></p>
<p>You can also see my notebook here:<br>
<a href="https://www.kaggle.com/nyanpn/public-2nd-place-solution" target="_blank">https://www.kaggle.com/nyanpn/public-2nd-place-solution</a></p>
<p>Now, I'd like to introduce the detailed solution.</p>
<h2>Reverse engineering of time-id order</h2>
<p>The prices in the competition data are normalized, but as someone pointed out in the discussion, you can use "tick size" to recover the real prices before normalization.<br>
Furthermore, by compressing the time-id x stock-id price matrix to one dimension using t-SNE, I was able to recover the order of the time-id with sufficient accuracy.</p>
<pre><code>import glob

import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from sklearn.manifold import TSNE
from sklearn.preprocessing import minmax_scale


def calc_price_from_tick(df):
    tick = sorted(np.diff(sorted(np.unique(df.values.flatten()))))[0]
    return 0.01 / tick


def calc_prices(r):
    df = pd.read_parquet(r.book_path,
                         columns=[
                             'time_id',
                             'ask_price1',
                             'ask_price2',
                             'bid_price1',
                             'bid_price2'
                         ])
    df = df.groupby('time_id') \
        .apply(calc_price_from_tick).to_frame('price').reset_index()
    df['stock_id'] = r.stock_id
    return df


def reconstruct_time_id_order():
    paths = glob.glob('/kaggle/input/optiver-realized-volatility-prediction/book_train.parquet/**/*.parquet')
    df_files = pd.DataFrame(
        {'book_path': paths}) \
        .eval('stock_id = book_path.str.extract("stock_id=(\d+)").astype("int")',
              engine='python')

    # build price matrix using tick-size
    df_prices = pd.concat(
        Parallel(n_jobs=4)(
            delayed(calc_prices)(r) for _, r in df_files.iterrows()
        )
    )
    df_prices = df_prices.pivot('time_id', 'stock_id', 'price')

    # t-SNE to recover time-id order
    clf = TSNE(
        n_components=1,
        perplexity=400,
        random_state=0,
        n_iter=2000
    )
    compressed = clf.fit_transform(
        pd.DataFrame(minmax_scale(df_prices.fillna(df_prices.mean())))
    )
    order = np.argsort(compressed[:, 0])
    ordered = df_prices.iloc[order].reset_index()  # keep time_id as a column

    # correct direction of time-id order using known stock (id61 = AMZN)
    if ordered[61].iloc[0] > ordered[61].iloc[-1]:
        ordered = ordered.reindex(ordered.index[::-1])\
            .reset_index(drop=True)

    return ordered[['time_id']]
</code></pre>
<p>The correctness of the recovery of the time-id order for the training data can be easily verified by comparing it with the real market data. By doing so, we know that the training data is for the period between 2020/1/1~2021/3/31.</p>
<p><img src="https://raw.githubusercontent.com/nyanp/optiver-realized-volatility-prediction/master/img/tsne%20result.bmp" alt=""><br>
Left: Stock price restored by t-SNE Right: Actual stock price retrieved by yfinance (2020-01-01~2021-03-31)</p>
<p>On the other hand, for the test data, I did not use the time-id information directly in the features and models because there is no guarantee that t-SNE will be able to sort the time-id order correctly. However, even if it cannot be used directly for features, the order of the time-id can be used in various ways.</p>
<ul>
<li><strong>Time series cross-validation</strong>. Now that we know the correct order of the timestamps, we can construct the validation sets as if they were normal time-series data.</li>
<li><strong>Detection of covariate shifts</strong>. Using methods such as adversarial validation enables us to find features that change over time.</li>
<li><strong>Data exclusion</strong>. I didn't do this because it didn't work for me, but we can exclude periods of specific market events from our data, for example, the stock market crash in early 2020.</li>
</ul>
<p>Time series CV was especially important. I used a 4-fold time-series CV, with 10% of<br>
data used for validation for each fold. This allowed me to get a good enough (though not perfect) correlation between CV and LB throughout the competition.</p>
<p><img src="https://raw.githubusercontent.com/nyanp/optiver-realized-volatility-prediction/master/img/cvlb.bmp" alt=""></p>
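<p>A sketch of how such folds can be built once the time_ids are ordered (my own illustration; the exact fold layout used in the original code may differ):</p>
<pre><code>import numpy as np

def time_series_folds(ordered_time_ids, n_folds=4, valid_frac=0.1):
    # each fold trains on an expanding prefix of the ordered time_ids and
    # validates on the 10% chunk that follows it
    n = len(ordered_time_ids)
    valid_size = int(n * valid_frac)
    folds = []
    for i in range(n_folds):
        valid_end = n - i * valid_size
        valid_start = valid_end - valid_size
        folds.append((ordered_time_ids[:valid_start],           # train
                      ordered_time_ids[valid_start:valid_end]))  # validation
    return folds
</code></pre>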
<h2>How is the data being generated?</h2>
<p>By the way, between 2020/1/1~2021/3/31, the stock market was open for 443 days, and there were 3833 time-ids during that period, which means that about 8.6 time-ids were recorded per day.</p>
<p>If Optiver excludes (for some reason, perhaps it's hard to predict) the first and last hour of the time that the stock market is open, then the time from 10:30~15:00 can be used for competition data. If they divided this time period into 30-minute intervals, up to 9 time-ids will be recorded per day. </p>
<p>In reality, given the case of circuit breakers and early closing days, the number of time-ids they can use for the data should be a little less than 9 per day. It's just a guess, but I figured this is how the data is generated.</p>
<p><img src="https://raw.githubusercontent.com/nyanp/optiver-realized-volatility-prediction/master/img/timeline.png" alt=""></p>
<h2>Feature engineering</h2>
<p>In this competition to predict the next 10 minutes of realized volatility (RV) from the previous 10 minutes of data, the most important feature would obviously be the RV of the previous 10 minutes.</p>
<p>However, if we consider how the data was generated, just 10 minutes after the target computation period for a time-id, the training data for the next time-id starts. This means that if the order of the time-id can be fully recovered, we can expect the RV at the next time-id to be a very powerful feature.</p>
<p>If we generalize further, we can improve the prediction accuracy by using not only the next time-id, but also the information of time-ids that are "similar" in some distance metric. For example, the RV of the same stock when the market had similar price, volatility, and trading volume would be useful for predicting the RV at a certain time-id.</p>
<p>So, I used NearestNeighbor with various distance metrics to find the similar N time-ids and calculate the average of features like RV and stock size (N=2,3,5,10,20,40).</p>
<p>In addition to the time-id, I also calculated the aggregation between similar stock-ids. Furthermore, by combining these ideas, I calculated features such as "the average tau of 20 similar stocks with similar volatility in 5 closest time-ids".</p>
<p><img src="https://raw.githubusercontent.com/nyanp/optiver-realized-volatility-prediction/master/img/nn%20feature.png" alt=""></p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import minmax_scale

target_feature = 'book.log_return1.realized_volatility'
n_max = 40

# make neighbors (time-ids that are close in the per-stock price space)
pivot = df.pivot('time_id', 'stock_id', 'price')
pivot = pivot.fillna(pivot.mean())
pivot = pd.DataFrame(minmax_scale(pivot))

nn = NearestNeighbors(n_neighbors=n_max, p=1)
nn.fit(pivot)
_, neighbors = nn.kneighbors(pivot)  # indices of the n_max closest time-ids

# pivot the feature to aggregate and collect its values at each neighbor
feature_pivot = df.pivot('time_id', 'stock_id', target_feature)
feature_pivot = feature_pivot.fillna(feature_pivot.mean())

neighbor_features = np.zeros((n_max, *feature_pivot.shape))
for i in range(n_max):
    neighbor_features[i, :, :] = feature_pivot.values[neighbors[:, i], :]

# aggregate over the first n neighbors (index 0 is the time-id itself)
def make_nn_feature(neighbor_features, f_col, n=5, agg=np.mean, postfix=''):
    pivot_aggs = pd.DataFrame(agg(neighbor_features[1:n, :, :], axis=0),
                              columns=feature_pivot.columns,
                              index=feature_pivot.index)
    dst = pivot_aggs.unstack().reset_index()
    dst.columns = ['stock_id', 'time_id',
                   f'{f_col}_cluster{n}{postfix}_{agg.__name__}']
    return dst

for n in [2, 3, 5, 10, 20, 40]:
    dst = make_nn_feature(neighbor_features, target_feature, n)
    df = pd.merge(df, dst, on=['stock_id', 'time_id'], how='left')
</code></pre>
<p>I created about 600 features in total, 360 of which were Nearest Neighbor features, and most of my score improvement was based on these NN features.</p>
<p>I don't think we can use future information in a real Optiver’s scenario, but the idea of using Nearest Neighbor to aggregate nearby features can probably be used in a real model.</p>
<h2>Feature processing</h2>
<p>Now that we know the order of the time-id, we can detect features that are changing over time. By performing adversarial validation, I noticed that the features aggregated from <code>trade.order_count</code> and <code>book.total_volume</code> changed remarkably over time. Therefore, instead of using their raw features, I converted these features into ranks within the same time-id.</p>
<p>I also applied <code>np.log1p</code> to features that have large skew, as they may degrade the predictions if large outliers come during the 2nd stage.</p>
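<p>A minimal pandas sketch of these two transforms (the column names and the <code>skewed_columns</code> list are placeholders for illustration, not the exact ones used):</p>
<pre><code>import numpy as np

# drifting features -> within-time_id percentile rank, so only the relative
# order inside each time_id matters
for col in ['trade.order_count.sum', 'book.total_volume.sum']:
    df[col] = df.groupby('time_id')[col].rank(pct=True)

# heavily skewed features -> log1p to soften the impact of future outliers
for col in skewed_columns:
    df[col] = np.log1p(df[col])
</code></pre>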
<p>These transforms made little improvement to the LB score, but I believe they help to reduce the risk of a shakedown on the private LB.</p>
<h2>Modeling</h2>
<p>I used a simple blend of three models for prediction: LightGBM, 1D-CNN, and MLP. The 1D-CNN is a simplified version of the architecture used in the MoA 2nd place solution. (This CNN worked surprisingly well in both the recent MLB and Optiver competitions.)</p>
<p><img src="https://raw.githubusercontent.com/nyanp/optiver-realized-volatility-prediction/master/img/1d%20cnn.png" alt=""><br>
1D-CNN architecture</p>
<p>I did not use the pretrained model because I wanted to use the test data for feature calculation, thus all the model training was done within a single notebook. My NN model was a bit unstable in training, so I trained 10 models with different seeds and cherry-picked the top 5 models with the best validation scores for prediction (oops, I forgot to use <a href="https://arxiv.org/abs/2109.08203" target="_blank">seed=3047</a> :) ).</p>
<h2>What didn’t work</h2>
<ul>
<li>a lot of domain-specific features like beta coefficient</li>
<li>TabNet (not bad, but too long training time)</li>
<li>training NN on residual (target - book.log_return.realized_volatility)</li>
<li>dimensionality reduction features</li>
</ul>
<h2>What I thought might work, but didn't have time for</h2>
<ul>
<li>300-sec model (split the training data into the first and second halves, and create a model that predicts the RV of the second half from the first half and use it as meta-feature or data augmentation)</li>
<li>Ensembles with LSTMs and RNNs that do not create features</li>
<li>Auto-encoder</li>
</ul>
<p>I forgot to write something important: the idea that t-SNE can restore the order of time-ids came from this notebook. </p>
<p><a href="https://www.kaggle.com/stassl/recovering-time-id-order?scriptVersionId=71499310" target="_blank">https://www.kaggle.com/stassl/recovering-time-id-order?scriptVersionId=71499310</a></p>
<p>Thanks for the great idea! <a href="https://www.kaggle.com/stassl" target="_blank">@stassl</a> </p>
<p><a href="https://www.kaggle.com/tobiit" target="_blank">@tobiit</a> <br>
Thanks! This is what I had in mind.</p>
<blockquote>
<p>Now, assume that at some point during the 600s interval, the change of the (real) stock price was as small as possible, namely only the 0.01 $.</p>
</blockquote>
<p>A small point, but even if there was no single move equal to the smallest step, as long as there is at least one pair of values within the 600 sec that differ by only 0.01 $, the price can be estimated.</p>
<p>For example, suppose the actual price changes as <code>[600.01, 600.04, 600.02, 600.05, 600.02]</code>. If we sort after removing duplicates, we get <code>[600.01, 600.02, 600.04, 600.05]</code>; if we then calculate the differences, we get <code>[0.01, 0.02, 0.01]</code>, and we can take the tick size as the smallest of them.</p>
<p>This is the explanation of <code>sorted(np.diff(sorted(np.unique(x))))[0]</code> in my code.</p>
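<p>The same toy example in code:</p>
<pre><code>import numpy as np

prices = np.array([600.01, 600.04, 600.02, 600.05, 600.02])
tick = sorted(np.diff(sorted(np.unique(prices))))[0]
print(round(tick, 2))  # 0.01 - on the normalized competition prices,
                       # the real price level is then 0.01 / tick
</code></pre>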
<p><a href="https://www.kaggle.com/raphael1123" target="_blank">@raphael1123</a> <br>
Not sure why, but searching hyper-parameters using Optuna helped me a lot to find such a stable setup.</p>
<p><a href="https://www.kaggle.com/keleea" target="_blank">@keleea</a> <br>
You can see the original one here:<br>
<a href="https://github.com/baosenguo/Kaggle-MoA-2nd-Place-Solution/blob/main/training/1d-cnn-train.ipynb" target="_blank">https://github.com/baosenguo/Kaggle-MoA-2nd-Place-Solution/blob/main/training/1d-cnn-train.ipynb</a></p>
<p><a href="https://www.kaggle.com/bejeweled" target="_blank">@bejeweled</a> <br>
What types of bias are you concerned about? If train and test are generated in the same way, then I think there is no distribution bias between them.</p>
<p><a href="https://www.kaggle.com/calebyung" target="_blank">@calebyung</a> <br>
In the 2nd stage, if there are 3830 time-ids in train and 700 time-ids in test, then the NN is calculated from all 4530 time-ids for both train and test features.</p>
<p><a href="https://www.kaggle.com/denisvodchyts" target="_blank">@denisvodchyts</a> </p>
<ol>
<li>Although it is not explicit, using the information of next time-id via Nearest Neighbor may actually happen.</li>
<li>No, I think this is an almost impossible strategy because we cannot recover the exact order of time-id perfectly. If the correct time-id order can be provided by the host, I would love to see how much the local CV changes by using only historical information.</li>
<li>Not tried, but it's probably around 0.195 (just my guess).</li>
</ol>
<p><a href="https://www.kaggle.com/longyin2" target="_blank">@longyin2</a> <br>
That’s fantastic! How did you integrate this 300sec model into your predictions?</p>
<p>While this is 100% legal under the competition rules, unfortunately it is probably not what the host wanted us to do.<br>
On the other hand, it has happened repeatedly in Kaggle that in order to win the competition, we need to use methods that would probably not work in a real-world scenario. I think this mismatch should be reduced in the design of the competition (this is also why I love the time-series code competition).<br>
I believe that time/stock NN features are useful even if future information is not available (e.g. you can use information from past periods of similar volatility), so I still believe that the host can get some meaningful insights from my solution.</p>
<ol>
<li><p>price is a real price calculated by <code>0.01 / tick_size</code>. n is my typo, and n_max is correct as you said. I'm sorry, but please see the notebook for the actual working code :)</p></li>
<li><p>Yes, I tried different distance metrics (manhattan, mahalanobis,…) as well as p (1, 1.5, 3,…). Using multiple kinds of NNs got better results.</p></li>
</ol>
<p>It took 15s/epoch x 50 epochs = 750 sec to train a single NN. Perhaps because my CNN is much smaller than the original MoA’s CNN.</p>
<p><a href="https://www.kaggle.com/rdizzl3" target="_blank">@rdizzl3</a> <br>
It would be possible. My first 6 submissions used groupkfold with NN features, without time-id recovery, reaching 0.191 with a single lgb.</p>
<p>t-SNE compresses the dimension so that similar feature vectors are placed close to each other. In this case, I used the price of each stock-id as a feature vector of 112 dimensions.<br>
t-SNE is often used to compress to 2D for visualization, but if you compress to 1D, you can arrange the time-id on a straight line.</p>
|
Optiver Realized Volatility Prediction
|
51st place solution after 1st rerun
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and Optiver for this challenge and such a great contribution to educating us in both data science and finance.</p>
<p>My main driving force idea for this competition was a "Concept Drift" and "Data Drift" described in "MLOps: From Model-centric to Data-centric AI" by Andrew Ng <a href="https://www.deeplearning.ai/wp-content/uploads/2021/06/MLOps-From-Model-centric-to-Data-centric-AI.pdf" target="_blank">here</a><br>
The stock prices in the validation and public test set are too close in time to the training set - so the drift in the distribution of the features and target are very likely for the forthcoming private test set. The best way would have been to check it with external data and estimate the drift, but I have not done that.</p>
<p>My solution notebook is <a href="https://www.kaggle.com/isakev/opt-09-25-seeds3-time-id-kmeans-noaug-groupkfold?scriptVersionId=75813300" target="_blank">here</a>. I missed the discussions on reengineering stock_ids, prices, and time_id sequentiality - so was not able to use that idea and probably could not. The notebook is based on the public notebooks from <a href="https://www.kaggle.com/alexioslyon/lgbm-baseline" target="_blank">AlexiosLyon</a> and <a href="https://www.kaggle.com/tatudoug/stock-embedding-ffnn-my-features" target="_blank">Douglas Marques </a>,<br>
The changes I made in this notebook that might have contributed to improving the private leaderboard score, even though most of them worsened the public LB score, are the following:</p>
<ul>
<li>dropping stock_id as a feature in one of the blended models - worsened the public LB score</li>
<li>normalizing and quantile-transforming the train and test dataframes jointly - worsened the public LB score</li>
<li>GroupKFold by time_id - worsened the public LB score.</li>
<li>added a 15-label kmeans feature (kmeans in the feature space) - improved the public LB score (see the sketch after this list)</li>
<li>blending 3 different seeds of the LGBM model</li>
<li>blending weights were found (approximately) by evaluating scores on validation datasets.</li>
</ul>
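<p>A sketch of the kmeans-label feature mentioned in the list above (my own illustration; <code>feature_cols</code>, <code>train</code> and <code>test</code> stand in for the engineered feature matrix in the linked notebook):</p>
<pre><code>from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(train[feature_cols])
km = KMeans(n_clusters=15, random_state=0).fit(scaler.transform(train[feature_cols]))
train['kmeans_label'] = km.labels_
test['kmeans_label'] = km.predict(scaler.transform(test[feature_cols]))
</code></pre>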
<p>What did not work:</p>
<ul>
<li>augmentation by increasing price amplitudes and respective target (volatility) values</li>
</ul>
<p>One of the lessons I learned too late in the competition:</p>
<ul>
<li>It is allowed to use privately trained models.<br>
Since I learned this too late, all models in this notebook are trained from scratch.</li>
</ul>
<p><a href="https://www.kaggle.com/yamqwe" target="_blank">@yamqwe</a>. You might be right. Update: I dropped to 421st place on the second rerun.<br>
The idea was that as time goes stock price behavior patterns change. So, dropping stock_id supposedly would make the models concentrate more on patterns rather than the stock names.</p>
|
Optiver Realized Volatility Prediction
|
7th place main ideas
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I haven't tried not cutting up the segment at all, but when I went from cutting it up into 100 second windows to cutting it up into 50 or 25 second windows my performance decreased</p>
<p>Ah OK, if both give similar results then I don't think 'gain' would've worked that well either.<br>
BorutaShap should be a more accurate algorithm in theory, since as far as I understand model importance only considers the importance of each individual feature and not how different features may interact with each other. </p>
<p>Say one of your features can be described as a relation of 2 other features, but the 2 other features are unimportant: if I understand it correctly, if you pass those 2 features anyway your model may infer that relationship by itself and mark those 2 features as important. Or it could mark all 3 features as unimportant because it splits on each individually only 33% of the time, but removing all 3 would still decrease your performance. <br>
Either way, the optimal solution would be to remove the 2 features used to calculate the 3rd, and I believe model importance on its own may not be able to arrive at that conclusion.</p>
<p>Features such as 'trade_size_tau' from most public notebooks were clearly meaningless to GBTs (since tree models shouldn't be affected by monotonic transformations of their features), yet they still ranked high in importance, but once I removed them my CV improved.<br>
This added complexity does obviously come with a clear drawback: speed, and I'm sure there are better solutions out there, but this is the only solution that worked for me, although I admit I didn't really try that many techniques, all I've tried apart from BorutaShap was:</p>
<ul>
<li>removing the bottom 100 features based on importance </li>
<li>removing highly correlated features based on Pearson coefficient</li>
</ul>
<blockquote>
<p>The amount of time it will save to create features</p>
</blockquote>
<p>Also it didn't actually save that much time for feature creation, basically instead of calculating these features for seconds_in_bucket [0:600, 100:600, 200:600…] I'm calculating those same features for [0:100, 100:200, 200:300…]. It may have had some impact but it wasn't significant overall</p>
<p>Picking features by feature importance didn't work for me but perhaps with a different importance type I could've had more success. I used the default option which I've now learned was 'split' so perhaps with 'gain' it could've worked.</p>
<p>Thanks for the comment, and also thanks for the (much cleaner) code snippet!</p>
<blockquote>
<p>I figured instead of calculating the mean, we can just calc c, the intercept and pass along with m (gradient) and error. Still 3 values like Micheal's code.</p>
<p>'c' gives a similar but slightly different value than mean, but my feeling is, an ML model will give almost identical output regardless if one is use instead of the other</p>
</blockquote>
<p>I had actually started with this, then replaced it with something closer to the mean and somehow my CV improved, so I stuck with it. <br>
I hadn't put much thought into it initially, but now that I think of it… <br>
(EDIT: I've just tested it, CV got significantly worse)</p>
<blockquote>
<p>Then after that.. why not a quadratic, with 4 or 5 parameters.. this will fit even better, so hopefully give some improvements.. did you consider this also?</p>
</blockquote>
<p>I haven't tested this but I don't see it having a significant impact. This will fit better but it may be less meaningful in terms of estimating future values.<br>
Initially I wrote a couple of paragraphs of what I have thought of implementing but it was way too long so TLDR: potentially with access to the whole dataset we could find out which function could be more appropriate to estimate the evolution of each feature but I don't think a polynomial would be the way to go.</p>
<p>I have no clue what "tau" means exactly but it's a feature found in many public notebooks such as "lgbm baseline" linked above, defined as (among others):</p>
<pre><code># replace by order sum (tau)
train['size_tau'] = np.sqrt(1/train['trade_seconds_in_bucket_count_unique'])
</code></pre>
<p>Firstly, many thanks to Optiver for sponsoring this competition. It allowed me to gain a lot of knowledge about finance and data science in general, and it ended up being very interesting.</p>
<h5>About time_id aggregations</h5>
<p>It's probably not a surprise to anyone that my public LB placement can be mostly attributed to "The Leak", so credit to whoever posted <a href="https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/256725" target="_blank">this</a> (and then deleted their account).</p>
<p>I used a similar approach to the one described in <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>'s <a href="https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/274970" target="_blank">post</a>, with a few minor differences such as <a href="https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/274970#1531808" target="_blank">this</a> (basically I also re-ordered the NN time-ids and derived some features from there).</p>
<p>It goes without saying that such a solution could not actually be implemented in production, so this post will focus on all the other features & techniques I've learned about and that I haven't seen mentioned elsewhere.</p>
<p>I'd also like to say that I'm quite inexperienced in both data science and quantitative finance, and I don't know what my LB placement would be without time_id aggregations, so don't take this for a top solution. These ideas improved my CV and LB but I'm not sure if they would, by themselves, get me anywhere in the top LB spots.</p>
<h5>Baseline & models</h5>
<p>My submission started as a fork of the <a href="https://www.kaggle.com/alexioslyon/lgbm-baseline" target="_blank">"lgbm baseline"</a>, thanks to <a href="https://www.kaggle.com/alexislyon" target="_blank">@alexislyon</a> for providing such a helpful learning resource.</p>
<p>Although I changed most the the feature engineering, my final model is still based on a 50/50 LGBM and FFNN ensemble.</p>
<h5>Trend</h5>
<p>Most public solutions are using aggregations by seconds_in_bucket over 100 second time periods. I've found out that instead of passing those features as is, it is more efficient to fit a linear regression model on those features, and pass the mean, slope and error:</p>
<p>Function to get mean, slope, error of input array</p>
<pre><code>import numpy as np
from scipy.linalg import lstsq  # assumption: scipy's lstsq (np.linalg.lstsq also works)

def get_dir_stats(arr):
    mat = np.array([np.ones(len(arr)), np.arange(len(arr))]).transpose()  # design matrix [1, t]
    coefs, err, _, _ = lstsq(mat, arr)
    # return fitted value at the window mid-point (~mean), slope, residual error
    return np.array([coefs[0] + (len(arr)/2)*coefs[1], coefs[1], err])
</code></pre>
<p>Whenever we would like to analyse the way a feature behaves over time, we would calculate that feature over 100 second time periods, and pass the array of those features (for example [rv_0-100, rv_100-200, …]) to the above function.</p>
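<p>For example (an illustrative call with toy numbers):</p>
<pre><code>import numpy as np

# realized volatility of each 100-second window of one time_id
rv_windows = np.array([0.0021, 0.0019, 0.0024, 0.0030, 0.0028, 0.0035])
rv_mean, rv_slope, rv_err = get_dir_stats(rv_windows)
</code></pre>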
<p>This ended up improving my CV and LB by about 0.00100, and I believe this also helped improve readability, which in turn helped me for feature selection.</p>
<h5>Regarding stock_id</h5>
<p>Stock_id as a categorical feature wasn't used in my final submissions because I believe it wouldn't scale well with future data. As an extreme example, imagine what would happen if one of the stocks in our dataset was GameStop and the dataset was taken before 2021. </p>
<p>I did use stock_id aggregations, but only over the N closest time_ids. This does require price denormalisation but I still think that would be the way to go even if we only had access to past time_ids for each prediction.</p>
<p>This probably isn't such an original concept but I haven't seen it discussed elsewhere. I'm interested to see what would be the most successful approach without time_id aggregations.</p>
<h5>Feature selection</h5>
<p>I used BorutaShap with an XGBRegressor model for feature selection. After running it on a kaggle kernel overnight I was still left with about 60 features I was unsure about. I tried re-running it again with the unimportant features removed, but I kept getting diminishing returns. If I had more time I could try to save the state of the program and resume it in a new session every 8 hours. Alternatively, I could try to get access to a decent CPU and run it locally.</p>
<p>Interestingly, most of the features deemed unimportant were domain specific (most quarticity features, a lot of trade features from the "lgbm baseline" notebook etc.) and not part of the increasing amount of aggregated features.</p>
|
Optiver Realized Volatility Prediction
|
My journey: 1st public -> 154 private
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>If someone is interested, I've made my code available: <a href="https://www.kaggle.com/stassl/rnn-stock-attention" target="_blank">https://www.kaggle.com/stassl/rnn-stock-attention</a> - it includes only minimal key pieces of my solution, I refactored it for simplicity/performance and to fit kaggle RAM limits, you won't probably get the best score right away, but you might try training in single-stock mode and see how it goes ) It also doesn't include 5D version that uses neighbors, I think its implementation is not that interesting as actually it requires minimal changes to the model (though it could be a good exercise to practice your tensor reshaping skills :) ), but it could slowdown training and also doesn't has much sense from a practical standpoint.</p>
<p>Hi! Here I'd like to share my experience with this competition. I wanted to wait until the first LB update on live data, and now after it happened and I slipped to the 1259th place I'm reconsidering whether I should post my solution at all, but I decided to post it anyway as I believe some of you might be still curious and I'll share in the end my thoughts on what's happened.</p>
<p>I don't have any financial background and this was my first financial competition, so from the very beginning I understood that I would hardly be able to compete with someone with deep domain knowledge through smart feature engineering. Luckily, NNs are doing a pretty good job these days learning from raw data, so I decided to try this approach. Basically, most of my work during this competition was trying different NN architectures and only minimal FE. </p>
<p>I think one of the key/unique aspects and challenges of the competition was stock interaction/correlation, and getting a good score was largely determined by how effectively you were able to leverage it. When doing manual FE + lightgbm it was solved using clustering and aggregations by time_id. For NN there exists attention mechanism which I found more flexible as it can attend to any stock with any weight and you don't have to predefine any number of clusters. Thus it becomes clear that you should feed into the NN not only data for current stock_id, but for all other stocks from the same time_id, i.e. your batch will have 4D shape: (N, 112, 600, F).</p>
<h3>Architecture</h3>
<p>I have already seen several solutions using a similar idea, so I don't pretend to present something unique here, and I also don't argue that it is the best possible architecture; it's just what I found worked better for me than the other things I've tried.</p>
<p>Here is the baseline architecture which I used (you can open it in another tab to enlarge):</p>
<p><img alt="orvp nn arch" src="https://user-images.githubusercontent.com/4602302/135931402-7175092d-d07d-412c-9825-b36b89a3062d.png"></p>
<p>In addition to this architecture I had 2 other versions of it: simplified and more advanced. The simplified one received 3D input: (N, 600, F) - only data for the current stock; the more advanced one received 5D input: (N, NN, 112, 600, F), where NN is the number of nearest neighbors. But there weren't many differences in code between them: 5D version = 4D version + one additional linear layer/attention for combining NN, and 4D version = 3D version + stock attention layer. Though there was quite a difference in training time:</p>
<ul>
<li>3D version: 20 min for 30 epochs (single fold)</li>
<li>4D version: 4 hours for 30 epochs (single fold)</li>
<li>5D version: 24 hours for 20 epochs (single fold)</li>
</ul>
<p>So considering similarity of these versions, when tuning hyper parameters, I tried them on simpler versions if they were applicable, and only if the results were superior, applied them to more advanced versions.</p>
<p>Essentially my 3D model is single stock input -> single stock output, while my 4D & 5D models are multiple stock input -> single stock output. I've seen other solutions use multi-inputs -> multi-output architectures and I think they are perfectly valid alternatives and probably I should have used them as well, as it could reduce training time.</p>
<h3>Attention variants</h3>
<p>As attention over stocks was one of the key elements, I experimented with it quite a bit:</p>
<ul>
<li>placing attention after RNN</li>
<li>instead of attention feeding current stock data + mean of other stocks (attention with equal weights)</li>
<li>multiple head attentions</li>
<li>instead of attention using full connected layer</li>
<li>trying different initializations</li>
</ul>
<h3>Other NN architectures</h3>
<p>Besides that, I experimented with other types of networks like CNNs and especially Transformers, as with all the recent advancements they are becoming dominant in many domains. There is always a chance that I had some bugs, but no matter how I tried I wasn't able to achieve better results with them than with RNNs. In fact, during many experiments I observed that simpler models often worked better than more complicated ones. Maybe the powerful attention that transformers provide was just superfluous for this task, and a single pass over the sequence was enough. The fanciest architecture that I've tried was inspired by Transformer-in-Transformer and MLP-Mixer: namely, I tried to interleave transformer layers with attention over time/over stocks, the way TNT interleaves attention within a patch with attention between patches.</p>
<h3>Overfitting</h3>
<p>Another important aspect that I'd like to talk about is overfitting. As you can see from the diagram, the input to the network contains data from all stocks, thus effectively reducing the number of different samples to 3830 (the number of time_ids). So it becomes easier to overfit. This is also where attention helps: in fact only the first few convolutional layers "see" all the data (and their weights are shared for all stocks), but after attention over stocks the data is mixed for each stock in a unique way, with the data from the current stock dominating. Dropout and weight decay didn't help much, but the most effective way to prevent overfitting was reducing the number of parameters - despite large inputs and long training time, even for the 5D version the number of parameters was ~60K. So the NNs are really tiny.</p>
<h3>Normalization</h3>
<p>Also would like to mention data normalization, as it is quite important for NN performance unlike tree-based models. There were 2 things that improved my score:</p>
<ul>
<li>input feature normalization. I tried different transforms (GaussRank, Power, Quantile), but StandardScaler worked best for me. I also log transformed all sizes/counts/volumes. An important point here was to normalize all features per stock instead of globally - it gave me a noticeable score improvement.</li>
<li>LayerNorm over all stocks/features. Usually for RNNs and Transformers LayerNorm is used instead of BatchNorm, so here I also used it, but in our case instead of the usual 3D shape (B, T, F) for such networks we had multi-series data of 4D shape (B, ST, T, F), so I thought it might be helpful to normalize not only over the feature dimension but over stocks as well. Indeed it also gave me a good score boost. </li>
</ul>
<h3>Augmentation</h3>
<p>I tried several augmentation ideas that didn't work:</p>
<ul>
<li>as it is known that seconds_in_bucket are not perfectly synchronized anyway, we can try to randomly shift all stocks (independently) in time</li>
<li>multiplying prices by some constant, as it doesn't change RV</li>
</ul>
<p>Of course I should mention recovering time_id order and using nearest neighbors (by recovered prices). I wouldn't talk much about it, as it was already discussed in other solutions, but in case you are still not aware of it, you can check <a href="https://www.kaggle.com/stassl/recovering-time-id-order" target="_blank">my notebook</a>.</p>
<p>Comparing scores of all my 3-4-5D versions of models, each higher version gave me ~0.015 improvement, so public LB scores for 3D was ~0.215, 4D - ~0.2, 5D - ~0.185. </p>
<p>My best submission without using nearest neighbors scores on public 0.198 using ensemble of 10 trained 4D models. It is of course not the best score, but I almost stopped improving it in the last 2 weeks and still, I think it is not that bad if only one type of models was used. I also selected it as one of 2 final submissions to hedge risks related to my best 5D solution. My best 5D submission is an ensemble of 6 models trained using different NN count=5,6,7,8 and distance metrics (p=0.5, 1, 2).</p>
<h3>Failure analysis</h3>
<p>Now probably the most intriguing part of my post - why my private LB position dropped so low. Of course overfitting is not uncommon on kaggle, though I have a feeling that there should be something more specific here. I can't say I'm 100% sure that I found the true reason, but that is the most plausible of my hypotheses (which I have a dozen) for now.</p>
<p>As you are probably aware, a week before the competition end we had this <a href="https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/274222" target="_blank">Error handling reminder</a>, which I found a bit frustrating (in that it was posted so late). Of course we should make our solutions robust without additional reminders, but it is hard to foresee all possible future edge cases if they weren't present in either the train or the public sets, especially as I have very little experience with the stock market. I actually learned that stocks could be split shortly before that post, when I tried to recover real tickers by looking visually at price shapes and comparing them. I noticed that stock 31 had a specific W-shape very similar to the GE (General Electric) stock, but their price ranges didn't match. I was quite puzzled, but then I found that <a href="https://finance.yahoo.com/news/reverse-stock-split-ge-trading-145334436.html" target="_blank">GE was reverse split 1-for-8 on July 30</a>, and it all clicked for me. At that moment competition at the top of the LB became really hard, so I worked a lot on training new models and ensembling them. I tried to think how my models could be affected by the mentioned edge cases, and I believe my model can survive some of them, but at the same time my thinking was that if something like this happens, a lot of models will perform badly, and if the hosts see this they'll handle the problem on their side, as was proposed by some participants. So I was a bit lazy to handle it on my own, and actually I'm not sure I could have done much at that late stage, as I had a lot of models trained already. TBH, I still don't know what the best way to handle it is. I'm quite surprised that lots of models survived this, even though I haven't seen many solutions that mention it explicitly. Maybe solutions that include manual FE, boosting trees and other architectures are less affected by this (in my case size features accounted for 1/4 of all my 21 input features). In some solutions I saw separate normalization for train and test, which I thought was not quite correct, but it definitely can help survive stock splits that happen between train and test.</p>
<p>To confirm my hypothesis I conducted a quick experiment - I just evaluated my trained model on data with single stock (31) volumes scaled:</p>
<pre><code># simulate the 1-for-8 reverse split on the test data of stock 31
if stock_id == 31:
    df_book['ask_size1'] /= 8
    df_book['ask_size2'] /= 8
    df_book['bid_size1'] /= 8
    df_book['bid_size2'] /= 8
    df_trade['size'] /= 8
</code></pre>
<p>As a result, the score for the 4D model dropped from 0.198 to 0.22, which is very similar to how it dropped on the private LB. As I feed all stocks to my model at the same time, it could affect other stocks as well, so I plotted RMSPE by stock and found that fortunately only the single stock's result is strongly affected (arguably because other stocks don't pay much attention to stock 31 for their predictions); otherwise it could have been even worse.</p>
<p><img src="https://user-images.githubusercontent.com/4602302/137599357-efd6b427-df7c-4ca7-95a2-230fe9e67e45.png" alt="image"></p>
<p>That's probably it. I would like to hear your thoughts on how you handled (if you handled) this issue in your solutions, and how robust you think your model is against such changes, as I believe this topic wasn't discussed a lot.</p>
<p>Update:<br>
I made my code available: <a href="https://www.kaggle.com/stassl/rnn-stock-attention" target="_blank">https://www.kaggle.com/stassl/rnn-stock-attention</a> - it includes only the minimal key pieces of my solution. I refactored it for simplicity/performance and to fit kaggle RAM limits, so you probably won't get the best score right away, but you might try training in <em>single-stock</em> mode and see how it goes ) It also doesn't include the 5D version that uses neighbors; I think its implementation is not that interesting as it actually requires minimal changes to the model, but it could slow down training and also doesn't make much sense from a practical standpoint.</p>
<p>How did you combine stocks if not using attention? I believe the notebook you are referencing has a 3D input (B, T, F), so it contains only single-stock data as input, but if you have an input with many stocks you need to combine them somehow. I don't think that using stocks as channels for a 1d convolution without attention is a good idea, as you are reducing the number of training samples to 3830 and there could be a high risk of overfitting, whereas attention might reduce this problem. In my case attention was just a fancy way of saying weighted sum along the stock dimension, so you don't actually need a Keras attention layer. It was like this: stock_id->stock embedding->linear layer->softmax->element-wise multiplication with the tensor (B, ST, T, F) with stock weights broadcasted along the T & F dimensions->sum along the stock dimension to get a 3D tensor of shape (B, T, F). I'm now considering a multi-input->multi-output architecture and placing attention after the RNN; in that case a more complicated Keras attention layer might be used with keys and queries, though simple attention based on stock embeddings might still be enough.</p>
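<p>To make the shape bookkeeping concrete, here is a minimal sketch of that weighted sum, written in PyTorch for brevity (layer sizes and names are mine, not the author's exact code):</p>
<pre><code>import torch
import torch.nn as nn

class StockAttention(nn.Module):
    # "embedding -> linear -> softmax -> weighted sum over the stock dimension"
    def __init__(self, n_stocks=112, emb_dim=16):
        super().__init__()
        self.emb = nn.Embedding(n_stocks, emb_dim)
        self.to_weights = nn.Linear(emb_dim, n_stocks)

    def forward(self, x, stock_ids):
        # x: (B, ST, T, F) sequences of all stocks; stock_ids: (B,) target stock per sample
        w = torch.softmax(self.to_weights(self.emb(stock_ids)), dim=-1)  # (B, ST)
        w = w[:, :, None, None]            # broadcast weights along T and F
        return (x * w).sum(dim=1)          # (B, T, F)
</code></pre>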
<p>Also I would recommend using RNNs instead of CNNs, or combining a few CNN layers with a few RNN layers. And as I wrote, per-stock normalization and layer norm layers were quite important, while in the notebook there are global means/stds.</p>
<p>Actually I found that you can easily overcome the memory issues with a memmapped numpy array, and from my quick experiment it shouldn't slow down training too much, especially if you have a multi-input->multi-output architecture.</p>
<p>Maybe some kind of caching could be used for a very large number of NNs, as you'll be reusing the same time_id multiple times as a neighbor for other time_ids. So maybe you can cache the output of the last layer before combining NNs and provide this cache as input to the model, in a similar fashion to how you can pass context/hidden state to RNNs. You can start with a cache of all zeroes and update its state after each forward pass. Then for inference you have to make 2 passes: the 1st for filling the NN cache, the 2nd for the final predictions.</p>
<p><a href="https://www.kaggle.com/eduardopeynetti" target="_blank">@eduardopeynetti</a>, yeah I used only a few NNs around current time_id, as from quick <a href="https://www.kaggle.com/stassl/recovering-time-id-order" target="_blank">experiments</a>, I found that feeding more NNs plateaus quite quickly, though I haven’t tried very large numbers like 100 NNs. Indeed it could become very slow and you’ll run out of memory, if using like I did. Even training with 8 NNs was not very fun, as it took 24 hours to train on single fold. Reducing input dimensions with averaging might help, though I’m not sure how much gain you’ll get from much larger number of NNs. Did you observe significant improvements with larger NN context?</p>
<p><a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>, thanks for the hints!</p>
<p>I had to look at the data one more time. I agree that there is a lot of variability in volumes over time, and that is a good reason to normalize them per time_id, as you did; that's a nice idea. Though I wouldn't conclude from the plots below that there were stock splits, or at least not by 8x.</p>
<p><img src="https://user-images.githubusercontent.com/4602302/137930862-0aa046d4-0f2b-4d1e-a52f-8fbfca6922db.png" alt="image"></p>
<p>Considering adversarial validation - I never used it. Did you train a binary lgbm classifier on your features using splits as in time-series CV and look at feature importances?</p>
<p>As for the aux loss, it didn't give me that much, but there was definitely an improvement of ~0.001-0.0015 on public. I added it in the last week, and it allowed me to squeeze out the last bits of performance.</p>
<p><a href="https://www.kaggle.com/yvonnef" target="_blank">@yvonnef</a> no templates, just from blank page - inspired by visualizations in this blog <a href="https://jalammar.github.io/illustrated-transformer/" target="_blank">https://jalammar.github.io/illustrated-transformer/</a> (the guy uses Keynote to create them)</p>
<p>Thanks for the hint, <a href="https://www.kaggle.com/fengpan23" target="_blank">@fengpan23</a>, though I'm not sure I completely understand it. Do you mean normalized prices or real (absolute/denormalized) for calculating volume = price * size? Because if taking normalized prices, I believe you can still suffer from the same issue if sizes change by an order of magnitude, don't you?</p>
<p>Thanks for your kind words, <a href="https://www.kaggle.com/yvonnef" target="_blank">@yvonnef</a> :) Indeed it took me some time to create the diagram - I used Figma for it, but I think any graphic editor will work as well, if it allows to create straight and curved lines :)</p>
<p>That's indeed an interesting idea, <a href="https://www.kaggle.com/lucasmorin" target="_blank">@lucasmorin</a>. I like how simple it is. I hadn't thought about it. The only doubt I have is that it could indeed help with stock splits, but how would it affect the result if there were no stock splits? Because absolute prices for some stocks might change 2x-3x over the 2020-2021 period, and I'm not sure how well they correlate with volatility.</p>
<p>Considering source code, I don't have a good readable version of it at the moment, and I'm not sure that full training will work in a kaggle kernel, as it requires more than 16GB of RAM because I have to keep all 3830x112x600x21 numbers in memory. As a starting point I would recommend looking at the source code of the other non-FE solutions by <a href="https://www.kaggle.com/gunesevitan" target="_blank">@gunesevitan</a> and <a href="https://www.kaggle.com/slawekbiel" target="_blank">@slawekbiel</a> - I found their code clean and readable. Though it might correspond to my 3D version of the model, without stock-by-stock interaction; as I said, the 4D and 5D versions differ from it only by a few additional reshapes and an attention/linear layer to combine results, so I hope it could be reproduced by looking at the architecture diagram - I tried to make it very close to my implementation.</p>
<p>Thanx, <a href="https://www.kaggle.com/alexlwh" target="_blank">@alexlwh</a>! I’ve already answered about the app in this topic - I used Figma to create the diagram.</p>
|
Optiver Realized Volatility Prediction
|
No Feature Engineering NN Only Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello <a href="https://www.kaggle.com/stassl" target="_blank">@stassl</a>,</p>
<p>You are right. After a convolutional neural network becomes deep enough, it should be able to capture long range dependencies. As you see, I already added a model scaling mechanism in my CNN just like EfficientNet. I noticed that scaling up my model's depth and width gave little improvement in score while training time increased linearly. That's why I decided to use non-local blocks, which are more efficient and actually shorten the training time. I was able to run one epoch in 20-25 seconds. In the original paper, a non-local operation is defined as one that computes the response at a position as a weighted sum of the features at all positions in the input feature maps. The set of positions can be in space, time, or spacetime, so the operation is applicable to image, sequence, and video problems.</p>
<p>My GRU output is connected to a sequence self-attention layer so tokens in the sequence attend to each other. It is kinda similar to non-local blocks. I'm not sure whether it is a coincidence or not, but they had very close OOF scores.</p>
<p>I only used stock_id embeddings with dropout. My inputs were (batch, 600, 25) and stock_id. The 3D input is fed to the RNN/CNN and stock_id is fed to the embedding layer. Features extracted from each input are concatenated before the regression head. </p>
<p>Thanks for your kind words <a href="https://www.kaggle.com/isakev" target="_blank">@isakev</a>. I don't think this solution has a chance to win gold, but if all competitors using the same set of features are overfitting, I might end up in the high silver zone. I think I have the simplest solution so far. I trained on my local machine and uploaded the models to Kaggle. I saw a rules clarification topic about requirements which mentioned that we can train models locally and upload them for inference.</p>
<h2>Introduction</h2>
<p>I have no idea what my final placement will be, but I wanted to share my solution before the competition finalizes, even though my public leaderboard score is not good. I didn't want to push my score higher, based on my previous forecasting competition experiences (missed gold medals twice).</p>
<p>I started with a LightGBM model and a feature engineering approach in the earlier stages, but I quickly noticed that the features I created intuitively weren't working most of the time. That's why I decided to follow a non-feature-engineering approach.</p>
<h2>Data Preparation</h2>
<p>I prepared two-dimensional order book (600 timesteps x 8 raw sequences) and trade (600 timesteps x 3 raw sequences) npy files for every stock_id-time_id pair. For book data, stocks are sorted by time_id and seconds_in_bucket, resampled to 600 seconds, and missing values are forward filled and backward filled. For trade data, missing time buckets are inserted for each stock, the data are sorted by time_id and seconds_in_bucket, resampled to 600 seconds, and missing values are filled with zeros.</p>
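<p>A minimal pandas sketch of this expansion for a single stock_id/time_id book slice (column names assumed to match the competition files):</p>
<pre><code>import pandas as pd

def to_600_steps(book_slice):
    # book_slice: raw book rows of one stock_id/time_id pair
    return (book_slice
            .set_index('seconds_in_bucket')
            .reindex(range(600))   # one row per second, 0..599
            .ffill()               # carry the last known book state forward
            .bfill()               # fill any leading gap
            .reset_index())
</code></pre>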
<h2>Cross-validation</h2>
<p>I used group kfold cross-validation with time_ids as groups just like most other participants. It was the most logical cross-validation choice for me since I didn't want to work too much on a custom one.</p>
<h2>Preprocessing</h2>
<p>I had to create some new sequences since my models weren't deep enough to approximate them. Those sequences were absolute log returns of the raw order book sequences, weighted average prices, and absolute log returns of the weighted average prices. I didn't create log returns of the trade sequences since trade data is very sparse. There were 25 sequences (22 from order book and 3 from trade) at the end. I used global means and stds of those sequences to normalize them, but for trade data, non-zero mean and std were used on non-zero values. Zero values in trade data weren't affected, so they became masks.</p>
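<p>A rough numpy sketch of this normalisation scheme (array shapes are my assumption): book sequences use global statistics, while trade sequences use the mean/std of their non-zero entries so zeros stay zero and act as masks.</p>
<pre><code>import numpy as np

def normalize(book, trade):
    # book, trade: (n_samples, 600, n_features) arrays
    book = (book - book.mean(axis=(0, 1))) / (book.std(axis=(0, 1)) + 1e-9)

    masked = np.where(trade != 0, trade, np.nan)   # ignore zeros in the statistics
    mu = np.nanmean(masked, axis=(0, 1))
    sigma = np.nanstd(masked, axis=(0, 1))
    trade = np.where(trade != 0, (trade - mu) / (sigma + 1e-9), 0.0)
    return book, trade
</code></pre>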
<h2>Models</h2>
<p>My ensemble consist of two models; 1D CNN and RNN.</p>
<p>The 1D CNN is a very small model with 4 convolution blocks (conv, batch norm, non-linearity), each followed by a non-local block. I decided to use non-local blocks because I thought long range dependencies were very important and small convolutions weren't able to capture them. The final non-local block is connected to a global average pooling layer and its features are concatenated with stock_id embeddings. I used <a href="https://docs.fast.ai/layers.html#sigmoid_range" target="_blank">SigmoidRange</a> after the final dense layer, which shortened the convergence time. This was the final score of the 1D CNN model.</p>
<pre><code>CNN1D
-----
Fold 1 - RMSPE: 0.234049
Fold 2 - RMSPE: 0.238894
Fold 3 - RMSPE: 0.225608
Fold 4 - RMSPE: 0.223012
Fold 5 - RMSPE: 0.222544
------------------------------
OOF RMSPE: 0.228914
------------------------------
</code></pre>
<p>My RNN is a stacked unidirectional GRU followed by self-attention. The attention scores are concatenated with stock_id embeddings, and the same regression head as in the 1D CNN is used. This was the final score of the RNN model.</p>
<pre><code>RNN
---
Fold 1 - RMSPE: 0.231432
Fold 2 - RMSPE: 0.242344
Fold 3 - RMSPE: 0.225047
Fold 4 - RMSPE: 0.225377
Fold 5 - RMSPE: 0.220146
------------------------------
OOF RMSPE: 0.228997
------------------------------
</code></pre>
<p>and their blended scores are</p>
<pre><code>Blend
-----
Fold 1 - RMSPE: 0.230628
Fold 2 - RMSPE: 0.238636
Fold 3 - RMSPE: 0.223709
Fold 4 - RMSPE: 0.221811
Fold 5 - RMSPE: 0.219708
------------------------------
OOF RMSPE: 0.227004
------------------------------
</code></pre>
<h2>Post-processing</h2>
<p>I tried to scale the predictions of the worst stocks in a data-driven way, but I wasn't able to improve my OOF score, so I didn't apply any post-processing to the predictions.</p>
<h2>What didn't work</h2>
<ul>
<li>Sequences like gradient of raw sequences, squared log returns, different variations of weighted average prices, log returns of trade sequences</li>
<li>Models like MLP Mixer and gMLP scored between 0.231 - 0.235 with the same sequences I used in the 1D CNN and RNN</li>
<li>Pretraining to approximate current realized volatility with only 8 raw sequences</li>
</ul>
<h2>Links</h2>
<p>Here are the links of everything I made for this competition.</p>
<p>Kaggle Notebooks:</p>
<ul>
<li><a href="https://www.kaggle.com/gunesevitan/optiver-realized-volatility-prediction-eda" target="_blank">Optiver Realized Volatility Prediction - EDA</a></li>
<li><a href="https://www.kaggle.com/gunesevitan/optiver-realized-volatility-prediction-1d-cnn" target="_blank">Optiver Realized Volatility Prediction - 1D CNN</a></li>
<li><a href="https://www.kaggle.com/gunesevitan/optiver-realized-volatility-prediction-mlp-mixer" target="_blank">Optiver Realized Volatility Prediction - MLP Mixer</a></li>
<li><a href="https://www.kaggle.com/gunesevitan/optiver-realized-volatility-prediction-inference" target="_blank">Optiver Realized Volatility Prediction - Inference</a></li>
</ul>
<p>Kaggle Datasets:</p>
<ul>
<li><a href="https://www.kaggle.com/gunesevitan/optiver-realized-volatility-npy-files" target="_blank">Optiver Realized Volatility - npy Files</a></li>
</ul>
<p>GitHub Repository</p>
<ul>
<li><a href="https://github.com/gunesevitan/optiver_realized_volatility_prediction" target="_blank">Optiver Realized Volatility Prediction</a></li>
</ul>
<h2>References</h2>
<p><a href="https://arxiv.org/abs/1711.07971" target="_blank">Non-local Neural Networks</a><br>
<a href="https://arxiv.org/abs/2105.01601" target="_blank">MLP-Mixer: An all-MLP Architecture for Vision</a><br>
<a href="https://arxiv.org/abs/2105.08050" target="_blank">Pay Attention to MLPs</a></p>
|
Optiver Realized Volatility Prediction
|
Public 12 Place Short Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>For the train set, I only aggregate features from train.<br>
For the test set, I only aggregate features from test, since the private test set will be further in the future than the public test set (which is what the LB reflects)… </p>
<p>Yes, you are right. <br>
I'm not actually creating 112 input layers but combined them into a tensor of shape (batch size, 112, feature dimension).</p>
<p>It also involves some coding effort to do the masked output and masked loss, since not every stock_id appears in every time_id.</p>
<p>sorry for not being clear above. I meant instead of using RNN, I used handmade time-series features</p>
<p><a href="https://www.kaggle.com/longyin2" target="_blank">@longyin2</a> <br>
They are features from book\trade parquet similar to the good work from <a href="https://www.kaggle.com/gunesevitan/optiver-realized-volatility-prediction-1d-cnn" target="_blank">https://www.kaggle.com/gunesevitan/optiver-realized-volatility-prediction-1d-cnn</a> </p>
<p>Using an RNN/CNN by itself only reaches 0.22-0.23x in CV, which was not good for me either.<br>
What made the difference was to further consider inter-stock relationships within the same time bucket by using yet another transformer.</p>
<p>Thanks to Kaggle, Optiver for hosting this competition, and thanks to a lot of public sharing from which I learned a lot. Finally thanks to my teammates for their efforts during the competition period.</p>
<h4><strong>Validation Strategy</strong></h4>
<p>Since the public\private LB represents the future market, the ideal approach would be a time-based validation split. However, since it was declared that there is no actual sequence relationship between different time_ids, the best way to do validation is time_id-based group k-fold validation.</p>
<p>The rule of thumb for validation becomes: </p>
<ol>
<li>Check every set of changes</li>
<li>if the cv holds or improves, we test on LB. </li>
<li>If LB is improving, we keep those features.</li>
</ol>
<p>The idea is that whatever features\models we created\tuned should at least generalize to different buckets of time. Also, we treat the LB as an extra validation fold that should better represent the private test set.</p>
<h4><strong>Modeling</strong></h4>
<h5><strong>[Ideal NN Design]</strong></h5>
<ul>
<li>Use RNN to extract book\trade features for each stock_id at each time_id</li>
<li>Use Transformer to capture inter-stock relationship</li>
</ul>
<p><img src="https://i.postimg.cc/bNyvxBDD/ideal-nn-arch.png" alt="NN Design"></p>
<h5><strong>[Evolution 1]</strong></h5>
<p>Issue: The training was too slow<br>
Solution: Replace RNN parts with handmade features<br>
- training is much faster<br>
- almost no difference in performance</p>
<p><strong>Performance: cv: 0.211, lb: ~0.200</strong></p>
<h5><strong>[Evolution 2]</strong></h5>
<p>Issue: As hinted at by the top team, we did not consider the relationship between time_ids.<br>
Solution: Use Nearest Neighbors to find top-N closest time_ids, and average the features for each stock from the time_id neighbors as new features.</p>
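<p>A minimal sketch of that neighbor-averaging step (the array layouts are hypothetical): time_matrix holds one summary row per time_id used for the similarity search, and per_stock_features holds the per-stock features that get averaged over the neighbours.</p>
<pre><code>import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbor_features(time_matrix, per_stock_features, n_neighbors=10):
    # time_matrix: (n_time, d) one row per time_id, used to measure similarity
    # per_stock_features: (n_time, n_stocks, n_features)
    knn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(time_matrix)
    _, idx = knn.kneighbors(time_matrix)          # (n_time, n_neighbors + 1)
    idx = idx[:, 1:]                              # drop the time_id itself
    return per_stock_features[idx].mean(axis=1)   # averaged neighbour features
</code></pre>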
<p><strong>Performance: cv: ~0.198, lb: ~0.189</strong></p>
<h5><strong>[Blending]</strong></h5>
<p>Transformer + Tabnet + NN with different feature sets.</p>
|
Optiver Realized Volatility Prediction
|
Public 1398th → Private 24th Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As the title says, we experienced a great shakeup. Although it might be a total fluke, I would like to give a brief solution summary.<br>
<strong>Final submission code is <a href="https://www.kaggle.com/shigeria/optiver-lgbm-tabnet-nn" target="_blank">here</a>.</strong></p>
<p><strong>1. features</strong><br>
We added some features used in the following Notebooks.<br>
<a href="https://www.kaggle.com/tommy1028/lightgbm-starter-with-feature-engineering-idea" target="_blank">https://www.kaggle.com/tommy1028/lightgbm-starter-with-feature-engineering-idea</a><br>
<a href="https://www.kaggle.com/denisvodchyts/lgbm-slightly-different-features" target="_blank">https://www.kaggle.com/denisvodchyts/lgbm-slightly-different-features</a><br>
<a href="https://www.kaggle.com/alexioslyon/lgbm-baseline" target="_blank">https://www.kaggle.com/alexioslyon/lgbm-baseline</a><br>
Also, we added the original price discussed in some discussions (the original discussion is now deleted).<br>
Because we could not get a good public LB score, we finally bet on features that seemed to be largely affected by the leak and to have an effect on the private LB, even though they did not work on the public LB.<br>
Furthermore, we chose different ways of filling in missing values (0 or np.mean depending on the feature).</p>
<p><strong>2. validation and models</strong><br>
We used group k-fold by stock_id, but for the NN the group k-fold was based on KNN algorithms, while for TabNet and LGBM we just used sklearn's group k-fold.<br>
The NN model architecture is shown below.</p>
<pre><code>import tensorflow as tf
from tensorflow import keras

# stock_embedding_size, cat_data (the stock ids) and hidden_units are defined earlier in the notebook
def base_model():
    # Two inputs: a single stock id and the 222 numerical features
    stock_id_input = keras.Input(shape=(1,), name='stock_id')
    num_input = keras.Input(shape=(222,), name='num_data')

    # embedding, expanding and concatenating
    x = keras.layers.Embedding(max(cat_data) + 1, stock_embedding_size,
                               input_length=1, name='stock_embedding')(stock_id_input)
    y = tf.expand_dims(num_input, axis=1)
    x = keras.layers.Concatenate(axis=-1)([x, y])

    # recurrent feature extractor followed by average / max pooling
    x = keras.layers.Bidirectional(tf.compat.v1.keras.layers.CuDNNLSTM(256, return_sequences=True))(x)
    x = keras.layers.Bidirectional(tf.compat.v1.keras.layers.CuDNNGRU(128, return_sequences=True))(x)
    avg_pool = keras.layers.GlobalAveragePooling1D()(x)
    max_pool = keras.layers.GlobalMaxPooling1D()(x)
    conc = keras.layers.concatenate([avg_pool, max_pool])
    out = keras.layers.Flatten()(conc)

    # Add one or more hidden layers
    for n_hidden in hidden_units:
        out = keras.layers.Dense(n_hidden, activation='swish')(out)
    # out = keras.layers.Concatenate()([out, num_input])

    # A single output: the predicted target
    out = keras.layers.Dense(1, activation='linear', name='prediction')(out)
    model = keras.Model(
        inputs=[stock_id_input, num_input],
        outputs=out,
    )
    return model
</code></pre>
<p>We ensembled NN * 0.4 + Tabnet * 0.4 + LGBM * 0.2</p>
<p>Finally, we thank the competitors in this competition and the host! Sorry for the poor English, and thanks for reading!</p>
|
Optiver Realized Volatility Prediction
|
Public 23rd Solution - LB:0.1948
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/stassl" target="_blank">@stassl</a> , congrats on your top score! You were unstoppable! <br>
I also tried to follow your public kernels but didn't find those genius ideas like other top scorers at the end. </p>
<blockquote>
<p>How you didn't get lost with all of them :)</p>
</blockquote>
<p>This is my second competition on Kaggle. After reading the winning solution of MoA, I thought this is what people normally do on Kaggle, but it seems we were pushing it too hard this time. 😅</p>
<p>I tried both transformations at the very beginning, before we dived into 0.19x, they both worked for me at that time, so I kept them and never questioned their loyalty :)</p>
<p>Hi guys, my experiments were only performed on our own feature set, so I'm not sure if my findings can be generalized. What I found is, for features, the power transformation certainly helped a lot, probably not much difference with quantile transformation. For target, I found the log transformation works when eps is set to a small number like 1e-6 or 1e-9, it didn't work so well when I try eps=1e-3 or eps=1 (log1p). </p>
<blockquote>
<p>I was afraid at some point that it would be too easy to deduce what I'm using based on it, but as it turned out there still was some challenge.</p>
</blockquote>
<p>I very much agree with what you said in nyanp's post, "it was very tempting to continue improve your best model". I guess, for me, this somewhat limited my thinking :)</p>
<p>This makes me recall an article (in Chinese) I read recently. It describes life as an optimisation problem and how to use 'gradient descent' to optimise it. However, it forgets to mention that life is not a convex problem and people can easily be trapped in a local optimum by their starting point. 😂</p>
<blockquote>
<p>I don't feel very safe with upcoming real life stage.</p>
</blockquote>
<p>No one is safe here, but as long as they sample the data in the same way as done for training, I think your solution will still be superior. Good luck for the next stage! 👍</p>
<p>Hi <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> , thank you for sharing this great work! <br>
We found that ensembling models trained with different CV schemes showed some improvements. Your TabNet models were trained with KFold, and you had very good control over model sizes and training epochs, so they are great candidates. 👍<br>
Best wishes to you also for the next phase!</p>
<p>For power transformation or other transformations on features, we just want to make the optimisation problem (the training process) more 'regular', so optimisers can spend 'less effort' to find the optimal solution. The motivation is similar to trying different optimisers :)</p>
<p>Good eyes! The weights of 501 were initialised by optimising CV and tweaked by our intuition from the LB. The weights of 601 were purely optimised by CV. We also noticed this embarrassing negative weight and tried to remove catboost and deep forest, but we didn't have enough time and submissions to do so at the end.</p>
<p><a href="https://www.kaggle.com/chrisrichardmiles" target="_blank">@chrisrichardmiles</a> To be honest, I don't know the answer. On one hand, for example, if we have a binary classifier with AUC < 0.5, we can simply take logical negation to make its AUC > 0.5, so it seems ok to have negative weights. On the other hand, our weights were optimised by CV; however, we can only partially trust the CV since the time_id doesn't follow a chronological order. If we consider the test set as an extra fold, clearly the performance of catboost was not good, so we should consider to remove it.</p>
<p>Hi <a href="https://www.kaggle.com/kewellcjj" target="_blank">@kewellcjj</a> , I don't really have a reference for NN, but transformation on features for traditional ML is quite common, for example, Chapter 3.8 of <a href="http://appliedpredictivemodeling.com/" target="_blank">Applied Predictive Modelling by Kuhn and Johnson</a> :)</p>
<p>Averaging more seeds only stabilizes the output. The improvement on CV wasn't fully transferred to the LB, which means the gap between CV-AVG and LB is much smaller than the gap between CV and LB in my case. So you didn't miss much :)</p>
<p>The competition is finally over, thanks to Kaggle and Optiver for hosting such a challenging yet fun competition! No matter what the final ranks will be, all participants deserve a big applause for their hard work over the past three months. 👏🏻 👏🏻 Also, special thanks to my teammates <a href="https://www.kaggle.com/xiaowangiiiii" target="_blank">@xiaowangiiiii</a> and <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> . It was an enjoyable collaboration.</p>
<p>At the time I'm writing this post, we rank 23rd with a public LB score of 0.1948.</p>
<p>We do have some concerns about our solution, and it wouldn't be a surprise if we were shaken down at the end. However, we decided to take a calculated risk and chose the following solution as our final submission. </p>
<h1>Solution</h1>
<p>Our public LB 0.1948 submission was an ensemble of the following:</p>
<ul>
<li>Our own feature set (named 501) + CV split by time_id</li>
<li>The public feature set (named 601) + CV split by time_id</li>
<li>The public TabNet solution + KFold CV</li>
</ul>
<p>We built quite a lot of models for increasing the robustness. However, this wasn't necessary at all if you have a very good set of features or signals.</p>
<p>The best single-model solutions are the MLP and 1dCNN with our own feature set: LB 0.1962. A simple blend of these two models gives LB 0.1954.<br>
<img src="https://raw.githubusercontent.com/0xleowang/Kaggle-Optiver-Solution/main/Optiver-models.png?token=ABKAYG5E4MGHYHHDO7W5UZLBLT536" alt="optiver-models"></p>
<ol>
<li>CV was generated by randomly splitting time_ids into different folds. </li>
<li>'Seed' means the number of seeds used for generating CV folds.</li>
<li>'CV-AVG' means the average of OOF rmspe scores with CV generated by all random seeds.</li>
</ol>
<p>References:</p>
<ul>
<li>LGB: <a href="https://www.kaggle.com/alexioslyon/lgbm-baseline" target="_blank">https://www.kaggle.com/alexioslyon/lgbm-baseline</a></li>
<li>1dCNN (simplified) by <a href="https://www.kaggle.com/code1110" target="_blank">@code1110</a> :<a href="https://www.kaggle.com/code1110/janestreet-1dcnn-for-feature-extraction-infer" target="_blank">https://www.kaggle.com/code1110/janestreet-1dcnn-for-feature-extraction-infer</a></li>
<li>TabNet by <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> : <a href="https://www.kaggle.com/datafan07/optiver-volatility-predictions-using-tabnet" target="_blank">https://www.kaggle.com/datafan07/optiver-volatility-predictions-using-tabnet</a><br>
We thank these authors for sharing their great work!</li>
</ul>
<h2>Features</h2>
<p>Here are the main ideas of our own feature set:</p>
<ul>
<li>Forward filled the book data</li>
<li>Motivated by candlestick charts, features were aggregated in 5 time intervals, i.e., 0-119s, 120-239s, 240-359s, 360-479s, 480-599s</li>
<li>Realized volatility features (a minimal sketch of the interval RV computation follows this list). For RV2, we used size1+size2 instead of size2 only</li>
<li>Book prices/sizes were aggregated by mean</li>
<li>Trade prices/sizes/order counts were aggregated by sum</li>
<li>Selected a subset of features from public notebook by sequential forward selection:<ul>
<li>differences and absolute differences between ask_price 1/2 and bid_price 1/2</li>
<li>bid/ask spread (only for the best prices): (ask_price1 - bid_price1) / (ask_price1 + bid_price1)</li></ul></li>
<li>Some statistics features for the all above time interval features, e.g., mean, std, min, max, quantiles, etc. (not necessary, but seems helpful for MLP)</li>
<li>Generate all above features for the whole 10 minutes interval.</li>
<li>Aggregation of the 10-min features, groupby stock_id and time_id.<ul>
<li>For time_id, we used mean only</li>
<li>For stock_id, we used mean, std, min, max, quantiles, etc.</li></ul></li>
<li>No stock_id in the feature set, since we already have groupby stock_id features.</li>
<li>Remarks for aggregations:<ul>
<li>For convenience of implementation, the groupby stock_id and time_id aggregations were performed on the scaled and transformed features.</li>
<li>For training, groupby stock_id features were generated separately for the train set and validation set to prevent data leakage.</li>
<li>For inference, the aggregation features were generated on the test set only, without combining it with the training set, because we wanted to take some advantage of the test set.</li></ul></li>
</ul>
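<p>As referenced in the realized volatility bullet above, here is a minimal sketch of the interval RV computation (column names follow the competition book files; the exact aggregation details are my assumption):</p>
<pre><code>import numpy as np

def interval_rv(book):
    # book: book rows of one stock_id/time_id pair, sorted by seconds_in_bucket
    wap = (book.bid_price1 * book.ask_size1 + book.ask_price1 * book.bid_size1) / (
        book.bid_size1 + book.ask_size1)
    log_ret = np.log(wap).diff().dropna()
    bucket = book.seconds_in_bucket.loc[log_ret.index] // 120   # five 120s intervals (0..4)
    return np.sqrt((log_ret ** 2).groupby(bucket).sum())        # one RV value per interval
</code></pre>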
<h2>Scaling and transformation</h2>
<p>We performed two transformations for both features and the target. Both showed some improvements.</p>
<p>For features, we used a power transformation to reduce the skewness of the distributions.</p>
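<p>For reference, a minimal sketch of how such a power transformation can be applied with scikit-learn (the exact transform family is not specified in the post, so Yeo-Johnson is my assumption):</p>
<pre><code>import numpy as np
from sklearn.preprocessing import PowerTransformer

def power_transform(X_train, X_test):
    # fit on the training features only, then apply to both train and test
    pt = PowerTransformer(method='yeo-johnson', standardize=True)
    return pt.fit_transform(X_train), pt.transform(X_test)

# tiny usage example with random, positively skewed data
rng = np.random.default_rng(0)
Xtr_t, Xte_t = power_transform(rng.lognormal(size=(100, 5)), rng.lognormal(size=(20, 5)))
</code></pre>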
<p>For the target, we simply applied (log(y + eps) - mu) / sigma, where y is the target RV, mu and sigma are the mean and std of the log-transformed target calculated from the training set, and eps=1e-9 handles the numerical issue when the target is very close to zero. We also tried a power transformation for the target; the result was similar to this simple log transformation. We created a custom loss function in Keras for this transformation. </p>
<pre><code>import tensorflow.keras.backend as K
mu = -5.762330803300896
sigma = 0.6339307835941186
eps = 1e-9
def rmspe_loss(y_true, y_pred):
y_true = K.exp(mu + sigma * y_true) - eps
y_pred = K.exp(mu + sigma * y_pred) - eps
return K.sqrt(K.mean(K.square((y_true - y_pred) / y_true)))
</code></pre>
<h2>More details</h2>
<ul>
<li>Feature engineering and training: <a href="https://github.com/0xleowang/Kaggle-Optiver-Solution" target="_blank">https://github.com/0xleowang/Kaggle-Optiver-Solution</a></li>
<li>Inference: <a href="https://www.kaggle.com/calibrator/final-ensemble-501-public" target="_blank">https://www.kaggle.com/calibrator/final-ensemble-501-public</a></li>
</ul>
<p>Yes. I've been recording and monitoring the CV/LB gaps. Before I added any groupby features, my gaps were ~0.015-0.016. When performing the groupby stock_id aggregations outside of folds, the gaps dropped to ~0.012-0.013. After fixing this by performing the groupby in-fold (i.e., separately for train/test), the gaps returned to ~0.015. </p>
<p>However, I do have some concerns. I've tried feature selection: the CV improved a little, but the LB dropped and the gap fell to ~0.012, so maybe some features in my blindly generated set worked really well on the public test set. They might not work on the later test sets.</p>
<p>Superman * 3😉</p>
|
Optiver Realized Volatility Prediction
|
Public 32nd place solution: LB 0.1953
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'm too busy this week and the code is complicated. If you can wait until this weekend…</p>
<p>First of all, thank you to Kaggle, Optiver and the participants who provided meaningful discussions and notebooks. It was a tough and fun competition with lots of interesting solutions.</p>
<p>Maybe there will be a big shake, but I'm glad to have reached 32nd on the public LB and I want to share my solution.<br>
I hope you will find it interesting.</p>
<h1><strong>Solution</strong></h1>
<p>Our public LB 0.1953 submission was an ensemble of the following.<br>
CV settings are group k-fold (k=5) by time_id with 1 seed.</p>
<p><strong>Base models(5 types of NNs):</strong></p>
<ul>
<li>FFN (CV 0.2145 LB ???)</li>
<li>FFN with target log transformation (CV 0.2112 LB ???)</li>
<li>1d-CNN with target log transformation (Modification of MoA's 2nd place 1d-CNN) (CV 0.2116 LB: 0.1987)</li>
<li>TabNet (CV 0.2140 LB ???)</li>
<li>TabNet with target log transformation (CV 0.2129 LB ???)</li>
</ul>
<p>-> Ensemble: CV 0.2081 LB 0.1959</p>
<p><strong>Prediction adjustment with FFN and 1d-CNN on base models:</strong><br>
Adjust the predictions of the 5 base models using an FFN and a 1d-CNN (5 base models * 2 adjustment models = 10 predictions).<br>
Prediction adjustment is explained in a later section.<br>
-> Ensemble of 10 predictions: CV 0.2057 LB 0.1960</p>
<p><strong>Prediction adjustment with FFN on base models using k-Means group k-fold:</strong><br>
Adjust prediction of 5 base models by using FFN.<br>
-> Ensemble of 5 predictions: CV 0.2077 LB 0.1957</p>
<p><strong>LGB</strong><br>
Standard 522 features LGB: CV 0.2160 LB 0.2049</p>
<p><strong>Ensemble of those</strong><br>
Base models ensemble * 0.09 + Prediction adjustment ensemble * 0.36 + <br>
Prediction adjustment k-Means ensemble * 0.45 + LGB * 0.1:<br>
CV ??? LB 0.1953 </p>
<p>The features used in the NNs are exactly the same as in <a href="https://www.kaggle.com/datafan07/optiver-volatility-predictions-using-tabnet" target="_blank">this notebook</a> (thank you <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> !)</p>
<p>NNs are made by me and LGB is made by my teammate <a href="https://www.kaggle.com/conan1024" target="_blank">@conan1024</a> .</p>
<h2>Target log transformation</h2>
<p>Use the following loss function instead of the ordinary loss function (y_pred -> torch.exp(y_pred - 6)).</p>
<pre><code># the model predicts log(target) + 6, so invert this before computing the squared percentage error
def mean_squared_per_error_exp(y_true, y_pred):
    return torch.mean(torch.square(1e-8 + (y_true - torch.exp(y_pred - 6)) / y_true))
</code></pre>
<p>Then transform predicted target by x = np.exp(x-6).</p>
<h2>Prediction adjustment</h2>
<ol>
<li>Load fold 0 model and predict target of all rows</li>
<li>Calculate ratio = prediction / target</li>
<li>Transform ratio to time_id * stock_id matrix (denote as y)</li>
<li>Create time_id * (112 stock_id * num_features) matrix (denote as X)</li>
<li>Transform X by PCA(n_components=50)</li>
<li>Split X, y by fold 0 split to X_train, X_val, y_train, y_val</li>
<li>Train an FFN or 1d-CNN to minimize MSELoss(1, y_val * output)</li>
<li>Adjust prediction by multiplying output</li>
<li>Do the same for fold 1, 2, 3, 4</li>
</ol>
<p>The idea is to adjust the prediction by using the state of the whole market at that time_id.</p>
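<p>A rough sklearn sketch of the adjustment idea (shapes and names are hypothetical, and it regresses the ratio directly instead of the MSELoss(1, y_val * output) formulation used above):</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

def fit_adjuster(X_time, ratio, train_idx):
    # X_time: (n_time, n_stocks * n_features) whole-market features per time_id
    # ratio:  (n_time, n_stocks) base prediction / target per time_id and stock
    pca = PCA(n_components=50)
    Z = pca.fit_transform(X_time)
    adjuster = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
    adjuster.fit(Z[train_idx], ratio[train_idx])
    return pca, adjuster

def adjust(pca, adjuster, X_time, base_pred, val_idx):
    pred_ratio = adjuster.predict(pca.transform(X_time)[val_idx])
    pred_ratio = np.clip(pred_ratio, 0.5, 2.0)   # guard against extreme corrections (my addition)
    return base_pred[val_idx] / pred_ratio       # corrected predictions for the validation fold
</code></pre>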
<h2>Prediction adjustment using k-Means</h2>
<ol>
<li>Transform X by PCA(n_components=50) then transform by UMAP(n_components=3)</li>
<li>Apply k-Means(n_clusters=8)</li>
<li>Classify 8 clusters to 3 labels by hand</li>
<li>Do Prediction adjustment (train on fold 1~4 and label 1, 2 samples and validate on fold 0 and label 0 samples, and do the same for each fold and label, train 5 folds * 3 labels = 15 times)</li>
</ol>
<p>Splitting the time_ids by k-Means ensures that the train and validation time_ids have different characteristics, to avoid overfitting. </p>
<p>Sorry for the hard-to-follow explanation.<br>
Any questions are appreciated.</p>
|
Optiver Realized Volatility Prediction
|
Public 37th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sadly, due to a small bug which was triggered by an edge case, I experienced a large drop off in my RMSPE on the private test set. You can read more about that right here: <br>
<a href="https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/300746" target="_blank">https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/300746</a><br>
I want to note that this bug is a programming mistake and has nothing to do with the model itself.</p>
<p>Basically, I just connected the one-hot vector to a dense layer of size 112 with a softmax activation. Then I calculated the weighted average of the 112 embeddings using the output of the softmax layer. The output of the softmax layer only depends on the weights connected to the activated neuron of the one-hot vector that represents the stock_id. This way, every stock learns a unique weighted average of all the stock embeddings. </p>
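<p>A minimal Keras sketch of that mechanism (the 112 stocks and 30-dimensional embeddings follow the post; everything else is assumed):</p>
<pre><code>import tensorflow as tf
from tensorflow import keras

n_stocks, emb_dim = 112, 30
stock_onehot = keras.Input(shape=(n_stocks,), name='stock_onehot')
stock_embs = keras.Input(shape=(n_stocks, emb_dim), name='per_stock_embeddings')

# the one-hot picks a learned row of softmax weights over all stocks...
attn = keras.layers.Dense(n_stocks, activation='softmax')(stock_onehot)
# ...which is used to take a weighted average of the per-stock embeddings
context = keras.layers.Lambda(
    lambda t: tf.einsum('bs,bse->be', t[0], t[1]))([attn, stock_embs])

attention_block = keras.Model([stock_onehot, stock_embs], context)
</code></pre>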
<p>That sounds like a big improvement! I'm glad you like it, I actually just used PowerPoint to create a slide and made a screenshot of it 😀</p>
<p>First of all I want to thank Optiver for hosting this interesting competition, which I was absorbed in over the last 2-3 months. Even though my solution only ranks 37th (at the time of writing this) on the public LB, I wanted to share it anyway, because I thought it might bring something new to the table.</p>
<p>In this post I want to share 3 main ideas, that were not talked about that often in the forum.</p>
<h5>Shifting data in time</h5>
<p>I shifted orderbook and trade data to better align different stocks within a time_id, though I only saw minor (if any) improvements to my score. You can read more on that in <a href="https://www.kaggle.com/fritzcremer/how-are-the-buckets-shifted-in-time" target="_blank">this notebook I made public.</a></p>
<h5>Recurrent net</h5>
<p>I preprocessed the orderbook and trade data into 60x10 second intervals. Each 10 second interval holds features like the price change, the difference between the last trade price and the orderbook price levels, WAP volatility, trade and orderbook volumes relative to the total volume of the 10 minute window, and some similar features. In total there are 15 features within each 10 second frame. I then used a recurrent net (LSTM - Dense - GRU) and concatenated the last layer with the rest of the features.</p>
<h5>Attention mechanism</h5>
<p>I trained a separate model which only used the recurrent net described in the bullet earlier and followed it up with 2 dense layers. The performance of this model was not very good with a CV score of ~0.23. I used this net to generate 30 dimensional embeddings for each stock_id/time_id pair. In my main model I then used an attention mechanism with stock_id to calculate a weighted average of the embeddings of all the other stocks at the same time_id. Like this, the model learned to pay attention to certain other stock_ids when making predictions. With this method (or very similar approaches) I saw an improvement of ~0.002 in my score. But because stock dependencies possibly change a lot over time, I don't know if this approach holds up when used on the coming private test set.</p>
<p>Besides the features mentioned earlier, I also calculated features like wap, spread… (though in total only 45 different features) and also aggregated these over time_id. I also input a one-hot encoded vector of the stock_id.</p>
<p>Finally, I concatenated all these features and passed them into an MLP to make a prediction.<br>
I used 5-fold cross-validation grouped by time_id, and for inference I used the 40th percentile of the CV model predictions.</p>
<h5>This picture shows the architecture of the model:</h5>
<p><img src="https://i.postimg.cc/kX05dws7/Model-architecture.png" alt="Model architecture"></p>
<p>I hope you found these ideas interesting and I would love to hear feedback on them and if any of you experimented with similar ideas.</p>
<p>Hey Denis, I didn't do it yet because I want to clean some parts up a bit before publishing. Also I trained the model on my local machine so I can only share the inference model (I could also share the training part but it will take a bit more preparation). But I hope I find some time to do it 🙂</p>
|
Optiver Realized Volatility Prediction
|
Public 39th solution – 10-folds lgbm
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and the Optiver team for their hard work in organizing this competition, and especially for the kindness of making some proprietary limit order data available to the public. Most of the work we did was careful data cleaning, feature selection and dealing with outliers. Our final submission is a 10-fold lgbm with no aggregation on test data and no leak features, scoring 0.20018 on the public LB. This solution differs from the other publicly shared solutions as follows,</p>
<p><strong>Adding windows of [10, 25, 50, 500, 550] to generate features</strong><br>
<a href="https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/251358" target="_blank">https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/251358</a><br>
As discussed here, the target distribution is generally lower. The distribution of realized_volatility generated over a slightly shorter timeframe (10, 25, 50) could be more similar to that of the target. Some features over the last 100 / 50 seconds work better, maybe because they reflect more information about the 600s-1200s period used for generating the target values. </p>
<p><strong>Winsorization</strong><br>
As most of the features are positively skewed, I winsorized their top 0.01 tails. <br>
The target is also winsorized at the top 0.02 tail.</p>
<p><strong>Removing outliers in target</strong><br>
We created a new feature called vol_ratio, which is the target divided by log_return1_realized_volatility; it reveals some weird jumps in volatility.<br>
Dropping all rows with vol_ratio >= 3 improved the LB score.</p>
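<p>A minimal pandas sketch of these two cleaning steps (the feature column list is a placeholder; the target and log_return1_realized_volatility columns follow the notebooks):</p>
<pre><code>import pandas as pd

def clean_train(train, feature_cols):
    # winsorize the top tail of the positively skewed features and of the target
    for col in feature_cols:
        train[col] = train[col].clip(upper=train[col].quantile(0.99))
    train['target'] = train['target'].clip(upper=train['target'].quantile(0.98))

    # drop rows where the target jumps far above the observed 0-600s volatility
    vol_ratio = train['target'] / train['log_return1_realized_volatility']
    return train[vol_ratio < 3].reset_index(drop=True)
</code></pre>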
<p><strong>Training with ‘dart’ mode and regularizations</strong><br>
To avoid overfitting, training lgbm in 'dart' mode with manual early stopping worked better than 'gbdt', as did tuning the early stopping parameters. </p>
<p><strong>Predicting ‘vol_ratio’ rather than original target</strong><br>
It seems the distribution when predicting vol_ratio is better and the feature importance plot makes more sense, but surprisingly the LB scores of these two approaches are almost identical. We ended up predicting the original target to play it safe. </p>
<p>Notebook for training<br>
<a href="https://www.kaggle.com/tangtunyu/10-fold-lgbm" target="_blank">https://www.kaggle.com/tangtunyu/10-fold-lgbm</a></p>
<p>Notebook for inferring<br>
<a href="https://www.kaggle.com/tangtunyu/lgbm-infer?scriptVersionId=73904708" target="_blank">https://www.kaggle.com/tangtunyu/lgbm-infer?scriptVersionId=73904708</a></p>
<p>The best LB score came from a simple ensemble of the public tabnet and nn <br>
<a href="https://www.kaggle.com/tangtunyu/tabnet-nn-lgbm-load-ensemble?scriptVersionId=74230564" target="_blank">https://www.kaggle.com/tangtunyu/tabnet-nn-lgbm-load-ensemble?scriptVersionId=74230564</a><br>
We did not submit this notebook for final scoring.</p>
<p>Special thanks to <a href="https://www.kaggle.com/tommy1028" target="_blank">@tommy1028</a> for his starter notebook which saved me a lot of time, and to the kagglers who published the public tabnet and nn (I could not trace who first published them).</p>
<p>From the training notebook you can see an improvement in CV score when predicting "target / log_return1_realized_volatility". But the LB score doesn't improve in my experiment. I ended up submitting the version predicting the original target too. </p>
<p>I just don't feel safe with all these overfitting features and aggregation across testing time_ids. I have seen many times how such overfitting models can blow up in production, in ways you can or cannot expect. </p>
<p>But the seemingly indifferent attitude of the hosts towards leakage is somewhat unexpected to me.</p>
<p>Not a lot, < 0.005 as I remember</p>
|
Optiver Realized Volatility Prediction
|
Shake in First Rerun?(39th solution in Private)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>According to the results of my submission on the Public Leaderboard and the current Private Leaderboard, the main reason for the large shake is '<strong>leak information</strong>'.</p>
<table>
<thead>
<tr>
<th>type</th>
<th>public score</th>
<th>private score</th>
</tr>
</thead>
<tbody>
<tr>
<td>leak</td>
<td>0.197+</td>
<td>0.195+</td>
</tr>
<tr>
<td>no leak</td>
<td>0.198+</td>
<td>0.191+</td>
</tr>
</tbody>
</table>
<p>My 39th solution in Private(first rerun):<a href="https://www.kaggle.com/leolu1998/lgbm-tabnet-nn-no-leaks-stratifiedgroupkfold" target="_blank">https://www.kaggle.com/leolu1998/lgbm-tabnet-nn-no-leaks-stratifiedgroupkfold</a></p>
<p>I further used <strong>TabNet</strong>, <strong>LGBM</strong>, and <strong>NN</strong> models. In order to avoid the submission running overtime, I used my pre-trained model weights for inference in the TabNet and LGBM models. The final ensemble weights combined practical experience with public ranking results.</p>
<p>Reference:<br>
<a href="https://www.kaggle.com/nrcjea001/lgbm-baseline-no-leaks-stratifiedgroupkfold" target="_blank">https://www.kaggle.com/nrcjea001/lgbm-baseline-no-leaks-stratifiedgroupkfold</a><br>
<a href="https://www.kaggle.com/datafan07/optiver-volatility-predictions-using-tabnet" target="_blank">https://www.kaggle.com/datafan07/optiver-volatility-predictions-using-tabnet</a><br>
…</p>
<ul>
<li>Removed validation leakage from agg functions on <strong>time_id</strong>. <br>
features like:<br>
<em>df_time_id = df.groupby('time_id')[gcols].agg(['mean', 'std', 'max', 'min']).reset_index()</em></li>
</ul>
|
Optiver Realized Volatility Prediction
|
Tentative 3rd Place Solution (6th in Public) - life is volatile
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Optiver Realized Volatility Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Before I start, I would like to thank kaggle and Optiver for hosting this phenomenal competition. And, of course, my great teammate, <a href="https://www.kaggle.com/tomotomo5" target="_blank">@tomotomo5</a>, who drafted this post.</p>
<p>We ended up in 6th place on the Public Leaderboard and are currently (as of October 15th) in 3rd place on the Private Leaderboard.</p>
<p>In this discussion, we would like to introduce the core ideas we have implemented that boosted our score. Some of them are not yet discussed much in the community. Model-wise, our final model is a simple stacking model of LightGBM x ANN (less noteworthy).</p>
<h2>time_id Nearest Neighbor</h2>
<p>The largest gamechanger in our solution was features based on time_id nearest neighbors, which is a similar idea to the feature creation <a href="https://www.kaggle.com/nyanp" target="_blank">@nyanp</a> explained in his solution below.<br>
<a href="https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/274970" target="_blank">https://www.kaggle.com/c/optiver-realized-volatility-prediction/discussion/274970</a></p>
<p>I confess that our philosophy was not as sophisticated as <a href="https://www.kaggle.com/nyanp" target="_blank">@nyanp</a>'s. We just tried to find the time ids that have similar market conditions. In fact, we were not quite aware that we could recover time-id order, although we ended up creating quite similar features to <a href="https://www.kaggle.com/nyanp" target="_blank">@nyanp</a>'s.</p>
<p>Differences are,</p>
<ul>
<li>Adding the distance ratio of the 1st nearest time_id and k-th nearest time_id</li>
<li>No time-series cross-validation as we did not come up with the idea of time-series reverse engineering:(</li>
</ul>
<h2>Target Transformation</h2>
<p>Target transformation was a crucial part of our model that increased its robustness. Stock volatility can behave as a non-stationary series, meaning that the level of volatility might be unstable over time, but the change in volatility is less likely to be.</p>
<p>With this in mind, we decided to change the task of the competition from directly predicting the level of realized volatility to predicting the ratio of the target to the realized volatility of 0~600 seconds.</p>
<p>Specifically, our transformed target is formulated as follows.<br>
<code>(transformed_target) = (target) / (realized volatility of 0~600 seconds)</code></p>
<h2>300 seconds Model</h2>
<p>Since a large time interval between the training dataset and the final test dataset was expected, we wanted to give our model as much information on the test dataset as possible.</p>
<p>Therefore, we (1) concatenated the train + test set, (2) sliced the 600 seconds in half, (3) used the first half (0-300 seconds) to create features (4) to predict the realized volatility in the latter 300-600 seconds. We then (5) created features based on 300-600 seconds of the dataset, and (6) predicted the realized volatility of 600-900 seconds. </p>
<p>The predicted results were used as features of our main model. <br>
The best part of this idea was that we were able to train our model based on test data.</p>
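<p>A rough sketch of this half-window model (the feature/RV helper functions are hypothetical placeholders, not the team's code):</p>
<pre><code>import lightgbm as lgb

def half_window_feature(train_test_book, make_features, realized_vol):
    # make_features(book, lo, hi) and realized_vol(book, lo, hi) are assumed helpers
    X_first = make_features(train_test_book, lo=0, hi=300)     # observed first half
    y_second = realized_vol(train_test_book, lo=300, hi=600)   # observed second half
    model = lgb.LGBMRegressor(n_estimators=500)
    model.fit(X_first, y_second)                               # trainable on train + test rows

    X_second = make_features(train_test_book, lo=300, hi=600)
    return model.predict(X_second)    # estimate of the unobserved 600-900s volatility
</code></pre>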
<h2>Macro Estimation</h2>
<p>We believed that individual stock volatility depends largely on the macro environment.<br>
Therefore, we trained a model to predict the average volatility of all stock_ids in each time_id.</p>
<p>The results were used as features of our main model.</p>
<p><strong>Hope the best that the private leaderboard thereafter is not that volatile even if <em>life is volatile</em>.</strong><br>
<strong>Thank you so much for reading, HAPPY KAGGLING!</strong></p>
|
Otto Group Product Classification Challenge
|
1st PLACE - WINNER SOLUTION - Gilberto Titericz & Stanislav Semenov
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Otto Group Product Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We tested many models at L1. But we used only the models that increased the performance at L2.</p>
<p>Just because that transformation performed better, for this dataset, when trained using NNets ;-)</p>
<p><strong>1st PLACE SOLUTION - Gilberto Titericz & Stanislav Semenov</strong></p>
<p>First, thanks to Organizers and Kaggle for such great competition.</p>
<p>Our solution is based on a 3-layer learning architecture as shown in the attached picture.<br>-1st level: there are about 33 models whose predictions we used as meta features for the 2nd level; there are also 8 engineered features.<br>-2nd level: there are 3 models trained using the 33 meta features + 7 features from the 1st level: <strong>XGBOOST</strong>, Neural Network (<strong>NN</strong>) and <strong>ADABOOST</strong> with ExtraTrees.<br>-3rd level: it is composed of a weighted mean of the 2nd level predictions.<br>All models in the 1st layer are trained using a 5-fold cross-validation technique, always using the same fold indices.</p>
<p><br>We trained the 2nd level using 4 random K-fold indices. This gave us the ability to calculate the score before submitting to the leaderboard. All our cross-validation scores are extremely correlated with LB scores, so we had a good local estimate of performance, and it enabled us to discard useless models for the 2nd learning level.</p>
<p><strong>Models and features used for 2nd level training:</strong><br>X = Train and test sets</p>
<p>-Model 1: RandomForest(R). Dataset: X<br>
-Model 2: Logistic Regression(scikit). Dataset: Log(X+1)<br>
-Model 3: Extra Trees Classifier(scikit). Dataset: Log(X+1) (but could be raw)<br>
-Model 4: KNeighborsClassifier(scikit). Dataset: Scale( Log(X+1) )<br>
-Model 5: libfm. Dataset: Sparse(X). Each feature value is a unique level.<br>
-Model 6: H2O NN. Bag of 10 runs. Dataset: sqrt( X + 3/8) <br>
-Model 7: Multinomial Naive Bayes(scikit). Dataset: Log(X+1)<br>
-Model 8: Lasagne NN(CPU). Bag of 2 NN runs. First with Dataset Scale( Log(X+1) ) and second with Dataset Scale( X )<br>
-Model 9: Lasagne NN(CPU). Bag of 6 runs. Dataset: Scale( Log(X+1) )<br>
-Model 10: T-sne. Dimension reduction to 3 dimensions. Also stacked 2 kmeans features using the T-sne 3 dimensions. Dataset: Log(X+1)<br>
-Model 11: Sofia(R). Dataset: one against all with learner_type="logreg-pegasos" and loop_type="balanced-stochastic". Dataset: Scale(X)<br>
-Model 12: Sofia(R). Trained one against all with learner_type="logreg-pegasos" and loop_type="balanced-stochastic". Dataset: Scale(X, T-sne Dimension, some 3 level interactions between 13 most important features based on randomForest importance )<br>
-Model 13: Sofia(R). Trained one against all with learner_type="logreg-pegasos" and loop_type="combined-roc". Dataset: Log(1+X, T-sne Dimension, some 3 level interactions between 13 most important features based on randomForest importance )<br>
-Model 14: Xgboost(R). Trained one against all. Dataset: (X, feature sum(zeros) by row ). Replaced zeros with NA. <br>
-Model 15: Xgboost(R). Trained Multiclass Soft-Prob. Dataset: (X, 7 Kmeans features with different number of clusters, rowSums(X==0), rowSums(Scale(X)>0.5), rowSums(Scale(X)< -0.5) )<br>
-Model 16: Xgboost(R). Trained Multiclass Soft-Prob. Dataset: (X, T-sne features, Some Kmeans clusters of X)<br>
-Model 17: Xgboost(R). Trained Multiclass Soft-Prob. Dataset: (X, T-sne features, Some Kmeans clusters of log(1+X) )<br>
-Model 18: Xgboost(R). Trained Multiclass Soft-Prob. Dataset: (X, T-sne features, Some Kmeans clusters of Scale(X) )<br>
-Model 19: Lasagne NN(GPU). 2-Layer. Bag of 120 NN runs with different number of epochs.<br>
-Model 20: Lasagne NN(GPU). 3-Layer. Bag of 120 NN runs with different number of epochs.<br>
-Model 21: XGboost. Trained on raw features. Extremely bagged (30 times averaged).<br>
-Model 22: KNN on features X + int(X == 0)<br>
-Model 23: KNN on features X + int(X == 0) + log(X + 1)<br>
-Model 24: KNN on raw with 2 neighbours<br>
-Model 25: KNN on raw with 4 neighbours<br>
-Model 26: KNN on raw with 8 neighbours<br>
-Model 27: KNN on raw with 16 neighbours<br>
-Model 28: KNN on raw with 32 neighbours<br>
-Model 29: KNN on raw with 64 neighbours<br>
-Model 30: KNN on raw with 128 neighbours<br>
-Model 31: KNN on raw with 256 neighbours<br>
-Model 32: KNN on raw with 512 neighbours<br>
-Model 33: KNN on raw with 1024 neighbours<br>
-Feature 1: Distances to nearest neighbours of each class<br>
-Feature 2: Sum of distances of 2 nearest neighbours of each class<br>
-Feature 3: Sum of distances of 4 nearest neighbours of each class<br>
-Feature 4: Distances to nearest neighbours of each class in TFIDF space<br>
-Feature 5: Distances to nearest neighbours of each class in T-SNE space (3 dimensions)<br>
-Feature 6: Clustering features of the original dataset<br>
-Feature 7: Number of non-zero elements in each row<br>
-Feature 8: X (That feature was used only in <strong>NN</strong> 2nd level training)</p>
<p>At the 2nd level we started by training cross-validated models, just to choose the best ones, tune hyperparameters and find the optimum weights for the 3rd-level average.<br>After we found some good parameters, we trained the 2nd level using the entire trainset and bagged the results.<br>The final model is a very stable 2nd-level bagging of:<br><strong>XGBOOST</strong>: 250 runs.<br><strong>NN</strong>: 600 runs.<br><strong>ADABOOST</strong>: 250 runs.</p>
<p>For the 3rd-level average we found it better to use a geometric mean of <strong>XGBOOST</strong> and <strong>NN</strong>. For <strong>ET</strong> we took an arithmetic mean with the previous result: 0.85 * [<strong>XGBOOST</strong>^0.65 * <strong>NN</strong>^0.35] + 0.15 * [<strong>ET</strong>].</p>
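<p>For illustration, that 3rd-level blend as a small numpy function. This is only a sketch of the stated formula; the row renormalisation at the end is my own assumption, since the post does not say whether the blended probabilities were rescaled.</p>
<pre><code>import numpy as np

def third_level_blend(xgb_pred, nn_pred, et_pred):
    """0.85 * (weighted geometric mean of XGB and NN) + 0.15 * ET."""
    geo = xgb_pred ** 0.65 * nn_pred ** 0.35        # weighted geometric mean
    blend = 0.85 * geo + 0.15 * et_pred
    # renormalise each row to sum to 1 (assumption, not stated in the post)
    return blend / blend.sum(axis=1, keepdims=True)
</code></pre>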
<p>We tried a lot of other training algorithms at the first level, such as Vowpal Wabbit (many configurations), R glm, glmnet, scikit SVC, SVR, Ridge, SGD, etc., but none of these helped improve performance at the second level.<br>We also tried some preprocessing like PCA, ICA and FFT, without improvement.<br>We also tried feature selection, without improvement; it seems that all features have positive predictive power.<br>We also tried semi-supervised learning without relevant improvement, and we discarded it due to the fact that it has great potential to overfit our results.</p>
<p>Definitely the best algorithms to solve this problem are Xgboost, NN and KNN. T-sne reduction also helped a lot. The other algorithms play a minor role in performance, so we learned not to discard low-performance algorithms, since they still have enough predictive power to improve performance in 2nd-level training.<br>Our final cross-validated solution scored around 0.3962. LB(Public): 0.38055 and LB(Private): 0.38243.</p>
<p>Gilberto & Stanislav =)</p>
<p>@B Yang: reason: overfit at 2nd level.</p>
<p>@Amine Benhalloum: How did you come up with the formula 0.85 * [XGBOOST^0.65 * NN^0.35] + 0.15 * [ET] ? We have the cross-validated prediction sets of the XGBOOST, NN and ET models, so we just calculated the final score using many different weights. We also submitted some predictions with different weights and that one was the best.</p>
<p>@Mario Filho: Yes. At the 1st level we create meta-features for the trainset fold by fold (5 folds), and for the testset meta-features we train on the whole trainset. E.g. Model 1 generated 9 trainset and testset meta-features.</p>
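<p>In code, that fold-by-fold scheme looks roughly like the sketch below (scikit-learn, not the original code; X, y and X_test are assumed to be numpy arrays):</p>
<pre><code>import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier

def make_meta_features(model, X, y, X_test, n_folds=5):
    """Out-of-fold class probabilities for train, full-trainset fit for test."""
    n_classes = len(np.unique(y))
    oof = np.zeros((X.shape[0], n_classes))
    for tr_idx, val_idx in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X):
        model.fit(X[tr_idx], y[tr_idx])
        oof[val_idx] = model.predict_proba(X[val_idx])
    model.fit(X, y)   # refit on the whole trainset for the test meta-features
    return oof, model.predict_proba(X_test)

# e.g. the nine "Model 1"-style meta-features:
# oof_rf, test_rf = make_meta_features(RandomForestClassifier(n_estimators=500), X, y, X_test)
</code></pre>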
<p>At the 2nd level we just changed the seed for each bag. We used the same cross-validation approach as at the 1st level, but with fixed 4-fold indices.</p>
<p>@Triskelion: ifelse( question==1, YES, NO). Our 2nd level model has 297 meta-features from Models 1 to 33, plus 148 features from Features 1 to 8. Total around 445.</p>
<p>@barisumog: ifelse( question from barisumog, YES, make a question). Training time? A lot of time. I don't really know. Some simple models take hours to run on my 8-core CPU. For example Model 16 takes about 1 hour per fold + 1 hour to train on the whole trainset, ~ 6 hours.</p>
<p>@rcarlson: 8-D</p>
<p>@Jeong-Yoon Lee: That's a funny coincidence =)). But Stanislav and Michael Jahrer wasn't at Countable Care competition!</p>
<p>@Nicholas Guttenberg: I once calculated the randomForest importance at the 2nd level. If I remember well, the best models were some XGB and some KNN.</p>
<p>Interestingly, some of our models scored very poorly at the 1st level but still contributed at the 2nd level, e.g. Model 2 CV ~ 0.65, Model 5 CV ~ 0.55. Using the raw dataset at the 2nd level also helped improve the NN score.</p>
<p>Because that transformation performed better in NNet CV scores.</p>
<p>Forward Feature Selection: adding one feature at a time and comparing the score with the previous models.</p>
|
Otto Group Product Classification Challenge
|
41599 via TSNE, meta bagging
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Otto Group Product Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Several people have noted (usually indirectly) that the sample size (or partition) for each of the two stages could be an important hyper parameter. I believe one can take a sample with replacement of size over or under the number of rows in the training data. This would not be a proper bag (which is sample with replacement of exact size of the number of rows in the training data), but would be close. Thus one can control the row dimensionality of each partition. Perhaps someone knows of more research in this area (akin to that of active learning)? Complex re-sampling schemes could potentially improve the approach presented.</p>
<p>[quote=Jiming Ye;79130]</p>
<p>It's really a beautiful way to do ensembling. Seems I have a lot of new things to learn.</p>
<p>Could you explain more about this, why you choose the first 10000 training data?</p>
<p>[/quote]</p>
<p>Because SVM does not scale too well with too many rows. Also it's possible to overtrain the first-stage estimators. The intuition for tuning this meta bag is somewhat strange. I did try more and less data for the first stage. In the R code I just use the out-of-bag sample, which means the exact number varies from run to run.</p>
<p>I did not do a lot of tuning here. So I have no idea what the possibilities are for scores using the framework. The tuning does not correspond to "common sense" for me.</p>
<p>Here's the meta bagged ANN with Lasagne and sklearn. I'm not sure what the score is. You can do slightly better by using the arithmetic mean instead of the geometric mean and doing more epochs.</p>
<p>As promised, this will get you 0.41599 public via T-SNE and meta bagging. The key here is meta bagging, and proper representation of the data with various transforms.</p>
<p>I'll post an update to a potentially higher scoring single model with meta bagging neural networks where the first stage uses the mean of rbf svm, rf, and gbm. </p>
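<p>To make the meta-bagging idea concrete, here is a bare-bones, hand-rolled sketch: a first-stage learner is fit on a bootstrap sample, its predictions on the out-of-bag rows are appended as extra features for a second-stage learner, and the whole procedure is bagged. Model choices and names here are illustrative, not the posted R/Lasagne code.</p>
<pre><code>import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, ExtraTreesClassifier

def meta_bag_predict(X, y, X_test, n_bags=10, seed=0):
    rng = np.random.RandomState(seed)
    n = X.shape[0]
    preds = []
    for _ in range(n_bags):
        boot = rng.randint(0, n, n)                 # bootstrap sample (with replacement)
        oob = np.setdiff1d(np.arange(n), boot)      # out-of-bag rows for the second stage
        stage1 = GradientBoostingClassifier(n_estimators=100)
        stage1.fit(X[boot], y[boot])
        # second stage sees the original features plus stage-1 probabilities
        X2 = np.hstack([X[oob], stage1.predict_proba(X[oob])])
        stage2 = ExtraTreesClassifier(n_estimators=300)
        stage2.fit(X2, y[oob])
        X2_test = np.hstack([X_test, stage1.predict_proba(X_test)])
        preds.append(stage2.predict_proba(X2_test))
    return np.mean(preds, axis=0)                   # bag the second-stage predictions
</code></pre>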
|
Otto Group Product Classification Challenge
|
Competition write-up (Optimistically Convergent)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Otto Group Product Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, that's helpful!</p>
<p>[quote=fyears;79127]</p>
<p>1. How to tune hyper-parameter of xgboost?</p>
<p>[/quote]</p>
<p>Nothing too fancy, but I found that letting the machine churn without checking tended to waste a lot of time.</p>
<p>The most effective thing was to constantly shrink the bounding boxes of new values to try. So e.g. I'd start from the benchmark tuned parameters and make a cube around those and try random things (modulo what my SVR was saying, rejecting things that looked like they'd be totally awful). Then when I found a 'new best parameter set' and had maybe a set of 20 or so random trials around it, I took that and re-centered the bounding box on it and shrunk it slightly. I tried to respect the inherent slop that I was getting from the distribution of points.</p>
<p>It was also useful to calculate straight cuts across one parameter while holding others constant, just to get a feel for what the optima look like. For example, a really handy thing was the observation that eta and trials should be almost exactly linearly inverse-proportional. That let me cut out one dimension from the search entirely and only look at small perturbations around that relationship.</p>
<p>What I mean by human-based is that I used my intuition as a human to say 'huh, it looks like X is working well' or 'seems to be getting stuck' and gave it kicks or adjustments based on that intuition. It's very human-time consuming, but it's kind of zen, so I'd often do it when I had insomnia.<br>[quote]</p>
<p>2. All your neural networks are parameters for Lasagne/Nolearn, instead of Keras, right? My team preferred Keras because of its elegant APIs, and Nolearn always showed some warnings, etc. But I personally found that it seemed that Nolearn performed better than Keras, no idea why. In fact they should perform similarly... Moreover, again, how do you determine the structure of the networks? My team "randomly" tried different layers/dropouts no-idea-wtf-we're-doing-ly. :-( Again, once too many parameters combinations, it was painful to wait: consider 10 epochs per second with 600 epochs == 1.5 hours per model![/quote]</p>
<p>Yeah, I had a harder time getting good results out of Keras. I think its the user feedback somehow - I found Lasagne's output to drive my intuition more strongly. Since I did a lot of hand-optimization, that was important.</p>
<p>By comparison, with Keras I ended up doing the guided random search thing which took forever. I was eventually able to get reasonable networks out, but I couldn't intuitively bee-line towards them the way I could do with Lasagne.</p>
<p>For architecture, I explained a little bit of my intuition. Large early layers are sort of feature-generators, and small layers are bottlenecks that prevent too much information from getting through and force the network to summarize. Except I was able to find successful architectures that do exactly the opposite! So I think many architectures can work in the end - the more important thing is probably the total number of degrees of freedom, tuning dropout to the particular architecture you've chosen, and things like that.</p>
<p>[quote]3. I do not quite understand your ensemble model from the first line... "Calculate the probability that the two models you want to blend will give the same result (using, e.g., the largest probability out of the 9 to determine what class it thinks the row is). Lets call this pXY.". Say, consider the 3-rd row, model A says it belongs to class_7 with probability 0.99, and model B says it belongs yo class_5 with probability 0.88, so what's pXY?[/quote]</p>
<p>Basically in this case I just say 'model A and B disagree'. Essentially pXY is just measuring how often model A and B agree on classification across the entire dataset (not per-row).</p>
<p>However, this is obviously not tuned to the logloss metric, so you can probably do better by independently tuning pXY, using the agreement as an estimate for the initial value. One of the things I tried was a greedy sequential ensembling using that idea, and it worked fairly well when self-contained but then it didn't combine well with Hoang's stuff so it didn't make the final cut.</p>
<p>[quote=Brach;79140]</p>
<p>Great post Nicholas thanks for sharing. This was my first competition and I learned A TON of useful knowledge. It's amazing how this competitive approach and all of the experts willing to share accelerates your learning. I hope I can pay your generosity forward some day!</p>
<p>p.s. now that you've got your top-10 finish, do we have to call you Master Nicholas? ;)</p>
<p>[/quote]</p>
<p>Thanks! I'll admit that my mood last night was a bit more like 'drat, I'm not going to get a winner badge' (perhaps an instinct to try to pull out some last-minute bit of magic). But actually I'm really happy about the Master badge. I really did not expect that when I joined the Otto competition (I almost didn't end up joining this one, because I saw the TFI Restaurant one and wanted to focus on that, but then it turned out to be such a lottery).</p>
<p>One thing I'll point out though is that if it wasn't for teaming up with Hoang, I think I'd probably have finished around rank 30. The social aspect is really important, as is combining ideas that have different points of origin and aren't too rapidly converged. If I recall, when we initially teamed up, I was around 0.405 and he was at 0.4 and we instantly jumped to the 0.39 range on our first joint submission. Despite the fact that most of our models were actually the same types of things.</p>
<p>I'd really be interested in a massive 'lets do the best we can with the top solutions' thing, just to see how far we can push it with the best of each type that people have come up with. Maybe 0.37 isn't so impossible.</p>
<p>[quote=M.E.;79213]</p>
<p>Hey, could You explain a little bit more how You collected data for the esnembling Neural Net?</p>
<p>I was thinking about something like this:</p>
<p><code>For each iteration of K-Fold CV: </code></p>
<p><code>Train the base model on the training data.</code></p>
<p><code>preds_val = Predict on the validation data.</code></p>
<p><code>Add preds_val and y_val to the ensemble-model training set.</code></p>
<p>But isn't it too comutationally intensive?</p>
<p>[/quote]</p>
<p>This is exactly it. It's very intensive, unfortunately.</p>
<p>[quote=Marcin Mucha;79195]</p>
<p>Nicholas, great write-up! Could you perhaps share some details on how the NN blending was done? (NN architecture etc..)</p>
<p>Marcin</p>
<p>[/quote]</p>
<p>I don't have all the details there yet to give, but when I spoke to Hoang he said he was still interested in a writeup, so hopefully he'll post some when he has a chance.</p>
<p>[quote=phalaris;79117]</p>
<p>Congratulations and thanks for the writeup! There are many new things for me to learn from your post.</p>
<p>Could you explain how you would determine if you needed more dropout from the convergence curve? I looked at these but I wasn't sure how to use the information.</p>
<p>[/quote]</p>
<p>Lasagne/Nolearn has a nice output thing where it displays the training error, validation error, and their ratio for each epoch. The rows are also colored based on whether the last epoch showed an improvement in global score.</p>
<p>When the ratio dropped below 0.8 and I saw more frequent improvement in the training score than the validation, that was generally an indication that I was in an overfitting regime. So that indicates the need for more dropout somewhere in the network.</p>
<p>The other side of it is if the ratio is above 1 but I'm not seeing improvements in either score, then that generally indicates that I have too much dropout (but the architecture might be broad enough).</p>
<p>Edit: Another useful thing is that the early performance is very telling about the late performance. There were sort of threshold points at 0.47 and 0.45, and the timing of when I crossed them was a good predictor of eventual performance.</p>
<p>The general philosophy I had was that more nodes in the starting layer allowed the network to 'try' more distinct things, whereas the subsequent layers were about synthesizing that information into predictions. It's a bit artificial, but if you look at what happens to the weights of a neural network under backprop, the strongest effects are in the bottom-most layers, whereas upstream layers tend to change less and retain more of their initial random configuration. So essentially, the upstream layers are a kind of highly random feature generator for the lower layers.</p>
<p>With dropout and narrow layers, what you're doing is trying to force the network to be bad at the task 'what data row am I looking at right now' and good at the task 'what do the data columns intrinsically say about the classification?'. So you're basically trying to 'sneak' information through the network, retaining as much as the input as you can but at the same time trying to obfuscate enough that the network can't just memorize the input data.</p>
<p>Sure, here are a couple:</p>
<p>param = {'max_depth':10, 'min_loss_reduction': 0.6, 'min_child_weight':6, 'subsample': 0.7, 'eta':0.025, 'objective':'multi:softprob', 'num_class': 9 } (950 trials) (LB = 0.445)</p>
<p>param = {'max_depth':11, 'min_loss_reduction': 0.1, 'min_child_weight':7, 'subsample': 0.9524, 'eta':0.02, 'colsample_bytree': 0.5040, 'objective':'multi:softprob', 'num_class': 9 } (1300 trials) (LB=0.442)</p>
<p>Also, one I never tested independently on the LB, but which seemed to have the best local CV performance after a random search:</p>
<p>param = {'gamma': 0.3, 'max_depth': 11, 'min_loss_reduction': 0.5, 'min_child_weight': 4, 'subsample': 0.675, 'eta': 0.002, 'colsample_bytree': 0.55, 'objective':'multi:softprob', 'num_class': 9 } (13656 trials)</p>
<p>Oh, another thing I forgot to mention - in the last few days of the competition, we went back and stuck Scikit-Learn's CalibratedClassifierCV on things. It seemed to make a huge difference in the local CV of the neural networks to do 10-fold isotonic calibration. Stuff that was getting 0.443 on local CV was getting down to 0.435.</p>
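<p>That calibration step, roughly, with scikit-learn (toy data and a random forest stand in for the actual neural network here):</p>
<pre><code>from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# toy stand-in data; the original wrapped a neural network trained on the Otto features
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=10)
base = RandomForestClassifier(n_estimators=200)
calibrated = CalibratedClassifierCV(base, method='isotonic', cv=10)  # 10-fold isotonic calibration
calibrated.fit(X, y)
probs = calibrated.predict_proba(X)   # calibrated class probabilities
</code></pre>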
<p>Now that it's over, even though we didn't finish in the money, I figured I'd write up the interesting bits of our models. I'll be writing up my side of things, and then Hoang may add to that.</p>
<p>For me, it seems like this competition was all about two things: finding ways to get different biases in the individual models, and having a good method for ensembling beyond just averaging.</p>
<h3>Models</h3>
<p>This is a whole laundry list. I'm going to go over this briefly, then just list the things I think were particularly novel. </p>
<p>In the final ensemble, the way we organized things was to do 5-fold stratified CV on each model and generate a set of predictions for the entire training set. In some cases we bagged across the five folds for the test set, and in other cases we did an independent run training the model against the entire train set before predicting test.</p>
<p>The predictions on the training set were used in the ensembling stage to fit the ensemble parameters.</p>
<p>For parameter search, we had a couple different methods. I did a lot of human-based gradient descent, which worked very well for neural nets because you can look at the convergence curve and draw conclusions as to e.g. whether you need more dropout directly. In other cases, Hoang used random search with Gaussian processes to guide the search, and I did a similar thing but using RBF-based support vector regression with C and epsilon set to make it particularly blurry. The idea is basically to take the data on model performance for all the parameters you've already tried, fit the GaussianProcess or SVR learner to that data, and then use that to accept/reject random search coordinates. With GaussianProcess in particular, it can estimate how much you know or don't know about a region of the parameter space, so you can in principle search more efficiently that way.</p>
<p>I don't know that it made a huge difference honestly. Personally I found that adjusting the bounding box of the random search dynamically seemed to be more important than picking out particular contours of the landscape within that bounding box. But it probably did shave off a few days in the end.</p>
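<p>A rough sketch of that accept/reject idea, with a scikit-learn Gaussian process as the surrogate; the objective, bounds and acceptance rule are illustrative placeholders rather than either of our actual implementations:</p>
<pre><code>import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def guided_random_search(objective, bounds, n_init=10, n_iter=50, seed=0):
    """Random search that rejects candidates the surrogate expects to be poor."""
    rng = np.random.RandomState(seed)
    sample = lambda: np.array([rng.uniform(lo, hi) for lo, hi in bounds])
    X = np.array([sample() for _ in range(n_init)])
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor().fit(X, y)
        for _ in range(100):                          # cap the rejection loop
            cand = sample()
            mu, sigma = gp.predict(cand.reshape(1, -1), return_std=True)
            if mu[0] - sigma[0] < y.min():            # optimistic estimate beats the best so far
                break
        X = np.vstack([X, cand])
        y = np.append(y, objective(cand))
    return X[y.argmin()], y.min()

# toy usage: minimise a quadratic over two "hyper-parameters"
# best_x, best_y = guided_random_search(lambda x: ((x - 0.3) ** 2).sum(), bounds=[(0, 1), (0, 1)])
</code></pre>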
<p><strong>XGBoost</strong></p>
<p>Pretty straight-forward. Hoang and I both had our own XGBoost models when we teamed up, and they were pretty tightly correlated. This was pretty much just hyper-parameter optimization. Early on I got mine to the 0.44 range (not using gamma or colsample_by_tree), and while we had slightly better tuned ones by the end I'm not sure it mattered all that much compared to the computer time for the hyper-parameter search.</p>
<p><strong>Neural Networks</strong></p>
<p>I used Lasagne/Nolearn for my initial results (and I think Hoang did similarly), and then we switched to seeing if we could get more out of Keras near the end. I want to thank Inversion here for pointing out the log transform on the data in a forum post. I found that log(2+data) followed by StandardScaler worked the best here. I didn't end up having a chance to try the Anscombe transform. All my layers were ReLU, with a Softmax output layer.</p>
<p>One architecture I ended up getting reasonable results (LB=0.446) with was:</p>
<p>Input -> 0.13 Dropout -> 2500 -> 0.77 Dropout -> 1300 -> 0.35 Dropout -> 40 -> 0.1 Dropout -> Output (AdaGrad, 600 epochs, batch=1024, linear decreasing learning rate starting at 0.04, ending at 0.005)</p>
<p>However, I also had networks with (LB=0.441):</p>
<p>Input -> 0.15 Drop -> 2000 -> 0.7 Drop -> 1200 -> 0.4 Drop -> 40 -> Output (AdaGrad, 400 epochs, batch = 512, learning rate from 0.04 to 0.001)</p>
<p>or even (LB=0.463):</p>
<p>Input -> 0.15 Drop -> 180 -> 0.1 GaussianNoise -> 0.1 Drop -> 180 -> 0.05 Drop -> 180 -> 0.05 Drop -> 140 -> 0.05 Drop -> 140 -> 0.05 Drop -> 140 -> 0.1 Drop -> 140 -> 0.1 GaussianNoise -> 160 -> Output (AdaGrad, 100 epochs, batch=512, rate = 0.03 to 0.005)</p>
<p>(whew)</p>
<p>Random search on Keras gave me a network that scored 0.447 locally (never submitted it to the LB), with an inverted architecture compared to my main ones (Input -> 419 -> 2110 -> 2250 -> Output, with dropouts 0.43, 0.25, and 0.39). </p>
<p>One thing is that even though the network with 2000 nodes in the first layer had a better LB score, it ensembled significantly worse than the one with 2500 nodes.</p>
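<p>For reference, a rough modern-Keras transcription of the first architecture above (only an approximation of the original Lasagne/Nolearn setup; the linear learning-rate decay and exact initialisation are omitted):</p>
<pre><code>from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.optimizers import Adagrad

model = Sequential([
    Input(shape=(93,)),               # the 93 Otto count features
    Dropout(0.13),
    Dense(2500, activation='relu'),
    Dropout(0.77),
    Dense(1300, activation='relu'),
    Dropout(0.35),
    Dense(40, activation='relu'),
    Dropout(0.1),
    Dense(9, activation='softmax'),   # 9 classes
])
model.compile(optimizer=Adagrad(learning_rate=0.04), loss='categorical_crossentropy')
# model.fit(X, y_onehot, epochs=600, batch_size=1024)
</code></pre>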
<p><strong>Boosted Neural Networks</strong></p>
<p>There's a final neural network trick I want to mention, because it's sort of hacky but it worked pretty well for me. The idea is to implement a kind of boosting by removing data rows from the training set that are predicted well by the current version of the network, and then to iterate training several times with a small number of epochs.</p>
<p>In principle, if a neural network correctly predicts a data row, it should not change its weights when trained on that data row again. However, if you have a network with random stuff going on in the training (such as dropout) or which is using an objective function that remains under tension even when the network is converged, it should still make a difference. Also, periodically restarting training with different data distributions seems to help kick the network out of local minima in its convergence.</p>
<p>For these networks, we used a couple of different architectures but they weren't much different from the ones above (Input->0.15->2000->0.7->1200->0.4->40->Output). However, it was trained for only 25 epochs at a time (rate=0.02 .. 0.005).</p>
<p>Following that, I generated predictions on the training set using the network (of course this would be overfit in general). I then turned off all data rows whose individual log-loss was less than 2% of the average log-loss, so basically getting rid of rows that were crammed up against P=1 in some column. I repeated this over 16 cycles.</p>
<p>This gave a network that scored 0.433 on the leaderboard. So it seems to be a really nice trick.</p>
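<p>Schematically, that loop looks like the sketch below. The original used Lasagne with dropout; here scikit-learn's MLP with warm_start merely stands in for "keep training the same network", so treat it as the skeleton of the procedure rather than a faithful reimplementation.</p>
<pre><code>import numpy as np
from sklearn.neural_network import MLPClassifier

def boosted_nn(X, y, n_cycles=16, drop_frac=0.02):
    net = MLPClassifier(hidden_layer_sizes=(2000, 1200, 40), max_iter=25, warm_start=True)
    active = np.ones(len(X), dtype=bool)
    for _ in range(n_cycles):
        net.fit(X[active], y[active])                    # 25 more epochs on the remaining rows
        proba = net.predict_proba(X)
        # per-row log-loss of the true class
        class_idx = np.searchsorted(net.classes_, y)
        row_ll = -np.log(np.clip(proba[np.arange(len(y)), class_idx], 1e-15, None))
        # turn off rows that are already predicted almost perfectly
        active &= row_ll > drop_frac * row_ll.mean()
    return net
</code></pre>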
<p><strong>Stochastic Gradient Descent</strong></p>
<p>This is a dark horse. Even the best-performing one we managed to obtain was scoring about 0.58 locally. So why bother?</p>
<p>Well, one thing that it seems that linear SGD was good for in this competition was for feature engineering experiments. Linear SGD isn't clever on its own, so if you add something with a bit of cleverness to it then the CV will tend to be a lot more responsive than it might be with a more complex algorithm. So we were able to find out a few things about the data here, even if the final results weren't all that useful in ensemble.</p>
<p>One set of features that was pretty nice was to one-hot encode the index of the top four ranking data columns for a given row. That's a very non-trivial measure (because it requires ~log(94) comparisons to establish), so it isn't captured by second-order, third-order, etc interactions very well - there's some hope it could add to even something fairly strong like XGBoost, since basically you're substituting in a sensible, existing depth-7 tree as one of the inputs. </p>
<p>Another similar thing was that for second-order interactions, the ranking interaction was more informative than the product (that is to say, the quadratic feature is set to 1 if feature A > feature B, and is otherwise zero; rather than the feature being A*B).</p>
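<p>A small sketch of those two feature ideas (93 is the Otto feature count; the random matrix is just a stand-in for the data):</p>
<pre><code>import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.random.poisson(1.0, size=(1000, 93))     # stand-in for the 93 count features

# one-hot encode the indices of the top-4 ranking columns of each row
top4 = np.argsort(-X, axis=1)[:, :4]                                  # (n_rows, 4) column indices
top4_onehot = OneHotEncoder(categories=[np.arange(93)] * 4).fit_transform(top4)

# pairwise "ranking interaction": 1 if feature A > feature B, else 0
i, j = np.triu_indices(X.shape[1], k=1)
rank_interactions = (X[:, i] > X[:, j]).astype(np.int8)               # (n_rows, n_pairs)
</code></pre>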
<p><strong>Random Forest, GBM, Support Vector Machines</strong></p>
<p>Hoang had the best versions of these models, so I won't get into it so much.</p>
<p>I will mention that one trick with SVMs here is that it seems that you can get some pretty nice improvements by tuning a per-column scaling on the data.</p>
<h3>Ensembling</h3>
<p>As I mentioned before, smart ensembling and having multiple different biases seemed to be the most important things for this competition. The issue was that XGBoost and neural networks both ran out in front as the best-performing algorithms on this dataset. So that means that it takes a lot to make other models which contribute meaningfully to the ensemble in a straight out average, even if they have different biases. Also, when you average too much stuff together at some point you go from marginal gains from bagging to losing logloss due to regression towards the mean.</p>
<p>The other part is that since this is a multi-class dataset, some algorithms will perform better on distinguishing certain classes than others. A straight-up averaging doesn't immediately take advantage of that.</p>
<p>So one of the big improvements came from the fact that Hoang had written a neural network-based ensembler which ended up taking even models performing in the 0.58 range and managed to squeeze some improvement out of them.</p>
<p>Another trick in the ensembling stage (which didn't make it into the final submission) was when bagging together copies of models, to look at the portion of information from the different folds which was statistically independent and use it to 'overshoot' in a sane fashion.</p>
<p>The algorithm is something like:</p>
<p>- Calculate the probability that the two models you want to blend will give the same result (using, e.g., the largest probability out of the 9 to determine what class it thinks the row is). Let's call this pXY.</p>
<p>You also may want to weight the models, so let's call that factor 'b', and it's between 0 and 1.</p>
<p>So, as a function of the estimated probabilities from the models P1 and P2, the blended, non-normalized probability Pb of a class is:</p>
<p>Pb = pXY * (P1*b + P2*(1-b) ) + (1-pXY)*( P1^(2*b) * P2^(2*(1-b)) ) / ( P1^(2*b) * P2^(2*(1-b)) + (1-P1)^(2*b) * (1-P2)^(2*(1-b)) )</p>
<p>Basically the idea is to say, if there's a chance that the models agree not because of their prior probability of agreement but because of some un-shared information, you should adjust the probability of that class to be more confident than either of the models would be alone.</p>
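<p>The same formula as a small function, assuming P1 and P2 are arrays of per-class probabilities from the two models and pXY and b are scalars; the renormalisation at the end is my addition, since the formula above leaves the blended values unnormalised:</p>
<pre><code>import numpy as np

def overshoot_blend(P1, P2, pXY, b=0.5):
    """Blend two probability estimates, overshooting where agreement looks informative."""
    lin = P1 * b + P2 * (1 - b)                            # plain weighted average
    num = P1 ** (2 * b) * P2 ** (2 * (1 - b))
    den = num + (1 - P1) ** (2 * b) * (1 - P2) ** (2 * (1 - b))
    Pb = pXY * lin + (1 - pXY) * num / den
    return Pb / Pb.sum(axis=-1, keepdims=True)             # renormalise across classes (my addition)
</code></pre>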
<p>It turned out that this was giving improvements of about 0.0007 in CV score when we were doing our final big blend, but by the end Hoang's neural network ensemble algorithm had completely outpaced it. Still, I felt it was worth a mention because it seems to be a very nice trick to avoid some of the problem of regression towards the mean when you're combining a large number of models.</p>
<p>Anyhow, I think I'll leave off here since Hoang can say more about the specifics of the ensembling method than I can.</p>
<p>[quote=fchollet;79283]</p>
<p><em>> Yeah, I had a harder time getting good results out of Keras. I think its the user feedback somehow - I found Lasagne's output to drive my intuition more strongly. Since I did a lot of hand-optimization, that was important.</em></p>
<p>What feedback does Lasagne output that you find useful for hand-optimization? I'm always looking for ways to improve Keras.</p>
<p>[/quote]</p>
<p>The major thing was that it gave me a color-coded indicator when the given epoch performed better than any previous performance on that data subset (training and validation). That was really good for just eyeing the process and getting a feel for how healthy it was. The other useful thing was the readout of the validation/train ratio. I suspect that when you make a full-fledged GUI, you can make some really good visual feedbacks of this sort (a convergence plot is a lot easier to read than columns of ascii, afterall).</p>
<p>There's also some weird stuff. There was some Theano error with some of my runs associated with the validation data, so also I think I wasn't seeing everything that it could show. I'll post that code if it's helpful for you.</p>
<p>Edit: Now that I go back and check, I'm getting no errors, so I don't know what was up with that. But it does bring to mind another thought: the scrolling ascii arrow thing ended up creating huge, huge nohup.out files when I was doing runs. I set verbose=0 to fix it, but just a thing to be aware of.</p>
<p>Also, ironically, I think the fact that Keras's optimizers are more efficient actually made it harder to get a feel for the convergence. The Adam optimizer did in about 20-30 iterations what AdaGrad was taking 200-300 to do. But the consequence of that was more of a feeling of continuity in the convergence. That's less of a feature request (please make your algorithms more inefficient!) but more of a different factor between the two that might be relevant. Maybe its possible to fake it with per-batch readouts or something like that?</p>
<p>As far as feature requests, a big one that I wanted in both Lasagne and Keras was a convenient way to mess directly with the weights during the training process. For example, in my own messings around I had some luck doing random perturbations directly on weights in an autoencoder task. It helped the thing not get stuck on symmetries or just fitting the mean pattern. But I couldn't figure out an easy way of doing it as part of either the natural Keras or Lasagne workflow. </p>
|
Otto Group Product Classification Challenge
|
Strategy for top 25 score
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Otto Group Product Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>[quote=Daia Alexandru;79927]</p>
<p>Hello , I am currious if you could be more detailed regarding the genetic alghoritm and distance correlation?Tnks!I am interested in particular if you did this to capture the non linear stuff in the features. I tryied this on weekend on filtered features based on mutual info , adding a lot of learners but only the conditional forest performed better hence I supposed that will performe better since this measures are non parametric like the authors of conditional forest mentioned in the package description and also some averaged linear models .</p>
<p>[/quote]</p>
<p>Here's <a href="https://www.kaggle.com/inversion/otto-group-product-classification-challenge/visualizing-class-distance">a script</a> that shows how I calculated the distance metric. The only real difference is in the competition, I computed percentiles at 10, 25, 50, 75, 90. (The script I simplified to 10, 50, 90)</p>
<p>[quote=NISHIO Hirokazu;80108]</p>
<p>How many models do you have? It looks GA is choosing 20 models from more models.</p>
<p>[/quote]</p>
<p>It was 500+, although only about 50 of them ended up being worth much.</p>
<p>[quote=J Kolb;79366]</p>
<p>By not allowing models that don't fit well with your previous model, aren't you ruling out models which are uncorrelated to your current best model and which might potentially be good to bring in for ensembling?</p>
<p>[/quote]</p>
<p>This was mostly a dummy check against overfit ensembles or models where I had stacked in previous model predictions.</p>
<p>It was very empirical - time and time again, when I tried to post an ensemble with a "pseudo-score" of > 0.21, the LB score was worse. In general, pseudo-scores between 0.19-0.21 had decent LB scores.</p>
<p>With the GA approach, <strong>I didn't worry about the models going in</strong>. I only tried to minimize the ensemble loss, while putting in a soft constraint for "pseudo-score", adding the difference of (pseudo_score - 0.2040) to the ensemble loss anytime the pseudo-score was greater than 0.2040.</p>
<p>So, this method doesn't rule out any models. Just starts penalizing the ensemble for straying <em>too</em> far away from what was already known to be my best entry.</p>
<p>[quote=Herra Huu;79356]</p>
<p>Ooh, now that's a hot live act. Now I definitely have to listen more from these guys.</p>
<p>[/quote]</p>
<p>Here's the other two clips of the band from the show. The second starts with the bassist coming in from crowd surfing! You're absolutely right, these guys knew how to put on a show! Super high energy.</p>
<p>https://www.youtube.com/watch?v=mvxVGrd23bw</p>
<p>https://www.youtube.com/watch?v=JUL0JSpvQ6k</p>
<p>Bonus Video:</p>
<p>I was at the TrollFest/Korpiklaani/Ensiferum concert in Chicago last night, which started 30 minutes after the Otto contest close.</p>
<p>I have to say, it was the best way imaginable of getting past my top-10 near miss.</p>
<p>Here's <a href="https://www.youtube.com/watch?v=8PC6jCfTJ8E">TrollFest doing a cover of Toxic</a> (Brittany Spears).</p>
<p>[quote=SkyLibrary;79339]</p>
<p>Thanks for sharing your approach, inversion.</p>
<p>May I ask what software did you use for genetic algorithm ensemble?</p>
<p>This is not the first time I saw this term, I'd like to learn how it works and how to implement it or use it.</p>
<p>[/quote]</p>
<p>I used DEAP. </p>
<p>https://github.com/deap/deap</p>
<p>I'll post code tonight or tomorrow morning.</p>
<p>Basically, you have a population of individuals that looks something like this:</p>
<p><code>[22, 3, 63, 241, 3, 1, 77, 32, 56]</code></p>
<p>Those are just model numbers, and my objective function takes those models, averages their predictions together, and spits out a log_loss score.</p>
<p>The GA combines different individuals in the population by crossing them. And individuals can be mutated (e.g., by turning the "22" into a "23"). The population slowly weeds out the least fit individuals, and you're left with a strong ensemble of models.</p>
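<p>I used DEAP for this; below is a bare-bones, library-free sketch of the same idea just to show the moving parts (individuals are lists of model indices drawn with replacement, fitness is the log loss of their averaged out-of-fold predictions):</p>
<pre><code>import numpy as np
from sklearn.metrics import log_loss

def ga_ensemble(oof_preds, y, size=20, pop=50, gens=100, seed=0):
    """oof_preds: list of (n_rows, n_classes) out-of-fold prediction arrays."""
    rng = np.random.RandomState(seed)
    n_models = len(oof_preds)
    fitness = lambda ind: log_loss(y, np.mean([oof_preds[i] for i in ind], axis=0))

    population = [list(rng.randint(0, n_models, size)) for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(population, key=fitness)[: pop // 2]      # keep the fitter half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = rng.randint(1, size)
            child = parents[a][:cut] + parents[b][cut:]            # one-point crossover
            if rng.rand() < 0.3:                                   # mutation: swap one model
                child[rng.randint(size)] = rng.randint(n_models)
            children.append(child)
        population = parents + children
    best = min(population, key=fitness)
    return best, fitness(best)
</code></pre>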
<p>This was a pretty simple strategy that scored in the top 25 all by itself, giving a LB score of 0.40327.</p>
<ul>
<li>XGB Boost as main workhorse.</li>
<li>Created 10-fold training self-predictions</li>
<li>Added in different combinations of features when building models.</li>
<li>Used genetic algorithm ensemble (with replacement).</li>
</ul>
<p>For features, I used obvious ones (sum of the row, number of non-zero, max of the row, number of 1s, 2s, 3s, and position of max value) as well as different manifold algorithms (e.g., t-SNE).</p>
<p>Some distance measures worked better than others, e.g., Correlation distance. Here's a gif I made showing how it separates. (I'll be providing a Kaggle script for this soon.)</p>
<p>http://i.imgur.com/T78VdKw.gifv</p>
<p>The gif is 45 features cross plotted against each other. For each row, I calculated the distance between that row and all the rows in Class 1, Class 2, etc. That gave me a distribution of distances for that row. Then I calculated the 10, 25, 50, 75, 90th percentile of those distributions for each class. In other words, for a random row, I'm calculating the distribution of how near it is to all the points in each class. (9 classes x 5 percentiles)</p>
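<p>A compact version of that computation with scipy, using the correlation distance mentioned above; variable names are illustrative:</p>
<pre><code>import numpy as np
from scipy.spatial.distance import cdist

def class_distance_features(X, X_train, y_train,
                            percentiles=(10, 25, 50, 75, 90), metric='correlation'):
    """For each row of X: percentiles of its distances to the training rows of each class."""
    feats = []
    for cls in np.unique(y_train):
        d = cdist(X, X_train[y_train == cls], metric=metric)    # (n_rows, n_rows_in_class)
        feats.append(np.percentile(d, percentiles, axis=1).T)   # (n_rows, 5)
    return np.hstack(feats)                                     # 9 classes x 5 percentiles
</code></pre>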
<p>I also used previous best model predictions as features.</p>
<p>For the Genetic Algorithm, I used DEAP where the gene was a vector length 20 composed of different model numbers.</p>
<p>A trick I found useful:</p>
<p>I always used my best LB submission to date to act as a pseudo ground truth, and compared new submissions against that (loss typically being around 0.20). If I ran a model that had a good training loss, but the "pseudo score" got significantly worse, that meant I was overfitting.</p>
<p>In other words, if my local loss improved to 0.390 but my pseudo-loss jumped to 0.25, I knew I overfit.</p>
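<p>In code, the check is just a log loss against the argmax labels of the current best submission (a sketch; both submissions are assumed to be (n_rows, 9) probability arrays):</p>
<pre><code>import numpy as np
from sklearn.metrics import log_loss

def pseudo_score(new_sub, best_sub):
    """Log loss of a new submission against the argmax labels of the current best one."""
    pseudo_labels = best_sub.argmax(axis=1)
    return log_loss(pseudo_labels, new_sub, labels=np.arange(best_sub.shape[1]))

# if local CV improves but pseudo_score jumps from ~0.20 to ~0.25, suspect overfitting
</code></pre>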
<p>EDIT: I uploaded a youtube video of the <a href="https://www.youtube.com/watch?v=OOkBYL5rHpk">feature cross plots</a> that is better than the gif.</p>
|
Outbrain Click Prediction
|
2nd place solution | team brain-afk
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Outbrain Click Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi fellow kagglers,</p>
<p>Before starting ... a big thanks to Outbrain and Kaggle for hosting this competition. Also congratulations to code monkeys for a well deserved win, to Three Data Points and last but not least to my teammates: This has been a really fun competition, the teamwork was just great and there was plenty of things to learn.</p>
<p>Our solution involved a lot of feature engineering, model diversity, stacking and a custom implementation of FFM by Alexey, which allowed very fast iterations and yielded a very strong single model at the end.</p>
<p><br><strong>Our most important features</strong><br></p>
<ul>
<li>rcarson's leak feature. This was very strong. It was slightly
improved by bucketing it into rows where the page_view doc was
before, 1 hour after, 1 day after and > 1 day after the display
timestamp.</li>
<li>Competing ads – on each row, we hashed each individual competing ad
as a feature in FFM.</li>
<li>Hashed all combinations of the document/traffic_source clicked by a
user in page_views. So if a user came to a document from
'search', it would be treated differently to a user coming to the
same document from 'internal'. Any documents occurring less than 80
times in events were dropped.</li>
<li>Hashed sources (from documents_meta) of all the page view
documents for each user. This gave slight improvement, perhaps due to
the fact that it better captured rare documents.</li>
<li>One hour after clicks – hashed as a new feature the documents clicked
by a user within one hour of the ad click. This is a form of leak, as
it is likely after clicking the ad, there are further links in the ad
to other documents, or users search content related to the ad.</li>
<li>Interaction of ad_doc_category and doc_category. Weekdays – worked
well in the stack – and hour.</li>
<li>Log of time difference between display doc creation time and current time, as well as difference between display doc creation time and ad doc creation time.</li>
<li>Within page_views, whether the user viewed ad documents of the same publisher or the same source.</li>
<li>Flags, if user viewed ad documents of similar
category, or similar topic.</li>
<li>Flags, if user viewed this ad in the past, if he
clicked it in the past; Same for publisher, source, category and
topic.</li>
<li>Flags, if the user viewed this ad in the future, same for the ad doc, or if the user did not view this ad in the future but viewed ads of the same campaign.</li>
</ul>
<p><br><strong>Models used @ L1</strong><br></p>
<ul>
<li>LibFFM</li>
<li>Vowpal Wabbit FTRL as well as tingrtu's version</li>
<li>Liblinear</li>
<li>XGBoost</li>
<li>Keras</li>
<li>Logistic regression</li>
<li>SVC</li>
</ul>
<p>As mentioned above, Alexey wrote a custom version of FFM which was much faster and consumed considerably less memory. This will be published soon on github for reuse. </p>
<p><br><strong>CV & Meta Modelling</strong><br>
We used a subset of around 6M rows as our validation set, which we sampled according to the structure of the given test set (2 future days, 50% common days / 50% future days rows). Additionally, we used a training subset of around 14M rows in order to be able to test new ideas faster. </p>
<p>Before our team merge with Alexey, stacking around 20 L1-models on this 6M-set has given us a gain of around 0.003 at the public leaderboard. For that, we used a blend of Xgb & Keras models trained on the 6M set at once (=> no common days / future days separation).</p>
<p>During the last week, we improved our single model performance significantly and the gain from stacking decreased to roughly 0.001. On top of the L1-predictions we used normalized time as a feature at L2, which yielded a gain of around 0.00020.
Alexey had his own meta-stack ready by the time we merged, and we blended that into our final submission (both stacks showed low correlation until the very end of the competition, even though Alexey rebuilt his models for our 6M validation set). </p>
<p><br><strong>Final solution</strong><br>
Our final submission is a geometric mean blend of Alexey’s meta stack, a bagged XGBoost and a bagged Keras model. Weights have been chosen by gut feeling and LB scores (we had no out-of-fold predictions for L2 to optimize the weights on).</p>
<p><br><strong>Best single model</strong><br>
Our best single model comes from Alexey's custom FFM implementation (2 times bagged). It scores 0.70017 at public LB and 0.70053 at private LB.</p>
|
Outbrain Click Prediction
|
3rd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Outbrain Click Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>@nomo,</p>
<p>1) Splitting data worked for a and b. And it seemed to work for the validation data set of the stacking model, but the LB score decreased. We did not have enough time to find the reason, so we did not use splitting for the stacking model.</p>
<p>2) The single model gain is 0.001 for ffm future model with sample weight, 0.002 for ffm present model with timestamp feature and 0.002 for xgb present model with timestamp.</p>
<p>3) Exactly. We used this formula: </p>
<pre><code>sample weight = 1 + 2 * 0.5 ^ ((max_timestamp - timestamp)/86400000.0)
</code></pre>
<p>4) The timestamp feature for present data made a big gain without any special method. We used almost the same hyper-parameters for the split models. The only tuned hyper-parameter is iterations/ntrees.</p>
<p>@nomo,</p>
<p>1) The gbdt feature does not contain label information but contains impression/pageview counts. This is almost the same as what I listed in the first post of this topic.</p>
<p>2) Yes, it does. The gbdt training data was the same as the ffm training data, but it did not suffer from overfitting.</p>
<p>3) I'm not sure I understood your question correctly, but we only tried the xgb leaf-node information itself as a feature and did not compare it with any variations.</p>
<p>4) We used ntrees=39 and maxDepth=15. Almost all features and hyper-parameters of this model are the same as those of the best xgb single model; only eta was tuned for this model to reduce ntrees.</p>
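<p>For reference, a sketch of extracting those leaf indices with xgboost's pred_leaf option; the data and the binary objective here are toy stand-ins, and feeding the result to FFM just means treating each (tree, leaf) pair as its own categorical feature:</p>
<pre><code>import numpy as np
import xgboost as xgb

# toy stand-in data
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, 1000)

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({'max_depth': 15, 'objective': 'binary:logistic'},
                    dtrain, num_boost_round=39)

leaves = booster.predict(dtrain, pred_leaf=True)     # (n_rows, n_trees) leaf indices
# one categorical id per (tree, leaf) pair for the FFM input
leaf_ids = [[f'{t}_{int(leaf)}' for t, leaf in enumerate(row)] for row in leaves]
</code></pre>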
<p>@Gabriel Moreira,</p>
<p>I uploaded this source code. You can find pairwise rank customization <a href="https://github.com/CuteChibiko/Outbrain-Click-Prediction/blob/master/libffm_pairwise/ffm.cpp#L443">around here</a>.</p>
<pre><code>if(param.use_default_train){
# default libffm training
} else {
# pairwise rank training
}
</code></pre>
<p>@Gabriel Moreira, thank you for your feedback.</p>
<p>The difference was 0.0015 ~ 0.002 for our models.
Relatively bigger ETA for LibFFM pairwise made better results.</p>
<p>You are right. It is using only 1 CPU to calculate validation score.
I don't remember why I commented out <a href="https://github.com/CuteChibiko/Outbrain-Click-Prediction/blob/master/libffm_pairwise/ffm.cpp#L500">here</a> and did not use omp. :(</p>
<p>We added higher weights to recent samples for the ‘future’ model. This improved the ‘future’ model score by about 0.001.</p>
<p>Thank you Yuriy Makarov,</p>
<p>Yes, we tried CTR with an impression threshold. If the impression threshold was low, CTR caused overfitting, and if the impression threshold was high, CTR did not contribute to the score. So we did not use CTR in the end.</p>
<p>"immediate document viewed after click event." means the first document that was seen by a user after the user clicked an ad. </p>
<p>@Gabriel Moreira,</p>
<p>Your CV score drop, from CV=0.679 to 0.670, is very big, so I think something should be happening.
I'm not sure why, but maybe you are right. We made XGB models for test and validation respectively, so this may cause the difference.</p>
<p>BTW, our best single score did not improve with GBDT leaves, but they worked for the ensemble model.</p>
<p>Our best single CV score without GBDT leaves was 0.69586 and the score with GBDT leaves was 0.69588 when we tested them. A weighted average of the two scored 0.6971. So even though the single model score was not improved, the GBDT leaves feature was very important.</p>
<p>Congratulations to code monkey, brain-afk and everyone!
And I’d like to thank my teammates carl and Little Boat. I could learn lots of things from them.</p>
<p>As <a href="https://www.kaggle.com/c/outbrain-click-prediction/forums/t/27897/congrats-and-solution-sharing?forumMessageId=157062#post157062">Little Boat explained</a>, our model is ffm models + xgboost models as first layer, and xgboost as second layer.</p>
<p>I would like to explain our first layer models here.</p>
<h3><strong>Feature extraction</strong></h3>
<p>Following is the list of unobvious features we used.
We made lots of other features which turned out to be useless.
But analyzing large scale user behavior was very interesting to me.</p>
<ul>
<li>page view counts of each user.</li>
<li>page view counts of ad landing page.</li>
<li>impression counts for each ad_id, landing document_id, campaign_id and advertiser_id.</li>
<li>landing page confidence vector, where a confidence vector is defined as a vector whose elements are the confidence levels from documents_*.csv. This feature is used only for ffm as numeric data.</li>
<li>user confidence vector - the average of the document confidence vectors of the documents viewed by each user. This feature is used only for ffm as numeric data.</li>
<li>inner dot product of document confidence vectors of ad impression page and ad landing page.</li>
<li>inner dot product of user confidence vector and ad landing page document confidence vector</li>
<li>XGB leaf for ffm feature.</li>
<li>immediate document viewed after click event.</li>
</ul>
<h3><strong>Modeling and Training</strong></h3>
<strong>FFM</strong>
<p>In addition to <a href="https://www.kaggle.com/c/outbrain-click-prediction/forums/t/27892/introducing-light-ffm-and-stack-nn">carl’s light-ffm</a>, we used another customized libffm which has pair-wise rank and can take sample weight.
Best public LB score is 0.6974.</p>
<strong>XGB</strong>
<p>We used xgboost with rank:pairwise objective.
Best public LB score is 0.6885.</p>
<strong>Leak Row Excluding</strong>
<p>We trained ffm models without leak rows by removing them from the training data first, then blended/merged the leak information back later.
This improved the single model score.</p>
<strong>Present/Future split</strong>
<p>We split the test data into ‘present’ data (in-time samples) and ‘future’ data (out-of-time samples), then built models for each set.
Here, features and hyper-parameters for each model are optimized independently.
For example, we used the timestamp feature for the ‘present’ model, but did not use it for the ‘future’ model. And we used sample weights on the training data for the ‘future’ model to give a higher weight to the last day than to the first day.</p>
<p>Please feel free to ask any questions.</p>
<p>@Sameh Faidi,</p>
<p>By impression count I meant how many times the ad is shown.
An explanation of impressions in the context of online advertising can be found on <a href="https://en.wikipedia.org/wiki/Impression_(online_media)">Wikipedia</a>. </p>
<p>For example, impression counts for advertiser_id means how many times ads from the advertiser is shown.</p>
<p>Thank you, Sangzia,</p>
<ol>
<li>Page 9 of <a href="http://www.csie.ntu.edu.tw/~r01922136/kaggle-2014-criteo.pdf">this presentation by 3 idiots</a> is a very good explanation of the XGB leaf feature. </li>
<li>Document confidence vectors are not normalized, but user confidence vectors are normalized.</li>
</ol>
|
Outbrain Click Prediction
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Outbrain Click Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey Andrew, sorry for the delay in replying.</p>
<p>Time required for feature engineering depends on the number of cores, as it is designed to work in parallel; on a 6-core machine it would take around 2-3 hours. Amount of data produced: > 500G. A fast SSD drive was used locally and EBS on AWS.</p>
<p>Same with training: on a 6-core machine, for k=4 it would take maybe 15 hours to train. With an x8large it would definitely take less. </p>
<p>I've been using mostly x4 and x8 RAM-optimized instances. Usually 1, rarely 2 in parallel, plus a local 128G machine (i7-6800).</p>
<p>Nearly 75% of the engineering was done on a 32G Alienware laptop, however.</p>
<p>Andrii</p>
<p>[quote=FengLi;157194]</p>
<p>@Andrii Cherednychenko Thank you for sharing. <em>countryCTRDoc</em> means P(clicked=1|country,document_id) ?. Then you merge it back to test data? If so, some paris are very unique which would case severe overfitting. How did you handle this issue? Thanks </p>
<p>[/quote]</p>
<p>Yes, the formula is correct - clicked / seen. I fight overfitting by taking this feature only when the times-seen count is > 10. Same for all CTR* features.</p>
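<p>A pandas sketch of that kind of thresholded CTR feature (column names are illustrative, and this ignores the time-split/leakage handling a real pipeline would need):</p>
<pre><code>import pandas as pd

def ctr_feature(df, keys, min_seen=10):
    """CTR = clicked / seen per key combination, kept only when seen > min_seen."""
    grp = df.groupby(keys)['clicked']
    stats = pd.DataFrame({'seen': grp.size(), 'clicks': grp.sum()}).reset_index()
    stats['ctr'] = stats['clicks'] / stats['seen']
    stats.loc[stats['seen'] <= min_seen, 'ctr'] = float('nan')   # too rare: no CTR value
    return df.merge(stats[keys + ['ctr']], on=keys, how='left')

# e.g. a countryCTRDoc-style feature:
# train = ctr_feature(train, ['country', 'document_id'])
</code></pre>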
<p>Reviewing what is written, everything seems so easy, but the devil is in the details, and there are a lot of them :-)</p>
<p>Hey there, </p>
<p>Congrats to the winners and thanks a lot for congrats to my side :-)</p>
<p>Sharing details on my solution if anyone is interested; basically it is an FFM model averaged over the last 5 runs:</p>
<p>Technologies / frameworks:</p>
<ol>
<li><p>Early merging of files was done using Python. Basically, I merged all files into train/test/submission files, then read them line by line to generate features</p></li>
<li><p>All feature generation and statistics building is done using Scala. I found it way faster than Python + I am better with it, especially when it came to parallelization.
I have also filtered page_views.csv to users seen in train/submission sets, so different statistics would fit in RAM. Size of FFM train file ~180G. RAM required ~ 80G</p></li>
<li><p>I did not use hashing for features; each feature seen >= 10 times got its own ID. Submission features unseen in train were dropped</p></li>
<li><p>For numerical features, I had different strategies for dealing with them - bins, log smoothing or leaving as is, with the best chosen for each based on validation. Some fields contain a single feature, some a bag of features. The total number of features is > 5M</p></li>
<li><p>I added the MAP metric into libffm, so it would stop based on it, not on logloss. Used the default parameters of libffm, including k=4</p></li>
<li><p>No special approach was taken to deal with the leak; removing it reduced the model's precision, and the model learned almost all leaks naturally, so no post-processing was required</p></li>
<li><p>Most of the feature engineering was done on a 32Gb laptop; later I used a 128Gb RAM machine and for the last week used AWS</p></li>
<li><p>What did not work for me - GBT leaves as features, user clustering, MF of page_views.csv, stacking of FFM. Also, I ran out of time for stacking ffm->xgboost. </p></li>
<li><p>List of features:</p></li>
</ol>
<blockquote>
<p>country, state, platform, county, pageDocumentCategories,
pageDocumentEntities, pageDocumentTopics, publisherCTRAdv,
publisherCTRCamp, publisherCTRDoc, countryCTRAdv, countryCTRCamp,
countryCTRDoc, stateCTRAdv, stateCTRCamp, stateCTRDoc, countyCTRAdv,
countyCTRCamp, countyCTRDoc, prevDayCTRAdv, prevDayCTRCamp,
prevDayCTRDoc, currentDayCTRAdv, currentDayCTRCamp, currentDayCTRDoc,
nextDayCTRAdv, nextDayCTRCamp, nextDayCTRDoc, currentHourCTRAdv,
currentHourCTRCamp, currentHourCTRDoc, prevHourCTRAdv,
prevHourCTRCamp, prevHourCTRDoc, nextHourCTRAdv, nextHourCTRCamp,
nextHourCTRDoc, publisherSourceCTRAdv, publisherSourceCTRCamp,
publisherSourceCTRDoc, adId, documentId, campaignId, advertiserId,
userPageViewsMeta1, userPageViewsMeta2, documentIdViews,
campaignIdViews, advertiserIdViews, userDocsSeenFromLogToday,
userPageMeta1SeenToday, userPageMeta2SeenToday,
userCampSeenFromLogToday, userAdvertisersSeenFromLogToday,
userDocsSeenFromLogTomorrow, userPageMeta1SeenTomorrow,
userPageMeta2SeenTomorrow, docStats, documentCategoriesId, advStats,
documentEntitiesId, documentTopicsId, metaSourceId, metaPublisherId,
pageDocumentMeta1, pageDocumentMeta2, userClickedThisDocumentTimes,
userSkippedThisDocumentTimes, userClickedThisCampaignTimes,
userSkippedThisCampaignTimes, userClickedThisAdvertiserTimes,
userSkippedThisAdvertiserTimes, hourOfDay, dayNum, dayOfWeek,
thisAdEntityClickedBefore, daysFromAdDocPublished,
thisAdEntityClickedBeforeCount, thisAdCategoryClickedBefore,
thisAdCategoryClickedBeforeCount, userClickedThisAdTimes,
userSkippedThisAdTimes, userPageViewsDocuments,
userPageViewsCampaigns, userPageViewsAdvertisers,
userPageViewsCategories, userPageViewsEntitites, userPageViewsTopics,
seenThisCategoryInLog, userSeenThisMeta1, userSeenThisMeta2,
seenThisDocInLog, seenThisCampaignInLog, seenThisAdvInLog, campStats,
adStats, userCampSeenFromLogTomorrow,
userAdvertisersSeenFromLogTomorrow, seenThisEntityInLog,
userAllAdsFreq, nextDocUserClicked, nextCampUserClicked,
nextAdvUserClicked, userClickedCount, userSkippedCount,
userClickedCountToday, userDocsClickedToday, userCampClickedToday,
userAdvertisersClickedToday, seenThisTopicInLog,
userClickedThisDocFreq, userClickedThisCampFreq,
userClickedThisAdvFreq, lastPageViewUserSeen, nextPageViewUserSeen,
lastDocUserClicked, lastCampUserClicked, lastAdvUserClicked</p>
</blockquote>
<p>Thanks,</p>
<p>Andrii</p>
|
Outbrain Click Prediction
|
Congrats and Solution Sharing
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Outbrain Click Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations code monkey, brain-afk and Three Data Points for the top 3 finish. Congratulations to other top finishers as well. </p>
<p>It was a very interesting competition due to a lot of factors such as the data size, the number of tables to use, the test set having both in-time and out-of-time samples, ensembling for the MAP metric, etc.</p>
<p>It will be really helpful for other people in the community if the top finishers share their ideas, codes and the thinking that went behind them in this thread.</p>
<p>Thank you Kaggle and outbrain for this nice competition. Thanks to my team mates and everyone else for making this competition more lively.</p>
<p>Thank you.! </p>
|
Outbrain Click Prediction
|
Takeaway for R users - 0.68 on laptop- FTRL, data.table and feature hashing
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Outbrain Click Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all. As part of our solution we used FTRL. Here I summarized how to achieve ~50th place with R (and a little bit of Rcpp) on a laptop:</p>
<ol>
<li><a href="https://dsnotes.com/post/2017-01-27-lessons-learned-from-outbrain-click-prediction-kaggle-competition/">Part 1 - Large data, feature hashing and online learning</a></li>
<li><a href="https://dsnotes.com/post/2017-02-07-large-data-feature-hashing-and-online-learning-part-2/">Part 2 - Fitting logistic regression on 100gb dataset on a laptop</a></li>
</ol>
<p>EDIT - repo with easily reproducible end-to-end code <a href="https://github.com/dselivanov/kaggle-outbrain">https://github.com/dselivanov/kaggle-outbrain</a></p>
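<p>For readers who prefer Python, here is a minimal sketch of the same idea described in the linked posts (feature hashing plus online learning in chunks). It is not the author's R/Rcpp code: <code>SGDClassifier</code> with logistic loss stands in for a proper FTRL-proximal optimizer, and the file and column names are hypothetical.</p>
<pre><code>import pandas as pd
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier

# Hash raw categorical features into a fixed-size sparse space and learn online.
hasher = FeatureHasher(n_features=2 ** 24, input_type="string")
clf = SGDClassifier(loss="log_loss", alpha=1e-6)  # "log" in older scikit-learn

# Hypothetical column names; adapt to the joined clicks/events/meta tables.
cat_cols = ["ad_id", "document_id", "campaign_id", "advertiser_id", "platform"]

for chunk in pd.read_csv("clicks_train_joined.csv", chunksize=1_000_000):
    # Turn each row into "column=value" tokens, then hash them to a sparse matrix.
    tokens = chunk[cat_cols].astype(str).apply(
        lambda row: [f"{c}={v}" for c, v in zip(cat_cols, row)], axis=1)
    X = hasher.transform(tokens)
    clf.partial_fit(X, chunk["clicked"].values, classes=[0, 1])
</code></pre>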
<p>@Pengju fixed - try again</p>
|
PLAsTiCC Astronomical Classification
|
#13 Solution, true story: tries and fails
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>So far I found eli5 importance to be the most reliable.</p>
<p>Thank you for this challenge: lots of fun and learning, the data are good and clean, and answers from the organizers were timely. Very well organized.
Here is our write-up.</p>
<p><strong>What worked:</strong>
Our final solution was based on an LGBM model inspired by Olivier's kernel, feature design and selection, and augmentation.</p>
<p><strong>Features:</strong> </p>
<ol>
<li><p>Magnitudes. Both aggregated and per band, but without k-correction till the last day. As we were not quite sure how to interpret the detected flag exactly, we decided to try both: for detected == 1 and for all data. Surprisingly, magnitude features for detected == 1 worked better on LB (though not on CV). I attribute it to the outlier removal that detected == 1 provides, and also to class 99. Indeed, to detect the unknown you have to be 100% sure, so the detected points may be important for class 99 discovery. Known objects are a different story: we know them and the confidence level of error can be lower. </p></li>
<li><p>Parametric curve fittings. We used the models of Bazin (<a href="https://arxiv.org/pdf/0904.1066.pdf">https://arxiv.org/pdf/0904.1066.pdf</a>) and Karpenka (<a href="https://arxiv.org/abs/1208.1264">https://arxiv.org/abs/1208.1264</a>), Gaussian fits to define peak width, and decay-slope fits of supernova light curves in log scale (a minimal fitting sketch is shown after this list). The fitting loss was one of the most important features in all the parametric fits.</p></li>
<li><p>Basic features from cesium package and statistics. Those based on ratios, std, and skew made it to the final. </p></li>
<li><p>From the Bazin equation we derived the position of the max for the incomplete light curves and derived magnitudes from fits as well. They helped for some bands. We also tried a range of other parameters, m15 and m-10 parameters, peak width at different levels -- but they did not help. </p></li>
<li><p>We checked all features from feets. CAR tau from feets was particularly helpful, but it takes ages to calculate. So Mithrillion optimized the code to make it 10 times faster! If astronomers are interested, I think we can put it in a kernel.</p></li>
<li><p>Autocorrelation lag 0, also estimated peak widths made it to the final. </p></li>
<li><p>Finally, in the last 2 days I was pointed to the proper calculation of the k-correction by Kyle (thank you). I used the calculator from here (<a href="http://kcor.sai.msu.ru/getthecode/">http://kcor.sai.msu.ru/getthecode/</a>) and the colors as suggested here (<a href="https://arxiv.org/pdf/1410.8139.pdf">https://arxiv.org/pdf/1410.8139.pdf</a>). It's only valid for relatively small redshifts, but the majority of the data lie in this region.
The k-correction of magnitudes per band did not help, but along the way we calculated colors, both from detected magnitudes and Bazin-fitted magnitudes --> that's how we got from 0.836 to 0.815 on a single model </p></li>
</ol>
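<p>For reference, here is a minimal sketch of the kind of parametric fit described in point 2 above (not the team's exact code): the Bazin (2009) rise/decay form fitted per object and passband with <code>scipy.optimize.curve_fit</code>, keeping both the parameters and the fitting loss as features.</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit

def bazin(t, a, b, t0, tau_fall, tau_rise):
    """Bazin (2009) supernova light-curve model."""
    return a * np.exp(-(t - t0) / tau_fall) / (1.0 + np.exp(-(t - t0) / tau_rise)) + b

def bazin_features(mjd, flux):
    """Fit one light curve (arrays of mjd and flux); return parameters plus RMSE loss."""
    mjd, flux = np.asarray(mjd, dtype=float), np.asarray(flux, dtype=float)
    p0 = [flux.max(), flux.min(), mjd[np.argmax(flux)], 40.0, 20.0]  # rough initial guess
    try:
        params, _ = curve_fit(bazin, mjd, flux, p0=p0, maxfev=5000)
        loss = np.sqrt(np.mean((bazin(mjd, *params) - flux) ** 2))
        return list(params) + [loss]
    except RuntimeError:  # fit did not converge
        return [np.nan] * 6
</code></pre>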
<p>Feature selection: we tried both RFE from sklearn and manual tuning while looking at eli5 importance and deleting highly correlated features. The manual approach with eli5 importance worked out much better than RFE.</p>
<p><strong>Battling over fitting:</strong></p>
<ol>
<li><p>To overcome overfitting we first augmented the train set 10x, varying the flux within a normal distribution with the corresponding flux_err as its width (and similarly for photoz values, except for galactic classes). We also introduced a small time shift to bring translation symmetry. It was meant mostly for an NN that we did not use in the end (a sketch of the flux part of this augmentation is shown after this list).</p></li>
<li><p>We tweaked the classifier parameters; in particular, tree depth and max_bin made a difference.</p></li>
<li><p>We realized that we had ~30% ddf samples in train and only 1% in test, so we were likely to overfit to the ddf samples. First, we down-sampled ddf in the current augmentation and re-selected features, checking eli5 importance for the wide-field samples only (ddf = 0). Secondly, we changed the augmentation, adding 30x train from ddf = 0 samples and leaving ddf samples only in the original train fold. That helped. Unfortunately, this idea came to us in the last 48 hours, so we did what we could manage within that time, and feature selection and optimization could still have been better. </p></li>
</ol>
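<p>A rough sketch of the flux part of this augmentation (the exact scheme, including the photoz jitter and time shifts, is the team's own): each copy of the train time series redraws every flux value from a normal distribution centred on the observed flux with the reported <code>flux_err</code> as its width.</p>
<pre><code>import numpy as np
import pandas as pd

def augment_flux(train_ts: pd.DataFrame, n_copies: int = 10, seed: int = 0) -> pd.DataFrame:
    """Return the original time series plus n_copies noisy versions.

    train_ts has object_id, mjd, passband, flux, flux_err columns (training_set.csv)."""
    rng = np.random.default_rng(seed)
    copies = [train_ts.assign(aug_id=0)]
    for i in range(1, n_copies + 1):
        noisy = train_ts.copy()
        # redraw each flux around the observed value with its own error
        noisy["flux"] = rng.normal(loc=noisy["flux"].values, scale=noisy["flux_err"].values)
        copies.append(noisy.assign(aug_id=i))
    return pd.concat(copies, ignore_index=True)
</code></pre>
<p>Features are then recomputed per (object_id, aug_id), so each noisy copy acts as an extra training example with the original label.</p>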
<p>With 30x augmentation: CV 0.5, LB 0.815, gap 0.315</p>
<p>Not much ensembling: the final submission was a 50/50 blend of two LGBM models with slightly modified features and parameters</p>
<p><strong>What did not work:</strong></p>
<ol>
<li><p>Autoencoder. Although it showed quite nice curve reconstruction, unfortunately none of its features made it to the final model. We tried to include the autoencoder loss as well -- it did not help.</p></li>
<li><p>More parametric fitting. Although we tried adding models with a double peak (<a href="https://arxiv.org/abs/1208.1264">https://arxiv.org/abs/1208.1264</a> ) and adding tau per band as well, nothing really improved the LB compared to just our very first Bazin parametric fit. </p></li>
<li><p>Gaussian Processes augmentation. We tried to use GP to create more non-ddf samples for augmentation, but ran into some issues with detected and flux error, so this did not work out; still, I think the idea is good. Mithrillion gave more comments under Kyle's solution.</p></li>
<li><p>PU classification to identify class 99. We tried to detect negative labels (class 99 labels that are not in train) for class 99 prediction using a PU classifier (<a href="https://arxiv.org/pdf/1605.06955.pdf">https://arxiv.org/pdf/1605.06955.pdf</a> ). It did not work out...</p></li>
<li><p>Class 99 probing. We tried to find class 99 in test using LB probing. We noticed that a large share of high-probability class 99 objects lies in the far-z region. Indeed, apart from themselves, people tend to know better what is close than what is far. Also, our visible Universe keeps expanding as the light from far-away objects keeps reaching the Earth, so I think we should expect more unknowns from far away. We tried to find class 99 among objects with photoz > 2.5 and a high class 99 probability in Olivier's and Scirpus' methods. Guess what? ... Bingo! It did not work out.</p></li>
</ol>
<p><strong>What we did right:</strong></p>
<ol>
<li><p>We teamed up! Teams have power, totally recommend, it's more fun and more learning and better results.</p></li>
<li><p>We did not give up when we fell to 19th place just three days before the end. The competition is not over till the last submit! It was in the last two-three days that we actually realized our mistakes, fixed what we could, and managed to improve from 0.87 to 0.815 on a single model, getting to 12th place. Then it was just bad luck on private... </p></li>
<li><p>We asked questions on forum. And people answered. Special thanks to Kyle, CPMP and organizers.</p></li>
<li><p>We read, we learned, we tried, we discussed, we tried again and we learned more...</p></li>
</ol>
<p><strong>What we did wrong:</strong></p>
<ol>
<li><p>We did not utilize our submissions properly in the beginning of the competition. When we had plenty of them -- that is when we should have tried more features on the LB as well and experimented more with parameters. In the last days every submission counts and you cannot just tweak parameters while also checking on the LB.</p></li>
<li><p>We did not bring the ddf percentage close to the test data until the last 48 hours. We should have looked at the data more and tried this earlier. In general, I think 99% of the answers are in the data, so if you feel stuck -- look at them again, look at them more.</p></li>
<li><p>Strategy. We first did things that should work, like parametric fits from papers and magnitudes, and we left fun experiments to the end. At the end everything sped up, and we did not have time to properly utilize those experiments. Maybe we should have done them earlier when time was not pressing. It's my third Kaggle competition, so I still do not know what the best strategy is. More comments from Grandmasters here would be helpful.</p></li>
<li><p>We did not plan submissions ahead. Trying to generate 5 submissions in an evening to utilize all of them did not work out. I think it's better to plan what to probe on the LB and prepare ahead, so as to make full use of the submissions. </p></li>
<li><p>Trying to find a black cat in a dark room. We spent too much time trying to find "the secret": identify class 99 and build a classifier to find it. Instead, we could have done more traditional things that bring incremental improvement, like better feature selection and parameter tuning. We left those to the last few days, but by then there was not enough time and not enough submissions left.</p></li>
</ol>
<p><em>The hardest thing of all is to find a black cat in a dark room. Especially if there is no cat.</em> -- Confucius</p>
<p><strong>What I still do not understand:</strong></p>
<p>We noticed considerable variation in fold loss with LGBM; the split is stratified, yet the variation is still big for wide-field-only (ddf = 0) training. Does anyone know why that is?</p>
<p>The NN classifier did not improve beyond around 0.94 even with augmentation. Why is that? We tried tweaking parameters.</p>
<p>Feature selection: what are the best feature selection methods people use here? I've seen a comment about Boruta, but found it for random forest only; does it exist for LGBM and XGBoost? What do people use here?</p>
<p>It looks from other solutions like we did things right, so what was the most important thing we missed to cross the 0.8 margin on a single model (we had 0.815)? And why was there such a shake-up on private? </p>
<p><strong>Personal outcome:</strong> It was hard work and lots of learning. The last few days were especially crazy for me. I was literally holding a baby with one hand and programming with the other, trying to concentrate and compete for gold against people who can use both hands, lol. Not much sleep, not much food, no time for anything else. </p>
<p>Still, it was a fun and learning experience! Thanks to Kaggle, the organizers and most of all -- to my best-ever teammates!!!</p>
<p>PS. Dear Kaggle people, please consider:
- adding emoticons (happy, sad, thinking, and a little penguin dancing),
- adding a sign:
KAGGLE IS ADDICTIVE! ENTER AT YOUR OWN RISK!!!</p>
|
PLAsTiCC Astronomical Classification
|
11th solution - very basic but may different methods
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used the celerite package for GP, referring to <a href="https://celerite.readthedocs.io/en/stable/tutorials/modeling/">this example</a>, and the pywavelets package for wavelet decomposition, referring to <a href="https://pywavelets.readthedocs.io/en/latest/regression/dwt-idwt.html#more-examples">these examples</a>. I had never learned wavelets before; I got stuck and couldn't find a suitable way to do it well. You may do it better!</p>
<p>In addition, set the z bound: <br>
<code>
bounds={'z':(max([hostgal_photoz - hostgal_photoz_err, 0]), hostgal_photoz + hostgal_photoz_err)}
</code></p>
<p>For SALT2, I set the passband map: <br>
<code>
{0: 'lsstu', 1: 'lsstg', 2: 'lsstr', 3: 'lssti', 4: 'lsstz', 5: 'lssty' }
</code> <br>
referred to <a href="https://sncosmo.readthedocs.io/en/v1.6.x/bandpass-list.html">this doc</a> <br>
And set zp = 25 ( I tried 5, 25, 40), zpsys = 'ab'. <br>
For microlensing feature, I first found it's useless for gal model. Then I tried for ext-gal model, it worked.</p>
<p>Maybe you have some correlative but more effective features than SALT2 model.</p>
<p>Here is my brief summary that may be different from others: </p>
<hr>
<h2>Feature Engineer</h2>
<ul>
<li><p><strong>Features inspired by Starter Kit</strong> <br>
I constructed features not only between adjacent passbands as the starter kit showed (e.g. between passbands 0 and 1, or 1 and 2), but also between passbands 0 and 2, 0 and 3, and so on. Besides mean flux, I also tried max flux and min flux. Finally, features like <br>
<code>
log((passband_1_max_flux) / (passband_3_max_flux))
</code> <br>
contributed a lot to the score (a minimal sketch of such cross-passband ratio features is shown after this list). </p></li>
<li><p><strong>Features I've tried that were useful</strong> <br>
1) 1st quartile and 3rd quartile flux per passband, and their gap relative to full scale, like <br>
<code>
(passband_0_q3_flux - passband_0_q1_flux) / (passband_0_max_flux - passband_0_min_flux)
</code> <br>
2) 3rd quartile flux per passband multiplied by the square of hostgal_photoz, like <br>
<code>
passband_0_q3_flux * (hostgal_photoz**2) <br>
(passband_0_q3_flux - passband_0_q1_flux) * (hostgal_photoz**2)
</code> <br>
Those features worked well in my model. </p></li>
<li><p><strong>Features from cesium and feets packages</strong> <br>
The period_fast feature per passband from the cesium pkg and the CAR feature per passband from the feets pkg made some contribution to my model.</p></li>
<li><p><strong>Features referred to papers</strong> <br>
<a href="https://arxiv.org/abs/1603.00882">Here is the very important paper I've read and referred to</a>. In this paper, authors introduced several method to extract light curve parameters for classification. I referred to the most effective method of them -- salt2 model, in part 3.2 of the paper. Fortunately, you can use <a href="https://sncosmo.readthedocs.io/en/v1.6.x/examples/plot_lc_fit.html#sphx-glr-examples-plot-lc-fit-py">sncosmo api</a> to fit salt2 model and get model parameters as features. </p></li>
<li><p><strong>Features referred to github</strong> <br>
Bazin function: this method is the same as the one mentioned in the <a href="https://www.kaggle.com/c/PLAsTiCC-2018/discussion/75011">4th place solution</a>, but I found it via this <a href="https://github.com/ramp-kits/supernovae/blob/master/PLAsTiCC_starting_kit.ipynb">link</a>. I tried modifying this method, but the result showed no big difference.</p></li>
<li><p><strong>Features referred to a public kernel</strong> <br>
<a href="https://www.kaggle.com/manugangler/optimal-feature-extraction-for-class-6">This public kernel is very impressive</a>, but only got a few upvotes. Instead of use it as hostgal_photoz == 0 ( author set in the kernel), I use it for ext-gal model with a set hostgal_photoz > 0. It really improved my model. </p></li>
<li><p><strong>Features selection</strong> <br>
In order to remove some inappropriate features, I kept it simple, observing the distributions of some less important features on train and test, using <a href="https://www.analyticsvidhya.com/blog/2017/07/covariate-shift-the-hidden-problem-of-real-world-data-science/">this method</a> I had used before. Since there is class 99 in the test set, it didn't work very well, but it still contributed a little.</p></li>
</ul>
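<p>As a concrete illustration of the cross-passband features in the first bullet above (a sketch under the competition's column names, not the author's code): aggregate flux per object and passband, then take log-ratios of each statistic between every pair of passbands, not just adjacent ones.</p>
<pre><code>import numpy as np
import pandas as pd
from itertools import combinations

def cross_passband_ratios(ts: pd.DataFrame) -> pd.DataFrame:
    """ts: time-series table with object_id, passband, flux columns."""
    agg = ts.groupby(["object_id", "passband"])["flux"].agg(["max", "min", "mean"])
    wide = agg.unstack("passband")            # columns like ('max', 0) ... ('mean', 5)
    feats = pd.DataFrame(index=wide.index)
    for stat in ["max", "min", "mean"]:
        for i, j in combinations(range(6), 2):
            # log of the ratio; non-positive ratios simply become NaN
            with np.errstate(divide="ignore", invalid="ignore"):
                feats[f"log_{stat}_flux_ratio_{i}_{j}"] = np.log(wide[(stat, i)] / wide[(stat, j)])
    return feats
</code></pre>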
<hr>
<h2>LGB algorithm</h2>
<ul>
<li><p><strong>Modify Objective Function</strong> <br>
Mithrillion has published <a href="https://www.kaggle.com/mithrillion/know-your-objective">a very impressive kernel</a>. But you may find it works well, yet not as well as just setting sample weights. So I added sample weights to the grad and hess, which works better than the built-in objective. This results in my final model structure: <br>
<code>
0.55 * (lgb with customized objective) + 0.45 * (lgb with built-in objective)
</code>
separated by gal/ext-gal, which means the final model contains 4 LGBMs.</p></li>
<li><p><strong>Tune Parameters</strong> <br>
I tuned parameters to avoid overfitting: small max_depth: 3, small max_bin: 20, large min_child_weight: 10, large min_data_in_leaf: 35. These parameter settings work well both on CV and LB.</p></li>
</ul>
<hr>
<h2>Final Score</h2>
<p>As my CV got better, there were three things that decreased the gap between CV and LB: 1) adding SALT2 model parameters (improved CV by 0.033, but LB by 0.058); 2) removing inappropriate features; 3) tuning parameters. Finally, I got CV 0.411, public LB 0.794 (gap 0.383), private LB 0.822 (gap 0.411).</p>
<hr>
<h2>What I've tried but failed, you may try it better</h2>
<ul>
<li><strong>Wavelet Decomposition</strong> <br>
Again in <a href="https://arxiv.org/abs/1603.00882">this important paper</a>, part 3.4. Wavelet Decomposition is model-independent compared to salt2 model, but also effective. Three steps: GP -> Wavelet Decomposition -> PCA. I've tried, but failed. You may do it better.</li>
</ul>
<hr>
<h2>Useful papers I didn't tried</h2>
<p>Here are links to what I've found that may be useful <br>
1) <a href="https://arxiv.org/abs/1801.07323">Machine Learning-based Brokers for Real-time Classification of the LSST Alert Stream</a> <br>
2) <a href="https://arxiv.org/abs/1606.07442">Deep Recurrent Neural Networks for Supernovae Classification</a> </p>
<hr>
<h2>Thanks</h2>
<p>As a novice on Kaggle before this competition, I really learned a lot from those who published their impressive kernels, like <a href="https://www.kaggle.com/ogrellier">ogrellier</a>, <a href="https://www.kaggle.com/mithrillion">mithrillion</a>, <a href="https://www.kaggle.com/kyleboone">kyleboone</a>, <a href="https://www.kaggle.com/meaninglesslives">meaninglesslives</a> (this was my first time using Keras to practice NNs). And <a href="https://www.kaggle.com/cpmpml">CPMP</a>, who gives lots of truly useful tips in the discussions; <a href="https://www.kaggle.com/titericz">Giba</a>, whose class weights discussion helped us a lot; <a href="https://www.kaggle.com/sionek">sionek</a>, who released the most famous 'detected_mjd' feature. It's strange that those useful and original kernels have more forks than upvotes, and those truly helpful discussions got only a few upvotes. If you think they are helpful, just upvote. <br>
Besides, there is one unusual thing :), <a href="https://www.kaggle.com/psilogram">Silogram</a> didn't post his famous 'a few notes...' as he did in previous competitions I joined, like 1) <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/58332">Home Credit Default Risk</a>; 2) <a href="https://www.kaggle.com/c/ga-customer-revenue-prediction/discussion/67767">Google Analytics Customer Revenue Prediction</a>. We can learn a lot from his notes, as CPMP did in this competition. <br>
They are the real foundation of the Kaggle community!!</p>
<hr>
<h2>PS</h2>
<p>I began to learn Python and data science only about 2 years ago (I studied mechanical engineering in college). I think I had a lot of luck in this competition; I'm not good enough yet. When I received team-up invitations, I regarded them as an honor and encouragement. Thank you!</p>
<p>Thank you!! You made it clear to me. You're a good teacher!</p>
|
PLAsTiCC Astronomical Classification
|
12th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to Kaggle and LSST teams for holding this fantastic competition. It is also very interesting and made me know a lot more about what mysterious things our astronomers are doing. I am very grateful for the community and learned very much from kagglers' generous kernels and discussions. Congratulations to the incredible <a href="https://www.kaggle.com/kyleboone">@Kyle</a>, all the medal winners, my teammates <a href="https://www.kaggle.com/lucaskg"></a><a href="/lucaskg">@lucaskg</a>, <a href="https://www.kaggle.com/zuoweijian"></a><a href="/zuoweijian">@zuoweijian</a>, <a href="https://www.kaggle.com/xietian6578"></a><a href="/xietian6578">@xietian6578</a>, and especially my teammate <a href="https://www.kaggle.com/strideradu"></a><a href="/strideradu">@strideradu</a> who became a Kaggle Competition Master.</p>
<p>Our final model is an ensemble of LightGBM, XGBoost, and binary classifiers. Final CV=0.3252(with specz)/0.3842(without specz), Public LB=0.814, Private LB=0.826.</p>
<p>Here is a summary of what we did during the last tough several weeks.</p>
<h1><strong>1. Feature Engineering</strong></h1>
<h2><strong>1.1 Frequency non-related features</strong></h2>
<p>We performed feature extraction manually, one feature at a time, inspired by the functions listed on <a href="https://pypi.org/project/FATS/">FATS</a>. I guess they are basically similar to those extracted by packages such as tsfresh and cesium. Around 50-60 kinds of features in total were extracted to characterize each light curve. We did both passband-level and object-level feature extraction.</p>
<h2><strong>1.2 Frequency related features</strong></h2>
<p>We referred to <a href="https://www.kaggle.com/scirpus/lomb-scargle">this kernel</a> published by <a href="https://www.kaggle.com/scirpus">@Scirpus</a>. We evaluated the periodogram and grouped the result into ~20 frequency bins. The amplitude was summed in each frequency bin.</p>
<h2><strong>1.3 Bazin</strong></h2>
<p>My teammate <a href="https://www.kaggle.com/lucaskg"></a><a href="/lucaskg">@lucaskg</a> did the curve fitting, using <a href="https://arxiv.org/pdf/0904.1066.pdf">this paper</a> for reference.</p>
<h2><strong>1.4 Flux adjustment & Features while detected == 1</strong></h2>
<p>We performed feature extractions on different datasets, including:</p>
<ul>
<li><p>original flux features,</p></li>
<li><p>original flux features while detected == 1,</p></li>
<li><p>magnitude1 (flux * specz * specz) features,</p></li>
<li><p>magnitude1 features while detected == 1,</p></li>
<li><p>magnitude2 (flux * photoz * photoz) features,</p></li>
<li><p>magnitude2 features while detected == 1</p></li>
</ul>
<h2><strong>1.5 Feature difference among different passbands or passband groups</strong></h2>
<p>Besides aggregating passband-level features using 'std' and 'mean', we constructed some features that reflect the difference among passbands or passband groups, such as:</p>
<ul>
<li>'flux max passband i' - 'flux max passband j',</li>
<li>('flux mean passband0' + 'flux mean passband1' + 'flux mean passband2') - ('flux mean passband3' + 'flux mean passband4' + 'flux mean passband5')</li>
</ul>
<h2><strong>1.6 Meta Features from passband-level prediction</strong></h2>
<p>Please refer to Part 2.1 below.</p>
<h1><strong>2. Modeling</strong></h1>
<p>We built three models: galactic, extragalactic with specz, and extragalactic without specz.</p>
<h2><strong>2.1 Passband-level modeling</strong></h2>
<p>We ran a LightGBM on passband level. In this model, we only employed those features related to the shape of the light curve, such as flux skewness and frequency features. All features related to the magnitude of the flux, such as flux max and flux mean, are ignored. After we got the passband-level probability prediction, we flattened them as a group of meta-features.</p>
<p>It may be noteworthy that <strong>in the galactic model, we removed all the data with passband == 0</strong>. We found the predictions to be more accurate without it. At least, CV improved by doing that.</p>
<h2><strong>2.2 Object-level modeling</strong></h2>
<p>We applied LightGBM most of the time during the competition and tried XGBoost on the last day. To avoid overfitting, the number of features was restricted to 100 for the galactic model and 140 for the extragalactic model, based on the feature importance obtained by LightGBM. The performance of the single models is as follows.</p>
<ul>
<li>Galactic: LGB CV=0.03149, XGB (no time to run)</li>
<li>Extragalactic with specz: LGB CV=0.48724, XGB CV=0.50902</li>
<li>Extragalactic without specz: LGB CV=0.58838, XGB CV=0.61177</li>
<li>Total: LGB CV=0.3448(with specz)/0.4143(without specz), Public LB=0.831, Private LB=0.847</li>
</ul>
<h2><strong>2.3 Binary Classification</strong></h2>
<p>Besides the multi-class classification models, my teammate <a href="https://www.kaggle.com/lucaskg"></a><a href="/lucaskg">@lucaskg</a> conducted a binary classification on each of extragalactic classes.</p>
<h2><strong>2.4 Ensemble</strong></h2>
<p>I tried to stack LightGBM and XGBoost by allocating them different weights, but CV did not improve at all. I guess the reason could be the high correlation between them. So I turned to using LightGBM to ensemble the LightGBM, XGBoost and binary classification predictions. It turned out to be useful, with CV improved from 0.3448 to 0.3252 (with specz) and 0.4143 to 0.3842 (without specz), the public LB score improved from 0.831 to 0.814, and the private LB score improved from 0.847 to 0.826.</p>
<h1> <strong>3. Class_99 adjustment</strong></h1>
<p>Actually, what we did on class_99 is not worth mentioning because we only got a 0.001 boost on the leaderboard. However, considering we finished just 0.0004 ahead of 13th place (the first of the silver medal winners) on the private LB, it was crucial in helping us achieve the gold medal. </p>
<p>We applied <a href="https://www.kaggle.com/ogrellier"></a><a href="/olivier">@olivier</a>'s method and <a href="https://www.kaggle.com/scirpus">@Scirpus</a>' <a href="https://www.kaggle.com/c/PLAsTiCC-2018/discussion/72104">method</a>, and found the latter is always a little better than the former on public LB. So I tried to interpolate class_99 by setting it to:</p>
<p>class99Prob = class99scirpus + (class99scirpus - class99olivier)</p>
<p>which boosted the LB score by 0.001. Here, we would like to express our sincere thanks and appreciation to <a href="https://www.kaggle.com/ogrellier"></a><a href="/olivier">@olivier</a> and <a href="https://www.kaggle.com/scirpus">@Scirpus</a>, helping us make our gold dream come true.</p>
<h2><strong>Thanks to the Kaggle community and You'll never walk alone!</strong></h2>
<p>We checked the confusion matrix of our passband-level model and found that the accuracy is always better if using only data from the higher passbands. But we didn't have time to try others. </p>
|
PLAsTiCC Astronomical Classification
|
14th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It gave me a gain of around 0.03 on the leaderboard.</p>
<p>Thanks, added a short summary :)</p>
<p>Code and full writeup (see the pdf in the github repo). <a href="https://github.com/btrotta/kaggle-plasticc">https://github.com/btrotta/kaggle-plasticc</a></p>
<p><strong>Summary</strong></p>
<p>My solution is implemented in Python and uses LightGBM gradient boosted classification tree models. It scores 0.84070 on the private leaderboard, and runs in around 6 hours on a 24 GB laptop (including calculating features, training, and prediction). It uses only elementary operations to calculate the features: there's no curve fitting or optimisation, which helps keep the runtime down. Apart from the hints revealed in the forum discussions, my original insights that gave the most improvement in score are: </p>
<ul>
<li>Bayesian approach to removing noise from the flux measurements. Replace the flux with
<code>
(flux / flux_err**2 + flux_mean / flux_std ** 2) / (1 / flux_err**2 + 1 / flux_std **2)
</code>
where <code>flux_mean</code> and <code>flux_std</code> are the mean and standard deviation for the given object and passband (detailed justification for this calculation is in the pdf; a small pandas sketch is shown after this list).</li>
<li>Adding features based on scaled flux values. As well as a lot of features calculated on the raw flux (after removing noise), I also calculated a scaled version of the flux by dividing by the maximum absolute value for each object and passband. I think this helps because it captures the shape of the curve, normalising for magnitude.</li>
<li>Adding features to capture the behaviour around the peak. Since these peaks can occur at any time during the observation period, we need a way to extract the data from these peaks. I did this by finding the "most extreme" minimum and maximum times for each object, defined as the time with detection=1 when one of the object's passbands differs the most from its median value. Then, for each passband, I retrieve the closest flux value before and after this peak time. I do this for both the raw (but de-noised) flux and the max-scaled flux. Also, I calculate the "duration" of the peak in each passband, defined as the period of continuous detections around the peak, and the time difference between the overall peak and the individual passband peaks.</li>
<li>Understanding how to optimise the metric (including for class 99). (Basically you can just optimise normal log loss, then multiply the estimated probabilities by W/class frequency.)</li>
</ul>
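<p>A small pandas sketch of the shrinkage formula in the first bullet (assuming the standard column names from the competition data): the per-object, per-passband mean and standard deviation come from a groupby transform and are combined with each observation's own error, weighting by precision.</p>
<pre><code>import pandas as pd

def denoise_flux(ts: pd.DataFrame) -> pd.Series:
    """Shrink each flux towards its object/passband mean, weighted by precision.

    ts has object_id, passband, flux, flux_err columns."""
    grp = ts.groupby(["object_id", "passband"])["flux"]
    flux_mean = grp.transform("mean")
    flux_std = grp.transform("std").fillna(1.0)   # guard for single-observation groups
    w_obs = 1.0 / ts["flux_err"] ** 2             # precision of the measurement
    w_prior = 1.0 / flux_std ** 2                 # precision of the per-band prior
    return (ts["flux"] * w_obs + flux_mean * w_prior) / (w_obs + w_prior)
</code></pre>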
|
PLAsTiCC Astronomical Classification
|
19th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Final stand</h1>
<ul>
<li>Public LB: 0.838</li>
<li>Private LB: 0.851</li>
</ul>
<h1>Features</h1>
<ul>
<li>aggregate</li>
<li>focused on peak</li>
<li>focused on detected</li>
<li>Luminosity</li>
<li>LC fitter</li>
</ul>
<h1>Models</h1>
<ul>
<li>LGB</li>
<li>XGB</li>
<li>MLP</li>
</ul>
<h1>Post process</h1>
<p>We used the weighted multi log loss (same as the kernel) and the plain multi log loss.
After predicting with the weighted multi log loss, we calculated a weight for each class using <a href="https://github.com/KazukiOnodera/PLAsTiCC-2018/blob/master/py/utils_post.py">gradient descent</a> (the metric being optimized is sketched below).
For the multi log loss, see <a href="https://github.com/KazukiOnodera/PLAsTiCC-2018/blob/master/py/utils.py#L400-L406">here</a>.</p>
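<p>For reference, a minimal numpy version of the kind of weighted multi-class log loss such a per-class weight search optimizes (the <code>class_weights</code> argument is left generic here; the official flat weights are not reproduced):</p>
<pre><code>import numpy as np

def weighted_multi_logloss(y_true, y_pred, class_weights=None, eps=1e-15):
    """Per-class mean log loss, averaged with class weights.

    y_true: (n,) integer class indices 0..K-1; y_pred: (n, K) probabilities."""
    n, k = y_pred.shape
    if class_weights is None:
        class_weights = np.ones(k)
    p = np.clip(y_pred, eps, 1 - eps)
    p = p / p.sum(axis=1, keepdims=True)          # renormalise after clipping
    y_onehot = np.eye(k)[y_true]
    # mean log loss inside each class, then weighted average over classes
    per_class = -(y_onehot * np.log(p)).sum(axis=0) / np.maximum(y_onehot.sum(axis=0), 1)
    return float((class_weights * per_class).sum() / class_weights.sum())
</code></pre>
<p>The post-processing step then searches for per-class multipliers on the predicted probabilities that minimize this value on the out-of-fold predictions.</p>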
<h1>Did not work</h1>
<ul>
<li>AE</li>
<li>Augmentation</li>
</ul>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/441823/10901/model_pipeline_compreessed.png" alt="model pipeline"></p>
|
PLAsTiCC Astronomical Classification
|
20th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks Kaggle and the sponsors for this great, clear and leak free competition :-)</p>
<p>This competition I started working hard in the beginning and practically stopped competing during the last 10 days. My final solution is a stack ensemble and a blend of 4x LGB and 2x SVC models trained using different subsets of features. </p>
<h1>Features</h1>
<p>I used some subsets of around 250 features for the final models, but I'm sure I tried more than 1000 during the competition and dropped the useless ones. My features are built using all kinds of aggregations and statistics, the feets lib, coefficients of linear regressions for each passband, the coefficient of a linear regression after a peak, and custom-made features based on observation of the light curves, for example: rise rate of peaks and valleys, peak ratios of 'time to rise' vs 'time to decrease to X%', etc. (a small sketch of the per-passband regression slopes follows below).</p>
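<p>As an illustration of the per-passband linear-regression coefficients mentioned above (a sketch, not the author's code): fit a first-degree polynomial of flux against time for each object and passband and keep the slope as a feature.</p>
<pre><code>import numpy as np
import pandas as pd

def passband_slopes(ts: pd.DataFrame) -> pd.DataFrame:
    """Slope of a linear fit of flux vs. mjd per object and passband.

    ts has object_id, passband, mjd, flux columns."""
    def slope(g: pd.DataFrame) -> float:
        if len(g) > 1:
            return np.polyfit(g["mjd"].values, g["flux"].values, deg=1)[0]
        return np.nan

    out = ts.groupby(["object_id", "passband"]).apply(slope)
    return out.unstack("passband").add_prefix("flux_slope_pb")
</code></pre>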
<h1>Models</h1>
<p>For each model fit I augmented the train set by concatenating noisy versions of the train set. That simple augmentation improved my score by about 0.04 and decreased the CV x LB gap.</p>
<p>Model 1: multi-class LGB trained using all 250 features and all whole set.</p>
<p>Model 2: multi-class LGB trained using a subset of the 250 features, selected via feature selection, using whole train set.</p>
<p>Model 3: multi-class LGB trained using another subset of the 250 features, selected via other feature selection algorithm, using whole train set.</p>
<p>Model 4: multi-label LGB trained using a subset of the 250 features, selected via feature selection for each target individually, using whole train set.</p>
<p>Model 5: multi-label SVC using rbf kernel trained using all 250 features and whole train set.</p>
<p>Model 6: multi-class LGB trained using all 250 features. Fit one model for intra galaxy and other for extra galaxy.</p>
<p>Model 7: multi-label LGB trained using a subset of the 250 features, selected via feature selection for each target individually. Fit one model for ddf==0 and other for ddf==1.</p>
<p>Stack ensemble 1, multi-class, models 1,2,3,4,5.</p>
<p>Stack ensemble 2, multi-class, model 6.</p>
<p>Stack ensemble 3, multi-class, model 7.</p>
<p>Blend Stack Ensemble 1,2,3 using simple geometric weighted average.</p>
<p>For class 99 I ended up using Scirpus' equation (thanks @scirpus) plus some minor LB probing. </p>
<h1>What didn't work for me:</h1>
<ul>
<li><p>FFT, wavelet and periodogram features.</p></li>
<li><p>DAE</p></li>
<li><p>Curve reconstruction</p></li>
<li><p>Fit a model for hostgal_specz using train+test</p></li>
<li><p>CNN models ( metric > 1.3 )</p></li>
<li><p>Target encoding</p></li>
<li><p>Flux correction by redshift</p></li>
<li><p>Flux passband correction using redshift</p></li>
<li><p>Semi-supervised Learning (improved just 0.0001)</p></li>
<li><p>SMOTE</p></li>
</ul>
<p>Giba</p>
<p>You're right, I missed a good NN model.</p>
<p>I trained the SVC as a multi-label task, so its score isn't directly comparable. But the rbf kernel performed very well in terms of AUC and ACC, SVMs are very robust against outliers, and it added around 0.01 to the final blend.</p>
<p>I used RFECV and another one I coded myself.</p>
|
PLAsTiCC Astronomical Classification
|
2nd-Place Solution Notes
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My theory is that the simulator starts with idealized curves for each class and then adds noise to mimic the data they'll actually get. I think the detected flag was set before the addition of at least part of the noise, so it's really a 'leak'. But that's just a theory. I would love to get some confirmation from the organizers.</p>
<p>I was about to answer this but realized I better wait for Mike to respond because he built the NN models and understands them much better than me. Unfortunately he's traveling now and won't be able to post a description of his amazing solution until next week. </p>
<p>How does flux_err work in this formula? The detected flag is very highly correlated with abs(flux)/flux_err>5.</p>
<p>And what does an 'undetected' observation mean? The object could not be detected from the background noise, and yet we still have flux data for that observation? Am I the only one confused by this?</p>
<p>If this were true, then (per object/passband) the detected flag would depend solely on the flux values since the reference template per object is static, yes? So how would you explain the following records?</p>
<pre><code>
object_id passband flux flux_err detected
713 0 -14.735178 2.326417 0
713 0 -13.083604 2.663738 0
713 0 -12.353376 2.357691 1
713 0 -12.232555 1.708795 0
713 0 -12.148479 2.243120 0
713 0 -11.829331 2.358846 0
713 0 -11.605895 1.778605 1
713 0 -11.340659 1.930082 1
713 0 -10.934606 2.143276 1
713 0 -10.828177 1.470152 1
713 0 -10.602926 1.838902 1
713 0 -10.165054 1.726118 1
713 0 -10.050170 3.275514 0
713 0 -9.363182 3.042286 0
713 0 -9.289350 1.992813 1
</code></pre>
<p>First, thanks to Kaggle and the PLAsTiCC organizers for presenting such an interesting challenge, and congratulations to all the top teams, especially Kyle with his amazing performance.</p>
<p>I haven't looked at all the other solution write-ups yet but there were clearly a lot of different ways to tackle this problem effectively. For us, NNs worked much better than LGB models. Our final ensemble included 9 models, 7 of which were NNs that scored as low as 0.75x individually. The two LGB models were much weaker, each scoring about 0.90x, but they did provide a bit of diversity to the ensemble.</p>
<p>My teammate Mike built the NN models so I'll let him describe them separately. What follows is some more general notes about our overall solution:</p>
<ol>
<li><p>The 'Gap'. There was quite a bit of chatter about lowering the gap between CV and LB scores, most of which we ignored because our LB scores were tracking our CV scores very consistently, with a gap of 0.43-0.44. Our final solution, which scored 0.694 on the LB had a CV score of 0.264 (0.430 gap). In hindsight, it may have been a mistake not to look more closely at the gap. We really didn't pay much attention to the differences between the train and test sets until the final few days of the competition.</p></li>
<li><p>'Class_99'. Especially in the final week, we did quite a bit of LB probing to come up with a better way to predict Class_99. In the end we settled on the following algorithm (a tiny sketch is shown after this list):
a) Set Class_99 to 1 - top_prediction per row
b) Move class_99 closer to 0.14 (for extra-galactic objects) and 0.014 (for galactic objects) with the following formula: class_99 = (2*class_99 + 0.14) / 3
This improved our scores modestly (about 0.005). </p></li>
<li><p>Ensemble. The final ensemble was a very shallow (max_depth=2) LGB model that was very effective. </p></li>
<li><p>'Detected' Flag. I still don't understand what this means or how it was calculated, but it's clearly very significant. As Kyle pointed out in a post, it's roughly set when the absolute flux value is 5 times greater than the flux error (the documentation says +- 3 sigmas, which doesn't make much sense to me), but there are many exceptions to this rule. The only thing I can think of is that the flag was set before some random jitter was applied to the flux values. If anyone knows more about this, please tell me because I devoted several days trying to reverse-engineer this feature with no success. That said, I did find that this ratio -- absolute flux/flux_error -- to be very significant in my models. One of the LGB models generates features for 4 different sets of observations -- all, detected only, flux error ratio >3, flux error ratio > 4. Because the NN model was able to continuously process the flux and flux error values in parallel, this relationship did not need to be manually specified, one reason I suspect that the NN model performed so well.</p></li>
<li><p>Hostgal_specz pseudo-labeling. One thing that worked for both the NN and LGB models was to build a separate model to predict the hostgal_specz values (using both the training set and the test set objects that have this value), and then using oof predictions for these values in our models.</p></li>
<li><p>Flux adjustments. For the LGB model, I found that adjusting the flux values with the simple formula flux *= hostgal_photoz improved the results. All efforts to normalize flux values by object and/or passband did not help. In the NN models, on the other hand, Mike found a very effective normalization strategy.</p></li>
<li><p>MJD adjustments. Adjustments to MJD based on redshift and wavelength did not help the LGB model.</p></li>
<li><p>Augmentation. We tried a variety of different augmentation methods. Augmenting the training set with random variations helped the NN models but did not help the LGB models. We also tried augmentation with pseudo-labeled test objects but this did not help. As I mentioned above, we weren't really focused on the differences between the train and test sets. In the final days of the competition, I realized that we should be augmenting in such a way that the train distribution would more closely match the test distribution, but we didn't have time to try this idea.</p></li>
</ol>
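<p>In code, the class_99 rule from point 2 is only a few lines. This is a sketch with dummy inputs, not the team's script; the 0.014 shrinkage target for galactic objects and the final renormalisation of the known-class probabilities are assumptions based on the text.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
preds = rng.dirichlet(np.ones(14), size=5)         # dummy (n_objects, 14) probabilities
is_gal = np.array([True, False, False, True, False])

class_99 = 1.0 - preds.max(axis=1)                 # a) 1 - top prediction per row
prior = np.where(is_gal, 0.014, 0.14)              # galactic vs extragalactic target
class_99 = (2.0 * class_99 + prior) / 3.0          # b) pull class_99 towards the target

# Rescale the known-class probabilities so each row sums to 1 again
# (one common choice; the post does not specify this step).
preds = preds * (1.0 - class_99)[:, None]
submission = np.column_stack([preds, class_99])
</code></pre>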
|
PLAsTiCC Astronomical Classification
|
3rd Place Part II
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>yes, right.</p>
<p>Thanks, Kyle. Actually, I was able to use more than 5000 features without overfitting problem. The only reason I used 2000 features is inference time and loading time for the huge test set.</p>
<p>no, some features are calculated on adjusted flux, others not. </p>
<p><code>
I also tried abs_curve_angle_features in total curve, per bandpass and concentrated around maximum, but they only helped my model overfit to training set in all cases
</code>
It may be related to your model. Actually, my LightGBM model doesn't work at all in this competition; it overfits to the training set very fast.</p>
<p>I remember feets gave a small improvement for me. (but maybe it's not significant)
Yes, actually I might have been in 5th place if we had not succeeded in class99 probing, because our best private LB score without class99 probing is 0.758, which is worse than your team's.</p>
<p>Thank you to my brilliant team, all the teams who competed with us, and everyone who participated in this competition. Congrats Kyle Boone, who won this competition with an incredible performance! <br><br>
I'm happy because yuval and I became Kaggle Experts with 2 gold medals in this competition, and we finally became prize winners :) <br><br>
It's my second Kaggle competition and it was fun to compete with AhmetErdem and CPMP again, whom I competed with in my previous competition, TalkingData AdTracking Fraud Detection. Here, I won't explain all of my team's solution; please take a look at <a href="https://www.kaggle.com/yuval6967" target="_blank">@yuval6967</a> and <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>'s solutions if you want to see what my teammates have done. <br><br>
<code>yuval's solution</code> : <a href="https://www.kaggle.com/c/PLAsTiCC-2018/discussion/75116" target="_blank">https://www.kaggle.com/c/PLAsTiCC-2018/discussion/75116</a><br>
<code>nyanp's solution</code> : <a href="https://www.kaggle.com/c/PLAsTiCC-2018/discussion/75222" target="_blank">https://www.kaggle.com/c/PLAsTiCC-2018/discussion/75222</a><br>
<code>My github for the host</code>: <a href="https://github.com/takashioya/plasticc" target="_blank">https://github.com/takashioya/plasticc</a></p>
<h1>My Model</h1>
<p>My model is based on CatBoost, with a lot of feature engineering. <br>
I made 3 types of models.<br>
(1) galactic model, with about 1000 features.<br>
(2) extra-galactic model, with about 2000 features. <br>
(3) extra-galactic model with hostgal_specz, with about 2000 features. It gave a 0.004 improvement.<br>
Then I trained 7 * 3 models with different feature sets and took a simple average of them, which scored 0.806 on the public LB, while my best single model scored 0.823. <br> <br>
You may think the number of features in my model is too big, but using a small number of features degraded the score. TBH I prefer to use many features, because from my experience large feature sets lead to bigger improvements in ensembling. </p>
<h1>My Features</h1>
<p>Actually, I used nyanp's features in my model and some of them have much bigger importance than mine (especially the SALT2 features). However, some of my features still have high importance, so I'll explain a bit about them. </p>
<ul>
<li><code>features related to mjd</code>: <br>
something like 'variance of mjd where detected == 1' worked well.</li>
<li><code>abs_curve_angle_features</code>: <br>
by calculating np.arctan(flux_diff/mjd_diff), we can compute the 'angle' of the light curve. <br>
I calculated many statistics from this (see the sketch after this list).</li>
<li><code>feets features</code>: <br>
I used <a href="https://github.com/carpyncho/feets" target="_blank">https://github.com/carpyncho/feets</a>, which was effective.</li>
<li><code>gaussian process features</code>: <br>
It worked better than the features like 'days_from_peak_n_percent_flux'. <br>
For the Gaussian process I used GPy, but it's too slow and I <br>
had to use 30 instances for feature extraction.</li>
<li><code>shifted_flux * distance ** 2 features</code>: <br>
I calculated shifted_flux = flux - flux.min() if flux.min() > 0 per each object_id and passband, which worked better than just calculating <code>flux * distance ** 2</code></li>
</ul>
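<p>A short sketch of the curve-angle idea referenced in the list above (column names as in the competition data): compute arctan(dflux/dmjd) between consecutive observations per object and passband, then aggregate statistics of the absolute angle.</p>
<pre><code>import numpy as np
import pandas as pd

def curve_angle_stats(ts: pd.DataFrame) -> pd.DataFrame:
    """Angle of consecutive light-curve segments, aggregated per object and passband.

    ts has object_id, passband, mjd, flux columns."""
    ts = ts.sort_values(["object_id", "passband", "mjd"]).copy()
    grp = ts.groupby(["object_id", "passband"])
    angle = np.arctan(grp["flux"].diff() / grp["mjd"].diff())
    ts["abs_angle"] = angle.abs()
    stats = ts.groupby(["object_id", "passband"])["abs_angle"].agg(["mean", "std", "max"])
    return stats.unstack("passband")
</code></pre>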
<h1>Ensemble</h1>
<p>I used a weighted average for the ensemble; however, I gave different weights to each class, which gave us a 0.006 improvement. I calculated the weights from the OOF predictions using hyperopt. <br><br>
By taking a weighted average of mamas's CatBoost (0.806) and nyanp's LightGBM (0.834), we got 0.780 on the public LB. Then, in the same way, we calculated a weighted average of the 0.780 model and yuval's NN (0.791) and got 0.737.</p>
<h1>probing class99</h1>
<p>Actually, we had done nothing special about class99 until 7 days before the end of the competition. <br>
We noticed class99 probing is very effective 6 days before the end and spent 5 days probing the class99 distribution. <br>
Finally, we got a 0.057 improvement by probing class99, which took us into the prize zone (0.680 on the public LB). <br><br>
For extra-galactic, we used this special formula. <br>
<code>c99 = 1.65 * (c42 + c52 + c62 + c95) ** 2.5 * (1 - c62 - c95) ** 0.635</code> <br><br>
For galactic, we did nothing special, just a normal method like this. <br>
<code>c99 = 0.2 * (1 - c42) * (1 - c52) * .... * (1 - c90)</code><br><br>
Here, I'll explain how we found this special formula.<br><br>
First, we started with the normal method like this.<br>
<code>c99 = 0.9 * (1 - c42) * (1 - c52) * .... * (1 - c90)</code> <br><br>
Then, we noticed this way works better, which means c90 is not similar to c99 at all. <br>
<code>c99 = 0.9 * (1 - c42) * (1 - c52) * .... * (1 - c90) ** 2</code> <br><br>
Then, we also tried changing power for other classes and got this formula. (We spent about 19 subs for this step) <br>
<code>powers = {c42 : 0, c52 : 0, c62 : 0.5, c95 : 1, c90 : 2, c88 : 2, c67 : 2, c64 : 2, c15 : 2}</code><br>
<code>c99 = np.prod([(1 - pred[c]) ** powers[c] for c in classes])</code> <br> <br>
This formula means c99 is very similar to c42 and c52, and somewhat similar to c62 and c95. Actually, this formula already improved our score by 0.049, and it was the last day of the competition, but we didn't give up and finally found the problem with this formula. <br> <br>
For an object whose <code>c90 = 0.5</code> and <code>c88 = 0.5</code>, we thought we should assign <code>c99 = 0</code>.</p>
<p>However, this formula assigns <code>c99 = 0.5 ** 2 * 0.5 ** 2 = 0.0625</code>, which looked too high to us. So we thought we should do something like <code>c99 *= (1 - c90 - c88)</code> and finally found that <br>
<code>c99 = 1.65 * (1 - c90 - c88 - c67 - c64 - c15) ** 2 * (1 - c62 - c95) ** 0.635</code> <br>
<code>= 1.65 * (c42 + c52 + c62 + c95) ** 2 * (1 - c62 - c95) ** 0.635</code> <br>
is a very good approximation of the formula. <br><br>
Actually, it was our last submission in this competition and we couldn't tune the coefficients at all. <br></p>
<h1>what didn't work for me</h1>
<p>Some of the experiments that didn't work (not all):<br></p>
<ul>
<li><code>Band-correction using hostgal_photoz</code><br></li>
<li><code>target encoding using ra, decl</code><br></li>
<li><code>using adversarial validation for feature selection</code><br></li>
<li><code>stacking using GBDT as meta model</code><br></li>
<li><code>UNet-like Autoencoder for feature extraction</code><br></li>
<li><code>make binary classification model and use oof-predictions as features</code><br></li>
</ul>
<h1>what worked</h1>
<ul>
<li><code>class90 pseudo-labelling</code>: <br>
It gave a 0.016 improvement for me.<br>
<br></li>
<li><code>adversarial pseudo-labelling</code>: <br>
I selected the objects whose predictions were very confident and similar to the train data and used them for training. It didn't give a significant improvement for me, but I suppose it somewhat contributed to diversity in the ensemble. </li>
</ul>
<h1>Comments</h1>
<p>Actually, I haven't figured out why we dropped from 2nd to 3rd on the private LB. I'm wondering whether it was just luck or not.</p>
<p>As for the competition design, I think having 'class99' is not so good, because it leads to LB probing and is not very practical. However, on the whole, I think this competition is really interesting and one of the most successful competitions on Kaggle. I'm especially impressed with yuval's beautiful and practical NN modelling :)</p>
<p>Anyway, I really enjoyed my second Kaggle competition. I'll continue with Kaggle and want to become the winner in the next competition :) </p>
<h1>Thanks all, see you again!</h1>
<p>If you are right, we would've been in 4th place if we hadn't tried adversarial pseudo-labelling. That's terrible!</p>
<p>Thanks yuval, I didn't know you succeeded in adversarial pseudo-labelling. </p>
<p><code>The only one which wasn't improved was the one with mamas's best features</code></p>
<p>It may be because these features are used in my adversarial model.</p>
<p>I think it's not only because of CatBoost; it is very sensitive to the features we use. These are what we tried.</p>
<p><code>lgb only with my features</code>: didn't work at all. I remember it was much worse than CatBoost by more than 0.05.</p>
<p><code>lgb only with nyanp features</code>: it somewhat worked, scoring 0.834 on the LB. (you can see the details in nyanp's solution)</p>
<p><code>lgb with nyanp and mamas features</code>: didn't work at all.</p>
<p><code>catboost only with my features</code>: actually I haven't checked it for a long time, but I remember it worked.</p>
<p><code>catboost with my features and nyanp features</code>: it worked well, scoring 0.806 on the LB as I said. </p>
<p><code>XGBoost only with my features</code>: it's not as bad as LGB, but works worse than CatBoost.</p>
|
PLAsTiCC Astronomical Classification
|
4th Place Solution with Github Repo
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats mamas, I really enjoyed competing again with you too. Next time, let's team up instead of competing against each other, otherwise we may become archenemies:) </p>
<p>Thanks and congrats too. So adversarial validation is done to understand how different train and test are, and the predictions of the adversarial model can be interpreted as the likelihood of being in the test set; therefore, setting sample weights using these likelihoods will make your train set more representative of the test set. In my case the adversarial model was extremely simple, just hostgal_photoz ratios, and I weighted train samples accordingly, then trained my models and directly used the predictions. If you worry about unbalancing the classes while weighting, you may check on my repo how I do it without ruining the balance.</p>
<p>Now my repo has the full solution. The things that make the code longer but give little improvement were discarded for better readability. This refactored code is not tested yet, but will be tested soon. Please don't assume that it is mistake-free.</p>
<p>First of all, congrats to the prize winners and all the medal winners. During the competition, I used the name Sisyphus because going up and down the Kaggle leaderboard was normally a daily routine for me. But in this competition, I was the most stable Sisyphus ever till the last 3 days, staying in my comfortable 5th place. Then I saw the post from CPMP about their single model scoring 0.750 and I gave up because my blend was barely scoring that much. Over the weekend, I went more experimental with this recklessness and it took me to 3rd. Then I landed in 4th position on the private LB.</p>
<p>My final score is a blend of LGB, NN and several stacking models. I will summarize my solution shortly and will most probably edit this later with more details. I will put <strong>(G)</strong> on the things that you can find in my Github repo. For now, my repo doesn't have the full solution but I will complete it soon:
<a href="https://github.com/aerdem4/kaggle-plasticc">https://github.com/aerdem4/kaggle-plasticc</a></p>
<p><strong>What didn't work for me?</strong></p>
<p><strong>Autoencoders</strong>: Tried to encode the light curves. The generated vectors didn't help, either as features or for clustering.</p>
<p><strong>Data augmentation using test set</strong>: Tried to augment the data with the samples from the test set using the pseudo-labels. Improved CV, worsened LB.</p>
<p><strong>Remove background effect</strong>: It seems background subtraction was adding a bit of noise to the light curves. I tried to normalize them as if they all had a black background, but couldn't find a way to do it.</p>
<p><strong>What kinda worked?</strong></p>
<p><strong>Probing class99</strong>: I assumed that class99 is similar to certain classes. By probing, I ended up with the information that it is not similar to classes 15, 64, 67, 88, 90 as predicted by my model. So I gave a different constant to class99 depending on my prediction's highest class.</p>
<p><strong>Adversarial Validation & Weighting</strong>: Since train/test sets were obviously very different, I tried Adversarial Validation. Using all the features, it was very easy to distinguish train and test, so it didn't work for me. Therefore, I only used hostgal_photoz and ddf for weighting the samples. With these sample weights, my score improved around 0.02 and my CV was always higher (worse) than the ones reported on the forum. <strong>(G)</strong></p>
<p><strong>What really worked?</strong></p>
<p><strong>Ratio features</strong>: I guess most of you did this. Instead of using raw features from each passband, I used their ratio over all passbands.</p>
<p><strong>Stacking</strong>: Even a simple Logistic Regression model on top of my LGB model's predictions (confusion matrix) improved my score by around 0.04. <strong>(G)</strong> Later, I did more stacking (a minimal sketch of this kind of stacking follows).</p>
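<p>A minimal sketch of this kind of stacking (generic, not the repo's exact code; the input names are assumed): fit a logistic regression on the out-of-fold class probabilities of the first-level model and apply it to the test probabilities.</p>
<pre><code>from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict

def stack_with_logreg(oof_probs, y, test_probs):
    """oof_probs: (n, K) out-of-fold probabilities of the base model; y: labels."""
    meta = LogisticRegression(max_iter=1000)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    # out-of-fold predictions of the meta model, useful to estimate the stacked CV
    oof_meta = cross_val_predict(meta, oof_probs, y, cv=cv, method="predict_proba")
    meta.fit(oof_probs, y)
    return oof_meta, meta.predict_proba(test_probs)
</code></pre>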
<p><strong>Hostgal_specz model</strong>: I trained a model to predict hostgal_specz using the training set plus the test set objects that have hostgal_specz. Then I used this model's predictions as a feature.</p>
<p><strong>Using normal values and log transformed values together on Neural Net</strong>: Having both gives you the opportunity to do all four operations between the features (+, -, / *) because you can write log(xy) as log(x) + log(y). I guess hostgal_photoz was interacting with most of the features that I have.</p>
<p><strong>Bazin</strong>: This is a light curve fit method that can be found at: <a href="https://github.com/COINtoolbox/ActSNClass/blob/master/examples/1_fit_LC/fit_lc_parametric.py">https://github.com/COINtoolbox/ActSNClass/blob/master/examples/1_fit_LC/fit_lc_parametric.py</a> I changed it significantly and it gave me a very big improvement. <strong>(G)</strong></p>
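<p>For reference, a hedged sketch of the standard Bazin parametrisation from the linked script (sign conventions and initial guesses vary between implementations, and this is not my modified version):</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit

def bazin(t, a, b, t0, tau_fall, tau_rise):
    # f(t) = A * exp(-(t - t0)/tau_fall) / (1 + exp(-(t - t0)/tau_rise)) + B
    return a * np.exp(-(t - t0) / tau_fall) / (1 + np.exp(-(t - t0) / tau_rise)) + b

def fit_bazin(time, flux):
    guess = [flux.max(), 0.0, time[np.argmax(flux)], 40.0, 5.0]
    try:
        params, _ = curve_fit(bazin, time, flux, p0=guess, maxfev=5000)
        rmse = np.sqrt(np.mean((bazin(time, *params) - flux) ** 2))
        return list(params) + [rmse]   # fit parameters + fit error, used as features
    except (RuntimeError, ValueError):
        return [np.nan] * 6
</code></pre>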
<p><strong>Log Ensemble</strong>: Instead of averaging the predictions, I averaged the logarithm of the predictions, because in the end we try to regress the log values. This worked better for me. <strong>(G)</strong></p>
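<p>A minimal sketch of the log ensemble (equivalent to a per-row geometric mean, renormalised to sum to 1):</p>
<pre><code>import numpy as np

def log_ensemble(list_of_preds, eps=1e-15):
    logs = [np.log(np.clip(p, eps, 1)) for p in list_of_preds]
    blended = np.exp(np.mean(logs, axis=0))
    return blended / blended.sum(axis=1, keepdims=True)
</code></pre>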
<p>Thanks! I have used the optimal fit parameters and fit_error as features on my models.</p>
<p>Thanks and congrats to you too. Impressive to see that numba makes it 2 times faster, I will update it on my repo, thanks a lot.</p>
<p>Thanks and you are welcome. Let's see if I can manage to be a GM:)</p>
<p>Thanks and congrats to you too. Compared to giving constant (anything over 0.9 prob -> 0.02, gal -> 0.02, exgal -> 0.018), class99 probing improved my score around 0.03. Hostgalspecz model improved my NN CV by 0.01 but resulted on 0.02 improvement on LB. For LGB, it didn't improve my CV but LB is improved by 0.01. Maybe my models were overfitting to hostgalphotoz and it just prevented it.</p>
<p>Thanks a lot! My hostgal specz model improved my CV around 0.005 but LB around 0.015.</p>
<p>Thanks, congrats to you too. I have just read your solution. Really amazed by how you trained your NN on pseudo-labeled test set. Really good idea!</p>
|
PLAsTiCC Astronomical Classification
|
5th Place Partial Solution (RNN)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to you too!
For classification, I came up with it from the experience in the past competition involving NLPs.. lol
For autoencoder, I referenced the following for the decoder part:
1. Paper: Effective Approaches to Attention-based Neural Machine Translation's Global Attention with Dot-based scoring function
2. Someone's code <a href="https://github.com/wanasit/katakana/blob/master/notebooks/Attention-based%20Sequence-to-Sequence%20in%20Keras.ipynb">https://github.com/wanasit/katakana/blob/master/notebooks/Attention-based%20Sequence-to-Sequence%20in%20Keras.ipynb</a>
3. <a href="/cpmpml">@cpmpml</a>'s post: <a href="https://www.kaggle.com/c/PLAsTiCC-2018/discussion/71949">https://www.kaggle.com/c/PLAsTiCC-2018/discussion/71949</a>, which contains a link to github + paper as well. P.S. I noticed 6th place cloned this in gitlab 2 months ago...</p>
<p>Thanks~ I did notice the data is not sampled uniformly in time...
So what I did was
1. Sorted the observations by mjd
2. Grouped by object id and calculated the difference of mjd (mjd-difference)
3. Added the mjd-difference as one of the dimensions of the per-timestamp input vector in the time-series branch.</p>
<p>In my understanding, this should provide the gap information to the RNN.</p>
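<p>A minimal pandas sketch of steps 1-3, where <code>df</code> is the per-observation light-curve table:</p>
<pre><code>df = df.sort_values(["object_id", "mjd"])
df["mjd_diff"] = df.groupby("object_id")["mjd"].diff().fillna(0)
# mjd_diff is then appended to the per-timestep input vector of the time-series branch
</code></pre>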
<p>A very interesting thing I also tried is an autoencoder, where I trained it with the original mjd-diff and passbands... and tested it with a different mjd-difference (uniform this time) and passbands; seemingly it could reconstruct how the data should look according to the given information. Maybe this shows the RNN could digest the given mjd diff and passbands to construct the light curves. Unfortunately, the learned latent vectors / reconstructed curves did not help us get a better model :( (We did not apply what you've done to get lots of training data!).
May I ask how you figured out the RNN struggled with gaps in the data? From the reconstructed curves in the autoencoder as well?</p>
<p>Thanks, you are an amazing teammate to us too! Really well-deserved as a discussion GM, not only for the information shared on the discussion channel; as teammates, we got lots of useful suggestion from you~</p>
<p>First of all, thanks to PLAsTiCC for holding such an interesting competition.
Thanks for the initial probing in the post by <a href="/titericz">@titericz</a> and others, which saved us all some submissions :D
Thanks to my teammates for the nice cooperation, we've learned a lot from each other!
Here is the solution for my parts; my teammates might share theirs too.
I'll update a github repo later.</p>
<p><strong>Basic Solution</strong></p>
<p>Basically, my part involves building the RNN model, since I've already got teammates proficient in lgb :D</p>
<ol>
<li>For each timestamp, encode with vector composed of flux + flux_err + one-hot encoded passband + mjd-diff</li>
<li>Manual RNN architecture search (GRU/LSTM, several RNN layers, dropout, attention, Conv1D with avg/max pooling):
<ul><li>Training set is small, this is a quick trial for me.</li>
<li>Best RNN Architecture: Time-Series Branch + Meta Branch.</li>
<li>Time-Series Branch: 64-bigru + Spatial Dropout (Prob=0.1) + 64-bigru + Attention.</li>
<li>Meta Branch: Simple Fully Connected NN.</li></ul></li>
<li>Add Gaussian Noise to have run-time augmentation, so that training won't fit to train set too quickly.</li>
<li>OOF predictions from an RNN with the time-series branch only helped <a href="/cpmpml">@cpmpml</a> and <a href="/marcuslin">@marcuslin</a> build better lgb models</li>
<li>Single Model Scores.
<ul><li>Based on raw features only: Private 0.93560, Public: 0.91956, CV: 0.585</li>
<li>With features from <a href="/marcuslin">@marcuslin</a>: Private 0.85371, Public: 0.84372, CV: 0.474</li></ul></li>
<li>How the model looks like: see appendix.</li>
</ol>
<p><strong>What I've tried\researched ?</strong></p>
<ol>
<li>RNN autoencoder:
<ul><li>The latent vectors and reconstructed light curves (resampled at different times) did not help in lgb modeling.</li></ul></li>
<li>Research on open-set classification / unknown class detection problems:
<ul><li><a href="https://arxiv.org/abs/1610.02136">https://arxiv.org/abs/1610.02136</a>: Use the max softmax probability for class99. Did not work for me. Olivier's approach still makes more sense to me and performs better.</li>
<li><a href="https://openreview.net/forum?id=ryiAv2xAZ">https://openreview.net/forum?id=ryiAv2xAZ</a>: Use a GAN to generate adversarial samples to calibrate the probability. For adversarial samples, make the confidence predicted for the known classes the same => maximize (1-p0) * ... * (1-p13) => maximize the probability of class 99. Seems like an interesting idea to me, but I did not try it in the end; I don't think I could have coded a GAN with an RNN in time.</li>
<li><a href="https://arxiv.org/abs/1704.03976">https://arxiv.org/abs/1704.03976</a>: Virtual Adversarial Training. Uses some tricks in the loss to calibrate the confidence prediction. I was still halfway through modifying the loss...</li></ul></li>
</ol>
<p><strong>What did not work for me ?</strong></p>
<ol>
<li>Adversarial classification: predict the probability that each sample is similar to the test set, then use that probability as the weight for the training set.</li>
<li>CNN: Conv1d, Conv2d, dilated CNN...</li>
<li>Using the RNN autoencoder to pretrain time-series branch, and use the pretrained weights as initial weights with warmup for classification model training.</li>
</ol>
<p>Thanks!</p>
|
PLAsTiCC Astronomical Classification
|
6th Place Solution Summary
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you all! I used PyTorch and skorch for this competition. The code needs some serious efforts before becoming publishable and usable for others so not sure whether and when I will be able to do this.</p>
<p>6th place solution is a neural network consisting of 3 components:</p>
<ol>
<li><p>Meta Encoder - taking as input meta features hostgal photoz, hostgal photoz err, is galactic and summary stats for flux and flux_err as min/max/mean etc.</p></li>
<li><p>Light Curve Encoder - a bidirectional GRU taking as input flux, flux_err, detected and the time difference, grouped by day</p></li>
<li><p>Spectroscopic Redshift Predictor – two fully-connected layers for predicting hostgal specz</p></li>
</ol>
<p>Outputs of these 3 components are fed into two last fully-connected layers for predicting class probabilities.</p>
<p>Light curve encoder was pre-trained as autoencoder on test data. Spectroscopic redshift predictor was also pre-trained on test data objects for which hostgal specz is available.</p>
<p>Augmentations. Three types of augmentations are performed:</p>
<ol>
<li>flux as a Gaussian with standard deviation flux_err</li>
<li>hostgal photoz as a Gaussian with standard deviation hostgal photoz err</li>
<li>randomly dropping observations</li>
</ol>
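<p>A hedged sketch of these three augmentations applied per training object (the drop probability is an assumed value, and the details here are illustrative rather than the exact implementation):</p>
<pre><code>import numpy as np

def augment(flux, flux_err, photoz, photoz_err, drop_prob=0.1, rng=np.random):
    # 1. resample flux within its reported uncertainty
    flux_aug = flux + rng.normal(0.0, flux_err)
    # 2. jitter the photometric redshift within its error
    photoz_aug = photoz + rng.normal(0.0, photoz_err)
    # 3. randomly drop observations
    keep = rng.rand(len(flux)) > drop_prob
    return flux_aug[keep], photoz_aug, keep
</code></pre>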
<p>Top submission is an average of 5 cross-validation runs of 3 neural network variations. </p>
|
PLAsTiCC Astronomical Classification
|
9th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="/yangddd">@yangddd</a>, sorry for the late reply</p>
<p>I iteratively used lgb feature importance assessment. More precisely, I started by running lgb with all features and afterwards selecting the top 50% best ranked features. Then I restarted the process with these selected features, and so on. I stopped approximately when my CV score stopped improving and when I suspected/verified that the improvement would not generalize to the LB</p>
<p>Thanks to everybody for this competition. Besides enjoying it greatly, it has been extremely instructive for me. Thanks also to my teammates <a href="https://www.kaggle.com/meaninglesslives">Siddhartha</a> and <a href="https://www.kaggle.com/naka2ka">Ynktk</a> for their great spirit, hard work, and ideas. Finally, a big thanks to those who have shared their kernels and their thoughts in the discussion board throughout the competition. The open flow of ideas enriched the competition invaluably.</p>
<p>Our solution relied primarily on careful feature engineering and ensembling. Below I present an overview of it. </p>
<p><strong>Stacking</strong>
Our best submissions were obtained by stacking the predictions of different lgb, catboost, and nn models, and then taking a weighted average with our previous best submission. This last step improved the score by 0.02 with respect to submitting the meta-model predictions alone (for our last submissions).</p>
<p>As meta-model we initially used a shallow lgbm with depth 1 and learning rate 0.01. At the end of the competition we increased the depth to 3. </p>
<p><strong>First-level models</strong>
My teammate Siddhartha focused mostly on a densely connected CNN model. You can read more about his ideas <a href="https://www.kaggle.com/meaninglesslives/a-slightly-better-nn-arch-and-some-tricks">here</a>. Ynktk created a variety of nn, xgb, lgb and catboost models. He will add some notes on these later in this thread. We also experimented a little with regularized greedy forests but they did not add much to the ensemble. </p>
<p>Personally I focused on a single first-level lgb model. This was split into two: one was trained on galactic objects, and the other on extragalactic ones. Each training set used different features. This was the model with best lb score (0.863) of our ensemble as far as we know.</p>
<p><strong>Features</strong>
I engineered around 8000 features and used lgb importance to select the top 80 for the galactic set and the top 130 for the extragalactic set (approx). Some notes on this part:</p>
<ul>
<li><p>First of all I would like to point to <a href="/manugangler">@manugangler</a>'s <a href="https://www.kaggle.com/manugangler/optimal-feature-extraction-for-class-6">kernel</a>, where he provided a method for fitting light curves to microlensing events. The features obtained from there worked wonderfully for us, especially after taking passband-wise ratios and differences for the microamp and microbase features, respectively. Einstein time is the most important feature of my model.</p></li>
<li><p>I browsed and read some texts on astrophysics which gave me some ideas for feature engineering. Below I comment the most useful ones for my model. </p>
<ul><li><p><code>distmod/log10(hostgal_photoz)</code>. If I understood correctly, this roughly measures how far from being a so-called 'standard candle' the object is (setting standard candles to be 'standard' supernovae). The actual formula is more complicated but for a tree-based approach this suffices. This is the 2nd or 3rd most important feature of my model.</p></li>
<li><p><code>log_10( luminosity_mean )</code> where <code>luminosity = 4pi * (distance in parsecs) * flux</code> (I may be very confused and this may have nothing to do with luminosity). Some similar feats explained by Kyle Boone like magnitude <code>-2.5log_10(max flux in passband i) - distmod</code> and their ratios. There are several other feats. If someone is interested I can explain further.</p></li></ul></li>
<li><p>I used different sets of time observations for feature aggregation. One set used all observations, another only the ones marked as detected, and another only the ones marked as undetected. After aggregation I took ratios passband-wise and I also created some other ratio variables such as <code>passband_std_undetected/mjd_diff_detected</code> and many others.</p></li>
</ul>
<p><strong>Some notes</strong></p>
<ul>
<li><p>Siddhartha augmented the train set by randomly selecting from 30% to 70% time observations for each object id, and then creating new objects with the selected data. This improved his nn cv score by 0.01. It did not seem to improve my lgbm. Looking now at other solutions it may have been a good idea to spend some more time thinking on this idea.</p></li>
<li><p>Class 99: we used Olivier's and Cpmp's method throughout the competition. The less than 5 submissions we used for probing this class were unsuccessful.</p></li>
<li><p>We tried to model hostgal_specz but this did not work for us.</p></li>
<li><p>Pseudolabeling also did not work.</p></li>
<li><p>Our public lb top score was 0.792. However the best score obtained by one of our single first-level models was 0.863. Hence I believe one of the strong aspects of our solution is the diversity of models constructed, which may have been achieved thanks to the fact that each team member used mostly his own features and set up. These were quite different between the three of us. </p></li>
</ul>
<p>This is all. Thank you for reading! If you have any question I will be pleased to answer</p>
<p>Eskerrik asko! Good one :)</p>
<p>In fact at some point I got to up to 20000 features but only found new overfitting feats, so I went back to the 8000 set (I only used around 200 of these for training though).</p>
<p>It is mainly due to taking passband-wise ratios and having three big classes of variables (detected, undetected, all) that the number grew so much..</p>
|
PLAsTiCC Astronomical Classification
|
A solution and some learnings
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi James, I do not quite understand your request. (I have not prepared my models for releasing them on Kaggle for use by others)</p>
<p>First I want to congratulate the winners and thank the organizers for a fantastic competition. It is great to have the scientific background of the competition. I feel like I have just started and hope there will be a continuation.
I have read through the descriptions of the top solutions. There were parts that I had included, others I had worked on but not managed to complete, as well as ideas I had not thought of. I would like to give a (hopefully) short description of my solution and what was missing. </p>
<ol>
<li>Data normalization.
For data normalization I used the features luminosity = flux*distmod**2 and magnitude = -2.5 log10(flux) - distmod (+ 27) for extragalactic objects. I did not use the normalization from <a href="https://www.kaggle.com/c/PLAsTiCC-2018/discussion/75054">https://www.kaggle.com/c/PLAsTiCC-2018/discussion/75054</a>; I would like to try that out. </li>
<li>Initial Models.
I split the models early into galactic and extragalactic. The initial models were NN, catboost, lgb and xgb models.
For the NN I started with the model given by <a href="https://www.kaggle.com/meaninglesslives/simple-neural-net-for-time-series-classification">https://www.kaggle.com/meaninglesslives/simple-neural-net-for-time-series-classification</a>
but changed the features to luminosity and added 1-3 layers with a tower of 3 CNNs (64/32/16), Dropout and MaxPooling/AveragePooling for processing a group of features for the 6 bands.
This was followed by a sequence of Dense layers. This was a simple NN that gave good initial results. I was not able to improve it substantially, but used it as input for stacking and averaging.
I used catboost, lgb and xgb with depth 3, initially with 5 folds and later with 10 folds. The best results were initially with catboost followed by lgb, while the xgb results were worse. Of course that required much feature engineering and took some time to pass the simple NN. When using the NN as input for stacking, the lgb results became the best and xgb was good enough to use in weighted averaging.
This was then the final architecture: base models with NNs, catboost and lgb, plus lgb and xgb models using stacking with the NN and catboost results as input. This was then followed by weighted averaging including the base models. </li>
<li>Feature Engineering and PCA.
There are extremely many possibilities for creating features. The most difficult ones are for differentiating the hard classes that seem to be different types of supernovae. For this I created, as many others did:
<ul><li>Features restricted to some areas: the detected area between the first and last detected = 1; the area where magnitude is below a limit; a growing/falling area prior to/after maximal flux but within the detected area, dividing by the maximum to use ratios, and including counts to help the models.</li>
<li>Taking differences between steps and dividing by the mjd difference to get derivatives, and statistics based on the derivatives. This captures the slope during the growth and decline of the luminosity as well as the changes in the slope.</li>
<li>Taking the average between steps and multiplying by the mjd difference, summing up to get integrals of the luminosity over the growing and falling area. This was followed by normalization to get a description of the shape.</li>
<li>Additional features such as A, eta described in <a href="https://arxiv.org/pdf/1512.01611.pdf">https://arxiv.org/pdf/1512.01611.pdf</a>.</li>
<li>autocorrelation__lag_1, fft_coefficients, entropy, skewness (but never kurtosis) (using tsfresh) and stetson_k.</li></ul>
When adding such features to the models this often led to worse results. To handle that, I used PCA on the 6 bands to reduce the dimensionality of such features from 6 to a lower dimension like 1, 2 or 3. This made it possible to use many more features and to use them together. I have not seen this use of PCA mentioned here before.</li>
<li>What failed or could have been better.
The part that I worked on and did not manage to complete successfully was curve fitting and the use of Bayesian techniques. For the curve fit I used Bazin. That gave extremely good results in CV, but the initial features gave bad results on the LB. I was not able to complete this together with Bayesian techniques in time. The main difficulty was the difference between the train and test sets. Here I did not manage to set up a reasonable process including augmentation, which was a great part of the solution by Kyle Boone. I think it is important to identify such essentials early enough to focus on the right parts. That will be my main follow-up, including completing the missing parts of the solution (and getting experience with Gaussian Processes).</li>
</ol>
|
PLAsTiCC Astronomical Classification
|
Congrats and 8th place Rapids solution updated!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Haha, you are the real hero! My sole standard to enter a kaggle competition is a consistent CV-LB. And I didn't have one until your kernel.</p>
<p>@ipraps Sorry about it. It is some personal feelings to share with my Chinese fella.</p>
<p>Thank you so much! Looking forward to your solution sharing!</p>
<p>I learned this trick from Master Zhou (舟神), salute to him!
By the way, a small complaint: when I was stuck in the middle of the competition, I sent merge requests to 5 or 6 teams and they were all rejected, and not a single team sent me a merge request. My popularity is really terrible ==</p>
<p>Update: my solution will be posted <a href="https://github.com/daxiongshu/Rapids_PLAsTiCC_2018">on github</a>. Now you can find a quick end2end demo with rapids there. Forgive me that I'm always getting distracted by other stuff, but I promise I'll get the whole solution done eventually!</p>
<p>Big congrats to the winners and new kaggle masters. I want to specifically thank Olivier <a href="/ogrellier">@ogrellier</a> and Kyle <a href="/kyleboone">@kyleboone</a>. Without your kernels, I couldn't even have started this competition.</p>
<p>Overall this competition was a fantastic learning process for me. If you have no astronomical background, this solution is for you. Considering I started climbing the LB from a model that scored just 0.95 LB, what happened is a miracle :P</p>
<p>Here is a preview of my solution and more to come! </p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/440750/10889/image.png" alt="Solution Overview"></p>
<p>Back to my laptop :P Ok, let me walk through this chart. </p>
<h1>Baseline Models and feature engineering.</h1>
<p>In case you haven't noticed, I'm using <a href="https://github.com/rapidsai/cudf">cudf</a> to replace pandas in this competition whenever I can. In general, it is at least <strong>10x</strong> faster than pandas on CPU for CSV reading and general groupby-aggregation <strong>with a single GPU</strong>. Sometimes I got a <strong>100x</strong> speedup. I used cudf to generate most of the features in <a href="https://www.kaggle.com/ogrellier/plasticc-in-a-kernel-meta-and-data/code">Olivier's kernel</a>. However, as a merely two-month-old library, you can expect some missing functionality in cudf. I will start publishing cudf tutorials for kaggle competitions and work with our developers to make your favorite pandas tricks come true on GPU! A demo notebook using <strong>cudf</strong> will be published sometime tomorrow. Stay tuned!</p>
<p>In addition, I also built some other features that gave me a small boost, including the decline rate from the peak, the number of ups and downs of the flux curve, the passband count percentage around the peak, <a href="https://www.kaggle.com/rejpalcz/feature-extraction-using-period-analysis">period features</a> (optimized), and features from <a href="https://www.kaggle.com/cttsai/forked-lgbm-w-ideas-from-kernels-and-discuss">Chia-Ta's kernels</a>. A small trick to add feature diversity is to use only the detected==1 rows and redo the above feature engineering. So in the end I have two sets of features, one from the whole time series and the other from the detected==1 series.</p>
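<p>The detected==1 trick boils down to running the same aggregations twice and joining the results with different prefixes; a minimal pandas-style sketch (the aggregation dictionary is illustrative, and the same pattern also works with cudf in place of pandas):</p>
<pre><code># ts is the per-observation time-series dataframe
aggs = {"flux": ["mean", "max", "min", "std"], "mjd": ["min", "max"]}

feat_all = ts.groupby("object_id").agg(aggs)
feat_det = ts[ts["detected"] == 1].groupby("object_id").agg(aggs)

feat_all.columns = ["all_" + "_".join(c) for c in feat_all.columns]
feat_det.columns = ["det_" + "_".join(c) for c in feat_det.columns]

features = feat_all.join(feat_det)   # whole-series features + detected==1 features
</code></pre>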
<p>Unfortunately, many of my other feature engineering attempts failed miserably. In the end, I had a mediocre lgb model with about 80 features, 0.58 CV and 1.05 LB. By the way, lgb always outperformed xgb by a small margin, but xgb on GPU is just so much faster, so in the end I used xgb GPU for experimental validation & feature selection, and lgb for submission.</p>
<h1>Stacking: non linear ensemble</h1>
<p>As the great Kaggler Giba once said, stacking is all about the diversity of base models. So I actually have three base models (level 1 models):</p>
<ol>
<li>lgb as in <a href="https://www.kaggle.com/cttsai/forked-lgbm-w-ideas-from-kernels-and-discuss">Chia-Ta's kernels</a> with whole series</li>
<li>lgb with features from detected==1 and train for galaxy and ex-galaxy separately</li>
<li>MLP as in <a href="https://www.kaggle.com/meaninglesslives/simple-neural-net-for-time-series-classification">Siddhartha's kernel</a> with same features as 2. Mysteriously, this keras mlp is just better than my own TF implementation with the exact same network structure and hyperparameters.</li>
</ol>
<p>A key to successful non-linear stacking is to avoid overfitting. I have found two tricks for this:</p>
<ol>
<li><p>In the level 1 base models, don't use the OOF fold for early stopping. Instead, split the training data within that fold again to get its own validation set. This will degrade the level 1 model's score, but it will help the level 2 stacking.</p></li>
<li><p>Make the 2nd level model simpler. For example, my 1st level lgb models use depths of 7 and 3, while in the 2nd level lgb the depth is always 1. The 1st level MLP has 4 layers but the 2nd level MLP only has 1 layer. And again, mysteriously, now it is my own TF MLP at the 2nd level that is always better than the keras counterpart. In the end I just averaged the lgb/xgb and nn level 2 predictions with equal weights. </p></li>
</ol>
<p>At this stage, my stacking is 0.51 cv and 0.95 private LB which is 0.1 better than my best single model. So I have a good stacking framework setup and everything is great except for my non-impressive rank. Now <strong>what I need is just some magic</strong>.</p>
<h1>RNN feature extraction with attention</h1>
<p>The sudden jump of @hklee made me believe that deep learning could be it. I looked at the data again and the great posts by <a href="https://www.kaggle.com/c/PLAsTiCC-2018/discussion/71949">CPMP</a> and <a href="https://www.kaggle.com/c/PLAsTiCC-2018/discussion/72646">hklee</a>. I did the following preprocessing of flux:</p>
<ol>
<li>log transform the flux and calculate the diff(periods=1) and just use the diff</li>
<li>calculate the mjd diff(periods=1) and just use the diff</li>
<li>segment the time series and reset the mjd_diff and flux_diff at segment boundaries to 0. </li>
<li>Insert a random amount of all-zero data into the gaps between the segments.</li>
</ol>
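<p>A rough pandas sketch of steps 1-3 (the sign-preserving log transform and the gap threshold are assumptions, since raw flux can be negative and the exact values aren't given here):</p>
<pre><code>import numpy as np

ts = ts.sort_values(["object_id", "mjd"])
ts["log_flux"] = np.sign(ts["flux"]) * np.log1p(np.abs(ts["flux"]))
ts["flux_diff"] = ts.groupby("object_id")["log_flux"].diff().fillna(0)
ts["mjd_diff"] = ts.groupby("object_id")["mjd"].diff().fillna(0)

# step 3: start a new segment wherever the time gap is large, and reset the diffs there
new_segment = ts["mjd_diff"] > 50   # gap threshold in days is an assumed value
ts.loc[new_segment, ["flux_diff", "mjd_diff"]] = 0
</code></pre>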
<p>And I use an RNN structure as the following:</p>
<pre><code># Helper methods (_get_embedding, _bd_rnn_layer, _get_variable, _fc) are class
# methods of the full model and are not shown here.
with tf.variable_scope(name):
    # split the per-timestep features from the integer passband id (B, S, F)
    net1, net2 = net[:, :, :-1], net[:, :, -1]
    net2 = self._get_embedding("%s/passband" % name, net2, V, E)
    net = tf.concat([net1, net2], axis=2)
    state = None
    net = self._bd_rnn_layer(net, "%s/rnn3" % name, cell_name, args,
                             state_fw=state, state_bw=state,
                             output_size=-1, useproject=False)
    # auxiliary head: predict the next flux diff (multi-objective training)
    args = {"num_units": H // 4, "activation": 'relu'}
    self.next_flux_diff = self._bd_rnn_layer(net, "%s/rnn5" % name, cell_name, args,
                                             state_fw=state, state_bw=state,
                                             output_size=1, useproject=True)[:, :-1, 0]
    args = {"num_units": H // 4, "activation": 'relu'}
    net = self._bd_rnn_layer(net, "%s/rnn4" % name, cell_name, args,
                             state_fw=state, state_bw=state,
                             output_size=-1, useproject=False)
    # simple attention pooling over time; the pooled vector is the "bottleneck"
    # that is later used as features for the gradient boosting models
    w = self._get_variable(name, name='attn', shape=[1, args["num_units"]])
    w = tf.expand_dims(w, axis=0)
    atten = tf.nn.softmax(w * net)
    self.bottleneck = tf.reduce_sum(net * atten, axis=1)
    net = self._fc(self.bottleneck, num_classes, layer_name='%s/out' % name)
</code></pre>
<p>I started to train this RNN using the 7K training samples and, as you can imagine, it went nowhere. <em>Deep learning demands big data unless it can be transferred.</em> And we do have big test data with 3M samples. Then it struck me that I could use pseudo labels of the test data for training, as my great kaggle buddy <a href="/xiaozhouwang">@xiaozhouwang</a> did in <a href="https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/discussion/47722">tensorflow speech 3rd place</a>. Intuitively, I used my best model at that time (LB 0.95) to generate pseudo labels, which is just np.argmax(test_predictions, axis=1), and <strong>Bazinga!</strong> it just worked! So as the chart shows, I did this repeatedly for the last week of the competition and my score improved as [0.95 LB, 0.88, 0.85, 0.83, 0.82, 0.81, 0.80], converging to where I am.</p>
<p>Some tricks of this step:</p>
<ol>
<li><p>Only provide the time series data to the RNN, without meta data. In this way, we force the RNN to focus on the time series patterns only, to fit a score that was obtained with all features and a complex ensemble.</p></li>
<li><p>Use all the test data for training and use the train data for validation and early stopping. The rationale is that we expect the RNN to learn a middle ground that best fits the pseudo labels of the test set and the true labels of the train set. </p></li>
<li><p>Multi-objective function: in addition to the final classification, the model also predicts the next flux diff. It helps the model avoid a cold start.</p></li>
<li><p>Use different RNN cells in an interleaved way. For example, if GRU gets the best score in this experiment, use its pseudo labels to train an LSTM for the next experiment.</p></li>
<li><p>Use the bottleneck layer as features instead of the softmax output. Remember, we want the middle-ground features that could be useful for the train data, not the ones that fit the pseudo labels best.</p></li>
</ol>
<p>Yep, that's it. I'll also open source my solution on github. Thank you all. Happy kaggling! :D</p>
<p>@SimonChen Congratulations on your medal 🏅️! Master Zhou (舟神) is <a href="/xiaozhouwang">@xiaozhouwang</a>, whom you already follow!</p>
<p>I updated it. Did I answer your question?</p>
|
PLAsTiCC Astronomical Classification
|
Overview of 1st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>When I change the redshift, I adjust the flux to account for the change in distance (the general relativistic version of 1/r^2) and for the time dilation of photons. This is actually quite straightforward: the "distance modulus" is the combination of all of these effects. You just need to apply a correction of 10**(0.4*(old distmod - new distmod)) to your observations.</p>
<p>You should also take into account the fact that the redshift shifts what part of the spectrum the bands fall in. I did not incorporate that into my solution because I didn't have time to evaluate how GP modeling errors would introduce biases into the data. In principle, it should be pretty straightforward to do with the output of my GP.</p>
<p>I also adjusted the observed times to stretch out the lightcurve. eg: at redshift 1 a supernova takes twice as long to decline as at redshift 0.</p>
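<p>A hedged sketch of those two corrections when moving an object from redshift z_old to z_new (distmod_old/distmod_new, the redshifts, flux, flux_err and mjd are assumed inputs; using the flux peak as the reference point for the time stretch is also an assumption):</p>
<pre><code>import numpy as np

# flux scaling from the change in distance modulus
scale = 10 ** (0.4 * (distmod_old - distmod_new))
new_flux = flux * scale
new_flux_err = flux_err * scale

# time dilation: the light curve stretches by a factor of (1 + z)
t_ref = mjd[np.argmax(flux)]
new_mjd = t_ref + (mjd - t_ref) * (1 + z_new) / (1 + z_old)
</code></pre>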
<p>I'll release my code along with the augmented training set and GP features after I finish cleaning it up. It will be very interesting to see how the different models complement each other.</p>
<p>Yes, the markdown formatter ate my asterisks. I fixed it now.</p>
<p>Getting the background right doesn't really matter for the scaling. After you take the exponential above (turning from magnitudes into a scaling of the flux), you end up with a positive number that you are multiplying everything by. If I move an object farther away, I end up with a scale that is less than one, so a large positive number gets transformed into a small positive number, and a large negative number gets transformed into a small negative number. Subtracting the background before or after gives the same result, regardless of what method you use.</p>
<p>I'm definitely less experienced with ML than a lot of people on here, so I decided to focus on doing one thing well. I'm very interested to see how well we can do if we blend everyone's models together.</p>
<p>I used a continuous GP for the passband, just using the passband labels as the coordinates in the second dimension.</p>
<p>Throwing out observations to get lightcurves that look more like the WFD was essential to my degradation. I came up with a model of how many lightcurve points there are for a typical WFD object, and I threw out data to get to that.</p>
<p>Hi Manu, good to hear from you. It's been a while! I definitely think that l could mix my solution with other teams' solutions and get a boost. Since I had a lot to learn, I decided to focus on one thing and try to get it right. It seems like in the end lgbm models outperformed everything else by quite a bit, so I'm not sure how much of a gain I'd get.</p>
<p>What did you use to build your models? I thought about doing that, but decided that I didn't have the computational resources or time to do fits to everything. I'm guessing that you did something like GP interpolation of well sampled timeseries + factor analysis?</p>
<p>EDIT: code is now available on my github page at <a href="https://github.com/kboone/avocado">https://github.com/kboone/avocado</a></p>
<p>First of all, thanks to everyone who participated in this competition! I learned a lot doing it, and I have enjoyed all of the discussions that I had with you. Here is an overview of my model that took 1st place in this competition. I will be releasing the code with a full writeup shortly.</p>
<p>I am an astronomer studying supernova cosmology, so my work mainly focused on trying to tell the different supernova types apart. This ended up working out well because everything else was fairly easy to tell apart. Here is a summary of my solution:</p>
<ul>
<li>Augmented the training set by degrading the well-observed lightcurves in the training set to match the properties of the test set.</li>
<li>Use Gaussian processes to predict the lightcurves.</li>
<li>Measured 200 features on the raw data and Gaussian process predictions.</li>
<li>Trained a single LGBM model with 5-fold cross-validation.</li>
</ul>
<p>I first use Gaussian process (GP) regression to extract features. I trained a GP on each object using a Matern Kernel with a fixed length scale in the wavelength direction and a variable length scale in the time direction. My machine could do ~10 fits per second so it took around 3 days of computation time to do all the fits. Gaussian processes produce very nice models for well-sampled lightcurves, and are able to get a nice model even when the measurements are in different bands. They also handle measurements with large uncertainties very gracefully. For poorly sampled lightcurves, the GP fits the available data well, but doesn't always do great for extrapolation. Here is an example of what comes out:</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/440847/10890/gp_example.png" alt="gp example"></p>
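<p>A minimal sketch of this kind of 2D GP fit with the <code>george</code> package (which I mention below); the kernel amplitude, length-scale values and the prediction grid are illustrative, and for simplicity this sketch optimises all hyperparameters rather than keeping the wavelength length scale fixed:</p>
<pre><code>import numpy as np
import george
from george import kernels
from scipy.optimize import minimize

# X: (n_obs, 2) = [time in days, passband index]; flux, flux_err: (n_obs,)
X = np.stack([mjd, passband], axis=1)

kernel = np.var(flux) * kernels.Matern32Kernel([20.0 ** 2, 6.0 ** 2], ndim=2)
gp = george.GP(kernel)
gp.compute(X, flux_err)

def neg_log_like(params):
    gp.set_parameter_vector(params)
    return -gp.log_likelihood(flux)

result = minimize(neg_log_like, gp.get_parameter_vector(), method="L-BFGS-B")
gp.set_parameter_vector(result.x)

# X_pred: a dense grid of (time, band) points at which to evaluate the model
pred, pred_var = gp.predict(flux, X_pred, return_var=True)
</code></pre>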
<p>Using the GP predictions, I calculated lots of different features. The distinguishing features of supernovae are their peak brightnesses and the width of their lightcurves, so I put several measures of those into the model. For poorly sampled lightcurves, the GP doesn't always give great results, so I added features that let the model know how well the GP is doing. This basically boils down to counting the number of observations in different windows around maximum light. I also added features related to the signal-to-noise in each band, and some simple peak detection and counting to help with the non-supernova classes.</p>
<p>Now the training set is very different from the test set. To deal with this, I took every lightcurve in the training set and degraded it up to 40 times to get something that looks like the less well-sampled lightcurves in the test set. The degradation includes:</p>
<ul>
<li>Modifying the brightness of galactic objects.</li>
<li>Modifying the redshift of extragalactic objects (including dilating time and changing the brightness).</li>
<li>Adding in large "gaps" like the ones in the real data that show up because of the time of year.</li>
<li>Choosing a new photo-z and photo-z error for the observation based on a model of how the spec-zs in the data turn into photo-zs.</li>
<li>Simulating the detection to choose which objects would be included in the dataset that we were given.</li>
</ul>
<p>This degradation was all tuned to the training/test datasets, and no external data was used. After this procedure, I end up with a training set of ~270000 objects that is much more representative of the test set than the original training set. I trained a LightGBM model on this training set using 5-fold cross-validation and making sure that I kept the up to 40 degradations of each object in the same folds. After tuning this model, I get a CV of around 0.4 on the original training set. Here is the confusion matrix:</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/438639/10876/confusion_matrix.png" alt="confusion matrix"></p>
<p>There are some interesting differences compared to <a href="https://www.kaggle.com/c/PLAsTiCC-2018/discussion/74564">CPMP's confusion matrix</a>, such as the fact that I have much lower accuracy on class 6 objects than CPMP. This appears to be due to the fact that my degradation procedure produces more low signal-to-noise class 65 than class 6 objects. It will be interesting to see if my method really does reproduce the test set better.</p>
<p>I played a lot with how to identify the class 99 targets. I found that the tree based models that I used are not very good for outlier detection. My best results came from choosing a flat score to give to the class 99 objects, and then using that in the soft-max to get the final probabilities. Using this, I got what I consider my best real score of 0.726 on the public leaderboard.</p>
<p>After trying to improve this score for a long time and getting nowhere, with a week to go I realized that I could figure out what the class 99 objects look like/don't look like by probing the leaderboard. This defeats the purpose of the class 99 prediction, and unfortunately ends up doing much better than any real estimate of the class 99 objects. I contacted the organizers about it, and was told that it was within the Kaggle rules to do this. In the end, I found that my best prediction of the class 99 objects was a weighted average of the predictions for classes 42, 52, 62 and 95. This trick boosted my final score to 0.670 on the public leaderboard. It will be interesting to see what other competitors did here.</p>
<p>Overall, I really enjoyed this competition and I learned a lot! I am working on cleaning up my code and making it friendly for others to play with. I think that there is quite a bit of room to improve the tuning of my model, and I did not try doing any kind of ensembling or using classifiers other than LGBM. Thanks to everyone who participated, especially <a href="/cpmpml">@cpmpml</a> who initiated a lot of great discussions and <a href="/ogrellier">@ogrellier</a> whose kernel I started with for my work!</p>
<p>For any astronomers who participated, I'll be at AAS in a couple weeks. I'd love to meet up and discuss the competition!</p>
<p>Also, I think that what happened with class 99 was very unfortunate. If there is a PLAsTiCC 2.0, the private test should have different kinds of class 99 objects than the public one so that you can't cheat by probing the leaderboard. I only discovered the probing method in the last week. I contacted the Kaggle staff, and I was advised not to share it because it would really have messed up the final placements.</p>
<p>It would be very interesting if the contest organizers could recalculate our scores with class 99 left out!</p>
<p>I actually used george instead of celerite because I have used that in the past. celerite is a newer package by the same author, and is presumably better although I haven't tried it myself.</p>
<p>I had the idea for the data augmentation and coded it all up in one night. When I finished it, I immediately jumped from 1.0X on the leaderboard to 0.8X and was very excited. Before that point I thought that I had no chance in this competition! I'll make sure to share the augmented training set that I produced with everyone. I'm sure that your ML models are much much better than mine, and I think that the augmented training set was what won me the competition.</p>
<p>Thanks for all of the posts that you made on the discussion board. They were very useful to me, and helped guide me along in the competition.</p>
<p>I used log(z) because redshift is always greater than 0. The log transformation preserves that.</p>
<p>We have more spectra of low redshift objects because they are brighter. With a big telescope, you can measure the redshift of a Type Ia supernova at redshift 0.1 in seconds, but it will take hours to measure the redshift of one at redshift 1. That naturally leads to a bias in which objects we end up having redshifts for.</p>
<p>I used a 2-dimensional Gaussian process where one dimension is the time and the other is the passband. Both dimensions used a Matern kernel. I fixed the passband kernel length scale to be a fixed, relatively large value since the fits for it weren't great. The GP definitely messes up if it doesn't have all of the information, eg this GP for a SN Ia where the rise time should be < 50 days:</p>
<p><img src="https://i.imgur.com/NYbb8bo.png" alt="bad gp"></p>
<p>It would be nice if the GP could be trained to match a SN Ia more accurately, but that is left for future work. Instead, I dealt with this in two ways. First, I added features to my LGBM model that let it figure out when the GP fit was bad. In this case, there are several features counting how many observations there were before maximum, so it knows not to trust the rise time. Second, I augmented the training set by throwing out blocks of observations. That created lots of lightcurves like this for the LGBM model to train on.</p>
<p>I did not use the Gaussian Process predictions for generating new objects. I thought about doing that, but decided that the errors on the GP model were too large, and I would end up generating non-realistic models. Instead, I simply noised up, scaled and threw out points from the training set spectra.</p>
<p>I'll release a full write-up when I get the code all cleaned up that details all of these features.</p>
<p>I basically made histograms of everything that you could measure in the test and validation sets, and added degradations based off of that.</p>
<p>For example, I found that objects that have speczs (like the training set) tend to be at lower redshifts than objects in the test set. Looking at the log(redshift), there is a shift of about 0.2 between the two:</p>
<p><img src="https://i.imgur.com/xOOxGfa.png" alt="specz comparison"></p>
<p>To pick redshifts for my new sample, I added the shift along with a gaussian with a spread of 0.2 in log(redshift). That makes the new training set more representative of the test set. I did something similar for other variables, and I'll discuss that in my full writeup.</p>
<p>The code is now available on my github page at <a href="https://github.com/kboone/plasticc">https://github.com/kboone/plasticc</a></p>
|
PLAsTiCC Astronomical Classification
|
PostProcess Trick - 21st place Partial Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PLAsTiCC Astronomical Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Great! What about LB? Did you apply it to corresponding submission?</p>
<p>Hi all!
First, congrats to all competitors and especially the solo gold winners. They just did an amazing job. I especially thank <a href="/ogrellier">@ogrellier</a> for his great kernels. It was the first time I'd used the chunk method and multi-threading.</p>
<p>Our final submission was a blend of our single models, and I had a single LGBM model with a CV of 0.493 and LB of 0.915. For me, the most important boost came from using ratios of features across different passbands and from using different sample weights instead of the direct inverse ratio of class counts. Also, keeping only the top 55 features as a feature selection method gave another important boost.</p>
<p>My teammates will explain the details of the models and features in another thread. In this thread I want to mention the postprocessing trick I used, which until now I haven't seen in another thread. First I validated the postprocessing on my OOF predictions, and it also improved my LB score. After I was invited to the team, the same process improved both the CV and LB of their models as well.</p>
<p>Of course, as the model itself improves, gain from postprocess decreases. But, I think it will improve most of your models as well.</p>
<p>Before this method, my best model had a CV of 0.52 and LB of 0.968 (Private: 0.99101). With this method CV moved to 0.493 and LB to 0.915 (Private: 0.93077) :)</p>
<p>I think what makes this trick work is the competition metric. For each row, the correct class's probability is multiplied by 1 and the rest by zero. That is, it does not matter how you distribute probabilities among the wrong classes. So with this operation, you can compensate for your model's weaknesses by simply rescaling the class predictions with weights. For example, my model was giving a higher probability for class 42 and class 90 most of the time, and lowering their predictions always improved my CV. </p>
<p>Here is the code for your oof predictions. You can give a try:</p>
<pre><code>import pandas as pd

def change(preds):
oof_df = pd.DataFrame(preds).copy()
oof_df[3] = oof_df[3]* 1
oof_df[11] = oof_df[11]* 1
oof_df[0] = oof_df[0]* 1
oof_df[1] = oof_df[1]* 1
oof_df[2] = oof_df[2]* 1
oof_df[4] = oof_df[4]* 1
oof_df[5] = oof_df[5]* 1
oof_df[6] = oof_df[6]* 1
oof_df[7] = oof_df[7]* 1
oof_df[8] = oof_df[8]* 1
oof_df[12] = oof_df[12]* 1
oof_df[10] = oof_df[10]* 1
oof_df[13] = oof_df[13]* 1
#This part scales them so their sum makes 1 back again.
sum = oof_df[[0,1,2,3,4,5,6,7,8,9,10,11,12,13]].sum(axis=1)
for col in [0,1,2,3,4,5,6,7,8,9,10,11,12,13]:
oof_df[col] = oof_df[col]/sum
return oof_df.values
oof_preds_modfy = change(oof_preds) #Give oof preds array to the function
print("Scores OOF Processed: ",multi_weighted_logloss(y_true=y, y_preds=oof_preds_modfy))
print("Scores OOF Original: ",multi_weighted_logloss(y_true=y, y_preds=oof_preds))
</code></pre>
<p>It's quite simple. Try different weights for each class one by one and check whether your multi weighted logloss improves or not. Each class has one direction of improvement, so it takes a little time to find the right combination. Each model has different characteristics, so you have to adjust the weights based on your OOF predictions. Here are my weights. (Some look weird, don't they? But they improved the score.)</p>
<pre><code>oof_df[3] = oof_df[3]*0.55
oof_df[11] = oof_df[11]*0.5
oof_df[0] = oof_df[0]*2
oof_df[1] = oof_df[1]*0.85
oof_df[2] = oof_df[2]*0.5
oof_df[4] = oof_df[4]*1
oof_df[5] = oof_df[5]*4
oof_df[6] = oof_df[6]*0.65
oof_df[7] = oof_df[7]*2.9
oof_df[8] = oof_df[8]*0.35
oof_df[12] = oof_df[12]*3.7
oof_df[10] = oof_df[10]*1.9
oof_df[13] = oof_df[13]*0.9
</code></pre>
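<p>For what it's worth, the manual search described above can also be automated with a simple coordinate-descent loop over the classes (a sketch; it assumes the <code>oof_preds</code> array, labels <code>y</code> and the <code>multi_weighted_logloss</code> helper used earlier, and the candidate weight grid is arbitrary):</p>
<pre><code>import numpy as np

weights = np.ones(14)
best = multi_weighted_logloss(y_true=y, y_preds=oof_preds)

for _ in range(3):   # a few passes over all classes
    for c in range(14):
        for w in [0.35, 0.5, 0.65, 0.85, 1.0, 1.5, 2.0, 3.0, 4.0]:
            trial = weights.copy()
            trial[c] = w
            preds = oof_preds * trial
            preds = preds / preds.sum(axis=1, keepdims=True)
            score = multi_weighted_logloss(y_true=y, y_preds=preds)
            if score < best:
                best, weights = score, trial
print(best, weights)
</code></pre>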
<p>Moreover, yesterday we had one submission left and I wanted to try another postprocessing.
It was simply this code:</p>
<pre><code>pred[i] = pred[i].apply(lambda x: 1.1 if x > 0.76 else x )
#After changing preds, make sure that sum of preds make 1 again.
</code></pre>
<p>Of course, I first tried it on the OOF predictions and it was giving an improvement in the third decimal. So our final submission moved from 0.854 to 0.851 (Private: 0.87021 to 0.86777), which helped us move one place up on the private LB :P.</p>
<p>I would be very happy if you also try it and share with us whether it worked for you as well.</p>
<p>See you in next competition!</p>
|
Parkinson's Freezing of Gait Prediction
|
10th place solution: U-Net with squeeze & excitation
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It means I trained multiple models and used the CV scores and LB scores to filter them. Essentially…</p>
<ol>
<li>Train 20 models with identical hyperparameters & the same cross validation fold, but different initial weights & different data shuffling.</li>
<li>Pick the 4 models that had the highest CV score during #1's training runs.</li>
<li>Check the public LB score each of the models from #2 achieves individually.</li>
<li>Submit ensemble that uses the top 2 models from #3.</li>
</ol>
<p>This strategy stems from an issue I had early on in the competition where certain training runs seemed to "get lucky" and produced models that generalized a lot better than others. The data augmentation described in my original post helped to reduce the amount of random variation in the LB scores for my individual models, so cherry picking models didn't matter as much with all the data augmentation in place, but I still did it for the final ensemble regardless.</p>
<p>I have a suspicion my models may have been using the scale of the features as an indication of which dataset the sequences came from.</p>
<p>Squeeze & excitation allows the model to conditionally multiply certain channels in the convolutional layer outputs by 0, effectively toggling certain convolutional filters on and off based on global context information. So the model could have dataset-specific components. I would guess leaving the features unnormalized with inconsistent units made it easier for it to do that or forced it into an approach which generalizes relatively well. </p>
<p>Looking back at my old notes, it appears normalizing to consistent units and adding a dataset type indicator as a feature works <em>almost</em> as well as using unscaled data. Scores from 10-fold cross validation with different subjects in each fold included below.</p>
<table>
<thead>
<tr>
<th>Consistent units?</th>
<th>Dataset type as feature?</th>
<th>Local CV score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Yes</td>
<td>No</td>
<td>0.341</td>
</tr>
<tr>
<td>No</td>
<td>Yes</td>
<td>0.355</td>
</tr>
<tr>
<td>No</td>
<td>No</td>
<td>0.367</td>
</tr>
</tbody>
</table>
<p>Congrats to the winners! Thanks to the competition organizers for putting together an interesting challenge! Here's my solution.</p>
<h1>Model architecture</h1>
<p>I used a 1D convolutional U-Net with squeeze-and-excitation and 5 encoder/decoder pairs.</p>
<p>Squeeze-and-excitation seemed to be very beneficial, presumably because it allows the model to take global context into consideration while classifying each sample. I processed the data in extremely long context windows (10240 samples).</p>
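<p>For illustration, a standard 1D squeeze-and-excitation block looks roughly like this (a generic sketch; the reduction ratio and exact placement inside the U-Net are illustrative):</p>
<pre><code>import torch.nn as nn

class SEBlock1d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):              # x: (batch, channels, length)
        s = x.mean(dim=-1)             # squeeze: global average over time
        w = self.fc(s).unsqueeze(-1)   # excitation: per-channel gates in [0, 1]
        return x * w                   # reweight channels using global context
</code></pre>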
<h1>Features</h1>
<ul>
<li><strong>Raw acceleration values:</strong> AccV, AccML, AccAP<ul>
<li>I did not normalize these in any way. </li></ul></li>
<li><strong>Time features:</strong> <br>
<code>
df['NormalizedTime'] = df['Time'] / df['Time'].max()
</code><br>
<code>
df['SinNormalizedTime'] = np.sin(df['NormalizedTime'] * np.pi)
</code></li>
</ul>
<p>I also experimented with adding a variety of frequency domain features that were calculated using wavelet transforms but that didn't help.</p>
<h1>Training data augmentation</h1>
<ul>
<li><strong>Random low pass filtering:</strong><ul>
<li>Frequency cutoff was 5% - 37.5% the sample rate</li>
<li>Applied to half the training sequences</li></ul></li>
<li><strong>Random time warp:</strong><ul>
<li>Used linear interpolation to change the sequence length by +/- 10% (or any value in between; the scale was sampled from a uniform distribution)</li>
<li>Applied to half the training sequences</li></ul></li>
<li><strong>Random flip:</strong><ul>
<li>Multiplied AccML by -1 to reverse right & left</li>
<li>Applied to half the training sequences</li></ul></li>
<li><strong>Random magnitude warping:</strong><ul>
<li>The difference between each acceleration feature's value and its mean value was multiplied by a coefficient randomly sampled from a gaussian distribution with a mean of 0 and a standard deviation of 0.1</li>
<li>Applied to half the training sequences</li></ul></li>
<li><strong>Noisy time features:</strong><ul>
<li>Normalized times within each context window shifted by value sampled from gaussian distribution with mean of 0 and standard deviation of 0.05</li>
<li>Applied before calculating SinNormalizedTime (so the same noise impacts both features).</li>
<li>Applied to ALL the training sequences</li></ul></li>
</ul>
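<p>A rough sketch of three of these augmentations (the random low-pass filter is omitted, and the magnitude warp is interpreted as scaling the deviation from the mean by 1 + N(0, 0.1)):</p>
<pre><code>import numpy as np

def augment(acc, rng=np.random):
    """acc: (length, 3) array with columns AccV, AccML, AccAP."""
    acc = acc.copy()
    if rng.rand() < 0.5:                       # random flip: reverse right & left
        acc[:, 1] *= -1                        # AccML column
    if rng.rand() < 0.5:                       # random time warp by +/- 10%
        new_len = int(len(acc) * rng.uniform(0.9, 1.1))
        idx = np.linspace(0, len(acc) - 1, new_len)
        acc = np.stack([np.interp(idx, np.arange(len(acc)), acc[:, c])
                        for c in range(3)], axis=1)
    if rng.rand() < 0.5:                       # random magnitude warping
        mean = acc.mean(axis=0)
        acc = mean + (acc - mean) * (1 + rng.normal(0, 0.1, size=3))
    return acc
</code></pre>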
<h1>Inference time data augmentation</h1>
<p>Each sample was classified 16 times by each model.</p>
<ul>
<li>With and without multiplying AccML by -1 to reverse right & left</li>
<li>Sequences were classified in overlapping context windows with a stride equal to 1/8 the window length. Similar to random crop data augmentation.</li>
</ul>
<p>The values saved to the submission file were the simple mean of all predictions from all models.</p>
<h1>Handling defog vs tdcsfog</h1>
<p>I used the same models for both datasets. I did not do anything to normalize the sample rates or feature values. I did not even convert the features to have the same units. Normalization seemed to be harmful.</p>
<h1>Ensembling / random seed hacking</h1>
<p>I used 2 near-identical models that were trained with identical hyperparameters from the same cross-validation fold, but with different random seeds for weight initialization & shuffling the training data. They were filtered to have mAP scores in the top 20% of my local cross validation and top 50% of my LB scores. This probably improved my score by around 0.01 - 0.02 vs. just using 2 random models.</p>
<p><strong>Inference notebook:</strong> <a href="https://www.kaggle.com/jsday96/parkinsons-overlapping-se-unet-frequency-domain" target="_blank">https://www.kaggle.com/jsday96/parkinsons-overlapping-se-unet-frequency-domain</a></p>
|
Parkinson's Freezing of Gait Prediction
|
11th place solution: LSTM-CNN + rolling features
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Features</h1>
<p>I used the following features as model inputs, generated from AccV, AccML, AccAP</p>
<ul>
<li>Lags</li>
<li>Global mean, median, max, average, std and quantiles</li>
<li>Rolling mean, median, max, average, std and quantiles (+all of these applied to reversed time series)</li>
<li>Rolling mean, median, max, average, std and quantiles of first difference of time series (+all of these applied to reversed time series)</li>
<li>Mean number of sign changes over rolling window (+reversed time series)</li>
<li>Exponentially weighted mean of first difference of time series</li>
<li>Fraction of time elapsed</li>
</ul>
<p>My code for the feature generation:</p>
<pre><code> () -> pd.DataFrame:
back:
rolling = dt[cols][::-].rolling(step, min_periods=)
suffix =
:
rolling = dt[cols].rolling(step, min_periods=)
suffix =
aggfunc.startswith():
quantile = (aggfunc.split()[]) /
(
rolling.quantile(quantile)
.add_suffix(suffix))
:
(
rolling.agg(aggfunc)
.add_suffix(suffix))
():
cols = [, , ]
dt = data.copy()
defog:
data[cols] = data[cols] /
dt[] = (defog)
verbose: ()
aggfunc [, , , , ]:
dt = dt.join(
dt[cols].groupby(dt.assign(dummy=).dummy)
.transform(aggfunc).add_suffix()
)
step1 =
step2 =
verbose: ()
shift [, , -, -]:
shift > :
suffix_name =
fill_data = dt[cols].iloc[: shift]
:
suffix_name =
fill_data = dt[cols].iloc[shift:]
dt = dt.join(
dt[cols]
.shift(shift)
.fillna(fill_data)
.add_suffix(suffix_name)
)
aggfuncs = [
, , , , ,
, ,
]
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(dt, step1, aggfunc, cols)
)
verbose: ()
aggfunc aggfuncs:
funcname = aggfunc (aggfunc, ) aggfunc.__name__
dt = dt.join(
rolling_agg(dt, step2, aggfunc, cols)
)
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(dt, step1, aggfunc, cols, back=)
)
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(dt[::-], step2, aggfunc, cols, back=)
)
verbose: ()
diff = dt[cols].transform().add_suffix()
diff = diff.fillna(diff.iloc[])
cols = [, , ]
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(diff, step1, aggfunc, cols)
)
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(diff, step2, aggfunc, cols)
)
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(diff, step1, aggfunc, cols, back=)
)
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(diff, step2, aggfunc, cols, back=)
)
verbose: ()
sign_change = (
diff.apply(np.sign)
.transform()
.apply(np.)
.divide()
.fillna()
.add_suffix()
)
cols = [, , ]
aggfuncs = []
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(sign_change, step1, aggfunc, cols)
)
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(sign_change, step2, aggfunc, cols)
)
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(sign_change, step1, aggfunc, cols, back=)
)
verbose: ()
aggfunc aggfuncs:
dt = dt.join(
rolling_agg(sign_change, step2, aggfunc, cols, back=)
)
verbose: ()
dt[] = dt.Time.divide(dt.Time.())
dt.drop(, axis=).fillna()
</code></pre>
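<p>In condensed form, the rolling-feature idea above can be sketched as follows; the window size, statistics, quantile and column suffixes here are illustrative assumptions rather than the exact configuration.</p>
<pre><code>import pandas as pd

def add_rolling_features(df, cols=("AccV", "AccML", "AccAP"), window=100):
    cols = list(cols)
    out = df.copy()
    roll = out[cols].rolling(window, min_periods=1)
    out = out.join(roll.mean().add_suffix(f"_roll_mean_{window}"))
    out = out.join(roll.std().add_suffix(f"_roll_std_{window}"))
    out = out.join(roll.max().add_suffix(f"_roll_max_{window}"))
    out = out.join(roll.quantile(0.25).add_suffix(f"_roll_q25_{window}"))
    # the same statistic on the reversed series gives a "future-looking" window
    rev = out[cols][::-1].rolling(window, min_periods=1).mean()[::-1]
    out = out.join(rev.add_suffix(f"_roll_mean_rev_{window}"))
    return out
</code></pre>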
<p>After those transformations, I scaled the data and split all time series into chunks of length 10000, saved as separate files in Feather format.</p>
<h1>Model</h1>
<p>I used LSTM-CNN model: the input is first fed into three parallel blocks of Conv1D with the different kernel sizes of 3, 5 and 7. Then the input is concatenated with the output of those conv layers and passed to two sequential layers of LSTM. After all - there is a linear layer that does the classification</p>
<p>Here's a code for model definition</p>
<pre><code># Reconstructed from the stripped snippet: the class/helper names, channel counts,
# hidden size, dropout and the number of input features are assumptions, not the
# author's exact values.
import torch
import torch.nn as nn

N_FEATURES = 126   # number of engineered input features (illustrative)
N_CLASSES = 3      # StartHesitation, Turn, Walking


def block(kernel_size):
    # one parallel Conv1d branch
    return nn.Sequential(
        nn.Conv1d(N_FEATURES, N_FEATURES, kernel_size, padding="same"),
        nn.ReLU(),
        nn.Conv1d(N_FEATURES, N_FEATURES, kernel_size, padding="same"),
        nn.ReLU(),
    )


class LSTMCNN(nn.Module):
    def __init__(self, kernels=(3, 5, 7), hidden=128):
        super().__init__()
        self.kernels = kernels
        self.conv_nets = nn.ModuleList([
            block(k) for k in kernels
        ])
        self.lstm = nn.LSTM(
            N_FEATURES * len(self.kernels) + N_FEATURES,
            hidden, 2, batch_first=True, bidirectional=True, dropout=0.1)
        self.linear = nn.Linear(hidden * 2, N_CLASSES)

    def forward(self, x):
        # x: (batch, features, time)
        conv_res = [net(x) for net in self.conv_nets]
        conv_res.append(x)   # concatenate the raw input with the conv outputs
        conv_res_tensor = torch.cat(conv_res, dim=1)
        lstm_out, _ = self.lstm(conv_res_tensor.transpose(1, 2))
        res = self.linear(lstm_out).transpose(1, 2)
        return res
</code></pre>
<p>I did not manage to create a good validation pipeline, so I didn't use any folds. I trained the model for 30 epochs, monitoring the loss on validation data (approx. 10% of subjects).</p>
|
Parkinson's Freezing of Gait Prediction
|
12th place solution: Simple features and LSTM.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used it because it was effective in past similar <a href="https://www.kaggle.com/competitions/ventilator-pressure-prediction/discussion?sort=recent-comments" target="_blank">competitions</a>.<br>
I also tried the transformer on tDCS but couldn't get a good CV, so I applied it to DeFOG only.</p>
<p>Thank you for your comment. <br>
The reason I compress the data with such a scheme is to keep information even for long sequences. I wanted to try other methods such as cropping, but it was a little difficult for me, so I first adopted the simplest one I could imagine (the competition ended before I could try other methods😅).<br>
Other teams tried various methods, such as cropping, which I think is useful for both of us.<br>
And by 'shortest training data' I meant the smallest "len(df)".</p>
<p>Thank you! <br>
By the way, sorry, I couldn't quite understand the point of your question.<br>
The competition I mentioned is <a href="https://www.kaggle.com/competitions/ventilator-pressure-prediction" target="_blank">Google Brain - Ventilator Pressure Prediction</a>. I just modified and used the model from the 3rd place team of that competition.</p>
<p>I would like to thank host and kaggle for organizing such interesting competition.<br>
I believe that the method proposed by the winner and prizewinners who received high scores in both the public and private tests will be helpful to host.</p>
<h2>Summary</h2>
<ul>
<li>Ensemble of 4 LSTM-based models</li>
<li>Aggregate features: mean, std, max, min, median.</li>
<li>Notype data is used as validation data.</li>
<li>Models selected separately for tDCS FOG and DeFOG.</li>
</ul>
<h2>Data Preprocessing</h2>
<p>I compressed every series to a fixed length of 2048, because the shortest training series is 2359 samples long.</p>
<p>Compression Method</p>
<ol>
<li>Expand the original data to the smallest multiple of the target length that exceeds the original data length: (ch, target_size * n)</li>
<li>Fold the data into (ch, target_size, n).</li>
</ol>
<p>As features, I used the mean, std, max, min, and median within each group of n samples.<br>
In addition, it was also useful to add percentile points (15th-90th percentiles).<br>
I barely used the per-Id and Subject information; only "Visit" was used as an auxiliary label for multitask learning.</p>
<pre><code># Reconstructed from the stripped snippet: the function name, zoom mode and the
# percentile grid are assumptions; the folding scheme follows the description above.
import numpy as np
import scipy.ndimage as ndi


def compress(x, target_size=2048, use_percentile_feat=False):
    # x: (ch, length) raw sensor array
    ch = x.shape[0]
    input_size = x.shape[1]
    pad = target_size - input_size % target_size
    factor = (input_size + pad) / input_size
    # stretch each channel so its length becomes a multiple of target_size
    x = np.array([ndi.zoom(xi, zoom=factor, mode="nearest") for xi in x])
    x = x.reshape((ch, target_size, -1))
    res = {}
    res["mean"] = np.mean(x, axis=-1).reshape(ch, -1)
    res["max"] = np.max(x, axis=-1).reshape(ch, -1)
    res["min"] = np.min(x, axis=-1).reshape(ch, -1)
    res["median"] = np.median(x, axis=-1).reshape(ch, -1)
    res["std"] = np.sqrt(np.var(x, axis=-1).reshape(ch, -1))
    if use_percentile_feat:
        for p in [15, 30, 45, 60, 75, 90]:
            res[f"p{p}"] = np.percentile(x, [p], axis=-1).reshape(ch, -1)
    return res
</code></pre>
<h2>Model</h2>
<p>The models used were LSTM-based: a simple 4-block model and a model from <a href="https://www.kaggle.com/competitions/ventilator-pressure-prediction/discussion/285330" target="_blank">the 3rd place solution of a similar past competition</a>. For DeFOG, I also used a <a href="https://www.kaggle.com/code/cdeotte/tensorflow-transformer-0-112" target="_blank">Transformer</a>.</p>
<h2>CV Strategy</h2>
<ul>
<li><p>tDCS FOG<br>
In tDCS FOG, I used 4-fold StratifiedGroupKFold with the Event label and Subject as the group.<br>
However, CV was quite high for certain folds.<br>
This is probably due to the large number of examples of difficult classes such as StartHesitation and Walking appearing for a long time in Subject "2d57c2", which reduced the percentage of false positives.<br>
Therefore, I selected the model based on a score that excluded this Subject.<br>
Personally, I think the shake-up also happened because such a Subject existed in the public data, and many teams overfit to it.</p></li>
<li><p>DeFOG<br>
Because DeFOG has little labeled data, I used all labeled data as training data.<br>
Since the loss on the three labels correlated with the loss on the Event label alone, I used the Notype data as validation data.<br>
Although I could not pick it as a final selection, I also trained on all data including tdcsfog, and the model selected based on the Notype data gave my best private score (<a href="https://www.kaggle.com/code/mrt0933/keras-lstm-trained-by-all-labeled-data" target="_blank">this notebook</a>).</p></li>
</ul>
<h2>Final submission</h2>
<p>I selected 4 models for each of tdcsfog and defog and ensembled them.<br>
For tDCS FOG, I chose the model with the highest CV for each fold.<br>
The best public score also had a high enough CV, so I selected it as the final submission (I'm glad I didn't get caught in the shake-up).</p>
<ul>
<li>tDCS FOG. CV:0.28</li>
</ul>
<table>
<thead>
<tr>
<th>Fold</th>
<th>Model</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 (w/o "2d57c2”)</td>
<td>LSTM (multitasking)</td>
<td>0.179</td>
</tr>
<tr>
<td>1</td>
<td>LSTM (1d CNN head)</td>
<td>0.337</td>
</tr>
<tr>
<td>2</td>
<td>LSTM (add percentile features)</td>
<td>0.278</td>
</tr>
<tr>
<td>3</td>
<td>LSTM (1d CNN head)</td>
<td>0.293</td>
</tr>
</tbody>
</table>
<ul>
<li>DeFOG. CV: 0.381(Event)</li>
</ul>
<table>
<thead>
<tr>
<th>No</th>
<th>Model</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>LSTM</td>
<td>0.292</td>
</tr>
<tr>
<td>1</td>
<td>LSTM (add percentile features)</td>
<td>0.318</td>
</tr>
<tr>
<td>2</td>
<td>LSTM (long 4096)</td>
<td>0.286</td>
</tr>
<tr>
<td>3</td>
<td>Transformer</td>
<td>0.286</td>
</tr>
</tbody>
</table>
<p>Public LB: 0.444<br>
Private LB: 0.341</p>
<p>Finally, I am happy to become a kaggle master with this gold medal.<br>
However, although I got a gold medal, I am disappointed that I could not design a model that could capture FOG events well both quantitatively and qualitatively.<br>
I would love to participate in the second and third competitions if they are held. Thank you very much.</p>
|
Parkinson's Freezing of Gait Prediction
|
1st place solution: transformer and acceleration data
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!</p>
<ol>
<li>No, it's not about saving memory. The model works better with a reduced target resolution than with the true resolution. I think deep learning models do badly on targets with complex structure (for example, 128 Hz as in this competition), so the resolution should be reduced.<br>
In addition, it allows you to use raw input data instead of using feature engineering.</li>
</ol>
<p>Thanks!</p>
<p>All train data were loaded into RAM. No problems with memory.<br>
You can run my notebook if you want to make sure:<br>
<a href="https://www.kaggle.com/code/baurzhanurazalinov/parkinson-s-freezing-tdcsfog-training-code" target="_blank">https://www.kaggle.com/code/baurzhanurazalinov/parkinson-s-freezing-tdcsfog-training-code</a></p>
<p>Thanks!</p>
<ol>
<li>Yes. Without LSTM layers, the model does not work well. I think the main contribution of the transformer encoder is to classify events (StartHesitation, Turn or Walking?), and LSTM part provides continuous communication between neighboring tokens.</li>
<li>Yes, it was my idea. I am not sure that adding positional encoding improves validation scores, but I have noticed that positional encoding does not harm the model, so I decided not to remove it. I suppose that requires additional tests.</li>
<li>Yes. I got about 2000 samples for the tdcsfog dataset and about 25000 samples for the defog dataset. Model weights do not exceed 72MB and work quickly. The average number of epochs for a well-trained model is 35.</li>
</ol>
<p>Thanks!</p>
<ol>
<li>The model has two important parameters: patch size and sequence length.<br>
optimal tdcsfog patch size = 18<br>
optimal defog patch size = 14<br>
optimal sequence length = 864<br>
864 * 18 = 15552 and 864 * 14 = 12096<br>
Using other parameter values lowers the validation scores.</li>
</ol>
<p>Thanks.</p>
<ol>
<li>Valid data only (valid data = selected subset; train data = all data - selected subset)<br>
Cross validation is not used because of the low correlation between validation score and LB. I used three rules to select the trained model:<br>
Model was trained for a long time.<br>
Model at the end of long training gives good validation scores<br>
Model gives good public LB score</li>
<li>If you do not use Event flag, it is not possible to include notype data in the training process.<br>
I thought that pseudo-labeling notype data would lower my validation scores, so I didn't use pseudo-labeling.</li>
</ol>
<p>Thanks. Yes, simple window roll ([0:15552], [972:16524], [1944:17496], … ) and average predictions for overlapping tokens from different windows.</p>
<p>Greetings to the Kaggle Community. In this message I want to tell you about my solution.</p>
<p>Thanks to Kaggle for providing free GPU and TPU resources to everyone. On my graphics card (1050 Ti) I would not have achieved those results.<br>
Thanks to Google for the excellent tensorflow library.<br>
All of my work was done in Kaggle Notebooks and relies on TensorFlow capabilities.</p>
<p>The key decisions that, in my opinion, led to a good result:</p>
<ol>
<li>Use a combination of transformer encoder and two BidirectionalLSTM layers.</li>
<li>Use patches like VisualTransformer.</li>
<li>Reduce the resolution of targets.</li>
</ol>
<p><em>How does it work?</em></p>
<p>Suppose we have a tdcsfog sensor data series with AccV, AccML, AccAP columns and len of 5000.</p>
<p>First, apply mean-std normalization to AccV, AccML, AccAP columns.</p>
<pre><code># The function name is assumed; the body follows the stripped snippet.
def mean_std_normalize(sample):
    mean = tf.math.reduce_mean(sample)
    std = tf.math.reduce_std(sample)
    sample = tf.math.divide_no_nan(sample - mean, std)
    return sample.numpy()
</code></pre>
<p>Then the series is zero-padded so that the final length is divisible by block_size = 15552 (or 12096 for defog). Now the series shape is (15552, 3). </p>
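<p>A minimal sketch of that zero-padding step, assuming the series tensor has shape (length, 3):</p>
<pre><code>import tensorflow as tf

def pad_to_block(series, block_size=15552):
    # zero-pad at the end so the length becomes a multiple of block_size
    length = tf.shape(series)[0]
    pad = (block_size - length % block_size) % block_size
    return tf.pad(series, paddings=[[0, pad], [0, 0]])
</code></pre>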
<p>And create patches with the patch_size = 18 (or 14 for defog):</p>
<pre><code># CFG key names are assumed; series has shape (block_size, 3) after padding
series = tf.reshape(series, shape=(CFG['block_size'] // CFG['patch_size'], CFG['patch_size'], 3))
series = tf.reshape(series, shape=(CFG['block_size'] // CFG['patch_size'], CFG['patch_size'] * 3))
</code></pre>
<p>Now the series shape is (864, 54). It's a model input.</p>
<p>What to do with the StartHesitation, Turn, Walking data? Same, but apply tf.reduce_max at the end.</p>
<pre><code># CFG key names are assumed; targets have shape (block_size, 3)
series_targets = tf.reshape(series_targets, shape=(CFG['block_size'] // CFG['patch_size'], CFG['patch_size'], 3))
series_targets = tf.transpose(series_targets, perm=[0, 2, 1])
series_targets = tf.reduce_max(series_targets, axis=-1)
</code></pre>
<p>Now the series shape is (864, 3). It's a model output.</p>
<p>At the end, simply return the true resolution with tf.tile</p>
<pre><code># Axis / index values are assumed; predictions start as (batch, 864, 3)
predictions = model.predict(...)
predictions = tf.expand_dims(predictions, axis=-1)
predictions = tf.transpose(predictions, perm=[0, 1, 3, 2])
predictions = tf.tile(predictions, multiples=[1, 1, CFG['patch_size'], 1])
predictions = tf.reshape(predictions, shape=(predictions.shape[0], predictions.shape[1] * predictions.shape[2], 3))
</code></pre>
<h1>Details</h1>
<p>Daily data, events.csv, subjects.csv, tasks.csv have never been used.</p>
<p>Tdcsfog data is not used to train defog models. </p>
<p>Defog data is not used to train tdcsfog models.</p>
<p><em>Optimizer</em> </p>
<pre><code>tf.keras.optimizers.Adam(learning_rate=Schedule(LEARNING_RATE, WARMUP_STEPS), beta_1=, beta_2=, epsilon=)
</code></pre>
<p><em>Loss function</em></p>
<pre><code># Reconstructed: the function name and the two mask column indices are assumptions;
# real[:, :, :3] holds the targets and the product of the two mask columns zeroes out
# padded / invalid positions.
ce = tf.keras.losses.BinaryCrossentropy(reduction='none')

def loss_function(real, output):
    loss = ce(tf.expand_dims(real[:, :, :3], axis=-1), tf.expand_dims(output, axis=-1))
    mask = tf.math.multiply(real[:, :, 3], real[:, :, 4])
    mask = tf.cast(mask, dtype=loss.dtype)
    mask = tf.expand_dims(mask, axis=-1)
    mask = tf.tile(mask, multiples=[1, 1, 3])
    loss *= mask
    return tf.reduce_sum(loss) / tf.reduce_sum(mask)
</code></pre>
<p><em>Model</em> </p>
<pre><code># Reconstructed skeleton: the original CFG literals were stripped, so the key names and
# the values below are illustrative assumptions (block_size / patch_size follow the text:
# 15552 / 18 for tdcsfog, 12096 / 14 for defog).
import tensorflow as tf

GPU_BATCH_SIZE = 4   # illustrative

CFG = {'block_size': 15552,
       'patch_size': 18,
       'fog_model_dim': 320,
       'fog_model_num_heads': 6,
       'fog_model_num_encoder_layers': 5,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
       }

class EncoderLayer(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        self.mha = tf.keras.layers.MultiHeadAttention(num_heads=CFG['fog_model_num_heads'],
                                                      key_dim=CFG['fog_model_dim'],
                                                      dropout=CFG['fog_model_mha_dropout'])
        self.add = tf.keras.layers.Add()
        self.layernorm = tf.keras.layers.LayerNormalization()
        self.seq = tf.keras.Sequential([tf.keras.layers.Dense(CFG['fog_model_dim'], activation='relu'),
                                        tf.keras.layers.Dropout(CFG['fog_model_encoder_dropout']),
                                        tf.keras.layers.Dense(CFG['fog_model_dim']),
                                        tf.keras.layers.Dropout(CFG['fog_model_encoder_dropout']),
                                        ])

    def call(self, x):
        attn_output = self.mha(query=x, key=x, value=x)
        x = self.add([x, attn_output])
        x = self.layernorm(x)
        x = self.add([x, self.seq(x)])
        x = self.layernorm(x)
        return x

class FOGEncoder(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.first_linear = tf.keras.layers.Dense(CFG['fog_model_dim'])
        self.add = tf.keras.layers.Add()
        self.first_dropout = tf.keras.layers.Dropout(CFG['fog_model_first_dropout'])
        self.enc_layers = [EncoderLayer() for _ in range(CFG['fog_model_num_encoder_layers'])]
        self.lstm_layers = [tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(CFG['fog_model_dim'], return_sequences=True))
                            for _ in range(CFG['fog_model_num_lstm_layers'])]
        self.sequence_len = CFG['block_size'] // CFG['patch_size']
        self.pos_encoding = tf.Variable(initial_value=tf.random.normal(shape=(1, self.sequence_len, CFG['fog_model_dim']), stddev=0.02),
                                        trainable=True)

    def call(self, x, training=None):
        x = x / CFG.get('input_scale', 1.0)   # the original scaling constant was stripped
        x = self.first_linear(x)
        if training:
            # randomly roll the learned positional encoding during training
            random_pos_encoding = tf.roll(tf.tile(self.pos_encoding, multiples=[GPU_BATCH_SIZE, 1, 1]),
                                          shift=tf.random.uniform(shape=(GPU_BATCH_SIZE,), minval=-self.sequence_len, maxval=0, dtype=tf.int32),
                                          axis=GPU_BATCH_SIZE * [1],
                                          )
            x = self.add([x, random_pos_encoding])
        else:
            x = self.add([x, tf.tile(self.pos_encoding, multiples=[GPU_BATCH_SIZE, 1, 1])])
        x = self.first_dropout(x)
        for i in range(CFG['fog_model_num_encoder_layers']): x = self.enc_layers[i](x)
        for i in range(CFG['fog_model_num_lstm_layers']): x = self.lstm_layers[i](x)
        return x

class FOGModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.encoder = FOGEncoder()
        self.last_linear = tf.keras.layers.Dense(3)   # StartHesitation, Turn, Walking

    def call(self, x):
        x = self.encoder(x)
        x = self.last_linear(x)
        x = tf.nn.sigmoid(x)
        return x
</code></pre>
<h1>Submission (Private Score 0.514, Public Score 0.527) consists of 8 models:</h1>
<h3>Model 1 (tdcsfog model)</h3>
<pre><code>CFG = {: ,
: ,
: //,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
}
LEARNING_RATE = /
STEPS_PER_EPOCH =
WARMUP_STEPS =
BATCH_SIZE=
</code></pre>
<p>Validation subjects <br>
['07285e', '220a17', '54ee6e', '312788', '24a59d', '4bb5d0', '48fd62', '79011a', '7688c1']</p>
<p>Train 15 minutes on TPU. Validation scores:<br>
StartHesitation AP - 0.462 Turn AP - 0.896 Walking AP - 0.470 mAP - 0.609</p>
<h3>Model 2 (tdcsfog model)</h3>
<pre><code>CFG = {: ,
: ,
: //,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
}
LEARNING_RATE = /
STEPS_PER_EPOCH =
WARMUP_STEPS =
BATCH_SIZE =
</code></pre>
<p>Validation subjects <br>
['07285e', '220a17', '54ee6e', '312788', '24a59d', '4bb5d0', '48fd62', '79011a', '7688c1']</p>
<p>Train 40 minutes on GPU. Validation scores:<br>
StartHesitation AP - 0.481 Turn AP - 0.886 Walking AP - 0.437 mAP - 0.601</p>
<h3>Model 3 (tdcsfog model)</h3>
<pre><code>CFG = {: ,
: ,
: //,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
}
LEARNING_RATE = /
STEPS_PER_EPOCH =
WARMUP_STEPS =
BATCH_SIZE =
</code></pre>
<p>Validation subjects <br>
['e39bc5', '516a67', 'af82b2', '4dc2f8', '743f4e', 'fa8764', 'a03db7', '51574c', '2d57c2']</p>
<p>Train 11 minutes on TPU. Validation scores:<br>
StartHesitation AP - 0.601 Turn AP - 0.857 Walking AP - 0.289 mAP - 0.582</p>
<h3>Model 4 (tdcsfog model)</h3>
<pre><code>CFG = {: ,
: ,
: //,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
}
LEARNING_RATE = /
STEPS_PER_EPOCH =
WARMUP_STEPS =
BATCH_SIZE =
</code></pre>
<p>Validation subjects <br>
['5c0b8a', 'a03db7', '7fcee9', '2c98f7', '2a39f8', '4f13b4', 'af82b2', 'f686f0', '93f49f', '194d1d', '02bc69', '082f01']</p>
<p>Train 13 minutes on TPU. Validation scores:<br>
StartHesitation AP - 0.367 Turn AP - 0.879 Walking AP - 0.194 mAP - 0.480</p>
<h3>Model 5 (defog model)</h3>
<pre><code>CFG = {: ,
: ,
: //,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
}
LEARNING_RATE = /
STEPS_PER_EPOCH =
WARMUP_STEPS =
BATCH_SIZE =
</code></pre>
<p>Validation subjects <br>
['00f674', '8d43d9', '107712', '7b2e84', '575c60', '7f8949', '2874c5', '72e2c7']</p>
<p>Train data: defog data, notype data<br>
Validation data: defog data, notype data</p>
<p>Train 45 minutes on TPU. Validation scores:<br>
StartHesitation AP - [not used] Turn AP - 0.625 Walking AP - 0.238 mAP - 0.432<br>
Event AP - 0.800</p>
<h3>Model 6 (defog model)</h3>
<pre><code>CFG = {: ,
: ,
: //,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
}
</code></pre>
<p>Train data: defog data (about 85%)<br>
Validation data: defog data (about 15%), notype data (100%)</p>
<h3>Model 7 (defog model)</h3>
<pre><code>CFG = {: ,
: ,
: //,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
}
LEARNING_RATE = /
STEPS_PER_EPOCH =
WARMUP_STEPS =
BATCH_SIZE =
</code></pre>
<p>Train data: defog data (100%)<br>
Validation data: notype data (100%)</p>
<p>Train 18 minutes on TPU. Validation scores:<br>
StartHesitation AP - [not used] Turn AP - [not used] Walking AP - [not used] mAP - [not used]<br>
Event AP - 0.764</p>
<h3>Model 8 (defog model)</h3>
<pre><code>CFG = {: ,
: ,
: //,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
: ,
}
LEARNING_RATE = /
STEPS_PER_EPOCH =
WARMUP_STEPS =
BATCH_SIZE =
</code></pre>
<p>Validation subjects<br>
['12f8d1', '8c1f5e', '387ea0', 'c56629', '7da72f', '413532', 'd89567', 'ab3b2e', 'c83ff6', '056372']</p>
<p>Train data: defog data, notype data<br>
Validation data: defog data, notype data</p>
<p>Train 28 minutes on TPU. Validation scores:<br>
StartHesitation AP - [not used] Turn AP - 0.758 Walking AP - 0.221 mAP - 0.489<br>
Event AP - 0.744</p>
<h1>Final models</h1>
<p>Tdcsfog: 0.25 * Model 1 + 0.25 * Model 2 + 0.25 * Model 3 + 0.25 * Model 4</p>
<p>Defog: 0.25 * Model 5 + 0.25 * Model 6 + 0.25 * Model 7 + 0.25 * Model 8</p>
<p>Thanks. Patch size is the most important parameter of this model. Increasing or decreasing significantly degrades mAP. <br>
It can also be noticed that 18 * (100Hz / 128Hz) = 14.0625, where 18 - optimal tdcsfog patch size, 14 - optimal defog patch size, but the tdcsfog and defog models are independent.</p>
<ol>
<li>I count the number of StartHesitation, Turn, Walking events in each subject with scipy.ndimage.label function. Then randomly select subset of subjects (about 15% subjects). If subset contains 20 - 30% StartHesitation events and 20 - 30% Walking events, I choose it.</li>
<li>Event model is only used for training on defog + notype data (4-target model, in case of notype examples the first three targets are masked). During submission, the fourth target is ignored. Also, event AP is a good metric. </li>
</ol>
<pre><code># Column names assumed: the Event target is the union (max) of the three labels; the
# original snippet applied the same aggregation twice, presumably to two dataframes.
series['Event'] = series[['StartHesitation', 'Turn', 'Walking']].aggregate('max', axis=1)
</code></pre>
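<p>A small sketch of counting event blocks per sequence with scipy.ndimage.label, as mentioned above; the target column names are the competition's.</p>
<pre><code>import numpy as np
from scipy import ndimage

def count_events(df, targets=("StartHesitation", "Turn", "Walking")):
    # number of contiguous blocks of 1s for each target in one sequence
    counts = {}
    for t in targets:
        _, n = ndimage.label(df[t].values > 0)
        counts[t] = n
    return counts
</code></pre>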
|
Parkinson's Freezing of Gait Prediction
|
20th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comment. I understand now thanks to your explanation.<br>
In most of my experiments, I got better results when the units were not aligned. Since the tdcsfog/defog distributions are different, it may be hard for the model to learn when the units are aligned. This may be why many of the top teams used separate models.</p>
<p>Thank you to the organizers for the fun competition and everyone who participated.<br>
I share my solution.</p>
<h1>Summary</h1>
<ul>
<li>dataset: tdcsofg + defog (not use notype/unlabeled)</li>
<li>model: Conv1d NN (common model for tdcsfog and defog) </li>
<li>6 models ensemble</li>
</ul>
<h1>1. baseline model</h1>
<ul>
<li>I use this notebook. <a href="https://www.kaggle.com/code/mayukh18/pytorch-fog-end-to-end-baseline-lb-0-254" target="_blank">PyTorch FOG End-to-End Baseline [LB 0.254] </a></li>
<li>Thank you for <a href="https://www.kaggle.com/mayukh18" target="_blank">@mayukh18</a> </li>
</ul>
<h1>2. Validation Strategy</h1>
<ul>
<li>GroupKFold : 5-fold (groups=Subject)</li>
<li>Since it was imbalanced data, I adjusted the seed so that the number of cases and the ratio would be the same as much as possible</li>
</ul>
<h1>3. Preprocess data</h1>
<ul>
<li>Align the units of acceleration (divided by 9.8066 for tdcsfog). </li>
<li>features:<ul>
<li>raw data: AccV, AccML, AccAP</li>
<li>AccMG: np.sqrt(AccV^2 + AccML^2 + AccAP^2)</li>
<li>Time_freq: df["Time"] / df["Time"].max()</li>
<li>tdcs flag</li>
<li>Rolling Window Features</li></ul></li>
</ul>
<pre><code># The column list and feature names below are reconstructed assumptions; the original
# literals were stripped. Three rolling-window features per column and window size.
for col in ["AccV", "AccML", "AccAP", "AccMG"]:
    for w in [10, 50, 100, 1000]:
        df[f"{col}_mean_{w}"] = df[col].rolling(w, min_periods=1).mean()
        df[f"{col}_diff_mean_{w}"] = df[col] - \
            df[col].rolling(w, min_periods=1).mean()
        df[f"{col}_diff_shift_{w}"] = df[col] - df[col].shift(w)
</code></pre>
<h1>4. Model</h1>
<ul>
<li>I define 2 models.<ul>
<li>model A: <ul>
<li>input's feature: 54 features (all feature)</li>
<li>window_size=32: past=24, future=8, wx=8</li>
<li>Input - (Conv1d:ks=3/5/10 - GAP) x 3 - MLP - Outputs</li></ul></li>
<li>model B:<ul>
<li>input's feature: 6 features (3raw + MG + Time_freq + tdcs)</li>
<li>window_size=256: past=192, future=64, wx=1</li>
<li>Input - Conv1d:ks=90 - GAP - MLP - Outputs</li></ul></li></ul></li>
<li>training parameter:<ul>
<li>loss: BCEWithLogitsLoss / BCE+CELoss(4label)</li>
<li>optimizer: Adam(2e-5)</li>
<li>epoch: 10</li></ul></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F164129%2F1d909bdab5119592b4ed78f061040fb6%2Fmodel.png?generation=1686321028285326&alt=media" alt="model.jpg"></p>
<h1>5. Postprocess</h1>
<ul>
<li>moving average of predicted values: window=500</li>
<li>Because the positive label tends to be continuous in time, this increased the score just a little (+0.004).</li>
</ul>
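<p>A minimal pandas sketch of the moving-average post-processing with window 500 described above; centring the window is an assumption.</p>
<pre><code>import pandas as pd

def smooth_predictions(preds, window=500):
    # preds: DataFrame with columns StartHesitation, Turn, Walking (predicted probabilities)
    return preds.rolling(window, min_periods=1, center=True).mean()
</code></pre>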
<h1>6. Ensemble</h1>
<ul>
<li>I trained 6 models.</li>
<li>The ensemble method is an equally weighted average.</li>
</ul>
<table>
<thead>
<tr>
<th>id</th>
<th>inputs</th>
<th>win_size</th>
<th>Acc-units</th>
<th>model</th>
<th>local-cv</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>v10</td>
<td>54feats</td>
<td>32(wx=8)</td>
<td>None</td>
<td>modelA</td>
<td>0.290</td>
<td>0.392</td>
<td>0.316</td>
</tr>
<tr>
<td>v13</td>
<td>54feats</td>
<td>32(wx=8)</td>
<td>None</td>
<td>modelA (custom)</td>
<td>0.300</td>
<td>0.376</td>
<td>0.305</td>
</tr>
<tr>
<td>v51b</td>
<td>54feats + 2feats</td>
<td>48(wx=8)</td>
<td>div by 9.8</td>
<td>modelA</td>
<td>0.302</td>
<td>0.386</td>
<td>0.321</td>
</tr>
<tr>
<td>v60</td>
<td>54feats</td>
<td>32(wx=8)</td>
<td>div by 9.8</td>
<td>modelA (+CELoss)</td>
<td>0.301</td>
<td>0.388</td>
<td>0.306</td>
</tr>
<tr>
<td>v59</td>
<td>54feats</td>
<td>32(wx=8)</td>
<td>None</td>
<td>modelA (other cv)</td>
<td>0.324</td>
<td>0.390</td>
<td>0.312</td>
</tr>
<tr>
<td>v58</td>
<td>3raw+3feats</td>
<td>256(wx=1)</td>
<td>div by 9.8</td>
<td>modelB</td>
<td>0.306</td>
<td>0.350</td>
<td>0.312</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>ensemble</td>
<td>0.411</td>
<td>0.324</td>
</tr>
</tbody>
</table>
<h1>Didn't work</h1>
<ul>
<li>Transformer / LSTM / 1dcnn+LSTM / wavenet : did not improve score</li>
<li>Data conversion by Fourier transform / MFCC: score got worse</li>
<li>RobustScaler: almost the same</li>
<li>pseudo-labeling for notype and unlabeled-data: score got worse</li>
<li>use Subject data: score got worse</li>
<li>separate model for tdcsfog and defog: score got worse</li>
</ul>
<p>There were a few methods that worked for the top teams. I may have done something wrong.</p>
<h1>My question</h1>
<ul>
<li>Why did I get similar scores with and without matching acceleration units? (Sometimes it is better not to match)</li>
<li>I didn't want to include Time_freq to make it a generic model. How can I improve my score without this?</li>
</ul>
<p>Thank you for reading.</p>
|
Parkinson's Freezing of Gait Prediction
|
21st place solution: Conv1d with denoising
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Alberto, we can use the 'future' points to predict a 'history' one. For example, if we want to predict the time <code>T</code>, we can use the time from <code>T-50</code> to <code>T+50</code> to generate features. It seems to be a leakage, but in this comp it's fine, since we get the whole series of data.</p>
<p>That's right.</p>
<p>Thanks, your notebook is well-structured and amazing.👍</p>
<p>My 21st solution is based on a great public notebook <a href="https://www.kaggle.com/code/coderrkj/parkinson-fog-pred-conv1d-separate-tf-model" target="_blank">here</a>. The chosen one of my submissions is an ensemble of 5 similar models(<strong>public LB: 0.433, private LB: 0.324</strong>). My best private LB is a single model(<strong>public LB: 0.421, private LB: 0.327</strong>), I publish it <a href="https://www.kaggle.com/code/takanashihumbert/gait-single-models-inference/notebook" target="_blank">here</a>.</p>
<h2>model</h2>
<p>The basic structure of my model composes of 3 different <code>Conv1d</code> blocks, then <code>concatenate</code> and <code>flatten</code>, finally with a 4 labels output.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3110858%2F0662912f80e49cb1bb8a7fb91a77364c%2FWX20230609-0907112x.png?generation=1686272859248587&alt=media" alt=""><br>
The features are <strong>Time_frac</strong>, <strong>AccV</strong>, <strong>AccML</strong>, <strong>AccAP</strong>, <strong>V_ML</strong>, <strong>V_AP</strong>, <strong>ML_AP</strong>(the last 3 means the difference value between two of AccV, AccML, and AccAP). And I know <strong>Time_frac</strong> is a powerful(improve nearly 0.1) but controversial one. I use multi-class, so the labels are <strong>StartHesitation</strong>, <strong>Turn</strong>, <strong>Walking</strong>, and <strong>Normal</strong>.</p>
<h2>denoising</h2>
<p>My denoising code is very simple:</p>
<pre><code># Reconstructed: the wavelet name, mode and damping factors were stripped, so the values
# below are assumptions that keep the original structure.
import pywt

def wavelet_denoising_2(x, wavelet="db1"):
    coeffs = pywt.wavedec(x, wavelet, mode="per")
    # damp the two finest detail-coefficient levels
    coeffs[len(coeffs) - 1] *= 0.0
    coeffs[len(coeffs) - 2] *= 0.0
    x_denoised = pywt.waverec(coeffs, wavelet, mode="per")
    if len(x) % 2 == 1:
        x_denoised = x_denoised[:-1]   # waverec returns one extra sample for odd-length inputs
    return x_denoised
</code></pre>
<h2>window size</h2>
<pre><code> =
=
=
=
</code></pre>
<p>I think that's all, nothing special. Thanks. :)<br>
Looking forward to top and interesting solutions!</p>
<p>Yes, my mistake.😮</p>
<p>I use <code>StratifiedGroupKFold</code>(y=labels, group=Subject). CV is around 0.32-0.34.</p>
|
Parkinson's Freezing of Gait Prediction
|
2nd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the comment.<br>
You are correct about fe022_num_array.npy and fe022_target_array.npy.</p>
<p>As for fe022_mask_array.npy, it's created to represent the missing parts when generating sequences for each Id. Sometimes, the last sequence might not be long enough to meet the specified length, and that's where this array comes in. In terms of its usage, it's utilized during the computation of the loss in training. For instance, I don't include parts where mask=0 as demonstrated in the following line:</p>
<pre><code>loss = criterion(output[mask == 1], y[mask == 1])   # criterion / mask names assumed; positions with mask == 0 are excluded
</code></pre>
<p>fe022_pred_use_array.npy, on the other hand, is used to ensure that I don't utilize the sequence's edge portions during prediction, as this could lead to decreased accuracy. The way I use it is as follows: I specify only the parts I want to use for inference, then measure the CV or use it for the final submission. Here's an example:</p>
<pre><code># average_precision_score and the column layout are assumptions; only positions flagged
# for prediction use (pred_valid_index) are scored
pred_valid_index = val_pred_use_array == 1
StartHesitation = average_precision_score(val_target_array[pred_valid_index, 0],
                                           val_preds[pred_valid_index, 0])
Turn = average_precision_score(val_target_array[pred_valid_index, 1],
                               val_preds[pred_valid_index, 1])
Walking = average_precision_score(val_target_array[pred_valid_index, 2],
                                  val_preds[pred_valid_index, 2])
</code></pre>
<p>Thank you for your comment. No, I used one model to create a pseudo-label. As for how to create pseudo-labels, it is the same as in the following discussion. <br><a href="url" target="_blank"> https://www.kaggle.com/competitions/google-quest-challenge/discussion/129840</a></p>
<p>Thanks!</p>
<ol>
<li><p>I chose to use GRU because it yielded the highest CV and public scores among the models I experimented with. I did try LSTM and Transformer, but they didn't perform as well as GRU in terms of scoring. While it's true that GRU is not as widely used in Kaggle competitions compared to LSTM or Transformer, it doesn't mean it's not used at all. If you search through past solutions, you can find approaches that utilize GRU.</p></li>
<li><p>I didn't make any specific modifications for cross-validation. However, even without elaborate techniques, I observed a reasonable correlation between CV scores and both public and private scores.</p></li>
<li><p>Yes, GRU can handle input sequences of different lengths.</p></li>
</ol>
<p>Thanks!</p>
<ol>
<li><p>No, I didn't apply the weights randomly. For example, I first set the weight for the loss of the "Turn" target to 0.6 and trained the model using a 5-fold cross-validation. We followed the same approach for the other two targets, "StartHesitation" and "Walking".</p></li>
<li><p>I didn't use any specific regularization techniques.</p></li>
</ol>
<p>Thanks for the comment.</p>
<p>Firstly, in this competition, I believed it was important to extend the length of the sequence. This is because it enables a larger amount of information to be reflected in the predictions within the same ID. However, if the length of the sequence is increased in the training data, the score in the evaluation data does not seem to improve significantly, possibly due to overfitting to the training data. Therefore, as an experiment, I tried shortening the sequence length in the training data and lengthening it in the evaluation data, which greatly improved the score. So, I adopted this approach.</p>
<p>Changing the sequence length during training and inference was something I tried for the first time, but the decision was based on my past experiences. For instance, in image competitions, it can be beneficial to increase the image size during inference compared to during training. Similarly, in matching competitions, it could be helpful to increase the number of candidates during inference.</p>
<h2>2nd place solution</h2>
<p>First of all, thanks to the host for an interesting competition and congratulations to all the winners!</p>
<h2>Summary</h2>
<ul>
<li>We designed separate models for tDCS FOG and DeFog.</li>
<li>Both tDCS FOG and DeFog were trained using GRUs.</li>
<li>Each Id was split into sequences of a specified length. During training, we used a shorter length (e.g., 1000 for tdcsfog, 5000 for defog), but for inference, a longer length (e.g., 5000 for tdcsfog, 30000 for defog) was applied. This emerged as the most crucial factor in this competition.</li>
<li>For the DeFog model, we utilized the ‘notype’ data for training with pseudo-labels, significantly improving the scores; by leveraging the Event column, robust pseudo-labels could be created.</li>
<li>Although CV and the public score were generally correlated, they became uncorrelated as the public score increased. Additionally, the CV was quite variable and occasionally surprisingly high. Therefore, we employed the public score for model selection, while CV guided the sequence length.</li>
</ul>
<h2>tDCS FOG</h2>
<ul>
<li><p>Features</p>
<ul>
<li>Three provided accelerations</li>
<li>Feature engineering performed on each acceleration involved:<ul>
<li>Difference between prior and subsequent accelerations</li>
<li>Cumulative sum of acceleration</li></ul></li></ul></li>
<li><p>Standardization</p>
<ul>
<li>Used RobustScaler</li>
<li>Each Id was standardized individually</li></ul></li>
<li><p>Sequence Creation Method:</p>
<ul>
<li><p>Training</p>
<ul>
<li>Sequence length: 1000<ul>
<li>Sequences were created by shifting 500 steps from the starting position.</li></ul></li></ul></li>
<li><p>Inference</p>
<ul>
<li>Sequence length: 3000 or 5000 (3000 if data size was less than 5000, 5000 otherwise)</li>
<li>Sequences were created by shifting 1500 or 2500 steps from the starting position.<ul>
<li>During prediction, the sequence section from 750/1250 to 2250/3750 was utilized.</li>
<li>The initial segment spanned from 0 to 2250/3750, while the final segment used from 750/1250 to the end of the sequence. </li></ul></li></ul></li></ul></li>
<li><p>Models</p>
<ul>
<li><p>For each target, we ensembled four models.</p></li>
<li><p>The following settings were common to each model</p>
<ul>
<li>Model : GRU</li>
<li>Cross validation method : StratifiedGroupKFold<ul>
<li>group : Subject</li></ul></li>
<li>Loss : BCEWithLogitsLoss</li>
<li>Optimizer : AdamW</li>
<li>Scheduler : get_linear_schedule_with_warmup<ul>
<li>(Although not verified in detail) get_linear_schedule_with_warmup seemd to work better in CV than get_cosine_schedule_with_warmup.</li></ul></li>
<li>Sequence length<ul>
<li>Train : 1000</li>
<li>Inference : 3000 / 5000</li>
<li>Training the model with a longer sequence did not improve CV or public score. However, training with a short sequence and performing inference with a long sequence significantly improved both CV and public score. </li></ul></li></ul></li>
<li><p>Model1</p>
<ul>
<li>This model trained with equal loss for each target.</li>
<li>CV<ul>
<li>Sequence length 3000 / 5000 : 0.493 <br><br>
(Sequence length 1000 : 0.438)</li></ul></li>
<li>Ensemble weight : 0.2</li></ul></li>
<li><p>Model2</p>
<ul>
<li>The loss weight of one target was set to 0.6, and the remaining targets were set at 0.4.<ul>
<li>The following three patterns<ul>
<li>StartHesitation : 0.6 , Turn & Walking : 0.4</li>
<li>Turn : 0.6 , StartHesitation & Walking : 0.4</li>
<li>Walking : 0.6 , StartHesitation & Turn : 0.4</li></ul></li></ul></li>
<li>The model was saved at the epoch where the target with the weight set to 0.6 had the best score.</li>
<li>Only the predictions with the loss weight set to 0.6 were used in the test predictions.</li>
<li>CV<ul>
<li>Sequence length 3000 / 5000 : 0.520</li></ul></li>
<li>Ensemble weight : 0.4</li></ul></li>
<li><p>Model3 & 4</p>
<ul>
<li>The loss weight for two targets was set to 0.8, and the remaining target was set at 0.2.<ul>
<li>The following three patterns<ul>
<li>StartHesitation & Turn : 0.8 , Walking : 0.2</li>
<li>StartHesitation & Walking : 0.8 , Turn : 0.2</li>
<li>Turn & Walking : 0.8 , StartHesitation : 0.2</li></ul></li></ul></li>
<li>The model was saved at the epoch where the two targets with the weight set to 0.8 had the best score.</li>
<li>Only the predictions with a loss weight set to 0.8 were used in the test predictions.</li>
<li>CV <ul>
<li>Sequence length 3000 / 5000 : 0.536</li></ul></li>
<li>Ensemble weight : 0.4</li></ul></li>
<li><p>ensemble</p>
<ul>
<li>CV<ul>
<li>Sequence length 3000 / 5000 : 0.537</li></ul></li></ul></li></ul></li>
</ul>
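<p>A hedged sketch of the train-short / infer-long windowing described above: at inference, windows are created with a 50% shift and only the central part of each window's prediction is kept (the model call and edge handling are illustrative).</p>
<pre><code>import numpy as np

def predict_long_windows(model, x, seq_len=5000, shift=2500):
    # x: (time, features); keep only the central half of each window's prediction
    preds = np.zeros((len(x), 3))
    for start in range(0, len(x), shift):
        seg = x[start:start + seq_len]
        p = model.predict(seg[None])[0]          # hypothetical call returning (len(seg), 3)
        lo = 0 if start == 0 else shift // 2
        hi = len(seg) if start + seq_len >= len(x) else seq_len - shift // 2
        preds[start + lo:start + hi] = p[lo:hi]
    return preds
</code></pre>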
<h2>DeFog</h2>
<ul>
<li><p>Features</p>
<ul>
<li>Three provided accelerations</li>
<li>Feature engineering performed for each acceleration included<ul>
<li>Difference between prior and subsequent accelerations</li></ul></li></ul></li>
<li><p>Standardization</p>
<ul>
<li>Used StandardScaler</li>
<li>Each Id was standardized individually</li></ul></li>
<li><p>Sequence Creation Method:</p>
<ul>
<li><p>Training</p>
<ul>
<li>Sequence length: 5000<ul>
<li>Sequences were created by shifting 2500 steps from the starting position.</li></ul></li></ul></li>
<li><p>Inference</p>
<ul>
<li>Sequence length: 15000 or 30000 (15000 if data size was less than 200000, 30000 otherwise)</li>
<li>Sequences were created by shifting 7500 or 15000 steps from the starting position.<ul>
<li>During prediction, the sequence section from 3750/7500 to 11250/22500 was utilized.</li>
<li>The initial segment spanned from 0 to 11250/22500, while the final segment used from 3750/7500 to the end of the sequence.</li></ul></li></ul></li></ul></li>
<li><p>Models</p>
<ul>
<li><p>We ensembled five models </p></li>
<li><p>The following settings are common to each model</p>
<ul>
<li>Model : GRU</li>
<li>Cross validation method : StratifiedGroupKFold<ul>
<li>group : Subject</li></ul></li>
<li>Optimizer : AdamW</li>
<li>Loss : BCEWithLogitsLoss</li>
<li>Scheduler : get_linear_schedule_with_warmup</li>
<li>Sequence length<ul>
<li>Train : 5000</li>
<li>Inference : 15000 / 30000</li></ul></li>
<li>The loss weights for each target were uniform</li>
<li>Only instances where both 'Valid' and 'Task' were true were considered for loss calculation.</li></ul></li>
<li><p>model1</p>
<ul>
<li>CV<ul>
<li>Sequence length 15000 / 30000 : 0.279</li></ul></li>
<li>Ensemble weight : 0.35</li></ul></li>
<li><p>model2</p>
<ul>
<li>Utilized the first round of pseudo-labeling<ul>
<li>Applied hard labels, with the label set to 1 only if the data value of the 'Event' was 1, otherwise it was set to 0</li>
<li>The label was determined based on the highest predictive value among the three target predictions</li>
<li>Inference results from sequences of length 15000 from model1 were used</li></ul></li>
<li>The application of pseudo-labeling significantly improved both public and private scores</li>
<li>CV<ul>
<li>Sequence length 15000 / 30000 : 0.306</li></ul></li>
<li>Ensemble weight : 0.25</li></ul></li>
<li><p>model3</p>
<ul>
<li>Utilized the second round of pseudo-labeling</li>
<li>CV<ul>
<li>Sequence length 15000 & 30000 : 0.313</li></ul></li>
<li>Ensemble weight : 0.25</li></ul></li>
<li><p>model4</p>
<ul>
<li>Increased the hidden size of the GRU</li>
<li>Utilized the first round of pseudo-labeling</li>
<li>CV<ul>
<li>Sequence length 15000 & 30000 : 0.3393</li></ul></li>
<li>Ensemble weight : 0.10</li></ul></li>
<li><p>model5</p>
<ul>
<li>Trained with all data</li>
<li>Utilized pseudo-labeling</li>
<li>Ensemble weight : 0.05</li></ul></li>
<li><p>ensemble(excluding model5)</p>
<ul>
<li>Sequence length 15000 & 30000 : 0.33706</li></ul></li></ul></li>
</ul>
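<p>A minimal sketch of the hard pseudo-labelling rule described for model2: the most confident of the three targets gets label 1, but only where the provided Event column is 1 (array layout assumed).</p>
<pre><code>import numpy as np

def make_hard_pseudo_labels(preds, event):
    # preds: (n, 3) predictions for StartHesitation, Turn, Walking on notype data
    # event: (n,) Event column as a NumPy array, 1 while some FOG event is happening
    labels = np.zeros_like(preds)
    labels[np.arange(len(preds)), preds.argmax(axis=1)] = 1.0
    labels *= event.reshape(-1, 1)
    return labels
</code></pre>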
<h2>tDCS FOG & DeFog</h2>
<ul>
<li>CV : 0.548</li>
<li>Public Score : 0.530</li>
<li>Private Score : 0.450</li>
</ul>
<h2>Inference notebook</h2>
<p><a href="https://www.kaggle.com/code/takoihiraokazu/cv-ensemble-sub-0607-1" target="_blank">https://www.kaggle.com/code/takoihiraokazu/cv-ensemble-sub-0607-1</a></p>
<h2>Code</h2>
<p><a href="https://github.com/TakoiHirokazu/Kaggle-Parkinsons-Freezing-of-Gait-Prediction" target="_blank">https://github.com/TakoiHirokazu/Kaggle-Parkinsons-Freezing-of-Gait-Prediction</a></p>
|
Parkinson's Freezing of Gait Prediction
|
3rd Place Solution: Transformer+GRU
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The first and second place solutions have much better fits to the individual defog/tdcs datasets; this solution fits one model to both.</p>
<p>Architectures: Deberta/VisionTransformer/VisionTransformerRelPos -> LSTM/GRU<br>
(typically 2-4 layers, with single-layer RNN)</p>
<p>Patch sizes are 7-13, with sequences of 192-384 patches.</p>
<p>All sequences have heavy augmentation--stretching, cropping, ablation, accumulated Gaussian noise, etc.</p>
<p>--</p>
<p>See data.py and model.py for details:</p>
<p>Inference Code: <a href="https://www.kaggle.com/code/stochoshi/fork-of-walk3" target="_blank">https://www.kaggle.com/code/stochoshi/fork-of-walk3</a><br>
Additional Code: <a href="https://www.kaggle.com/datasets/stochoshi/walkdata4" target="_blank">https://www.kaggle.com/datasets/stochoshi/walkdata4</a></p>
|
Parkinson's Freezing of Gait Prediction
|
4th Place Solution: a MultiLayer Bidirectional GRU with Residual Connections
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, thank you :) I do not have a reference for this architecture in particular, but I inspired myself from a part of <a href="https://www.kaggle.com/competitions/ventilator-pressure-prediction/discussion/285965" target="_blank">the #1 solution from the "Google Brain - Ventilator Pressure Prediction" competition</a>. In <a href="https://www.kaggle.com/code/shujun717/1-solution-lstm-cnn-transformer-1-fold/notebook" target="_blank">their code</a>, they are using a ResidualLSTM module where they also use fully connected layers between the LSTMs. Their architecture is complex, and the ResidualLSTM only represents a small part of it, but that was one of my main starting points in my brainstorming process to build my model. <br>
Also note that I did not have the time to explore all the possible variations one can think about when looking at my architecture: some design choices I made can be seen as arbitrary, and the architecture could probably be refined by a large margin</p>
<p>Thanks to Kaggle and the competition hosts for this competition, and congratulations to the other teams! This was for me the first Kaggle competition in which I invested myself, and it was an awesome experience, I truly learnt a lot. </p>
<p>The model that performed the best for me is a variant of a multi-layer GRU model in which some residual connections and fully connected layers have been added between the GRU layers:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1969534%2F18364882e67c41284297f4eaed4ddb47%2Fmodel.svg?generation=1686319620125192&alt=media" alt=""></p>
<p>Here are the PyTorch classes corresponding to this model: </p>
<pre><code># Reconstructed from the stripped snippet: the import, the fc1 expansion factor (2) and
# the default arguments are assumptions; the layer structure follows the diagram above.
import torch.nn as nn


class ResidualBiGRU(nn.Module):
    def __init__(self, hidden_size, n_layers=1, bidir=True):
        super(ResidualBiGRU, self).__init__()
        self.hidden_size = hidden_size
        self.n_layers = n_layers
        self.gru = nn.GRU(
            hidden_size,
            hidden_size,
            n_layers,
            batch_first=True,
            bidirectional=bidir,
        )
        dir_factor = 2 if bidir else 1
        self.fc1 = nn.Linear(
            hidden_size * dir_factor, hidden_size * dir_factor * 2
        )
        self.ln1 = nn.LayerNorm(hidden_size * dir_factor * 2)
        self.fc2 = nn.Linear(hidden_size * dir_factor * 2, hidden_size)
        self.ln2 = nn.LayerNorm(hidden_size)

    def forward(self, x, h=None):
        res, new_h = self.gru(x, h)
        # res: (batch, time, hidden_size * dir_factor)
        res = self.fc1(res)
        res = self.ln1(res)
        res = nn.functional.relu(res)
        res = self.fc2(res)
        res = self.ln2(res)
        res = nn.functional.relu(res)
        # residual connection back to the GRU input
        res = res + x
        return res, new_h


class MultiResidualBiGRU(nn.Module):
    def __init__(self, input_size, hidden_size, out_size, n_layers, bidir=True):
        super(MultiResidualBiGRU, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.out_size = out_size
        self.n_layers = n_layers
        self.fc_in = nn.Linear(input_size, hidden_size)
        self.ln = nn.LayerNorm(hidden_size)
        self.res_bigrus = nn.ModuleList(
            [
                ResidualBiGRU(hidden_size, n_layers=1, bidir=bidir)
                for _ in range(n_layers)
            ]
        )
        self.fc_out = nn.Linear(hidden_size, out_size)

    def forward(self, x, h=None):
        if h is None:
            h = [None for _ in range(self.n_layers)]
        x = self.fc_in(x)
        x = self.ln(x)
        x = nn.functional.relu(x)
        new_h = []
        for i, res_bigru in enumerate(self.res_bigrus):
            x, new_hi = res_bigru(x, h[i])
            new_h.append(new_hi)
        x = self.fc_out(x)
        return x, new_h
</code></pre>
<p>Note that the code can be simplified: for my best model which performed a private lb score of 0.417, the "h" was actually always initialized with None. </p>
<h2>Preprocessing</h2>
<p>In terms of data, the choice I made for my model is very simplistic: consider only the accelerometer data (AccV, AccML, AccAP), merge the data from tdcsfog and defog together and train a single model on it. The main steps of my preprocessing pipeline are the following ones: </p>
<ul>
<li>downsample each sequence from their initial frequency (resp. 128 and 100 Hz) to 50Hz;</li>
<li>for defog: <ul>
<li>convert from g units to m/s^2 units;</li>
<li>build a mask using "Valid" and "Task" to know which time steps are labeled during the training. The unlabeled time steps are fed to the model to get the full sequence context, but they are masked during the loss computation.</li></ul></li>
<li>add a 4th "no-activity" class: the model is trained to recognize this class in the same way as the other classes. Outside of the loss, during the validation, i only use the 3 other classes to compute my metrics;</li>
<li>per-sequence standard normalization (StandardScaler). </li>
</ul>
<p>Outside of the unlabeled time steps coming from defog, I did not use any other unlabeled data. </p>
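<p>A hedged sketch of the resampling and per-sequence normalization steps listed above; the resampling routine and the g-to-m/s² constant follow the description, other details are assumptions.</p>
<pre><code>import numpy as np
from scipy import signal
from sklearn.preprocessing import StandardScaler

def preprocess_sequence(acc, orig_hz, is_defog):
    # acc: (time, 3) accelerometer data (AccV, AccML, AccAP)
    if is_defog:
        acc = acc * 9.80665                     # g -> m/s^2
    new_len = int(round(len(acc) * 50.0 / orig_hz))
    acc = signal.resample(acc, new_len)         # downsample to 50 Hz
    return StandardScaler().fit_transform(acc)  # per-sequence standard normalization
</code></pre>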
<p>For the prototype of another model i did not have the time to finish, I also began to consider some of the characteristics associated to the person who was producing the sequence, in particular I used "Visit", "Age", "Sex", "YearsSinceDx", "UPDRSIII_On" and "NFOGQ". This prototype was roughly following the same architecture as my best model ; the main idea was to initialize the GRU's hidden states with these characteristics, after using some fully connected layers to project them in the dimension of the hidden states. This prototype was also using 1D convolutions to extract features from the accelerometer data before passing them to the GRU layers, and I also considered adding dropout. I think that with more time for me to tune it, it would have beaten my current best model. The first version achieved a private lb score of 0.398. </p>
<h2>Training details</h2>
<p>My best model - the one which performed 0.417 on the private leaderboard - has been trained without any form of cross-validation, only with a train/validation split of 80% / 20%. To be honest, this model appeared as a prototype in my early experimentation process, and I considered stratified cross-validation only after. </p>
<p>In this solution, I fed <strong>each whole downsampled (50Hz)</strong> sequence to my model, one after the other ie with <strong>a batch size of 1</strong>. Note that I would have been unable to feed some of the sequences to my model without downsampling them. I tried multiple different approaches with this architecture, but was unable to produce a better score when increasing the batch size. I tried multiple window sizes for my sequences ; however as I am pretty new in time series and as I also arrived pretty late in the competition, I did not implement any form of overlap and only thought about it too late. This could have probably been a key. Also when increasing the batch size, it seemed apparent that batch normalization was better than layer normalization. </p>
<p>For the loss, I used a simple cross entropy. As the classes are pretty imbalanced (in particular with my 4th artificial one), I also considered using a weighted cross-entropy, using the inverse frequency of each class as a weight. I also considered trying a focal loss ; but these initial tests seemed unable to perform better than the cross entropy in my case. Despite these negative experiments, I still think that dealing with the imbalance nature of the problem in a better way than I did is important. </p>
<p>In terms of optimizer, I used Ranger. I also tried Adam and AdamW and honestly i don't think this choice mattered too much. With Ranger I used a learning rate of 1e-3 with 20 epochs, with a cosine annealing schedule starting at 15. </p>
<p>Note that I also used mixed precision training and gradient clipping. </p>
<p>The best parameters I found for the architecture of my model are:</p>
<ul>
<li>hidden_size: 128;</li>
<li>n_layers: 3;</li>
<li>bidir: True.</li>
</ul>
<p>Later on, I also tried a stratified k-fold cross-validation in which I stacked the predictions of my k models via a simple average. The architecture and the training details for each fold were the same as for my 0.417 lb model, and this stacking process led to my 2nd best model, performing a score of 0.415 on the private leaderboard (with k=5). I also tried retraining my model on the whole dataset, but this approach did not improve my lb score. </p>
<p>In no particular order, here are a few other architectures that I also tried but did not improve my score: </p>
<ul>
<li>replacing GRU by LSTM in my model: with my architecture, GRUs outperformed LSTMs in all the tests I've realized;</li>
<li>multiple variants of <a href="https://www.researchgate.net/publication/349964066_Multi-input_CNN-GRU_based_human_activity_recognition_using_wearable_sensors" target="_blank">https://www.researchgate.net/publication/349964066_Multi-input_CNN-GRU_based_human_activity_recognition_using_wearable_sensors</a> ;</li>
<li>a classic multi-layers bidirectional GRU followed by one or more fully connected layers, also with layer normalization and ReLUs. </li>
</ul>
<p>Edit: <br>
Submission Notebook: <br>
<a href="https://www.kaggle.com/zinxira/parkinson-fog-pred-4th-place-submission-notebook" target="_blank">https://www.kaggle.com/zinxira/parkinson-fog-pred-4th-place-submission-notebook</a></p>
<p>Pretrained models "dataset": <br>
<a href="https://www.kaggle.com/datasets/zinxira/models" target="_blank">https://www.kaggle.com/datasets/zinxira/models</a></p>
<p>Full open-source code: <br>
<a href="https://github.com/Zinxira/tlvmc-parkinsons-fog-prediction-4th-place-solution" target="_blank">https://github.com/Zinxira/tlvmc-parkinsons-fog-prediction-4th-place-solution</a></p>
|
Parkinson's Freezing of Gait Prediction
|
52nd Place Solution + Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! I think you're right that GroupKFold would be similar to the way I manually split up the folds--I did that so I'd have a little more control over the process. My CV tended to be lower than LB, but they were well correlated so still useful for selecting models. I think I saw discussions of other folks seeing a similar trend?</p>
<p>Thanks to the hosts for providing such an interesting/worthwhile problem to work on, and to everyone who shared their code and insights throughout the competition! This is my first completed Kaggle competition; I feel like I spent as much time learning about the Kaggle ecosystem as I did about the FOG data 😂 but overall it was a fun experience.</p>
<p>My solution was a fairly straightforward 1D CNN ensemble, but I still wanted to share it!</p>
<p>My code: <a href="https://www.kaggle.com/code/abandura/fog-1d-cnn" target="_blank">https://www.kaggle.com/code/abandura/fog-1d-cnn</a></p>
<p>I referenced <a href="https://www.kaggle.com/code/coderrkj/parkinson-fog-pred-conv1d-separate-tf-model" target="_blank">this</a> public notebook when I began the comp, and wanted to thank the author for sharing their work.</p>
<h2>Solution Overview</h2>
<h3>Preprocessing</h3>
<p>TS window dimensions:</p>
<pre><code>window_size =
window_future =
window_past = window_size - window_future
</code></pre>
<p>TS Features (standardized for each file):</p>
<pre><code>Raw 3D accelerometer data
Separate 3D acc data into high-frequency and low-frequency components (moving average)
Spectrogram of 3D accelerometer data (NFFT = )
Temporal location of sequence**: time_index/total_time
</code></pre>
<p>**Some people feel that using temporal features reduces model usability, but I disagree. These models could be useful for auto-labeling more data from the same FOG-inducing protocols, in which case temporal features would be consistent to the data we used in this competition. On the flip side, I doubt these models would generalize well to something like the daily living dataset regardless of whether temporal features were leveraged. Just my two cents.</p>
<p>CV strategy: 6 fold on patients, separating patients so each fold has roughly the same amount of data</p>
<h3>Model Architecture</h3>
<ul>
<li>The best models based on CV for each fold were ensembled using a weighted average.</li>
<li>There are batch normalization layers between each convolutional layer even though they aren't included in the diagram.</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13474157%2F6746c1774c63e26d52986939d718f401%2FFOG%20CNN%20Architecture.jpeg?generation=1686624959127555&alt=media"></p>
<h3>Training Parameters</h3>
<ul>
<li>batch_size = 1024</li>
<li>learning_rate = 1e-4</li>
<li>num_epochs = 1</li>
<li>max_batches_per_epoch = 12000</li>
<li>max_val_batches = 500</li>
<li>val_freq = 500</li>
<li>loss function (see the sketch after this list):<br>
For tdcsfog I used cross-entropy loss with inverse class frequency weights<br>
For defog I used focal loss with gamma = 5</li>
</ul>
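<p>A minimal sketch of these two loss setups in PyTorch; the class counts and the focal-loss helper are illustrative assumptions, not the original code:</p>
<pre><code>import torch
import torch.nn.functional as F

# tdcsfog: cross-entropy with inverse class frequency weights (counts here are hypothetical).
class_counts = torch.tensor([1200., 54000., 9000.])
ce_weights = class_counts.sum() / class_counts        # rarer classes get larger weights
ce_loss = torch.nn.CrossEntropyLoss(weight=ce_weights)

# defog: binary focal loss with gamma = 5.
def focal_loss(logits, targets, gamma=5.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                              # probability assigned to the true class
    return ((1 - p_t) ** gamma * bce).mean()
</code></pre>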
<p><strong>Public LB</strong>: 0.362<br>
<strong>Private LB</strong>: 0.305</p>
<h4>Other Ideas I tried that didn't improve accuracy:</h4>
<ul>
<li>Combined models: single CNN model for both tdcsfog and defog after unifying sampling rates and units. </li>
<li>Multi-head CNN model: Add a second loss function for presence/absence of FOG event so that the notype data could be leveraged.</li>
<li>CNN features as inputs for a Light GBM model: Use the output of the convolutional blocks as input to a GBM model, along with other temporal and patient features.</li>
</ul>
|
Parkinson's Freezing of Gait Prediction
|
57th place solution: Conv1d model with aux head
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi!<br>
I wrote a function that splits input sequence into windows. Then I created train set where window size is 256 and trained model. For inference, I took this trained model and created inference dataset with window size of 512 (and more). This is possible, because my model can process input with variable length.<br>
In pseudo code, traning and inference look something like this:</p>
<pre><code>model = create_model()
train_dataset = create_dataset(input_file=train_file, window_size=256)   # train on short windows
fit(model, train_dataset)
...
test_dataset = create_dataset(input_file=test_file, window_size=512)     # infer on longer windows
y_preds = predict(model, test_dataset)
</code></pre>
<p>This is my model in PyTorch:</p>
<pre><code>import torch.nn as nn

class BiHeadConvFOGModel(nn.Module):
    def __init__(self, config, dim, nblocks, p):   # constructor arguments inferred from usage
        super(BiHeadConvFOGModel, self).__init__()
        self.in_layer = nn.Linear(config.in_features, dim)
        # kernel_size=8, stride=8 shrinks the sequence length while keeping important information
        self.reduction_layer = nn.Conv1d(dim, dim, kernel_size=8, stride=8)
        self.blocks = nn.Sequential(*[_conv_block(dim, dim, p) for _ in range(nblocks)])
        self.avg_pool = nn.AdaptiveAvgPool1d(1)
        self.multilabel_head = nn.Linear(dim, config.out_classes)  # StartHesitation, Turn, Walking
        self.binary_head = nn.Linear(dim, 1)                       # event / no event

    def forward(self, x):
        features = self.in_layer(x)
        features = features.permute(0, 2, 1)
        features = self.reduction_layer(features)
        for block in self.blocks:
            features = block(features)
        features = self.avg_pool(features)
        features = features.squeeze()
        multilabel_output = self.multilabel_head(features)
        binary_output = self.binary_head(features)
        return multilabel_output, binary_output
</code></pre>
<p>The model receives input of shape (B, S, 4), where B - batch size, S - sequence length. Below I am showing output shape of each layer.</p>
<pre><code>def forward(self, x):                                    # x: (B, S, 4)
    features = self.in_layer(x)                          # (B, S, dim)
    features = features.permute(0, 2, 1)                 # (B, dim, S)
    features = self.reduction_layer(features)            # (B, dim, S // 8)
    for block in self.blocks:
        features = block(features)                       # (B, dim, S // 8)
    features = self.avg_pool(features)                   # (B, dim, 1)
    features = features.squeeze()                        # (B, dim)
    multilabel_output = self.multilabel_head(features)   # (B, 3)
    binary_output = self.binary_head(features)           # (B, 1)
    return multilabel_output, binary_output
</code></pre>
<p>See, after swapping two last axis, I can perform 1D convolution over sequence length. This is because Conv1D in PyTorch performs convolution over last axis. Almost at the end I am using average pooling. And that's why I don't care about sequence length.<br>
Hope this helps. Ask if you have any questions 🙃</p>
<p>Thank you!<br>
CV scores for each model (5 folds): 0.16, 0.175, 0.198, 0.254, 0.317. I will edit my post and add them to the summary.<br>
Then I do experiments with these models. So experiments with window size during inference is actually a postprocessing.<br>
I should have measured CV scores for each of these experiments, but I didn't, because it was almost the end of competition.</p>
<p>Hi! 👋 I want to share my solution with you.<br>
First of all, I want to thank the organizers of this competition!</p>
<p>Then I want to thank the participants who made useful EDA notebooks and notebooks with baselines.<br>
<em>This solution was inspired by these two notebooks: <a href="https://www.kaggle.com/code/coderrkj/parkinson-fog-pred-conv1d-separate-tf-model" target="_blank">this one</a> and <a href="https://www.kaggle.com/code/mayukh18/pytorch-fog-end-to-end-baseline-lb-0-254" target="_blank">this one</a>.</em></p>
<h1>Data preprocessing</h1>
<p>For train and inference, I used the rolling window technique. The window was made so the target timestep is in the middle of this window.<br>
Size of the window for training: 256<br>
Size of the window during inference: 1224</p>
<p><strong>AccV, AccML, AccAP</strong> features were preprocessed with savgol filter (kernel=21, n=3).<br>
For the <strong>Time</strong> feature I used <code>pandas.qcut</code> to discretize it with 10 bins.</p>
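<p>A minimal sketch of this preprocessing, assuming a pandas DataFrame with the raw columns (variable names here are illustrative):</p>
<pre><code>import pandas as pd
from scipy.signal import savgol_filter

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # Smooth the three accelerometer channels with a Savitzky-Golay filter (window 21, order 3).
    for col in ["AccV", "AccML", "AccAP"]:
        df[col] = savgol_filter(df[col].values, window_length=21, polyorder=3)
    # Discretize the Time feature into 10 quantile bins.
    df["TimeBin"] = pd.qcut(df["Time"], q=10, labels=False)
    return df
</code></pre>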
<h1>Model architecture</h1>
<p>I used Conv1D model with two heads.<br>
The first head (multilabel) is the main one, which performs multilabel classification (<em>StartHesitation, Turn, Walking</em>).<br>
The second head (binary) is aux head, which classifies into two classes: <em>event</em>, <em>no event</em>.</p>
<p>Below is the model architecture.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4268817%2Fa84cc1f97065442d4183a84de2bdf98f%2Fpfogp_solution.drawio.png?generation=1686319440790344&alt=media" alt=""></p>
<p>In this model, I also used Conv1D layer with kernel size 8 and stride 8. This helps to reduce input size while keeping important information as much as possible.</p>
<h1>Training</h1>
<p>I trained <strong>5 models</strong> on split dataset (by subject_id) using <code>GroupKFold</code>.<br>
<strong>Optimizer</strong>: AdamW + LookAhead<br>
<strong>LR</strong>: 0.0002<br>
<strong>Scheduler</strong>: OneCycleLR<br>
<strong>Loss</strong>: FocalLoss (binary head) + BCEWithLogitsLoss (multilabel head)<br>
<strong>Epochs</strong>: 20</p>
<h1>Inference</h1>
<p>Inference was performed using an ensemble of 5 models.<br>
CV scores for each model (5 folds): 0.16, 0.175, 0.198, 0.254, 0.317</p>
<p>Then I found that using a bigger window size during inference gives better results than using a window size of 256 (which the model was trained on).<br>
Also, using an ensemble of different window sizes (3 groups of 5 models) also gives better results.</p>
<p>Here is the table of some of my submissions</p>
<table>
<thead>
<tr>
<th>N of models</th>
<th>Window size</th>
<th>Public score</th>
<th>Private score</th>
</tr>
</thead>
<tbody>
<tr>
<td>10 (5 + 5)</td>
<td>1024 + 1224</td>
<td>0.37</td>
<td>0.303</td>
</tr>
<tr>
<td>5</td>
<td>1024</td>
<td>0.352</td>
<td>0.307</td>
</tr>
<tr>
<td>15 (5 + 5 + 5)</td>
<td>256 + 512 + 1024</td>
<td>0.351</td>
<td>0.311</td>
</tr>
</tbody>
</table>
<p>For the final submission, I have chosen the first one (😭).</p>
<h1>Interesting takeaways</h1>
<p>During doing experiments I found several interesting takeaways:</p>
<ul>
<li>Aux stuff (like heads or losses) can improve performance score.</li>
<li>Smaller models perform better than bigger ones</li>
<li>ConvNets perform better than transformers with approximately the same number of parameters</li>
<li>Using a bigger window size for training reduces LB score, but using a bigger window size during inference (when the model was trained on a smaller window size) increases LB.</li>
</ul>
<h1>Conclusions</h1>
<p>It was an interesting experience for me to participate in this competition. I learned several useful optimization and training tricks (e.g. using aux head). I hope you will find my solution interesting too!<br>
If you have any questions about my solution, I would be happy to answer them! 😊</p>
|
Parkinson's Freezing of Gait Prediction
|
5th Place Training and Inference
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2><strong>Overall Pipeline</strong></h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F651378%2F699965e32e4940ab317dbe0f7cbe95d1%2Fpiplinev1.PNG?generation=1687193496974817&alt=media" alt=""></p>
<h2><strong>Training Methodology</strong></h2>
<p><strong><em>DataSet</em></strong></p>
<ul>
<li>For Pretraining, divide each unlabeled series into segments of length 100000</li>
<li>For Training</li>
<li>- Divide each time series into windows of 2000 observations with an overlap of 500</li>
<li>- For defog, randomly select 4 windows from the above set. </li>
<li>- For tdcsfog, select 1 window from the above set. tdcsfog data is resampled to 100 Hz using librosa (v0.9.2). When I ported my training to Kaggle code, I found that training did not converge fast enough when I resampled using librosa (v0.10.0); the difference may be due to the default resampling method.</li>
<li>- Set the length of the dataloader to 8 times the number of time series in each fold<br>
<strong><em>Folds</em></strong></li>
<li>Create 5 folds using GroupKFold with Subject as groups (see the sketch after this list)</li>
<li>- This could be improved by carefully selecting subjects so that there is a similar representation of each target type in each fold</li>
</ul>
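<p>A minimal sketch of this grouped split with scikit-learn (the metadata path is a hypothetical example):</p>
<pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

metadata = pd.read_csv("tdcsfog_metadata.csv")   # one row per series with a Subject column
gkf = GroupKFold(n_splits=5)
# Each subject lands in exactly one fold, so validation subjects are never seen during training.
for fold, (train_idx, valid_idx) in enumerate(gkf.split(metadata, groups=metadata["Subject"])):
    print(f"fold {fold}: {len(train_idx)} train series, {len(valid_idx)} valid series")
</code></pre>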
<p><strong><em>Network Architecture</em></strong><br>
All models have the following WaveNet + GRU architecture (the gated WaveNet block follows the standard public Kaggle implementation; values flagged in the comments are assumptions):</p>
<pre><code>import torch
import torch.nn as nn

class Wave_Block(nn.Module):
    def __init__(self, in_channels, out_channels, dilation_rates, kernel_size):
        super(Wave_Block, self).__init__()
        self.num_rates = dilation_rates
        self.convs = nn.ModuleList()
        self.filter_convs = nn.ModuleList()
        self.gate_convs = nn.ModuleList()

        self.convs.append(nn.Conv1d(in_channels, out_channels, kernel_size=1))
        dilation_rates = [2 ** i for i in range(dilation_rates)]
        for dilation_rate in dilation_rates:
            self.filter_convs.append(
                nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size,
                          padding=int((dilation_rate * (kernel_size - 1)) / 2), dilation=dilation_rate))
            self.gate_convs.append(
                nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size,
                          padding=int((dilation_rate * (kernel_size - 1)) / 2), dilation=dilation_rate))
            self.convs.append(nn.Conv1d(out_channels, out_channels, kernel_size=1))

    def forward(self, x):
        x = self.convs[0](x)
        res = x
        for i in range(self.num_rates):
            # Gated activation: tanh filter * sigmoid gate, then a 1x1 conv and a residual add.
            x = torch.tanh(self.filter_convs[i](x)) * torch.sigmoid(self.gate_convs[i](x))
            x = self.convs[i + 1](x)
            res = res + x
        return res


class Classifier(nn.Module):
    def __init__(self, inch=3, kernel_size=3):
        super().__init__()
        self.LSTM = nn.GRU(input_size=128, hidden_size=128, num_layers=4,
                           batch_first=True, bidirectional=True)
        # self.wave_block1 = Wave_Block(inch, 16, 12, kernel_size)
        self.wave_block2 = Wave_Block(inch, 32, 8, kernel_size)   # channel sizes and dilation counts assumed
        self.wave_block3 = Wave_Block(32, 64, 4, kernel_size)     # (only the final 128 channels are implied by the GRU input_size)
        self.wave_block4 = Wave_Block(64, 128, 1, kernel_size)
        self.fc1 = nn.Linear(256, 3)                              # bi-GRU gives 2*128 features; 3 event classes

    def forward(self, x):
        x = x.permute(0, 2, 1)
        # x = self.wave_block1(x)
        x = self.wave_block2(x)
        x = self.wave_block3(x)
        x = self.wave_block4(x)
        x = x.permute(0, 2, 1)
        x, h = self.LSTM(x)
        x = self.fc1(x)
        return x
</code></pre>
<h3>Different Models</h3>
<h4>WaveNet-GRU-v1</h4>
<p>Training Notebook is found at following link</p>
<ul>
<li><a href="https://www.kaggle.com/code/adityakumarsinha/wavenet-4096-v6/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/wavenet-4096-v6/notebook</a><br>
The model uses all available data for training and validation, irrespective of whether the <em>Valid</em> column is True or False, and only the best weights are saved. The last 2 best weights for each fold are used for inference.</li>
</ul>
<h4>WaveNet-GRU-v2</h4>
<p>Training Notebook is found at following link</p>
<ul>
<li><a href="https://www.kaggle.com/code/adityakumarsinha/wavenet-2000-v6-public/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/wavenet-2000-v6-public/notebook</a><br>
The model uses all available data for training; for validation, only rows where the <em>Valid</em> column is True are selected.<br>
All weights with an average precision score > 0.25 are saved, and the best 2 weights, based on average precision score, are selected.</li>
</ul>
<h4>WaveNet-GRU-v3</h4>
<p>This notebook is based on pre-training on unlabeled data. In pretraining, the target is to predict the next value in the time series. For data creation, each unlabeled series is divided into segments of length 100000.</p>
<p><strong>Data Creation Notebook is available at</strong> : <a href="https://www.kaggle.com/code/adityakumarsinha/unlabeled-data-creation/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/unlabeled-data-creation/notebook</a>. <br>
<em>Note</em>: This notebook will fail in a Kaggle kernel as it requires more disk space than is available by default in Kaggle notebooks. Please run it on a different PC/server/VM<br>
<strong>PreTraining Notebook is available at</strong> : <a href="https://www.kaggle.com/code/adityakumarsinha/pretrain-wavenet-4096-v1/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/pretrain-wavenet-4096-v1/notebook</a><br>
The training is executed for a single fold and the best weight is used as the initial weights (without the LSTM layer)<br>
for WaveNet-GRU-v3<br>
<em>Note</em>: A single epoch takes around 1-1.5 hours on an RTX 3090, so the kernel will time out. Please run it on a different PC/server/VM<br>
<strong>WaveNet-GRU-v3</strong>: <strong>Training notebook is available at</strong>: <a href="https://www.kaggle.com/code/adityakumarsinha/wavenet-2000-from-pretrain/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/wavenet-2000-from-pretrain/notebook</a>.<br>
Use best weight for each fold in final inference. <br>
The CV score for this notebook is lower than WaveNet-GRU-v1 and WaveNet-GRU-v2, but it improves the final ensemble. (During the competition it improved the CV score, but due to a bug in the inference code the final private leaderboard score came down; I explain this in the inference section.)</p>
<h2>Inference Methodology</h2>
<ul>
<li>Each series is predicted independently</li>
<li>For inference, each series is divided into segments of size 16000 or 20000, and the last segment comprises the last 16000/20000 data points of the series. With this size, a complete tdcsfog series may be predicted in a single step.</li>
<li>tdcsfog data are resampled to 100 Hz and the predictions are restored back to 128 Hz (see the sketch after this list).</li>
<li>librosa 0.10.0 is used for resampling. After the competition, I found that librosa 0.9.2 improves the score a bit. This was a miss on my side (as I did training using librosa 0.9.2), but it does not have much impact on the final score.</li>
<li>Predictions of all the models are ensembled using a simple mean.</li>
</ul>
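<p>A minimal sketch of this resampling step (the librosa call is real; the helper names and the nearest-index mapping of predictions back to 128 Hz are assumptions):</p>
<pre><code>import librosa
import numpy as np

def to_100hz(acc_128hz: np.ndarray) -> np.ndarray:
    """Resample a (3, T) tdcsfog signal from 128 Hz to 100 Hz before inference."""
    return librosa.resample(acc_128hz, orig_sr=128, target_sr=100)

def restore_to_128hz(preds_100hz: np.ndarray, n_target: int) -> np.ndarray:
    """Map per-timestep predictions back onto the original 128 Hz grid by nearest index."""
    idx = np.linspace(0, len(preds_100hz) - 1, n_target).round().astype(int)
    return preds_100hz[idx]
</code></pre>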
<h3><strong>CPU based Inference Methodology</strong></h3>
<p>As my GPU quota was exhausted during the last week, I needed to use the CPU for inference. Simple CPU-based PyTorch inference exceeded the time limit of 9 hours, so I needed to convert the PyTorch models into ONNX models. <em>Please refer to the following notebook for model conversion</em>: <a href="https://www.kaggle.com/code/adityakumarsinha/openvino-model-converter-all-models-v3/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/openvino-model-converter-all-models-v3/notebook</a></p>
<p>The converted models are used in final inference. One of the final inference notebook is available at:<br>
<a href="https://www.kaggle.com/adityakumarsinha/gait-openvion-bunch-v2" target="_blank">https://www.kaggle.com/adityakumarsinha/gait-openvion-bunch-v2</a>.</p>
<p>After the competition I found that, in the ensemble, WaveNet-GRU-v3 (the model that uses pretrained weights) was overfitting the public leaderboard, and on the private leaderboard its inclusion decreased the score, even though including it in the ensemble increased the local CV score. </p>
<p>So I debugged more and found that with GPU-based inference WaveNet-GRU-v3 does indeed increase the score; in fact, a simple ensemble of WaveNet-GRU-v1 and WaveNet-GRU-v3 has a private leaderboard score of 0.437, more than the third-place score.</p>
<p>The best GPU based inference notebook is available at<br>
<a href="https://www.kaggle.com/code/adityakumarsinha/wavenet-subm-focal-v2/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/wavenet-subm-focal-v2/notebook</a></p>
<p>Regards<br>
Aditya</p>
|
Parkinson's Freezing of Gait Prediction
|
6th place solution: spectrograms, wavelets, convnets, unets, and transformers
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Not really. We used longer sequence during inference to account for edge effects and learnable attention bias would have been bad (you can imagine if we learned 128x128 mask for 128 sequence during training and tried to do inference for 160 sequence). Additionally, with learnable attention bias mask, the model may learn absolute position information which I think wouldn't be good. </p>
<p>Thanks. We tried separate models but it was worse… seeing the 1st and 2nd place solution does make me want to try again tho</p>
<p>First, thanks to the competition hosts for a meaningful competition with interesting data. Second, congrats to the winners! My team was able to shake up a bit but ended up one place shy of the prize zone. All in all, I'm still happy since we did well to survive the shakeup. I'm also excited to see what top 5 teams did to create a significant gap between us. </p>
<p>Our final ensemble is a combination of spectrogram models, wavelet models, and 1D conv models, which scores 0.369/0.462. Below I will discuss them as well as other important technical details. See below for an overview <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3355848%2F12e8af388e67c174b467596a85afebdf%2FFOG_overview.png?generation=1686277343043841&alt=media" alt=""></p>
<h1>Validation setup</h1>
<p>Validation setup is important due to the noisy data. I ended up with a nested CV setup. The procedure is as follows:</p>
<ol>
<li>split data into 4 folds stratified by data type and grouped by subject</li>
<li>set aside the validation fold (i call this the outer fold)</li>
<li>resplit the 3 training folds into 4 folds and do cross validation (I call these inner folds)</li>
<li>take last epoch or epoch with best validation score</li>
<li>evaluate on outer fold with 4 inner fold models averaged for each outer fold <br>
With this setup, we can more accurately simulate a situation where we have 250 sequences in the test set and we avg the fold model predictions. Later on, I switched to training on full inner fold data for 4 times without validation and evaluate with last epoch models on the outer fold set. </li>
</ol>
<h1>Input features</h1>
<p>All of our models use the 3 waves and the pct time feature. In addition, Ahmet uses some metadata features in his models.</p>
<h1>Spectrogram Models</h1>
<p>When I first saw the data I thought it looked like some sort of waveform, like audio data, so I thought it might work well to use spectrograms to model it. The 3 dimension waves are transformed into 2D spectrograms with STFT. Importantly, transforming the data in spectrograms significantly downscaled the data in the time dimension, so since I use a hop length of 64/50, each frame represents a 0.5 secs window and I'm basically making predictions for 0.5 sec windows. During training, labels are resized with torchvision's resize to fit the size of the time dimension of the spectrograms and during inference the model output is resized back to full dimensionality. Sequences are cut into chunks of 128 secs (256 spectrogram frames) to generate spectrograms. </p>
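<p>A minimal sketch of this STFT front end in PyTorch; the hop length of 64 at 128 Hz (0.5 s frames) comes from the description above, while n_fft and the use of <code>F.interpolate</code> in place of torchvision's resize are assumptions:</p>
<pre><code>import torch
import torch.nn.functional as F

def wave_to_spec(wave_3ch: torch.Tensor, n_fft: int = 256, hop_length: int = 64) -> torch.Tensor:
    """(3, T) accelerometer chunk -> (3, n_fft // 2 + 1, frames) magnitude spectrogram."""
    spec = torch.stft(wave_3ch, n_fft=n_fft, hop_length=hop_length,
                      window=torch.hann_window(n_fft), return_complex=True)
    return spec.abs()

def resize_labels(labels: torch.Tensor, n_frames: int) -> torch.Tensor:
    """Resize (3, T) per-timestep labels to (3, n_frames) so each 0.5 s frame gets one target."""
    return F.interpolate(labels[None].float(), size=n_frames, mode="nearest")[0]
</code></pre>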
<p>Another important thing with using spectrograms is that if we use a regular type 2D conv model like resnet18, it wouldn't preserve the full dimensionality of the spectrogram (e.g a 256x256 becomes 8x8 after resnet18). In order to circumvent that, I thought to use a UNet to upsample the small feature map after the conv network. Following that, the spectrograms are pooled along the frequency dimension so I have a 1D sequence, which is then inputted into a transformer network before outputting predictions. </p>
<p>Best submitted single spectrogram model scores 0.432/0.372. Spectrogram models are good at predicting StartHesitation and Turn but bad at Walking. </p>
<h1>Wavelet Models</h1>
<p>Wavelets are similar to spectrograms but also different because wavelets have different frequency/time resolutions at different frequencies. Transforming a wave into a wavelet also does not reduce the dimensionality of the scaleogram (I think this is the term for the image you get after wavelet transform). Since there's no downsampling in the time dimension, Unet is no longer needed and I simply use a resnet18/34, which downsample the scaleogram to the same time resolution as spectrogram models after Unet. In turn, I'm also classifying 0.5 sec windows. Similarly, sequences are cut into chunks of 128 secs (256 spectrogram frames) to generate spectrograms.</p>
<p>Best submitted single wavelet model scores 0.386/0.345. Wavelet models are good at predicting Walking but bad at StartHesitation and Turn, so they complement spectrogram models nicely. </p>
<h1>Transformer modeling</h1>
<p>Just plugging in a transformer actually does not work so well because it overfits, so I bias the transformer self-attention with a float bias mask to condition each prediction on its adjacent time points, which helps the model predict long events well. </p>
<p>What the float bias mask looks like (see above) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3355848%2F3db704367c0eb1a6f7e46f82684e3f31%2FFOG_attention_mask.png?generation=1686277393310664&alt=media" alt=""></p>
<pre><code>m = torch.zeros((L, L))                      # initializer assumed
for i in range(L):
    for j in range(L):
        m[i, j] = (1 - abs(i - j) / L) ** 2  # bias decays with distance; exact offset/exponent assumed
        if i == j:
            m[i, i] = 1.0                    # diagonal value assumed
return m
</code></pre>
<h1>Data augmentation</h1>
<p>I used data augmentation for spectrogram/wavelet models including the following (mostly from audiomentations): </p>
<ol>
<li>time stretch </li>
<li>gaussian noise</li>
<li>pitch shift </li>
<li>wave scale aug (randomly multiply wave by 0.75~1.5 to get scale invariance)</li>
<li>time feature shift aug. </li>
</ol>
<pre><code>        self.augment = Compose([
            AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
            TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5, leave_length_unchanged=False,
                        n_fft=n_fft, hop_length=hop_length),
            PitchShift(min_semitones=-4, max_semitones=4, p=0.5, n_fft=n_fft, hop_length=hop_length),
        ])
</code></pre>
<pre><code># wave scale aug
if np.random.uniform() > 0.5:
    data['wave'] *= np.random.uniform(0.75, 1.5)   # scale range from the augmentation list above
</code></pre>
<pre><code># time feature shift aug
if self.training and np.random.uniform() > 0.5:    # attribute and probability assumed
    data = data + np.random.uniform(-0.1, 0.1)     # shift range assumed
</code></pre>
<h1>Frequency encoding and range</h1>
<p>It's important to not use the high frequency bins of fourier transform, so I simply discard them and only keep the first 64 bins, corresponding to 0-15 hz. For spectrogram models, I also encode the frequency bin with torch.linspace(0,15,n_bins) expanded in the time and channel dimension and concatted so the input to the 2D conv network has 4 channels (3 directions of spectrograms + frequency encoding). It was also useful to resample the waves to a lower frequency, which I think reduces the level of noise. I used 32, 64, and 128 hz for spectrogram models and 64 hz for wavelet models. Defog waves are resampled to match the sample rate of tdcsfog waves.</p>
<pre><code>if self.data_type == 'defog':                 # branch condition assumed
    data = FA(data, 100, self.sample_rate)    # FA is the resampling function; defog is recorded at 100 Hz
else:
    data = FA(data, 128, self.sample_rate)    # tdcsfog is recorded at 128 Hz
</code></pre>
<h1>1D conv Models</h1>
<p>1D conv models are Ahmet's solution. Please see below for details:</p>
<ol>
<li>First align defog and tdcs on time axis (downsampled by 32 and 25, but kept their std as a feature)</li>
<li>pct_time, total_len, Test are used as independent features. Their prediction is summed with the prediction from the 1D CNN.</li>
<li>Because the input was only around 7 seconds long, cumsum features are also fed into 1D CNN.</li>
<li>Outlier dominant subject is downweighted.</li>
<li>Used snapshot ensembling.</li>
<li>Used notype data by applying max on the predictions.</li>
</ol>
<p>1D conv models are weaker compared to the other 2, scoring 0.373/0.293, but are still a nice addition to the ensemble. Interestingly, 1D conv models and spectrogram models have a similar gap of 0.09 between public and private, whereas wavelet models have only a gap of 0.04. We think this is due to a change in class balance between private/public where public has more start hesitation and private has more walking. </p>
<h1>Ensemble Weight Tuning</h1>
<p>For our big ensemble, the weights are first hand tuned as a starting point and then I used GP_minimize to maximize CV score. We used 2 weight tuning setups at the end 1. map of 4 folds + map of full data excluding Subject 2d57c2, 2. map of 3 folds excluding fold with Subject 2d57c2 + map of full data excluding Subject 2d57c2. We do this because we consider Subject 2d57c2 to an outlier. </p>
<pre><code>results = gp_minimize(objective, dimensions)  # skopt's gp_minimize; arguments assumed, objective returns negative CV mAP for a candidate weight vector
</code></pre>
<p>The weights for our models are (we downweight loss of 2d57c2 to 0.2 in some of them)</p>
<p>[0.1821, 0.2792, 0.1052] Ahemt model<br>
[0.2153, 0.0, 0.0] test257 (32 hz spectrogram)<br>
[0.6026, 0.1734, 0.0] test262 (64 hz spectrogram)<br>
[0.0, 0.0287, 0.2579] test264 wavelet<br>
[0.0, 0.2168, 0.0] test265 (32 hz spectrogram) downweight 2d57c2<br>
[0.0, 0.1734, 0.2997] test266 wavelet downweight 2d57c2<br>
[0.0, 0.1284, 0.1124] test263 128 hz spec<br>
[0.0, 0.0, 0.2248] test271 wavelet double freq scales</p>
<p>Let me know if you have questions, and I wouldn't be surprised if I forgot to mention some details . The code is a bit messy atm but i will clean up and release it soon. </p>
<p>Thanks! Good job on the shakeup!</p>
|
Parkinson's Freezing of Gait Prediction
|
Eighth Place Solution: 5-Fold CV 1D-ResNet
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you so much, glad if it is helpful!</p>
<p>Best,<br>
Jan</p>
<p>Now that more of the prize-winners published their solutions, you seem to be right! I honestly thought about using Light-GBM at first but then decided to use the 1D Resnet as I had not much time and was more comfortable using PyTorch … Good decision it seems :)</p>
<p>Best,<br>
Jan</p>
<p>Hey Eric, I will publish my complete code once the competition is completely over (results are verified) and I had the chance to refactor my code as I have to admit that my code used is really the symbol of „quick‘n‘dirty“ - maybe we can then look for similarities / differences between our solutions!</p>
<p>Best,<br>
Jan</p>
<p>Hey Vladimir,<br>
thanks! I will publish the complete code when I had the chance to refactor it!</p>
<p>Best,<br>
Jan</p>
<p>Hey Andrii,</p>
<p>I trained from scratch and used a single epoch, as I did not have more time (I joined quite late as I was busy with the BirdCLEF Challenge (and normal life?! ) before) - the modified Dataset Class from the notebook mentioned provided 5.000.000 datapoints for one training epoch which took about 45min for training with my 3060.</p>
<p>Best,<br>
Jan</p>
<p>Hey Vladimir,</p>
<p>Thank you for your feedback!<br>
I did not used any feature engineering and I simply added all the model predictions and divided by the number of models in my ensemble. </p>
<p>Best,<br>
Jan</p>
<p>You're welcome, I think this is a very doable way to a simple but efficient model!</p>
<p>Hey Eric,<br>
I will gladly look at your code but please give me a few days for such detailed answers as I live in complete chaos since I moved yesterday.</p>
<p>Best,<br>
Jan</p>
<p>CV was lower consistently by about ~0.07 - that's all</p>
<p>EDIT BELOW!</p>
<p>First of all thank you all for organising and participating in this wonderful challenge, I learned a lot about Parkinson and the techniques I used in my solution and especially used by others!</p>
<p>The biggest thank you goes out to <a href="https://www.kaggle.com/mayukh18" target="_blank">@mayukh18</a> for an incredible starter notebook.<br>
Please give it an upvote if you haven't already!</p>
<p><a href="https://www.kaggle.com/code/mayukh18/pytorch-fog-end-to-end-baseline-lb-0-254" target="_blank">Mayukh's Baseline Notebook</a></p>
<p>I modified this notebook's dataset class to be compatible with a 1D-ResNet that has three channels, one for each of the sensors ('AccV', 'AccML', 'AccAP'). I also removed the wx-parameter and used 1000ms cuts with a future of 50ms.</p>
<p>Additionally, I implemented a learning rate scheduler (ReduceLROnPlateau) and started with a learning rate of 0.001.</p>
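<p>A minimal sketch of that scheduler setup (the optimizer choice and the scheduler arguments beyond the 0.001 starting rate are assumptions):</p>
<pre><code>import torch

model = torch.nn.Conv1d(3, 8, kernel_size=5)   # stand-in for the 1D-ResNet
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=2)

for epoch in range(10):
    val_loss = 1.0 / (epoch + 1)   # placeholder for the real validation loss
    scheduler.step(val_loss)       # the LR is reduced when the validation loss stops improving
</code></pre>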
<p>While the original notebook used only one train-validation split, I calculated five models based on the original folds in the notebook using each split.</p>
<p>I calculated the models using my RTX 3060 which took about an hour per model, including validation. CV was around .30 if I remember correctly.</p>
<p>My submission was the ensemble of the resulting five models and resulted in scores of : Public - .351 / Private - .356.</p>
<p>If there are any questions I will gladly try to answer them in the upcoming days but as of now this is all I can do (moved to a new city just yesterday!).</p>
<p>Best,<br>
Jan</p>
<p>EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: <br>
I took the time to generate a little write-up and publish the clean code, both available <a href="https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/422477" target="_blank">here</a>.<br>
EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: </p>
<p>Hey Eric,</p>
<p>I combined the two! Did not try separate models as I was in a hurry due to me moving. Seems I was lucky …</p>
<p>Best,<br>
Jan</p>
<p>Hey Alberto, thank you for your interest in my solution - I am currently working on the publication and will post an update as soon as it is published.</p>
<p>Best,<br>
Jan</p>
<p>Hey Vladimir,</p>
<p>just published all the code and a small article:<br>
<a href="https://github.com/janbrederecke/fog" target="_blank">Code</a></p>
<p>Best,<br>
Jan</p>
<p>Hey Andrii,</p>
<p>I just published a small article on my solution including the complete source code on GItHub.</p>
<p>Article: <a href="https://arxiv.org/abs/2307.03475" target="_blank">ArXiv</a></p>
<p>Code: <a href="https://github.com/janbrederecke/fog" target="_blank">GitHub</a></p>
<p>Just FYI.</p>
<p>Best,<br>
Jan</p>
<p>Hey Eric,</p>
<p>I just published a small article on my solution including the complete source code on GitHub.</p>
<p>Article: <a href="https://arxiv.org/abs/2307.03475" target="_blank">ArXiv</a></p>
<p>Code: <a href="https://github.com/janbrederecke/fog" target="_blank">GitHub</a></p>
<p>Just FYI.</p>
<p>Best,<br>
Jan</p>
<p>Hey Alberto,</p>
<p>I just published a small article on my solution including the complete source code on GItHub.</p>
<p>Article: <a href="https://arxiv.org/abs/2307.03475" target="_blank">ArXiv</a></p>
<p>Code: <a href="https://github.com/janbrederecke/fog" target="_blank">GitHub</a></p>
<p>Just FYI.</p>
<p>Best,<br>
Jan</p>
|
Parkinson's Freezing of Gait Prediction
|
From silver 29th place to 366th nowhere, good shakeup. Final 1D-CNN solution.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h4><a href="https://www.kaggle.com/code/ernestglukhov/practicum-final-1d-cnn-solution" target="_blank">Final 1D-CNN solution</a></h4>
<p>In this Kaggle notebook, we present a comprehensive approach aimed at predicting Freezing of Gait (FoG) using an ensemble of 1D-CNN models and a sophisticated segmentation process. The segmentation process helps capture distinct patterns in the acceleration data, while the ensemble strategy involves training multiple 1D-CNN models using K-fold cross-validation. Our model architecture consists of multiple Convolutional Blocks, each containing convolutional layers with different kernel sizes and dilations. The models in the ensemble are trained separately on the "tdcsfog" and "defog" datasets. By treating the tdcsfog and defog datasets separately, we aimed to optimize the models' performance for each specific type of data.</p>
|
Passenger Screening Algorithm Challenge
|
Post Competition Architecture Discussion
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Passenger Screening Algorithm Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Edit: <a href="https://github.com/ShayanPersonal/Kaggle-Passenger-Screening-Challenge-Solution">https://github.com/ShayanPersonal/Kaggle-Passenger-Screening-Challenge-Solution</a>
Here's my code.</p>
<p>In my first attempts I tried to use MVCNN on a3daps files and downsampled inputs to a 3rd of the original resolution. This was getting about 0.10 on the original test data. I never tried 3D convolutions as I thought it'd be too slow and memory hungry.</p>
<p>I found I got better results by using a3d files and maintaining the original resolution (and lowering the batch size considerably so it would fit in memory). I tried combining a3d and a3daps by throwing them in separate channels but I found it caused overfitting.</p>
<p>Later on I dropped MVCNN and went with a custom architecture where each view is fed through a pretrained ResNet-50 CNN and each feature map is pyramid pooled (to find objects/features of differing size) and fed to an LSTM with attention. My pyramid pooling doesn't actually use pooling layers but instead passes the last feature map through a 1x1, 3x3, and 5x5 convolution with stride 1, 2, and 3 respectively. I use a form of attention for CNNs that I apply to the feature map before the pyramid pooling. To make it run on my 1080 ti I increased the stride of the first couple layers from 2 to 3, which probably has a similar effect as decreasing the resolution but loses less information. I trained with SGD with a cosine annealing schedule. This was getting around 0.015 on the original test data.</p>
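<p>A minimal sketch of this convolution-based pyramid pooling head; only the kernel sizes and strides come from the description above, the channel counts are assumptions:</p>
<pre><code>import torch
import torch.nn as nn

class ConvPyramid(nn.Module):
    """Pass the final feature map through 1x1/3x3/5x5 convs with strides 1/2/3 and concatenate."""
    def __init__(self, in_ch: int = 2048, out_ch: int = 256):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1)
        self.branch2 = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)
        self.branch3 = nn.Conv2d(in_ch, out_ch, kernel_size=5, stride=3, padding=2)

    def forward(self, fmap: torch.Tensor) -> torch.Tensor:
        # Flatten each branch and concatenate into one vector per view for the downstream LSTM.
        outs = [b(fmap).flatten(start_dim=1) for b in (self.branch1, self.branch2, self.branch3)]
        return torch.cat(outs, dim=1)
</code></pre>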
<p>I didn't use any segmentation and had my model predict on all 17 zones at once. In hindsight maybe I should have segmented as it could allow me to train with higher resolution and focus attention on relevant zones at the cost of speed, but I was also worried that the people in the stage 2 data would be drastically different and the segmentations would be wrong. I also probably shouldn't have done experimental things like CNN attention and cosine annealing but then again maybe the risk paid off and they helped.</p>
<p>Regrettably I didn't use ensembling. Just didn't have the time to train more models or think it would be too important. But looks like it could have made a difference.</p>
<p>But in the end I'm still very satisfied with my score, and congratulations to everyone!</p>
<p>Now that the competition is over let's have a place to discuss our architecture, findings, and hindsight?</p>
<p>Edit: Here's my 10th place solution: <a href="https://github.com/ShayanPersonal/Kaggle-Passenger-Screening-Challenge-Solution">https://github.com/ShayanPersonal/Kaggle-Passenger-Screening-Challenge-Solution</a></p>
<p>Hi Wensu, the CNN learns on its own which locations correspond to which labels without any human guidance. It outputs a size 17 vector and is trained against size 17 targets. Since the classes aren't mutually exclusive I don't use softmax and I train with binary cross entropy loss.</p>
<p>The average pooling layer at the end of ResNet loses all spatial information so one would imagine this would be a problem. Instead I use a pyramid pooling scheme which pools with multiple kernel sizes and feed the output (which maintains information about location) directly to the LSTM. However, although pyramid pooling helped noticeably, I found that it wasn't absolutely necessary and even with the regular average pooling layer it still gave decent results (at least on the stage 1 data).</p>
<p>The targets come from the training data. Consider the following sample from the training data:</p>
<pre><code>0050492f92e22eed3474ae3a6fc907fa_Zone1,0
0050492f92e22eed3474ae3a6fc907fa_Zone10,0
0050492f92e22eed3474ae3a6fc907fa_Zone11,0
0050492f92e22eed3474ae3a6fc907fa_Zone12,0
0050492f92e22eed3474ae3a6fc907fa_Zone13,0
0050492f92e22eed3474ae3a6fc907fa_Zone14,0
0050492f92e22eed3474ae3a6fc907fa_Zone15,0
0050492f92e22eed3474ae3a6fc907fa_Zone16,1
0050492f92e22eed3474ae3a6fc907fa_Zone17,0
0050492f92e22eed3474ae3a6fc907fa_Zone2,0
0050492f92e22eed3474ae3a6fc907fa_Zone3,0
0050492f92e22eed3474ae3a6fc907fa_Zone4,1
0050492f92e22eed3474ae3a6fc907fa_Zone5,0
0050492f92e22eed3474ae3a6fc907fa_Zone6,0
0050492f92e22eed3474ae3a6fc907fa_Zone7,0
0050492f92e22eed3474ae3a6fc907fa_Zone8,1
0050492f92e22eed3474ae3a6fc907fa_Zone9,0
</code></pre>
<p>This is treated as an input target pair where 0050492f92e22eed3474ae3a6fc907fa.aps is the input and [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0] is the target and what I train the model to output. The model is fed no other information.</p>
<p>It may feel like black magic at first but it works and it works pretty well. The architecture of the model is designed so that it's more spatially aware than regular convnets to make it easier to learn.</p>
<p><a href="https://github.com/ShayanPersonal/Kaggle-Passenger-Screening-Challenge-Solution">https://github.com/ShayanPersonal/Kaggle-Passenger-Screening-Challenge-Solution</a></p>
<p>Here's my code. Model is definied in mvcnn.py.</p>
<p><a href="https://github.com/ShayanPersonal/Kaggle-Passenger-Screening-Challenge-Solution/blob/master/mvcnn.py">https://github.com/ShayanPersonal/Kaggle-Passenger-Screening-Challenge-Solution/blob/master/mvcnn.py</a></p>
<p>In MVCNN you simply take the max probabilities calculated from each frame after they've each gone through the network independently of each other.</p>
<p>In LSTM the frames are no longer treated independently of each other. The scan is treated as a time sequence where each time step is a frame from the .aps file and there are 16 time steps. Instead of feeding the frames directly to the LSTM I first feed each frame through the CNN to "compress" it into a small high-level feature map and feed that in. The advantage of this setup over MVCNN is that the network can learn temporal dependencies. It can learn what a rotating person should look like and flag anomalies. It can learn to boost its predictions if it notices an anomaly in multiple frames as opposed to just one frame, or suppress a prediction if an anomaly was noticed in just one frame but the others didn't pick anything up. It seems to have a regularizing effect.</p>
<p>I viewed the problem as a single temporal sequence rather than 16 independent problems and it seemed natural a LSTM could be applied.</p>
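<p>A minimal sketch of the two aggregation strategies described above (the tiny per-view backbone and the dimensions are assumptions made to keep the example short):</p>
<pre><code>import torch
import torch.nn as nn

class ViewSequenceModel(nn.Module):
    def __init__(self, feat_dim: int = 512, n_zones: int = 17):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(1, feat_dim, 7, stride=4), nn.AdaptiveAvgPool2d(1))
        self.lstm = nn.LSTM(feat_dim, 256, batch_first=True, bidirectional=True)
        self.head = nn.Linear(512, n_zones)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (B, 16, 1, H, W) -> one feature vector per view, then an LSTM over the 16 views.
        b, t = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1)).flatten(1).view(b, t, -1)
        seq, _ = self.lstm(feats)
        return self.head(seq[:, -1])   # 17 zone logits, trained with BCE-with-logits

# The MVCNN alternative instead scores each view independently and takes the per-zone max:
# zone_probs = per_view_probs.max(dim=1).values   # per_view_probs: (B, 16, 17)
</code></pre>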
|
Peking University/Baidu - Autonomous Driving
|
(part of) 5th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/shanmukhamanoj11">@shanmukhamanoj11</a> You're welcome!</p>
<p><a href="/tonychenxyz">@tonychenxyz</a> You're welcome!
The following is maybe what improved my score.
- large input size
- RAdam (made training stable)
- width & height prediction
- KITTI pretrained backbone</p>
<p><a href="/miaorain">@miaorain</a> You're welcome!</p>
<p>Thanks!</p>
<p>Congrats to all the competitors!<br>
Thanks the whole team <a href="https://www.kaggle.com/erniechiew" target="_blank">@erniechiew</a> <a href="https://www.kaggle.com/css919" target="_blank">@css919</a> for the great collaboration!</p>
<p>For the other part, please see <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127145" target="_blank">https://www.kaggle.com/c/pku-autonomous-driving/discussion/127145</a>.</p>
<p>code:<br>
<a href="https://github.com/4uiiurz1/kaggle-pku-autonomous-driving" target="_blank">https://github.com/4uiiurz1/kaggle-pku-autonomous-driving</a></p>
<p>My approach is based on <a href="https://github.com/xingyizhou/CenterNet" target="_blank">CenterNet</a>.</p>
<h3>Heads</h3>
<ul>
<li>heatmap[1]</li>
<li>xy offset[2]</li>
<li>z (depth)[1]</li>
<li>pose[6]: cos(yaw), sin(yaw), cos(pitch), sin(pitch), cos(roll), sin(roll)</li>
<li>wh[2]: It's not used for prediction, but PublicLB was improved by learning this as an auxiliary task.</li>
</ul>
<p>Heatmap's loss is Focal Loss, and the others are L1Loss. The weight of wh loss is 0.05. Mask regions of mask images are ignored when calculating loss.</p>
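<p>A minimal sketch of this sin/cos angle encoding and its inverse (a generic illustration, not the author's code):</p>
<pre><code>import numpy as np

def encode_pose(yaw, pitch, roll):
    """Encode each angle as (cos, sin) so the regression target has no 2*pi discontinuity."""
    return np.array([np.cos(yaw), np.sin(yaw),
                     np.cos(pitch), np.sin(pitch),
                     np.cos(roll), np.sin(roll)])

def decode_pose(v):
    """Recover the angles with atan2(sin, cos)."""
    return np.arctan2(v[1], v[0]), np.arctan2(v[3], v[2]), np.arctan2(v[5], v[4])
</code></pre>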
<h3>Network Architecture</h3>
<ul>
<li><a href="https://github.com/Cadene/pretrained-models.pytorch" target="_blank">ResNet18 (pretrained ImageNet)</a> + FPN (channels: 256->128->64)</li>
<li><a href="https://github.com/xingyizhou/CenterNet/blob/master/readme/MODEL_ZOO.md" target="_blank">DLA34 (pretrained KITTI 3DOP)</a> + FPN (channels: 256->256->256)</li>
<li>Input size: 2560 x 2048 (2560 x 1024)</li>
<li>Output size: 640 x 512 (640 x 256)</li>
</ul>
<p>Increasing the input size is very effective; mAP improved dramatically.<br>
I tried deeper networks (ResNet34, 50) but they did not work.</p>
<h3>Augmentation</h3>
<ul>
<li>HFlip (p=0.5): Flip images horizontally and <code>yaw *= -1, roll *= -1</code>.</li>
<li>RandomShift (p=0.5, limit=0.1): Shift images and positions (x, y).</li>
<li>RandomScale (p=0.5, limit=0.1): Scale images and positions (x, y, z).</li>
<li>RandomHueSaturationValue (p=0.5, hue_limit=20)</li>
<li>RandomBrightness (p=0.5, limit=0.2)</li>
<li>RandomContrast (p=0.5, limit=0.2)</li>
</ul>
<h3>Training</h3>
<ul>
<li>Optimizer: RAdam</li>
<li>LR scheduler: CosineAnnealingLR (lr=1e-3 -> 1e-5)</li>
<li>50epochs</li>
<li>5-folds cv</li>
<li>Batch size: 4</li>
</ul>
<h3>Post Processing</h3>
<ul>
<li>Remove mask regions from predictions by multiplying heatmap by masks.</li>
<li>NMS (distance threshold: 0.1): I'm not sure how effective this is…</li>
<li>Find duplicate images with imagehash and ensemble them. PublicLB was slighly improved.</li>
<li>Score threshold: 0.3 (for val mAP: 0.1)</li>
</ul>
<h3>Ensemble</h3>
<p>Ensemble each fold models and two models (ResNet18, DLA34) by averaging the raw output maps.</p>
<h3>Score Summary</h3>
<table>
<thead>
<tr>
<th>model</th>
<th>val mAP (tito's script)</th>
<th>PublicLB</th>
<th>PrivateLB</th>
</tr>
</thead>
<tbody>
<tr>
<td>ResNet18 + FPN</td>
<td>0.257224305900695</td>
<td>0.118</td>
<td>0.109</td>
</tr>
<tr>
<td>DLA34 + FPN</td>
<td>0.2681900383192367</td>
<td>0.118</td>
<td>0.112</td>
</tr>
<tr>
<td>Ensemble</td>
<td>0.27363538075234406</td>
<td>0.121</td>
<td>0.115</td>
</tr>
</tbody>
</table>
<h3>What Didn't Work</h3>
<ul>
<li>Pseudo labeling</li>
<li>TTA (hflip)</li>
<li>Weight Standardization</li>
<li>Group Normalization</li>
<li>Deformable Convolution V2</li>
<li>Quaternion + L1Loss</li>
<li>Very large input size (3360 x 2688)</li>
<li>Eigen's depth prediction method used in CenterNet paper (<code>z = 1 / sigmoid(output) − 1</code>)</li>
</ul>
<p><a href="/css919">@css919</a> Let's win the prize next time!</p>
|
Peking University/Baidu - Autonomous Driving
|
1st place solution (1/26 details updated)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Keras/TF</p>
<p>Keras only, thanks.</p>
<p>No, thanks.</p>
<p>One transformation per model, thanks.</p>
<p>Thanks to everyone and congratulations to all the top teams. I'm on vacation, so I will post the details after the Chinese New Year.</p>
<h1>In brief</h1>
<ul>
<li>data augmentation: h-flip, 3 axis rotate, color, noise, blur.</li>
<li><a href="https://github.com/see--/keras-centernet">keras hourglass centernet</a></li>
<li>perspective transform for efficiency</li>
<li>regression of yaw, cos(pitch), sin(pitch), rot_pi(roll), x, y, z, r</li>
<li>blend 6 results (2 types of head * 3 types of transform)</li>
<li>post process by fitting LB</li>
</ul>
<h1>Especially thanks to these kernels:</h1>
<ul>
<li><a href="https://www.kaggle.com/hocop1/centernet-baseline">CenterNet Baseline</a> <a href="/hocop1">@hocop1</a></li>
<li><a href="https://www.kaggle.com/ebouteillon/augmented-reality">Augmented Reality</a> <a href="/ebouteillon">@ebouteillon</a></li>
<li><a href="https://www.kaggle.com/its7171/metrics-evaluation-script">metrics evaluation script</a> <a href="/its7171">@its7171</a></li>
</ul>
<p>Happy Chinese New Year</p>
<h1>1/26 details updated</h1>
<h1>Network</h1>
<p><img src="https://i.imgur.com/uFNfCFB.jpg" alt="network"></p>
<p>My approach is based on <a href="https://github.com/see--/keras-centernet">keras hourglass centernet</a>.
Some notes:
- <strong><em>6 Dof</em></strong>: regression of yaw, cos(pitch), sin(pitch), rot_pi(roll), x, y, z, distance
- discard <strong><em>XY bias</em></strong> result finally.
- remove <strong><em>Car types</em></strong> and <strong><em>XY bias</em></strong> in my second model.</p>
<p><strong># Perspective transform</strong>
Two purpose:
- reduce the size gap between small(far) and large(near) cars.
- cover more outliers without extending image.</p>
<p>I found the model didn't predict well on large cars when I increased the input size, so I made them smaller. The extra benefit is enclosing more outliers.</p>
<p><strong>Original image:</strong>
<img src="https://i.imgur.com/bNABgvS.jpg" alt="original">
<strong>Transformed:</strong>
<img src="https://i.imgur.com/Xe396jz.jpg" alt="transformed">
<strong>Coverage comparison:</strong> (dots denote the GT location)
<img src="https://i.imgur.com/JJYfmNn.jpg" alt="coverage">
<strong>Validation result with outlier:</strong> (red dot: GT, green: predict heat map)
<img src="https://i.imgur.com/s6UAHIO.png" alt="transformed"></p>
<p><strong># Coordinate reference</strong>
I think the same feature at different locations should give different results, so I concatenate this coordinate reference layer to get better predictions, and apply random crop when training.</p>
<h1>Data augmentation</h1>
<p>I use: h-flip, camera rotate, color, noise, blur.</p>
<p><strong># Camera rotation</strong>
This is the most important part of my approach, since I only have 4001 training images (5 bad) and 256 for validation; it is easy to overfit without rotation augmentation. </p>
<p><strong>The augmentation looks like:</strong> (center is original image)
<img src="https://i.imgur.com/kONFCcy.jpg" alt="rotation augmentation">
Please refer to <a href="https://www.kaggle.com/outrunner/rotation-augmentation">this kernel</a> for details.</p>
<h1>Training</h1>
<ul>
<li>Focal loss for heat map</li>
<li>Huber for regression</li>
<li>Adam optimizer</li>
<li>Manually adjust learning rate from 10^-3.5 to 10^-5.5</li>
<li>About 0.4M iterations</li>
<li>Train: full network -> part of -> head only -> full ...</li>
<li>Change input size (random corp) and batch size every iteration</li>
<li>One 2080Ti per training. (I have 2)</li>
<li>Total 6 models, 2 heads * 3 transforms (different parameter and input size)</li>
</ul>
<h1>Inference</h1>
<p>Please refer to <a href="https://www.kaggle.com/outrunner/autonomous-driving-1st-place-solution-inference">this kernel</a> for details.</p>
<p><strong># Test time augmentation</strong>
Flip and multiple transformations, weighted average the predictions.</p>
<p><strong># Blending</strong>
Transform predictions to one model's transformation, weighted average 6 results.</p>
<p><strong># Weighted average neighborhood</strong>
When decoding, I use not only the local maximum point but also take into account
the predictions around it.</p>
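<p>A minimal sketch of this neighborhood-weighted refinement of a heatmap peak (a generic illustration, not the author's exact code):</p>
<pre><code>import numpy as np

def refine_peak(heatmap: np.ndarray, y: int, x: int, radius: int = 1):
    """Refine an integer peak (y, x) by confidence-weighting its local neighborhood."""
    y0, y1 = max(0, y - radius), min(heatmap.shape[0], y + radius + 1)
    x0, x1 = max(0, x - radius), min(heatmap.shape[1], x + radius + 1)
    patch = heatmap[y0:y1, x0:x1]
    ys, xs = np.mgrid[y0:y1, x0:x1]
    w = patch / patch.sum()
    return (ys * w).sum(), (xs * w).sum()   # weighted average position instead of the raw argmax
</code></pre>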
<h1>Metric probing and Post processing</h1>
<p>This is the first time I joined a competition without knowing the evaluation metric. The probing is interesting, but there are some weird characteristics in the metric.</p>
<p><strong>Probing procedure:</strong>
<img src="https://i.imgur.com/ESMF5qS.jpg" alt="Probing process">
<strong># Image wise</strong>
Split test images to two sets A and B, then: <strong>score(A) + score(B) = score(A+B)</strong></p>
<p><strong># Confidence independent</strong>
So the metric is something like <strong>F1</strong> or <strong>TP/(TP+FN+FP)</strong></p>
<p><strong># Rotation</strong>
θ and θ+2π are treated differently, so the score is directly impacted by the roll prediction. Therefore, I trained a model to predict global roll and got a score improvement.</p>
<p><strong># Translation</strong>
The most weird thing is that when I shift X by some pixels, the LB score change significantly. So I guess the Metric is:
<code>sh
(abs(x-xp)/abs(x) + abs(y-yp)/abs(y) + abs(z-zp)/abs(z))/3
</code>
<strong>And increase the threshold when abs(x) is small:</strong>
<img src="https://i.imgur.com/BmYJ9CO.jpg" alt="Confidence threshold"></p>
<p><strong>The overall post-processing:</strong></p>
<ul>
<li>[opt] replace x, y by X, Y, z, r (just like everybody do)</li>
<li>[roll] replace instance roll by global roll</li>
<li>[xs] shift X 2 pixels (don't know why)</li>
<li>[rx] drop some cars whose x are near by zero, and keep a least car number per image</li>
<li>[dz] drop duplicate cars</li>
</ul>
<p><strong># table of results:</strong>
<img src="https://i.imgur.com/WSMGpUH.gif" alt="table of results">
<em># parameters are the same as final submission, and some procedures are dependent</em></p>
<p>I'll tell you when I get home 😀</p>
<p>Thanks.</p>
<p>Sure.</p>
<p>Transform image and targets by rotating camera.</p>
<p>I build the rotation augmentation by your kernel, and it help a lot. Thanks.</p>
<ol>
<li><p>all augmentation methods for all models in final submission</p></li>
<li><p>the improvement:
<img src="https://i.imgur.com/kTtvyRL.gif" alt="scores2"></p></li>
</ol>
<p>I think this is as important as you add ApolloScape dataset.</p>
<p><a href="/xiejialun">@xiejialun</a> , thanks for sharing.</p>
<p>horizontal flip</p>
<p>two models blending example: <a href="https://www.kaggle.com/outrunner/autonomous-driving-1st-place-solution-inference">https://www.kaggle.com/outrunner/autonomous-driving-1st-place-solution-inference</a></p>
<p>alpha = (p*8-5.65)*np.pi/180.
beta = (np.random.random()*50-25)*np.pi/180.
gamma = (np.random.random()*6-3)*np.pi/180. + beta/3.</p>
<p>where p is in range(0,1) and keep tilt up and down 50-50 percentage</p>
<p>I mean P(alpha>0)=0.5
for a simple case, alpha = ((np.random.random()**0.5)*8-5.65)*np.pi/180.
or just like others, alpha = (np.random.random()*6-3)*np.pi/180.
There is no obvious difference in test accuracy.</p>
<p>I set tilt up more because I want to catch more cars in the sky when apply TTA since my vertical input size is narrower.</p>
<p>Beta's range could as large as possible, in this competition, it performs like random shift in normal image pre-processing. But the horizon is not in image center, so offset the gamma by beta to keep roll in a proper range.</p>
<p>just a scaled 2d coordinate:</p>
<pre><code>ref = np.reshape(np.arange(0, xo*yo), (yo, xo, -1))
ref_x = ref % xo
ref_y = ref // xo
ref = np.dstack([(ref_x-(xo-1)/2)/100, ref_y/100])
</code></pre>
|
Peking University/Baidu - Autonomous Driving
|
2nd Place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1) We didn't try non-local embedding or GCNet, which we should have. In the end, we were a bit constrained by computational resources, so we didn't go for it.
2) Yes, we regressed the full translation, and modified the x, y from the box.
3) Without augmentation, we have 95%+ accuracy on the 34-class car type prediction; with augmentation, it's around 85% (in reality, we shouldn't have used augmentation after all). The car class accuracy matters, but it is not a very difficult task...</p>
<p>We too spent way too much time in correctly converting the euler angle to quaternions...</p>
<h1>2nd Place for Kaggle_PKU_Baidu</h1>
<p>Firstly congratulations to all top teams.</p>
<p>Secondly, I would like to congratulate all my teammates for this collaborative team work; every member of the team was indispensable in this competition.</p>
<h2>Approach</h2>
<p>The overall pipeline largely improves on our previous method, 6D-VNet [1].
We reckon we are among the very few teams that didn't use CenterNet as the main network.
The system pipeline is as follows (the red color denotes the modules we added for this task):</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F16463%2Fbe74039a5b29b17e735b6f209c0b7e10%2Fsystem_pipeline_kaggle.PNG?generation=1579687638026915&alt=media" alt=""></p>
<p>The three major improvements consist of
(1) better detector and conv backbone structure design;
(2) post-processing with both mask and mesh information both geometrically and via a Neural Mesh Renderer (NMR)[4] method;
(3) novel way of ensembling of multiple models and
weighted average of multiple predictions. </p>
<p>We specify the implementation details as follows:</p>
<h3>Implementation details</h3>
<p>We build our framework based upon the open source project
<a href="https://github.com/open-mmlab/mmdetection%5d">MMDetection</a>. This is an excellent framework that can help us to modulised the code.
Pixel-level transform for image augmentation is called from the <a href="https://github.com/albumentations-team/albumentations">Albumentations</a> library.</p>
<p>The detector is a 3-stage Hybrid Task Cascade (HTC) [2] and the backbone is an ImageNet-pretrained High-Resolution Network (HRNet) [3].
We design two task-specific heads for this challenge: one head taking the RoIAlign feature for car class classification + quaternion regression, and one head taking bounding box information
(centre location, height and width) for translation regression.
With this building block, we achieved private/public LB: 0.094/0.102.</p>
<p>We then incorporated the training images from the ApolloScape dataset; after cleaning the obviously wrong annotations, this left us with 6691 images and ~79,000 cars for training.
The Kaggle dataset has around 4000 images for training,
of which we leave out 400 random images as validation. With tito's (<a href="/its7171">@its7171</a>) code for evaluation, we obtained ~0.4 mAP locally,
but only 0.110 on the public LB.
Such a discrepancy between local validation and test mAP is a conundrum that perplexes us until today!</p>
<h3>Postprocessing</h3>
<p>After visual examination, we found that the detector works really well for bounding box detection and mask segmentation (well, there are 100+ top-conference papers on instance segmentation anyway), but the mesh generated from the predicted rotation and translation does not overlap well with the mask prediction.
Thus, we treat <code>z</code> as the oracle prediction and amend the values of <code>x</code> and <code>y</code>.
This gives us a generous boost to 0.122/0.128 (from 0.105/0.110).</p>
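<p>A sketch of the geometry behind this post-processing step, assuming the competition's pinhole intrinsics: keep the predicted <code>z</code>, take the car centre (u, v) from the predicted mask/box, and back-project to get amended x, y:</p>
<pre><code>import numpy as np

fx, fy = 2304.5479, 2305.8757      # competition camera intrinsics
cx, cy = 1686.2379, 1354.9849

def amend_xy(u, v, z):
    """Treat z as the oracle and recompute x, y from the 2D centre (u, v) of the predicted mask/box."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z
</code></pre>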
<h3>Model ensembles</h3>
<p>Model ensemble is a necessity for kaggle top solutions: we train one model that directly regresses translation and one model regresses the <code>sigmoid</code> transformed translation.
The third model is trained with 0.5 flip of the image.</p>
<p>Because of the speciality of the task: the network can output mask and mesh simultaneously, we merge the model by non-maximum suppression using the IoU between the predicted mesh and mask as the confident score.
The <code>max</code> strategy gives 3 model ensemble to 0.133/0.142.
The <code>average weighting</code> strategy generates even better result and is the final strategy we adopted.</p>
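<p>A hedged sketch of what such an NMS-style merge could look like (the function and the greedy grouping are ours, not the authors' released code): predictions pooled from several models are grouped when their centres are close, and each group is replaced by its average weighted by the mesh-mask IoU confidence:</p>
<pre><code>import numpy as np

def weighted_merge(dets, iou_scores, dist_th=3.0):
    """dets: (N, 7) rows of [yaw, pitch, roll, x, y, z, score] pooled from several models.
    iou_scores: mesh-mask IoU of each row, used as its confidence.
    Greedy merge: rows whose centres are closer than dist_th metres are grouped,
    and each group is replaced by its IoU-weighted average."""
    order = np.argsort(-iou_scores)
    used = np.zeros(len(dets), dtype=bool)
    merged = []
    for i in order:
        if used[i]:
            continue
        dist = np.linalg.norm(dets[:, 3:6] - dets[i, 3:6], axis=1)
        group = (dist &lt; dist_th) & ~used
        used |= group
        w = iou_scores[group][:, None]
        merged.append((dets[group] * w).sum(axis=0) / w.sum())
    return np.array(merged)
</code></pre>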
<p>The organisers also provide the masks that are ignored during test evaluation.
We filter out a prediction if its mesh has more than 20% overlap with an ignore mask;
this gives around 0.002 mAP improvement.</p>
<p>Below is the aforementioned progress we have achieved in a tabular form:</p>
<p>|Method | private LB | public LB|
|:------------------: | :-------:|:-------------------------:|
|HTC + HRNet + quaternion + translation | 0.094 | 0.102|
|+ ApolloScape dataset | 0.105 | 0.110|
|+ z-> x,y (postprocessing) | 0.122 | 0.128|
|+ NMR | 0.127 | 0.132|
|conf (0.1 -> 0.8) | 0.130 | 0.136|
|+ 3 models ensemble (max) | 0.133 | 0.142|
|+ filter test ignore mask | 0.136 | 0.145|
|+ 6 models ensemble(weighted average)| 0.140 | 0.151|</p>
<h2>Other bolts and nuts</h2>
<h3>Visualisation using Open3D</h3>
<p>We also use <a href="http://www.open3d.org/">Open3d</a> to visualise the predicted validation images. The interactive 3d rendering technique allows us to examine the correctly predicted cars in the valid set. </p>
<h3>Neural Mesh Renderer (NMR)</h3>
<p>Neural 3D Mesh Renderer [4] is a very cool piece of research which generates an approximate gradient for rasterization, enabling the integration of rendering into neural networks.
After the release of the final private LB, we found that using NMR actually gives a small improvement in overall mAP.</p>
<h3>What we haven't tried but think it has decent potential</h3>
<ul>
<li><p>Almost all the top winning solutions adopted CenterNet [5]; it's very likely that ensembling with CenterNet would further boost the overall performance. We realised CenterNet was almost universally adopted in this challenge. We might have been too comfortable sitting
in our existing framework, and migrating to fine-tune CenterNet seemed a bit of a hassle, which may ultimately have cost us the top prize. </p></li>
<li><p>Allocentric vs. Egocentric [6].
Allocentric representation is equivariant w.r.t. RoI image appearance, and is
better suited for learning. As also discussed <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127052">here</a>, the modification of orientation is done by a rotation matrix that moves the camera centre to the target car centre. But since we were aware from the beginning that predicting
translation is far more difficult than predicting rotation, we didn't spend much effort in perfecting the rotation regression.</p></li>
</ul>
<pre><code>yaw = 0
pitch = -np.arctan(x / z)
roll = np.arctan(y / z)
r = Rotation.from_euler("xyz", (roll, pitch, yaw))
</code></pre>
<h3>References</h3>
<ul>
<li>[1] 6D-VNet: End-To-End 6-DoF Vehicle Pose Estimation From Monocular RGB Images, Di WU et et., CVPRW2019 </li>
<li>[2] Hybrid task cascade for instance segmentation, Chen et al., CVPR2019</li>
<li>[3] Deep High-Resolution Representation Learning for Human Pose Estimation, Sun et al., CVPR2019</li>
<li>[4] Neural 3D Mesh Renderer, Hiroharu Kato et al., CVPR2018</li>
<li>[5] Objects as Points, Xingyi Zhou et al. CVPR2019</li>
<li>[6] 3D-RCNN: Instance-level 3D Object Reconstruction via Render-and-Compare, Abhijit Kundu et al., CVPR 2018</li>
</ul>
<p>We combine the mask with the mesh and use the silhouette as a supervising signal to adjust R and T. It improves the accuracy of a single model; we plan to write a paper on the technique.
<a href="/corochann">@corochann</a> you are with Preferred Networks, you might know the author of the NMR then~</p>
|
Peking University/Baidu - Autonomous Driving
|
37th place brief writeup
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the host and useful kernels and discussions.
I learned a lot during this competition.</p>
<p>My code is based on <a href="https://www.kaggle.com/phoenix9032/center-resnet-starter">Center-Resnet Starter</a>.
My public LB was 0.094 and private LB was 0.086.</p>
<h2>Model</h2>
<ul>
<li>model architecture is the same as Center-Resnet Starter Kernel except below</li>
<li>feed mask image and x,y position into model</li>
<li>predict heatmap with focal loss following CenterNet paper</li>
<li>change regression target from (x,y,z) to (u-diff, v-diff, z) following CenterNet paper</li>
<li>regress log(z) instead of z, because depth acts multiplicatively and the log(z) distribution is more balanced than the z distribution (a minimal sketch follows this list)</li>
</ul>
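<p>As referenced in the list above, a minimal sketch of the log-depth parameterisation (assuming a plain log/exp pair, which is what the write-up suggests):</p>
<pre><code>import numpy as np

def encode_depth(z):
    # regression target: log(z) is better balanced than raw z, and depth errors act multiplicatively
    return np.log(z)

def decode_depth(t):
    return np.exp(t)
</code></pre>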
<h2>Data Augmentation</h2>
<ul>
<li>(x/z, y/z) position jittering</li>
<li>slight gauss noise</li>
<li>slight randomContrastBrightness</li>
</ul>
<h2>Preprocessing / Post Processing</h2>
<ul>
<li>masking prediction using given masks</li>
<li>restore color-distorted test images:
for each image and each channel, stretch [0, 95th-percentile value] to [0, 255] (a minimal sketch follows this list)
<ul><li>probably no effect on LB, pointed out by <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127060">this discussion</a></li></ul></li>
</ul>
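<p>As referenced in the list above, a minimal sketch of the per-channel percentile stretch (the function name is ours):</p>
<pre><code>import numpy as np

def restore_colors(img, q=95):
    """Stretch [0, q-th percentile] of each channel to [0, 255] to undo the test-set color distortion."""
    out = np.empty_like(img, dtype=np.uint8)
    for c in range(img.shape[2]):
        hi = np.percentile(img[..., c], q)
        out[..., c] = np.clip(img[..., c].astype(np.float32) / max(hi, 1e-6) * 255.0, 0, 255).astype(np.uint8)
    return out
</code></pre>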
<h2>Others</h2>
<ul>
<li>replacing confidence by Y-position has no effect. At this point, I doubted the evaluation metric.</li>
<li>removed 5 corrupted train images</li>
<li>adaptive heatmap threshold to predict at least one car per image; discarded it since LB did not change</li>
<li>increase epochs and change scheduling to ReduceLROnPlateau</li>
<li>2x weights to regression targets to balance two types of losses. It improved LB</li>
<li>added (x,y) position info as head input and two 1x1 convs to the head:
better local CV and private LB (my final sub score + 0.002),
but I discarded it since the public LB was worse (my final sub score - 0.007)</li>
</ul>
<h2>What did not work for me</h2>
<ul>
<li>smaller input size (w,h = 1536,512)</li>
<li>larger input size + grad accumulation (accumlation_step=2)</li>
<li>deformable convolution V2 (maybe because of my poor modeling skill)</li>
<li>bins with in-bins regression for pitch (bins=4) following CenterNet paper</li>
<li>predict pitch from camera view following CenterNet paper</li>
</ul>
<h2>What I should have tried</h2>
<ul>
<li>improve predictions of large cars. Below may be relevant:
9th solution: different models for cars at different positions
5th solution: FPN network</li>
<li>ensemble</li>
<li>other backbones (DLA34 or resnet34 or efficientnet-b0)</li>
<li>change the each-car distance threshold</li>
<li>flip augmentation</li>
<li>use pretrained model (and use mask info for loss calculation)</li>
</ul>
<hr>
<p>code is <a href="https://github.com/lisosia/kaggle-pku-autonomous-driving">here</a></p>
|
Peking University/Baidu - Autonomous Driving
|
7th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Nanashi!
We use the Hungarian algorithm (cost: Euclidean distance) for the ensemble, so some predictions are assigned based on the cost matrix and the others are ignored.
In our pipeline image, some CenterNet predictions (red points) are not assigned and are ignored.</p>
<p>Congrats to everyone on the excellent results!
I summarize my part of the solution and our post-processing below.
For the other parts:
<a href="https://www.kaggle.com/bamps53">camaro</a> part: <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127056">(part of) 7th place solution with code</a>
<a href="https://www.kaggle.com/hesene">Jhui He</a> and <a href="https://www.kaggle.com/lanjunyelan">yelan</a> part: <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/127034#726362">https://www.kaggle.com/c/pku-autonomous-driving/discussion/127034#726362</a></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1620223%2F6062aa1f543d25daab45fa475adb223d%2Fdrive_pipeline%20(6" alt="">.jpg?generation=1579652111438826&alt=media)</p>
<h2>Model1: detection and pose estimation</h2>
<h3>common setting</h3>
<p><strong>detection model</strong>
- Mask RCNN(mask head removed)
- backbone: resnext101-32x4d
- lvis pretrained</p>
<p><strong>pose estimation model</strong>
- HRNet-w18c, efficientnetb0/b3
- imagenet pretrained</p>
<p><strong>Loss</strong>
- classification: BCE
- detectoin: Focal Loss
- pose regression: L1 Loss</p>
<p><strong>detection: Optimizer and scheduler</strong>
- optimizer: SGD(lr=0.01, momentum=0.9, weight_decay=1e-4, nesterov=True)
- scheduler: CosineAnnealingWarmRestarts</p>
<p><strong>pose estimation: Optimizer and scheduler</strong>
- optimizer: Adam(lr=0.0001)
- scheduler: None</p>
<p><strong>Augmentation</strong>
- detection: horizontal flip
- pose estimation: horizontal flip, shift, rotate, random brightness/contrast</p>
<h3>1. pretrain on the boxy-vehicle-dataset</h3>
<p>At first, I train my model on the <a href="https://boxy-dataset.com/boxy/">boxy-vehicle-dataset</a>.
This dataset includes axis-aligned bounding boxes and 3d cuboids, but I use only the 2d bboxes.<br>
<strong>training setting</strong>:
- image resolution: 1232x1028
- epochs: 10
- batch_size: 4</p>
<h3>2. finetune on competition dataset</h3>
<p><strong>Model</strong>
- add depth head on top of model</p>
<p><strong>preprocess</strong>
- split train vs val = 9 vs 1
- create 3d bbox using label, then create axis aligned bbox
- depth -> 1 / sigmoid(depth) - 1</p>
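<p>The <code>depth -> 1 / sigmoid(depth) - 1</code> line refers to the common CenterNet-style depth parameterisation; a minimal sketch of the decode/encode pair, assuming that convention:</p>
<pre><code>import torch

def decode_depth(out):
    # CenterNet-style decoding of the raw network output into metres
    return 1.0 / torch.sigmoid(out) - 1.0

def encode_depth(z):
    # inverse of the decode above; algebraically this equals -log(z)
    return -torch.log(z)
</code></pre>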
<p><strong>training setting</strong>
- image resolution: 800 x 2800, 1400x3300
- depth loss: L1 Loss
- epochs: 50
- batch_size: 4</p>
<h3>3. pose estimation(yaw_sin, yaw_cos, pitch)</h3>
<p><strong>preprocess</strong>
- crop image by bbox and resize</p>
<p><strong>training setting</strong>
- image resolution: 320x480
- epochs: 30
- batch_size: 128</p>
<p>public LB/private LB
- 800x2800, single fold: 0.119/0.106
- 1400x3300, single fold: 0.106/0.111</p>
<h1>Model2: centernet</h1>
<p><strong>model</strong>
- <a href="https://github.com/xingyizhou/CenterNet">pytorch dla centernet</a>.
- regression of yaw_sin, yaw_cos, pitch, depth, 2d bbox size(w, h), 3d bbox size(w, h, l)
- classification of object centerness(heat map)</p>
<p><strong>Loss</strong>
- regression: L1 Loss
- classification: Focal Loss</p>
<p><strong>optimizer and scheduler</strong>
- optimizer: Adam(lr=5e-4)
- scheduler: CosineAnnealingLR(lr=5e-5)</p>
<p><strong>Augmentation</strong>
- pose estimation: horizontal flip, shift, random brightness/contrast</p>
<p><strong>preprocess</strong>
- depth -> 1 / sigmoid(depth) - 1</p>
<p><strong>training setting</strong>
- split train vs val = 8 vs 2
- epochs: 30
- batch_size: 12</p>
<p>public LB/ private LB
- single fold: 0.100/0.096</p>
<h1>Ensemble: Linear assignment</h1>
<p>We use different models (Faster R-CNN, CenterNet), so it is difficult to ensemble the predictions directly.
So we decided to ensemble the nearest points between predictions,
using the Hungarian algorithm for linear assignment.
Please refer to the code below.</p>
<pre><code>from scipy.optimize import linear_sum_assignment
import numpy as np
import pandas as pd
from tqdm import tqdm

distance_th = 30
yaw_th = 10

sub1 = pd.read_csv('sub1.csv')
y1 = sub1['PredictionString'].str.split(' ').values
X1 = sub1['ImageId'].values

sub2 = pd.read_csv('sub2.csv')
y2 = sub2['PredictionString'].str.split(' ').values
X2 = sub2['ImageId'].values

for idx in tqdm(range(len(sub1)), position=0):
    if str(np.nan) != str(y1[idx]) and str(np.nan) != str(y2[idx]):
        label1 = np.array(y1[idx]).reshape(-1, 7).astype(float)
        label2 = np.array(y2[idx]).reshape(-1, 7).astype(float)
        center_points1 = get_imgcoords(label1)  # [N, 3], (img_x, img_y, img_z)
        center_points2 = get_imgcoords(label2)

        # pairwise euclidean cost between the two sets of predicted centers
        cost_matrix = np.zeros([len(center_points1), len(center_points2)])
        for idx1, i in enumerate(center_points1):
            for idx2, j in enumerate(center_points2):
                cost_matrix[idx1, idx2] = np.linalg.norm(i - j)

        match1, match2 = linear_sum_assignment(cost_matrix)
        for i, j in zip(match1, match2):
            if cost_matrix[i, j] &lt; distance_th:
                if np.abs(label1[i][1] - label2[j][1]) &lt; yaw_th:
                    # average the two matched predictions into sub1's label
                    label1[i] = (label1[i] + label2[j]) / 2
        y1[idx] = label1.flatten().astype(str)

# use label1 (written back into sub1 / y1) for submission
</code></pre>
|
Peking University/Baidu - Autonomous Driving
|
9th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Peking University/Baidu - Autonomous Driving
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry for late reply.</p>
<ul>
<li>I did not use orientation in camera coordinate throughout this competition; I believed local orientation is more robust, especially for infrequent orientations (pitch is far from 0, +-pi).</li>
<li>The appearance sizes of cars are really different depending on the distance from the camera. For the models handling larger (closer) cars, I downsampled input images x0.5 and x0.3334 for training. Using a shared feature map for such different object sizes is a really mysterious design to me, and it is somewhat surprising that the original CenterNet achieved good results with it. I think a multi-scale model would be a better choice, but I used independent per-scale models for simplicity.</li>
</ul>
<p>I thank the host Peking University/Baidu and Kaggle team for holding this attractive competition, and congrats to all prize and medal winners.
Here is a brief summary of my solution (under construction).</p>
<h1>Approach</h1>
<p>I used two independent models. The first model (Model A) detects cars and estimates their orientations. The second model (Model B) estimates a depth map for each image. Using the image coordinate (ix, iy) from Model A, the depth (z) from Model B, and the camera intrinsics, the 3D coordinate (x, y, z) of each car is calculated. Estimating accurate depth is harder than car detection or orientation estimation, so I separated the depth part into a dedicated model (Model B).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F745525%2F0f0e8433bfc2d1baf24b657833a5ac6d%2Fpku.png?generation=1579657768348051&alt=media" alt=""></p>
<h1>Model A</h1>
<p>Similar to CenterNet, but there are some modifications in targets and losses.</p>
<h2>Targets</h2>
<p>Model A detects cars and estimates their image coordinate (ix, iy) (not the 3D camera coordinate (x, y, z) required for submission) and (yaw, pitch, roll).
Therefore, the targets are (conf, dx, dy, sin(pitch), cos(pitch), yaw, roll).
conf is the confidence map for detecting car centers. (dx, dy) is the relative position of the car center in the feature grid (0-1).
(yaw, pitch, roll) is the local orientation, not the orientation in the camera coordinate system that is given as ground truth. I used local orientation because the original ground-truth orientation is hard to estimate from car appearance without context (camera position and image coordinate).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F745525%2Fa34f2f2cd8c17f49c5c89b2db92bd5c2%2F2020-01-22%2010.32.44.png?generation=1579657792448454&alt=media" alt=""></p>
<p>[1] A. Mousavian et al., "3D Bounding Box Estimation Using Deep Learning and Geometry," in Proc. of CVPR, 2017.</p>
<p>This modification of orientation is done by rotation matrix that moves camera center to target car center:</p>
<pre><code>yaw = 0
pitch = -np.arctan(x / z)
roll = np.arctan(y / z)
r = Rotation.from_euler("xyz", (roll, pitch, yaw))
</code></pre>
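<p>For completeness, a hedged sketch of how the camera-frame ground truth could be converted into this local orientation target by composing with <code>r</code>; the composition order and Euler convention are our assumptions, not taken from the author's code:</p>
<pre><code>import numpy as np
from scipy.spatial.transform import Rotation

def local_orientation(yaw, pitch, roll, x, y, z):
    # rotation that turns the optical axis toward the car centre (same construction as above)
    r_view = Rotation.from_euler("xyz", (np.arctan(y / z), -np.arctan(x / z), 0.0))
    # camera-frame ground-truth orientation (Euler convention assumed)
    r_global = Rotation.from_euler("xyz", (roll, pitch, yaw))
    # express the car orientation in the per-object "viewing" frame
    return (r_view.inv() * r_global).as_euler("xyz")
</code></pre>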
<h2>Losses</h2>
<p>Cross entropy is used for conf, and L2 loss is used for the other targets.</p>
<h2>Architectures</h2>
<p>I separately trained different models as Model A (four models) to detect different sizes of cars by grouping cars according to their depth; 0-25, 20-50, 40-80, and 70-180. For the former two groups (closer cars), efficientnet or se-resnext is used to get x32 downsampled feature map. For the latter two groups, segmentation models (efficientnet or se-resnext + FPN) are used from segmentation_models.pytorch to get finer feature maps (x16).</p>
<h2>Augmentations</h2>
<p>Augmentation is difficult part; it is related to how ground-truth of zoomed or flipped test images is created.</p>
<ul>
<li>flip (around principal point instead of image center: img[:, :3374] = cv2.flip(img[:, :3374], 1))</li>
<li>rotation (around principal point; img = np.array(Image.fromarray(img).rotate(theta, center=(1686.2379, 1354.9849))) (-7 to 1 degrees)</li>
<li>RandomBrightnessContrast</li>
<li>Random scaling and crop</li>
</ul>
<h2>Optimizers</h2>
<p>Trained for 160 epochs with Adam; LR = 0.0001 and decreased by 0.1 at epoch 100 and 140.</p>
<h2>Ensemble</h2>
<p>Several k-fold models are integrated at feature map level (model raw outputs are averaged), but it seems to not work (why...?)</p>
<h1>Model B</h1>
<h2>Input</h2>
<p>For Model B, a fixed area of the input images is used for training and test: img[1558:, 23:3351]. Also, relative image coordinates from the principal point are added to the input in order to exploit the context of the fixed camera position relative to the ground (thus, the input is the gray image plus dx, dy).</p>
<h2>Targets and Losses</h2>
<p>The second model predicts only the depth of cars. Actually, I trained Model B to predict (x, y, z) but used only the z information.
Thus, the target is the 3D car coordinate (x, y, z). The loss is calculated only on the feature-map pixels where car centers exist.
The loss function used here is MSE normalized by the distance from the camera: ||pred - gt||_2 / ||gt||_2.
This choice comes from my assumption that the evaluation criterion for translation is relative rather than in meters.</p>
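<p>A minimal PyTorch sketch of that normalized loss (tensor shapes are assumptions; it follows the squared form the author gives later in this thread):</p>
<pre><code>import torch

def normalized_mse(pred, gt):
    """pred, gt: (N, 3) camera coordinates (x, y, z) of matched car centres.
    The error is normalised by the squared distance from the camera, i.e. relative rather than in metres."""
    return (((pred - gt) ** 2).sum(dim=1) / (gt ** 2).sum(dim=1)).mean()
</code></pre>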
<h2>Architectures</h2>
<p>Segmentation models (efficientnet or se-resnext + FPN) are used to get feature maps (x16).</p>
<h2>Augmentations</h2>
<ul>
<li>flip (around principal point instead of image center: img[:, :3374] = cv2.flip(img[:, :3374], 1))</li>
<li>RandomBrightnessContrast</li>
</ul>
<h2>Optimizers</h2>
<p>Trained for 100 epochs with Adam; LR = 0.0001 and decreased by 0.1 at epoch 70.</p>
<h1>Questions</h1>
<p>After finishing the competition, several questions are still left to the participants. I would really appreciate it if the organizers answered the following questions.</p>
<ul>
<li>How was ground-truth for flipped test image created? I guess simply done by x = -x, roll = -roll, pitch = -pitch and this is not accurate as discussed in <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/123653">https://www.kaggle.com/c/pku-autonomous-driving/discussion/123653</a></li>
<li>How was ground-truth for zoomed test image created? I guess it is done by z = z / scale_factor. The other option is leave the ground truth as it is but I think this is less appropriate as we do not know new camera intrinsic.</li>
<li>What was evaluation metric? Discussed in <a href="https://www.kaggle.com/c/pku-autonomous-driving/discussion/124489">https://www.kaggle.com/c/pku-autonomous-driving/discussion/124489</a></li>
</ul>
<p>Actually, in addition to depth (z), x and y are also regressed with normalized MSE;
((x - x')^2 + (y - y')^2 + (z - z')^2) / (x^2 + y^2 + z^2),
where (x, y, z) is gt and (x', y', z') is prediction.</p>
<p>I see. Really nice inspiration for checking this...</p>
<p>Yes, I did not check the absolute error because it does not seem to be the evaluation metric.
The absolute error should depend heavily on the distance from the camera (or the z coordinate); distant cars are relatively difficult.</p>
<p>Related to the targets of Model A, I just published a notebook that calculates local orientations and searches for cars with similar orientations.
<a href="https://www.kaggle.com/ren4yu/show-cars-with-similar-local-orientations">https://www.kaggle.com/ren4yu/show-cars-with-similar-local-orientations</a></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F745525%2Fecd99b3ac37448074ccf1070e15e09fb%2F2020-01-23%200.17.00.png?generation=1579706306189090&alt=media" alt=""></p>
|
Personality Prediction Based on Twitter Stream
|
What I learned and did
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Personality Prediction Based on Twitter Stream
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Forum postings in other competitions have helped me a lot so I thought I'd share what I learned and did in this competition.</p>
<p>I learned that luck plays a big role in determining the competition winner. In cleaning my code for submission, I ran through everything again. While it beat my best model on the public leaderboard, it would not have been good enough to win on the private
leaderboard.</p>
<p>Another important thing I learned is to always check the submission file. I wasted two submissions from badly formatted data frames and another two from transferring the model weights incorrectly from my cross validation code to the final submission code.
Luckily, I caught the latter mistake a day before the competition ended and the resubmitted model won the competition.</p>
<p>Since the data set is so small, I combined the training and test data sets and used Excel and conditional formatting for data exploration. It's similar to what Jeremy Howard did (http://media.kaggle.com/MelbURN.html). Two patterns pop out.</p>
<p>Blocks of zeros tend to occur together for the Fvar variables with highly correlated non-zero values. I replaced those columns with averages (Fvar29-33, Fvar38-46, Fvar53-55, Fvar64-65, Fvar75-78). I also deleted columns with no variation (Avar9, Fvar23-28).</p>
<p>The second pattern relates to missing values. Some observations have no features and some observations had no Fvar or Lvar values. So I split the data set into three and imputed/modeled them separately. The motivation was to impute as little as possible
to reduce overfitting to the data. kNN in the R imputation package was used.</p>
<p>No other feature selection was performed since, in a way, feature selection was already done in creating these derived variables. All I did was “feature elimination” and throw away really bad ones.</p>
<p>Faced with many unknown and likely un-important features, I concentrated on ensembles of weak learners and those that do feature space transformations. All tuning was done in R with repeated 2-fold cross validation. The first four models I fitted (random
forests, gradient boosted machines, support vector machines, and multivariate adaptive regression splines) ended up in the final weighted average blend. None of the subsequent models improved the blend so were not used. I didn't try more sophisticated blending
techniques.</p>
<p>I'm curious as to what others did in feature selection. And any other good models besides the ones I mentioned? I'm especially curious about people that did well against both the public and private leaderboards. How did you avoid overfitting?</p>
<p> </p>
|
Personalize Expedia Hotel Searches - ICDM 2013
|
My Approach
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Personalize Expedia Hotel Searches - ICDM 2013
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, </p>
<p>I am working hard on the report paper now.</p>
<p>I think after finishing the report I will post a draft on the old link: <a href="http://antinucleon.com/?p=587">http://antinucleon.com/?p=587</a></p>
<p>I will add some thing deleted from the report or not suitable to add in the report in the blog.</p>
<p>[quote=TotalNovice;33564]</p>
<p>I'm sorry if I missed the post somewhere in all the spam on this thread, but @binghsu, did you end up posting your more detailed description to your blog? If so, could you post the link?</p>
<p>[/quote]</p>
<p>Interesting. The longest training time for my models is 4 hours: a random forest with 3200 trees on a Core i5 machine. It achieves nearly 0.5228.</p>
<p>As a team member, I will share my model first.</p>
<p>I use deep learning and random forest, on nearly 8% of the data.</p>
<p>I will add more information later or in my blog</p>
<p> </p>
<p>I think it is a good idea... lol</p>
<p>[quote=Dmitry Efimov;33356]</p>
<p>[quote=Gert;33245]</p>
<p><span style="line-height: 1.4">@SURECOMMENDERS: don't understand why your only achievement is team member, I'd like to nominate you for forum regular as well</span></p>
<p><span style="line-height: 1.4">[/quote]</span></p>
<p><span style="line-height: 1.4">Is there a nomination "forum spamer"? :)</span></p>
<p>[/quote]</p>
|
Personalized Web Search Challenge
|
Share your approach?
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Personalized Web Search Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is mine:</p>
<p>I tried the Pointwise approach of ranking using association rules for the task. I found the user's past click history and past relevance results for the current query (across all users) to be most useful. I also found that associations with domains to be more useful that URLs.</p>
<p>To validate my models, I extracted 900k sessions from Day 26 of training data, and randomly sampled queries to be test queries, according to the selected criteria described. </p>
<p>My code is written in Python and ran on a 4-core desktop with 24GB of RAM. The actual prediction of test data takes less than 5 mins using pre-built models.</p>
<p>But as my models became more complex, even 24GB RAM wasn't enough. To make the test prediction task more manageable, I extracted the users and queries from the test set and only kept model data for these users and queries in memory.</p>
<p>After the 0.798 level, my progress stalled and everything I added to the model only reduced the score. I am not sure why that was...</p>
<p>Other feature/model ideas I tried:</p>
<ul>
<li><span style="line-height: 1.4">Associations of n-grams individual terms in queries with URLs and domains. Trigrams seemed to add most value.</span></li>
<li><span style="line-height: 1.4">List of top users for each query and using their relevance history.</span></li>
<li><span style="line-height: 1.4">Finding similar users based on past query history. This was slow and memory consuming, so I had to use a lot of shortcuts and tricks.</span></li>
<li><span style="line-height: 1.4">Associations of the queries made in the current session with URLs/domains. Didn't seem to add much value.</span></li>
<li><span style="line-height: 1.4">Associations of URLs clicked on in the current session with the URLs/domains. This seemed to help on my validation data, but performed poorly on the test data.</span></li>
<li><span style="line-height: 1.4">Prior probabilities of domains/URLs didn't seem to help.</span></li>
</ul>
<p>I am curious to know how other people approached this task. As the research suggests, did LambdaMART and other such models work the best?</p>
|
PetFinder.my - Pawpularity Contest
|
107th solution. Our first time to kaggle!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot!</p>
<ul>
<li><p>First, thanks to every kind Kaggler for sharing their ideas!</p></li>
<li><p>This is our first time kaggling! Here is the final solution: </p></li>
</ul>
<h2>part_01</h2>
<hr>
<ul>
<li>Backbone: swin_large384.</li>
<li>Head: a linear head plus an SVR that embeds the metadata.</li>
<li>Here is the part_01 model.<br>
<pre><code>[image] -> swin_large384 -> Linear(784,128) -------------------------+
[Metadata: 'SubjectFocus', ..., 'Eyes', 'Info', 'Blur'] ( ,12) -------+--> Linear(140,64) -> Linear(64,1) --+
[Metadata ( ,12)] ------------------------------------------------------------------------------------------+--> SVR(13,1) -> Prediction_01( ,1)
</code></pre></li>
</ul>
<h2>part_02</h2>
<hr>
<ul>
<li><p>In this part didn't use any metadata.</p></li>
<li><p>Backbone: swin_large384 and swin_large224</p></li>
<li><p>Head: 3 linear layers.</p></li>
<li><p>Using a ridge model to integrate those two models prediction.</p></li>
<li><p>Here is the part_02 model.</p>
<pre><code>[image] -> swin_large384 -> Linear x 3 --+
                                         +--> Ridge -> Prediction_02( ,1)
[image] -> swin_large224 -> Linear x 3 --+
</code></pre></li>
</ul>
<h2>part_03</h2>
<hr>
<ul>
<li>Using FastAI TTA (Test-Time Augmentation) to make the prediction:<br>
<pre><code>[image] -> swin_large224 -> TTA -> Prediction_03( ,1)
</code></pre></li>
</ul>
<hr>
<p><code>Final_Prediction = Prediction_01*0.3 + Prediction_02*0.3 + Prediction_03*0.4</code></p>
|
PetFinder.my - Pawpularity Contest
|
12th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, thanks to the host and every participant in this competition.<br>
My solution is below.</p>
<h3>Summary</h3>
<p>I used weighted ensemble of 3 models which consist of 2fastai models and 1pytorch models.</p>
<p>Fastai models are based on <a href="https://www.kaggle.com/tanlikesmath/petfinder-pawpularity-eda-fastai-starter" target="_blank">this</a> , <br>
and a pytorch model is based on <a href="https://www.kaggle.com/phalanx/train-swin-t-pytorch-lightning" target="_blank">this</a>.<br><br>
The detail of each model is as follows.</p>
<ul>
<li>beit_large_patch16_224<ul>
<li>framework: fastai</li>
<li>lr: 2e-5 </li>
<li>epoch: 10</li>
<li>scheduler: OneCycleScheduler</li>
<li>batch size: 4</li>
<li>gradient accumulate: 16</li>
<li>fold: 10</li>
<li>augmentation: Flip,Brightness,Contrast, Saturation</li>
<li>tta: 3x, beta=0.5, Flip only.</li>
<li><strong>CV: 17.46</strong></li>
<li><strong>public/privateLB: 18.00/17.20</strong><br>
<br></li></ul></li>
<li>swin_large_patch4_window7_224<ul>
<li>framework: fastai</li>
<li>lr: 4e-5</li>
<li>epoch: 5</li>
<li>scheduler: OneCycleScheduler</li>
<li>batch size: 4</li>
<li>gradient accumulate: 8</li>
<li>fold: 10</li>
<li>augmentation: Flip,Brightness, Contrast, Saturation</li>
<li>tta: 4x, beta=0.5, Flip only.</li>
<li><strong>CV: 17.47</strong></li>
<li><strong>public/privateLB:17.87/17.02</strong> <br>
<br></li></ul></li>
<li>swin_large_patch4_window12_384<ul>
<li>framework: pytorch-lightning (On inference,I used pure pytorch)</li>
<li>lr: 1e-5</li>
<li>epoch: 20 (using early stopping)</li>
<li>scheduler: CosineAnnealingWarmRestarts(T_0=20, eta_min=1e-4)</li>
<li>batch size: 4</li>
<li>gradient accumulate: 8</li>
<li>fold: 10</li>
<li>augmentation:RandomHorizontalFlip, RandomVerticalFlip, RandomAffine, ColorJitter</li>
<li>tta: 3x, Horizonal Flip and Random Rotate(rotate=45).</li>
<li>additional_head: SVR head </li>
<li><strong>CV: 17.33</strong></li>
<li><strong>public/privateLB: 17.86/17.01</strong></li></ul></li>
</ul>
<p>The loss function I used is BCEWithLogitsLoss, and Mixup is common across all models.<br>
Besides, I used optuna for weight optimization.</p>
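<p>For readers curious how the optuna weight search might look, here is a minimal sketch (the placeholder data and the normalisation of the weights are assumptions):</p>
<pre><code>import numpy as np
import optuna

# placeholders: out-of-fold predictions from 3 models and the true targets (assumed shapes)
rng = np.random.default_rng(0)
y = rng.uniform(1, 100, size=1000)
oof_preds = [y + rng.normal(0, 18, size=1000) for _ in range(3)]

def objective(trial):
    w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(oof_preds))])
    w = w / (w.sum() + 1e-12)                          # normalise weights to sum to 1
    blend = sum(wi * p for wi, p in zip(w, oof_preds))
    return float(np.sqrt(np.mean((blend - y) ** 2)))   # RMSE of the weighted blend

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=200)
print(study.best_params)
</code></pre>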
<p>Ensemble <strong>CV/publicLB/privateLB: 17.121/17.774/16.936</strong></p>
<h3>What worked well</h3>
<p>The following two things worked well for me.</p>
<ul>
<li>SVR head <br>
It improved CV by around 0.15 (17.51→17.33) and private LB by a similar degree (17.17→17.01).</li>
<li>Ensembling 'beit_large_patch16_224' with Swin<br>
At first, I ensembled only Swin models and got a CV around 17.21 (private LB 16.947~16.975).<br>
Finally, I added Beit-L into the ensemble and got a CV around 17.121 (private LB 16.917~16.936).<br><br>
I also tried ensembling ConvNext-XL instead of Beit-L; the result is similar to the case of using Beit-L (CV: 17.121, public LB: 17.835, private LB: 16.936).</li>
</ul>
<h3>About calculating RMSE CV on fastai</h3>
<p>Using AccumMetric is good to me.<br>
(This method is mentioned in several discussions. For example, <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/296369" target="_blank">this</a>)</p>
<p>Using this method,<br>
the predictions and targets of each batch are stored during the epoch,<br>
and at the end, the metric is applied to the full set of stored predictions and targets.</p>
<p>Before using this method,<br>
I sometimes got suspiciously good RMSE CV, for example less than 16 on one fold.<br>
The reason is probably that, by default, fastai's learner calculates the metric for each batch first and then averages the per-batch values.</p>
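<p>A minimal sketch of that AccumMetric usage (assuming logits trained with BCE against target/100, as in the public baselines):</p>
<pre><code>import torch
import torch.nn.functional as F
from fastai.metrics import AccumMetric

def petfinder_rmse(preds, targs):
    # preds are raw logits; targets are Pawpularity / 100
    return 100 * torch.sqrt(F.mse_loss(torch.sigmoid(preds.flatten()), targs.flatten()))

rmse = AccumMetric(petfinder_rmse)   # accumulates preds/targets over the epoch, then applies the function
# learn = Learner(dls, model, loss_func=BCEWithLogitsLossFlat(), metrics=rmse)
</code></pre>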
<h3>Inference code</h3>
<p>The link is <a href="https://www.kaggle.com/kurokurob/infer-of-12th-solution/notebook" target="_blank">here</a>.<br>
<br><br>
Thanks for reading!</p>
<p>SVR worked very well for me; without it, I couldn't have reached gold.<br>
I would like to keep this method in mind for future competitions.<br>
Thanks for sharing!</p>
<p>Thank you for the comment!</p>
<p>In training, hflip worked well for CV.</p>
<p><code>did you see significant difference in the performance of SWIN 224 and 384</code><br>
In a similar setting, swin 384 is better than swin 224 in CV by about 0.05~0.1.</p>
<p>Thanks for the comment!<br>
I think my Beit gives my models diversity, since I was only using Swin at first.</p>
|
PetFinder.my - Pawpularity Contest
|
13th place solution - How I survived the shakeup
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot! Your solution with single model really impressed me!</p>
<p>Good work as well! The shakeup and LB in this competition are kinda weird: some people have good CV-public LB correlation and some quite bad. Good experience to always look for stable validation approaches though</p>
<p>Thanks! Was kinda worried haha, thought that I had bugs somewhere in my pipeline, my last submission had much lower cv but the same lb score, but everything went fine :)</p>
<p>400th-something place on public, 13th place on private, here we go!</p>
<h2>Training and validation</h2>
<p>CV-LB correlation in this competition was very bad, and because of that I decided to focus on just improving my CV score as much as I can. I was using 10-fold stratified CV with binned target (20 bins), which improved the stability from just using KFold or Stratified Kfold by target.</p>
<p>I was not using early stopping anywhere, because I was worried that it would give me heavily biased OOF scores, which would make my CV even less reliable. Instead of trying to find models with the best OOF score, I was focused on finding good training regime that would give me best and most stable mean CV scores.</p>
<p>For optimization, using AdamW with cosine annealing linear warmup schedule was good. Training was usually done for 3-5 epochs.</p>
<h2>Model ensemble</h2>
<p>As many people in this competition, I started with swin models, but later focused more on ensemble and tried different models (both transformers and CNNs). Final ensemble is:</p>
<table>
<thead>
<tr>
<th>Modelname</th>
<th>CV score (with hflip TTA)</th>
</tr>
</thead>
<tbody>
<tr>
<td>swin_large_patch4_window7_224</td>
<td>17.520346</td>
</tr>
<tr>
<td>swin_large_patch4_window12_384</td>
<td>17.459443</td>
</tr>
<tr>
<td>swin_large_patch4_window12_384 CE loss</td>
<td>17.434677</td>
</tr>
<tr>
<td>vit_large_patch16_224</td>
<td>17.793921</td>
</tr>
<tr>
<td>vit_base_patch16_384</td>
<td>17.803793</td>
</tr>
<tr>
<td>ig_resnext101_32x8d</td>
<td>18.08631</td>
</tr>
<tr>
<td>tf_efficientnet_b4_ns</td>
<td>18.158774</td>
</tr>
<tr>
<td><strong>Ridge OOF ensemble</strong></td>
<td><strong>17.134049</strong></td>
</tr>
</tbody>
</table>
<p>The best model was swin_large_patch4_window12_384 with cross-entropy loss, adding vit and resnext really improved final ensemble score. </p>
<p>Final ensemble is done using Ridge regression on OOF scores.</p>
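<p>A minimal sketch of Ridge stacking on OOF predictions (array names, shapes and the placeholder data are assumptions):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge

# oof: (n_samples, n_models) out-of-fold predictions, y: (n_samples,) Pawpularity,
# test_preds: (n_test, n_models) -- placeholders stand in for the real arrays
rng = np.random.default_rng(0)
y = rng.uniform(1, 100, 1000)
oof = np.stack([y + rng.normal(0, 18, 1000) for _ in range(7)], axis=1)
test_preds = np.stack([rng.uniform(1, 100, 200) for _ in range(7)], axis=1)

ridge = Ridge(alpha=1.0)
ridge.fit(oof, y)
print("ensemble OOF RMSE:", np.sqrt(np.mean((ridge.predict(oof) - y) ** 2)))
final = ridge.predict(test_preds)
</code></pre>
<p>Note that scoring on the same OOF matrix the ridge was fit on is slightly optimistic; a nested split or per-fold refit avoids that.</p>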
<h2>Loss functions</h2>
<p>I tried different loss functions - MSELoss, L1Loss and HuberLoss were all bad, BCELoss was the best overall, and I also tried using multiclass cross entropy loss with target binned into 20 bins, and predicting averages/bin centers for each bin. It was situationally good - the best model is using it, but overall it requires quite a bit of additional training regime tuning, and wasn't working well for other models (or I just couldn't find good training regime for them). Also, for some reason this loss function really hates mixup (and I've tried different ways to use it, so yeah)</p>
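<p>For reference, one way to implement the binned cross-entropy variant described above (the bin edges and the expectation-style decoding are assumptions, not necessarily the exact setup used here):</p>
<pre><code>import torch
import torch.nn.functional as F

n_bins = 20
bin_edges = torch.linspace(1, 100, n_bins + 1)        # Pawpularity range is 1..100
bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2    # or per-bin target means, as an alternative

def ce_loss(logits, target):
    # logits: (B, n_bins), target: (B,) raw Pawpularity
    bins = torch.bucketize(target, bin_edges[1:-1])    # bin index in [0, n_bins-1]
    return F.cross_entropy(logits, bins)

def predict_score(logits):
    # expectation of the bin centres under the predicted distribution
    return (logits.softmax(dim=1) * bin_centers).sum(dim=1)
</code></pre>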
<h2>Final notes and tips</h2>
<ul>
<li>Focus on improving CV!</li>
<li>In my opinion, early stopping can be bad if not used carefully</li>
<li>Batch size 8 is better than 16 or 32 for generalization (try to lower batch size next time)</li>
<li>Try different models and check OOF ensemble score, sometimes even models with relatively bad CV scores can really improve your ensemble</li>
<li>External data from another Petfinder competition was not useful (correlation between Pawpularity and AdoptionSpeed, cat/dog or breed data was very low on overlap data (around 1500 images btw))</li>
<li>CLIP labelling was not useful (Imagenet-pretrained models can already predict cat/dog and even their breeds pretty well)</li>
</ul>
|
PetFinder.my - Pawpularity Contest
|
144th place solution & trial result note
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I use ridge weights computed from the OOF results of each model.<br>
Stacking means a ridge model trained on the OOF results and then used for inference in the same manner.</p>
<p>I thought stacking would overfit in this competition because the target data looks ambiguous.</p>
<p>Thank you for nice competition and competitors!</p>
<p>This is the 144th place solution and a note of my trial results, for my own learning.<br>
When a high-scoring notebook was published, I dropped out of the silver zone and was worried.<br>
However, I trusted my scratch-built code and my CV.<br>
This is my 2nd silver medal, so I want a gold next time for Competitions Master!</p>
<h1>Solution</h1>
<p>I used 7 models with ridge weight for final ensemble.</p>
<p>Private : 17.05251<br>
Public : 17.83720<br>
CV : 17.16897</p>
<h1>Model architecture</h1>
<p>My model architecture is below.</p>
<pre><code>Image -> BaseModel -> GAP -> Feature -> Feature1
Categorical feature -> Embed, Dropout, SiLU -> Feature2
Feature1 -> Concat -> Linear, Dropout, SiLU -> Linear -> Sigmoid
Feature2 -----^
</code></pre>
<p>Using the category features is better than an image-only model.<br>
When using SVR, I dropped the last Linear+Sigmoid and used the feature vector as the SVR input (a minimal sketch of the head follows).</p>
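<p>A minimal sketch of the head described above (hidden sizes, dropout rates, and using a Linear layer for the "Embed" step are assumptions):</p>
<pre><code>import torch
import torch.nn as nn
import timm

class PawModel(nn.Module):
    def __init__(self, backbone="swin_large_patch4_window12_384", n_meta=12, emb_dim=64):
        super().__init__()
        # num_classes=0 makes timm return globally pooled features (the "GAP" step)
        self.backbone = timm.create_model(backbone, pretrained=True, num_classes=0)
        self.meta = nn.Sequential(nn.Linear(n_meta, emb_dim), nn.Dropout(0.2), nn.SiLU())
        self.head = nn.Sequential(
            nn.Linear(self.backbone.num_features + emb_dim, 128), nn.Dropout(0.2), nn.SiLU(),
            nn.Linear(128, 1),
        )

    def forward(self, image, meta):
        f1 = self.backbone(image)                 # image feature
        f2 = self.meta(meta)                      # categorical/meta feature
        return self.head(torch.cat([f1, f2], dim=1)).sigmoid()   # Pawpularity / 100
</code></pre>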
<table>
<thead>
<tr>
<th>BaseModel</th>
<th>Input image size</th>
<th>fold n</th>
<th>CV</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>swin_large_patch4_window12_384</td>
<td>384</td>
<td>10</td>
<td>17.619706</td>
<td>0.15085365</td>
</tr>
<tr>
<td>swin_base_patch4_window12_384</td>
<td>384</td>
<td>10</td>
<td>17.650229</td>
<td>0.12061267</td>
</tr>
<tr>
<td>vit_large_r50_s32_384 +SVR</td>
<td>384</td>
<td>10</td>
<td>17.582580</td>
<td>0.18084586</td>
</tr>
<tr>
<td>vit_large_patch16_384</td>
<td>384</td>
<td>10</td>
<td>17.639636</td>
<td>0.13868346</td>
</tr>
<tr>
<td>DOLG + effifientnetb5</td>
<td>512</td>
<td>10</td>
<td>17.767653</td>
<td>0.11761226</td>
</tr>
<tr>
<td>dm_nfnet_f6</td>
<td>512</td>
<td>5</td>
<td>17.597885</td>
<td>0.16820361</td>
</tr>
<tr>
<td>dm_nfnet_f4 + SVR</td>
<td>512</td>
<td>5</td>
<td>17.637847</td>
<td>0.12318849</td>
</tr>
</tbody>
</table>
<p>dm_nfnet_f6 costs a lot of inference time, but it got a large weight.</p>
<h1>Training</h1>
<ul>
<li>RMRSE loss + SmoothL1 loss</li>
</ul>
<p>RMRSE = Root Mean Root Squared Error (I made a mistake in the code, but it was a lucky mistake :)<br>
RMRSE + SmoothL1 is better than RMRSE, SmoothL1 or BCE alone.</p>
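<p>A hedged sketch of one literal reading of RMRSE combined with SmoothL1 (the exact formulation and the equal weighting of the two terms are assumptions):</p>
<pre><code>import torch
import torch.nn.functional as F

def rmrse(pred, target, eps=1e-8):
    # literal reading of "Root Mean Root Squared Error": sqrt of the mean of sqrt((pred - target)^2)
    return torch.sqrt(torch.mean(torch.sqrt((pred - target) ** 2 + eps)))

def loss_fn(pred, target):
    # RMRSE combined with SmoothL1, as described above
    return rmrse(pred, target) + F.smooth_l1_loss(pred, target)
</code></pre>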
<ul>
<li><p>Augmentation</p></li>
<li><p>small learning rate + frequent model validation checks + early stopping</p></li>
</ul>
<h1>Not work</h1>
<ul>
<li>Other models (ResNet, SEResNeXt, EfficientNet, EfficientNet v2, Swin_xxx_window7, CAIT, DeIT, ConvNeXt)</li>
<li>Separate cat/dog prediction (the cat loss is better than the dog loss, but the total loss was not improved)</li>
<li>Mixup</li>
<li>Hybrid SwinTransformer</li>
<li>DOLG + ResNet, Efficientb0~b7 except b5</li>
<li>TTA (CV was not improved)</li>
</ul>
<h1>Regret</h1>
<ul>
<li>Stacking with ridge was better than my final submission: Private 17.02882, Public 17.86157, CV 17.1470</li>
</ul>
|
PetFinder.my - Pawpularity Contest
|
14th Place Solution- KMS
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I see, will check closely why it did not work next time, most of the time the same approach won't work due to small bugs… the devil hides in details, thanks for pointing out~</p>
<p>I see, thanks for sharing! I did not notice there were duplicates but just used it for pseudo-labeling blindly… learned a lesson from your comments… How did you figure it out?</p>
<p>Thanks to the competition host for this competition and the great works from <a href="https://www.kaggle.com/mcggood" target="_blank">@mcggood</a> and <a href="https://www.kaggle.com/shivamb" target="_blank">@shivamb</a>!</p>
<h3>Solution</h3>
<p>Overview: Ensemble of diverse feature-extraction and finetune-based models, diversity in terms of models and image sizes.</p>
<h4>Feature Extraction Based:</h4>
<ul>
<li>tf_efficientnet_l2_ns + 1024 image size + ridge: cv 17.68</li>
</ul>
<h4>Finetuning Based: (All with heavy augmentation)</h4>
<ul>
<li>tf_efficientnet_b4_ns + 384 image size: cv 17.79</li>
<li>vit_large_patch16_384+ 384 image size: cv 17.63</li>
<li>beit_large_patch16_512+ 512 image size: cv 17.59</li>
<li>swin_large_patch4_window7_224 + 224 image size: cv 17.49</li>
<li>swin_large_patch4_window12_384+ 384 image size: cv 17.48</li>
</ul>
<h4>Ensemble: cv 17.144</h4>
<h3>What we have tried but did not work</h3>
<ol>
<li>more image meta features from maskrcnn (how many pets / how much of the area they occupy…)</li>
<li>usage of previous competition data (pretrain / pseudo-labeling)</li>
<li>pet-centric crop / randomly cropping N pets at a time: improved single-model CV by 0.2, but not helpful for ensembling</li>
</ol>
|
PetFinder.my - Pawpularity Contest
|
17th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>Summary</h3>
<p>Our final submission consists of <strong>(10 PyTorch models)*0.7</strong> + <strong>(1 FastAI model)*0.3</strong>. The (10 PyTorch models) are trained using 5-fold CV, and the (1 FastAI model) is trained using 8-fold CV. All models are trained with pseudo-labeled data from the previous "PetFinder Adoption Speed" competition dataset. We use simple averaging for the (10 PyTorch models).</p>
<h3>Ensemble</h3>
<p>Our ensemble also includes some models trained on "cats only" and "dogs only" images; CV and LB improve marginally when training these cats-only and dogs-only models individually. The rationale is that "cats" are easier to predict than "dogs", so we placed more emphasis on improving the single dog model, and we also included those models in our ensemble. We used two TTAs, <strong>1. "CenterCrop"</strong> and <strong>2. "CenterCrop + HFlip"</strong> only; using other TTAs decreased our CV.</p>
<h3>Models</h3>
<p>All models were trained with BCE loss; the augmentation configuration varies slightly between models and consists of "<strong>HFlip</strong>", "<strong>ColorJitter</strong>", "<strong>CenterCrop</strong>", "<strong>RandomResizedCrop</strong>", "<strong>ToGray</strong>", "<strong>ShiftScaleRotate</strong>", "<strong>IAASharpen</strong>" and "<strong>MixUp</strong>". We found that this approach improves model diversity even when using the same architecture.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>swin_large_patch4_window7_224_in22k</td>
<td>16.5</td>
<td>17.87</td>
<td>17</td>
<td>external data (FastAI)</td>
</tr>
<tr>
<td>ig_resnext101_32x8d</td>
<td>17.14</td>
<td>18.04</td>
<td>17.08</td>
<td>external data</td>
</tr>
<tr>
<td>ig_resnext101_32x8d (x2)</td>
<td>17.1</td>
<td>18.06</td>
<td>17.06</td>
<td>external data (cat only + dog only model)</td>
</tr>
<tr>
<td>vit_large_patch16_224_in21k (x2)</td>
<td>17</td>
<td>18.04</td>
<td>17.22</td>
<td>external data (cat only + dog only model)</td>
</tr>
<tr>
<td>tf_efficientnet_b4_ns (x2)</td>
<td>16.98</td>
<td>18.09</td>
<td>17.2</td>
<td>external data (cat only + dog only model)</td>
</tr>
<tr>
<td>swin_large_patch4_window7_224_in22k</td>
<td>17.04</td>
<td>18</td>
<td>17.07</td>
<td>external data</td>
</tr>
<tr>
<td>tf_efficientnet_b4_ns</td>
<td>17.07</td>
<td>18.03</td>
<td>17.1</td>
<td>external data</td>
</tr>
<tr>
<td>vit_large_patch16_224_in21k</td>
<td>17.08</td>
<td>17.96</td>
<td>17.07</td>
<td>external data</td>
</tr>
<tr>
<td>swin_large_patch4_window12_384_in22k</td>
<td>16.99</td>
<td>17.99</td>
<td>17.04</td>
<td>external data</td>
</tr>
<tr>
<td>ig_resnext101_32x8d</td>
<td>17.06</td>
<td>17.98</td>
<td>17.1</td>
<td>external data</td>
</tr>
<tr>
<td>convnext_base_22k_1k_384</td>
<td>16.98</td>
<td>17.93</td>
<td>17.05</td>
<td>external data</td>
</tr>
<tr>
<td>convnext_large_22k_1k_384</td>
<td>16.91</td>
<td>17.99</td>
<td>17.04</td>
<td>external data</td>
</tr>
</tbody>
</table>
<h3>What worked</h3>
<ul>
<li>External data</li>
<li>Simple averaging</li>
<li>Different combinations of augmentation during training</li>
<li>TTA using "<strong>CenterCrop</strong>" and "<strong>CenterCrop + HFlip</strong>" improved CV ranges between ~0.01 to 0.1.</li>
<li>Train single cat and dog models, and perform inference on cat-only images and dog-only images (using Yolov5 and CLIP for cat/dog detection; see the sketch after this list).</li>
</ul>
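<p>As referenced in the last bullet, a minimal sketch of CLIP zero-shot cat/dog routing (the prompts and the ViT-B/32 backbone are assumptions; the authors also used Yolov5):</p>
<pre><code>import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

def cat_or_dog(path):
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).cpu().numpy()
    return "cat" if probs[0, 0] > probs[0, 1] else "dog"
</code></pre>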
<h3>What didn't work</h3>
<ul>
<li>More TTA like hue, saturation, rotate fails to improve CV</li>
<li>MSE loss</li>
<li>Metadata</li>
<li>Ensemble using Ridge/KNN/SVR</li>
<li>Image size larger than 448</li>
<li>Pet-centric crop (using Yolov5 to detect cat/dog).</li>
</ul>
<h3>Final words</h3>
<p>We trusted our CV and jumped 400+ places. Although the public LB seemed discouraging, we continually strove to improve our models iteratively.</p>
|
PetFinder.my - Pawpularity Contest
|
18th solution. Single swin transformer with custom head, with code.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! I wish I had more time here but I was busy with Santa competition</p>
<p>My point is that if you use private LB to select model changes then you should use another dataset to evaluate your new model after you are done.</p>
<p>How do you know you're not overfitting private LB with your tta? Did you evaluate tta first without looking at private score, selected best CV, then looked at private score? Asking because you only report private score unless mistaken.</p>
<p>Sure, don't worry much about my comment.</p>
<p>Again, I need to deep dive on how KL divergence helps match the training target distribution. This is real new to me. Well done!</p>
<p>Happy it works for you too!</p>
<p>Interesting. I didn't use heavy train augmentation. </p>
<p>I used bce loss on the target I define. It is in the code I shared.</p>
<p>Thanks Chris, and congrats on your result!</p>
<blockquote>
<p>I treated the task as a CTR task…</p>
</blockquote>
<p>Wow, maybe you just explained why bce works well. The host did say that pawpularity was based on number of clicks. I was puzzled by why bce worked as well as people said.</p>
<p>Congrats on your result anyway!</p>
<p>Thanks, but not best by prediction quality for sure! Maybe best single model solution indeed.</p>
<p>Thanks! I'm curious now,can you say a bit about how you intend to reuse it?</p>
<p>For the record, here are the Simon's posts I discuss. </p>
<blockquote>
<p>Comment 2 days ago in PetFinder.my - Pawpularity Contest</p>
<p>Why so many hate on <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> 's post. Isn't it OK to normally guesstimate like this as human?</p>
<p>All those [RAPIDS.AI] named team, please share your solution summary and release the reproducible code. It would be proper and educative move. I believe then <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> also consider this post. Just share your solution summary and reproducible code.</p>
<p>Simon Alerdic • 2 days ago • Options • Report • Reply<br>
Talk is cheap, show us the code. To answer this, <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/300924" target="_blank">https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/300924</a> , please share your inference code (a reproducible code). To achieve this 18th place with only 3 submission is hard to digest (IMHO).</p>
<p>Simon Alerdic • 33 minutes ago • Options • Report • Reply<br>
<a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> cc. <a href="https://www.kaggle.com/crained" target="_blank">@crained</a> <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a><br>
Please note, I didn't mean to disrespect you. Initially at your post, you didn't show us the code. That's why I said to show it anyway. I am sorry if you gets offended by my speech like other get offended by your speech and you didn't care. </p>
</blockquote>
<p>First two were comments on <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> post suggesting that my NVIDIA colleagues were cheaters. Abhishek removed his post but comments remain. For instance this one from Abhishek where he made his unfounded accusations clear:</p>
<blockquote>
<p>Comment 2 days ago in PetFinder.my - Pawpularity Contest</p>
<p>No chris. The OP is not implying anything you said. The OP is implying organized crimes, attacking individuals in an organized manner and private sharing. Please note that I have nothing against you and giba. But there are a few fishes in your vicinity that spoil it for everyone.</p>
</blockquote>
<p><a href="https://www.kaggle.com/simonalerdic" target="_blank">@simonalerdic</a> please do yourself a favor: stop digging your own grave. You made a series of post summoning a number of people to share code because you suspected them of being cheaters, without providing the slightest evidence of misconduct.</p>
<p>Justifying this because I didn't share code immediately is not helping your case. </p>
<p>What could help is you admitting being wrong posting these and apologizing for it.</p>
<p>Given I need to spell thing for you: I did not share my submission code to prove you were wrong. I usually share my code after competition end when I see interest from kagglers.</p>
<p><a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a> Thanks for support!</p>
<p><a href="https://www.kaggle.com/crained" target="_blank">@crained</a> Happy to entertain you!</p>
<p><a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> I don't get why you got so angry just because I disagree with how you shared a top 4 notebook one month after the competition start. That's my opinion and I hope you agree that I can have an opinion different from yours.</p>
<p>I praised you in the past for sharing, see for instance <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/145099" target="_blank">Tribute to Abhishek</a> </p>
<p>I don't remember the details, but I don't think you shuffled the LB after one month back then the way you did here.</p>
<p>Anyway, my opinion is just my opinion, you could just ignore it. I hope you will cool down.</p>
<p>I said I would not further discuss your post, but you came here, hence this little response.</p>
<p>Thanks.</p>
<blockquote>
<p>I'm quite sure that your pseudo-training procedure is leak-free ;) </p>
</blockquote>
<p>I hope so. I generated one set of pseudo labels for each fold model and used that set to further train that fold model.</p>
<p>Thanks a lot!</p>
<p>You did well, congrats!</p>
<p>Thanks.</p>
<p>To be honest I wanted to ensemble models but I ran into a series of issues that prevented me from training more models. Maybe I avoided overfitting that way!</p>
<p>I am really surprised (and pleased) by my result.</p>
<p>My model is a single 10 fold swin large transformer. The main difference with public notebooks is that I used an ordinal regression head <a href="https://en.wikipedia.org/wiki/Ordinal_regression" target="_blank">https://en.wikipedia.org/wiki/Ordinal_regression</a></p>
<p>I didn't use fastai either, just plain pytorch.</p>
<p>I read that bce was giving great results but I couldn't agree with that, hence I went with my regression head despite the poor public LB.</p>
<p>My CV was 17.25. The CV score is computed as RMSE on out-of-fold predictions. It is not the average of the fold RMSEs; the latter is a bit optimistic.</p>
<p>Edit: a little more detail. I shared my submission notebook, <a href="https://www.kaggle.com/cpmpml/pet-064" target="_blank">https://www.kaggle.com/cpmpml/pet-064</a> . I didn't clean it; it was done in a hurry.</p>
<p>There are two checkpoints for each fold model.</p>
<p>The first checkpoints are from training on this competition data (pet_052 checkpoints)</p>
<p>The second checkpoints (pet_064) are from further training the same model with pseudo labels on previous petfinder competition training data. I had big hopes but they proved to be wrong. The improvement is tiny. </p>
<p>I had three subs:</p>
<ul>
<li><p>pet_52 checkpoints</p></li>
<li><p>pet_64 checkpoints</p></li>
<li><p>both checkpoints.</p></li>
</ul>
<p>Their scores are very similar. Their CV was also very similar.</p>
<p>edit2: I reused part of these public notebooks, warm thanks to <a href="https://www.kaggle.com/ytakayama" target="_blank">@ytakayama</a> </p>
<p><a href="https://www.kaggle.com/ytakayama/train-pytorch-swin-5fold-some-tips" target="_blank">https://www.kaggle.com/ytakayama/train-pytorch-swin-5fold-some-tips</a></p>
<p><a href="https://www.kaggle.com/ytakayama/pytorch-baseline-try-to-reproduce-fastai-notebook" target="_blank">https://www.kaggle.com/ytakayama/pytorch-baseline-try-to-reproduce-fastai-notebook</a></p>
<p>edit3: My head is not full ordinal regression as I fix the bin boundaries instead of learning them.</p>
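<p>For readers asking about the head: below is a minimal sketch of an ordinal-regression-style head with fixed bin boundaries. It is my own illustration, not the exact code from the notebook; the bin edges, the decoding, and the use of plain BCE on per-threshold targets are assumptions.</p>
<pre><code>import torch
import torch.nn as nn

# Hypothetical fixed bin edges on the 1-100 Pawpularity scale (19 thresholds).
BIN_EDGES = torch.arange(5.0, 100.0, 5.0)

class OrdinalHead(nn.Module):
    """Predicts P(score > edge) for each fixed edge (cumulative-link style)."""
    def __init__(self, in_features, n_edges=len(BIN_EDGES)):
        super().__init__()
        self.fc = nn.Linear(in_features, n_edges)

    def forward(self, feats):
        return self.fc(feats)                      # logits, shape (B, n_edges)

def ordinal_targets(score, edges=BIN_EDGES):
    # score: (B,) raw Pawpularity; target[k] = 1 if score > edge_k
    return (score.unsqueeze(1) > edges.unsqueeze(0)).float()

def decode(logits, bin_width=5.0):
    # Approximate expected score from the threshold probabilities
    # (assumes uniform bins of width 5 covering roughly 0-100).
    probs = torch.sigmoid(logits)
    return bin_width / 2 + bin_width * probs.sum(dim=1)

criterion = nn.BCEWithLogitsLoss()
# loss = criterion(head(backbone_features), ordinal_targets(pawpularity))
</code></pre>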
<p>I shared my submission code, you can see the model architecture in it.</p>
<p>Same, I didn't use TTA.</p>
<p>I computed and reported root mean squared error. I did not compute any other metric given the competition metric was rmse.</p>
<p>I shared my code.</p>
<p>CV score was 17.25 as I wrote.</p>
<p>I tried a plain regression head and a bce head, but I got a noticeably better CV with the ordinal regression head.</p>
<p>I didn't know you were on Kaggle compliance team, congrats for the promotion!</p>
|
PetFinder.my - Pawpularity Contest
|
1st Place - Winning Solution - Placeholder
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, having more columns than rows may be a problem. So you can add columns while always checking whether RMSE is still decreasing.</p>
<p><a href="https://www.kaggle.com/quentinmichotte" target="_blank">@quentinmichotte</a> <a href="https://www.kaggle.com/crained" target="_blank">@crained</a> start by installing it. Using conda is the fastest and best way to install RAPIDS in your local machine: <a href="https://rapids.ai/start.html" target="_blank">https://rapids.ai/start.html</a></p>
<p>Also, RAPIDS comes already installed in Kaggle notebooks; just switch to GPU mode and you can test it. You can also check multiple RAPIDS notebooks here: <a href="https://www.kaggle.com/cdeotte/rapids/code" target="_blank">https://www.kaggle.com/cdeotte/rapids/code</a></p>
<p>Thanks <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>, check my full writeup. Using an SVR on combined features extracted directly from imagenet-pretrained models was the key.</p>
<p>Thanks, and that's right: using CLIP helped to improve the SVR model even more due to its low correlation with imagenet-pretrained models.</p>
<p>Thanks <a href="https://www.kaggle.com/crained" target="_blank">@crained</a>, your story is inspiring as well.</p>
<p>Thanks <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>, it means a lot to me coming from you.</p>
<p>Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. While everyone else was looking to improve cnn and vision transformers, I was looking for ways to improve the SVR "magic" model. In the end, thinking out of the box was the key.</p>
<p>In theory yes, but in practice I stacked multiple imagenet models and ran an SVR on top of them. That scored better, and added more diversity to the final blend, than fine-tuning individual models.</p>
<p>Thanks <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> </p>
<ol>
<li>I used direct imagenet embeddings</li>
<li>As I used many timm and clip models to extract features, I ran a forward model selection algorithm (adding one model's features at a time) with hill climbing (RMSE recomputed after adding each new model's features); see the sketch after this list.</li>
</ol>
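<p>The forward selection mentioned in point 2 looks roughly like the sketch below. This is my own illustration, not Giba's actual code: the use of a plain sklearn SVR as the scorer, the C value, and the 5-fold split are assumptions (he used cuML SVR on GPU), and <code>feature_blocks</code> is a hypothetical dict mapping a model name to its embedding matrix.</p>
<pre><code>import numpy as np
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

def rmse_of_subset(feature_blocks, y, selected):
    # Concatenate the embeddings of the currently selected models and score with SVR.
    X = np.hstack([feature_blocks[name] for name in selected])
    oof = cross_val_predict(SVR(C=10.0), X, y,
                            cv=KFold(n_splits=5, shuffle=True, random_state=0))
    return mean_squared_error(y, oof, squared=False)

def forward_selection(feature_blocks, y):
    selected, best_rmse = [], np.inf
    remaining = list(feature_blocks)
    while remaining:
        # Try adding each remaining block and keep the one that helps most.
        scores = {name: rmse_of_subset(feature_blocks, y, selected + [name])
                  for name in remaining}
        name, rmse = min(scores.items(), key=lambda kv: kv[1])
        if rmse >= best_rmse:      # hill climbing: stop once RMSE no longer improves
            break
        selected.append(name)
        remaining.remove(name)
        best_rmse = rmse
    return selected, best_rmse
</code></pre>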
<p>Using only the SVR scores 16.934 on Private.</p>
<p>I'm not home right now (actually on vacation) but wanted to share a macro view of my solution. I will write in more detail tomorrow.</p>
<p>I basically used three great libraries: <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">PyTorch Image Models</a> (thanks to Ross Wightman), <a href="https://rapids.ai/" target="_blank">RAPIDS</a> cuml (thanks NVIDIA) and OpenAI <a href="https://github.com/openai/CLIP" target="_blank">CLIP</a>.</p>
<p>The solution is an ensemble of an SVR (tabular data) and image models.</p>
<ul>
<li><p>For the tabular SVR part, the features were extracted from pretrained architectures using the timm and OpenAI CLIP libraries; I then concatenated the embeddings of multiple pretrained models and fit a cuml SVR on top of them (see the sketch after this list). The resulting dataset has thousands of columns, and only with cuml SVR was I able to fit it in a reasonable amount of time. Better than that, I extracted features from most of the models available in the timm library and used forward model selection to choose the best models to feed the SVR.<br>
Extracted-features SVR CV: 16.92</p></li>
<li><p>The image part of the solution is a standard image regression approach: fit some vision transformers and CNN models, then ensemble the predictions. My best single model is a beit_large_patch16_224 with a CV around 17.38.<br>
Image ensemble CV: 17.02</p></li>
</ul>
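<p>As a rough sketch of the embedding-extraction plus SVR idea above (my own reconstruction, not the actual pipeline): the backbone names, SVR parameters, and the <code>train_loader</code>/<code>y_train</code>/<code>X_test</code> variables are assumptions, and the CLIP branch is omitted for brevity.</p>
<pre><code>import numpy as np
import timm
import torch
from cuml.svm import SVR

@torch.no_grad()
def extract_embeddings(model_name, loader, device="cuda"):
    # num_classes=0 makes timm return pooled features instead of logits.
    model = timm.create_model(model_name, pretrained=True, num_classes=0).eval().to(device)
    feats = [model(images.to(device)).cpu().numpy() for images, _ in loader]
    return np.concatenate(feats)

# Concatenate embeddings from several pretrained backbones (names are examples).
blocks = [extract_embeddings(name, train_loader)
          for name in ["beit_large_patch16_224", "swin_large_patch4_window7_224"]]
X_train = np.hstack(blocks).astype(np.float32)

svr = SVR(C=10.0)           # cuML handles thousands of columns quickly on GPU
svr.fit(X_train, y_train.astype(np.float32))
preds = svr.predict(X_test.astype(np.float32))
</code></pre>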
<p>The final solution is an ensemble of the two approaches.<br>
CV: 16.818 / Private LB: 16.8225 (local CV and Private LB are very close, not sure if it is just by luck)</p>
<p>Huge thanks to Kaggle and Petfinder for making it happen. Also congrats to all prize winners, gold, silver and bronze medalists, and everyone who enjoyed and contributed to this competition. This is very special to me since it's my first solo win in an image competition.</p>
<p>Giba</p>
|
PetFinder.my - Pawpularity Contest
|
26th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Luckily I finished this competition in 26th place! <br>
Here is my solution :)</p>
<h2>Overall model architecture</h2>
<p><img src="https://user-images.githubusercontent.com/33713884/149642878-35bd9ecd-ba1c-495f-adfe-c45355afe702.png" alt="petfinder-model drawio"></p>
<h2>CV strategy</h2>
<ul>
<li>10 stratified folds on the binned Pawpularity score (see the sketch after this list)</li>
<li>100% trusted CV, as it has been reported that CV and LB do not really correlate in this competition</li>
</ul>
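<p>A small sketch of the binned stratified split (the number of bins via Sturges' rule and the random seed are assumptions, not necessarily what I used):</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold

train = pd.read_csv("train.csv")
# Bin the continuous Pawpularity score so StratifiedKFold can stratify on it.
n_bins = int(np.floor(1 + np.log2(len(train))))
train["bin"] = pd.cut(train["Pawpularity"], bins=n_bins, labels=False)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for fold, (_, valid_idx) in enumerate(skf.split(train, train["bin"])):
    train.loc[valid_idx, "fold"] = fold
</code></pre>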
<h2>Model</h2>
<table>
<thead>
<tr>
<th>Model</th>
<th>cv</th>
<th>pub LB</th>
<th>priv LB</th>
<th>note</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>LightGBM stacking</strong></td>
<td><strong>17.1585</strong></td>
<td><strong>17.8592</strong></td>
<td><strong>16.9833</strong></td>
<td><strong>Stacking of the below 5 models</strong></td>
</tr>
<tr>
<td>LightGBM tabular model</td>
<td>20.0685</td>
<td>-</td>
<td>-</td>
<td>Provided metadata <br> Inferred cat or dog label with Yolov5 <br> Inferred breed label with swin tiny</td>
</tr>
<tr>
<td>Swin large 1</td>
<td>17.4748</td>
<td>-</td>
<td>-</td>
<td>fastai, mixup</td>
</tr>
<tr>
<td>Swin large 2</td>
<td>17.4862</td>
<td>-</td>
<td>-</td>
<td>fastai, mixup</td>
</tr>
<tr>
<td>Efficientnet b0</td>
<td>19.1828</td>
<td>-</td>
<td>-</td>
<td>fastai, mixup</td>
</tr>
<tr>
<td>SVR</td>
<td>17.5398</td>
<td>-</td>
<td>-</td>
<td>Swin large 3 (headless)</td>
</tr>
</tbody>
</table>
|
PetFinder.my - Pawpularity Contest
|
27th solution, Swin transformer. Simple and Lucky.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I found that 10 folds score better than 5 folds, and the dataset is small. Therefore I decided to run with 10 folds.</p>
<p>As you can see, I made the sub two months ago; I am surprised I could get #27 in the final. <br>
That is why I think I am “lucky”. </p>
<p>However, I like simple models. </p>
<p>And a great score with just 3 subs. </p>
<p>Just do some experiments and pick the best CV.</p>
<p>Congratulations, all winners.</p>
<p>My solution is quite simple, but it is a very lucky model in this competition. It achieves 16.98523 private LB and 17.83676 public LB.</p>
<p>Actually I trained an efficientnet model in the last days, but I was too lazy to make an ensemble with my previous swin transformer model. </p>
<p>So I share my solution here.</p>
<p>It's a swin_large with 384 input, trained with 10 folds.</p>
<p>I will introduce my solution in 3 parts: <strong>model, data augmentation and training policy</strong>.</p>
<h3>1. the model:</h3>
<pre><code>import timm
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        # swin_large backbone; forward_features returns a pooled 1536-d vector
        # (older timm behavior assumed by this code)
        self.model = timm.create_model('swin_large_patch4_window12_384', pretrained=True)
        self._fc = nn.Sequential(nn.Linear(1536 + 256, num_classes, bias=True))
        # small MLP for the 12 binary metadata features
        self.extra_dense = nn.Sequential(nn.Linear(12, 256, bias=True),
                                         nn.GELU(),
                                         nn.Linear(256, 256, bias=True),
                                         nn.GELU())

    def forward(self, x, extra):
        x = self.model.forward_features(x)
        extra_fm = self.extra_dense(extra)
        fm = torch.cat([x, extra_fm], dim=1)   # concat image and metadata features
        x = self._fc(fm)
        return x
</code></pre>
<h3>2. the augmentation:</h3>
<pre><code>import albumentations as A

# defined inside the dataset class
self.train_trans = A.Compose([
    A.RandomResizedCrop(height=384,
                        width=384,
                        scale=(0.8, 1.0),
                        ratio=(0.9, 1.1)),
    A.ShiftScaleRotate(shift_limit=0.1,
                       scale_limit=0.1,
                       rotate_limit=30,
                       p=1.),
    A.HorizontalFlip(p=0.5),
    A.Normalize()
])
</code></pre>
<h3>3. training policy:</h3>
<p>4 epochs per fold<br>
Adamw<br>
init_lr=0.0001<br>
lr_scheduler='cos'<br>
no warmup<br>
gradient_clip=5<br>
BCEWithLogitsLoss<br>
weight_decay_factor = 1.e-5 </p>
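<p>Putting the policy above into code, it looks roughly like this. This is a sketch rather than my actual training script: the dataloader, the <code>Net</code> class from part 1, clipping by gradient norm, and targets scaled to [0, 1] for BCE are assumptions.</p>
<pre><code>import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

model = Net().cuda()
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = AdamW(model.parameters(), lr=1e-4, weight_decay=1e-5)
epochs = 4
scheduler = CosineAnnealingLR(optimizer, T_max=epochs * len(train_loader))

for epoch in range(epochs):
    model.train()
    for images, extra, targets in train_loader:   # targets = Pawpularity / 100
        optimizer.zero_grad()
        preds = model(images.cuda(), extra.cuda()).squeeze(1)
        loss = criterion(preds, targets.cuda())
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
        optimizer.step()
        scheduler.step()
</code></pre>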
|
PetFinder.my - Pawpularity Contest
|
32nd Place Solution [Part 1] - FastAI TTA: Beta (+0.08 CV boost)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the clarification <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. Really clear and maybe fastai could update their docs with this explanation 👀</p>
<p>Oh interesting, just went through it and it makes a lot of sense now. </p>
<p>First of all, thanks for the amazing competition hosted by Petfinder and the Kaggle staff. Also huge thanks to my amazing veteran teammates: <br>
<a href="https://www.kaggle.com/imeintanis" target="_blank">@imeintanis</a> (Leader)<br>
<a href="https://www.kaggle.com/mithilsalunkhe" target="_blank">@mithilsalunkhe</a><br>
<a href="https://www.kaggle.com/mrigendraagrawal" target="_blank">@mrigendraagrawal</a> <br>
<a href="https://www.kaggle.com/khotijahs1" target="_blank">@khotijahs1</a> </p>
<p>Part 2 on Ensembling <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/301064#1651557" target="_blank">here</a></p>
<p>Our team had a slow start. We were experimenting with only swin large 224 and trying different coefficients in an effort to find some secret.</p>
<p>Our efforts paid off and we found something peculiar in the fastai starter <a href="https://www.kaggle.com/tanlikesmath/petfinder-pawpularity-eda-fastai-starter" target="_blank">notebook</a> by <a href="https://www.kaggle.com/tanlikesmath" target="_blank">@tanlikesmath</a>.</p>
<p>He sets beta=0. After a bit of digging in fastai's really short docs on tta, we found that beta is a sort of weight to actually use the predictions based on augmentations. For the people who stuck with beta=0, I think they are only doing predictions without augmentations.</p>
<blockquote>
<p>In practice, we get the predictions n times with the transforms of the training set and average those. The final predictions are (1-beta) multiplied by this average + beta multiplied by the predictions obtained with the transforms of the dataset. - FastAI docs</p>
</blockquote>
<h1>TTA Visualization</h1>
<p>Here's how I imagine it (unsure, as the docs are not very clear):<br>
<img src="https://i.postimg.cc/SQZdgZWn/fastai-tta-viz.png" alt=""></p>
<h1>CV Results</h1>
<p>Nevertheless, changing beta showed remarkable results on our CV score. We have shared a notebook on the entire process <a href="https://www.kaggle.com/edrickkesuma/tta-optuna-on-beta-siti-17-55-cv-v2-5kf/notebook" target="_blank">here</a>.</p>
<p>First, we manually calculate our CV using larger steps of beta (0.1) to get a rough idea of which beta values perform best. Based on that, we can already see an improvement of CV from 17.55 to 17.4825. This can be seen in version 1 of the notebook.</p>
<p>Model: 224 swin large (augments: Flip(), Rotate(max_deg=10))</p>
<table>
<thead>
<tr>
<th>beta</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr><td>0</td><td>17.55</td></tr>
<tr><td>0.2</td><td>17.49</td></tr>
<tr><td>0.3</td><td>17.450</td></tr>
<tr><td>0.4</td><td>17.4825</td></tr>
<tr><td>0.5</td><td>17.4835</td></tr>
<tr><td>0.6</td><td>17.496</td></tr>
<tr><td>0.7</td><td>17.501</td></tr>
</tbody>
</table>
<p>Now that we have a rough idea of what range of beta values to try, we use Optuna to fine-tune and get the best floating point value for beta. In this case, we search between beta 0.35 and 0.55 (see version 3 of the notebook). With fine-tuning, we get a beta value of 0.3818021952828972, which gives us a CV score of 17.472.</p>
<p>The main idea is that we can repeat this process for all models and get a +0.08 CV boost for each model consistently. However, we didn't have enough time to run it on all models.</p>
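<p>Concretely, the coarse scan plus Optuna fine-tuning can be done on out-of-fold predictions along these lines (a sketch with hypothetical variable names <code>oof_tta</code>, <code>oof_plain</code> and <code>y_true</code>, not our exact notebook code):</p>
<pre><code>import numpy as np
import optuna
from sklearn.metrics import mean_squared_error

# oof_tta: average of the n augmented predictions; oof_plain: prediction without TTA.
def blended_rmse(beta, oof_tta, oof_plain, y_true):
    blend = (1 - beta) * oof_tta + beta * oof_plain
    return mean_squared_error(y_true, blend, squared=False)

# Coarse scan first ...
for beta in np.arange(0.0, 1.01, 0.1):
    print(round(beta, 1), blended_rmse(beta, oof_tta, oof_plain, y_true))

# ... then let Optuna fine-tune inside the promising range.
study = optuna.create_study(direction="minimize")
study.optimize(lambda t: blended_rmse(t.suggest_float("beta", 0.35, 0.55),
                                      oof_tta, oof_plain, y_true),
               n_trials=100)
print(study.best_params)
</code></pre>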
<p><strong>EDIT: Here's the correct explanation on FastAI TTA by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>:</strong></p>
<p><code>learn.tta(dl=test_dl,n=5, beta=beta)</code><br>
This means do TTA 5 times and average those 5 predictions. Call that average TTA_PRED. Also fastai infers once more without tta. Call that PRED_NO_TTA. If you watch, you will observe that the model makes 6 inferences (when n=5). Finally fastai does this</p>
<p><code>FINAL_PRED = (1-beta)*TTA_PRED + (beta)*PRED_NO_TTA</code><br>
So when beta=0. You are not using your model's prediction at all. You are only using distorted TTA images. That's why using beta=0 lowers your CV. You can confirm this by setting beta=1. If you use beta=1, then the final predictions will be the same as using no TTA. (i.e. learn.get_preds(dl=test_dl) )</p>
<p>You're welcome <a href="https://www.kaggle.com/crained" target="_blank">@crained</a>! I believe I first heard the suggestion to use fastai from you in a discussion. That aged really well. Thanks to you too! </p>
<p>Likewise <a href="https://www.kaggle.com/towhidultonmoy" target="_blank">@towhidultonmoy</a>. You know much more about random seeds than anyone 🤭</p>
|
PetFinder.my - Pawpularity Contest
|
39th place - How to train EfficientNet
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Something like this:</p>
<pre><code>def set_batchnorm_eval(m):
    # Put every BatchNorm layer into eval mode so its running stats stay frozen.
    classname = m.__class__.__name__
    if classname.find('BatchNorm') != -1:
        m.eval()

# Re-apply after each model.train() call, since train() switches BN back to train mode.
model.apply(set_batchnorm_eval)
</code></pre>
<p>Thanks for kind words!</p>
<p>I will try to release it on the weekend, but I can't promise.</p>
<h2>Intro</h2>
<p>Thanks Kaggle and Petfinder for hosting this amazing competition. It was really fun building models to help pets.<br>
I guess my solution is not that interesting to read (an ensemble of swin models and one efficientnet), so I will focus on one part of it: how I was able to train tf_efficientnet_b4_ns to achieve a solid CV/LB score.</p>
<h2>Training details</h2>
<ol>
<li>Set batch norm to eval mode - <strong>most important part</strong></li>
<li>Freeze conv_stem and first 3 blocks</li>
<li>Set drop_rate=0.1 when creating model via timm</li>
<li>Use multi-sample dropout (improved the CV score slightly compared to the three other methods); a sketch of points 2-4 follows this list</li>
<li><strong>CV/Public LB/Private LB</strong> score of single tf_efficientnet_b4_ns model: <strong>17.3041/17.92043/17.1051</strong></li>
</ol>
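<p>A rough sketch of points 2-4 (my own reconstruction rather than the exact training code; the dropout probability, the number of dropout samples, and the single-output head are assumptions):</p>
<pre><code>import timm
import torch
import torch.nn as nn

backbone = timm.create_model("tf_efficientnet_b4_ns", pretrained=True,
                             num_classes=0, drop_rate=0.1)

# Freeze the stem and the first three blocks (point 2).
for module in [backbone.conv_stem, backbone.bn1, *backbone.blocks[:3]]:
    for p in module.parameters():
        p.requires_grad = False

class MultiSampleDropoutHead(nn.Module):
    """Average the head output over several dropout masks (point 4)."""
    def __init__(self, in_features, n_samples=5, p=0.3):
        super().__init__()
        self.dropouts = nn.ModuleList([nn.Dropout(p) for _ in range(n_samples)])
        self.fc = nn.Linear(in_features, 1)

    def forward(self, feats):
        return torch.stack([self.fc(d(feats)) for d in self.dropouts]).mean(dim=0)

head = MultiSampleDropoutHead(backbone.num_features)
</code></pre>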
<p>Overall, for all my models I also used pseudo-labels from the old petfinder competition.<br>
And as was stated multiple times in other solutions, I trusted my CV. My best CV model scored only 17.84 on the Public LB.</p>
<p>Thanks everyone for competing!</p>
|
PetFinder.my - Pawpularity Contest
|
3rd place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Maybe you should reduce the batch size. For me, I set batch size to 8.</p>
<p>Thanks Craine!😁</p>
<p>Thanks. You did a great job too. 😁</p>
<p>We trained a second stage linear regression model on the OOF predictions and we didn't have any extra samples to validate it, so we may have overfit the OOF heavily.</p>
<p>I trained each model for 20 epochs with warm-up and a cosine annealing learning rate scheduler.</p>
<p>Thanks Chris, Your SVR notebook is helpful. </p>
<p>For the first question, our 2nd stage model is just a regular linear regression model. We just want to find the best weights to ensemble our predictions from different models. </p>
<p>For the second question, we don't include any external data or previous comp data. Though we tried to finetune the swin large models using the previous comp data first, it didn't improve the CV.</p>
<p>First of all, I would like to thank Kaggle and the organizers for hosting this interesting competition. Also, I would like to congratulate my teammate <a href="https://www.kaggle.com/ludovick" target="_blank">@ludovick</a> on becoming a Kaggle Competition GrandMaster🎉🎉. We have won 4 gold medals together in the past year. My traditional approach and his innovative ideas always help us win competitions.</p>
<p>After trying many tricks and getting no improvement in the LB score, we decided to improve the local cross-validation score only. Luckily the CV and private LB scores are quite correlated. </p>
<p>Here is our solution, and I'd like to invite my teammate <a href="https://www.kaggle.com/ludovick" target="_blank">@ludovick</a> to introduce his part in detail later:</p>
<h1>Backbone models</h1>
<h2>tensorflow models:</h2>
<p>Swin large 384</p>
<p>Swin large 224</p>
<p>Vit large 384</p>
<h2>pytorch models:</h2>
<p><code>cait_m36_384</code></p>
<p><code>cait_m48_448</code></p>
<h1>Loss</h1>
<p>All the tensorflow models were trained with <code>tf.keras.losses.BinaryCrossentropy</code> and all the pytorch models were trained with <code>nn.CrossEntropyLoss</code>.</p>
<h1>Models and CV</h1>
<table>
<thead>
<tr>
<th>Models</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>swin large 384 + nn heads</td>
<td>17.4358</td>
</tr>
<tr>
<td>swin large 384 (knowledge distillation) + nn heads</td>
<td>17.3328</td>
</tr>
<tr>
<td>swin large 384 1k + nn heads</td>
<td>17.4402</td>
</tr>
<tr>
<td>swin large 224 + nn heads</td>
<td>17.5702</td>
</tr>
<tr>
<td>vit large 384 + nn heads</td>
<td>17.6180</td>
</tr>
<tr>
<td>cait384</td>
<td>17.6475</td>
</tr>
<tr>
<td>cait448</td>
<td>17.5348</td>
</tr>
</tbody>
</table>
<p>For all the tensorflow models, 3 sklearn svr heads trained with different params are included.</p>
<p>As all of the models were trained with the same folds, we trained a 2nd stage linear regression model to get our best ensemble score: 17.0275.</p>
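<p>Concretely, the second stage looks roughly like this (a sketch; <code>oof_preds</code> and <code>test_preds</code> are hypothetical matrices with one column of predictions per model):</p>
<pre><code>import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# oof_preds: shape (n_train, n_models); test_preds: shape (n_test, n_models)
stacker = LinearRegression()
stacker.fit(oof_preds, y_train)

# Note: scoring on the same OOF used for fitting can be optimistic, as mentioned above.
cv_rmse = mean_squared_error(y_train, stacker.predict(oof_preds), squared=False)
final_prediction = stacker.predict(test_preds)
</code></pre>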
<h1>Notes</h1>
<p>To be honest, I think our solution may overfit the cross-validation score, but it's quite lucky that the CV and private LB scores are correlated. </p>
<h1>Inference Notebook</h1>
<p><a href="https://www.kaggle.com/tonyxu/fork-of-fork-of-baseline-swin-cv-ensemble-7-fe0956?scriptVersionId=83693169" target="_blank">inference notebook</a></p>
|
PetFinder.my - Pawpularity Contest
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> <br>
Below is the difference in RMSE with and without pseudo labels.<br>
For me, CV=17.25 without pseudo labels seems very strong.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>is pseudo</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>swin_large_patch4_window12_384</td>
<td></td>
<td>17.4176</td>
<td>18.04019</td>
<td>17.17304</td>
</tr>
<tr>
<td>swin_large_patch4_window12_384</td>
<td>✓</td>
<td>17.1648</td>
<td>17.94469</td>
<td>17.01936</td>
</tr>
</tbody>
</table>
<p><a href="https://www.kaggle.com/crained" target="_blank">@crained</a> <br>
Thank you for your comment. I have added the way to mix with image model predictions.</p>
<h1>Solution overview</h1>
<p>The main point of our solution is post-processing using pet1 meta data.</p>
<ul>
<li>Image models<ul>
<li>8 model ensemble (simple averaging)</li></ul></li>
<li>Table models<ul>
<li>train XGBoost on pet1 table data and mix prediction with image models predictions.<ul>
<li>if pet1 train ∧ pet2 : train & inference with AdoptionSpeed</li>
<li>if pet1 test ∧ pet2 : train & inference without AdoptionSpeed</li></ul></li></ul></li>
</ul>
<h1>Validation</h1>
<ul>
<li>There were several duplicate image pairs in the data for this competition. Therefore, we included those image pairs in the same fold. (10-fold)</li>
</ul>
<h1>Preprocess</h1>
<ul>
<li>To make it look like the thumbnail view on the petfinder site, I cut the long side of the image and then resized it to the desired resolution.</li>
</ul>
<h1>Models</h1>
<h2>Image models</h2>
<ul>
<li>8 model ensemble (simple averaging)</li>
<li>All models were trained with BCE.</li>
<li>Also, some models were trained separately for cats and dogs.</li>
<li>Pseudo-labels built from the previous competition data consistently improved the CV. To prevent label leakage, I created a separate set of pseudo-labels for each fold (see the sketch after this list). <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/294434#1614350" target="_blank">[Reference]</a></li>
</ul>
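<p>A minimal sketch of the per-fold pseudo-labeling scheme mentioned above (variable names such as <code>prev</code>, <code>pseudo_preds</code> and <code>train</code> are hypothetical; the point is only that fold k's extra labels come from fold k's own model):</p>
<pre><code>import pandas as pd

# prev: previous-competition metadata without Pawpularity labels
# pseudo_preds[fold]: predictions of fold `fold`'s model on those images
for fold in range(10):
    pseudo = prev.copy()
    pseudo["Pawpularity"] = pseudo_preds[fold]      # labels come only from this fold's model
    fold_train = train[train["fold"] != fold]       # this competition's training split for the fold
    extended = pd.concat([fold_train, pseudo], ignore_index=True)
    # `extended` is then used to continue training fold `fold`'s checkpoint,
    # so no fold ever sees labels produced by another fold's model.
</code></pre>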
<table>
<thead>
<tr>
<th>Model</th>
<th>Is dog-cat separately</th>
<th>iIs pseudo</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>swin_large_patch4_window12_384</td>
<td>✓</td>
<td></td>
<td>17.3495</td>
</tr>
<tr>
<td>swin_large_patch4_window7_224</td>
<td>✓</td>
<td></td>
<td>17.4379</td>
</tr>
<tr>
<td>vit_large_patch16_384</td>
<td>✓</td>
<td></td>
<td>17.6727</td>
</tr>
<tr>
<td>swin_large_patch4_window12_384</td>
<td></td>
<td>✓</td>
<td>17.1648</td>
</tr>
<tr>
<td>swsl_resnext101_32x8d</td>
<td></td>
<td>✓</td>
<td>17.2754</td>
</tr>
<tr>
<td>swsl_resnext101_32x8d</td>
<td></td>
<td></td>
<td>17.6335</td>
</tr>
<tr>
<td>vit_huge (ImageNet MAE pretrained)</td>
<td></td>
<td></td>
<td>17.7407</td>
</tr>
<tr>
<td>ConvNeXt_xlarge</td>
<td></td>
<td>✓</td>
<td>17.1516</td>
</tr>
</tbody>
</table>
<p>--> 8 model averaging: <code>CV=17.0058</code></p>
<h2>Table models</h2>
<ul>
<li>Train XGBoost on pet1 table data and mix prediction with image models predictions.<ul>
<li>if pet1 train ∧ pet2 : train & inference with AdoptionSpeed</li>
<li>if pet1 test ∧ pet2 : train & inference without AdoptionSpeed</li></ul></li>
<li>When mixing with the image model predictions, the mix ratio of the table model prediction was swept in the range [0.1 - 0.9] for pet1(train) and pet1(test) respectively, to find the mix ratio with the optimal RMSE (a sketch of this sweep follows the results below).</li>
</ul>
<pre><code>before pet2 CV(all): 17.0058
----
pet1(train) ∧ pet2 image CV: 18.1221
pet1(train) ∧ pet2 table CV: 18.3088
pet1(train) ∧ pet2 0.50 x image + 0.50 x table: 17.5722
----
pet1(test) ∧ pet2 image CV: 17.3988
pet1(test) ∧ pet2 table CV: 18.8036
pet1(test) ∧ pet2 0.80 x image + 0.20 x table: 17.3087
----
after pet2 CV(all): 16.9042
</code></pre>
<p>--> Mix with image model predictions: <code>CV=16.9042</code> / <code>Public=17.60750</code> / <code>Private=16.89096</code></p>
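<p>The sweep shown above can be reproduced on out-of-fold predictions with something like this (a sketch; <code>image_oof</code>, <code>table_oof</code> and <code>y_true</code> are hypothetical arrays, and the 0.05 step is an assumption):</p>
<pre><code>import numpy as np
from sklearn.metrics import mean_squared_error

best_weight, best_rmse = None, np.inf
for w_table in np.arange(0.1, 0.91, 0.05):
    blend = (1 - w_table) * image_oof + w_table * table_oof
    rmse = mean_squared_error(y_true, blend, squared=False)
    if rmse < best_rmse:
        best_weight, best_rmse = w_table, rmse
print("best table weight:", best_weight, "RMSE:", best_rmse)
</code></pre>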
|
PetFinder.my - Pawpularity Contest
|
54th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes… Trying to deal with the imbalanced class distribution, we tried other approaches such as a 100-class Pawpularity classifier, but they always failed. So we decided to adopt a simple focal BCE loss and abandoned the other approaches. Luckily, we ended up with good CV scores.</p>
<p>Thanks! I don't know why focal loss worked well, but I added models trained with focal loss because I wanted to expand the diversity of models.<br>
I used <a href="https://gist.github.com/snakers4/5739ade67e54230aba9bd8a468a3b7be" target="_blank">this code</a> as the focal loss.</p>
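<p>For reference, a focal BCE loss on a [0, 1]-scaled target looks roughly like this (a generic sketch, not the exact gist linked above; gamma=2 matches one of the settings we used):</p>
<pre><code>import torch
import torch.nn.functional as F

def focal_bce_loss(logits, targets, gamma=2.0):
    # Standard BCE per element, then down-weight easy examples by (1 - p_t)^gamma.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)          # probability the model assigns to the true "class"
    return ((1 - p_t) ** gamma * bce).mean()

# Usage (illustrative): targets are Pawpularity / 100, logits come from the model head.
# loss = focal_bce_loss(model(images).squeeze(1), pawpularity / 100, gamma=2.0)
</code></pre>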
<p>First of all, I would like to say thank you to the competitors and the host!<br>
Our solution mainly consists of three parts.</p>
<ol>
<li>Using SVR and Gaussian Process Regression</li>
<li>Using Focal Loss</li>
<li>Ridge and Catboost Stacking</li>
</ol>
<p><strong>Final sub code is <a href="https://www.kaggle.com/shigeria/pawpularity-stacking-sub-new/notebook" target="_blank">here</a>.</strong></p>
<ol>
<li><p>Using SVR and Gaussian Process Regression<br>
First, we extracted image embeddings and used them for SVR and GPR. We were inspired by Chris's <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/276724" target="_blank">discussion</a> and the <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/258554" target="_blank">CommonLit 6th place solution</a>.<br>
The image embedding size was 1536 (unchanged) and the embeddings were normalized with sklearn's normalize function.</p></li>
<li><p>Models<br>
We finally selected 4 models for the ensemble. All models used TTA (n=5) when calculating CV.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>NN CV</th>
<th>SVR CV</th>
<th>GPR CV</th>
<th>etc.</th>
</tr>
</thead>
<tbody>
<tr>
<td>swin_large_patch4_window7_224_in22k</td>
<td>17.48</td>
<td>17.51</td>
<td>17.47</td>
<td>None</td>
</tr>
<tr>
<td>swin_large_patch4_window7_224</td>
<td>17.42</td>
<td>17.55</td>
<td>17.45</td>
<td>Focal Loss(gamma=2)</td>
</tr>
<tr>
<td>swin_large_patch4_window7_224</td>
<td>17.44</td>
<td>17.53</td>
<td>17.41</td>
<td>Focal Loss(gamma=0.5) and mixup(α=0.2)</td>
</tr>
<tr>
<td>swin_large_patch4_window7_224</td>
<td>17.58</td>
<td>None</td>
<td>None</td>
<td>trained by FastAI</td>
</tr>
</tbody>
</table>
<p>The training pipeline was based on <a href="https://www.kaggle.com/ytakayama/train-pytorch-swin-5fold-some-tips" target="_blank">this Notebook</a> (thanks for sharing!) and the validation was the same.</p></li>
<li><p>Augmentation<br>
We used this code.</p></li>
</ol>
<pre><code>def get_train_transforms(epoch, dim = Config.im_size):
return A.Compose(
[
# resize like Resize in fastai
A.SmallestMaxSize(max_size=dim, p=1.0),
A.CenterCrop(height=dim, width=dim, p=1.0),
A.HorizontalFlip(p = 0.5),
A.geometric.transforms.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15, p=0.5)
]
)
def get_tta_transforms(epoch, dim = Config.im_size):
return A.Compose(
[
# resize like Resize in fastai
A.SmallestMaxSize(max_size=dim, p=1.0),
A.CenterCrop(height=dim, width=dim, p=1.0),
A.HorizontalFlip(p = 0.5),
A.geometric.transforms.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15, p=1.0)
]
)
</code></pre>
<p>4. Stacking<br>
We ensembled Catboost * 0.2 and Ridge * 0.8. In addition to the metadata in train.csv and test.csv, we added the yolov5 cat/dog/neither features used in <a href="https://www.kaggle.com/calvchen/swin-transformer-224-meta" target="_blank">this Notebook</a>.</p>
<p>By the way, a submission from 25 days ago using Swin large 384 scores 16.975 on the Private LB (it would have reached 22nd). So we should have trusted Swin large 384…</p>
|