OTTO – Multi-Objective Recommender System
16th Place Solution (Association x UserCF x NN-based x Matrix Factorization x Covisit x LightGBM)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Otto and Kaggle team for hosting such a challenging competition. I tried almost all methods I could come up with and seeing the score improve step by step was thrilling, especially at the end of the competition.</p> <p>The sad part is the Grandmaster/Master are allegedly involved in cheating, and many participants have been suffering/discouraged from such an unforgivable activity that hurts Kaggle's integrity and reputation.</p> <p>I would like to share my solution, <strong>which is 0.01% behind the Gold medal zone at this moment lol.</strong><br> Hope this helps, and I'm happy to answer questions.</p> <h1><strong>Candidate Selection</strong></h1> <ul> <li>revisit</li> <li>top20 covisitation matrix (two different matrix)</li> <li>top3 3-hop graph from covisitation matrix<ul> <li>lets say sessions's last aid is X, and X's top 3 covisit aids are A,B,C. I call them '1-hop' from X. A's top3 covisit aids are D,E,F. I call them '2-hop from X'. I did this for three hops so you have 3^3 cand to 1 aid)</li></ul></li> <li>top10 from 3D-covisitation matrix</li> <li>top20 from simple itemcf/usercf (similaripy)</li> <li>top20 from gru4rec (recbole)</li> </ul> <p><strong>The average cand per session is almost 100, and recall for valid is 0.6396</strong></p> <h1><strong>Feature Engineering</strong></h1> <ul> <li>simple interaction between aid and session (ex: time lapse from session's last action to candidate aid)</li> <li>session-only based (ex: num of clicks of that session)</li> <li>aid-only based (ex: num of clicks of that aid)</li> <li>covisitation matrix score from four different covisitation matrices</li> <li>association rules score (jaccard/dice/lift---)</li> <li>simple user/item cf score (using <strong>'similaripy'</strong>) </li> <li>item2vec cosine similarity (using <strong>'gensim'</strong>)</li> <li>matrix factorization type similarity (using <strong>'implicit'</strong>)<ul> <li>BPR</li> <li>ALS</li> <li>LightFM</li> <li>SVD </li></ul></li> <li>neural net based scores (using <strong>'recbole'</strong>)<ul> <li>lightGCN</li> <li>GRU4Rec</li> <li>BERT4Rec</li> <li>RecVAE</li> <li>SasREC</li> <li>SRGNN</li></ul></li> </ul> <p>The total num of features is 291.<br> From my observation, there was no silver bullet feature(s). All types of features slightly contributed to the model. <br> NN features are relatively costly because you need time and computing resources to get them but still get marginal boost.<br> Association rules and simple userCF are easy to handle because they are not so computationally intensive.</p> <p><strong>Single model best CV is .5808 and public .60206/private .6017</strong></p> <h1><strong>Modeling</strong></h1> <p>LightGBM Ranker with custom metrics(Recall at 20)<br> LightGBM binary/Catboost binary/Catboost Yetirank are all worse than LightGBM Ranker.</p> <p>The final submission is the ensemble from five slightly different candidates/model specifications. 
Ensembling helped me a little (+0.03%).</p> <h1><strong>What I should have done to get solo gold</strong></h1> <ul> <li>Catboost Pairwise (some report it works well)</li> <li>More candidates (maybe 150-200 per session)</li> </ul> <p>Thank you for your kind reply! As two teams seemed to be involved in cheating, I may get a solo gold. <br> Hope the Kaggle team thinks I deserve that, lol.</p>
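<p>To make the 3-hop candidate expansion above concrete, here is a minimal sketch (not the author's code) of walking a precomputed top-3 covisitation mapping for three hops from a session's last aid; <code>covisit_top3</code> is a hypothetical dict from an aid to its three most co-visited aids.</p>
<pre><code>from collections import OrderedDict

def multi_hop_candidates(last_aid, covisit_top3, hops=3):
    """Expand candidates by following top-3 covisitation links for `hops` hops.

    covisit_top3: dict mapping an aid to a list of its top-3 co-visited aids.
    Returns an ordered, de-duplicated candidate list (at most 3 + 9 + 27 aids).
    """
    candidates = OrderedDict()
    frontier = [last_aid]
    for _ in range(hops):
        next_frontier = []
        for aid in frontier:
            for cand in covisit_top3.get(aid, []):
                if cand != last_aid:
                    candidates.setdefault(cand, None)
                next_frontier.append(cand)
        frontier = next_frontier
    return list(candidates)

# toy usage with a hypothetical top-3 covisitation mapping
covisit_top3 = {1: [2, 3, 4], 2: [5, 6, 7], 3: [8, 9, 1]}
print(multi_hop_candidates(1, covisit_top3))  # [2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p>Deduplicating with an ordered dict keeps the 1-hop neighbours ranked ahead of the 2- and 3-hop ones, matching the intuition that closer hops are stronger candidates.</p>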
OTTO – Multi-Objective Recommender System
17th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to OTTO and Kaggle for organizing this great competition.</p> <p>Here I would like to share my part.</p> <h1>Overview</h1> <ul> <li>My part<ul> <li>Two-stage recommendation </li> <li>100-200 candidates per session</li> <li>100 features for ranking model</li> <li>CV strategy<ul> <li>For local validation, train by week3 and validate by week4</li> <li>For submission, train by week4</li></ul></li> <li>Score -&gt; public: 0.598, private: 0.597</li></ul></li> <li>Team Solution<ul> <li>Ensemble the outputs of my model and those of my teammates' models with rank vote</li> <li>Score -&gt; public: 0.601, private: 0.600</li></ul></li> </ul> <h1>Candidates generation</h1> <p>Generate candidates by following methods. </p> <p>Median numbers of candidates per session are 100 for clicks, 180 for carts and 200 for orders.</p> <ul> <li>Previously actioned</li> <li>Co-visitation<ul> <li>matrix<ul> <li>action2action</li> <li>action2click, action2cart, action2order</li> <li>action2buy</li> <li>buy2buy</li></ul></li> <li>weight<ul> <li>type</li> <li>recency</li> <li>time interval</li></ul></li></ul></li> <li>word2vec (use Gensim Word2Vec)</li> <li>node2vec (use PyTorch Geometric Node2Vec)</li> </ul> <h1>Feature engineering</h1> <p>Around 100 features were created to train models. </p> <p>I think my features are not special compared to other competitors, but some examples are given below.</p> <ul> <li>CF score<ul> <li>candidates score</li> <li>candidates rank</li></ul></li> <li>Item<ul> <li>number of {type}*</li> <li>ratio of number of {type} to number of actions</li> <li>average number of {type} in one session</li> <li>probability of {type} after {type} in one session</li></ul></li> <li>User<ul> <li>session size</li> <li>unique item number</li> <li>ratio of unique item number to session size</li></ul></li> <li>Item x User<ul> <li>number of {type}</li> <li>number of {type} weighted by recency</li> <li>ratio of number of {type} to session size</li> <li>elapsed time from last {type}</li></ul></li> <li>Item similarity<ul> <li>cosine similarity of embeddings by word2vec and node2vec<ul> <li>last aid to candidate</li> <li>average of last 3 aids to candidate</li> <li>average of last 5 aids to candidate</li></ul></li></ul></li> </ul> <p>*{type} denotes clicks, carts or orders.</p> <h1>Ranker model</h1> <ul> <li>I used LightGBM ranker<ul> <li>objective: lambdarank</li> <li>boosting_type: gbdt</li></ul></li> </ul> <h1>Ensemble</h1> <ul> <li>For each type, ensemble the output of two LightGBM rankers with slightly different candidate generation<ul> <li>Score -&gt; public: 0.598, private: 0.597</li></ul></li> <li>After that, ensemble the outputs of my model and those of my teammates' models with rank vote<ul> <li><a href="https://www.kaggle.com/dehokanta" target="_blank">@dehokanta</a> &amp; <a href="https://www.kaggle.com/zakopur0" target="_blank">@zakopur0</a> model<ul> <li>Score -&gt; public: 0.598, private: 0.597</li> <li>Details -&gt; <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/383493" 
target="_blank">https://www.kaggle.com/competitions/otto-recommender-system/discussion/383493</a></li></ul></li> <li><a href="https://www.kaggle.com/t88take" target="_blank">@t88take</a> model<ul> <li>Score -&gt; public: 0.594, private: 0.594</li> <li>Details -&gt; <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/382886" target="_blank">https://www.kaggle.com/competitions/otto-recommender-system/discussion/382886</a></li></ul></li> <li>Final score -&gt; <strong>public: 0.601, private: 0.600</strong></li></ul></li> </ul> <h1>What worked</h1> <ul> <li>Item similarity between last aid and candidate by word2vec contributes significantly. This feature boosts my score <br> +0.003.</li> <li>Inspired by <a href="https://arxiv.org/abs/1804.04212" target="_blank">this paper</a>, I tuned hyperparameters of word2vec (such as epochs, window, ns_exponent,,,) and this increased my score +0.001.</li> <li>I used different N of co-visit candidates based on recency and gained +0.0015. Take action2click matrix as an example, <ul> <li>For last aid in a session, join top 50 </li> <li>For aids within 30 minutes of last action, join top 20</li> <li>For aids older than 30 minutes, no candidates are joined</li></ul></li> </ul> <h1>What did not work</h1> <ul> <li>In a recent tabular competition I participated in (H&amp;M and Amex), LightGBM dart was better than gbdt. So, I tried dart but accuracy did not improve. It just made training time longer.</li> <li>I tried different models (LightGBM classifier, CatBoost classifier, CatBoost ranker) but LightGBM ranker was best in my case.</li> </ul> <h1>Environment</h1> <p>Google Colab Pro+</p> <h1>Acknowledgement</h1> <p>Thank <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/366194" target="_blank">introducing Polars</a>. I did not know Polars until taking part in this competition. Polars helped me to accelerate experiments and try many ideas.</p> <p>Thank <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for sharing very helpful knowledge and codes. My co-visit matrix candidates generation is totally based on <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">his notebook</a>.</p>
OTTO – Multi-Objective Recommender System
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your comments!</p> <blockquote> <p>Just curious to know, how long does it take you for one iteration? did you use any tools to accelerate your iteration? </p> </blockquote> <p>The time taken for my sampled local validation is. </p> <ul> <li>About 1.5 hours to combine multiple candidates and features from each logic , </li> <li>About 1.5 hours to train and evaluate a reranking model. </li> </ul> <p>Candidates and features by individual logic are calculated once and reused, which is not included in time above. (like training and inference of nn or covisitation matrix, etc.)<br> I didn't use any special tools.</p> <blockquote> <p>The number of candidates, ~1200, is very impressive! </p> </blockquote> <p>the 1200 candidates include candidates for clicks, carts and orders. <br> I didn’t explicitly separate which candidate is for which type, and this may be one reason of the very big candidates number.</p> <blockquote> <p>Do you have the CV/LB scores of some smaller num? e.g. 200, as many others do.</p> </blockquote> <p>I didnt test it.<br> This is sligtly difficult to conduct a good experiment now.<br> I dont know a good way to reduce candidates from multiple strategies by not using ranker ML model.</p> <blockquote> <p>And in the NN model, what are the x_timeinfos extractly? Time diff between the action and last action?</p> </blockquote> <p>assuming hh = ts // 3600 // 1000,</p> <ul> <li>hh (categorical feature)</li> <li>hh // 24 (categorical feature)</li> <li>sin(hh % 24 / 24 * np.pi * 2) (numerical feature)</li> <li>cos(hh % 24 / 24 * np.pi * 2) (numerical feature)</li> </ul> <p>I have these values for each aid in each session, and I concat these time features timeseries with aid embedding timeseries.</p> <p>Thank you for your comments!</p> <blockquote> <ol> <li>There seem to be a very large number of candidates for 1200. Did you try anything during the training to reduce the memory?</li> </ol> </blockquote> <p>I didn't do anything special.<br> I did only common things like perform negative sampling, have all features in uint32 type, and reduce features by importance.<br> I used 256GB RAM machine for training with full data.</p> <blockquote> <ol> <li>What was the Recall when forecasting with NN only?</li> </ol> </blockquote> <p>I have added NN only local score in the ablation study section.</p> <p>Thank you for your comments!</p> <blockquote> <p>One things I did, not clear if you did it too from your post, is that I used the same embedding for x_aid and y_aid.</p> </blockquote> <p>I actually did same thing.<br> I also used the same embedding for x_aid and y_aid.</p> <blockquote> <p>What surprised me is that NN predicitin next click were useful for candidate generation for orders and carts as well. 
I did train a couple of model to predict next cart and next order, but they don't improve much.</p> </blockquote> <p>I used multiple aids in future as positive targets, not just the next click.<br> And I uses prediction target type information when calculating session embedding so that session embedding is adjusted according to the prediction target aid type.<br> (I have added more explanations to the main text in response to your question)</p> <p>Thank you very much for organizing this fun competition.<br> The problem set up was relatively close to my actual work, and I was glad to learn a lot.</p> <h2>Candidates</h2> <p>The average number of candidates is around 1200.</p> <ul> <li>visited aids in session</li> <li>covisitation matrix<ul> <li>use multiple versions with different weighing by type and aggregation period</li> <li>apply covisitation matrix at multiple times like beam search</li></ul></li> <li>NN that predicts subsequent aids<ul> <li>use multiple versions to create candidates and rerank features</li> <li>NN structure is MLP or transformer (there was no big difference)</li> <li>I tried to focus on samples that are not predicted well</li> <li>I used the same embedding for x_aid and y_aid.</li> <li>I used multiple aids in future as positive targets.</li> <li>I used prediction target aid type information when calculating session embedding so that session embedding is adjusted according to the prediction target aid type.</li> <li>some models are trained by using only non visited aids as targets to avoid overlapping information with revisitation based candidates and features.</li></ul></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1019365%2Fb39461ef4b594a17799b4b06f1ae6fb2%2F2023-02-06%2017.12.38.png?generation=1675671185056988&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1019365%2Faff8177d9e576a6e40a08ed9f80f2140%2F2023-02-06%2017.04.33.png?generation=1675670729353616&amp;alt=media" alt=""></p> <h2>Reranker</h2> <h3>model</h3> <p>single LGBMRanker : LB 0.604 <br> ensemble of 9 LGBMRankers with different hyperparameters : LB 0.605 <br> I performed ensemble by averaging the predicted scores of the rankers. </p> <ul> <li>I haven't tested if this is a better method than voting etc.</li> </ul> <h3>features</h3> <ul> <li>session * aid<ul> <li>rank by covisitation matrix at candidate generation</li> <li>cosine similarity by NN at candidate generation</li> <li>aid info in the session (when it appeared, what type it is, etc)</li></ul></li> <li>aid<ul> <li>popularity of aids<ul> <li>It worked well when ranked</li> <li>calculated by multiple time windows</li></ul></li> <li>ratio of types</li></ul></li> <li>session<ul> <li>length</li> <li>aid dupplication rate</li> <li>ts between the last aid and the second last aid</li></ul></li> </ul> <p>about 200 features were created <br> select about 100 features for each target by lgbm gain importance to reduce memory usage </p> <h4>negative sampling rate</h4> <p>clicks : 5% <br> carts : 25% <br> orders : 40% <br> I set these values so that the training data can be handled by my machine (the data size is around 35GB for each).</p> <h2>Cv strategy</h2> <p>I followed radek's set up. 
<a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/364991" target="_blank">https://www.kaggle.com/competitions/otto-recommender-system/discussion/364991</a> <br> I can get almost perfect correlation between local validation and LB. <br> For quick iteration of improvements, I conducted experiments by training with 5% of the data and evaluating with other 10% of the data.</p> <h2>ablation study</h2> <p>ablation study by local validation. <br> Information that is involved in both candidate generation and reranker features is removed from both.</p> <table> <thead> <tr> <th>condition</th> <th>clicks_recall@20</th> <th>carts_recall@20</th> <th>orders_recall@20</th> <th>weighted_recall@20</th> </tr> </thead> <tbody> <tr> <td>my solution (LB604)</td> <td>0.556607</td> <td>0.436375</td> <td>0.669644</td> <td>0.588359</td> </tr> <tr> <td>without visited aid</td> <td>0.555677</td> <td>0.435616</td> <td>0.666456</td> <td>0.586126</td> </tr> <tr> <td>without covisitation</td> <td>0.547493</td> <td>0.430180</td> <td>0.665553</td> <td>0.583136</td> </tr> <tr> <td>without nn</td> <td>0.544811</td> <td>0.429904</td> <td>0.666004</td> <td>0.583055</td> </tr> <tr> <td>without aid feats</td> <td>0.550472</td> <td>0.433442</td> <td>0.666275</td> <td>0.584845</td> </tr> <tr> <td>without session feats</td> <td>0.555922</td> <td>0.435805</td> <td>0.669734</td> <td>0.588174</td> </tr> <tr> <td>only single nn</td> <td>0.532279</td> <td>0.410148</td> <td>0.564768</td> <td>0.515133</td> </tr> </tbody> </table>
OTTO – Multi-Objective Recommender System
20th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ul> <li>candidate_selected_count: By how many recall methods the item is recalled</li> <li>candidate_selected_rank: The rank of candidate_selected_count over each session</li> <li>candidate_score: The score of each candidate generation method. (for example, for ItemMF the score is cosine similarity)</li> <li>candidate_rank: The rank of candidate_score over each session</li> <li>candidate_selected: 0/1 feature that indicates if the item is recalled by each recall method</li> <li>"inter" is either click/cart/order. If user did either of click/cart/order action, inter is 1, if not 0 (inter stands for interaction)</li> </ul> <p>Yes. ItemCF was the best recall method and ItemMF was the next for us</p> <h1>Acknowledgements</h1> <p>First, I would like to thank the organizers and those who shared knowledge. Especially I would like to thank <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> for posting so many codes and discussions. I learned a lot from you. </p> <p>Considering the controversy about the possible cheaters, I will publish the entire code after everything is finalized and I will just share the ideas in this solution. (Although <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/382440#2123460" target="_blank">Will says sharing code is not something we should withhold</a>)</p> <p><br></p> <h1>Solution Summary</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F770256%2F438095d4971fd2b40113d40cf1d96fc1%2FScreen%20Shot%202023-02-01%20at%209.28.52.png?generation=1675211352995855&amp;alt=media" alt=""></p> <p>My code is here: <a href="https://github.com/kiccho1101/kaggle-otto2" target="_blank">https://github.com/kiccho1101/kaggle-otto2</a></p> <p><br></p> <h1>CV Strategy</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F770256%2F728b7a7f2c1ef5eeec8207ac1e7f870f%2Frecommend-data-split.svg?generation=1675210198356258&amp;alt=media" alt=""></p> <ul> <li>I used Radek’s CV strategy</li> <li>Although CV data was good for the local experiments, it took quite a long time to run experiments so I used only 1/20 sessions by random sampling. 
As a result, the average time that takes to run 1 experiment became about 10~30mins which allowed me to run many experiments quickly.</li> <li>Even if I sampled the data, the CV-LB score correlation was very stable.</li> <li>The data used for each stage is below</li> </ul> <table> <thead> <tr> <th>Step</th> <th>CV</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>Candidate Generation</td> <td>TrainData + ValidDataA</td> <td>TrainData + ValidDataA + ValidDataB + TestDataA</td> </tr> <tr> <td>User Feature Creation</td> <td>ValidDataA</td> <td>TestDataA</td> </tr> <tr> <td>Item Feature Creation</td> <td>TrainData + ValidDataA</td> <td>TrainData + ValidDataA + ValidDataB + TestDataA</td> </tr> <tr> <td>User-Item Feature Creation</td> <td>ValidDataA</td> <td>TestDataA</td> </tr> <tr> <td>Re-Ranking</td> <td>ValidDataA</td> <td>TestDataA</td> </tr> </tbody> </table> <p><br></p> <h1>1st Stage - Candidate Generation</h1> <p><br></p> <h3>word2vec</h3> <ul> <li>Trained word2vec model with aid sequences</li> <li>Retrieve top-k aids by word2vec embeddings (Used faiss-gpu to speed up)</li> </ul> <p><br></p> <h3>CoVis</h3> <ul> <li>I used Chris’s co-visitation matrix to generate candidates</li> </ul> <p><br></p> <h3>Item MF (Item Matrix Factorization)</h3> <pre><code>class ItemMFModel(nn.Module): def __init__(self, n_aid: int, n_factors: int): super().__init__() self.criterion = BPRLoss() self.n_factors = n_factors self.n_aid = n_aid self.aid_embeddings = nn.Embedding(self.n_aid, self.n_factors) initrange = 1.0 / self.n_factors nn.init.uniform_(self.aid_embeddings.weight.data, -initrange, initrange) def forward(self, aid_x, aid_y): aid_x = self.aid_embeddings(aid_x) aid_y = self.aid_embeddings(aid_y) return (aid_x * aid_y).sum(dim=1) def calc_loss(self, aid_x, aid_y, size_x, size_y): rand_idx = torch.randperm(aid_y.size(0)) output_pos = self.forward(aid_x, aid_y) output_neg = self.forward(aid_x, aid_y[rand_idx]) loss = self.criterion(output_pos, output_neg) return loss </code></pre> <ul> <li>Trained aid_embeddings with BPR loss to make co-occurring embeddings become similar</li> <li>What worked<ul> <li>Multiply the inverse of item_size by loss (Removing popularity bias)</li> <li>Multiply the inverse of ts_diff by loss (The closer the co-occur timing is, the more similar the embeddings become)</li></ul></li> </ul> <p><br></p> <h3>User MF (User Matrix Factorization)</h3> <pre><code>class UserMFModel(nn.Module): def __init__(self, n_session: int, n_aid: int, n_factors: int): super().__init__() self.n_factors = n_factors self.n_session = n_session self.n_aid = n_aid self.session_embeddings = nn.Embedding(self.n_session, self.n_factors) self.aid_embeddings = nn.Embedding(self.n_aid, self.n_factors) self.criterion = BPRLoss() initrange = 1.0 / self.n_factors nn.init.uniform_(self.session_embeddings.weight.data, -initrange, initrange) nn.init.uniform_(self.aid_embeddings.weight.data, -initrange, initrange) def forward(self, session, aid, aid_size): session_emb = self.session_embeddings(session) aid_emb = self.aid_embeddings(aid) return (session_emb * aid_emb).sum(dim=1) def calc_loss(self, session, aid, aid_size): rand_idx = torch.randperm(aid.size(0)) output_pos = self.forward(session, aid) output_neg = self.forward(session, aid[rand_idx]) loss = self.criterion(output_pos, output_neg) return loss </code></pre> <ul> <li>Trained session_embeddings and aid_embeddings with BPR loss</li> <li>What worked<ul> <li>Multiply the inverse of item_size by loss (Removing popularity bias)</li> <li>Multiply the inverse of 
ts_diff by loss (The closer the co-occur timing is, the more similar the embeddings become)</li></ul></li> </ul> <p><br></p> <h3>Item CF</h3> <ul> <li>Implemented item cf with polars</li> <li>Calculated the similarity weights for each item-item pair and retrieved candidates by getting the most similar items based on sum/min/max/mean of weights</li> <li>What worked<ul> <li>Multiply the inverse of item_size by weight (Removing popularity bias)</li> <li>Multiply the inverse of ts_diff by weight (The closer the co-occur timing is, the bigger the weight becomes)</li> <li>Multiply trend coefficient (The more ts is recent, the bigger the weight becomes)</li></ul></li> </ul> <p><br></p> <h3>User CF</h3> <ul> <li>Implemented in the same way as item cf</li> </ul> <p><br></p> <h1>2nd Stage - Re-Ranking</h1> <ul> <li>Feature<ul> <li>Created ~200 features in total</li> <li>pl.col(’ts’).agg([mean,min,max,std]).over({’session’ or ‘aid’})</li> <li>candidate_selected_(count, rank)</li> <li>candidate_(score, rank, selected)</li> <li>(inter, click, cart, order)_hour_mean.over({’session’ or ‘aid’})</li> <li>(inter, click, cart, order)_count.over({’session’ or ‘aid’})@(3d, 7d, 14d, 21d)</li> <li>aid_multi_(inter,click,cart,order)_prob</li></ul></li> <li>Model<ul> <li>LightGBM Ranker (lambdarank)</li> <li>CatBoost Classifier (Logloss)</li> <li>CatBoost Ranker (YetiRank)</li></ul></li> </ul> <p><br></p> <h1>What did not work well</h1> <ul> <li>Transformers</li> <li>GRU</li> <li>CDAE</li> <li>RecVAE</li> <li>Implicit(ALS, BPR)</li> <li>Clustering by item embeddings</li> <li>Popular items</li> <li>Stacking</li> <li>Pseudo Labeling</li> </ul> <p><br></p> <h1>Lessons Learned</h1> <ul> <li>polars is all you need to create features<ul> <li>At first, I created features with pandas and cudf but I switched to polars during the competition because polars is fast, memory efficient, and the syntax is easy to understand.</li></ul></li> <li>Creating a good baseline is very important<ul> <li>It took me 1~2 month to create a baseline pipeline</li> <li>Thanks to Chris’s great discussion post, I was able to create a 2-stage baseline from scratch (I will publish the code on GitHub soon)</li></ul></li> <li>Feature Store is useful<ul> <li>I stored intermediate files to parquet files, which saved me to run experiments quickly in a more reproducible way.</li></ul></li> </ul> <p><br></p> <h1>Score Timeline</h1> <table> <thead> <tr> <th>CV (1/20 sampled)</th> <th>CV</th> <th>Public LB</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>0.536</td> <td>0.569</td> <td>0.5799</td> <td>2-Stage Baseline</td> </tr> <tr> <td>0.539</td> <td>0.572</td> <td>0.5830</td> <td>Added Item2Vec ItemMF</td> </tr> <tr> <td>0.547</td> <td>0.581</td> <td>0.5930</td> <td>Added ItemCF UserMF</td> </tr> <tr> <td>0.550</td> <td>0.585</td> <td>0.5970</td> <td>Added variation to ItemCF</td> </tr> <tr> <td>0.552</td> <td>0.587</td> <td>0.5985</td> <td>Ensemble(LightGBM + CatBoost)</td> </tr> </tbody> </table> <p><br></p> <h1>Feature Importance</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F770256%2Febea15581ccf1d44b29d37db8e493fc5%2F68747470733a2f2f71696974612d696d6167652d73746f72652e73332e61702d6e6f727468656173742d312e616d617a6f6e6177732e636f6d2f302f3235393431372f31346465363937652d373162652d396462362d663337362d3066643762336537333630322e706e67.png?generation=1675210821369636&amp;alt=media" alt=""></p> <p>I linked it. 
Thanks!</p> <p>I didn't submit with only retrieval methods, but on my local CV the scores were 0.5596 (without re-ranking) and 0.585 (with re-ranking).</p> <p>We generated 300 candidates on average for each user. For the top-k numbers of each recall method, I could not come up with a good way to set them, so I tuned them heuristically just like hyperparameters.</p> <p>We concatenated all candidates vertically and dropped duplicates. Dropping duplicates is memory-heavy, so we ran this by chunk. </p> <p>Congrats on your solo gold medal! I will try BPR for a late submission.</p> <p>The values are from <code>model.feature_importance</code> of LightGBM.</p> <p>I uploaded my code to GitHub. Have fun!<br> <a href="https://github.com/kiccho1101/kaggle-otto2" target="_blank">https://github.com/kiccho1101/kaggle-otto2</a></p> <p>Thanks for the reminder. I’m doing some refactoring on my code and will publish it soon.</p>
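<p>The <code>BPRLoss</code> referenced in the ItemMF/UserMF snippets above is not shown in the write-up; a common formulation (and only a guess at what was actually used) is the negative log-sigmoid of the gap between positive and negative pair scores, as sketched below.</p>
<pre><code>import torch
import torch.nn as nn

class BPRLoss(nn.Module):
    """Bayesian Personalized Ranking loss: pushes positive pairs to score
    higher than (randomly permuted) negative pairs."""

    def __init__(self, eps=1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, pos_scores, neg_scores):
        # -log sigmoid(pos - neg), averaged over the batch
        return -torch.log(torch.sigmoid(pos_scores - neg_scores) + self.eps).mean()

# usage in the spirit of the snippets above:
# loss = BPRLoss()(model(aid_x, aid_pos), model(aid_x, aid_neg))
</code></pre>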
OTTO – Multi-Objective Recommender System
20th place solution (Transformer inside)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to thank OTTO for organising great competition. It's was very challenging and clean :) <br> Secondly, I want to thank all people who share their thoughts and ideas in kernels or forums. It helps a lot.<br> I will try to put my code on GitHub in a few days, need to clean up some deadline madness there.</p> <h2>TL;DR</h2> <ul> <li>use 3 types of co-occurence matrices (somewhat similar ideas to public notebooks, but different implementation and details)</li> <li>Bert MLM</li> <li>Matrix Factorization</li> <li>Catboost + PairLogitPairwise</li> </ul> <h3>LB Progression (public)</h3> <ul> <li>0.576 - history + 60 candidates all-to-all co-occurence matrix + lightgbm ranker</li> <li>0.579 - same, but 200 candidates</li> <li>0.583 - added a bunch of item features (conversions, popularity etc.)</li> <li><strong>0.593 - optimized co-occurence matrix</strong> (switched to sessions, added time weighing)</li> <li>0.595 - add buy2buy features and Transfomer candidates, switch to catboost</li> <li>0.597 - add buy2buy and type weighted co-occurence candidates, use more data for training (16/32 chunks)</li> <li>0.598 - use full data for training</li> <li>0.599 - use different candidates configs for clicks/buys, add MF candidates and scores</li> <li>0.600 - use different transformers</li> <li>0.601 - use x1.5 more candidates from each source for carts/orders</li> </ul> <h2>Candidates retrieval</h2> <p>I use <strong>max-recall@200</strong> as a retrieval quality measure, but candidates from different sources can be more or less common with user history, so it seems more fair to outer join candidates with history to calculate max-recall. 
<strong>0.598 LB is achievable only with co-visitation candidates</strong></p> <p>I used different combinations of candidates for clicks and carts/orders.</p> <pre><code>clicks: history_rank: cooc_rank: buy2buy_rank: cooc_tw_rank: mfc_rank: transformer_rank: carts/oders: history_rank: cooc_rank: cooc_tw_rank: buy2buy_rank: mfc_rank: transformer_rank: </code></pre> <h3>Co-visitation (all-to-all)</h3> <ul> <li>Use actual user’s “sessions” - consecutive series of events, if there’s a gap &gt; 900 seconds, it’s another session.</li> <li>Use exponential time weighing (more distant events are less significant) 0.99995^(abs(ts.x - ts.y))</li> <li>inverse rank weighing for user history events</li> </ul> <p>Hyperparameters like session gap, time base and rank weight function were optimized with optuna, so <strong>max-recall@200</strong> was like <strong>67.6</strong> for this method.</p> <h3>Co-visitation (type weighted)</h3> <ol> <li>Use 1 day gap with exponential time weighing (no “sessions”)</li> <li>10x weight for carts, 3x for orders</li> </ol> <h3>Co-visitation (buy2buy)</h3> <ol> <li>Use 2 weeks gap, exponential time weighing</li> <li>Use only carts and orders to calculate stats</li> </ol> <h3>Transformer (small BERT)</h3> <p>It’s was hugely inspired by the <a href="https://github.com/Chubasik/yacup_recsys_2022" target="_blank">winning solution</a> (by <a href="https://www.kaggle.com/chubasik" target="_blank">@chubasik</a>) of recent Yandex.Cup Recsys track (I took <a href="https://github.com/greenwolf-nsk/yandex-cup-2022-recsys" target="_blank">2nd place</a> there with classical 2-stage approach).</p> <p>The idea is to train Masked Language Model, and then predict the “fake” last masked item in user session. Also, I fed action types (click, order, cart) as token_type_ids (which is mainly used for context separation in NLP tasks).</p> <p>I trained MLM on train sessions, used 500k most carted items, <strong>max-recall@200</strong> was like <strong>0.66.</strong> This quality can be achieved with 3 epochs on full data, but it takes very long to train (7 hours on A100 GPU), so my experiments were very limited.</p> <p>Adding this source of candidates gave boost of 0.002 in local CV, and this model score was second most valuable feature for ranking model. However, LB change was less then 0.001, and I spend last two weeks figuring out what went wrong. Finally, I trained two different models (train_no_val + val and train + test), this probably helped a little, but CV-LB gap was still bigger than before.<br> There's 3 epochs training from scratch, with metrics every 0.5 epochs<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F172966%2F916debefa57949c714c09d06c2ac6f9e%2F2023-02-01%20%2021.49.24.png?generation=1675263027888603&amp;alt=media" alt="3 epoch training (eval every 0.5 epochs)"></p> <h3>Matrix Factorization</h3> <p>Matrix factorization on Pytorch with BPR-like loss and hard negative sampling. It achieves <strong>0.665 max-recall@200</strong>. There’s probably a huge space for improvement in weighing events by type and time and negative sampling strategies. Training time is <strong>7h</strong> in total for 20 epochs with AdamW optimizer. 
It almost made no difference to CV/LB score, but MF score was also one of the strongest features.</p> <h2>Features</h2> <p>Best model uses around 200 features.</p> <h3>User</h3> <ul> <li>counters by type, normalized counter, time-based features</li> <li>number of “sessions”, avg session length</li> <li>avg/min/max/std “popularity” of item in user history</li> </ul> <h3>Item</h3> <ul> <li>item popularity by type - counters and ranks</li> <li>tried some derivatives to detect “trending” items, but they didn’t work for me</li> <li>item click/cart/order conversion rates</li> </ul> <h3>User-Item</h3> <ul> <li>interaction stats with item (number of clicks/carts, last timestamp)</li> <li>all features from co-visit matrices and statistics (mean/min/max/std for score/rank/normalized score)</li> <li>score from MF model</li> <li>statistics on MF item-item similarity with user history</li> </ul> <h2>Ranker</h2> <p>I found out that <code>Catboost</code> with <code>PairLogitPairwise</code> loss is the best option for my data and final score is achievable without ensembling. Inference is fast (1-1.5h), but not as fast as LightGBM/XGBoost with cuml.ForestInference (thanks <a href="https://www.kaggle.com/buumoo" target="_blank">@buumoo</a> for the clue).</p> <p>Summary:</p> <ul> <li>3-fold CV</li> <li>Catboost, PairLogitPairwise, 5000 iterations (2-4 minutes per fold on A100)</li> <li>separate models for each target (drop sessions w/o target + 20% random downsampling)</li> <li>LightGBM / XGBoost give slightly worse results (with lambdarank objectives), ensembling makes no differences</li> </ul> <h2>Pipeline &amp; technical details</h2> <p>I used mostly CUDF for data preparation and feature engineering. GPU memory is a bottleneck here, so I split data in 32 chunks.</p> <p>From the beginning I tried to implement robust pipeline with DVC, and it worked well until the last days of the competition, when I decided to increase candidates count from avg 200 to avg 300 :)</p> <p>One of the features of DVC is that it keeps track of parameters and changes, and stores results in cache. For example, if you want to experiment with some source of candidates, others won’t be recalculated. And you could return to previous state of data because of cache, but it’s not practical when you’re dealing with big amounts of data.<br> Here's how pipeline looks (below is almost the same part for test):<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F172966%2Fa51cd925311210365102fb8bb8e7211e%2F2023-02-01%20%2021.54.43.png?generation=1675263329789877&amp;alt=media" alt=""></p> <h2>Takeaways &amp; fails</h2> <ul> <li>try to keep data size as low as possible when actively trying ideas (firstly, I went from 60 to 200 candidates for 0.576 → 0.579 boost, then i went from 1/3 to full data for 0.596 → 0.597).</li> <li>as it’s multi-objective recommender system, I tried to use predictions of carts models as a feature for orders, but it did not work</li> <li>ensembling did not work after 0.6 LB. I used inverse rank averaging different rankers (e.g. catboost &amp; lightgbm) on different candidates setups.</li> <li>computational and personal time investment in Transformer models was not great in terms of leaderboard score, but knowledge I got is priceless. Also, it was the first time I tried Weight &amp; Biases for DL experiments tracking, and it’s awesome.</li> <li>this comp is hard, many ideas just don’t work. 
I think it took 3x the time and effort of H&amp;M for almost the same LB position</li> </ul>
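<p>For readers new to co-visitation matrices, here is a hedged pandas sketch of the exponential time weighting described above (pair weight 0.99995^|ts_x - ts_y|). It ignores the author's session splitting, cudf chunking and rank weighting, and assumes timestamps in seconds.</p>
<pre><code>import pandas as pd

def covisitation_weights(events, decay=0.99995, top_n=20):
    """Time-weighted co-visitation scores from an event log with columns
    [session, aid, ts] (ts in seconds): pairs within the same session get
    weight decay ** |ts_x - ts_y|, summed over all sessions."""
    pairs = events.merge(events, on="session", suffixes=("_x", "_y"))
    pairs = pairs[pairs["aid_x"] != pairs["aid_y"]]
    pairs["wgt"] = decay ** (pairs["ts_x"] - pairs["ts_y"]).abs()
    scores = pairs.groupby(["aid_x", "aid_y"], as_index=False)["wgt"].sum()
    scores = scores.sort_values(["aid_x", "wgt"], ascending=[True, False])
    return scores.groupby("aid_x").head(top_n)   # top co-visited aids per aid

events = pd.DataFrame({
    "session": [1, 1, 1, 2, 2],
    "aid": [10, 20, 30, 10, 30],
    "ts": [0, 60, 3600, 0, 120],
})
print(covisitation_weights(events))
</code></pre>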
OTTO – Multi-Objective Recommender System
226th (?!) Place Solution & Two-cents from a First-timer
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Rayane! Good to see I'm not alone haha. And congratulations on your first medal!</p> <h2>FOREWORD</h2> <p>My work in this competition is nowhere near excellent compared to other more comprehensive approaches that have been shared. In addition, it is largely based on publicly available notebooks and ideas in the competition. Therefore, rather than solution-sharing, this document serves two other main purposes:</p> <ul> <li>Providing other newbies (like me) with a few tips on getting started with Kaggle real competitions</li> <li>Memorializing my personal journey to the first medal</li> </ul> <p>That said, my sharing consists of two parts</p> <ul> <li>My workflow in this competition</li> <li>What I have learned from it</li> </ul> <h2>CREDITS</h2> <p>First, big thanks for Kaggle and OTTO team for this learning opportunity - I have learned so much!</p> <p>Secondly, my achievement and learning in this competition owe primarily to many other participants who have shared their approaches, ideas and feedbacks, especially</p> <ul> <li><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for sharing his <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">co-visitation matrix &amp; rule-based ranker notebook</a> and <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/370210" target="_blank">suggestions on building a model-based ranker</a></li> <li><a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> for the <a href="https://www.kaggle.com/datasets/radek1/otto-train-and-test-data-for-local-validation" target="_blank">train-validation split dataset</a> and valuable EDA</li> <li><a href="https://www.kaggle.com/vbmokin" target="_blank">@vbmokin</a> and <a href="https://www.kaggle.com/karakasatarik" target="_blank">@karakasatarik</a> on their awesome solution ensemble notebooks (<a href="https://www.kaggle.com/code/karakasatarik/0-578-ensemble-of-public-notebooks" target="_blank">vbmokin's</a>, <a href="https://www.kaggle.com/code/vbmokin/0-578-ensemble-of-public-notebooks-upgrade?scriptVersionId=116497334" target="_blank">karakasatarik's</a>).</li> </ul> <h2>I. Solution Workflow</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F982488%2F8b2e1abf8cff557f28eca83e99654f7d%2FOTTO%20Workflow.png?generation=1675527772718204&amp;alt=media" alt=""></p> <h3>1. Item Co-visitation Matrices</h3> <p>This part is entirely based on <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">Chris' notebook</a> above.</p> <p>There exist products that are frequently clicked/carted/ordered together. 
A co-visitation matrix, using a pre-defined rule, gives a weight <code>W</code> to a pair of products <code>A</code> &amp; <code>B</code> to signify such relationship between the products.<br> With this notion, my solution includes below three co-visitation matrices mentioned in Chris' notebook</p> <ol> <li>Order matrix: Click/cart/order to click/cart/order with type weighting</li> <li>Buy2buy matrix: Cart/order to cart/order</li> <li>Click matrix: click/cart/order to clicks with time weighting</li> </ol> <p><strong>Tests that did not work</strong><br> The original notebook truncates session to the last 30 events (<code>tail=30</code>) and select for the three matrices top 15, 15 and 20 items most associated with each <code>aid</code>. I additional tested <code>tail</code> of 35 and 40, and top 40-40-50, and finally used <code>tail=40</code> and top 40-40-50. However, this turns out not helpful for final recall scores. The extra information seems to be more noisy than helpful in this case.</p> <p><strong>Code</strong></p> <ul> <li><a href="https://www.kaggle.com/code/hoangnguyen719/otto-tr-matrixv2-tail40-top404050-w136" target="_blank">Training set's matrix notebook</a></li> <li><a href="https://www.kaggle.com/code/hoangnguyen719/otto-te-matrixv2-tail40-top404050-w136" target="_blank">Test set's matrix notebook</a></li> <li>I did try recreating Chris' co-visitation matrices (and candidate selection) in <a href="https://www.kaggle.com/code/hoangnguyen719/otto-tr-cand40-matrixv1-tail40-top40-40-50" target="_blank">this notebook</a>, which better utilizes <code>cuDF</code> and therefore shortens running time by more than half.</li> </ul> <h3>2. Feature Generation</h3> <p>Chris in <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/370210" target="_blank">his discussion</a> suggests three sets of feature to be created</p> <ul> <li>Item features</li> <li>Session features</li> <li>Item-session interaction features</li> </ul> <p>Based on this idea, I have created the following features</p> <ul> <li>Item features (for each <code>aid</code>)<ul> <li>Count of events (click/cart/order)</li> <li>Sum of event weight</li> <li>Quarter of day (QoD) with most events (0-3)</li> <li>Day of week (DoW) with most events (0-6)</li></ul></li> <li>User features (for each <code>session</code>)<ul> <li>Count of events (click/cart/order) and interacted items (<code>aid</code>)</li> <li>Sum of event weight</li> <li>QoD with most events (0-3)</li> <li>DoW with most events (0-6)</li> <li>Number of days with events</li> <li>Days from first to last events</li></ul></li> <li>User-item features (for each <code>session</code>-<code>aid</code> pair)<ul> <li>Count of events (click/cart/order) and interacted items (<code>aid</code>)</li> <li>Sum of event weight</li> <li>QoD with most events in both categorical (0-3) and one-hot encoded (0/1 for each) format</li> <li>DoW with most events in both categorical (0-6) and one-hot encoded (0/1 for each) format</li> <li><code>last_n</code> = <code>item_chronological_rank / user_total_event_count</code></li> <li><code>last_ts</code> = <code>(user_item_last_timestamp - start_week_timestamp) / (end_week_timestamp - start_week_timestamp)</code></li></ul></li> </ul> <p>However, due to Kaggle's limited computational resources, only a subset of the features was finally selected.</p> <p><strong>Code</strong></p> <ul> <li><a href="https://www.kaggle.com/code/hoangnguyen719/otto-tr-cand40-v2-tail40-top40-40-50" target="_blank">Training set's candidate selection &amp; 
feature generation notebook</a></li> <li><a href="https://www.kaggle.com/code/hoangnguyen719/otto-te-cand40-v2-tail40-top40-40-50" target="_blank">Test set's candidate selection &amp; feature generation notebook</a></li> </ul> <h3>3. Candidate Selection</h3> <p>This section's logic is partly based on that of <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">Chris' notebook</a> above.</p> <p>For every session, I select top <code>X</code> most relevant items in each event type (click, cart and order). "Relevancy" is scored using a number of rules:</p> <ul> <li>Number of times the session has clicked/carted/ordered the items</li> <li>Sum of co-visitation weight</li> <li>Whether the item is a top-clicked/bought item of the week</li> </ul> <p>Candidate selection is a bottleneck of this solution and needs some balancing; having too few candidates results in low recall no matter how good our ranker is, but having too many will exceed computational limit. Therefore, I tested <code>X</code> for three different values 30, 35 and 40. <code>X</code> is finally set at 40.</p> <p><strong>Code</strong></p> <ul> <li><a href="https://www.kaggle.com/code/hoangnguyen719/otto-tr-cand40-v2-tail40-top40-40-50" target="_blank">Training set's candidate selection &amp; feature generation notebook</a></li> <li><a href="https://www.kaggle.com/code/hoangnguyen719/otto-te-cand40-v2-tail40-top40-40-50" target="_blank">Test set's candidate selection &amp; feature generation notebook</a></li> </ul> <h3>4. Ranker</h3> <p>Two different ranking methods are used for rankers<br> <strong>(1) Rule-based ranker in Chris' notebook</strong><br> Scores (CV score not computed due to limited time)</p> <table> <thead> <tr> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>0.54777</td> <td>0.54769</td> </tr> </tbody> </table> <p>Notebooks</p> <ul> <li>Inference: <a href="https://www.kaggle.com/code/hoangnguyen719/otto-infer-all-v2-can40-v2-tail40-top40405" target="_blank">here</a></li> </ul> <p><strong>(2) <code>XGBRanker</code> with <code>rank:pairwise</code> objective; one single model for each event type</strong><br> Scores</p> <table> <thead> <tr> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>0.512189</td> <td>0.56940</td> <td>0.56884</td> </tr> </tbody> </table> <p>Notebooks</p> <ul> <li>Training: <a href="https://www.kaggle.com/code/hoangnguyen719/otto-trainingclk-can40-matrixv2-tail40-top404050" target="_blank">click training notebook</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/otto-trainingcart-can40-matrixv2-tail40-top404050" target="_blank">cart training notebook</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/otto-trainingord-can40-matrixv2-tail40-top404050" target="_blank">order training notebook</a></li> <li>Inference: <a href="https://www.kaggle.com/code/hoangnguyen719/otto-infer-all-v1-can40-v2-tail40-top404050" target="_blank">here</a></li> </ul> <p><strong>Notes</strong></p> <ul> <li>Method (2) was overfitting and requires a lot of hyper-parameter tuning.</li> <li>In LB score, method (2) managed to beat (1) by 0.0216, so a model-based ranker is, understandably, better than a rule-based approach.</li> <li>Method (1) underperforms in LB compared to Chris' original rule-based reranker notebook. 
The three differences between method (1) and Chris' are<ul> <li>Session truncation: Method (1)'s co-visitation matrices use last 40 events of each session, while Chris' use last 30.</li> <li>Number of items in co-visitation matrice: method (1)'s matrices use top 40, 40 and 50 items for the three matrices mentioned above, while Chris' use top 15, 15 and 20.</li> <li>Event type weight: method (1) uses <code>type_weight = {0:1, 1:3, 2:6}</code> while Chris uses <code>type_weight = {0:1, 1:6, 2:3}</code>. However, in a separate test of mine this disparity proves not to affect the score by much.</li></ul></li> </ul> <h3>5. Solution Ensemble</h3> <p>Due to the poor performance of my individual rankers, I ensemble them and other publicly available submissions (credit to <a href="https://www.kaggle.com/code/karakasatarik/0-578-ensemble-of-public-notebooks" target="_blank">@karakasatarik's notebook</a> and <a href="https://www.kaggle.com/code/vbmokin/0-578-ensemble-of-public-notebooks-upgrade?scriptVersionId=116497334" target="_blank">@vbmokin's notebook</a>), for better score. I weight each submission using their public LB score. My final two submissions are:<br> <strong>(A) Ensemble of method (2) above and public submissions</strong><br> <strong>(B) Ensemble of above two owned methods and public submissions</strong></p> <p>Below are the scores of all public submissions and ensembles</p> <p></p><ul><br> <li><p>Public LB</p><p></p> <table> <thead> <tr> <th><a href="https://www.kaggle.com/karakasatarik" target="_blank">@karakasatarik</a>'s ensemble</th> <th><a href="https://www.kaggle.com/vbmokin" target="_blank">@vbmokin</a>'s ensemble</th> <th>Submission (A)</th> <th>Submission (B)</th> </tr> </thead> <tbody> <tr> <td>0.57843</td> <td>0.57821</td> <td>0.57884</td> <td>0.57844</td> </tr> </tbody> </table> <p></p></li><br> <li><p>Private LB<br></p> <table> <thead> <tr> <th><a href="https://www.kaggle.com/karakasatarik" target="_blank">@karakasatarik</a>'s ensemble</th> <th><a href="https://www.kaggle.com/vbmokin" target="_blank">@vbmokin</a>'s ensemble</th> <th>Submission (A)</th> <th>Submission (B)</th> <th><br></th> </tr> </thead> <tbody> <tr> <td>0.57808</td> <td>0.57787</td> <td>0.57855</td> <td>0.57811</td> <td><p></p></td></tr></tbody></table></li> </ul> My submission (B) has higher scores than did the public ensembles for private LB, while submission (A) outperforms all other ensembles in both public and private LBs. This means that both methods, though achieving relatively poor scores, did add some valuable info to the final solution (the 0.57808 -&gt; 0.57855 improvement is equal to a boost from rank 653th to rank 255th!). <h2>II. What I have learned as a Kaggle competition first-timer</h2> <p>It can be seen from the above sections that my work depended largely on other participants' help and support - so thank you! Below are what I have learned from this valuable experience - hope it'd be helpful for others too!</p> <ul> <li><strong>Read the discussion and notebook forums</strong> - especially if you're new to Kaggle/ML. There are always experienced participants sharing their ideas, suggestions and feedbacks, so if you are looking for a place to start the race, this is it! 
And check back once in a while on notebook/discussion that is helpful for you - the comment section may give you additional insights or unexpected bug-fixings that are no less valuable than the notebook.<br> I myself in this competition would definitely not have got the bronze if it was not thanks to the knowledge shared by others.</li> <li><strong>Try as many things as you can</strong>. I read through solutions of some of the top achievers (<a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/383013" target="_blank">Top 4</a>, <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/382802" target="_blank">Top 6</a>), and realized that they all tried more approaches (Matrix Factorization, Word2Vec) in more depth (hundred of co-visitation matrices). Of course they have done better in other aspects too (preprocessing, train-val splitting, validation, etc.), but such multi in-depth methods alone already lead to more results in better robustness. When ensembled, their solutions understandably far outperform mine.</li> <li><strong>Know every line of code</strong> you write. I lost about a month getting discouraged by CV scores going up and down unexpectedly, only to later found out a line of code misplaced in my validation step. <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/382277" target="_blank">This can happen to anyone</a>, so make sure you understand every line of code in your work.</li> </ul>
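<p>The submission ensembling step above is not spelled out in code; one common scheme used in the public OTTO ensemble notebooks is reciprocal-rank voting weighted by each submission's public LB score. The sketch below assumes the standard <code>session_type</code>/<code>labels</code> submission format and a hypothetical <code>ensemble_submissions</code> helper.</p>
<pre><code>import pandas as pd
from collections import defaultdict

def ensemble_submissions(subs, weights, k=20):
    """Weighted reciprocal-rank voting over OTTO-style submissions.

    subs: list of DataFrames with columns [session_type, labels], where labels
          is a space-separated string of up to 20 aids in ranked order.
    weights: one weight per submission (e.g. its public LB score).
    """
    votes = defaultdict(lambda: defaultdict(float))
    for sub, w in zip(subs, weights):
        for session_type, labels in zip(sub["session_type"], sub["labels"]):
            for rank, aid in enumerate(labels.split()):
                votes[session_type][aid] += w / (rank + 1)   # earlier rank, bigger vote
    rows = []
    for session_type, aid_votes in votes.items():
        top = sorted(aid_votes, key=aid_votes.get, reverse=True)[:k]
        rows.append({"session_type": session_type, "labels": " ".join(top)})
    return pd.DataFrame(rows)

# hypothetical usage, weighting each submission by its public LB score:
# blended = ensemble_submissions([sub_a, sub_b], weights=[0.578, 0.569])
</code></pre>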
OTTO – Multi-Objective Recommender System
28th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we would like to thank OTTO for organizing this competition. It was the first RecSys experience for most of us and we learned a lot. I'll try to talk about the things we've all tried at each stage of our solution, and the ones that didn't work at the end.</p> <p>In addition, <a href="https://www.kaggle.com/tetsuro731" target="_blank">@tetsuro731</a> has opened a separate thread for <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/382955" target="_blank">the stacking method he applied</a></p> <p><strong>In addition, the links to our public repos are below:</strong><br> Anil: <a href="https://github.com/nlztrk/OTTO-Multi-Objective-Recommender-System" target="_blank">GitHub Repo</a><br> Gunes: <a href="https://github.com/gunesevitan/otto-multi-objective-recommender-system" target="_blank">GitHub Repo</a> <br> Tetsuro: <a href="https://github.com/tetsuro731/OTTO-kaggle-tetsuro731" target="_blank">GitHub Repo</a> </p> <hr> <h2>Generating Covisitation Matrices</h2> <h4>Anil &amp; Ayberk</h4> <ul> <li>Generated <strong>Top-100</strong> AIDs for three well-known covisitation schemes given in: <a href="https://www.kaggle.com/code/tuongkhang/otto-pipeline2-lb-0-576" target="_blank">link</a></li> </ul> <h4>Gunes</h4> <ul> <li>7 covisitation matrices like the ones in public but slightly different (click/cart/order weighted, only click/cart, click/order, cart/order and time weighted)</li> <li>Created a counter from <strong>7</strong> covisitation matrices and <strong>50</strong> nearest neighbor of last aid in session using fasttext model + annoy</li> </ul> <h4>Tetsuro</h4> <ul> <li>Generated co-visitation matrix with several parameters/weights based on the public notebook.</li> </ul> <hr> <h2>Splitting the Data as Train/Val</h2> <h4>Anil &amp; Ayberk</h4> <ul> <li>Used the <a href="https://www.kaggle.com/datasets/radek1/otto-train-and-test-data-for-local-validation" target="_blank">local validation scheme</a> given by <a href="https://www.kaggle.com/radek1" target="_blank">Radek</a>. The local train data was also splitted into two by sessions in order to avoid possible leakage during model training and score calculation. The implementation can be seen in the corresponding notebook.</li> </ul> <h4>Gunes</h4> <ul> <li>Radek’s split is used as the validation set. Every approach is applied on two different datasets; training + validation and entire dataset. First one is used for the validation and second one is used for the submission.</li> </ul> <h4>Tetsuro</h4> <ul> <li>Didn’t use 5 weeks and used 4 weeks for training because the data for the prediction have only 4 weeks.</li> </ul> <hr> <h2>Generating Co-Occurrence Matrices</h2> <h4>Anil</h4> <ul> <li>Generated all pair occurrences for all AIDs among all sessions for all action pairs (click-cart, cart-order, etc.). 
This is used for feature extraction later.</li> </ul> <hr> <h2>Candidate Generation</h2> <h4>Anil &amp; Ayberk</h4> <ul> <li>Used the <a href="https://www.kaggle.com/code/tuongkhang/otto-pipeline2-lb-0-576" target="_blank">public candidate generation script</a> and generated <strong>100</strong> candidates for all action types. The rank of the candidates generation with several parameters/weights are also used for the features for the ranking model.</li> </ul> <h4>Gunes</h4> <ul> <li>Generated co-visitation matrices slightly different than publicly shared ones<ul> <li>12 hour difference 15x click weighted</li> <li>12 hour difference 15x cart weighted</li> <li>12 hour difference 15x order weighted</li> <li>12 hour difference only click and cart</li> <li>12 hour difference only click and order</li> <li>14 day difference only cart and order</li> <li>24 hour difference time weighted</li></ul></li> <li>Trained and tuned a FastText (0.547 lb score) skipgram which worked better than gensim word2vec and it was faster because of c++ bindings</li> <li>Created a counter from 7 covisitation matrices and 50 nearest neighbors of last aid in the session using FastText model and annoy (100 trees)</li> <li>80-100 candidates are selected using the approach above and session unique aids are concatenated to candidates for each session</li> </ul> <h4>Tetsuro</h4> <ul> <li>Created <strong>Top-100</strong> candidates.</li> <li>Added click/cart/order Top-50 Popular items for each candidate. Finally, 150 candidates are generated which are used for the next ranking phase. The rank of the candidates generation with several parameters/weights are also used for the features for the ranking model.</li> </ul> <hr> <h2>Feature Extraction</h2> <p>Generated features for following data subsets:</p> <ul> <li>Items</li> <li>Sessions</li> <li>Item-Session Combinations</li> <li>Covisitation and Co-Occurrence Statistics</li> </ul> <h3>Item Features</h3> <h4>Anil</h4> <ul> <li>Statistics generated from hour, weekday and weekend status</li> <li>Count features (bool for &gt;0 and &gt;1, rank among all)</li> <li>Unique count features (unique count and rank among all)</li> <li>Distribution of action types in percentiles</li> <li>Inclusion rate by all sessions</li> <li>Occurrence rate in the last week of data</li> <li>Average number of times seen in the same sessions at different times</li> <li>All of the above with filtered separately for all action types</li> </ul> <h4>Gunes</h4> <ul> <li>type mean/std</li> <li>day of week mean</li> <li>hour mean</li> <li>is session start/end mean</li> <li>candidate score mean/std/min/max</li> <li>timestamp ratio/difference</li> </ul> <p>All of those aggregations generated for each type, last week, last 1, 2, 3, 4, 5, 6, 7 days (all of the aid aggregations with count and nunique are replaced with rank percentile because two datasets had different sizes)</p> <h4>Ayberk</h4> <ul> <li>Unique count features</li> <li>Count features</li> <li>Time features</li> <li>Rank of a item in sessions</li> <li>How many times item clicked after bought</li> <li>Session based features like how many time item clicked per session</li> <li>Weekly change rates</li> <li>click-to-cart , cart-to-buy ratios</li> </ul> <p>These aggregations generated per last 2 week and type.</p> <h4>Tetsuro</h4> <ul> <li>Popular item feature</li> <li>The number/rank/unique number of click/cart/order during 1/2/4 weeks</li> <li>Counts/Unique counts of click/cart/order for each session</li> <li>The ratio of these features.</li> </ul> <h3>Session 
Features</h3> <h4>Anil</h4> <ul> <li>Statistics generated from hour, weekday and weekend status</li> <li>Count features (bool for &gt;0 and &gt;1, rank among all)</li> <li>Unique count features (unique count and rank among all)</li> <li>Distribution of action types in percentiles</li> <li>Length of the session</li> <li>Features generated by extracting mini-sessions according to the time differences between actions</li> <li>Statistics generated from multiple purchases made in a single basket</li> <li>Rates of taking products to the next action within the same session (click-&gt;cart, cart-&gt;order)</li> <li>All of the above with filtered separately for all action types</li> </ul> <h4>Gunes</h4> <ul> <li>Aid count</li> <li>Unique count</li> <li>Count of last aid</li> <li>Type of last aid</li> <li>Last aid itself</li> <li>Timestamp ratio/difference</li> </ul> <h4>Ayberk</h4> <ul> <li>Session unique aid count, length, time features</li> <li>Session click-to-cart , cart-to-buy ratios</li> </ul> <p>These aggregations generated per type.</p> <h4>Tetsuro</h4> <ul> <li>Mean number of click/cart/order counts/unique counts</li> <li>Session duration (min and max diff of session timestamp)</li> </ul> <h3>Item-Session Combination Features</h3> <h4>Anil</h4> <ul> <li>Statistics generated from hour, weekday and weekend status</li> <li>Count features (bool for &gt;0 and &gt;1, rank among all)</li> <li>Unique count features (unique count and rank among all)</li> <li>Distribution of action types in percentiles</li> <li>Reversed order of the item in the session</li> <li>Time difference between the latest occurrence of the item and the start - end of the session</li> </ul> <h4>Gunes</h4> <ul> <li>Candidate count in session</li> <li>Candidate click, cart and order count in session</li> </ul> <h4>Ayberk</h4> <ul> <li>Count features</li> <li>How much time passed since last action.</li> <li>Inverse rank place in session</li> </ul> <p>These aggregations generated using all of the types and per type.</p> <h4>Tetsuro</h4> <ul> <li>Diff/ratio between item and session which are related to similarity.</li> </ul> <h3>Covisitation and Co-Occurrence Statistics</h3> <h4>Anil</h4> <ul> <li>Statistics generated from covisitation and co-occurrence scores between candidate items and items in the session's history</li> </ul> <h4>Ayberk</h4> <ul> <li>Co-visitation scores used.</li> </ul> <hr> <h2>Training</h2> <h4>Anil</h4> <ul> <li><strong>Model:</strong> XGBoost</li> <li><strong>Fold Scheme:</strong> 5-Fold (Grouped by "session")</li> <li><strong>Negative Sampling Fraction:</strong> 15%</li> <li>Dropped sessions with no positive labels</li> <li>Used the first half of splitted local training set</li> </ul> <h4>Gunes</h4> <ul> <li><strong>Model:</strong> XGBoost and LightGBM</li> <li><strong>Fold Scheme:</strong> 5-Fold (Grouped by "session")</li> <li><strong>Negative Sampling Fraction:</strong> 30%</li> <li>Dropped sessions with no positive labels</li> </ul> <h4>Ayberk</h4> <ul> <li><strong>Model:</strong> LightGBM</li> <li><strong>Fold Scheme:</strong> 5-Fold (Grouped by "session")</li> <li><strong>Negative Sampling Fraction:</strong> 20%</li> <li>Early stopping used.</li> <li>Dropped sessions with no positive labels</li> <li>Used the first half of splitted local training set</li> </ul> <h4>Tetsuro</h4> <ul> <li><strong>Model:</strong> LightGBM<ul> <li>Metrics: nDCG@20</li></ul></li> <li><strong>Fold Scheme:</strong> 5-Fold (Grouped by "session")<ul> <li>Checked recall for each fold</li></ul></li> <li><strong>Negative 
Sampling:</strong> sampling negatives to 2.5% positives.</li> <li>Dropped sessions with no positive labels</li> </ul> <hr> <h2>Inference</h2> <h4>Anil</h4> <ul> <li>Used mean blending</li> <li>Executed on the second half of the split local training set when running local validation</li> </ul> <h4>Gunes</h4> <ul> <li>Used mean blending</li> </ul> <h4>Ayberk</h4> <ul> <li>Used mean blending</li> <li>Executed on the second half of the split local training set when running local validation</li> <li>Filled the tail of the orders predictions with the first non-duplicate carts predictions (last 2 slots)</li> </ul> <h4>Tetsuro</h4> <ul> <li>Used mean blending</li> </ul> <hr> <h2>Submission Blending</h2> <p>Our blending approach was pretty straightforward.</p> <ul> <li>Scale each prediction with a robust scaler</li> <li>Outer join the different predictions on session and aid</li> <li>Fill missing values with 0</li> <li>Do a weighted sum based on OOF scores (see the sketch at the end of this post)</li> </ul> <hr> <h2>Didn't Work &amp; Improve</h2> <h4>Anil</h4> <ul> <li>Weekday-Specific aggregations</li> <li>Word2Vec features</li> <li>Different models (CatBoost, LGBM)</li> <li>Comprehensive pair scores (because of OOM errors)</li> <li>Max-median blending</li> <li>Early-stopping</li> <li>Higher negative fractions</li> <li>Different objective metrics</li> <li>Different fold counts</li> </ul> <h4>Gunes</h4> <ul> <li><strong>Collaborative filtering:</strong> Either the scores aren't good or the embeddings become too large and inference is too slow</li> <li><strong>Matrix factorization:</strong> Same as collaborative filtering</li> <li><strong>Doc2vec:</strong> Model doesn't learn anything</li> <li><strong>Sequential models from recbole library (GRU4Rec, BERT4Rec, etc.):</strong> Training is too slow and they are not competitive</li> <li><strong>General models from recbole library (BPR, CF and MF models):</strong> Training is fast but inference is too slow because the models don't scale</li> <li><strong>Models from surprise library:</strong> Same as the general models from the recbole library</li> <li><strong>TF-IDF + pairwise similarity:</strong> Very slow inference time since I was using argsort to get the top 20 most similar aids</li> </ul> <h4>Ayberk</h4> <ul> <li>Daily aid features</li> <li>User2User Similarities</li> <li>Tried an MLP architecture with various settings (pairwise type prediction, session-based multi-class),<br> but it couldn't beat the heuristic approach by Chris, and as I used only Kaggle resources it was hard to keep improving, so I dropped this approach at an early stage.</li> <li>Candidate generation using item embeddings from the MLP architecture</li> <li>Higher negative fractions</li> <li>Candidate ensemble with best @20 recall and best @N recall, as it would rank candidates better, but it didn't work</li> </ul> <h4>Tetsuro</h4> <ul> <li>Weekday-Specific aggregations</li> <li>Optimizing nDCG: it improved, but local recall did not</li> <li>Wanted to try a W2V (word2vec)-based method but I didn't have time to do it.</li> </ul>
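<p>As referenced above, here is a minimal pandas sketch of the submission-blending recipe (robust-scale each model's scores, outer join on session and aid, fill missing values with 0, weighted sum). The column names and the helper function are illustrative assumptions, not the team's exact code.</p> <pre><code>import pandas as pd
from sklearn.preprocessing import RobustScaler

def blend(preds, weights):
    """preds: list of DataFrames with columns [session, aid, score]."""
    scaled = []
    for i, df in enumerate(preds):
        df = df.copy()
        # robust-scale each model's raw scores before mixing them
        df[f"score_{i}"] = RobustScaler().fit_transform(df[["score"]]).ravel()
        scaled.append(df[["session", "aid", f"score_{i}"]])
    merged = scaled[0]
    for df in scaled[1:]:
        merged = merged.merge(df, on=["session", "aid"], how="outer")
    merged = merged.fillna(0.0)
    score_cols = [c for c in merged.columns if c.startswith("score_")]
    merged["blend"] = sum(w * merged[c] for w, c in zip(weights, score_cols))
    # keep the 20 best-scoring aids per session
    return (merged.sort_values(["session", "blend"], ascending=[True, False])
                  .groupby("session").head(20))
</code></pre> <p>The outer join keeps session-aid pairs that only some models scored, which is why the explicit zero fill is needed before the weighted sum.</p>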
OTTO – Multi-Objective Recommender System
2nd Place Solution(ONODERA part)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you for launching and organizing this terrific competition <a href="https://www.kaggle.com/pnormann" target="_blank">@pnormann</a>.</p> <p>I wanted to be first, but I don't really care so far.<br> I'd like to explain my part.</p> <h3>Candidates</h3> <p>When I teamed up with <a href="https://www.kaggle.com/psilogram" target="_blank">@psilogram</a>, he already has great candidates compared to mine.<br> So I decided to use his candidates.</p> <h3>Features</h3> <h4>Item2item Features</h4> <p>Also <a href="https://www.kaggle.com/psilogram" target="_blank">@psilogram</a> already has splendid features, but there is room for improvement regarding CF features.<br> So I focused on item2item features and that consists of</p> <ul> <li>count</li> <li>time difference</li> <li>sequence difference(invented by <a href="https://www.kaggle.com/psilogram" target="_blank">@psilogram</a>)</li> <li>2 kind of weighted above features</li> <li>Aggregation of above<br> In total, I got 93 features. After this, I could generate almost 5k features using different combination(e.g. click to order, cart to order, etc…)<br> I use just 400~500 features eventually.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F209f3518a4ec064f4bbd8e3c0d1677d0%2F2023-02-17%205.10.31.png?generation=1676579857970702&amp;alt=media" alt=""></li> </ul> <h4>1st stage prediction Features</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F6041fac8d79269776d88f8350774950a%2F2023-02-17%205.10.47.png?generation=1676579712049232&amp;alt=media" alt=""></p> <h4>Pseudo Event Features</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2Feac4b9290d28bf9725b59969745f0459%2F2023-02-17%205.11.11.png?generation=1676579770675159&amp;alt=media" alt=""></p> <h3>Models</h3> <p>I used XGBoost and CatBoost.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2Ffc93d88efbabc29bb619f7a8d5cd8858%2F2023-02-17%205.10.08.png?generation=1676579918058966&amp;alt=media" alt=""></p> <h3>Pipeline</h3> <p>After that 2nd stage, we blended our result ( <a href="https://www.kaggle.com/senkin13" target="_blank">@senkin13</a>, <a href="https://www.kaggle.com/h4211819" target="_blank">@h4211819</a> ) by rank.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F7c88b62d96eb550b66d392c4a9d46413%2F2023-02-03%209.07.35.png?generation=1675382887632198&amp;alt=media" alt=""><br> <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/382839" target="_blank">my teammate's solution</a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2Fb89de93b7508e672fc007b50232dea1f%2F2023-02-17%205.09.49.png?generation=1676580022020358&amp;alt=media" alt=""></p> <h3>Acknowledgments</h3> <p>If I hadn't used cuDF and cuML, I couldn't manage a lot of experiments.<br> Thanks <a 
href="https://rapids.ai/index.html" target="_blank">RAPIDS</a><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F2fad8cb1ed63c9d91ed4822fbdf133e4%2FRAPIDS-logo-white.png?generation=1675217750447580&amp;alt=media" alt=""></p> <p>Sure. Let's say a session has clicked aid [a, b, c, d] and sequence is [0, 1, 2, 3].<br> We can generate distance features from this.<br> e.g. distance between aid d and aid a is 3(3-0)</p> <p>Yes. FYI, the primary keys of my item2item features are [aid_x(aid), type_x, aid_y(aid_candidate), type_y], so I could aggregate them again. That's why I had a lot of features.</p> <p>Yeah, exactly</p> <ol> <li>same as 1st stage. All events.</li> <li>Just 1 hour or so.</li> <li>DGX(4X V100)</li> </ol> <p>I added some figures. I hope this helps you understand.</p>
OTTO – Multi-Objective Recommender System
34th (ex 37th) Place Solution (Polars is here to stay !)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to Otto and Kaggle for hosting this competition with such a challenging dataset.<br> A special thanks to the kagglers who shared their work, making the competition even more stimulating, and - as usual - congratulations to the winners and everyone who enjoyed the competition!</p> <p>This competition introduced me to the Merlin dataloader - a super efficient NVIDIA <a href="https://github.com/NVIDIA-Merlin/dataloader" target="_blank">dataloader</a> for recommender systems (thank you <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>!) and <a href="https://www.pola.rs/" target="_blank">Polars</a>: a blazingly fast DataFrame library for huge datasets that natively supports multithreading and whose syntax is - for me - more intuitive than pandas. During the competition I was able to easily replace all the pandas code with polars code with a significant gain in performance (RAM and CPU).</p> <p>I think Polars is a library that will become more and more important in the data ecosystem.</p> <h2>My solution</h2> <p>This is the visualization of the steps of my solution:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F214989%2Fb0b946b4770066127596d3b62d326f34%2Fjourney.png?generation=1675611218130910&amp;alt=media" alt=""></p> <h3>Validation</h3> <p>1_000_000 random truncated sessions out of the 1_801_251 sessions from the 4th week</p> <h3>Candidates Selection (Heuristic Model) (val:0.570, test:0.576)</h3> <p>Generated covisitation matrices for all aids, carts and orders and computed the probability <strong>P(aid,next-aid)</strong> = the probability that <strong>next-aid</strong> follows <strong>aid</strong> in a (window of a) session containing <strong>aid</strong> (see the Polars sketch at the end of this post).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F214989%2F92de3828cb209e69ad8227cce705dabe%2Fprobabilities.png?generation=1675611468571181&amp;alt=media" alt=""></p> <p>Calculated the probability that an aid/cart/order is present more than once in a (window of a) session.</p> <p>The heuristic model uses rules based on these two kinds of probabilities.</p> <h3>Base Ranker Model (0.573, 0.580)</h3> <p>Selected the first 100 candidates from the heuristic model and generated 26 features:</p> <p>10 interaction (between candidate and session) based features, of which the most significant are:</p> <ul> <li><em>i_count</em>: occurrences of the candidate in the session</li> <li><em>i_self_aids_d</em>: sum of probabilities that the candidate is present more than once in the session</li> <li><em>i_sims_aids</em>: sum of probabilities that the candidate is a next-aid in the session</li> <li>…</li> </ul> <p>10 session-based features:</p> <ul> <li><em>s_type_mean</em>: mean of clicks, carts and buys (see <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/379631#2108619" target="_blank">link</a>)</li> <li>…</li> </ul> <p>6 candidate-based features:</p> <ul> <li><em>a_self_buy_total</em>: occurrences of orders for the candidate in all sessions</li> <li>…</li> </ul> <h3>Feature engineering 1 (0.581, 0.588)</h3> <p>Added custom features for 
last 5 aids, carts, orders of each session (for a total of 81 features):</p> <ul> <li><em>i_aids_count_last_aid</em>: value of the covisitation matrix (session's last aid, candidate)</li> <li><em>i_aids_total_last_aid</em>: occurrence of the last-aid in all sessions</li> <li><em>i_aids_count_last_aid/i_aids_total_last_aid</em>: probability that the candidate follows the last-aid in all sessions</li> <li>… </li> </ul> <h3>Feature engineering 2 (0.583, 0.591)</h3> <p>Generated covisitation matrices based on the last two weeks (validation + test) and added the corresponding features (137 features)</p> <h3>Stacking (0.585, 0.592)</h3> <p>Selected the best 50 candidates from the previous best model and added cross-stacked predictions from the previous lgb models.<br> Added some interaction features based on a word2vec model and used xgb as the stacked model. </p>
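<p>As referenced in the candidates-selection section above, here is a minimal Polars sketch of the covisitation probability <strong>P(aid, next-aid)</strong>. It assumes a recent Polars API and hypothetical column names (session, aid, ts), and uses a small, arbitrary time window for forming pairs.</p> <pre><code>import polars as pl

events = pl.DataFrame({
    "session": [1, 1, 1, 2, 2],
    "aid":     [10, 20, 30, 10, 30],
    "ts":      [0, 5, 9, 0, 3],
})

# forward pairs within the same session inside a small time window
pairs = (
    events.join(events, on="session", suffix="_next")
          .filter((pl.col("ts_next") &gt; pl.col("ts")) &amp;
                  (pl.col("ts_next") - pl.col("ts") &lt;= 60))
)

pair_cnt = pairs.group_by(["aid", "aid_next"]).agg(
    pl.col("session").count().alias("pair_cnt")
)
aid_cnt = events.group_by("aid").agg(
    pl.col("session").n_unique().alias("aid_cnt")
)

# P(aid, next-aid) ~ (times next-aid follows aid) / (sessions containing aid)
covisit_prob = (
    pair_cnt.join(aid_cnt, on="aid")
            .with_columns((pl.col("pair_cnt") / pl.col("aid_cnt")).alias("p_next"))
            .sort(["aid", "p_next"], descending=[False, True])
)
</code></pre>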
OTTO – Multi-Objective Recommender System
3rd Place - Using Only Rules Achieves LB 0.590!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Radek. Thanks for all your sharing during the competition!</p> <p>I think all item embeddings would help. The trick was to first train some NN such as LSTM, Word2Vec, Matrix Factorization etc.</p> <p>Then we take the item embeddings and for each row in our candidate dataframe, we compute the cosine similarity between user history last item and candidate. Then user history 2nd to last, 3rd, 4th 5th. Then user history first item, 2nd, 3rd, 4th, 5th. Then user history last buy, 2nd to last buy, 3rd, 4th, 5th. Then finally we take aggregate means and standard deviations of groups of these values. These features were very helpful.</p> <p>yes, sometimes memory usage does not decrease. In this situation, i save dataframe to disk as parquet. Then i shut down notebook and start a new notebook and read in the parquet. That will clear memory.</p> <p>I did everything. I pushed rule-based as far as it would go. Afterward, i used word2vec and matrix factorization to create features for my GBT reranker. Both helped.</p> <p>I'm not sure what you're asking. I use <code>cupy</code> and GPU to perform the vector multiplication of user history item with candidate item embedding. It takes a few seconds to create a new column of cosine similarity. I create dozens of features then train XGB reranker. I do not worry about making too many features. My pipeline can handle 1000+ features. (Because for XGB reranker i remove all users where no candidate has a positive target).</p> <p>Thanks EeyoreLee. Congratulations on your solo Bronze medal finish. Your LB 0.593 was very close to Silver 0.594. You did great !</p> <p>Thanks Adam</p> <h1>Team G &amp; B &amp; D &amp; T</h1> <p>It was a pleasure to work with <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> <a href="https://www.kaggle.com/benediktschifferer" target="_blank">@benediktschifferer</a> <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> . We each made individual models and then ensembled all our work together by adding the ranks of each of our predictions per user target type. Below I describe my individual single model. My teammates will describe their work in their own discussion posts. You can read about Theo's LB 0.6029 model <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/382975" target="_blank">here</a>! You can read about Benny's LB 0.601 model <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/386497" target="_blank">here</a>!</p> <p>Most of my solution was made public in my notebook <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">here</a> and discussion <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/370210" target="_blank">here</a> during the competition. There were only 3 significant ideas missing from my public work. Let's discuss how to boost my public work to LB 0.601 single model. 
(UPDATE: All code published to GitHub <a href="https://github.com/cdeotte/Kaggle-OTTO-Comp" target="_blank">here</a>)</p> <h1>How To Score LB 0.601 Single Model</h1> <p>It was explained <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/364721" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/370210" target="_blank">here</a> that the best approach was a "candidate rerank" model. My public notebook <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">here</a> shows how to achieve LB 0.575. And my discussion post <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/370210" target="_blank">here</a> explains how to improve my public notebook by adding a GBT ranker model. Below are the three missing pieces labeled (1), (2), and (3) to achieve LB 0.601.</p> <h1>(1) Choosing Candidates</h1> <p>To build a "candidate rerank" model, we need candidates. Where do we get candidates? The easiest way is to get them from my public notebook. In the <code>def suggest_buys(df)</code> and <code>def suggest_clicks(df)</code> functions, the last lines are</p> <pre><code>top_aids = [aid for aid, ct in aids_counter.most_common(20)]
return top_aids
</code></pre> <p>To generate 50 candidates, we change 20 to 50 as in</p> <pre><code>top_aids = [aid for aid, ct in aids_counter.most_common(50)]
return top_aids
</code></pre> <h1>(2) Choosing Interaction Features</h1> <p>We now have 50 candidates per user from our public notebook above. Next we need to make features for our reranker model. The strongest and easiest way to make features is to extract our co-visit counts by changing the last two lines of my public notebook to the following:</p> <pre><code>top_counts = [ct for aid, ct in aids_counter.most_common(50)]
return top_counts
</code></pre> <p>When we merge these counts onto our candidates, we now have an interaction feature for each user-item pair. To make more interaction features, we can extract the counts for each co-visit matrix individually. For example, imagine that we have 3 co-visit matrices named covisit2, covisit3, and covisit4. Then one by one, we extract each covisit's counts:</p> <pre><code>EXTRACT = ['covisit2']
aids_counter = Counter()
if 'covisit2' in EXTRACT:
    aids = list(itertools.chain(*[covisit2[aid] for aid in unique_aids if aid in covisit2]))
    for a in aids: aids_counter[a] += 1
if 'covisit3' in EXTRACT:
    aids = list(itertools.chain(*[covisit3[aid] for aid in unique_aids if aid in covisit3]))
    for a in aids: aids_counter[a] += 1
if 'covisit4' in EXTRACT:
    aids = list(itertools.chain(*[covisit4[aid] for aid in unique_aids if aid in covisit4]))
    for a in aids: aids_counter[a] += 1
top_counts = [ct for aid, ct in aids_counter.most_common(50)]
return top_counts
</code></pre> <h1>Reranker Boost CV and LB +0.011</h1> <p>First we use the technique above to generate candidates and the technique above to extract covisit counts. Next we add some simple item and user features, like counting the number of times an item is clicked, carted or ordered. When we apply the XGB reranker described <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/370210" target="_blank">here</a>, our CV and LB will boost by <code>+0.011</code>. For example, the public notebook will boost to LB 0.586.</p> <h1>Using Rules Only (without reranker) Scores LB 0.590</h1> <p>To score over LB 0.600, we create more co-visit matrices to boost the original notebook's LB score. 
My public notebook uses 3 covisit matrices and achieves LB 0.575. If we make 17 more covisit matrices, we can boost my public "rules only" notebook to LB 0.590 (new notebook published <a href="https://www.kaggle.com/cdeotte/rules-only-model-achieves-lb-590" target="_blank">here</a>). Then when we extract the covisit counts explained above, the XGB reranker will boost +0.011 and achieve LB 0.601</p> <h1>(3) Twenty Covisit Matrices</h1> <p>Below are a description of my 20 covisit matrices. These covisit matrices are the secret sauce enabling my single XGB ranker model to achieve LB 0.601. The following variable names are from my new LB 0.590 notebook posted <a href="https://www.kaggle.com/cdeotte/rules-only-model-achieves-lb-590" target="_blank">here</a>. (Example code showing how to compute covisit matrices on GPU is <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">here</a>)</p> <ul> <li><strong>top_20</strong> - this covisit matrix is in my original notebook</li> <li><strong>top_20b</strong> - all covisit pair counts are consecutive items. See code below.<br> <code>df['k'] = np.arange(len(df))</code><br> <code>df = df.merge(df, on=['session'])</code><br> <code>df = df.loc[ (df.k_y - df.k_x).abs()==1 ]</code></li> <li><strong>top_20c</strong> - all covisit pair counts are <code>(df.k_y - df.k_x).abs()&lt;=2</code></li> <li><strong>top_20d</strong> - all covisit pairs are carts/orders and forward at most 3 consecutive<br> <code>df = df.loc[df['type'].isin(['carts','orders'])]</code><br> <code>df = df.merge(df, on=['session'])</code><br> <code>df = df.loc[ (df.k_y - df.k_x &gt; 0) &amp; (df.k_y - df.k_x &lt;= 3) ]</code></li> <li><strong>top_20e</strong> - all covisit pairs are <code>(df.k_y - df.k_x).abs()&lt;=3</code> and have time decay with<br> <code>df['wgt'] = (1/2)**( (df.ts_x - df.ts_y).abs() /60/60)</code></li> <li><strong>top_20f</strong> - same as above but <code>(df.k_y - df.k_x).abs()&lt;=6</code></li> <li><strong>top_20_orders</strong> - this covisit matrix is in my original notebook</li> <li><strong>top_20_buy2buy</strong> - this covisit matrix is in my original notebook</li> <li><strong>top_20_buy2buy2</strong> - use most recent 3 weeks data and only carts/orders. Apply time decay shown above.</li> <li><strong>top_20_test</strong> - use most recent 3 weeks data. Only forward in time pairs. Use clicks/carts/orders to carts/orders. Add time decay<br> <code>df = df.loc[df.ts &gt;= LAST_3_WEEKS ]</code><br> <code>df2 = df.loc[df['type'].isin(['carts','orders'])]</code><br> <code>df = df.merge(df2, on=['session'])</code><br> <code>df = df.loc[ df.ts_y - df.ts_x &gt; 0 ]</code><br> <code>df['wgt'] = (1/2)**( (df.ts_x - df.ts_y).abs() /60/60)</code></li> <li><strong>top_20_test2</strong> - use most recent 2 weeks data with time decay.</li> <li><strong>top_20_buy</strong> - Limit to forward 2 hours. Use clicks/carts/orders to carts/orders. Apply time decay.</li> <li><strong>top_20_new</strong> - Find cold start users in train. Pairs using only their first history item. Use clicks/carts/orders to carts/orders.<br> <code>df['x'] = df.groupby('session').ts.transform('min')</code><br> <code>df = df.loc[df.x &gt; train.ts.min() + TWO_WEEKS ]</code><br> <code>df['n'] = df.groupby('session').cumcount()</code><br> <code>df2 = df.loc[df['n']==0]</code><br> <code>df3 = df.loc[df['type'].isin(['carts','orders'])]</code><br> <code>df = df2.merge(df3, on='session')</code></li> <li><strong>top_20_new2</strong> - Find cold start users in train. 
Pairs using only their first history item. Use clicks/carts/orders to clicks/carts/orders. Apply time decay.</li> <li><strong>top_40_day</strong> - Use only last week data. Forward in time. Clicks/carts/orders to carts/orders. Time decay</li> <li><strong>top_40_day2</strong> - Use only last week data. Time decay</li> <li><strong>top_40_less</strong> - Train users with less than 6 history and test users with less than 3<br> <code>df = df.loc[df[COUNT]&lt;THRESHOLD]</code><br> <code>df = df.merge(df, on='session')</code></li> <li><strong>top_40_more</strong> - Train users with more than 6 history and test users with more than 3</li> <li><strong>top_40_less2</strong> - Use item pairs with first item before 2pm. Clicks/carts/orders to carts/orders. Time decay<br> <code>df2 = df.loc[df[HOUR]&lt;14]</code><br> <code>df = df2.merge(df, on='session')</code></li> <li><strong>top_40_more2</strong> - Use item pairs with first item after 2pm. Clicks/carts/orders to carts/orders. Time decay</li> </ul> <h1>Fast Covisit Experiments With RAPIDS cuDF</h1> <p>To find the above 20 covisit matrices, I computed hundreds of covisit matrices and then computed the local CV score. To make covisit matrices quickly, I used RAPIDS cuDF to make each covisit matrix on GPU in under 1 minute. Code to make a covisit matrix is shown <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">here</a> (a condensed pandas sketch of one time-decayed matrix is also given at the end of this post). Matrices were made using Nvidia 4xV100 32GB GPUs.</p> <h1>UPDATE: GitHub Code!</h1> <p>I published all 261 jupyter notebooks in my GitHub <a href="https://github.com/cdeotte/Kaggle-OTTO-Comp" target="_blank">here</a>. Specifically, we can review all the code used to generate co-visitation matrices. And we can see the pipeline for building, training, and inferring with a GBT reranker model. Our team's final Kaggle inference submit notebook is <a href="https://www.kaggle.com/code/cdeotte/3rd-place-team-g-b-d-t-0-604" target="_blank">here</a>. My notebook to generate 100 candidates for the reranker is <a href="https://www.kaggle.com/cdeotte/rules-only-model-achieves-lb-590" target="_blank">here</a>. By itself it scores 49th place LB 0.590!</p> <p>Group is not needed during inference. The GBT will rank all rows during inference. So when we just look at each user during inference, the ranking will be correct.</p> <p>(Group is only needed during training)</p> <p>Thanks Andrew!</p> <p>This was experiment 116 and experiment 220. I made hundreds of different co-visit matrices and computed validation score to see which were the best.</p> <p>Thanks ChickenBoy,</p> <ol> <li><p>top_20_new is clicks/carts/orders to carts/orders without time decay. And top_20_new2 is clicks/carts/orders to clicks/carts/orders with time decay.</p></li> <li><p>Cold start is <code>user_first_action - dataset_start_time &gt; 2 weeks</code></p> <p>df['x'] = df.groupby('session').ts.transform('min')<br> df = df.loc[df.x &gt; train.ts.min() + TWO_WEEKS ]<br> df['n'] = df.groupby('session').cumcount()<br> df2 = df.loc[df['n']==0]<br> df3 = df.loc[df['type'].isin(['carts','orders'])]<br> df = df2.merge(df3, on='session')</p></li> <li><p>In top_40_more/less, we filter users</p> <p>df = df.loc[df[COUNT]&lt;THRESHOLD] <br> df = df.merge(df, on='session')</p></li> </ol> <p>And in top_40_more2/less2, we filter first item. 
</p> <pre><code>df2 = df.loc[df[HOUR]&lt;14] df = df2.merge(df, on='session') </code></pre> <p>UPDATE: I published all 261 jupyter notebooks to run my full single model LB 0.601 solution at GitHub here: <a href="https://github.com/cdeotte/Kaggle-OTTO-Comp" target="_blank">https://github.com/cdeotte/Kaggle-OTTO-Comp</a></p>
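<p>As mentioned above, here is a condensed pandas sketch of one time-decayed co-visitation matrix in the style of <strong>top_20e</strong> (consecutive-position window plus a half-life time decay). It is a simplified illustration with assumed column names (session, aid, ts in seconds), not the exact notebook code; RAPIDS cuDF accelerates this same merge/groupby pattern on GPU.</p> <pre><code>import pandas as pd

def covisit_time_decay(df, max_gap=3, topk=20):
    # df columns assumed: session, aid, ts (seconds)
    df = df.sort_values(["session", "ts"]).copy()
    df["k"] = df.groupby("session").cumcount()
    pairs = df.merge(df, on="session", suffixes=("_x", "_y"))
    pairs = pairs[(pairs.aid_x != pairs.aid_y) &amp;
                  ((pairs.k_y - pairs.k_x).abs() &lt;= max_gap)]
    # half-life style time decay in hours, as in top_20e above
    pairs["wgt"] = (1 / 2) ** ((pairs.ts_x - pairs.ts_y).abs() / 60 / 60)
    mat = pairs.groupby(["aid_x", "aid_y"]).wgt.sum().reset_index()
    # keep the topk most related aids per source aid
    return (mat.sort_values(["aid_x", "wgt"], ascending=[True, False])
               .groupby("aid_x").head(topk))
</code></pre>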
OTTO – Multi-Objective Recommender System
3rd place(imaginary) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>PS: This solution is actually equivalent to 3rd place, but it was ban because a cheater was on the team. So please enjoy it as an "imaginary" 3rd place solution.</strong></p> <p>First of all, thank you for organizing a great competition. I would also like to thank everyone who participated in the competition with us and the four of us who worked together as a team.<br> About one week before the end of the competition, we formed a team. However, just on the next day, we were warned one of our teammate might have some problem(<a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/381321" target="_blank">link</a>). And since then, we other four were working hard together to achieve the best possible result. <br> We can proudly tell you that this 3rd place score is definitely a result with no outside influence. (Of course, as we have already said, we leave it up to the kaggle team to decide how our results will ultimately be handled.)<br> In this discussion, I would be happy to share with you the solutions of the four of us.<br> Of course we can share the code as well, but we hope we can open it after the rankings are finalized.<br> (The code is not yet public at this time, so it would be difficult for cheaters to imitate our accuracy.)</p> <p>We write the solution each one below.</p> <hr> <h2>Alvor part</h2> <h3>Candidates selection:</h3> <ul> <li>all items from the session's history</li> <li>items from co-visitation matrices. I did many experiments (window size, weights, action types etc.) and finally end up with 5 best matrices (in terms of maximum possible recall when using each matrix). 
Each of these 5 matrices I use both for candidate selection and for feature engineering (sum/avg/max candidate score, etc.)</li> <li>I did not have a fixed number of candidates per session</li> <li>median number of candidates per session: 67</li> <li>but there were sessions with a very high number of candidates, so the average number of candidates per session is 121.46</li> <li>my recalls per target:<br> clicks recall: 0.631531<br> carts recall: 0.540274<br> orders recall: 0.729626<br> Total: 0.6630<br> What did not work for me:</li> <li>add candidates from Word2Vec embeddings (K nearest neighbours)</li> <li>add candidates from Matrix Factorization embeddings (K nearest neighbours)</li> <li>simply increase the number of candidates in my methods</li> <li>all these attempts increased the maximum possible recall, but decreased the validation score</li> </ul> <h3>Feature engineering:</h3> <ul> <li>Sessions features</li> <li>Items features</li> <li>Items-2-Sessions interactions features</li> <li>Co-Visitation matrices features (sum/avg/max of candidate score, etc.)</li> <li>Word2Vec features: euclidean and angular distance between the candidate's embedding and the session's last item's embedding</li> <li>Matrix Factorization features: euclidean and angular distance between the candidate's embedding and the session's last item's embedding</li> <li>Feature selection based on the feature importance of my experimental models<br> Some of my best features:</li> <li>index of the last interaction with the candidate in the session</li> <li>relative number of interactions with the given candidate among all the session's interactions</li> <li>timestamp differences</li> <li>angular distance between the candidate's w2v embedding and the session's last item's w2v embedding</li> <li>number of co-visitation matrices containing the given item as a candidate<br> What did not work for me:</li> <li>Word2Vec and Matrix Factorization features: euclidean and angular distance between the candidate's embedding and the session's penultimate item's embedding</li> <li>Word2Vec and Matrix Factorization features: average/min euclidean and angular distance between the candidate's embedding and all the session's items' embeddings</li> <li>Word2Vec and Matrix Factorization features: euclidean and angular distance between the candidate's embedding and the session's last cart/order item's embedding (there was a minor improvement, but I abandoned these features)</li> </ul> <h3>Model</h3> <ul> <li>Group 3-Fold (by sessions) LightGBM Classifier (binary_logloss)</li> <li>One model per target (clicks/carts/orders)</li> <li>No negative downsampling (during almost the entire competition), so it required a lot of memory and time</li> <li>1/5 negative downsampling for carts/orders and 1/15 for clicks (last week of competition)</li> <li>But I use another trick to reduce memory/time usage: when I train a model for carts/orders, I use only sessions for which I have at least one ground-truth candidate among my candidates, because the main goal of the model is to distinguish between good candidates and bad candidates. 
So I think that sessions with only bad candidates don't help my model.<br> (This could be for 2 reasons:<br> 1) the session has no ground-truth cart/order at all<br> 2) the session has a ground-truth cart/order, but I didn't select it among my candidates)</li> <li>'dart' boosting for the carts model, 'gbdt' boosting for the clicks and orders models.</li> <li>add the "carts" prediction as a feature for the "orders" model (the best feature)<br> What did not work for me:</li> <li>Add the second click to the positive samples as well as the first click</li> <li>add the "clicks" prediction as a feature for the "orders" model</li> <li>multiple attempts to build a second model to re-arrange only the top-X candidates from the first model</li> </ul> <hr> <h2>Makotu part</h2> <p>My pipeline consists mainly of preprocess / make co-matrix / candidate / make feature / modeling parts.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F397595%2Fcff84b84bab47c015ef64ab84e20de5d%2Fmakotu_solution_v2.png?generation=1675314772195222&amp;alt=media" alt=""></p> <h3>Data</h3> <ul> <li>I used Radek's train/validation/test data</li> </ul> <h3>Preprocess</h3> <ul> <li>make aid vectors using w2v</li> <li>make the BPR (Bayesian Personalized Ranking) feature (recommended by Sirius)</li> </ul> <h3>Make-co-matrix and candidate</h3> <ul> <li>I made many patterns of co-matrices. Basically, a co-matrix is created in the form of probabilities. For example, the following patterns are used to quantify the relationship between aidA and aidB:<ul> <li>aid B count within the same session / all aid A count</li> <li>aid B count within the same session (after the aid A click) / all aid A count</li> <li>aid B count within the same session / all aid A count (but if the same user has the same pair of aidA and aidB, make them unique and then aggregate)</li> <li>aid B count within the same session / all aid A count (time weighted)</li> <li>aid B count within the same session and within 1 hour / all aid A count,<br> and so on.</li></ul></li> <li>Select candidates based on the session's last action aid / top action aid / within 1 hour action aid / within 1 day action aid</li> <li>candidate recall<ul> <li>order: 0.732 (@190 candidates)</li> <li>cart: 0.546 (@190 candidates)</li> <li>click: 0.682 (@140 candidates)</li></ul></li> </ul> <h3>Make feature</h3> <ul> <li>User / aid / user&amp;aid interaction features.</li> <li>features that worked<ul> <li>last action aid</li> <li>distance from the last aid's w2v to the candidate aid's w2v</li> <li>BPR feature (recommended by Sirius)</li> <li>co-matrix features (e.g. probability of clicking on the candidate aid after the last aid)</li> <li>etc</li></ul></li> </ul> <h3>Modeling</h3> <ul> <li>Catboost (loss: PairLogitPairwise)</li> <li>Group KFold (group: session)</li> <li>Add oof features in addition to the above features<ul> <li>the click model does not use oof.<br> However, two patterns of click models were created: in addition to predicting the next clicked id, one also predicts all subsequent clicks of the id, and these are then used as oof.</li> <li>the cart model adds the above 2 models' oof.</li> <li>the order model adds the 2 click models' oofs and the cart model's oof</li></ul></li> <li>CV click: 0.5632 cart: 0.4429 order: 0.6699</li> <li>LB: 0.602</li> </ul> <h3>Stacking</h3> <p>I was not involved in the stacking of results with the other members, as all the stacking was carried out by shimacos with great skill. I hope you will wait for his addition. 
</p> <h3>(add) code of my part</h3> <p><a href="https://github.com/makotu1208/Otto-kaggle-3rd-solution-makotupart" target="_blank">https://github.com/makotu1208/Otto-kaggle-3rd-solution-makotupart</a></p> <hr> <h2>Shimacos part</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F397595%2F9602e046e48f6d2028d0a05ccf898720%2Fsolution_shimacos.png?generation=1675249326142169&amp;alt=media" alt=""></p> <h3>data</h3> <ul> <li>First, I used my own original truncate dataset.<ul> <li>train: 2022-08-14 ~ 2022-08-21 valid 2022-08-21 ~ 2022-08-28</li> <li>train: 2022-08-21 ~ 2022-08-28 test: 2022-08-28 ~ 2022-09-04 (Training iteration is the calculated one in the previous step)</li></ul></li> <li>And Roughly 10 or so models were made and stacking. (Single 0.592 -&gt; 0.595)</li> <li>However since everyone except me was using the radek dataset, I shifted to using that one after team merge. This was to maximize ensemble results.</li> </ul> <h3>preprocess</h3> <ul> <li>made aid vector<ul> <li>w2v with gensim</li> <li>n2v with pytorch_geometric</li></ul></li> <li>made session, aid vector<ul> <li>bpr with implicit</li></ul></li> </ul> <h3>Candidates</h3> <ul> <li>As shown in the overview figure.</li> <li>I created candidates and features by BigQuery. For example, the covisit candidates can be calculated in less than one minute for both train and test.</li> <li>Recall<ul> <li>average candidate count: 188.69</li> <li>click: 0.6799</li> <li>cart: 0.5555</li> <li>order: 0.7359</li> <li>Overall: 0.6762</li></ul></li> </ul> <h3>Feature</h3> <ul> <li>As shown in the overview figure.</li> <li>I used the overall average for smoothing when calculating cvr.</li> <li>High importance feature (CatBoost LossFunctionChange importance)<ul> <li>difference between the last time the aid was actioned across the overall data and the last time of the session.</li> <li>candidate feature by next visit covisit</li> <li>I think it was because in the end it was important to hit the next aid after the last action.</li> <li>difference feature of w2v and bpr</li> <li>covisit cvr feature</li></ul></li> </ul> <h3>Model</h3> <ul> <li>LightGBM (lambdarank) and CatBoost (PairLogit)</li> <li>I used CatBoost the day before the last day and found an improvement of about 0.002-0.003 over LightGBM. I should have used it earlier…<ul> <li>I got 0.596 and 0.598 at private LB finally.</li></ul></li> </ul> <h3>Stacking</h3> <ul> <li>At the team merge, the scores were only 0.600 (makotu), 0.600 (sirius),0.598 (Alvor and Shimacos), but By rank blending each score as follows, we were able to produce a 0.603, which was the second at the time.<ul> <li>Use the top50 of each prediction.</li> <li>And the each prediction was simply outer joined to create a pair of user items.</li> <li>fillna with 1 / 50</li> <li>rank blending with formula <code>1/rank_a + 1/rank_b + 1/rank_c</code> and sort descending.</li></ul></li> <li>As the number of models increased, it became difficult to find the optimal weights, so stacking was performed.<ul> <li>Dataset is the same as above.</li> <li>Feature: prediction score and rank</li></ul></li> <li>This resulted in a score of 0.604 by LightGBM. (CV: 0.59624)</li> <li>The same was created in CatBoost (CV: 0.59592) and averaged.</li> <li>Finally, I interpolated predictions by aid with the highest number of actions for each type in the test period.</li> <li>Random Thought<ul> <li>Too many more models could lower the score. 
This is probably due to too many negative examples.</li> <li>Stacking models created from different candidates tended to increase the overall score. </li></ul></li> </ul> <hr> <h2>Sirius part</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F397595%2Ffa7ab3c2005f4af8929e6ed5e7b6f42b%2Fsolution_sirius.png?generation=1675249468500843&amp;alt=media" alt=""></p> <h3>Recalling</h3> <ul> <li>All the recalling methods are listed in the figure.</li> <li>Avg candidate count for each user: 224.74</li> <li>Overall recall: 0.677232<ul> <li>Clicks recall: 0.688493</li> <li>Carts recall: 0.557334</li> <li>Orders recall: 0.735304</li></ul></li> </ul> <h3>Ranking</h3> <h4>Samples</h4> <ul> <li>Click model: the generated candidates which the user clicked are labeled with 1</li> <li>Cart/Order model: the generated candidates which the user carted or ordered are labeled with 1. I merged the cart and order labels because they share similar semantics</li> <li>Downsampling of negative samples is adopted. I just keep negative samples with an amount of <code>20*len(pos_samples)</code></li> </ul> <h4>Features</h4> <ul> <li>All the features are listed in the figure.</li> <li>The feature with the highest importance in my model is BPR①, implemented with the Implicit package. (<a href="https://www.kaggle.com/code/sirius81/bpr-feature/notebook" target="_blank">Notebook</a> will be public after everything is verified.)</li> <li>Bi-gram② is an NLP term, meaning here the successive visiting of aid1, aid2. It is similar to a co-visit matrix, but it only counts the immediately next action. And I then normalize the count by the hotness of the item (a toy sketch is shown at the end of this post). This group of features gave the biggest boost among all my experiments, ~0.006. (<a href="https://www.kaggle.com/code/sirius81/otto-bigram-feature/notebook" target="_blank">Notebook</a> will be public after everything is verified.)</li> <li>OOF of the other actions③ helped improve my score ~0.001.</li> <li>Features ①+③+ shimacos’s w2v improved Makotu's CV ~0.001 and features ①+② improved Alvor's CV ~0.003.</li> </ul> <h4>Models</h4> <ul> <li>Catboost is trained with Logloss. When training the Cart/Order model, I set different weights for positive samples, 5 and 10 for is_carts and is_orders respectively. Along with the strategy of merging the Cart labels and Order labels, it boosts CV ~0.002 compared with training the Cart model and Order model separately.</li> <li>The MLP is trained with the same features as catboost, plus a standard scaler to normalize the features. But it is not accurate and diverse enough to improve the blending score, so it is not included in our final submission. Besides the MLP, I tried to introduce a GRU and pretrained item embeddings into the NN model, but all failed.</li> </ul> <h3>Ensemble</h3> <p>That’s <a href="https://www.kaggle.com/shimacos" target="_blank">@shimacos</a>’s credit. Very impressive work! </p>
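<p>As mentioned in the features list above, here is a toy pandas sketch of the bi-gram feature (an illustration, not the original notebook): count only the immediately next action per session and normalize by item hotness. The column names are hypothetical, and which item's hotness is used for the normalization is an assumption (the first aid of the pair is used here).</p> <pre><code>import pandas as pd

events = pd.DataFrame({
    "session": [1, 1, 1, 2, 2],
    "ts":      [0, 3, 7, 1, 4],
    "aid":     [10, 20, 10, 10, 20],
}).sort_values(["session", "ts"])

# only the immediately next action counts as a bi-gram
events["aid_next"] = events.groupby("session")["aid"].shift(-1)
bigram = (events.dropna(subset=["aid_next"])
                .groupby(["aid", "aid_next"]).size()
                .rename("bigram_cnt").reset_index())

# normalize by item "hotness" (overall popularity of the first aid, assumed)
hotness = events.groupby("aid").size().rename("hotness").reset_index()
bigram = bigram.merge(hotness, on="aid")
bigram["bigram_norm"] = bigram["bigram_cnt"] / bigram["hotness"]
</code></pre>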
OTTO – Multi-Objective Recommender System
5th place (yet) solution (Carno & 2U & Jiahong's part)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As you can see, the whole pipeline of our solution is complex. Probably I will only release my recall part (co-visitation based).</p> <p>Like other competitors who benefited from <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> and <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, we thank them for their great devotion to this competition. We'd also like to thank the organizers and Kaggle, and I hope what happened in this competition will end up with a satisfactory ending.</p> <h2>TL; DR</h2> <p>Our solution is a combination of our two teams' previous solutions. Before the team merge, our train-valid splits, recall methods, feature sets and rerank models were all different. We not only ensemble our independent submission files, but also exchange features for further improvement.</p> <h2>Ensemble</h2> <p>Here, we use lower case letters to denote a submission version <strong>(a, b, …)</strong>, which can be the 5-fold ensemble of one reranker, or an ensemble of two or more submissions. <strong>S1</strong> and <strong>S2</strong> are the public validation data split and the privately regenerated validation data split. <strong>R1</strong> and <strong>R2</strong> are two different recall methods. <strong>F1</strong> and <strong>F2</strong> are two different feature sets, and <strong>F'1</strong> and <strong>F'2</strong> are the important feature sub-sets. <strong>XGB</strong> and <strong>CBT</strong> denote an XGBoost binary classifier and a CatBoost ranker. <strong>"+"</strong> denotes score ensemble, which means we average the raw output from multiple models to rerank candidates. <strong>"&amp;"</strong> denotes index ensemble, which means we assign an index-score to the first 20 candidates from 1.00 to 0.05, with a 0.05 step, and then use the summation of index-scores to rerank candidates (a short sketch is shown at the end of this post). 
<strong>"*"</strong> denotes weight during ensemble.<br> Our final solution <strong><em>i</em></strong> whould be:<br> <strong><em>c</em></strong> = a * 0.45 &amp; b * 0.575 <br> <strong><em>d</em></strong> = d1 * 0.5 + d2 * 0.5 <br> <strong><em>e</em></strong> = e1 * 0.5 + e2 * 0.5 <br> <strong><em>g</em></strong> = c * 0.5 &amp; d * 0.4 &amp; e * 0.6 <br> <strong><em>h</em></strong> = (c * 0.5 &amp; d * 0.5) &amp; f * 0.5 <br> <strong><em>i</em></strong> = g * 0.6 &amp; h * 0.5</p> <table> <thead> <tr> <th></th> <th>data splits</th> <th>recall methods</th> <th>feature set</th> <th>model</th> <th>importance feature set</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>S1</td> <td>R1</td> <td>F1</td> <td>CBT</td> <td>F1', F1''</td> </tr> <tr> <td>b</td> <td>S2</td> <td>R2</td> <td>F2</td> <td>XGB</td> <td>F2'</td> </tr> <tr> <td>d1, d2</td> <td>S2</td> <td>R1</td> <td>F1+F2'</td> <td>XBG, CBT</td> <td></td> </tr> <tr> <td>e1, e2</td> <td>S2</td> <td>R2</td> <td>F2+F1'</td> <td>XBG, CBT</td> <td></td> </tr> <tr> <td>f</td> <td>S2</td> <td>R2</td> <td>F2+F1''</td> <td>CBT</td> <td></td> </tr> </tbody> </table> <h2>Recall methods</h2> <h3>R1</h3> <p>This recall methods is developed based on public co-visitation matrix notebook (4 matrix: clicks, carts, orders and buy2buy), and optimized by numba. The detailed numbers will be released with code.</p> <h3>R2</h3> <p>PLACEHOLDER</p> <h2>Feature set</h2> <h3>F1</h3> <p>F1 includes statistical features and model trained features. Same as most teams, we use sum, max, min and mean of interaction history and co-visitation score to summarize the sessions, the items and the interactions. The importances of most statistical features are not significant.<br> In trained features, we used BPR, ALS and LMF from <code>implicit</code> package, W2V from <code>gensim</code> package and <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/382783#2124459" target="_blank">SAS</a>. In all algorithms mentioned here, we can get the embedding of items, so we use the inner product of candidate embedding and session latest average embedding as interaction features. In BPR, ALS, and LMF, we can get the embedding of both sessions and items, so we additionally use the inner product of session embedding and candidate embedding as interaction features. </p> <h3>F2</h3> <p>PLACEHOLDER</p> <h3>Importance</h3> <p>We use feature importance from reranker model to decide which features to exchange.</p> <p>Its a good idea. 😄 I'd like to say all my teammates are unique and irreplaceable. All of us work hard together to get such a high rank in this competition.</p>
OTTO – Multi-Objective Recommender System
6th place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had 0.648 with 80 cands; with 120, I don't remember, sorry :D. The most important way to boost the score was finding the association between the generated candidates and the already seen candidates in the sessions where num candidates &lt; 20. So the key for me was high quality feature generation for the ranker.</p> <p>Thanks bro. Only if Kaggle gives me a way to block you 😂</p> <p>Congratulations on a great finish too :D!</p> <p>Firstly, congratulations on the great result. Yes, in fact, if you think about it, the total possible combinations will be the number of possible ordered pairs of clicks, carts and orders, plus one for (any action to any action).</p> <p>Also you can add something like clicks to (carts, orders), which I did not :D</p> <p>Thanks a lot</p> <p>Firstly thanks to everyone for sharing so much on this problem. I learned a lot from all of you.</p> <p>Especially thanking (in alphabetical order)</p> <ol> <li><p>Carno: For sharing your numba pipeline. All of my candidate generation and many of my features were created using numba. I had almost never used numba before, so I learnt a lot of numba in this comp.</p></li> <li><p>Chris: For sharing so much from comp start to end. I think every competitor owes you for that. I think you answered questions about the ranking model till the last day of the competition, so nothing but respect.</p></li> <li><p>Radek: For introducing us to polars; the speed of polars while joining tables really helped speed up my experiments</p></li> <li><p>Senkin: For your 1st place solution in H &amp; M. Many of my ideas in otto were inspired by this.</p></li> </ol> <p>My best performing solution on the private LB was almost a single model (scoring almost the same as the ensemble), so I will describe the single model (public LB: 0.604 and private 0.603)</p> <p><strong>Candidates Generation (Numba) -&gt; Feature Creation (Numba, Polars) -&gt; Ranking Model (Lightgbm) -&gt; Inference (Treelite)</strong></p> <h4>Candidates Generation</h4> <p>I think having a strong candidate generation method helped me a lot, so here's how I did it.</p> <p><strong>Number of Candidates Per Session:</strong> I generated 80 candidates for most of the comp, and then jumped it up to 120 cands in the last week for some score boost (of around 0.001). I had a really decent max recall of 0.648 (on the validation set) for 80 candidates. I also tried 200 candidates in one experiment, but that did not help with the score.</p> <p>Also, if I take the first 20 candidates from my candidate generation model, my score on the LB would be 0.585.</p> <p>For candidate generation I used something similar to covisit matrices: I divided the user actions in a session, for any given 2 aids, into various categories like</p> <p>a. Any action to Any action<br> b. Click to cart<br> c. Cart to order<br> d. Order to order,<br> … etc</p> <p>To keep memory usage low, I chose only the top (k * 100) candidates. K here is the number of candidates I wanted to generate.</p> <p>Also, I normalized the weight by the frequency of the first item. 
So think of it as: out of 100 times that milk was purchased, how many times were eggs purchased with it.</p> <p>The weight in the matrices was normalized by the number of items visited between the 2 aids we are talking about.</p> <p>Let's say we have 5 aids: aid1, aid2, aid3, aid4, aid5.</p> <p>Then the weight of (aid1, aid5) will be (5-1)/(frequency of aid1).</p> <p>Also, the weight of (aid5, aid1) will be the weight of (aid1, aid5) / 2 (just to capture that the purchase of aid5 was driven by the purchase of aid1 and not the other way round)</p> <p>At inference time, to decide which top k candidates to take, I used Optuna, treating things like the weight of each covisit matrix, the weight of the recency of the item, the normalized overall frequency of the item, etc. as parameters.</p> <h4>Feature Generation</h4> <ul> <li><p>Basic features like the frequency of the item, clicks-to-carts ratio, etc., and the recency of the item visited (this helps a lot if the number of candidates in a session is more than 20).</p></li> <li><p>Association of a generated candidate to any already seen aid in the session. This could be created by using the covisit matrix weights. Going really deep into such features helped me boost my score a lot. The idea is that the covisit matrix could be created in different ways to establish the relationship between 2 items, for example:</p> <p>a. Take only the average distance (number of aids between) between 2 aids.<br> b. Distance could also be measured as the timestamp difference.<br> c. Consider only candidates in the 1st neighbourhood (immediate candidates).<br> d. Consider only relationships in the last week, etc.</p></li> </ul> <h4>Training a Ranking Model:</h4> <p>I used lightgbm, with 5% negative sampling and around 400 features, and data for the last 2 weeks.<br> Adding data for the second-to-last week boosted the score by about 0.0005.</p> <p>Some things or tricks that worked for me:</p> <ol> <li><p>Training all clicks, carts, and orders with a single model (not 3 separate models); locally this was easily seen to give around a 0.001 to 0.002 better score (data is grouped by session, not by session and type).</p></li> <li><p>Using separate labels for clicks, carts and orders while ranking, with the ranking label gain as Orders (6) -&gt; Carts (3) -&gt; Clicks (1), instead of just using 1 when the user performed an action (click, cart or order) and 0 when they did not (this boosted the score by around 0.0005; a small sketch is shown at the end of this post).</p></li> <li><p>Using the ranking of the stage 1 candidate generation model as a feature of the stage 2 model. If you think about it, the stage 1 model can score 0.585 on the LB by itself, so this ranking was an important feature of my model.</p></li> </ol> <h4>Inference:</h4> <p>Nothing much to say here, except that I used treelite for inference to reduce the inference time.</p> <p>And finally, congratulations to all the winners! It was really fun participating.</p> <p>P.S.: I wrote this in a hurry before starting my office work, so let me know if I messed up some details.</p> <p>Thank you <a href="https://www.kaggle.com/tezdhar" target="_blank">@tezdhar</a>, always my inspiration. Waiting for you to get the final gold.</p> <p>Thank you <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> </p> <p>Thanks a lot and congratulations <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> on the wonderful result, as well as for your inspiring contributions to the community.</p> <p>Yes, training a single model was much less of a headache. 
Congratulations on the back-to-back great results on recommender systems.</p> <p>Sorry for the confusion: when I say the last 2 weeks, I mean the last 2 weeks of the training data, not the test data.</p>
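<p>As a hedged illustration of the single-ranker setup with graded labels described above, here is a minimal sketch of how such a model could be set up in LightGBM; the dataframe, column names and training parameters are assumptions for illustration, not the author's exact code.</p> <pre><code>import lightgbm as lgb

# Assumed label encoding: 0 = negative, 1 = clicked, 2 = carted, 3 = ordered.
# label_gain[i] is the gain LightGBM assigns to label value i, reproducing the
# Orders(6), Carts(3), Clicks(1) weighting mentioned in the write-up.
ranker = lgb.LGBMRanker(
    objective="lambdarank",
    label_gain=[0, 1, 3, 6],
    n_estimators=1000,
    learning_rate=0.05,
)

# train_df is assumed to be sorted by session; group gives the number of rows per session
group_sizes = train_df.groupby("session").size().to_numpy()
ranker.fit(train_df[feature_cols], train_df["label"], group=group_sizes)
</code></pre> <p>With a single ranker like this, the data is grouped by session only (not by session and type), which matches the trick the author describes as being worth roughly 0.001 to 0.002.</p>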
OTTO – Multi-Objective Recommender System
6th place solution (single model LB 0.603)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>OK, I have modified the title 😄.</p> <p>Thank you for your question. I used a machine with 128 GB of RAM and a 16-core CPU.<br> I used a dict to store the similarity data.<br> There are two limitations when calculating the i2i similarity:<br> 1. Co-occurrence relationships are calculated only within N days.<br> 2. Co-occurrence relationships are calculated only for position distance &lt;= M.</p> <p>Thanks to Kaggle and OTTO for the great competition. This is my first solo gold medal and I'm very excited about it.<br> This is my overall model framework.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F921191%2F7164310586891a03ca0ff7a8a86b6422%2Fpipline_20230206230744.png?generation=1675696578724236&amp;alt=media" alt=""></p> <h1>Retrieval</h1> <p>I included three retrieval sources:</p> <ul> <li>top 150 Co-visitation Matrix by CHRIS DEOTTE <a href="url" target="_blank">https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575</a></li> <li>top 100 click-to-click bidirectional i2i similarity with pos, time, session, aid weight </li> <li>top 100 click-to-cart bidirectional i2i similarity with pos, time, session, aid weight <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F921191%2Ffffea55d055186ec905fe62678dd819e%2Fi2i_sim_20230207001335.png?generation=1675700054785631&amp;alt=media" alt=""></li> </ul> <h1>Features</h1> <ul> <li>session feats: </li> </ul> <ol> <li>the counts and frequency of user clicks/orders/carts</li> <li>user last clicks/orders/carts aid and hour</li> <li>user last behavior type</li> </ol> <ul> <li>aid feats:</li> </ul> <ol> <li>aid clicks/orders/carts counts</li> <li>aid clicks/orders/carts ratio</li> <li>aid clicks/orders/carts time</li> <li>aid behavior mean type</li> </ol> <ul> <li>session aid feats:</li> </ul> <ol> <li>user clicks/orders/carts aid counts</li> <li>user clicks/orders/carts aid time</li> <li>user behavior aid mean type and last behavior type</li> <li>abs(hots/time of user click/cart/order the aid - aid click/cart/order hots/time)</li> </ol> <ul> <li>sim feats:</li> </ul> <ol> <li>Co-visitation Matrix rank</li> <li>clicks/carts/orders to clicks/carts/orders 2 clicks/carts/orders i2i/i2i2i sim with pos, time weight.</li> <li>clicks/carts/orders to clicks/carts/orders 2 clicks/carts/orders i2i/i2i2i mean/max/min/std/last sim </li> <li>clicks/carts/orders to clicks/carts/orders aid pair sim with pos, time weight.</li> <li>clicks/carts/orders to clicks/carts/orders aid pair mean/max/min/std/last sim </li> <li>w2v embedding sim </li> </ol> <h1>Train And Validation</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F921191%2F73e54cb63b4c969f0091a271e140b481%2Ftrain_valid_20230206232247.png?generation=1675697003585741&amp;alt=media" alt=""><br> In the validation phase, I used Radek’s CV strategy.<br> For online prediction, train_v1 + train_v2 + valid was used as training data.</p> <h1>Model</h1> <p>I used a LightGBM binary classifier, learning rate 0.02, 5500 rounds.</p> <h1>Local CV And LB Score</h1> <p>Best single model local CV parts:<br> 
1. orders recall@20 is 0.6715<br> 2. carts recall@20 is 0.4433<br> 3. clicks recall@20 is 0.5561<br> My local CV is 0.6715 * 0.6 + 0.4433 * 0.3 + 0.5561 * 0.1 = 0.5915<br> LB is 0.60335</p> <h1>Ensemble</h1> <p>I didn't train a second model; I used a previously submitted version of the model for a probability blend. This gave me the final score of 0.60341.</p>
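<p>To make the two i2i limitations mentioned in the reply above more concrete, here is a hedged sketch of a position-limited co-occurrence similarity; the window size, the weighting and the <code>session_df</code> layout (one row per session with a list of aids) are assumptions, not the author's code.</p> <pre><code>from collections import defaultdict, Counter

MAX_POS_DIST = 5   # assumed M: only pairs within 5 positions of each other

# session_df is assumed to be pre-filtered to the last N days (the first limitation)
i2i = defaultdict(Counter)
for aids in session_df["aid"]:
    for i, a1 in enumerate(aids):
        lo, hi = max(0, i - MAX_POS_DIST), min(len(aids), i + MAX_POS_DIST + 1)
        for j in range(lo, hi):
            a2 = aids[j]
            if a1 == a2:
                continue
            # bidirectional, position-weighted: closer events contribute more
            i2i[a1][a2] += 1.0 / (abs(i - j) + 1)
</code></pre>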
OTTO – Multi-Objective Recommender System
70-th Place Solution : Everything done on Kaggle
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This challenge was fun and I hope everyone enjoyed it.<br> In this post I’ll try to explain the approach I used to reach this position.</p> <h1>The start</h1> <p>When I entered this competition I saw that in the first days there were already really good scores on the leaderboard and many co-visitation based notebooks.<br> I started by working on those notebooks and published this <a href="https://www.kaggle.com/code/pietromaldini1/multiple-clicks-vs-latest-items" target="_blank">notebook</a> where I presented some ideas to improve the logic used by many participants. At the beginning there were many interactions and many ideas flowing between participants. </p> <h1>What I did</h1> <p>I worked on public notebooks and tried to learn the approaches of many participants to improve my knowledge, and I used this challenge to get used to Weights &amp; Biases, which I used to tune the parameters of many models.</p> <h1>My solution</h1> <p>My solution is divided into 2 parts: candidate generation and re-ranking.</p> <h1>Candidate generation</h1> <p>My main candidate selectors are models shared by other participants: I used <a href="https://www.kaggle.com/code/carnozhao/otto-fast-cpu-end-to-end-pipeline" target="_blank">co-visitation</a>, <a href="https://www.kaggle.com/code/cpmpml/matrix-factorization-with-gpu" target="_blank">MF</a>, <a href="https://www.kaggle.com/code/radek1/word2vec-how-to-training-and-submission" target="_blank">W2V</a> and SRGNN using the RecBole library based on this <a href="https://www.kaggle.com/code/yamsam/recbole-gru4rec-sample-code" target="_blank">notebook</a>.<br> Other candidate selectors I used are different weightings of the session history, since in this competition many users interact with items they already interacted with in the past.</p> <h2>Re-ranking</h2> <p>For re-ranking I used LGBM Classifier models, one for each interaction type.<br> I also tried LGBM Rankers and the Catboost Ranker, but for lack of time to tune them I got worse results with those models.<br> As training data for the Classifiers I used the scores given by each candidate generator and simple features.<br> For each candidate: how many times it already appeared in the session for each type of interaction and the sum of these 3 counts, the length of the session (number of items) and the temporal length of the session (just now I'm realizing that I forgot to also add the count of unique items seen in the session).<br> Other features I used are Target Encodings for each interaction type, and a weighted sum of them, trained on the week before the test set, plus a target encoding trained on the test week itself (for training the models I used the target encodings calculated on the week before the validation week and the encoding calculated on the validation week itself); a hedged sketch of this kind of encoding is included at the end of this write-up.</p> <h2>Results</h2> <p>My LGBM classifiers together (<a href="https://www.kaggle.com/code/pietromaldini1/82-nd-position-lgbm-final-prediction" target="_blank">notebook</a>) got me a score of 0.58744 on the public leaderboard and 0.58746 on the private leaderboard.<br> My best single model was a tuned co-visitation based approach (<a href="https://www.kaggle.com/pietromaldini1/otto-best-single-model-tuned-co-visit" target="_blank">notebook</a>) with exponential weight decay, computed only between items temporally close to each other, and it got a score of 0.58126 on the public leaderboard and 0.58133 on the private leaderboard.</p> <h2>Conclusions</h2> <p>This weekend I’ll try to read other participants’ solutions; I’m looking forward to seeing many interesting approaches I never thought about. I’ll also try to polish and improve this post and my shared notebooks.<br> I want to learn and improve to get even better results the next time.</p> <p>Feel free to get in touch with me on LinkedIn, and if you are interested also check my GitHub and my team’s solution to the RecSys Challenge 2022; you can find the links in my Kaggle profile.</p> <p>Keep kaggling and enjoy! </p>
OTTO – Multi-Objective Recommender System
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you for hosting this super exciting competition! And thank you to everyone for sharing many important insights in this competition. The discussions have also been very beneficial to us in our efforts to achieve this result.</p> <h2>1. Overview</h2> <p>Our team consists of two members, Jack and toshi_k. Although both of us are Competitions Grandmasters, we have different strengths. Before forming a team, our approaches were totally different. Ensembling the two approaches cancelled out each one's weaknesses and boosted our team to the gold medal.</p> <p>Our solution is composed of a LightGBM part, a GNN (Graph Neural Network) part and an ensemble. Jack was in charge of the LightGBM part. He trained the best solo model in our team. toshi_k was in charge of the GNN part and the ensemble. He trained unique models with modern deep learning. It contributed +0.003 to the team score through the ensemble.</p> <p>The details of the LightGBM part are described in section 2. The GNN part is described in section 3. The ensemble method and results are described in section 4.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F169364%2Fd46e627f90f7d1565ff24df3f9076e9f%2Foverall_03.jpg?generation=1675573670780068&amp;alt=media" alt=""></p> <h2>2. LightGBM part (by Jack)</h2> <h3>2.1 Candidate Generation</h3> <p>Before I go into details, let me just say that the method described here is my own, but when I optimized Recall@20, there was little difference in performance between it and <a href="https://www.kaggle.com/code/cdeotte/compute-validation-score-cv-565" target="_blank">Chris's public notebook</a> on validation. Thus, it is unclear how much of an advantage it may have.</p> <p>The basic idea is to approximate a kind of posterior probability (let's call it a candidate score) of aids based on the co-visitation matrix, and the candidates were selected from the top ones. These candidates are common to all three action types.</p> <p>The last 10 aids and action types of the session were used to calculate the candidate score. That is, the score can be calculated as follows:</p> <p>$$Score(aid) = P(aid | (aid_1, type_1), (aid_2, type_2), …, (aid_{10}, type_{10}))$$</p> <p>Here, we forcibly assume independence and conditional independence of each action, similar in concept to Naive Bayes,</p> <p>$$\begin{eqnarray}<br> Score(aid)<br> &amp;=&amp; P(aid) \cdot \frac{P((aid_1, type_1), …, (aid_{10}, type_{10}) | aid)}{P((aid_1, type_1), …, (aid_{10}, type_{10}))} \\<br> &amp;\approx&amp; P(aid) \cdot \prod_{i=1}^{10} \frac{P((aid_i, type_i) | aid)}{P(aid_i, type_i)} \\<br> &amp;=&amp; P(aid) \cdot \prod_{i=1}^{10} \frac{P((aid_i, type_i), aid)}{P(aid_i, type_i) P(aid)}<br> \end{eqnarray}$$</p> <p>The term inside the product is the ratio of the joint probability to the product of the individual probabilities, which represents the extent to which P(aid) is enhanced by the observation of (aid_i, type_i). It is quite impossible to assume the above independence with this data, and therefore this value can exceed 1 and is no longer a probability. 
However, I expected it to work reasonably well in prioritizing candidates.</p> <p>Each term in the above equation is obtained by counting the frequency of each aid and the co-visitation of aid pairs and dividing by the total number of sessions. In calculating the co-visitation matrix, only interactions within a 24-hour period are counted, and no multiple counts are made within the same session. The period of calculation was the entire period including test data (in the inference phase), and co-visitations in both directions were counted.</p> <p>The co-visitation matrix does not hold all pairs of aids, but only those with many co-visitations for each aid. For aid pairs that are not in the co-visitation matrix, the joint-probability ratio on the right side of the above formula is set to 1, so that they do not affect the score calculation.</p> <p>In the actual calculation, the logarithm is taken and terms are further weighted toward the most recent actions, as follows:<br> $$Score(aid) = \log(P(aid)) + \sum_{i=1}^{10} \frac{11-i}{10} \cdot \log \left( \frac{P((aid_i, type_i), aid)}{P(aid_i, type_i) P(aid)} \right)$$</p> <p>Many other heuristics, such as adding pseudo counts and adjusting by action type, have been incorporated, but they are too complicated to mention here.</p> <p>Starting with those with the highest candidate score, the top 200 were taken for training data and the top 300 for inference on the test data, and then the already visited aids were added to make the final candidates.</p> <p>The recall on validation of the top 200 candidates thus obtained was as follows:</p> <ul> <li>clicks: 0.697</li> <li>carts: 0.559</li> <li>orders: 0.736</li> </ul> <h3>2.2 LightGBM Rerank Model</h3> <p>This part is not much different from the methods already shared by others. The rerank model was trained by LightGBM (LambdaRank), and separate models were built for each action type.</p> <p>For validation, the second-to-last week of the train set (truncated) was used as training data and the last week of the train set (truncated) as validation data.<br> When inference was made on the test data, a model trained on the last week's data (LightGBM1) and a model trained on the second-to-last week's data (LightGBM2) were built, and their outputs were ensembled by simple average. Since the training data were completely swapped, I expected a reasonable ensemble effect, but in fact it seems that the effect was only slight.</p> <p>Most of the features are based on the co-visitation matrix, but each aggregation period is separate for training, validation, and test. 
That is, the co-visitation matrix is created for each of the three different periods, and the features are created, so they are leakage free.</p> <p>The total number of features in the final model is 344, as follows:</p> <ul> <li>session features (32)<ul> <li>the number of all actions (1)</li> <li>the number of each action (3)</li> <li>the number of unique aids in the session (1)</li> <li>the number of unique aids of (carts/orders) and the ratio to the above (4)</li> <li>the last action type (1)</li> <li>the last relative timestamp from the start of the test period (1)</li> <li>the number of actions from the last of each action type to the last action of the session (3)</li> <li>elapsed time from i-th last action (i=2, …, 10) to the last action of the session (9)</li> <li>revisit ratio of all aids by pair of action types (9)</li></ul></li> <li>aid features (50)<ul> <li>count of (any/buy/click/cart/order) (5)</li> <li>exponential decay count of (any/buy/order) (3)</li> <li>count of (any/buy/order) in last n days (n=1~7) (21)</li> <li>count of (any/buy/order) in last n weeks (n=1~4) (12)</li> <li>revisit count in all sessions by pair of action types (9)</li></ul></li> <li>session*aid features (12)<ul> <li>the latest action of that aid (1)</li> <li>the number of each action of that aid (3)</li> <li>the number of actions from last visit to that aid to the last action of the session (1)</li> <li>elapsed time from last visit to that aid to the last action of the session (1)</li> <li>the above two features for each action type (6)</li></ul></li> <li>co-visitation features (250)<ul> <li>the number of co-visitation of aid with aid_i (i=1, …, 10) devided by the global count of aid_i<ul> <li>any to any (both direction/oneway) (20)</li> <li>any to buy (both direction/oneway) (20)</li> <li>buy to any (both direction/oneway) (20)</li> <li>buy to buy (both direction/oneway) (20)</li> <li>type_i to any (both direction/oneway) (20)</li> <li>type_i to buy (both direction/oneway) (20)</li> <li>click to click (both direction) (10)</li> <li>click to cart (both direction) (10)</li> <li>cart to click (both direction) (10)</li> <li>cart to cart (both direction) (10)</li></ul></li> <li>the rank of co-visitation of aid with aid_i (i=1, …, 10)<ul> <li>any to any (both direction) (10)</li> <li>any to buy (both direction) (10)</li> <li>buy to any (both direction) (10)</li> <li>buy to buy (both direction) (10)</li></ul></li> <li>global count of aid_i (any/buy/click/cart/type_i) (50)</li></ul></li> </ul> <p>* "any" means the action clicks or carts or orders, and "buy" means the action carts or orders.</p> <h2>3. GNN part (by toshi_k)</h2> <h3>3.1 Basic Idea</h3> <p>I considered using DL (Deep Learning) in this competition. Since the datasets are relatively simple, E2E approach of DL seemed like a desirable solution for me. Another advantage is that multi-dimensional interactions and outputs for clicks/carts/orders are easily designed as a DL model architecture.</p> <p>DL based recommendation was initially proposed as a kind of non-linear collaborative filtering. The typical one is training AutoEncoder model and using the reconstruction methodology to evaluate missing ratings.</p> <ul> <li>Training Deep AutoEncoders for Collaborative Filtering<ul> <li><a href="https://arxiv.org/abs/1708.01715" target="_blank">https://arxiv.org/abs/1708.01715</a></li></ul></li> </ul> <p>Although I implemented this type of method as a prototype, it didn't work well. The number of items was so large that it made input vectors ultra sparse. 
It also imposed heavy GPU memory requirements for the FC (Fully Connected) layers and forced the hidden layers to be shallower and thinner.</p> <p>The disadvantage of FC layers is having weights between all combinations of aids even if most of them have nothing to do with each other. After some consideration, I figured out that a GNN (Graph Neural Network) can solve this issue. The graph for the GNN can represent aid relations, and the GNN can predict attributes of aids based on nearby connected aids.</p> <p>Using a GNN for session-based recommendation is also reported in the study below. According to the paper, their method is developed to explore rich transitions among items and generate accurate latent vectors of items. Their experiments on two datasets including thousands of items show that their method outperforms the state-of-the-art methods.</p> <ul> <li>Session-based Recommendation with Graph Neural Networks<ul> <li><a href="https://arxiv.org/abs/1811.00855v4" target="_blank">https://arxiv.org/abs/1811.00855v4</a></li></ul></li> </ul> <p>My approach is similar to the previous study. One of the biggest differences in the problem setting is the number of items. In this competition, the datasets contain millions of items. To handle all items, I built a simpler workflow and introduced subgraph extraction from the global session graph.</p> <p>Basically, my approach has 3 steps.</p> <ol> <li>Construct the global graph that represents aid relations</li> <li>Extract the subgraph from the global graph for each session</li> <li>Use GNN to predict which aid will be taken</li> </ol> <p>The conceptual diagram is as below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F169364%2F2327fdf46f8bcd96370e1f7e0ebaaf4d%2Fgnn_solution_01.jpg?generation=1675573719262138&amp;alt=media" alt=""></p> <p>In the first step, the global graph that represents aid relations is constructed. A subset of training data is used to create this graph. All transitions in this data are counted up and the top P transitions (P=20, 30 or 40) from each aid are adopted as the edges of the graph. This graph roughly corresponds to what other participants call the co-visitation matrix.</p> <p>Secondly, the subgraph is extracted from the global graph for each session. The history of each session is traced and nearby connected nodes (=aids) are listed. Closer nodes and major transitions are prioritised, and hundreds of aids are selected for extraction. This process roughly corresponds to what other participants call candidate generation.</p> <p>Thirdly, the subgraphs are used to train and test the GNN. The last layer of the GNN has three channels. They predict whether clicks/carts/orders will occur in the future of each session. The inputs of the GNN are the structures of the subgraphs and the features of nodes and edges. More details of the features and the GNN model are described in the next two subsections.</p> <h3>3.2 Features</h3> <p>The input features for my GNN consist of "node features" and "edge features". Node features represent the characteristics of each aid. Edge features represent the relations between each pair of aids.</p> <p>The total number of node features is 18. Nine of them are global characteristics of aids; these features are shared among all sessions. The other nine features represent the session history; they are calculated on the session history and are different for every session. 
The list of node features is as below.</p> <ul> <li>Node features (18)<ul> <li>Global aid features (9)<ul> <li>Popularity counts (3)</li> <li>Repeat counts (3)</li> <li>Type transition counts (3)</li></ul></li> <li>Session history features (9)<ul> <li>Distance from the session history (2)</li> <li>Number of counts in the session history (3)</li> <li>Visited order features (2)</li> <li>Visited time features (2)</li></ul></li></ul></li> </ul> <p>The total number of edge features is 14. Twelve of them are global characteristics of transitions; these features are shared among all sessions. The other two features represent the session history; they are calculated on the session history and are different for every session. The list of edge features is as below.</p> <ul> <li>Edge features (14)<ul> <li>Global transition features (12)<ul> <li>Transition count not considering types (2)</li> <li>Transition rank not considering types (2)</li> <li>Cart-to-cart transition count (2)</li> <li>Cart-to-cart transition rank (2)</li> <li>Order-to-order transition count (2)</li> <li>Order-to-order transition rank (2)</li></ul></li> <li>Session history features (2)<ul> <li>Self loop or not (1)</li> <li>Stepped in session history or not (1)</li></ul></li></ul></li> </ul> <p>No combinations of multiple features or higher-dimensional features were added. It was expected that such complex features would be automatically captured by the representation capability of the GNN.</p> <p>All missing values are filled with zero, and a logarithmic transformation (log1p) is applied to most features for the stability of the GNN.</p> <h3>3.3 Model and Loss function</h3> <p>My GNN has 8 GCN (Graph Convolution) layers. This implies the GNN model can consider aids located within 8 steps of the history aids for prediction. In some trial experiments, the 8-layer model was a little better than the 4-layer one, but it was unclear whether more layers helped. </p> <p>Aside from the GCN layers, my model employs non-linear activation functions, skip connections, and normalization layers. These components made training faster and yielded lower training loss.</p> <p>As mentioned above, the last layer has three channels for clicks/carts/orders. The channels for carts and orders are connected with sigmoid functions and trained with binary cross-entropy loss. The channel for clicks is connected with a softmax function and trained with softmax cross-entropy loss.</p> <h3>3.4 Performance boosting</h3> <p>I noticed that increasing the number of candidates for inference boosted the LB score. For example, my best model improved from 0.59474 to 0.59750 on the public LB by increasing the candidates from 384 to 1024.</p> <table> <thead> <tr> <th>Model</th> <th>num of candidates</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>Model1</td> <td>384</td> <td>0.59474</td> <td>0.59451</td> </tr> <tr> <td>Model1</td> <td>1024</td> <td>0.59750</td> <td>0.59719</td> </tr> </tbody> </table> <p>Unfortunately, more candidates required more computation; 1024 candidates took several days for inference. I increased the candidates as much as possible and saved the outputs of several models.</p> <p>In the final stage of the competition, an ensemble of multiple predictions was attempted. Four GNN models were used for this ensemble. Although they were trained with slightly different settings, their basic approaches were the same. 
The ensemble of 4 models achieved 0.59894 on the public LB and 0.59874 on the private LB.</p> <table> <thead> <tr> <th>Model</th> <th>num of candidates</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>Model1</td> <td>1024</td> <td>0.59750</td> <td>0.59719</td> </tr> <tr> <td>Model2</td> <td>1024</td> <td>0.59653</td> <td>0.59613</td> </tr> <tr> <td>Model3</td> <td>512</td> <td>0.59320</td> <td>0.59318</td> </tr> <tr> <td>Model4</td> <td>768</td> <td>0.59529</td> <td>0.59478</td> </tr> <tr> <td>Ensemble of 4 models</td> <td>-</td> <td>0.59894</td> <td>0.59874</td> </tr> </tbody> </table> <h2>4. Team Ensemble (by toshi_k)</h2> <p>After we formed a team, we tried several ways to merge our predictions. We started with a simple rerank by the arithmetic mean of ranks in the submission files. At that point it improved the public score from 0.597 to 0.600.</p> <p>The next attempt was using raw predictions to boost the LB score further. We saved the raw predictions of the top 50 aids of each part. The biggest issue was that the outputs of LambdaRank range over all real numbers while the outputs of the GNN lie in (0, 1).</p> <p>Since we didn't have enough time to derive the theoretically best way, we tried some ensemble methods experimentally. One interesting finding was that the logit-transformed value of the GNN seemed to be proportionally related to the LambdaRank output up to a constant shift.</p> <p>$$ \mathrm{logit}(p^\text{Binary Prediction}) \propto v^\text{LambdaRank prediction} + C $$</p> <p>The value of the constant shift is different for every session_type. This may be just a brute-force approximation, but we estimated C for each session_type and mapped the output of LambdaRank to a 0-to-1 value.</p> <p>$$\begin{eqnarray}<br> \hat{C} &amp;=&amp; \arg \min_C \{ \frac{1}{R} \sum_r^R {}^Gp_r - \frac{1}{R} \sum_r^R \sigma({}^Lv_r + C) \}^2 \\<br> {}^Lp_r &amp;=&amp; \sigma ({}^Lv_r + \hat{C}) \\<br> \mathrm{where:} \\<br> {}^Gp_r &amp;=&amp; \text{rth output of GNN} \in (0, 1) \\<br> {}^Lv_r &amp;=&amp; \text{rth output of LambdaRank LGBM } \in \mathbb{R} \\<br> {}^Lp_r &amp;=&amp; \text{0-1 calibrated value of } {}^Lv_r \in (0, 1)<br> \end{eqnarray}$$</p> <p>After this transformation, a simple weighted average was calculated. Since the LightGBM part achieved a better score on the public LB, the weight of LightGBM was set larger than that of the GNN part. We tried two patterns of weight settings and chose both of them as final submissions.</p> <table> <thead> <tr> <th></th> <th>weight</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>LightGBM Part</td> <td>1.000 (LightGBM) : 0.000 (GNN)</td> <td>0.60025</td> <td>0.60008</td> </tr> <tr> <td>GNN part</td> <td>0.000 (LightGBM) : 1.000 (GNN)</td> <td>0.59894</td> <td>0.59874</td> </tr> <tr> <td>Final Submission 1</td> <td>0.600 (LightGBM) : 0.400 (GNN)</td> <td>0.60311</td> <td>0.60307</td> </tr> <tr> <td>Final Submission 2</td> <td>0.525 (LightGBM) : 0.475 (GNN)</td> <td>0.60302</td> <td>0.60313</td> </tr> </tbody> </table> <p>Both final submissions improved the LB score over the best of the two parts. While Final Submission 1 was the best on the public LB, Final Submission 2 was the best on the private LB. Even though all single models got worse on the private LB, the score of Final Submission 2 on the private LB is better than on the public LB. Our team merge and ensemble were successful in this sense.</p> <h2>5. Conclusion</h2> <p>Our team employed LightGBM and a GNN for this competition. Each approach took a different way of doing candidate generation, feature engineering and model design. 
The ensemble of the two types of approaches boosted our team score a lot.</p> <p>This competition gave toshi_k his 6th gold medal and Jack his 7th gold. It motivates us toward further success. What we learned in this competition can be applied not only to future competitions but also to real-world projects. It confirmed once again that Kaggle is a practical platform for data science.</p> <p>Thank you for reading this to the end!</p>
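<p>To make the candidate scoring of section 2.1 more concrete, here is a hedged sketch in plain Python; the probability lookups are assumptions about how the counted frequencies could be stored, not the author's implementation.</p> <pre><code>import math

# p_aid[aid]              ~ P(aid)
# p_event[(aid_i, t_i)]   ~ P(aid_i, type_i)
# p_joint[(event, aid)]   ~ P((aid_i, type_i), aid), from the co-visitation counts
def candidate_score(aid, last_events, p_aid, p_event, p_joint):
    # last_events: up to 10 most recent (aid_i, type_i) pairs, most recent first
    score = math.log(p_aid[aid])
    for i, event in enumerate(last_events[:10], start=1):
        joint = p_joint.get((event, aid))
        if joint is None:
            continue  # missing pairs keep a ratio of 1, i.e. add nothing to the log score
        weight = (11 - i) / 10  # the most recent action gets the largest weight
        score += weight * math.log(joint / (p_event[event] * p_aid[aid]))
    return score
</code></pre> <p>The candidates would then be the aids with the highest scores, with the already visited aids appended, as described in the write-up.</p>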
OTTO – Multi-Objective Recommender System
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Rayan-aay.<br> I repeated trial and error by changing the ratio of the co-visitation matrix, ProNE, and w2v to maximize candidate recall. Finally, I created candidates by ensembling the following three patterns with the code below from the H&amp;M competition.</p> <ol> <li>top-20 ProNE aids, top-30 w2v aids, click2click within 5 actions (top 75 aids), any2any (top 150 aids)</li> <li>top-20 ProNE aids, top-30 w2v aids, any2(cart or order) within 10 minutes (top 50 aids), click2click within 5 actions (top 85 aids), any2any (top 150 aids)</li> <li>top-20 ProNE aids, top-30 w2v aids, (click or cart)2(cart or order) within 10 actions (top 50 aids), click2click within 1 minute (top 85 aids), any2any (top 150 aids)</li> </ol> <p>I created a model for each of clicks, carts, and orders, but did not change the candidates for each of them. I used W = [1,1,1].</p> <pre><code>def cust_blend(dt, W = [1,1,1]):
    # Global ensemble weights
    # W = [1.15,0.95,0.85]

    # Create a list of all model predictions
    REC = []
    REC.append(dt['prediction0'].split())
    REC.append(dt['prediction1'].split())
    REC.append(dt['prediction2'].split())

    # Create a dictionary of items recommended.
    # Assign a weight according to the order of appearance and multiply by global weights
    res = {}
    for M in range(len(REC)):
        for n, v in enumerate(REC[M]):
            if v in res:
                res[v] += (W[M]/(n+1))
            else:
                res[v] = (W[M]/(n+1))

    # Sort dictionary by item weights
    res = list(dict(sorted(res.items(), key=lambda item: -item[1])).keys())

    # Return the top 100 items only
    return ' '.join(res[:100])
</code></pre> <p>reference: <a href="url" target="_blank">https://www.kaggle.com/code/titericz/h-m-ensembling-how-to</a></p> <p>I'd like to thank the Kaggle staff and the OTTO team for organizing this interesting competition!</p> <p>I'm relieved that I was able to make it through to the end, even though my score did not improve much in the final stages of the competition.</p> <p>Here is my solution.</p> <h2>Candidate generation</h2> <ul> <li><strong>re-visit</strong> - all items from the session's history</li> <li><strong>co-visitation matrix</strong> <ul> <li>any2any, click2click, click2cart, click2order, (click or cart)2order, (cart or order)2order, etc.</li> <li>I tried various patterns to create the co-visitation matrix, <br> e.g. co-visits that do not take time into account, actions immediately after an action, within 5 or 10 actions, within 5 or 10 minutes, etc.</li></ul></li> <li><strong>create click2click (only consider the next item) graph and apply ProNE</strong> <ul> <li>The idea came from the hypothesis that items clicked on immediately after an item is clicked are similar to each other.</li> <li>I created a two-column DataFrame (an item and the item clicked immediately after) and ran it through ProNE. 
The number of dimensions of ProNE was 1000, and the more dimensions I increased, the more accurate the candidate recall became.</li> <li>Retrieve top-k aids by ProNE embeddings (Used cuml.neighbors.NearestNeighbors and metric='cosine')</li></ul></li> <li><strong>word2vec</strong><ul> <li>Trained word2vec model with aid sequences (Used gensim, size=50 or 100)</li> <li>Retrieve top-k aids by w2v embeddings (Used cuml.neighbors.NearestNeighbors and metric='cosine')</li></ul></li> </ul> <h2>Re-Ranking</h2> <h3>Model</h3> <ul> <li>LGBMRanker (lambdarank) </li> <li>I created one model each to predict clicks, carts, and orders.</li> </ul> <h3>CV</h3> <ul> <li>I created validation sets with the host's old version scripts</li> <li>I created 100 candidates per session when training</li> <li>candidate recall<ul> <li>click: 0.6622</li> <li>cart: 0.5113</li> <li>order: 0.7059</li></ul></li> <li>CV <ul> <li>click: 0.5601</li> <li>cart: 0.4414</li> <li>order: 0.6664</li></ul></li> <li>I created 300 candidates per session when inferencing</li> <li>Public LB: 0.603, Private LB: 0.603</li> </ul> <h3>Feature</h3> <ul> <li><strong>session features</strong><ul> <li>type count by session (type='clicks' or 'carts' or 'orders') </li> <li>number of unique types by session (type='clicks' or 'carts' or 'orders') </li> <li>type mean by session ('clicks'=1, 'carts'=2, 'orders'=3 and mean by session) </li></ul></li> <li><strong>aid features</strong> <ul> <li>type count within all sessions (type='clicks' or 'carts' or 'orders') </li> <li>type count within test sessions (type='clicks' or 'carts' or 'orders') </li> <li>click to cart rate within same sessions, cart to order rate within same sessions, click to order rate within same sessions</li></ul></li> <li><strong>session x aid features</strong><ul> <li>co-visitation count, rate, time-weighted count, rate, e.t.c</li> <li><strong>similarity</strong><ul> <li>cosine similarity between candidate item and last item of the session (w2v, ProNE)</li> <li>cosine similarity between candidate item and the second item from the back of the session (w2v, ProNE)</li> <li>cosine similarity between candidate item and all items of the session (w2v, ProNE)</li> <li>click2click Jaccard index score</li></ul></li></ul></li> </ul> <h2>What did not work well</h2> <ul> <li>Create candidates with node2vec</li> <li>GRU</li> <li>RecVAE</li> <li>SAR</li> <li>BPR</li> <li>Pseudo Labeling </li> </ul> <p>Thanks A.Sato.<br> I used the following library<br> <a href="url" target="_blank">https://github.com/THUDM/ProNE</a></p>
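<p>For the ProNE / word2vec candidate retrieval described above, a hedged sketch of the nearest-neighbour step might look like the following; it uses scikit-learn's <code>NearestNeighbors</code>, while the write-up used the GPU drop-in from cuML, and the embedding array is an assumption.</p> <pre><code>from sklearn.neighbors import NearestNeighbors  # the post used cuml.neighbors.NearestNeighbors

# emb is assumed to be an (n_items, dim) array of ProNE or word2vec vectors,
# with row i corresponding to item id i
def topk_similar(emb, k=20):
    nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine")
    nn.fit(emb)
    dist, idx = nn.kneighbors(emb)        # each item's neighbours, including itself
    return idx[:, 1:], 1.0 - dist[:, 1:]  # drop the self-match; return ids and similarities

cand_ids, cand_sims = topk_similar(prone_embeddings, k=20)
</code></pre>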
OTTO – Multi-Objective Recommender System
9th place solution🥇 [single model LB 0.602]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you to the competition organizers for a great competition.<br> I am very happy to win my first gold medal🥇</p> <p>Here is my solution!</p> <h1>Overview</h1> <p><strong>best single model</strong></p> <table> <thead> <tr> <th>orders CV</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>0.67087</td> <td>0.602</td> </tr> </tbody> </table> <p>(I have not calculated the CV for all orders/carts/clicks combined.)</p> <p>Probably the same as many other competitors, the candidates generation &amp; rerank method was used.</p> <h1>Candidates generation</h1> <p>On average, <strong>180 candidates</strong> were selected per session. (<strong>orders recall: 0.725</strong>)</p> <ul> <li>14 patterns of item-based CF</li> <li>2 patterns of user-based CF</li> <li>re-visit</li> </ul> <p>The candidates generated by the 14 patterns of item-based CF and 2 patterns of user-based CF were assembled to create the final candidates.</p> <p>Since <strong>sessions with a larger number of aids in the test data are more likely to have a larger number of events in the forecasting period</strong>, more candidates were created for sessions with a larger number of aids in the test data.</p> <pre><code>df = test_df[test_df["session"].isin(target_session)].groupby("session")["aid"].count().reset_index() df.columns = ["session", "count"] df["count"] = (df["count"]**0.5*10).astype("int32") df["aid"] = df.progress_apply(lambda row:list(dict(rec[row["session"]].most_common(row["count"])).keys()),axis=1) df = df.explode(["aid"]) df = df[df["aid"].notnull()].reset_index(drop=True) </code></pre> <h1>Feature Engineering</h1> <p>A total of <strong>226 features</strong> were created.</p> <ul> <li><strong>item-based CF features</strong></li> </ul> <p>First, 59 patterns of item-based CF were created.</p> <p>ex) pattern1</p> <pre><code>def get_aid_similarity1(df, topk=200): session_info = df.drop_duplicates(["session", "aid"], keep="last")\ .groupby("session", as_index=False)[["aid", "type", "ts"]].agg(list) aid_similarity = {} for session, aids, tps, tss in tqdm(zip(session_info["session"], session_info["aid"], session_info["type"], session_info["ts"]), total=len(session_info)): for aid1, tp1, ts1 in zip(aids, tps, tss): session_length = math.sqrt(len(aids)) aid_similarity.setdefault(aid1, Counter()) for aid2, tp2, ts2 in zip(aids, tps, tss): if (aid1 == aid2): continue aid_similarity[aid1][aid2] += (1/session_length) # Exclude all but the topK to save time and memory for aid1, aid2_dict in tqdm(aid_similarity.items()): relations = dict(aid2_dict.most_common(topk)) # normalize if len(relations) == 0: continue max_num = relations[max(relations, key=relations.get)] if max_num == 0: continue aid_similarity[aid1] = {k: v / max_num for k, v in relations.items()} del session_info; gc_clear() return aid_similarity </code></pre> <p>ex) pattern2</p> <pre><code>def make_real_session(df, hours=2): df["lag"] = df["ts"] - df.groupby("session")["ts"].shift(1) df["real_session"] = (df["lag"] &gt; 1000*60*60*hours).astype('int8').fillna(0) df["real_session"] = df.groupby("session")["real_session"].cumsum() del df["lag"]; gc_clear() return 
df def get_aid_similarity2(df, topk=200): df = make_real_session(df, hours=4) session_info = df.groupby(["session", "real_session"], as_index=False)[["aid", "type", "ts"]].agg(list) aid_similarity = {} aid_cnt = defaultdict(int) for session, real_session, aids, tps, tss in tqdm(zip(session_info["session"], session_info["real_session"], session_info["aid"], session_info["type"], session_info["ts"]), total=len(session_info)): for aid1, tp1, ts1 in zip(aids, tps, tss): aid_similarity.setdefault(aid1, Counter()) for aid2, tp2, ts2 in zip(aids, tps, tss): if (abs(ts1-ts2)&gt;24*60*60*1000) or (aid1 == aid2): continue aid_cnt[aid1] += 1 if min(tp1, tp2)==0: aid_similarity[aid1][aid2] += 1 elif min(tp1, tp2)==1: aid_similarity[aid1][aid2] += 3 elif min(tp1, tp2)==2: aid_similarity[aid1][aid2] += 6 # Exclude all but the topK to save time and memory for aid1, aid2_dict in tqdm(aid_similarity.items()): for aid2, score in aid2_dict.items(): aid_similarity[aid1][aid2] = score / math.sqrt(aid_cnt[aid1]*aid_cnt[aid2]) aid_similarity[aid1] = dict(aid2_dict.most_common(topk)) del session_info; gc_clear() return aid_similarity </code></pre> <p>(Further increasing the <code>topk</code> parameter did not improve the score.)</p> <p>Then, feature creation was performed using the various <code>aid_similarity</code> created above.<br> Unlike <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">cdeotte's notebook</a>, features were created by summing the value of sim_item_dict.</p> <pre><code>def recommend_aid(df, _test_df, target_session, aid_similarity, feature_name, type_weighted=False, type_weights={0:1, 1:3, 2:6}, time_weighted=False, only_last_aid=False, same_real_session=False, real_session_hours=2, drop_duplicates=False ): # ============================================================== # test_df preprocess # ============================================================== if only_last_aid: _test_df = _test_df.groupby("session").last().reset_index() if same_real_session: _test_df = make_real_session(_test_df, hours=real_session_hours) _test_df["real_session_max"] = _test_df.groupby("session")["real_session"].transform("max") _test_df = _test_df[_test_df["real_session"]==_test_df["real_session_max"]].reset_index(drop=True) del _test_df["lag"], _test_df["real_session"], _test_df["real_session_max"]; gc_clear() if drop_duplicates: _test_df = _test_df.groupby(["session", "aid"], as_index=False)[["type", "ts"]].max() # ============================================================== # Create recommend dictionary # ============================================================== if type_weighted: _test_df["type_weights"] = _test_df["type"].map(type_weights).astype("int8") session_info = _test_df[_test_df["session"].isin(target_session)]\ .groupby("session", as_index=False)[["aid", "type_weights", "ts"]].agg(list) del _test_df; gc_clear() rec = {} for session, aids, type_weights, tss in tqdm(zip(session_info["session"], session_info["aid"], session_info["type_weights"], session_info["ts"]), total=len(session_info)): rec.setdefault(session, Counter()) if time_weighted: time_weights = make_time_weights(tss) for aid, time_weight, type_weight in zip(aids, time_weights, type_weights): rec[session] += {aid: v*time_weight*type_weight for (aid, v) in aid_similarity.get(aid, {}).items()} else: for aid, type_weight in zip(aids, type_weights): rec[session] += {aid: v*type_weight for (aid, v) in aid_similarity.get(aid, {}).items()} else: session_info = 
_test_df[_test_df["session"].isin(target_session)]\ .groupby("session", as_index=False)[["aid", "ts"]].agg(list) del _test_df; gc_clear() rec = {} for session, aids, tss in tqdm(zip(session_info["session"], session_info["aid"], session_info["ts"]), total=len(session_info)): rec.setdefault(session, Counter()) if time_weighted: time_weights = make_time_weights(tss) for aid, time_weight in zip(aids, time_weights): rec[session] += {aid: v*time_weight for (aid, v) in aid_similarity.get(aid, {}).items()} else: for aid in aids: rec[session] += aid_similarity.get(aid, {}) del session_info; gc_clear() # ============================================================== # Create features # ============================================================== feature = [] for session, aid in tqdm(zip(df["session"], df["aid"]), total=len(df)): feature.append( rec.get(session, {}).get(aid, 0) ) df[feature_name] = feature del feature, rec; gc_clear() return df </code></pre> <ul> <li><strong>user-based CF features</strong></li> </ul> <p>6 patterns of item-based CF were created.</p> <ul> <li><strong>other features</strong><ul> <li><strong>aid features</strong><ul> <li>number of visits per week (train)</li> <li>number of visits per day (test)</li> <li>last click/cart/order ts per aid</li></ul></li> <li><strong>session features</strong><ul> <li>number of visits per session (test)</li> <li>last click/cart/order ts per session</li></ul></li> <li><strong>aid x session features</strong><ul> <li>last click/cart/order ts per session and aid</li> <li>percentage of aids visited since the last ts of each session in the test data</li></ul></li></ul></li> </ul> <h1>Model training</h1> <p>Different pipelines were used depending on which of the orders/carts/clicks was the target.</p> <p>For orders and carts, models were trained in two stages (1st stage/2nd stage).</p> <h4>orders model</h4> <ul> <li><strong>1st stage</strong></li> </ul> <p>All candidates were trained on the model without negative sampling.<br> I didn't want to do negative sampling as much as possible because negative sampling lowers the score.</p> <ul> <li><strong>2nd stage</strong></li> </ul> <p>As a result of the 1st stage, only the top 50 candidates per session were selected for the 2nd stage.<br> The predictions from the 1st stage were not used for the 2nd stage features.</p> <h4>carts model</h4> <ul> <li><strong>1st stage</strong></li> </ul> <p>Since I could not run the model on all the data without negative sampling, I trained the model with negative sampling (x 0.3) in the 1st stage.</p> <ul> <li><strong>2nd stage</strong></li> </ul> <p>The model of the 1st stage was used to create oof for all candidates, and then the top 50 candidates per session were selected for the 2nd stage.</p> <h4>clicks model</h4> <ul> <li><strong>1st stage</strong></li> </ul> <p>Only the 1st stage was run with negative sampling (x 0.1).</p> <p>Probably the score would be higher if the 2nd stage was performed as in the carts model, but since the clicks have less weight on the score, the 2nd stage for the clicks was not conducted for cost-effectiveness.</p> <h4>Algorithm and Parameters</h4> <p>I created one Catboost model each to predict clicks, carts, and orders.</p> <h5>Parameters</h5> <pre><code>scale_pos_weight = (y_trn==0).sum()/(y_trn==1).sum() CAT_PARAMS = { 'loss_function': 'Logloss', 'learning_rate': 0.02, 'max_depth': 5, 'task_type': 'GPU', 'scale_pos_weight': scale_pos_weight, } </code></pre> <h1>Ensemble</h1> <p>I created models with 5 seeds and ensembled them.<br> (I 
changed not only the seed of the model parameters, but also the seed in the test/ground truth split.)</p> <p>Unlike the ensemble method in <a href="https://www.kaggle.com/code/radek1/2-methods-how-to-ensemble-predictions" target="_blank">@radek1's notebook</a>, the ensemble was performed in a way to apply weight to each of the predicted rankings.</p> <h1>Post process</h1> <p>For sessions with fewer than 20 aids to recommend, I recommended popular aids.</p> <h1>what didn't work</h1> <ul> <li>word2vec</li> <li>ALS</li> <li>BPR</li> </ul> <h1>Environment</h1> <p>only Google Colab Pro+ :)</p>
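<p>The rank-weighted ensemble is described above in only one sentence, so the following is just one plausible reading of it, sketched with placeholder names: each seed's ranked list contributes a score that decays with rank, the scores are summed per aid, and the top 20 are kept.</p> <pre><code>from collections import defaultdict

def ensemble_ranks(predictions, weights, topn=20):
    # predictions: list of dicts {session: [aid ranked best-first]} -- one per seed
    sessions = predictions[0].keys()
    out = {}
    for session in sessions:
        scores = defaultdict(float)
        for preds, w in zip(predictions, weights):
            for rank, aid in enumerate(preds[session]):
                scores[aid] += w / (rank + 1)   # weight applied to the predicted ranking
        out[session] = [aid for aid, _ in sorted(scores.items(), key=lambda x: -x[1])[:topn]]
    return out
</code></pre>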
OTTO – Multi-Objective Recommender System
[Silver] 113th - Computationally simple solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This idea was on my to-do list, together with trying different co-visitation matrices, but unfortunately I didn't have time. </p> <p>Congrats on your medal!</p> <p><em>I am currently updating this post and I will share my GitHub repository once I have finished fixing some code</em></p> <p>Through this competition I learned a lot, so I thought that sharing my experience might help, in particular because I was able to achieve this score by (almost) using only my laptop, which is a MacBook with 16 GB of RAM. This is my first medal, so I am quite happy nevertheless.</p> <p>Like almost everyone else I also used the two-step approach:</p> <ul> <li>Candidate generation</li> <li>Re-ranking </li> </ul> <p>One thing that saved me was a slightly modified version of the function that you can find at the link, which I ran after almost every operation: </p> <pre><code>def reduce_memory(df):
    """
    Credits: https://www.kaggle.com/code/gemartin/load-data-reduce-memory-usage/notebook
    Function that iterates through all the columns of a dataframe and modifies
    the data type to reduce memory usage.
    """
    for col in df.columns:
        col_type = df[col].dtype
        if (str(col_type)[:3] == 'int') or (str(col_type)[:3] == 'flo') or (str(col_type)[:3] == 'obj'):
            if col_type != object:
                c_min = df[col].min()
                c_max = df[col].max()
                if str(col_type)[:3] == 'int':
                    if c_min &gt; np.iinfo(np.int8).min and c_max &lt; np.iinfo(np.int8).max:
                        df[col] = df[col].astype('int8')
                    elif c_min &gt; np.iinfo(np.int16).min and c_max &lt; np.iinfo(np.int16).max:
                        df[col] = df[col].astype('int16')
                    elif c_min &gt; np.iinfo(np.int32).min and c_max &lt; np.iinfo(np.int32).max:
                        df[col] = df[col].astype('int32')
                    elif c_min &gt; np.iinfo(np.int64).min and c_max &lt; np.iinfo(np.int64).max:
                        df[col] = df[col].astype('int64')
                else:
                    if c_min &gt; np.finfo(np.float16).min and c_max &lt; np.finfo(np.float16).max:
                        df[col] = df[col].astype('float16')
                    elif c_min &gt; np.finfo(np.float32).min and c_max &lt; np.finfo(np.float32).max:
                        df[col] = df[col].astype('float32')
                    else:
                        df[col] = df[col].astype('float64')
            else:
                df[col] = df[col].astype('category')
    return df
</code></pre> <h2>Candidate Generation</h2> <p>I was able to reach the following recall for the validation set</p> <ul> <li>Clicks: 62.04%</li> <li>Carts: 53.89%</li> <li>Orders: 73.23%</li> </ul> <p>through the following approach. The candidates for each session were produced through:</p> <ul> <li><strong>Co-visitation matrix</strong> with everything in the train data (this is the only thing that I computed through Kaggle notebooks)</li> <li><strong>Co-visitation matrix</strong> with everything in the train data with only <strong>positive time distance</strong>, i.e.
we link item y to item x only if item y comes after item x and within the time frame.</li> <li><strong>Co-visitation matrix</strong> with only <strong>carts and orders</strong></li> <li><strong>Co-visitation matrix</strong> with only <strong>orders</strong></li> <li>Last <strong>previously interacted items</strong></li> <li><strong>Word2Vec similar items</strong> to the last one seen in the session</li> </ul> <p>and to make everything simple, after computing the co-visitation matrices, I split the sessions into folds through the following code:</p> <pre><code>session_array = test['session'].unique()
random.shuffle(session_array)
session_split = np.array_split(session_array, N_FOLDS)

i = 0
for session_subset in session_split:
    if i == 0:
        fold_df = pd.DataFrame(zip(session_subset, np.zeros(len(session_subset))), columns=['session', 'fold'])
        fold_df['fold'] = fold_df['fold'].astype(int)
    else:
        extra = pd.DataFrame(zip(session_subset, np.zeros(len(session_subset))), columns=['session', 'fold'])
        extra['fold'] = extra['fold'].astype(int)
        extra['fold'] = extra['fold'] + i
        fold_df = pd.concat([fold_df, extra], axis=0)
    i = i + 1

fold_df.to_csv('session_fold.csv', index=False)
</code></pre> <p>This allowed me to produce the candidate dataframes separately, avoiding memory issues. Therefore, for each fold I had the whole set of candidates in separate files. With this I was able to perform undersampling to reduce the # of negative samples to 20 times the # of positive samples, and I saved these dataframes for the training. Having them separately made performing n-fold training and computing validation very easy, since I had both the sampled and the original candidates ready.</p> <h2>Features</h2> <p>This was probably the part where I was lacking the most, due to time constraints on my part and due to the low score that I got given the high recall of the candidate generation part. I produced features regarding:</p> <ul> <li>user</li> <li>item</li> <li>user-item interactions</li> <li>how the candidate was generated</li> </ul> <p>and I saved them as CSV files locally, so that I could combine them with the required dataframe as needed.</p> <h2>Training</h2> <p>For training, as I stated before, having the sampled dataframes saved allowed me to combine them easily with the features and also to save them. This allowed me to train an XGBoost Ranker locally on my MacBook in ~20-30 minutes. I wasn't able to try the LGBM Ranker since it wasn't supported on my current macOS version (remember to never update to the latest OS version, haha), while the CatBoost ranker had almost the same performance.</p> <h2>Inference</h2> <p>For inference, to avoid out-of-memory errors I loaded each non-sampled dataframe in chunks, added features to each chunk, performed inference, and finally saved the predictions. At the end I combined the predictions and took the top 20 to submit. </p> <p>Thank you! I hope so, having that massive drop in ranks and landing outside silver wasn't funny at all…</p> <p>Also congrats on your score and on your silver medal!</p>
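<p>The 20:1 undersampling mentioned above could be done per fold with a few lines of pandas; this is a hedged sketch with assumed column names, not the author's exact code.</p> <pre><code>import pandas as pd

def undersample(cands, ratio=20, seed=42):
    positives = cands[cands["label"] == 1]
    negatives = cands[cands["label"] == 0]
    n_keep = min(len(negatives), ratio * len(positives))
    sampled = negatives.sample(n=n_keep, random_state=seed)
    return pd.concat([positives, sampled]).sort_values("session").reset_index(drop=True)

train_fold = undersample(fold_candidates)   # done once per fold, then saved for training
</code></pre>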
OTTO – Multi-Objective Recommender System
ex-27th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Appreciation</h2> <p>First, I would like to thank the organizers and those who shared knowledge. We would especially like to thank <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> for sharing so many valuable notebooks and datasets, and <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for explaining many questions in detail.</p> <p><strong>Will Release Some Reproducible Code When The LB is Finalized</strong></p> <h2>CV &amp; LB Flow</h2> <p>Since <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>'s CV setting tracks the public LB perfectly, we directly use his datasets in this competition. The following graph shows how we leverage his datasets for feature engineering, local validation, and submission generation.</p> <ul> <li>Local validation uses his validation dataset.</li> <li>Submission generation uses his full dataset and reranking models trained from local validation.</li> <li>To avoid data leakage, the covisit matrix and Item2Vec are trained separately on the validation's <code>train.parquet</code> and <code>test.parquet</code> and on the full dataset's <code>train.parquet</code> and <code>test.parquet</code>.</li> <li>The metrics that we tracked are recall@200 for the recall strategy, wdcg@20 (type-weighted average NDCG scores of the click, cart and order rankers) for model training, validation recall@20, and the PB score after submission. The wdcg@20, validation recall@20, and PB score are perfectly aligned. This allows faster iteration on different components in parallel.</li> <li><strong>It was a deep collaboration within the team, with each team member responsible for several parts, as you can see in the graph</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3563032%2F93c4b8a615882a249af1509539a7c374%2Fotto-dream-team-solution.png?generation=1675291808419108&amp;alt=media" alt="otto-dream-team-solution"></li> </ul> <h2>Retrieving</h2> <p>Our retrieval strategy is quite simple: we do some hyperparameter tuning on <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">public notebook</a>, mainly the <code>topN</code> of each kind of co-visitation matrix and the <code>N_REC</code> for each session. The optimal <code>topN</code> for each co-visitation matrix is <code>100</code>, and the optimal <code>N_REC</code> is <code>200</code>. Then we use <a href="https://www.kaggle.com/tuongkhang" target="_blank">@tuongkhang</a> 's <a href="https://www.kaggle.com/code/tuongkhang/otto-pipeline2-lb-0-576/notebook" target="_blank">public notebook</a> to generate the <code>recall200</code> candidates for both local validation and test submission. 
Then we were able to get recall@200 for each action type as follows:</p> <ul> <li>clicks recall = 0.68420</li> <li>carts recall = 0.54552</li> <li>orders recall = 0.72831</li> <li>overall recall = 0.66906</li> </ul> <p><strong>What didn't work</strong></p> <ul> <li>BPR-based candidates</li> <li>Graph embedding(LINE) - based candidates</li> </ul> <h2>Rerank</h2> <h3>Feature Enginneering</h3> <p><strong>We kept monitoring the feature importance in Google Sheets, this allowed us to discuss and figure out new features efficiently</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3563032%2Ff465e0967c097c07277c07f51d3080d8%2Ffeature-importance.png?generation=1675296313006485&amp;alt=media" alt="feature-importance"></p> <p>The 1st version of our feature set only contains <strong>78</strong> features, and we got <code>0.592</code> on LB and <code>0.5802</code> on CV. Finally added up to ~500 features, and we achieved <code>0.596</code> on LB with a single LGB model.</p> <h4>User Features</h4> <ul> <li>count features: (event|click|cart|order|unique aids)</li> <li>type weighted aggregated score</li> <li>time-weighted aggregated score</li> <li>ratio features (click2cart, click2order, click2cart_or_order)</li> <li>time features (1st seen|click|cart|order, last seen|click|cart|order, click|cart|order hours' sin|cos mean and median)</li> </ul> <h4>Aid Features</h4> <ul> <li>count features (event|click|cart|order|unique users)</li> <li>type weighted aggregated score</li> <li>time-weighted aggregated score</li> <li>ratio features (click2cart, click2order, <code>click2cart_or_order</code>)</li> <li>time features (1st seen|click|cart|order, last seen|click|cart|order, click|cart|order hours' sin|cos mean and median)</li> <li>co-visitation features: <code>n_covisit_{click, cart, order}</code>, <code>n_incovisit_{click, cart, order}</code>, <code>in_cosivist_{rank, avg_rank}</code><br> <em>co-visitation features generation</em></li> </ul> <h4>User-Aid Features</h4> <ul> <li>count features (event|click|cart|order)</li> <li>type weighted aggregated score</li> <li>time-weighted aggregated score</li> <li>log recency aggregated score (thanks to <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> again)</li> <li>co-visitation features: <code>{time, type, buy}_covisit_occurs_{timeweighted, typeweighted, recency weighted score}</code> <code>{time, type, buy}_covisit_rank_{timeweighted, typeweighted, recency weighted score}</code></li> <li>jaccard similarity between target aid and user interacted aids</li> <li><code>{click, cart, order}_word2vec_similarity_{min, max, mean, sum}</code> between target aid and user interacted aids</li> <li>BPR: user2aid BPR score, aid2user BPR score</li> <li>affinity_timedecay_7 (exponential weighted time decay score of action types)</li> </ul> <h3>Rankers</h3> <ul> <li>LightGBM with <code>lambdarank</code> objective</li> <li>XGboost with <code>rank:pairwise</code> objective</li> <li>CatBoost with QueryCrossEntropy loss (the best)</li> </ul> <h3>Ensemble</h3> <p>We searched the weights of each ranker's predicted scores based on the recall@20 of the local validation candidates using optuna, which means the recall@20 is calculated by weighted summing the scores of Tens of Millions of samples and calculating the recall@20. By leveraging the <code>cudf</code> and my implemented <code>calc_recall_fast</code> function, each trial only takes 700ms for carts (60M samples). 
We set the weight range to <code>(-1, 1)</code> to get a better score:</p> <pre><code>import optuna

def objective(trial) -&gt; float:
    # weight range (-1, 1) as described above; the parameter names, step,
    # study_name and n_trials below are illustrative placeholders
    weights = [1.0 / len(pred_cols)] * len(pred_cols)
    for i in range(len(weights)):
        weights[i] = trial.suggest_float(f"w{i}", -1.0, 1.0, step=0.01)
    return calc_recall_fast(
        model_preds, ground_truths, sess_len_cumsum, weights, gt_cnt
    )

study = optuna.create_study(direction="maximize", study_name="ranker-ensemble")
study.optimize(objective, n_trials=100, show_progress_bar=True)
</code></pre> <p>Hi <a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a> , thanks! In short, <code>(session, aid, label)</code> is generated based on the 4th week's 1st split and 2nd split, whereas the features are generated based on 3 weeks + the 4th week's 1st split.</p> <p>The <code>(session, aid, label)</code> triplets in the training samples are based on radek's <code>test.parquet</code> and <code>test_labels.parquet</code> in his local validation dataset, whereas the features are generated from the <code>train.parquet</code> and <code>test.parquet</code> in that dataset.</p> <p>Thanks, I'm updating the feature engineering and ranker parts. You can check back later.</p> <p>Not that much; CV went from 0.584491 to 0.585179.</p> <p>Neither LGB's nor CatBoost's results are exactly reproducible. However, after many training experiments, I found that the gap between the tuning stage and the training stage can be decreased by increasing <code>early_stopping_round</code>; I raised this parameter to <code>300</code> for this competition, and the gap is relatively small. <br> If you set a relatively small <code>early_stopping_round</code>, e.g. <code>20</code> or <code>40</code>, then most likely early stopping is triggered before your model actually converges. Because of the randomness (even with a fixed seed), it could be triggered earlier during training than during tuning, and then you'll see a big gap between training and tuning regarding model performance.</p> <blockquote> <p>the parameters searched by optuna and the final model effect could not be reproduced</p> </blockquote> <p>Did you compare this to the local CV score? If so, given the weights, the local score is determined.</p>
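<p>To make the ranker setup above more concrete, here is a minimal LightGBM <code>lambdarank</code> sketch with a large early-stopping patience, echoing the reply about <code>early_stopping_round</code>. The data is a synthetic toy set, and every shape and parameter value is illustrative rather than the team's configuration:</p> <pre><code>import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
n_sessions, n_cand, n_feat = 1000, 50, 20             # toy sizes
X = rng.normal(size=(n_sessions * n_cand, n_feat))
y = rng.integers(0, 2, size=n_sessions * n_cand)      # 1 = positive candidate
group = np.full(n_sessions, n_cand)                   # candidates per session

split = 800 * n_cand
train_set = lgb.Dataset(X[:split], label=y[:split], group=group[:800])
valid_set = lgb.Dataset(X[split:], label=y[split:], group=group[800:])

params = {"objective": "lambdarank", "metric": "ndcg", "eval_at": [20],
          "learning_rate": 0.05, "num_leaves": 63}

ranker = lgb.train(
    params, train_set, num_boost_round=10000, valid_sets=[valid_set],
    callbacks=[lgb.early_stopping(stopping_rounds=300),   # large patience, per the reply above
               lgb.log_evaluation(200)],
)
</code></pre>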
Online Product Sales
The code of my best submission
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Online Product Sales <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here you can find the code of my best submission (21st):</p> <p><a href="https://github.com/emanuele/kaggle_ops">https://github.com/emanuele/kaggle_ops</a></p> <p>It is a simple blending of Gradient Boosting models. The initial dataset was created by adding binary vectors to represent the categorical variables and by encoding the &quot;Dates&quot; as (categorical and scalar) year, month, and day.</p> <p>The code is based on the excellent scikit-learn Python library.</p> <p>I'm publishing my code to invite other participants to do the same.</p> <p>Please use this thread to publish your code and to discuss published code.</p> <p>... and of course, congratulations to the winners!</p>
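<p>As a rough illustration of the approach described above (column names, parameters, and the blending scheme are my own assumptions, not the original code), the preprocessing and blended Gradient Boosting could look like this with scikit-learn:</p> <pre><code>import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def expand_dates(df, date_col="Date_1"):
    # represent a date column as scalar year / month / day features
    d = pd.to_datetime(df[date_col])
    return df.assign(year=d.dt.year, month=d.dt.month, day=d.dt.day)

def build_features(df, cat_cols):
    # binary (one-hot) vectors for the categorical variables
    return pd.get_dummies(df, columns=cat_cols)

def blend_predict(X_train, y_train, X_test, n_models=5):
    # simple blending: average Gradient Boosting models trained with different seeds
    preds = 0.0
    for seed in range(n_models):
        model = GradientBoostingRegressor(n_estimators=500, max_depth=4,
                                          learning_rate=0.05, random_state=seed)
        model.fit(X_train, y_train)
        preds = preds + model.predict(X_test) / n_models
    return preds
</code></pre>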
Open Images 2019 - Instance Segmentation
11th place solution [0.4796 private LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank the competition organizers and all the competitors! </p> <p>Here's my brief solution writeup:</p> <h2>1. Dataset</h2> <ul> <li>No external dataset. I only use FAIR's ImageNet pretrained weights for initialization, as I have described in the Official External Data Thread.</li> <li>Class balancing. For each class, images are sampled so that the probability to have at least one instance of the class is equal (1/300) across 300 classes. One instance is randomly picked from an image to train the segmentation network described below.</li> </ul> <h2>2. Pipeline and Models</h2> <p>A two-stage pipeline with detection and single-instance segmentation networks is employed.</p> <ul> <li>Detection Model. The detection baseline model is a Feature Pyramid Network with a ResNeXt152 backbone with modulated deformable convolution layers (see <a href="https://www.kaggle.com/c/open-images-2019-object-detection/discussion/110953">my post at the detection track</a>).</li> <li>Segmentation Model. The segmentation model is ResNet152-C4 with two upsampling layers and two U-net-like skip connections.</li> </ul> <p>Each instance is cropped from the image based on: 1) at training time, the ground truth bounding boxes; 2) at inference time, the bounding boxes detected by the (ensembled) detection model, including the parent classes. The cropped images are resized to (320, 320). The output mask resolution is (160, 160).</p> <p>The models and training pipeline are developed based on the maskrcnn-benchmark repo.</p> <h2>3. Training</h2> <p>The training conditions are optimized for a single GPU (V100).</p> <ul> <li><p>Detection Model. The detection model has been trained using 500-class box labels and eight models are ensembled (0.597 private LB at the object detection track).</p></li> <li><p>Segmentation Model. The segmentation model has been trained for 1.8 million iterations and cosine decay is scheduled for the last 0.2 million iterations. Batch size is 8 and batchnorm layers are used.</p></li> </ul> <h2>4. Ensembling</h2> <ul> <li>Two-model ensembling. Two segmentation models with different image sampling seeds are ensembled with and without horizontal flip. The output heatmaps are averaged.</li> <li>Results. Model ensembling improved the private LB score from 0.4740 (single segmentation model) to 0.4796.</li> </ul> <p>Thank you, yes it would be a very important validation but I have not done that yet. I will report it when I do additional experiments!</p> <p>It took about 6 days per segmentation model and around one month per detection model. I sometimes use multiple GPU instances to train models in parallel. </p> <p>Thank you! It's not ensembling. I use detection results (bounding boxes) only to determine the regions to crop from an input image. The segmentation model infers mask heatmaps using the (resized) cropped images as input. </p>
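<p>To illustrate the two-stage pipeline above, here is a minimal sketch (my own, not the author's code) of cropping a detected instance for the single-instance segmentation network and pasting the predicted heatmap back:</p> <pre><code>import numpy as np
import cv2

def crop_instance(image, box, crop_size=(320, 320)):
    """Crop one detected instance; box = (x1, y1, x2, y2) in pixels."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    crop = image[y1:y2, x1:x2]
    return cv2.resize(crop, crop_size, interpolation=cv2.INTER_LINEAR)

def paste_mask(mask_160, box, image_shape):
    """Resize the (160, 160) heatmap back to the box size and paste it into a full-size mask."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    full = np.zeros(image_shape[:2], dtype=np.float32)
    full[y1:y2, x1:x2] = cv2.resize(mask_160, (x2 - x1, y2 - y1))
    return full
</code></pre>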
Open Images 2019 - Instance Segmentation
20th place solution - maskrcnn-benchmark baseline
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Open Images and Kaggle team for this great competition(s) and congrats to all (tentative) prize and medal winners!</p> <p>My result is not outstanding but the solution might be valuable to be shared because I used the famous maskrcnn-benchmark library 'as it is' and also used its outputs as it is without TTA or any post processing. Training two models requires only 14 hours (x2) using V100 8GPUs.</p> <p>All codes are available at: <a href="https://github.com/yu4u/kaggle-open-images-2019-instance-segmentation" target="_blank">https://github.com/yu4u/kaggle-open-images-2019-instance-segmentation</a></p> <p>There are mainly two issues to be solved in this competition and the Object Detection track: (1) class imbalance and (2) class hierarchy. I tackled these issues only on a dataset creation side. The former is easy to handle: use fixed number of training images for each class. In this post, I mainly describe how to handle class hierarchy.</p> <p>Firstly, I divided all classes into two groups: layer0 and layer1. From challenge-2019-label300-segmentable-hierarchy.json we can see that:</p> <ol> <li>Maximum depth of hierarchy is 2 (starting from 0)</li> <li>The number of depth 2 classes is only 5.</li> </ol> <pre><code>Carnivore └── Bear ├── Brown bear ├── Polar bear └── Teddy bear &lt;--- Are you serious? Reptile └── Turtle ├── Tortoise └── Sea turtle </code></pre> <p>Thus, I decided to group depth 0 classes as layer0 group and depth 1 and 2 classes together as layer1 group. The idea is to make different model for each of two groups.<br> In training each model, a dedicated dataset is used, which includes only the target group class instances. By doing so, there is no need to care about class hierarchy.<br> However, practically, it is impossible to make dataset from only training images that includes only target classes and does not include non-target classes. Therefore, I removed non-target class instances from training images.</p> <p>For layer0 group dataset:</p> <ol> <li>Remove non-target class annotations that occlude target class object 25% or more</li> <li>Convert non-target class to its parent class (Thus it becomes target class. Some classes need to be processed twice. 'Teddy bear' is converted only to 'Toy', not 'Carnivore')</li> </ol> <p>For layer1 group dataset:</p> <ol> <li>Remove non-target class annotations that occlude target class object 25% or more</li> <li>Remove non-target class annotations that do not have any child class (no impact to layer1 group classes because there is no relationship between them)</li> <li>Remove non-target class annotations that have some child classes, and fill their bbox with gray in the training image (removing only annotations is not good idea because these cause 'false false positive' signal (loss) to the model)</li> </ol> <p>That's all, and let's train!</p>
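<p>For readers who want to reproduce the class-grouping step, the hierarchy file can be turned into a child-to-parent map roughly as below. This is my own sketch assuming the usual <code>LabelName</code>/<code>Subcategory</code> fields of the hierarchy json; special cases such as 'Teddy bear' (which should go to 'Toy', not 'Carnivore') still need manual handling:</p> <pre><code>import json

def build_parent_map(hierarchy):
    """Map each class to its direct parent in the nested hierarchy json."""
    parent = {}
    def walk(node, parent_name=None):
        name = node["LabelName"]
        if parent_name is not None:
            parent[name] = parent_name
        for child in node.get("Subcategory", []):
            walk(child, name)
    walk(hierarchy)
    return parent

def to_layer0(label, parent_map, layer0_classes):
    # climb the hierarchy until a layer0 (depth-0) class is reached
    while label not in layer0_classes and label in parent_map:
        label = parent_map[label]
    return label

# hierarchy = json.load(open("challenge-2019-label300-segmentable-hierarchy.json"))
# parent_map = build_parent_map(hierarchy)
</code></pre>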
Open Images 2019 - Instance Segmentation
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners. Here is our solution:</p> <h2>Codebase and hardware</h2> <p>We used <a href="https://github.com/open-mmlab/mmdetection">mmdetection</a> with 4 Tesla T4 during training and 8 Tesla T4 during TTA inference.</p> <h2>Model setup</h2> <p>There are 300 classes, in 3 hierarchical levels:</p> <ul> <li>275 leaf classes</li> <li>23 parent classes</li> <li>2 grandparent classes (<code>Carnivore</code> and <code>Reptile</code>)</li> </ul> <p>For the 275 leaf classes, we train models with these 275 classes as labels and use the inference results directly.</p> <p>For the 25 parent and grandparent classes, there are two methods to get the predictions:</p> <ul> <li>(i) Use the prediction results from the leaf model and “expand” them to the parent and grandparent. For example, if the leaf model predicted a <code>Tortoise</code> mask, we add a <code>Turtle</code> prediction (its parent) and a <code>Reptile</code> prediction (its grandparent) with the same mask and score.</li> <li>(ii) Train models with only the 23 parent classes as labels, and expand to the 2 grandparent classes. Note that for these models, we need to create the hierarchical expansion of the instance segmentation before training (as explained <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/challenge_evaluation.md#instance-segmentation-track">here</a>).</li> </ul> <h2>Validation set</h2> <p>We sampled a small subset from the official validation set for faster inference: 2844 images for the leaf model and 3841 images for the parent model. We used the validation scores to decide when to adjust the learning rate and to choose the best checkpoints.</p> <p>The validation scores are significantly higher than the LB. For the 275 leaf classes, we have a validation score over 0.8. Nevertheless, the delta between validation and LB is stable.</p> <h2>Training set rebalancing</h2> <p>The Open Images dataset is extremely imbalanced between the 300 classes. We rebalanced the training data for the leaf model as follows: sort the 275 classes by descending number of images, so class #0 is the largest class.</p>

| Group | Class | original num of imgs per class | rebalancing |
|-----|------------|--------------------------------|-------------------------------|
| 1 | #241 to #274 | 150-13 | oversample x10 |
| 2 | #64 to #240 | 1500-150 | oversample to 1500 imgs/class |
| 3 | #24 to #63 | 6k-1500 | no rebalancing |
| 4 | #0 to #23 | 89k-6k | downsample to 6k* |

<p>Each epoch has about 450k images after rebalancing. </p> <p>*For the downsample, we use different random seeds for each epoch to feed the model as many images as possible. Also, this 6k number is a rough target.
In actual sampling, the top few classes ended up having more images because many images have multiple labels, and the top classes already have more than 6k images after Group 1-3 sampling is done.</p> <p>Similarly, for the parent model, sort the 23 classes:</p>

| Group | Class | rebalancing |
|-------|----------|------------------------------|
| 1 | #10 to #22 | upsample to 10k imgs/class |
| 2 | #5 to #9 | no rebalancing |
| 3 | #0 to #4 | downsample to 30k imgs/class |

<p>Each epoch has about 330k images after rebalancing.</p> <h2>Single models</h2> <p>We trained 3 cascade mrcnn leaf models:</p> <ul> <li>L1: backbone x101</li> <li>L2: backbone r101 + deformable module</li> <li>L3: backbone x101 + deformable module</li> </ul> <p>and 1 cascade mrcnn parent model:</p> <ul> <li>P1: backbone x101</li> </ul> <p>hyper-parameters:</p> <pre><code>imgs_per_gpu=1
num_gpu=4
img_scale=[(1333, 640), (1333, 960)]
multiscale_mode='range'
</code></pre> <p>lr schedule:</p> <ul> <li>model L1: 0.005 for 760k iterations; 0.005/3 for 60k iterations; 0.005/15 for 152k iterations</li> <li>model L2: 0.005 for 675k iterations; 0.005/3 for 64k iterations; 0.005/15 for 48k iterations</li> <li>model L3: 0.005 for 226k iterations; 0.005/5 for 132k iterations; 0.005/50 for 26k iterations</li> <li>model P1: 0.005 for 148k iterations; 0.005/5 for 62k iterations; 0.005/50 for 8k iterations</li> </ul> <p>Training took 0.5-0.65 hour per 1000 iterations with 4 T4. So in total, training the 4 models took about 22, 18, 10 and 5 days respectively.</p> <p>Public/private scores of each (<code>max_per_img=120, thr=0</code>, without TTA):</p>

| model | public score | private score | classes |
|-------|--------------|---------------|----------------|
| L1 | 0.4708 | 0.4354 | on 275 classes |
| L2 | 0.4705 | 0.4275 | on 275 classes |
| L3 | 0.4712 | 0.4324 | on 275 classes |
| P1 | 0.0349 | 0.0344 | on 25 classes |

<h2>TTA</h2> <p>We used the TTA implementation from <a href="https://github.com/amirassov/kaggle-imaterialist">Miras Amir's winning solution</a> of iMaterialist (Fashion) 2019.</p> <p>For the leaf model, it’s the ensemble of 3 single models, 2 scales (1333,800) and (1600,960), and flip. At the RPN and BB stages, it’s the NMS ensemble of 12 single models. In the end, it’s the mean of all 12 masks for each instance. </p> <p>TTA inference of the leaf model is very slow, which took about 525 T4-hours. We split the 99999 images into 25 chunks and ran it in parallel. </p> <p>For the parent model, it’s the ensemble of 2 scales and flip, but just one single model.</p> <p>Lastly, for the 25 parent and grandparent classes, we implemented an NMS ensemble at mask level (i.e. calculating the mask iou instead of the bbox iou when determining which ones to suppress) to ensemble (i) leaf class predictions expanded to 25 classes, and (ii) parent class predictions expanded to 25 classes.</p> <p>Public/private scores of each (<code>max_per_img=120, thr=0</code>):</p>

| model | public score | private score | classes |
|--------------------------|--------------|---------------|----------------|
| leaf model ensemble | 0.5007 | 0.4548 | on 275 classes |
| leaf ensemble expanded | 0.0325 | 0.0330 | on 25 classes |
| ensemble of (i) and (ii) | 0.0370 | 0.0370 | on 25 classes |

<p>At this point, the total score is public 0.5007+0.0370 = 0.5378, private 0.4548+0.0370 = 0.4918. </p> <p>There’s one last thing we did to boost the scores to 0.5383/0.4922: set <code>max_per_img=200</code> for the leaf ensemble.
But we only had time (also restricted by sub file size) to do this for 12/25 of the test images.</p> <h2>Code</h2> <p>Training, inference, pre- and post-process code are available at <a href="https://github.com/boliu61/open-images-2019-instance-segmentation">https://github.com/boliu61/open-images-2019-instance-segmentation</a> Trained model weights are also linked in the readme there</p> <p>Thanks. Yes, unsample and oversample mean the same thing in my writeup above. </p> <p>"upsample to 10k imgs/class" was done like this: If a class has 3000 images, then repeat each image 3 times, then randomly choose 1000 (so that they appear 4 times); if a class has 7000 images, then randomly choose 3000 (so that these 3000 appear twice, the other 4000 appear once); ...</p> <p>This was done using different seeds for each epoch, so that a different 3000 in above example was chosen in each epoch.</p> <p>Implementation at: <a href="https://github.com/boliu61/open-images-2019-instance-segmentation/blob/master/util/make_rebalanced_train_ann_parent.py#L116">https://github.com/boliu61/open-images-2019-instance-segmentation/blob/master/util/make_rebalanced_train_ann_parent.py#L116</a></p> <p><strong>Update</strong>: We have pushed code (training, inference, pre-process, post-process) to <a href="https://github.com/boliu61/open-images-2019-instance-segmentation">https://github.com/boliu61/open-images-2019-instance-segmentation</a> Trained weights are also linked in the repo readme</p>
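<p>The "upsample to N imgs/class" logic explained in the reply above can be sketched as follows (my own illustration; variable names are hypothetical, and the same function covers the downsample case when the target is smaller than the class size):</p> <pre><code>import random

def sample_class_images(image_ids, target, seed):
    """Repeat the whole list as many times as fits into `target`,
    then top up with a random subset, so each epoch sees `target` images."""
    rng = random.Random(seed)
    repeats, remainder = divmod(target, len(image_ids))
    sampled = list(image_ids) * repeats
    sampled += rng.sample(list(image_ids), remainder)
    return sampled

# e.g. a class with 3000 images upsampled to 10k: every image appears 3 times
# and a random 1000 appear a 4th time; a different seed is used every epoch:
# epoch_images = sample_class_images(class_to_images["Bench"], target=10_000, seed=epoch)
</code></pre>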
Open Images 2019 - Instance Segmentation
Keras Mask-RCNN for Instance Segmentation 2019
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I made a release of my Mask-RCNN code for Instance Segmentation task:</p> <p><a href="https://github.com/ZFTurbo/Keras-Mask-RCNN-for-Open-Images-2019-Instance-Segmentation">https://github.com/ZFTurbo/Keras-Mask-RCNN-for-Open-Images-2019-Instance-Segmentation</a></p> <p>It wasn’t used in our best submit because I finished training too late. But it gives good result on LB, so I decided to release it. </p> <p><img src="https://raw.githubusercontent.com/ZFTurbo/Keras-Mask-RCNN-for-Open-Images-2019-Instance-Segmentation/master/img/mask_rcnn_prediction_example.jpg" alt=""></p> <p>Repository contains:</p> <ul> <li>Pre-trained Mask R-CNN models (ResNet50, ResNet101 and ResNet152 backbones)</li> <li>Example code to get predictions with these models for any set of images</li> <li>Code to train (continue training) model based on Keras Mask R-CNN and OID dataset</li> </ul> <p>I was able to get <strong>0.4670</strong> Public LB and <strong>0.4311</strong> Private LB using ensemble of these 3 models.</p> <p>There are 3 graphics with training process for each model. I changed parameters of training over some epochs (Freeze Backbone, ReduceLR, AdamAccumulate, different image sampling strategy). Fast growth of score are in places where I switched to uniform classes sampling. Also to speed up the ResNet101 training, I copied weights for decoder from trained Mask-RCNN with ResNet152 backbone. So ResNet101 got good score for much lower number of epochs.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/661345/14368/ResNet50.png" alt=""></p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/661345/14369/ResNet101.png" alt=""></p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/661345/14370/ResNet152.png" alt=""></p> <p>What do you mean by "training size"?</p> <p>All data from Open Images Dataset which were available.</p> <p>It must work with [keras 2.3 + tensorflow 1.15]</p>
Open Images 2019 - Instance Segmentation
[ODS.ai] - ML trainings, video of presentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In the video, I talk about the features of my solution and the best solutions from the top of LB. <a href="https://youtu.be/NGnOY-AzDBg">link to video</a> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F407384%2Fb99a832d645a78ce0ba00d0e7f64bcaa%2FScreenshot%20from%202019-12-14%2019-41-06.png?generation=1576341697633505&amp;alt=media" alt=""></p>
Open Images 2019 - Object Detection
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>TL;DR</strong>: - I'm using mmdetection framework and it's really convenient; - Best single model (cascade rcnn with imagenet pretrained resnext101) + TTA (horizontal flip, multi-scale testing (600, 900), (800, 1200), (1000, 1500), (1200, 1800), (1400, 2100)) achieves 0.499 public; - Split datasets into 6 subsets by frequency and then finetune on them for 1-2 epochs. --&gt; appr. 0.05+ increase; - Parent class expansion gives appr. 0.01+ increase; - Weighted ensemble (from <a href="https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/64633#latest-590283">ZFTurbo's solution last year</a>) of all my high score submissions --&gt; 0.608 final public score.</p> <p><strong>1. Single model</strong> At the very beginning, I was going to attend the "Visual Relationship Detection" track. But then I realized that I didn't have a good object detection model for that one. So I started with faster rcnn+resnext101, it takes me about 20 days to train 24 epochs and results in 0.446 on public lb. Similarly, I trained cascade rcnn+resnext101, cascade rcnn+senet154 for 12 and 8 epochs respectively. I just leave these models training for several weeks, do my daily work and give up "visual relation detection". The best single model is cascade rcnn+resnext101, which was accidently trained for 19 epochs (6 epochs longer than planned). So maybe I should train longer for each model :). <strong>Conclusion</strong>: my single models are weak. They should be trained longer.</p> <p><strong>2. Finetune</strong> Since the classes are very unbalanced, I split the dataset classes into 6 subsets simply according to frequency and finetune on them using faster rcnn+resnext101 model: - Classes 0-50, appr. 1411368images, 2 epochs, lr 0.001 - Classes 51-100, appr. 308352 images, 2 epochs, lr 0.001 - Classes 101-200: appr. 208096 images, 2 epochs, lr 0.001 - Classes 201-300, appr. 93140 images, 2epochs, lr 0.001 - Classes 301-400, appr. 45840 images, 1epoch, lr 0.001 - Classes 401-500, appr. 19316 images, 1epoch, lr 0.001 Last two weeks before final deadline, I found one huge bug in my code. After solving this bug, ensembling the predictions of finetuned models gave appr. 0.05+ increase on public LB. <strong>Conclusion</strong>: solving class imbalance problem is the key to top silver or gold medal.</p> <p><strong>3. TTA and Final Ensemble</strong> Some teams merged and I may have the chance for solo gold. So I did the followings: - TTA for each model: horizontal flip, multi-scale testing with (600, 900), (800, 1200), (1000, 1500), (1200, 1800), (1400, 2100) image size; - Expand parent class for each prediction after inference. - Weighted ensemble (from <a href="https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/64633#latest-590283">ZFTurbo's solution last year</a>) of all models.</p> <p><strong>Other tricks:</strong> - Increase the number of limited boxes for each image, even though they are of low confidence. I choose 600 as upper limit in my final submission. - I wasn't able to finish 12 epochs for my cascade rcnn+senet154 model. 
Nevertheless, ensembling it gives a slight improvement.</p> <p><strong>What didn't work for me:</strong> Soft-NMS: I tried to use it for the ensemble, but it didn't go well. </p> <p><strong>Planned but not implemented:</strong></p> <ul> <li>Use mask annotations from the segmentation track;</li> <li>Multi-scale training and mixup augmentation.</li> </ul> <p><strong>Final words:</strong> Let's play fair. Peace&amp;love. 👍 </p>
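<p>As an appendix, the parent class expansion mentioned above can be sketched like this (my own illustration; <code>parent_map</code> is assumed to map each class to its direct parent):</p> <pre><code>def expand_parents(detections, parent_map):
    """Add a duplicate detection for every ancestor class with the same box and score.
    detections: list of dicts with keys 'label', 'score', 'box'."""
    expanded = list(detections)
    for det in detections:
        label = det["label"]
        while label in parent_map:          # climb the class hierarchy
            label = parent_map[label]
            expanded.append({**det, "label": label})
    return expanded

# example: parent_map = {"Tortoise": "Turtle", "Turtle": "Reptile"}
</code></pre>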
Open Images 2019 - Object Detection
1st place solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Please find our solutions <a href="https://arxiv.org/abs/2003.07557">here</a>: <a href="https://arxiv.org/abs/2003.07557">https://arxiv.org/abs/2003.07557</a> The full description of our core model, <a href="https://arxiv.org/abs/2003.07540">TSD</a> (improves the mAPs for all backbones by 3~4%) is also released today at <a href="https://arxiv.org/abs/2003.07540">https://arxiv.org/abs/2003.07540</a> Our code for the TSD has been released <a href="https://github.com/Sense-X/TSD">here</a></p>
Open Images 2019 - Object Detection
32nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>solution in pdf</p>
Open Images 2019 - Object Detection
3rd place solution [0.67170|0.64214 Public|Private LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Learning an Efficient Network for Large-Scale Hierarchical Object Detection with Data Imbalance: 3rd Place Solution to Open Images Challenge 2019 <a href="https://arxiv.org/abs/1910.12044">https://arxiv.org/abs/1910.12044</a></p>
Open Images 2019 - Object Detection
6th place solution [0.6023 private LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's based on <a href="https://github.com/facebookresearch/maskrcnn-benchmark">maskrcnn-benchmark</a>.</p> <p>First of all, I would like to thank the competition organizers and all the competitors! This was my first Kaggle competition and I really had a great time here 😀 </p> <p>Here's my brief solution writeup:</p> <h2>1. Dataset</h2> <ul> <li>No external dataset. I only use FAIR's ImageNet pretrained weights for initialization, as I have described in the Official External Data Thread.</li> <li>Class balancing. For each class, images are sampled so that probability to have at least one instance of the class is equal across 500 classes. For example, a model encounters very rare 'pressure cooker' images with probability of 1/500. For non-rare classes, the number of the images is limited.</li> </ul> <h2>2. Models</h2> <p>The baseline model is Feature Pyramid Network with ResNeXt152 backbone. Modulated deformable convolution layers are introduced in the backbone network. The model and training pipeline are developed based on the maskrcnn-benchmark repo.</p> <h2>3. Training</h2> <ul> <li>Single GPU training. The training conditions are optimized for single GPU (V100) training. The baseline model has been trained for 3 million iterations and cosine decay is scheduled for the last 1.2 million iterations. Batch size is 1 (!) and loss is accumulated for 4 batches.</li> <li>Parent class expansion. The models are trained with the ground truth boxes without parent class expansion. Parent boxes are added after inference, which achieves empirically better AP than multi-class training.</li> <li>Mini-validation. A subset of validation dataset consisting of 5,700 images is used. Validation is performed every 0.2 million iterations using an instance with K80 GPU.</li> </ul> <h2>4. Ensembling</h2> <ul> <li>Ensembling eight models. Eight models with different image sampling seeds and different model conditions (ResNeXt 152 / 101, with and without DCN) are chosen and ensembled (after NMS).</li> <li>Final NMS. NMS is performed again on the ensembled bounding boxes class by class. IoU threshold of NMS has been chosen carefully so that the resulting AP is maximized. Scores of box pairs with higher overlap than the threshold are added together.</li> <li>Results. Model Ensembling improved private LB score from 0.56369 (single model) to 0.60231.</li> </ul> <p>I simply concatenate the boxes which the models have predicted, and then perform NMS on the gathered boxes.</p> <p>Thank you! My code (repo) is not ready right now but I will try to make it available soon.</p> <p>Thank you! It takes 18 to 36 days to train one model. The models are trained in parallel using multiple GPU instances. For TTA only horizontal flip is used. The validation strategy is added to the 3. Training section. </p> <p>Thank you! Yes it took time. It takes 18 to 36 days to train one model. The models are trained in parallel using multiple GPU instances. As for ensembling, it takes almost one day per model for inference on test data and one more day for the final NMS.</p> <p>Thank you! 
The image size setting during training is "maximum size of the side of the image (H or W) = 800", which is the standard configuration of the R-CNN family.</p>
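<p>For readers, the ensembling described above (concatenate the boxes of all models, then run NMS class by class, adding the scores of box pairs that overlap more than the threshold) can be sketched as follows. This is my own simplified illustration, not the author's implementation:</p> <pre><code>import numpy as np

def box_area(b):
    return (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])

def pairwise_iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    return inter / (box_area(box) + box_area(boxes) - inter + 1e-9)

def nms_merge(boxes, scores, iou_thr=0.5):
    """Greedy per-class NMS; each suppressed box adds its score to the kept box."""
    order = np.argsort(scores)[::-1]
    keep_boxes, keep_scores = [], []
    while len(order):
        i, rest = order[0], order[1:]
        suppress = pairwise_iou(boxes[i], boxes[rest]) &gt; iou_thr
        keep_boxes.append(boxes[i])
        keep_scores.append(scores[i] + scores[rest[suppress]].sum())
        order = rest[~suppress]
    return np.array(keep_boxes), np.array(keep_scores)
</code></pre>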
Open Images 2019 - Object Detection
Code to ensemble boxes for object detection
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I made a release of Python module for ensembling boxes for Object Detection models. It includes very promicing Weighted Boxes Fusion (WBF) method, which we used instead of NMS in 2019 and 2018 competitions.</p> <p><a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion">https://github.com/ZFTurbo/Weighted-Boxes-Fusion</a></p> <p>Repository contains the following methods: - Non-maximum Suppression (NMS) - Soft-NMS - Non-maximum weighted (NMW) - Weighted boxes fusion (WBF)</p> <p>Installation available using pip: <code>pip install ensemble-boxes</code></p> <p>It will be interesting to hear if WBF works better for other teams too. )</p> <p>Some details in here: <a href="https://arxiv.org/abs/1910.13302">https://arxiv.org/abs/1910.13302</a></p> <p>I plan to add more details later.</p>
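<p>For reference, typical usage of the package looks roughly like this (box coordinates must be normalized to [0, 1]; the weights and thresholds below are illustrative):</p> <pre><code>from ensemble_boxes import weighted_boxes_fusion

# one list entry per model; each box is [x1, y1, x2, y2] normalized to [0, 1]
boxes_list  = [[[0.10, 0.10, 0.50, 0.50]], [[0.12, 0.11, 0.51, 0.49]]]
scores_list = [[0.9], [0.8]]
labels_list = [[1], [1]]

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[2, 1],      # per-model weights
    iou_thr=0.55,        # boxes closer than this are fused together
    skip_box_thr=0.0,    # drop boxes below this score
)
</code></pre>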
Open Images 2019 - Object Detection
TPU solution: RetinaNet + NAS-FPN + ResNet/EfficientNet, code shared [24th place]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Cheers! Google's ready TPU models are quite good, but it's not easy to port something new. For example, I needed something like Cascade R-CNN / HTC for the segmentation task.</p> <p>TL;DR: TPUs actually work. They are fast (10-100 times of 1080Ti). There are limitations that make porting difficult. My fixed code and models are at <a href="https://github.com/artyompal/tpu_models">https://github.com/artyompal/tpu_models</a>.</p> <hr> <h1>TPU</h1> <p>I started this competition quite late. I had 1 month, which is not enough to train a lot of models of GPU. I had some TPU credits, so I decided to give it a try.</p> <p>I forked TPU models repo and trained RetinaNet + FPN + ResNet50. I trained the model like in this tutorial: <a href="https://cloud.google.com/tpu/docs/tutorials/retinanet">https://cloud.google.com/tpu/docs/tutorials/retinanet</a> and it actually worked. To achieve this, I had to convert my data into TFRecords and also convert my validation data to a COCO JSON file. I realized that our metric (AP@50) is just an ordinary COCO/PASCAL VOC AP@50 metric, as long as there is no hierarchy. So training just worked out of the box.</p> <p>What's TPU? It's a matrix chip. It's a stand-alone device in Google's local network which connects to your cloud instance as TensorFlow master. It has a lot of HBM memory (I used batch 64 with images up to 1024). It has no HDD and it can't connect to your SSD, so it must use Google Cloud Storage for input (images and labels) and output (weights and TensorBoard logs). I had v2x8 and v3x8 TPUs; 3rd generation is ~50% faster.</p> <p>RetinaNet+ResNet50, batch size 64, images 640x640 processes 250 samples per second on TPUv2. On TPUv3, it processes 350 samples per second: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1859557%2F0373ce32c12fbd7aa0b29ba5ee9b288b%2Fperformance.png?generation=1570020130759200&amp;alt=media" alt=""></p> <p>One issue about TPUs is, they currently require a completely static TF graph. So you have to resize every image to a fixed resolution. It's a limitation of the current implementation (TensorFlow 1.14). Another issue is, image channels are swapped from NHWC to HWCN. Another thing, it doesn't support some TF operations. And you would need special pretrained weight for TPU, normal TF models will not work.</p> <p>Oh yeah, the main issue is, one has to use TPUEstimator to train models. tf.Estimators are weird pieces of software which by default erase all of your checkpoints but last five, completely ignoring your validation results. Every Deep Learning framework supports validation score-based ReduceLROnPlateau, but not tf.Estimators. If only we had PyTorch/XLA working... (I didn't try it)</p> <h1>Training</h1> <p>Naive training on 443 leaf classes resulted in 0.283 on the public leaderboard. Following last year solutions, I split dataset classes into 6 parts by frequency: 1. Classes 1-100 2. Classes 101-200 3. Classes 201-300 4. Classes 301-400 5. Classes 401-432 6. 
Human parts (11 classes), as they have a special dataset.</p> <p>I trained 6 models by parts and got a baseline which resulted in 0.48 on the public leaderboard. We teamed up with Dmitry, who had collected 5 pretrained models from different sources and got a script with Soft-NMS blending. This gave us 0.54 on the public LB. I also trained this model by 5 folds, which resulted in 0.51 LB by itself and 0.55 in the ensemble. I did it because I wanted to do pseudo-labeling for some data cleaning, but ran out of time to finish this.</p> <h1>Models</h1> <p>TPU models repo has out-of-the-box support of RetinaNet with NAS-FPN (<a href="https://arxiv.org/pdf/1904.07392.pdf">https://arxiv.org/pdf/1904.07392.pdf</a>) with AutoAugment augmentations (<a href="https://arxiv.org/pdf/1805.09501.pdf">https://arxiv.org/pdf/1805.09501.pdf</a>) and DropBlock regularization for both FPN part and backbone (<a href="https://arxiv.org/pdf/1810.12890.pdf">https://arxiv.org/pdf/1810.12890.pdf</a>). By default, RetinaNet only supports ResNet backbone, but I added support of EfficientNet. I also tried adding SE-ResNeXt backbone, but it worked too slow, maybe because I didn't transpose channels to HWCN. Also, I recall that I had a problem with TPUs not supporting REFLECT padding: <a href="https://cloud.google.com/tpu/docs/tensorflow-ops">https://cloud.google.com/tpu/docs/tensorflow-ops</a>.</p> <p>Of the ResNet family, I only had pretrained weights for ResNet50. Papers above claim that with some combinations of AutoAugment and DropBlock, RetinaNet trains even better without a pretrained backbone! This wasn't the case for me, though. Anyway, I had a lot of horsepowers, so I downloaded ImageNet and made my own pretrained models for ResNet101, ResNet152, and ResNet200.</p> <p>Long story short: NAS-FPN is great, DropBlock really helps, and AutoAugment augmentations do improve model performance.</p> <h1>Data labeling</h1> <p>Data labels are really noisy. I manually selected 5 labels per class for the validation set. I made a blacklist of labels which are clearly wrong: i.e. the label is "cello", but it's actually a violin.</p> <h1>Last day inference and final ensemble construction</h1> <p>We had seriously run out of time. I had to write and fix the distributed inference script in the last two days of the competition. This could not end well, and it didn't. I trained models with ResNet152/200 backbones, but we couldn't make a proper ensemble.</p> <p>Oh yeah, the last night before the deadline I got an error with inference with EfficientNet models: <a href="https://github.com/tensorflow/tensorflow/issues/29574">https://github.com/tensorflow/tensorflow/issues/29574</a>. So we could not use any of my EfficientNet models in the final ensemble. I probably could work around this issue by hacking TPUEstimator, but... time is money, you know :)</p> <h1>Conclusions</h1> <p>Well, it was fun! TPUs are fun. As long as someone else is paying. We spent about $1600 worth of GCP credits, and I paid $75 out of my pocket on top of that.</p> <p>I'm sharing my forked TPU repo here: <a href="https://github.com/artyompal/tpu_models">https://github.com/artyompal/tpu_models</a>. I'm also going to upload my pretrained weights for TPU models. I'm editing the repo right now, please let me know if something is missing in the README.md.</p> <hr> <p>Thanks to <a href="/dsvolkov">@dsvolkov</a> for being my teammate and for all that help with ensembling, debugging and diagnostics!</p> <p>Just a job offer would be good enough 😁</p>
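<p>For readers curious about the "convert my data into TFRecords" step, a minimal detection example could be serialized roughly as below. This is my own sketch using the common TF object-detection feature keys; the exact keys expected by the TPU RetinaNet input pipeline may differ, and the file paths are placeholders:</p> <pre><code>import tensorflow as tf

def _bytes(v):  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))
def _floats(v): return tf.train.Feature(float_list=tf.train.FloatList(value=v))
def _ints(v):   return tf.train.Feature(int64_list=tf.train.Int64List(value=v))

def make_example(encoded_jpeg, xmins, xmaxs, ymins, ymaxs, class_ids):
    """One image with its (normalized) boxes and class ids."""
    return tf.train.Example(features=tf.train.Features(feature={
        "image/encoded": _bytes(encoded_jpeg),
        "image/object/bbox/xmin": _floats(xmins),
        "image/object/bbox/xmax": _floats(xmaxs),
        "image/object/bbox/ymin": _floats(ymins),
        "image/object/bbox/ymax": _floats(ymaxs),
        "image/object/class/label": _ints(class_ids),
    }))

with tf.io.TFRecordWriter("train-00000-of-00100.tfrecord") as writer:
    example = make_example(open("some_image.jpg", "rb").read(),
                           [0.1], [0.5], [0.2], [0.6], [17])
    writer.write(example.SerializeToString())
</code></pre>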
Open Images 2019 - Visual Relationship
13th place solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Visual Relationship <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all. Congratulations to the winners, and thank you, Google AI, for hosting this interesting competition again this year.</p> <p>Like most of the previous years' solutions, I split the problem into two parts, <code>non-is</code> and <code>is</code> relationships, as they have quite different characteristics.</p> <h3>1. non-is relationship</h3> <p>For this task, I focused on the relationship between two objects such as <code>Man on Chair</code>, <code>Cat under Table</code>. My approach has two steps: detect objects and then find relationships for every possible triplet.</p> <p><strong>Object Detection</strong></p> <p>There are 57 objects such as <code>Man</code>, <code>Oven</code> that can be part of triplets. I used cascade-rcnn from <a href="https://github.com/open-mmlab/mmdetection/tree/master/mmdet">mmdet</a> for these 57 classes with a bit of modification, such as adding test-time augmentations.</p> <p><a href="https://github.com/appian42/kaggle/blob/master/openimages/cascade-rcnn.conf">This</a> is the .conf file I fed to mmdet. Notable changes from the default parameters are </p> <ul> <li>2 more anchor boxes in RPN to capture high aspect ratio objects.</li> <li>CosineAnnealing instead of StepLR.</li> <li>score threshold 0.0001 for RCNN instead of 0.05.</li> <li>NMS threshold 0.4 instead of 0.5.</li> <li>max_per_image 400 instead of 100.</li> </ul> <p>I under-sampled frequent classes such as Man, Woman, Chair and used only 150,000 images to shorten the training time at the cost of accuracy.</p> <p><strong>triplet relationships</strong></p> <p>There are 287 triplet relationships. I took a similar approach to <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64630">anokas's solution</a> last year as it's extremely simple and easy to implement.</p> <p>Some changes I made were</p> <ul> <li>Merged similar triplets into the same class based on some engineered features such as IOU, IOF so that rare triplets can be trained thanks to more frequent triplets.</li> <li>After merging, there are 90 classes out of 287 triplet relationships, and 4-fold LightGBM models were separately trained for each of these classes.</li> </ul> <p>Averaged AUC is 0.9641. After separating merged classes into the original classes, averaged AUC is 0.9623.</p> <p><strong>Submission</strong></p> <ul> <li>0.31692 (public)</li> <li>0.24060 (private)</li> </ul> <h3>2. is relationship</h3> <p><strong>Object detection</strong></p> <p>I used cascade-rcnn to directly detect attributed objects such as <code>Table Wooden</code>, <code>Bench Plastic</code>. There are 42 classes for this task. This approach is similar to <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64642">toshif's solution</a> last year, but I failed to make the model as good as he did. </p> <p><strong>Submission</strong></p> <ul> <li>0.07346 (public)</li> <li>0.07130 (private)</li> </ul> <h3>3. submissions combined</h3> <p>I just put them together.</p> <ul> <li>0.38469 (public)</li> <li>0.30781 (private)</li> </ul> <h3>4.
Possible improvements</h3> <ul> <li>Use full image size.</li> <li>Use all dataset.</li> <li>Use external dataset (such as COCO, Objects365 if it improves any).</li> <li>Train CNN model for triplet relationships just like <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64651">tito's solution</a> last year. Some triplet relationships such as <code>Man holds Violin</code>, <code>Man plays Violin</code> are hard to differentiate because simple features such as IoU, IoF could not really capture the difference but CNN could.</li> </ul> <p>Thanks for reading!</p>
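<p>The "engineered features such as IOU, IOF" used to feed the LightGBM relationship models could look roughly like the sketch below (my own illustration; the feature names are arbitrary):</p> <pre><code>def box_area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def pair_features(box_a, box_b):
    """Simple geometric features for an (object, object) candidate pair."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = box_area(box_a) + box_area(box_b) - inter
    return {
        "iou": inter / (union + 1e-9),
        "iof_a": inter / (box_area(box_a) + 1e-9),   # fraction of A inside B
        "iof_b": inter / (box_area(box_b) + 1e-9),   # fraction of B inside A
        "dx": (box_b[0] + box_b[2]) / 2 - (box_a[0] + box_a[2]) / 2,  # centre offset
        "dy": (box_b[1] + box_b[3]) / 2 - (box_a[1] + box_a[3]) / 2,
    }
</code></pre>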
Open Images 2019 - Visual Relationship
brief summary of 2nd place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Visual Relationship <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners, and thanks to the competition organizers for this interesting competition again.</p> <p>I used almost the same architecture as <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64651">last year</a>, so I'll summarize only the differences.</p> <h1>Model 1: object detection</h1> <p>I made a cascade-rcnn model using <a href="https://github.com/open-mmlab/mmdetection">mmdetection</a>.</p> <p>mAP for the 57 classes improved a lot, by more than 0.1 compared to last year's yolo model.</p> <h1>Model 2: visual relationship</h1> <h2>2-1: relation 'is'</h2> <p>I made 3 models for this part, and then made an ensemble of them.</p> <h3>2-1-1: relation 'is' (2 stage model)</h3> <p>This is the model I used for relation 'is' last year.</p> <h3>2-1-2: relation 'is' (1 stage model)</h3> <p>I made a cascade-rcnn model which detects the 42 'is-relation' classes. This model is almost the same as the one <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64642">'toshif' explained last year</a>.</p> <h3>2-1-3: relation 'is' (1 stage model with material head)</h3> <p>I added a 'material' detection head to cascade-rcnn. This model predicts bounding box, class and material at the same time.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F548996%2Fbb42658ab15675df44a4ab36aff46630%2Fmaterial_head.png?generation=1570245068374646&amp;alt=media" alt=""></p> <p>Results:</p>

|model |public |private |
|---|---|---|
|2-1-1 |0.07523 |0.07264 |
|2-1-2 |0.08332 |0.08075 |
|2-1-3 |0.08191 |0.07948 |
|ensemble |0.08514 |0.08232 |

<p>I expected 2-1-3 to have a better score...</p> <h2>2-2: Triplet Relationships</h2> <p>The base model is almost the same as the one I shared <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64651#380288">here</a> for this part.</p> <p>I made expert models which are only in charge of small-sample classes, and made an ensemble of them with a weighted average of their probabilities.</p> <p>This is the resulting AP for the validation data:</p>

|class |ground truth BB |predicted BB |
|---|---|---|
|at |93% |31% |
|on |92% |32% |
|holds |89% |54% |
|plays |94% |58% |
|interacts with |82% |45% |
|inside of |72% |37% |
|wears |94% |55% |
|hits |55% |57% |
|under |50% |20% |
|mAP without hits/under |88% |45% |
|mAP |80% |43% |

<p>For ground truth BB pairs, this relationship prediction model has very high accuracy. mAP without hits/under, which have very few samples, is 88%!</p> <h1>Model 3: Final Score Prediction</h1> <p>I did not use LightGBM for this part. I just used a simple formula.</p> <p><code>Final Score = Object1Score x Object2Score x RelationshipScore</code></p> <p>This year, my LB score improved to 0.38818 from last year's score of 0.23709. Most of this improvement comes from the object detection improvement.</p> <p>It seems that good object detection is the most important part of this competition.</p> <p>BTW, I became GM as of this competition. I'd like to thank my previous teammates. I learned lots of things from them and I could not be GM without them.
Thank you, Carl, Little Boat, KazAnova, Ahmet, Kohei-san, Akiyama-san, owruby!</p> <p>thanks, it took about 2 weeks with 2 x 1080ti.</p>
Open Problems - Multimodal Single-Cell Integration
12th-Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, congratulations to the winners, especially <a href="https://www.kaggle.com/senkin" target="_blank">@senkin</a> and <a href="https://www.kaggle.com/tmp" target="_blank">@tmp</a> for leading throughout and <a href="https://www.kaggle.com/shujisuzuki65" target="_blank">@shujisuzuki65</a> for his big jump from public to private LB. I'm eager to hear from both teams about their techniques. Overall, it was an interesting competition that gave me a greater appreciation for the challenges that bioinformaticians face. </p> <p>I don't know if others feel the same, but it seemed like a very long competition to me. I ran out of gas about midway and didn't really work on it much the last 4 weeks or so, which means I didn't use the raw counts at all. My solution, therefore, is fairly simple. </p> <p><strong>CV setup:</strong> I assigned each batch (unique user/day) to a separate fold, so I had 9 folds for citeseq and 12 for multiome. This was expensive, but had the virtue that it wasn't optimized towards either new donors (public LB) or new days (private LB). LB scores tracked local CV scores very closely. Even small gains in local CV almost always led to similar gains on the LB. This CV scheme was probably the reason that I fared well on the private LB (46-&gt;12).</p> <p><strong>Data transformations:</strong> I tried a lot of ways to denoise and transform the data, but most of them failed. In the end, I just used PCA and tSVD of the original data. </p> <p><strong>Feature engineering:</strong> None for Multiome. For Citeseq, I trained 140 shallow LGB models (one for each target) using the full set of data. The goal here was not to use the models themselves, but to see which features were important for each target. I used the top 100-200 features per target in the later, deeper modeling.</p> <p><strong>Modeling:</strong> For Citeseq, I trained both single-target (140 separate models) and multi-target NN models (using Fastai). I also trained LGB and CatBoost models for each target. Altogether, I trained over 20 sets of models using different variations of the PCA data combined with selected features from the feature engineering. Individually, the models had local CV scores in the range 0.8995 - 0.9017. For multiome, I trained 3 multi-target NNs on the tSVD-reduced targets, and one CatBoost model.</p> <p><strong>Ensembling:</strong> I blended the models together using a very simple optimized weighting scheme where the only possible weights were 0,1,2, or 3. I tried other ensembling techniques that had higher CV scores, but they performed worse on the LB. I was afraid this might be due to some hidden leakage between folds, so I stuck to the simpler weighting scheme. This led to local CV scores of 0.9039 for Citseq and 0.669 for Multiome.</p> <p><strong>Thoughts about trends in data:</strong> I was intrigued by <a href="https://www.kaggle.com/AmbroseM" target="_blank">@AmbroseM</a>'s posts arguing that the data is a time series. There are undoubtedly trends over the 7 days of training data, but I was concerned about whether these trends would continue to day 10. 
I don't know enough about the biology, but it seems likely that cell behavior has both long-term trends (aging) and short-term trends based on things like diet, exercise, illness, etc. I decided that the trends visible in the 7 days of training data could very easily be short-term trends that would reverse themselves after 7 days, or could be just coincidental due to technical aspects of the data collection. Based on <a href="https://www.kaggle.com/AmbroseM" target="_blank">@AmbroseM</a>'s post here (<a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/366395)" target="_blank">https://www.kaggle.com/competitions/open-problems-multimodal/discussion/366395)</a>, it seems like the trends did, in fact, continue to the test set. I would be very interested to hear from some cell scientists about what these trends might signify, whether they're cyclical in nature, and if yes, how long each cycle is typically.</p> <p>I looked at it from a slightly different perspective. The public LB data has the same days but a different donor, so to optimize CV for the public score, we would want 3 folds, one for each donor in the training set. But this set-up could potentially penalize the private LB since it would not optimize for unseen days. </p> <p>Conversely, to optimize for the private LB, we would want separate folds by day (or even possibly past/future days if we believe it is a time series problem), but such a set-up might give us misleading feedback on the public LB if it turns out that the data is highly specific to donors. </p> <p>My system is a compromise between the donor/day approaches. The fact that it aligned so closely with the public LB gave me confidence that it was generalizing well for new donors, and the fact that each training fold contained data from all days meant that it wasn't biased too much by any particular day. </p> <p>The one thing my CV set-up doesn't do is optimize for future days, and this turned out to be its biggest failing. Despite some clear trends in the data, I didn't know enough about the underlying biology to be persuaded that the trends would continue into the private test days. In retrospect, this is my biggest regret. Also, that I didn't bite the bullet and re-run my entire pipeline with the new raw data when it was released.</p> <p><a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a> Sorry for the delay. I've uploaded the feature importance tables here: <a href="https://www.kaggle.com/datasets/psilogram/citeq-feature-importance" target="_blank">https://www.kaggle.com/datasets/psilogram/citeq-feature-importance</a></p> <p>there are a couple different versions from different runs. Let me know if you have any questions.</p>
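<p>For reference, the batch-wise CV described above (one fold per unique donor/day batch) can be set up along these lines; the file name and column names are assumptions on my part:</p> <pre><code>import pandas as pd
from sklearn.model_selection import LeaveOneGroupOut

meta = pd.read_csv("metadata.csv")         # assumed columns: cell_id, donor, day
groups = meta["donor"].astype(str) + "_" + meta["day"].astype(str)

logo = LeaveOneGroupOut()                  # one validation fold per unique batch
for fold, (train_idx, valid_idx) in enumerate(logo.split(meta, groups=groups)):
    held_out = groups.iloc[valid_idx].unique()
    print(f"fold {fold}: validate on batch {held_out}")
</code></pre>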
Open Problems - Multimodal Single-Cell Integration
13th place and how to
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes. And you can run it on GPU too! </p> <p>Hey guys, it's been a while since my last actively participated competition and this is a good experience! Though I am not active any more, I frequently come back to kaggle to browse new ideas. One thing I notice is that it is not really straightforward to learn new things by just reading top solutions if you were not active in that competition… And one factor could be that we mostly share what finally worked (and/or what didn't work), but not the thought process of getting there. For someone like me, either a bit too lazy or a bit too busy to actively participate, or someone who might be a bit inexperienced, I think it would be more beneficial to share "how I got here" more than "here I am". So in that spirit, I would like to start sharing solutions this way…</p> <h5>Some background:</h5> <p>My job can be demanding so whenever I can delegate the work to computers, I do, whenever I cannot, I minimize the time needed for coding such that I can leverage fragamented time as much as possible. And I mainly participate in this competition to learn how feature extraction could work with high dimension inputs and outputs. And they shape what next experiments I decided to try, and mistakes I made along the way so I thought it is important to share.</p> <h5>My journey:</h5> <ol> <li><p>started by reading the Discussion to understand what the data is about, and walk through popular public kernels to understand what can be used as baselines. I noticed that mostly TruncatedSVD was used to reduce dimensions. </p></li> <li><p>I figured it might be a good idea to just train a mlp model with all features included. And I was too lazy to build a local validation pipeline so I just randomly sampled 10% as validation set. And the result wasn't so good on public LB compared to public kernels.</p></li> <li><p>Then I was thinking, ok, maybe it was because mlp was bad. But xgboost/lightgbm on cpu would take forever to train, and would run out of memory on gpu. So need a better neural network model. What about Tabnet (terrible). Ok. There was one paper I remember claiming similar performance to xgboost. OK. found it. <a href="https://arxiv.org/pdf/2112.02962.pdf" target="_blank">https://arxiv.org/pdf/2112.02962.pdf</a> (It is called DANets). Better but still not as good…</p></li> <li><p>Now back to public kernel as the baseline. Maybe instead of SVD, we can use autoencoder? OK, only linear autoencoder performed ok-ish, any nonlinearity didn't work… no matter what tricks (e.g. swap noise augmentation) used. Hmmm…</p></li> <li><p>Back to TSVD + MLP as baseline again… Let's make this baseline better first. First swap MLP with DANets. And let's just focus on cite since it only have high dim inputs. Whatever works for cite should work for multi right? (TimeMachine: Nope!)</p></li> <li><p>Let's standardize the data since PCA likes it. OK. Slightly better. The explained variance seems quite low, but adding more components as features doesn't seem too helpful. So likely the inputs are quite noisy. Don't really know what to do. 
Well, we can always add different decomposition methods if no better ideas. Added NMF, FactorAnalysis, FastICA. Ok. All of them worked. </p></li> <li><p>Now let's also train some xgboost model since now we can train on gpu. Ok cool. Averaging xgboost with DANets improves results significantly.</p></li> <li><p>Maybe should try autoencoder again…. Read some papers. Ok. Still didn't work.</p></li> <li><p>Let's try some popular nonlinear dimension reduction techniques. UMAP, TriMAP, PaCMAP. Doesn't seem working.</p></li> <li><p>XGBoost or LightGBM train one model per target which seems wasteful and may not consider the correlation between outputs. Let's see if there is a better way out there. <a href="https://arxiv.org/abs/1909.04373" target="_blank">https://arxiv.org/abs/1909.04373</a> found this GBDT-MO. and it has code. Tried. Didn't work so well. </p></li> <li><p>Read in the Discussion that we can select features by matching input/output names for Cite. Tried and it worked. Neat.</p></li> <li><p>Also read in the Discussion that the 0s in inputs may not be actual 0s could also be missing. OK. Tried to calculate the mean/std by considering all 0s as missing and then calculate PCA. Adding to the features improved the model a bit.</p></li> <li><p>Realized that if 0s can be treated as missing then we can calculate PCA differently too according to this old paper (<a href="https://www.sciencedirect.com/science/article/abs/pii/S016974399600007X)" target="_blank">https://www.sciencedirect.com/science/article/abs/pii/S016974399600007X)</a>. Helped a bit.</p></li> <li><p>OK… stop the laziness and spend the weekend on building proper cross validation pipeline and retrain the models for cite. Nice. a huge jump from 0.812 to 0.814 on leaderboard.</p></li> <li><p>Saw some discussion about the supplymentary raw data. Well, I believe the host has done the best preprocessing and probably not helpful, also it is a pain to handle two datasets, so ignored. (TimeMachine: Big Mistake!)</p></li> <li><p>Approaching the last week of competition so rewire the cite pipeline for multi. Hoping to see a huge jump on score for multi as seen on cite. But that didn't happen. So… They are actually very different… Maybe I should have accepted some team merging invite earlier….</p></li> <li><p>stack a few DANets and GBDT models. </p></li> <li><p>picked the wrong submissions for final evaluation. But what can you do.</p></li> </ol> <h5>Model Summary</h5> <p>Cite:<br> (PCA + NMF + FA + ICA + NanPCA + Missing Value PCA (i.e. NIPALS) ) + (DANets + XGBoost)</p> <p>Multi:<br> (PCA + NMF) + (DANets + XGBoost with SVDed Output)</p> <h5>Looking Back, to improve the score further</h5> <ol> <li>Should have spent more time on data preprocessing.</li> <li>Should anticipate and prepare for a complex ensemble pipeline so save all the models and predictions properly along the way for multi layer stacking.</li> <li>Shouldn't have assumed Cite and Multi being similar and go for team up.</li> </ol> <p>Hopefully it is helpful (also to those who didn't participate!)</p> <p>just a lot of messy code + W&amp;B runs… I don't know the best way to trade off code quality with the speed of experiments actually. The good thing is that you have to do things with quality at work. 
So I guess that balances things 😄</p> <p>If I get time to clean up the code, I will post it here.</p> <p>It is a multilayer perceptron, a.k.a. a fully connected neural network: <a href="https://en.wikipedia.org/wiki/Multilayer_perceptron" target="_blank">https://en.wikipedia.org/wiki/Multilayer_perceptron</a></p>
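<p>A minimal sketch of the "several decompositions as features" idea described in steps 5–6 above (this is not the author's exact code; dense float arrays and the component count are assumptions, and NMF is fit on the raw non-negative matrix because it cannot take standardized inputs):</p> <pre><code>import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import TruncatedSVD, NMF, FactorAnalysis, FastICA

def decomposition_features(X_train, X_test, n_components=64, seed=42):
    # Standardize once for the SVD/FA/ICA branches.
    scaler = StandardScaler()
    Xs_tr, Xs_te = scaler.fit_transform(X_train), scaler.transform(X_test)

    reducers = [
        (TruncatedSVD(n_components, random_state=seed), Xs_tr, Xs_te),
        (FactorAnalysis(n_components, random_state=seed), Xs_tr, Xs_te),
        (FastICA(n_components, random_state=seed, max_iter=500), Xs_tr, Xs_te),
        # NMF needs non-negative input, so fit it on the raw counts instead.
        (NMF(n_components, init="nndsvda", random_state=seed, max_iter=300), X_train, X_test),
    ]
    parts_tr, parts_te = [], []
    for reducer, tr, te in reducers:
        parts_tr.append(reducer.fit_transform(tr))
        parts_te.append(reducer.transform(te))
    # Concatenate all reduced representations into one feature matrix.
    return np.hstack(parts_tr), np.hstack(parts_te)
</code></pre>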
Open Problems - Multimodal Single-Cell Integration
16th Place Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the competition host, Kaggle team, Saturn cloud team and congrats to all the winners!</p> <p>Although many great solutions have already been posted and my solution may not contain new approaches, I would like to leave my efforts over the past two months. In this term I learned a lot. Thanks to the all competitors for a great game.</p> <p>Please forgive me if it is difficult to read this post or see the schematic diagram below, as my command of English is not very good and my educational background was different from computer science or machine learning area.</p> <hr> <h1>Overview:</h1> <p>The machine learning algorithms used were as follows:</p> <ul> <li>Two MLPs (4 and 9 hidden layers, the variants of <a href="https://www.kaggle.com/code/pourchot/all-in-one-citeseq-multiome-with-keras?scriptVersionId=108466116" target="_blank">Laurent Pourchot's model</a>)</li> <li>Conv1d (almost all the same as the <a href="https://www.kaggle.com/c/lish-moa/discussion/202256" target="_blank">tmp’s 1D-CNN model for tabular data on MoA competition</a>)</li> <li>LGBM</li> </ul> <p>In my case, stacking scheme boosted the score. The outputs of level 1 models were concatenated and then used as input for level 2. It was effective to apply dimensionality reduction to concatenated level 1 outputs after standardization. When the outputs were just concatenated without dimensionality reduction, the score of level 2 was rather lower than that of level 1.</p> <p>Also, ensemble worked well. I created several models with slight difference (different dimensionality reduction algorithms, feature extraction methods and loss functions) and blended them. In addition, each learning process was performed on 15 random-seeds and results were averaged.<br> <br></p> <h1>CV scheme:</h1> <p>I used simple KFold (k = 5). Fortunately, I resulted in shakeup in private LB.<br> <br></p> <h1>Citeseq:</h1> <p>The diagram of my Citeseq stacking scheme is as follow.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11204962%2F65d09e1ed96e210c63bab1d627b8497b%2Fsolution%20scheme-vstack.jpg?generation=1669194022322983&amp;alt=media" alt=""></p> <h4>Preprocess</h4> <p>Before dimensionality reduction or feature extraction in level 1, the set of all features which are constant in the train or test were eliminated according to the <a href="https://www.kaggle.com/code/ambrosm/msci-citeseq-quickstart" target="_blank">AmbrosM’s Code</a>.</p> <ul> <li>Dimensionality reduction: tSVD and PCA (n_components = 64) were used separately and inference results were finally blended.</li> <li>Feature extraction: According to the <a href="https://www.kaggle.com/code/fabiencrom/msci-correlations-eda-citeseq/notebook" target="_blank">Fabien Crom's Code</a>, I picked features with high Pearson correlation coefficients for the targets. In order to gain diversity as much as possible, I changed the picking query for each models. 
For example:<br> ・Extract the RNAs in order of highest <strong>average</strong> correlation to the 140 proteins<br> ・Extract the RNAs that have a high correlation value for a <strong>single</strong> protein, <strong>not the average</strong><br> ・Change how many of the top RNAs are extracted<br> ・With or without dimensionality reduction after extraction<br> <br></li> </ul> <h1>Multiome:</h1> <p>The scheme is almost the same as that of Citeseq. The differences are as follows:</p> <ul> <li>Extracted features were not used. In the Multiome case, the score deteriorated when they were concatenated with the 64-dim compressed features.</li> <li>For the NN algorithms, only MSE was used as the loss function.</li> <li>The target vectors were compressed to 512 (for NN) or 128 (for LGBM) dims by tSVD and used in training. The inferred vectors were decompressed back to 23418 dims by inverse SVD.</li> </ul>
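<p>A rough sketch of the correlation-based picking queries listed above (an assumed implementation, not the author's code): rank genes either by their average absolute Pearson correlation across the 140 protein targets or by their best single-target correlation. <code>X</code> (cells x genes) and <code>Y</code> (cells x proteins) are assumed to be dense numpy arrays.</p> <pre><code>import numpy as np

def top_correlated_genes(X, Y, k=256, mode="mean"):
    # z-score columns so that a simple dot product gives Pearson correlations
    Xz = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    Yz = (Y - Y.mean(axis=0)) / (Y.std(axis=0) + 1e-8)
    corr = np.abs(Xz.T @ Yz) / X.shape[0]          # (genes, proteins) correlation matrix
    score = corr.mean(axis=1) if mode == "mean" else corr.max(axis=1)
    return np.argsort(-score)[:k]                  # column indices of the k best genes
</code></pre>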
Open Problems - Multimodal Single-Cell Integration
1st Place Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you to the organizers and kaggle management and to everyone who participated with me.<br> Since I needed to gain experience in analyzing single cell data, this competition was an excellent experience for me.</p> <p>I would like to introduce the overview of my solution.</p> <h1>Multiome</h1> <h2>Model Overview</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2F05d87da294f71eca450a304785333ac7%2Fmult-model-overview.png?generation=1668772959634455&amp;alt=media" alt=""></p> <h2>Input Preprocessing</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2Fdedb439f946e0682644451a531d4df33%2Fmulti-input-preprocessing.png?generation=1668773118728830&amp;alt=media" alt=""></p> <h2>Target Preprocessing</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2F76231a7e2c23f3f588e763bf8035dcd0%2Fmulti-target-preprocessing.png?generation=1669418912414786&amp;alt=media" alt=""></p> <p>tSVD-based imputation method: </p> <ol> <li>Perform dimensionality reduction on the data with tSVD</li> <li>And then, Transform the data back to the original space</li> <li>Copy the value of the 0 part of the original data from the transformed values.</li> </ol> <h2>Model</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2F8861945be57ca34a6bfa2b23e5627100%2Fmulti-model.png?generation=1668773599707773&amp;alt=media" alt=""></p> <h2>Output Postprocessing and Loss</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2Ff59241b0358a15294bdc38c25722f37d%2Fmulti-postprocessing_1.png?generation=1669418940293447&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2Fbff80fd72fd2a9d0ab94704c641c024b%2Fmulti-postprocessing_2.png?generation=1669419002523851&amp;alt=media" alt=""></p> <p>In the inference phase, the model outputs the average of the five predicted target data.</p> <h1>CITEseq</h1> <h2>Model Overview</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2Fa9563a70e5639db2b4b72b7645ecd910%2Fcite-model-overview.png?generation=1668774675355569&amp;alt=media" alt=""></p> <h2>Input Preprocessing</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2Ff5ddef7a059aa3a94c3afc6f3f52b440%2Fcite-input-preprocessing.png?generation=1669419233099917&amp;alt=media" alt=""></p> <p>In selecting important genes in CITEseq, the correlation coefficient is calculated for each batch and select only genes with high correlation in many batches.<br> Genes were selected from those related to the target proteins and pathway.<br> I use <a href="https://reactome.org/" target="_blank">Reactome</a> as pathway database.</p> <h2>Target Preprocessing</h2> <p><img 
src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2Fc538366bbc83a1de46005c5108d6b20d%2Fcite-target-preprocessing.png?generation=1668774972017361&amp;alt=media" alt=""></p> <h2>Model</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2F3a30e371014051289d99d5cbe57009e9%2Fcite-model.png?generation=1668775048364648&amp;alt=media" alt=""></p> <h2>Output Postprocessing and Loss</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1053132%2F677385c92a3be6a0e0a9702b3005edc8%2Fcite-postprocessing.png?generation=1668775162024292&amp;alt=media" alt=""></p> <p>In the inference phase, the model outputs the average of the five predicted target data.</p> <h1>Local evaluation</h1> <p>I used two evaluation schemes.</p> <ol> <li>Evaluation with cross validation:<ul> <li>5-fold cross validation grouped by donor and day</li></ul></li> <li>Evaluation for hyperparameter optimization with Optuna:<ul> <li>Training data set is divided into training and validation data sets. ( Training data set: 80%, validation data set: 20%. )</li></ul></li> </ol> <h1>Ensemble</h1> <p>I used the weighted average of predictions of the following models.</p> <ol> <li>Models trained with changing the seed </li> <li>Models fine-tuned on only some batches<ul> <li>Batch combination pattern examples: males only, female only, Day 4, 7 only, etc.</li> <li>Use a model trained on the full training data set as a pre-training model </li></ul></li> </ol> <h1>Code</h1> <p><a href="https://github.com/shu65/open-problems-multimodal" target="_blank">https://github.com/shu65/open-problems-multimodal</a></p> <h1>Update</h1> <p>2022/11/20 add the repository url of my solution<br> 2022/11/26 fix some figures</p> <p>Thank you! </p> <p>I couldn't believe it at first either and showed the screenshot of LB to my colleagues at work to make sure I was in first place and not wrong.</p> <p>I noticed that the library-size normalization + log1p data is not being returned well after dimensionality compression with tSVD. The correlation coefficient between the original data and the converted and reverted data is only about 0.70.</p> <p>For this reason, I tried to find a better preprocessing method and found it.</p> <p>The MAE/MSE weights were set at 1.0 at the beginning of training and the weights were gradually reduced as training progressed.</p> <p>The detail of weight schedule is here:<br> <a href="https://github.com/shu65/open-problems-multimodal/blob/3d57dd3837b17079fed5678043e681749ba32324/ss_opm/model/encoder_decoder/cite_encoder_decoder_module.py#L81" target="_blank">https://github.com/shu65/open-problems-multimodal/blob/3d57dd3837b17079fed5678043e681749ba32324/ss_opm/model/encoder_decoder/cite_encoder_decoder_module.py#L81</a></p> <p>I tried MAE as in multiome part. But, the score of the model with MAE is lower than that with MSE. </p>
Open Problems - Multimodal Single-Cell Integration
2nd place solution(senkin part with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for invitation, I am going to share more details at NeurIPS workshop at 7/Dec, you can follow that.</p> <p>yes, it also can be used for GBDT or ridge model.By the way,TF-IDF is the old version data provided,you can check Dataset Description page.</p> <p>Thanks to all the organizers and kaggle team hosting such a challengeable competition.Thanks my team mate <a href="https://www.kaggle.com/baosenguo" target="_blank">@baosenguo</a>, I have no knowledge about bioinformatics ,learned a lot from him.I thought my team could win this competition as we were at the 1st place of LB from start to end,but the time domain shift is unpredictable,we accept this result and congratulas to <a href="https://www.kaggle.com/shujisuzuki65" target="_blank">@shujisuzuki65</a> ,great shakeup!</p> <h1>Overview</h1> <h2>Cite</h2> <p><a href="url" target="_blank">[<img src="https://i.postimg.cc/8kBQqQL2/cite.png" alt="cite.png">](https://postimg.cc/s1XNhL6m)</a></p> <h2>Multi</h2> <p><a href="url" target="_blank">[<img src="https://i.postimg.cc/tgHz0f3c/multi.png" alt="multi.png">](https://postimg.cc/sMwW7T0P)</a></p> <h1>preprocessing</h1> <p>1) <strong>centered log ratio transformation (CLR)</strong> is the best normalization method for both of cite and multi, I found the method from nature articles. <a href="https://www.nature.com/articles/s41467-022-29356-8" target="_blank">https://www.nature.com/articles/s41467-022-29356-8</a></p> <p>2) high correlation raw features with target </p> <p>3) <a href="https://www.kaggle.com/baosenguo" target="_blank">@baosenguo</a> designed fine tuned process</p> <ul> <li>using raw count:</li> <li>normalization:sample normalization by mean values over features</li> <li>transformation:sqrt transformation</li> <li>standardization:feature z-score</li> <li>batch-effect correction:take "day" as batch, for each batch, we calculate the column-wise median to get a "median-sample" representing the batch, and then subtract this sample from each sample in this batch. This method may not bring much improvement, but it is simple enough to avoid risks.</li> </ul> <p>4) row-wise zscore transformation before input to neural network</p> <h1>validation</h1> <p>The biggest challenge in this competition is how to build a robust model for unseen donor in public test and unseen day&amp;donor in private test. 
At the early stage I used random kfold, cross validation and LB score matched very well, so we don't need to worry about donor domain shift.But time domain shift is unpredictable, after team merge, we check our features one by one with out-of-day validation(groupkfold by day) to make sure all the features can improve every day.</p> <h1>model</h1> <ul> <li><p>Lightgbm<br> train 4 lightgbm models with different input features,then transform oof predictions to tsvd as nn model's meta features<br> -- library-size normalized and log1p transformed counts -&gt; tsvd<br> -- raw counts -&gt; clr -&gt; tsvd<br> -- raw counts<br> -- raw counts with raw target<br> one trick is input sparse matrix of raw count to lightgbm directly with small "feature_fraction": 0.1,it brings nn model much improvment.</p></li> <li><p>NN<br> Basiclly 3layers MLP works well,one trick is to use GRU to replace first dense layer or add GRU after final dense layer.<br> Cite target is transformed to dsb having negative values, compared to ReLU, ELU is much better to deal with negative target values,Swish is also work well for both of cite and multi.<br> At the early stage I found cosine similarity is best as loss funtion for my model, after team merge, I learned from teammate to use MSE and Huber to build more different models.</p></li> </ul> <h1>notebook</h1> <p>[simple cite version]<br> <a href="https://www.kaggle.com/code/senkin13/2nd-place-gru-cite" target="_blank">https://www.kaggle.com/code/senkin13/2nd-place-gru-cite</a></p> <h1>github</h1> <p><a href="https://github.com/senkin13/kaggle/tree/master/Open-Problems-Multimodal-Single-Cell-Integration-2nd-Place-Solution" target="_blank">https://github.com/senkin13/kaggle/tree/master/Open-Problems-Multimodal-Single-Cell-Integration-2nd-Place-Solution</a></p> <p>yes, we should ensemble models as many as possible, I regret I didn't ensemble lots of models.</p> <p>I don't have precise theory supported,but I assume you are right,gru layer extract some more and different information than mlp.<br> there were two successful experience in the past。<br> <a href="https://www.kaggle.com/competitions/favorita-grocery-sales-forecasting/discussion/47582" target="_blank">https://www.kaggle.com/competitions/favorita-grocery-sales-forecasting/discussion/47582</a><br> <a href="https://www.kaggle.com/competitions/talkingdata-adtracking-fraud-detection/discussion/56262" target="_blank">https://www.kaggle.com/competitions/talkingdata-adtracking-fraud-detection/discussion/56262</a></p> <p>I remember it boosted cv 0.0003</p> <p>sorry, I mean one group by one group, for example clr of raw count is one group, lightgbm oof predictions is one group.</p> <p>yes, I only use the raw data for cite exclusively</p> <p>good job, it seems lower cv,higher plb, our random kfold didn't fit private very well</p> <p>I tried mse,mae,pearson correlation,cosine similarity, cosine similarity is best for my model</p> <p>it includes clr transformation(200), lgb oof(400), fine-tuned transformation(164), high correlation and important features(245)</p> <p>source code uploaded</p>
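<p>One common per-cell formulation of the centered log-ratio (CLR) transform mentioned above; the write-up does not spell out the exact variant, so the pseudo-count of 1 and the row-wise centering here are assumptions.</p> <pre><code>import numpy as np

def clr_per_cell(counts):
    # CLR(x_i) = ln(x_i + 1) - mean_j ln(x_j + 1), computed per cell (row)
    log1p = np.log1p(counts)
    return log1p - log1p.mean(axis=1, keepdims=True)
</code></pre> <p>A row-wise z-score, as noted in the preprocessing list, can then be applied on top of the CLR output before feeding the matrix to the neural network.</p>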
Open Problems - Multimodal Single-Cell Integration
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, to the organizers and kaggle management, and to everyone who participated with me, thank you for organizing a great competition!</p> <p>Before starting this competition, I had no knowledge of the biological field.To be honest, I still don't know much about it.<br> (Since this was a completely unprofessional field, I may have been able to try various things without bias…?)</p> <p>I would like to share my solution below. Please forgive me if it is difficult to read in many respects, as my English skills are not very good.</p> <hr> <h1>Summary:</h1> <h2>multiome</h2> <h4>preprocess</h4> <ul> <li>use okapi bm25 instead of tfidf</li> <li>dimensionality reduction:use lsi(implemented in muon) instead of svd<br> <a href="https://muon.readthedocs.io/en/latest/api/generated/muon.atac.tl.lsi.html" target="_blank">https://muon.readthedocs.io/en/latest/api/generated/muon.atac.tl.lsi.html</a><br> <code>muon.atac.tl.lsi(rawdata(with okapi preprocessing), n_comps=64)</code></li> </ul> <h4>feature</h4> <p>Basically, pre-processing contributed greatly to the accuracy, but the following features also contributed somewhat to the accuracy.</p> <ul> <li>binary feature<br> transformed 0/1 binary and reduced 16 dimensions(svd) as features</li> <li>w2v vector feature<br> For each cell, the top100 with the highest expression levels were lined up<br> and vectorized by gensim to get feature vector(16dims) for each gene.<br> <a href="https://radimrehurek.com/gensim/models/word2vec.html" target="_blank">https://radimrehurek.com/gensim/models/word2vec.html</a><br> Ex. CellA: geneB → geneE → geneF → …<br> CellB: geneA → geneC → geneM → …<br> top100 genes vector average in each cell used as features.</li> <li>leiden cluster mean feature<br> I made Clusters using muon's leiden clustering(23 cluster).<br> <a href="https://muon.readthedocs.io/en/latest/api/generated/muon.tl.leiden.html#muon.tl.leiden" target="_blank">https://muon.readthedocs.io/en/latest/api/generated/muon.tl.leiden.html#muon.tl.leiden</a><br> After taking the average of the features for each cluster(23 cluster × 228942 feat),<br> they were reduced to 16 dimensions by svd and used as features(23 cluster × 16feat).<br> After that, join on clusters.</li> <li>connectivy matrix feature<br> Since muon's leiden clustering generates an adjacency matrix between cells<br> as a byproduct, I also use it 16-dimensional with svd as a feature.</li> </ul> <h4>model</h4> <ul> <li>mlp<ul> <li>Simple 4-layer mlp; no major differences from mlp in public notebooks</li> <li>target has been reduced to 128 dimensions with svd.</li> <li>use rmse loss</li></ul></li> <li>catboost<ul> <li>target has been reduced to 128 dimensions with svd.</li></ul></li> </ul> <h4>ensemble</h4> <ul> <li>I made a model of nearly 20 mlp and 3 catboosts with various feature combinations. 
and cv-based weighted averaging.</li> </ul> <h2>cite</h2> <h4>preprocess</h4> <ul> <li>The same process as in the organizer was applied.<br> <code>use sc.pp.normalize_per_cell and sc.pp.log1p</code> (excluding the gene that are significantly related to the target protein)</li> </ul> <h4>feature</h4> <p>I've made a lot of features, and here are some of them that have worked to some degree.</p> <ul> <li>leiden cluster feature<br> I made Clusters using muon's leiden clustering.<br> Average of features per cluster and reduce dimensions with svd.<br> (excluding Important genes. It were not used svd, and use raw count's average for each cluster was used as features as is.)</li> <li>w2v vector feature<br> Same as multiome.</li> </ul> <h4>model</h4> <ul> <li>mlp<ul> <li>Simple 4-layer mlp; no major differences from mlp in public notebooks</li> <li>Using correlation_loss. No different from public notebook.</li></ul></li> <li>catboost</li> </ul> <h4>ensemble</h4> <ul> <li>I made a model of nearly 20 mlp and 2 catboosts with various feature combinations. and cv-based weighted averaging.</li> </ul> <h2>Validation</h2> <p>It goes without saying that one of the key elements of this competition is validation.<br> I tried binary classification to classify test data used in PB and others. (At this point, the classification accuracy was so high. So I think it is dangerous to trust LB.)<br> The 10% of the training data that is close to the PB, is used as validation data. <br> This method seemed to work well, the submit with my highest cv was highest pb score.</p> <h2>update: Code &amp; Model &amp; Data</h2> <p>share code and models in github and kaggle datasets<br> <a href="https://github.com/makotu1208/open-problems-multimodal-3rd-solution" target="_blank">https://github.com/makotu1208/open-problems-multimodal-3rd-solution</a></p> <p>Thanks for your comment! I also tried making feature using autoencoder, but it did not work that well within my experiments.</p> <p>Thank you for comment!<br> I think it would be faster to have you look at code, so I created a notebook.<br> <a href="https://www.kaggle.com/code/mhyodo/w2v-feature-sample" target="_blank">https://www.kaggle.com/code/mhyodo/w2v-feature-sample</a></p> <p>Oops, sorry. cite used the 140 dimensional target as is without dimensional reduction. I will correct this. Thanks for pointing that out!</p> <p>Thanks for your question! <br> Also, sorry for the delay in responding.<br> Sorry, I need to add something. Precisely, I clearly felt that it might be dangerous to trust LB because when I performed the classification with the LB test data as 0 and the PB test data as 1 I felt this was dangerous because of the high accuracy of the classification.<br> If the test data used in LB and the test data used in PB are similar I would believe LB, but since this was not the case, I thought it was dangerous to believe LB.</p>
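<p>A rough sketch (assumed implementation) of the w2v vector feature described above: treat each cell's 100 most highly expressed genes as a "sentence", train gensim Word2Vec on those sentences, and average the gene vectors per cell. <code>X</code> is assumed to be a dense (cells x genes) array with <code>gene_names</code> as its column labels; the hyperparameters are illustrative.</p> <pre><code>import numpy as np
from gensim.models import Word2Vec

def w2v_cell_features(X, gene_names, top_k=100, dim=16, seed=42):
    top_idx = np.argsort(-X, axis=1)[:, :top_k]              # top-k genes per cell
    sentences = [[gene_names[j] for j in row] for row in top_idx]
    model = Word2Vec(sentences, vector_size=dim, window=5,
                     min_count=1, workers=4, seed=seed)
    # average the vectors of each cell's top genes to get one feature vector per cell
    return np.vstack([model.wv[s].mean(axis=0) for s in sentences])
</code></pre>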
Open Problems - Multimodal Single-Cell Integration
43rd Place : Summary and What Worked Well
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to the organizers for this interesting multimodal challenge and giving us unique multi-target time series dataset to ideate upon.</p> <h3><strong>Data Preparation and Feature Pipeline</strong></h3> <ul> <li>TruncatedSVD of both cite and multi inputs - 100 components</li> <li>Binarized data TruncatedSVD of both cite and multi inputs - 100 components</li> <li>TruncatedSVD of multi targets - 256 components</li> <li>PCA of both cite and multi inputs - 40 components - used only in some of the models for additional features</li> <li>Most correlated raw features for individual cite targets</li> <li>Usage of '<em>Day</em>' as a feature</li> </ul> <h3><strong>CV Scheme</strong></h3> <ul> <li>GroupKFold by donor for both cite and multi - used with higher weightage in the final pipeline</li> <li>KFold for both cite and multi - since it was also correlated, kept it in the final pipeline with low weightage</li> </ul> <h3><strong>Modeling Pipeline</strong></h3> <ul> <li>MLPs with varied number of layers without binary components for both cite and multi (0.813 on public)</li> <li>MLPs with a mixture of both binary and non binary components for both cite and multi (0.813 on public)</li> <li>TabNet and LGBM model with dimensionality reduced cite data and multi (0.812 on public)</li> <li>Individual models with <em>highly correlated important features</em> per target with LGBM, XGB and CB for cite data (jump to 0.8142 on public, resulted in best ensemble)</li> </ul> <h4><strong>TakeAways</strong></h4> <ul> <li>pyBoost</li> <li>day similarity analysis (as <a href="https://www.kaggle.com/l0glikelihood" target="_blank">@l0glikelihood</a> trained only on day 7 for multiome)</li> </ul> <p>Would have expected to go upward with the shakeup, realized that other teams really did great and many congratulations to them. It was a wonderful competition, one of it's kind. </p> <p>Cheers!</p>
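<p>A minimal sketch of the reduced-target setup listed above ("TruncatedSVD of multi targets - 256 components"): compress the high-dimensional targets, regress in the compressed space, and map predictions back through the SVD components. The Ridge regressor here is only a stand-in for illustration; any multi-output model can take its place.</p> <pre><code>import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge

def fit_predict_reduced_targets(X_tr, Y_tr, X_te, n_components=256, seed=0):
    svd = TruncatedSVD(n_components=n_components, random_state=seed)
    Z_tr = svd.fit_transform(Y_tr)                # compressed targets
    model = Ridge(alpha=10.0).fit(X_tr, Z_tr)     # stand-in multi-output regressor
    Z_pred = model.predict(X_te)
    return Z_pred @ svd.components_               # map back to the full target space
</code></pre>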
Open Problems - Multimodal Single-Cell Integration
4th place solution (with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks and your work is very profound and helpful ! 👍</p> <h2>Intro</h2> <p>To begin with, thanks to the Kaggle team and Open Problems team for hosting such a wonderful contest. I would also like to share my gratitude to all the competitors, especially those generous competitors who are willing to share their codes and thoughts like <a href="https://www.kaggle.com/ambrosm" target="_blank">@ambrosm</a> <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a> <a href="https://www.kaggle.com/baosenguo" target="_blank">@baosenguo</a> <a href="https://www.kaggle.com/pourchot" target="_blank">@pourchot</a> and so on. I couldn't have gone so far without their help.</p> <p>You can find full code on Github <a href="https://github.com/oliverwang15/4th-Place-Solution-for-Open-Problems-Multimodal-Single-Cell" target="_blank">here</a></p> <h2>Cite</h2> <h3>Data preprocessing</h3> <p>At first, all of my feature engineering methods are based on the <a href="https://www.kaggle.com/competitions/open-problems-multimodal/data" target="_blank">original</a> data, but my public score raised from 0.812 to 0.813 after I merely change the data source so all the feature engineering processes and based on the <a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/359355" target="_blank">raw</a> data. </p> <p>My preprocessing method is using <code>np.log1p</code> to change the raw data. I have also tried other preprocessing methods like <code>MAGIC</code> and <code>TF-IDF</code> but they can't improve my CV score.</p> <h3>Feature engineering</h3> <p>The final inputs of the models consist of mainly six parts. Three of them are dimension reduction parts including <code>Tsvd</code>, <code>UMAP</code>, and <code>Novel’s method</code>. The rest are feature selection parts including <code>name importance</code>, <code>corr importance,</code> and <code>rf importance</code>.</p> <ul> <li><p><code>Tsvd</code>: <code>TruncatedSVD(n_components=128, random_state=42)</code></p></li> <li><p><code>UMAP</code>: <code>UMAP(n_neighbors = 16,n_components=128, random_state=42,verbose = True)</code></p></li> <li><p><code>Novel’s method</code>: The original method can be found <a href="https://github.com/openproblems-bio/neurips2021_multimodal_topmethods/blob/dc7bd58dacbe804dcc7be047531d795b1b04741e/src/predict_modality/methods/novel/resources/helper_functions.py" target="_blank">here</a>. At first, I wanted to implement the preprocessing method to replace simple <code>log1p</code> but after I replaced the <code>Tsvd</code> results of <code>log1p</code> by the <code>Tsvd</code> results of <code>Novel’s method</code> I found that my CV went down. But if I kept both of them, the CV score would increase a little bit. So I kept the <code>Tsvd</code> results of <code>Novel’s method</code>.</p></li> <li><p><code>name importance</code>: It 's mainly based on AmbrosM's <a href="https://www.kaggle.com/code/ambrosm/msci-eda-which-makes-sense/notebook#Name-matching" target="_blank">notebook</a>. But I added additional information from <code>mygene</code> while matching. 
I will release my complete preprocessing code later and specific results can be found there.</p></li> <li><p><code>corr importance</code>: As the name suggested, I chose the top 3 features that correlated with the targets. There was overlap and the number of selected features was about 104</p></li> <li><p><code>rf importance</code>: Since the feature importances of random forest may apply to NN and other models as well. So I selected 128 top feature importances of the random forest model.</p></li> </ul> <p>I have also tried other mothed including <code>PCA</code>, <code>KernelPCA</code>, <code>LocallyLinearEmbedding</code>, and <code>SpectralEmbedding</code>.<code>PCA</code> gives little help and it will cause severe overfitting when used with <code>Tsvd</code>. I could' t finish the manifold methods in 24 hours so I gave them up.</p> <h3>Models</h3> <p>I have implemented the CV strategy like the private test, but it turns out that the strategy like the public test is better. So all of the results are based on <code>GroupKFold</code> on <code>donors</code>. I have done there-layers stacking in the competition. and I have also done the ensemble on the stacking results and the results of independent models. Here are the models I used and I will also release the code later. </p> <table> <thead> <tr> <th>Method</th> <th>Stacking</th> <th>NN</th> <th>NN_online</th> <th>CNN</th> <th>kernel_rigde</th> <th>LGBM</th> <th>Catboost</th> </tr> </thead> <tbody> <tr> <td>CV</td> <td>0.89677</td> <td>0.89596</td> <td>0.89580</td> <td>0.89530</td> <td>0.89326</td> <td>0.89270</td> <td>0.89100</td> </tr> </tbody> </table> <ul> <li><p><code>NN</code>: A personal-designed NN network, trying to do something like the transformers. I used MLP to replace the dot product in the mechanism of attention. This may not be so reasonable and I am also aware of the importance of feature vectors and dot products. But I was so fascinated by attention and I also tried <code>tabnet</code> and <code>rtdl</code> but they didn't work very well. But my method seemed to work even better than simple MLP. <br> <a href="https://www.kaggle.com/oliverwang15/4th-solution-cite-nn" target="_blank">Demo notebook</a></p></li> <li><p><code>CNN</code>: Inspired by the tmp method <a href="https://www.kaggle.com/competitions/lish-moa/discussion/202256" target="_blank">here</a> and also added multidimensional convolution kernel like the Resnet. </p></li> <li><p><code>NN(Online)</code>: This model is mainly based on pourchot's method <a href="https://www.kaggle.com/code/pourchot/all-in-one-citeseq-multiome-with-keras" target="_blank">here</a> and only some tiny change was made.<br> <a href="https://www.kaggle.com/code/oliverwang15/4th-solution-cite-online-nn" target="_blank">Demo notebook</a></p></li> <li><p><code>Kernel Rigde</code>: This model is inspired by the best solution of last year's competition. I used <a href="https://docs.ray.io/en/master/tune/index.html" target="_blank">Ray Tune</a> to optimize the hypermeters<br> <a href="https://www.kaggle.com/code/oliverwang15/4th-solution-ray-tune-krr" target="_blank">Demo notebook with ray tune</a></p></li> <li><p><code>Catboost</code>: There are many options for <code>catboost</code> here. 
Using <code>MultiOutputRegressor</code> or <code>MultiRMSE</code> as <code>objective</code>.But we can't do earlystopping to prevent overfitting in the first method and the result of the second method is not good enough so I made a class <code>MultiOutputCatboostRegressor</code> personally, using <code>MSE</code> to fit the normalized targets.</p></li> <li><p><code>LGBM</code>: I also wrote <code>MultiOutputLGBMRegressor</code> and the results seem to be better and the training process was so slow that I had to give it up in the stacking. However, I still trained a independent LGBM model and used it in the final training. <br> <a href="https://www.kaggle.com/code/oliverwang15/4th-solution-cite-multioutputlgbm" target="_blank">Demo notebook</a> </p></li> <li><p><code>stacking</code>: I used <code>KNN</code>,<code>CNN</code>,<code>ridge</code>,<code>rf</code>,<code>catboost</code>,<code>NN</code> in the first layer and only <code>CNN</code>,<code>catboost</code>,<code>NN</code> in the second and just a simple <code>MLP</code> in the last layer. To avoid overfitting, I used <code>KFold</code> and oof predictions between layers, and every stacking model are using <code>GroupKFold</code>(so there are 3 stacking models here). It seems to be a little bit to understand so you may refer to the picture. If you still have confusion please feel free to ask me.<br> <a href="https://www.kaggle.com/code/oliverwang15/4th-solution-cite-stacking-train" target="_blank">Demo notebook train</a> <br> <a href="https://www.kaggle.com/code/oliverwang15/4th-solution-cite-stacking-predict" target="_blank">Demo notebook predict</a> </p></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2641164%2F2dedb86bf1f9498fb9da9232da1e579a%2FStacking%20Training.png?generation=1668608507518581&amp;alt=media" alt=""></p> <table> <thead> <tr> <th>CV Results</th> <th>Model Ⅰ (vaild 32606)</th> <th>Model Ⅱ (vaild 13176)</th> <th>Model Ⅲ (vaild 31800)</th> </tr> </thead> <tbody> <tr> <td>Fold 1</td> <td>0.8989</td> <td>0.8967</td> <td>0.8947</td> </tr> <tr> <td>Fold 2</td> <td>0.8995</td> <td>0.8967</td> <td>0.8951</td> </tr> <tr> <td>Fold 3</td> <td>0.8985</td> <td>0.8959</td> <td>0.8949</td> </tr> <tr> <td>Fold Mean</td> <td>0.89897</td> <td>0.89643</td> <td>0.89490</td> </tr> <tr> <td>Model Mean</td> <td>0.89677</td> <td>-</td> <td>-</td> </tr> </tbody> </table> <h3>Ensemble</h3> <p><a href="https://www.kaggle.com/code/oliverwang15/4th-solution-ensemble/notebook" target="_blank">notebook</a> </p> <h2>Multi</h2> <p>To be honest, I put most of my efforts on cite part so there is nothing very special here and I will make a brief introduction. </p> <h3>Data preprocessing &amp; Feature engineering</h3> <h4>inputs:</h4> <ol> <li>TF-IDF normalization</li> <li><code>np.log1p(data * 1e4)</code></li> <li>Tsvd -&gt; 512</li> </ol> <h4>targets:</h4> <ol> <li>Normalization -&gt; mean = 0, std = 1</li> <li>Tsvd -&gt; 1024</li> </ol> <h3>Models</h3> <ul> <li><p><code>NN</code>: A personal-designed NN network as mentioned above. 
The output of the model is 1024 dim and make dot product with <code>tsvd.components_</code>(constant) to get the final prediction than use <code>correl_loss</code> to calculate the loss then back propagate the grads.</p></li> <li><p><code>Catboost</code>: The results from online <a href="https://www.kaggle.com/code/xiafire/lb-t15-msci-multiome-catboostregressor" target="_blank">notebook</a></p></li> <li><p><code>LGBM</code>: The same as the <code>MultiOutputLGBMRegressor</code> mentioned above. Using <code>MSE</code> to fit the tsvd results of normalized targets.</p></li> </ul> <h3>Ensemble</h3> <p>The same notebook as mentioned above.<br> <a href="https://www.kaggle.com/code/oliverwang15/4th-solution-ensemble/notebook" target="_blank">notebook</a></p> <p>Ok. I see. I will try it. But since I made the stacking parts in a hurry because of the approaching deadline at that time. I will first finish reorganizing the code then make this experiment. If you can't wait to know the answer you may also try it by yourself after I make the stacking parts public.</p> <p>Here are the codes of the folds selection process. Hope they may help you get a better understand</p> <pre><code> (): random.seed() random.shuffle(lis) num_fold = ((lis)/folds) [lis[i::folds] i (folds)] meta_train[] = [i i (meta_train.shape[])] people_list = [,,] fold_list = [] num_fold = val_people tqdm([,,]): train_people = [i i people_list i != val_people] train_idx = meta_train[meta_train.donor.isin(train_people)]..to_list() val_idx = meta_train[meta_train.donor == val_people]..to_list() useless_idx = [i i meta_train..to_list() i train_idx+val_idx] train_fold_1,train_fold_2,train_fold_3 = get_folds(train_idx,num_fold) val_fold_1,val_fold_2,val_fold_3 = get_folds(val_idx,num_fold) one_fold = [ [[train_fold_1+train_fold_2,val_fold_1+val_fold_2],train_fold_3+val_fold_3], [[train_fold_1+train_fold_3,val_fold_1+val_fold_3],train_fold_2+val_fold_2], [[train_fold_2+train_fold_3,val_fold_2+val_fold_3],train_fold_1+val_fold_1+useless_idx], ] fold_list.append(one_fold) </code></pre> <p>Thanks. From my perspective, the reason why stacking is lower is often related to overfitting. The CV score is very high but the LB score is relatively low. So in order to avoid or alleviate overfitting, I used the special KFold strategy and simple MLP as the last layer, which is illustrated in the picture.</p> <p>Thanks! Hope they can help you!</p> <p>Thanks. Yes, you are right. But in the second layer input there are not only the output of those models in the first layer but the original features. Actually I was inspired my Mu Li‘s idea <a href="https://www.bilibili.com/video/BV1PZ4y197CX/" target="_blank">here</a>. If you are interested you may have a look</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2641164%2F3b0bff6cf6ec5543e26f3310cdf42f2e%2F_20221118131601.png?generation=1668748770938919&amp;alt=media" alt=""></p> <p>Thanks. Yes, you get it right. I'm sorry but I'm not quite sure about the last sentence. Do you mean comparing the PB performance of the final results between using oof and not using oof? Or comparing the PB performance of the oof predictions of the first layer and the final results? Or anything else?</p> <p>Hi, Aesop. Thanks for your question. Indeed, for the sake of diversity, we should use different data to train different models and ensemble them to get the best results. 
But in practice, when I was working on this part, training the models was already time-consuming, and searching for the best blend coefficients or doing stacking would have been time-consuming as well. So I reduced the diversity in the data and trained the models with all the data I mentioned above, even in the stacking part. Perhaps we could try training the models on different data and then ensembling them to see whether the results would be better.</p>
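<p>A rough PyTorch sketch of the Multi training objective described above (shapes and names are assumptions, not the author's code): project the 1024-dim network output through the fixed tSVD components and maximise the per-cell Pearson correlation with the true targets.</p> <pre><code>import torch

def correl_loss(pred, target, eps=1e-8):
    # negative mean per-row Pearson correlation
    p = pred - pred.mean(dim=1, keepdim=True)
    t = target - target.mean(dim=1, keepdim=True)
    corr = (p * t).sum(dim=1) / (p.pow(2).sum(dim=1).sqrt() * t.pow(2).sum(dim=1).sqrt() + eps)
    return -corr.mean()

def training_step(model, x, y_true, svd_components):
    z = model(x)                         # (batch, 1024) prediction in the compressed space
    y_pred = z @ svd_components          # (batch, 23418) via the fixed tSVD basis
    return correl_loss(y_pred, y_true)   # back-propagate this value
</code></pre>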
Open Problems - Multimodal Single-Cell Integration
7th Place Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Answer_1: yep, you're right, we measured model performance by taking simple average.</p> <p>Answer_2: blend weights were tuned with optuna (we've tried out stacking with pyBoost but in the end simple linear combination of predictions worked better for us).</p> <p>Since the dataset with predictions was quite large, the weights were selected not for all folds, but only for a part of them.</p> <p>First of all, we would like to thank the organizers for an interesting challenge, as well as for the opportunity to use Saturn cloud.</p> <p><strong>CV Scheme</strong></p> <p>Since the organizer initially provided the information how public and private parts are splitted, we decided to utilize this info and design our local validation as similar as possible to the private part. So, we can say that the public part of the leaderboard was of little interest to us.</p> <p>The validation scheme is shown in the picture below (for citeseq part). Since we have three days in train dataset (citeseq task), the number of splits equals 6, for the multiome part the scheme was the same. However, there were 9 splits due to the larger number of days (at the very end we slightly modified this scheme so that validation fold always contained only one (nearest) day).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1243561%2F761304b2456999a0a31225be67858207%2Fcv_citeseq.png?generation=1668591942826347&amp;alt=media" alt=""></p> <p><strong>How to submit</strong></p> <p>Since the last available day (in train dataset) was always in validation folds, we could not use this validation scheme for the traditional folds blending. So, when we wanted to make submit, we used the usual KFold validation with shuffle.<br> The approach was the following: all hypotheses and experiments, including hyperparameters tuning, were tested on true cv and afterwards these models were retrained on KFold.</p> <p><strong>Features and dimensionality reduction</strong></p> <p>We used PCA for dimensionality reduction for both tasks. Autoencoder was tested on citeseq dataset too, but performed slightly worse than PCA.<br> We also used "important" features in their raw form for citeseq task.<br> Moreover we created some additional features based on aggregations of "important" columns over metadata (for example, "mean_feature_1_by_donor"). Such features gave us + 0.0003 for GBDT model on local CV.</p> <p><strong>Models (ensemble)</strong></p> <p><em>Citeseq task</em> : 3x multilayer perceptrons, 1x Conv1D, 1x pyBoost (best single model).<br> <em>Multiome task</em> : 1x multilayer perceptron, 1x TabNet, 1x pyBoost (best single model).</p> <p>pyBoost seems to be the new SOTA on multioutput tasks (at least among GBDT models).<br> It's extremely fast to train as it uses GPU only and super easy to customize.<br> <a href="https://openreview.net/forum?id=WSxarC8t-T" target="_blank">Paper</a><br> <a href="https://github.com/sb-ai-lab/Py-Boost" target="_blank">Code</a></p> <p><strong>Some remarks about models</strong></p> <ul> <li><p><em>Multiome task</em>: all neural nets had 23418 output neurons. 
For pyBoost we reduced targets' dimension to 64 components using PCA.</p></li> <li><p>pyBoost was the best single model on True CV and KFold validation on citeseq data.</p></li> <li><p>For multiome task, it was the best model according to True CV and the<br> worst by KFold.</p></li> <li><p>We noticed that splitting targets into groups and building a separate pyBoost model for each group improved our local CV a lot. By default, pyBoost can split targets into groups randomly, so we decided to try to improve it by splitting targets into groups based on their clusters, however, in the end it worked nearly the same as random splitting.</p></li> </ul> <p><strong>Data used to train</strong></p> <p><em>Citeseq task</em> : all available data<br> <em>Multiome task</em> : day 7 data only.</p> <p>Solving multiome task, we noticed that there is a significant performance drop on day 7 (on True CV).<br> There was an idea that the reason is data drift in time, so we tried to train the model not on all available days, but only on the last available one. Locally, this improved our score by + 0.02. However, the problem was that there were no unseen days on the public leaderboard,<br> so training the model only on the last available day seriously dropped our public score (from 0.814 to 0.808). Nevertheless, we decided to follow the mantra "trust your cv" and as a result, this particular submission became our best on private.</p> <p>We also conducted a study on the similarity of days and found out that among available training days, day 3 is the most similar to private day 10 (day 7 is the second most similar).<br> Nevertheless, since for other days the most similar day was always the closest in time, we decided to train our models on day 7.</p> <p>Thank you very much for the kind words!</p> <p>In fact, the most similar day to private day 10 was determined in two ways:</p> <ol> <li><p>Visual analysis of the distribution of individual features + comparing the number of non-zero elements for different days</p></li> <li><p>We trained CatBoost classifier that tried to distinguish the day from the training dataset from day 10. The day with lowest ROC AUC we considered as the most similar to day 10.</p></li> </ol> <p>According to both of these approaches, day 3 looked as the most similar day (the next one was day 7)</p> <p>Hello!</p> <p>We were not interested in how our model performs on KFold cv. The only purpose of its usage was the possibility to train multiple models and blend them (instead of training one model on the whole train dataset). As an alternative for Kfold validation, a common bootstrap could also be used here.</p> <p>Good question! Frankly speaking, no. As we observed model performance across all folds, it seemed that new day was always much more problematic than new donor. So we concentrated on time component of this problem only.</p>
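<p>A sketch (assumed implementation) of the day-similarity check described in the replies above: train a classifier to separate cells of one training day from cells of the unseen day; the lower the ROC AUC, the more alike the two days look. The hyperparameters are illustrative.</p> <pre><code>import numpy as np
from sklearn.model_selection import cross_val_score
from catboost import CatBoostClassifier

def day_similarity_auc(X_train_day, X_unseen_day, seed=0):
    # label 0 = cells from the training day, label 1 = cells from the unseen day
    X = np.vstack([X_train_day, X_unseen_day])
    y = np.r_[np.zeros(len(X_train_day)), np.ones(len(X_unseen_day))]
    clf = CatBoostClassifier(iterations=200, verbose=0, random_seed=seed)
    return cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()
</code></pre>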
Open Problems - Multimodal Single-Cell Integration
9th place solution – strong fundamentals
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Open Problems in Single-Cell Analysis and Kaggle for organizing this exciting competition. Ever since I was a little kid, I have had a passion for understanding how things worked. Three topics fascinate me the most: how the universe works (astrophysics and quantum mechanics), how the brain works (AI), and how life works (genetics and molecular biology). This competition was a wonderful opportunity to combine two of these topics. Unfortunately, I failed to live up to the expectations I had coming in. I was unable to come up with any good ideas with the potential to advance the field. Many years ago, I read a few books on genetics, starting with “Shadows of forgotten ancestors” (big Carl Sagan fan), but unfortunately, they are all very superficial on the science. I believe my illiteracy in the area was too big a handicap to overcome.</p> <p>I’d like to congratulate the winners and thank everyone who shared their knowledge and ideas, particularly <a href="https://www.kaggle.com/ambrosm" target="_blank">@ambrosm</a>, <a href="https://www.kaggle.com/psilogram" target="_blank">@psilogram</a> and <a href="https://www.kaggle.com/pourchot" target="_blank">@pourchot</a>. I’d also like to thank and commend <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a> for his contributions and tireless effort to motivate others and advance the science around the competition.</p> <h1>My journey</h1> <p>Upon reading the competition description, looking at the data, reading some posts and browsing a couple of notebooks, I listed a set of ideas to try. At that stage, my hopes rested mainly on two of them: pseudo labeling unused test data and NLP transformer models. My plan was to build a strong model, generate pseudo labels, improve the model to get better labels, incorporate transformer models, gain a better understanding of the problem, come up with some breakthrough idea and have plenty of time to explore it. I knew that cite weighted more on the PB score, but I thought that multi’s low scores presented a better opportunity to differentiate. Despite having my sights on multi, I typically would build new code (create inputs, train, analyze results, predict, ensemble) first for cite and then adjust for multi. The thinking was that cite’s smaller data would make be easier to get the code right.</p> <p>The need for a strong model to produce pseudo labels led to a big focus on ensembling, which I believe was detrimental to my performance. Also, the size of the data proved to be a much bigger challenge than I anticipated. Easily more than 90% of the time I dedicated to this competition was spent on trying to make things work within the memory available in Kaggle. It took me a while to build my first ensemble and conduct training with pseudo labels. The results were very disappointing. I wondered if it was because the model was generalizing better to unseen data, or if pseudo labeling just didn’t work. 
I spent some more time improving the base models and gave it another try, but with the same poor results.</p> <p>I then focused on a few other small improvements to my models, while building a more manageable overall pipeline (I broke processing in several pieces to deal with the memory and disk constraints). Then family demands and the need to revisit my analysis of several prospect investments during the recent stock market rout, ate away at my personal time. Before I knew it, the competition was over, and I hadn’t moved past the more fundamental stuff. If I were to do it again, I’d focus more on better understanding the science, better feature engineering and better individual models.</p> <h1>CV strategy</h1> <p>The CV strategy was the first thing I defined. The way I understood the problem, we had to make two types of predictions for which I devised two different CV approaches:</p> <table> <thead> <tr> <th>Prediction</th> <th>CV method</th> </tr> </thead> <tbody> <tr> <td>Future day for 3 known donors</td> <td>day (d): 3 folds based on day (4 for multi)</td> </tr> <tr> <td>Future day for an unknown donor</td> <td>day/donor (dd): 9 folds based on day/donor (12 for multi). Training data excludes any records for the donor or day in the validation fold.</td> </tr> </tbody> </table> <p>Although this strategy gave me confidence on the results, it had the downside of leaving a lot of data on the plate during training. I decided to edge this approach by using a variation of day/donor in which I would extend the training sample to any data that wasn’t for the same donor/day as the validation (dde). The downside of this approach is that it will overfit, and its results cannot be fully trusted. The upside is that it can lead to a better model as it uses more data. The following diagram illustrates the 3 methods, and how I used them for my two selected submissions.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F982731%2F99aee27b5c305985ee02805bb6741222%2Fcv%20strategy.JPG?generation=1668662280551465&amp;alt=media" alt=""></p> <p>As you may notice, I didn’t bother about the LB range. I didn’t expect the absolute LB score to be of significance, so I only used it to check if CV improvements would translate to the LB. All methods provided good CV/LB correlation with one notable exception, which I suspected was caused by a bug.</p> <h1>Solution</h1> <h2>Feature engineering</h2> <p>I used combinations of the following input features in models optimized by Optuna:</p> <ul> <li>Raw: This refers to the original input features. I noticed that several people asked for the actual raw data, which was made available at some point. I decided to forego using it, because my intuition was that it wouldn’t make a difference and it would save me precious time. Based on other writeups, my intuition might have been wrong.</li> <li>PCA: input features reduced through PCA to up to 2048 principal components. Going beyond 2048 components didn’t seem to help. The PCA input was complemented by important features (borrowed from <a href="https://www.kaggle.com/ambrosm" target="_blank">@ambrosm</a>). I generated a version of PCA using all columns and another excluding important columns. I trained models using both the regular version of the PCA output and normalized versions. These latter versions didn’t seem to help, so I discarded them.</li> <li>Type: cell type used only for cite. 
I theorized that I could build a good classification model to predict it for multi, but using the training data available didn’t seem to help CV.</li> <li>Gender: donor gender</li> <li>Day</li> <li>KNN mean: mean value of input features for N nearest neighbors, reduced through PCA.</li> <li>Mean: mean value of input features for donor/day, reduced through PCA. I used both the raw PCA and a normalized version.</li> <li>Non-zero: mean number of non-zero features per cell for donor/day, reduced through PCA. Again, both the raw PCA and a normalized version.</li> <li>Binary: input features converted to binary. My intuition was that the value matter less than whether the feature was present or not.</li> </ul> <p>Contrary to my expectation, PCA input data performed better than raw data. It seems that it worked better to denoise data than other approaches I tried. After verifying that the public versions of important cite features helped training, I started using them, with the intention of conducting my own selection later. I never got to do it, though. Towards the end, I concluded I should have prioritized it.</p> <p>I used the following targets:</p> <ul> <li>Raw: original targets</li> <li>Binary: original targets converted to binary. I thought this would help denoise the target data without degrading performance (same intuition I applied to the input data). Although the correlation between the actual targets and the binary form was very high, this didn’t seem to help CV, so I discarded the approach.</li> <li>PCA: original targets reduced through PCA to up to 2048 principal components. Training with a large number of components and using a smaller number for the prediction boosted the CV of individual models (up to 0.002, with the high end of the range seen in weaker models), but hindered ensembling.</li> </ul> <p>I tried these approaches independently and in combination, using multiple heads and loss functions. I discarded the combination models because the added complexity didn’t seem to help CV.</p> <h1>model architecture</h1> <p>I used the following types of models:</p> <ul> <li>NNs, mostly MLPs, residual networks, and 1dCNNs.</li> <li>XGBD</li> <li>ElasticNet</li> </ul> <p>NNs covered a range of options: dropout, normalization, gaussian noise, activation and others. For loss I used correlation, mse and binary cross entropy (for binary targets). I tried incorporating autoencoders into the models to denoise data, but it didn’t help CV. Non NN models performed clearly worse and didn’t even help with ensembling. With better tunning they might have helped with ensembling, but I never made that a priority.</p> <p>Conceptually, this felt like a time series problem. However, as I understand it, the data collection process is intrusive and prevents us from having historical data for the same cell. I discarded using timeseries approaches given the small number of days we had and the risk of overfitting. In the last weekend, I tried enriching cell data with data from other cells for the same day/donor. This data was added as additional channels in a 3D input stream (batch, channel, features). I fed that to both LSTM and CNN models for a couple of dd folds with about the same results as my best existing models. With better tunning the results might have been different. Also, due to lack of time I only used the prediction of the first channel. I initially intended to ensemble the predictions from all channels as I theorized that the diversity would help. 
However, I wondered if it would offset the improvements of ensembling with other models, so I decided not to spend the time writing the corresponding code.</p> <h1>NLP Transformers</h1> <p>In my mind this problem is a classic case of language translation: from DNA to RNA, and from RNA to proteins. Transcription errors occur all the time and the measurement also introduces errors, hence the data ends up being noisy. I envisioned the following approach when I started the competition:</p> <ul> <li>Assign a token to each input feature, to be defined as the column_number + a constant; build the input streams using the tokens of non zero features.</li> <li>For multi, assign a token to each target column, to be defined as the column_number + a constant; build the target streams using the tokens of non zero columns.</li> <li>Split the data in some way that makes sense to accommodate the maximum length limitations of the transformer. For example, for multi I intended to break data per chromosome and then merge the outputs. I had a few ideas for the merge, but wanted to see the output data before making a decision. My expectation was that each protein would be produced only by one of the chromosomes and that could facilitate the merging. In the cases in which the stream length for a chromosome was still too large, I'd further split it in partially overlapping segments, which the model would run through the transformer, to subsequently concatenate the outputs and run through the rest of the model;</li> <li>Use any specific transformers for genetic data or regular ones for text (deberta, for example) and pretrain them with the competition data (e.g. MLM) or any other data publicly available (not needing tokens might facilitate that).</li> <li>For multi, build both sequence-to-sequence models that take the binary input features and produce binary targets, and models that directly predict the value of each target column. For cite, use only the latter approach.</li> <li>Depending on the performance of the transformer models, use their predictions directly in ensembling or to adjust the predictions of other models, e.g., merge both models and use the output of the transformer as a multiplier for the output of the other model, while also merging the losses of both models.</li> </ul> <p>Last Monday I finally gave it a go for multi. Given the limited time I had left, I went with a direct prediction of the target values using deberta. It didn’t take me long to write a draft version of the code, but I kept running into memory issues. I tried to work through them for a while, but it was late and after a week of not much sleep I sadly concluded that I had ran the clock. From the moment I started I was conscious that there wasn’t enough time left on that day to still use this approach in the competition, but I wanted to know if it would work.</p> <h1>ensembling</h1> <p>I used linear regression on oof data for ensembling. I intended to use only the cells that better matched the PB sample but ended up discarding that idea, because the adversarial approach I used suggested that the vast majority of the training data was easily distinguishable from the PB data. To address the fact that the split of cells per day/donor was significantly different between training and LB, I balanced the oof data to have an equal number of cells per day/donor.</p> <p>As mentioned earlier, I produced three sets of ensembles based respectively on d, dd and dde models. 
In the latter case, I excluded any models that used feature combinations that performed poorly with d and dd models, especially on the last day of the training data. A notable example was models using only PCA and day, which were the best performers for dde, but did poorly on d or dd. That suggested overfitting that would not generalize well to unseen days.</p> <p>The following diagram summarizes the main characteristics of the models that compose my best solution (dde).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F982731%2F7bbd8f5a720fee18a3e9940cd2bda8e6%2Fbest%20ensembles.JPG?generation=1668662251034402&amp;alt=media" alt=""></p> <h1>Final thoughts</h1> <p>My two final selections were an ensemble of dde models and an ensemble of merged d/dd models. The former performed better, but I suspect there is a problem with the merging of the d and dd ensembles. Out of curiosity, while writing this I submitted my best cite and multi models and got a 0.770982 PB score, which would rank 37th in the PB. Ensembling boosted it by +0.0012, which is consistent with what I measured with the CV. I should note though that the best cite model was not part of my selected ensemble. I’m pleased that I selected the ensemble with the highest PB score. I suspected there was going to be a big shakeup in the LB, except perhaps for the top positions, and for the most part that ended up happening. In the absence of any novel ideas, I credit my final position to strong fundamentals, particularly a solid CV strategy. That’s not what I expected to accomplish when I started, but I had a lot of fun participating in this competition and look forward to doing things differently, and hopefully better, next time.</p>
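<p>To make the "dd" (day/donor) validation scheme described above concrete, here is a minimal sketch of how such folds can be generated. This is my own illustration with toy metadata, not the author's code; the column names <code>day</code> and <code>donor</code> are assumed.</p>
<pre><code>import itertools
import numpy as np
import pandas as pd

def day_donor_folds(meta):
    """Yield (train_idx, valid_idx) for every populated (day, donor) pair.

    The validation fold is one day/donor combination; the training set drops
    every row that shares either that day or that donor with the fold.
    """
    for day, donor in itertools.product(meta["day"].unique(), meta["donor"].unique()):
        valid_mask = np.logical_and(meta["day"].eq(day), meta["donor"].eq(donor)).to_numpy()
        if not valid_mask.any():
            continue
        train_mask = np.logical_and(meta["day"].ne(day), meta["donor"].ne(donor)).to_numpy()
        yield np.where(train_mask)[0], np.where(valid_mask)[0]

# Toy metadata: 3 days x 3 donors gives the 9 folds mentioned for cite.
rng = np.random.default_rng(0)
meta = pd.DataFrame({
    "day": rng.choice([2, 3, 4], size=100),
    "donor": rng.choice(["donor_a", "donor_b", "donor_c"], size=100),
})
for train_idx, valid_idx in day_donor_folds(meta):
    print(len(train_idx), len(valid_idx))
</code></pre>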
Open Problems - Multimodal Single-Cell Integration
Idea: use of precalculated gene embeddings - a variation on Makotu's (3rd place) theme
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Makotu (@mhyodo) (3rd place) described interesting ideas:<br> <a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/366428" target="_blank">https://www.kaggle.com/competitions/open-problems-multimodal/discussion/366428</a> <br> Here is a variation on his theme.</p> <p>The column names in the CITE-seq features are genes, and biology has accumulated enormous amounts of information about them - but how can that information be incorporated into the model?</p> <p>1) Take precalculated embeddings for the genes (there are many ways to produce such embeddings - see some examples below).</p> <p>2) For each sample (i.e. cell), take only, say, the 100 most highly expressed genes.</p> <p>3) Create a new feature which is simply the average (or sum) of the embeddings of these top 100 genes.</p> <p>That is it. (A minimal sketch of these three steps is given at the end of this post.)</p> <p>==========</p> <p>Makotu's original idea was NOT to use precalculated embeddings, but to calculate embeddings by a clever use of the word2vec approach. That might be the more powerful approach for the task. However, the variation may have some benefits, since it allows existing biological knowledge to be incorporated.</p> <p>==========</p> <p>Examples of gene embeddings:</p> <p>1) Simplest way. Take any dataset of the form genes * samples and apply any dimensionality reduction to get genes * "metasamples" - this gives us embeddings for the genes. Taking large and famous datasets such as TCGA, one may hope that these embeddings capture some important biological information.</p> <p>2) Graph based. Take any graph involving genes - PPI (protein-protein interaction) networks or knowledge graphs (genes even appear in Wikidata). There are plenty of algorithms that produce vector embeddings from graphs, so any of them will give embeddings for genes.</p> <p>3) NLP based. Take a textual description of each gene and pass it through something like Sentence-BERT - we get an embedding for the gene. <br> We used that approach in the competition for feature selection - see the notebooks by Anton Kostin:<br> <a href="https://www.kaggle.com/code/visualcomments/genes-embeddings-clustermap" target="_blank">https://www.kaggle.com/code/visualcomments/genes-embeddings-clustermap</a></p> <p>PS <br> We analyzed PPI as a possible way to produce candidate features:<br> <a href="https://www.kaggle.com/code/visualcomments/cd-genes-ppi-neighbors" target="_blank">https://www.kaggle.com/code/visualcomments/cd-genes-ppi-neighbors</a><br> Some of them worked.</p> <p>==========</p> <p>Another, simpler variation of the Makotu idea, which can be used for any tabular dataset (not only genes): from the original feature matrix, make a new matrix of the same size containing only 1s and 0s, where the 1s are placed at the positions of the top-100 features. Take PCA/SVD of that matrix - you get new features.</p>
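<p>A minimal sketch of the three steps above, assuming we already have a precalculated gene-embedding matrix (here it is random and purely illustrative):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes, emb_dim, top_k = 500, 2000, 64, 100

expr = rng.poisson(0.3, size=(n_cells, n_genes)).astype(float)  # toy cells x genes matrix
gene_emb = rng.normal(size=(n_genes, emb_dim))                  # precalculated gene embeddings

# Step 2: indices of the top-k expressed genes per cell.
top_idx = np.argsort(expr, axis=1)[:, -top_k:]

# Step 3: new per-cell feature = mean embedding of those genes.
cell_features = gene_emb[top_idx].mean(axis=1)                  # shape (n_cells, emb_dim)
print(cell_features.shape)
</code></pre>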
Open Problems - Multimodal Single-Cell Integration
Open Problems | My solution and ideas
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi! Thanks to all the organizers and kagglers for this interesting, important and hard competition. I want to share my solution and ideas with you because it may be helpful. </p> <h1>Multiome</h1> <h3>Preprocessing</h3> <ol> <li>SVD with concatenated <a href="https://www.kaggle.com/datasets/stasborodynkin/feature-shop-for-mmscel-multiome" target="_blank">Feature Shop</a> (best is 256 components)</li> <li>SVD with default data (128, 256)</li> <li>SVD for targets (best is 128 components)</li> <li>Feature selection with <a href="https://www.kaggle.com/bejeweled/multiome-rf-feature-selection" target="_blank">Random Forest</a></li> </ol> <p>Corr selection, KNN worked bad.</p> <h3>Models</h3> <ol> <li>LGB single regression</li> <li>Ridge with pseudo-labelling </li> <li>NNs - dense NN with Bi-LSTM, dense NN, 1D CNN, 2D CNN (dense, reshape, 2D convs)</li> </ol> <p>TabNet, CB worked bad. NNs were trained as single regressions and multi regressions both.</p> <h1>CITEseq</h1> <h3>Preprocessing</h3> <ol> <li>Concatenated <a href="https://www.kaggle.com/datasets/alexandervc/feature-shop-for-multimodal-singlecell-competition" target="_blank">Feature Shop</a> for NN</li> <li>SVD with default data (256)</li> <li>Feature selection with <a href="https://www.kaggle.com/code/bejeweled/siteseq-rf-feature-selection" target="_blank">Random Forest</a></li> <li>Feature selection based on <a href="https://www.kaggle.com/code/bejeweled/siteseq-corr-feature-selection" target="_blank">correlation</a> -&gt; worked well for single regressions with flipping of negative correlated features</li> </ol> <h3>Models</h3> <ol> <li>LGB single regression with RF and corrs features</li> <li>CB single regression with RF features</li> <li>NNs - dense NN, 1D CNN, 2D CNN, with/without cell embedings</li> </ol> <p>TabNet also worked bad. </p> <h1>Blending</h1> <p>I tried 4 different methods:</p> <ol> <li>Normalized averaging</li> <li>Weighted normalized averaging by total oof correlation</li> <li>Weighted normalized averaging by single target oof correlation</li> <li>Stacking with Ridge</li> </ol> <p>Third method gives best results.</p> <h1>Ideas I did not try or tried a little</h1> <ol> <li>Transform vectors to distance matrices and fit it to 2D CNN. It may help make connections between features. 
We could also build a model with dual inputs - one for the feature vector and one for the distance matrix</li> <li>Augmentations (flips, a little noise, vector rotations) for CNN</li> <li>Autoencoders</li> <li>Clipping values and making groups of features</li> <li>Kalman filter (took too long to compute)</li> <li>Feeding only the non-zero features to the NN as sequences</li> <li>Different models for different cell types, or weighted averaging, because different models show different results on different cell types</li> </ol> <h1>Some of my notebooks with analysis</h1> <ol> <li>CITEseq targets <a href="https://www.kaggle.com/code/bejeweled/mmscel-citeseq-targets-eda?scriptVersionId=109777501" target="_blank">EDA</a></li> <li>Playground <a href="https://www.kaggle.com/code/bejeweled/mmscel-all-targs-modeling-playground-fold/notebook" target="_blank">modeling</a></li> <li>CITEseq sklearn different cells <a href="https://www.kaggle.com/code/bejeweled/citeseq-sklearn-cells-feature-shop?scriptVersionId=110648831" target="_blank">modeling</a></li> </ol>
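<p>To illustrate the blending method that worked best above (weighted normalized averaging by single-target OOF correlation), here is a rough sketch of my interpretation, not the author's code: each model gets one weight per target, proportional to its out-of-fold correlation for that target, and predictions are standardized per target before the weighted sum. Clipping negative correlations to zero is my own assumption.</p>
<pre><code>import numpy as np

def per_target_corr(oof, y):
    """Pearson correlation between OOF predictions and truth, per target column."""
    oof_c = oof - oof.mean(axis=0)
    y_c = y - y.mean(axis=0)
    denom = np.linalg.norm(oof_c, axis=0) * np.linalg.norm(y_c, axis=0) + 1e-12
    return (oof_c * y_c).sum(axis=0) / denom

def blend(test_preds, oof_preds, y_oof):
    # One non-negative weight per (model, target), proportional to OOF correlation.
    weights = np.stack([np.clip(per_target_corr(o, y_oof), 0, None) for o in oof_preds])
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-12)
    # Standardize each model's test predictions per target (scale is irrelevant
    # for a Pearson-correlation metric), then take the weighted sum.
    normed = [(p - p.mean(axis=0)) / (p.std(axis=0) + 1e-12) for p in test_preds]
    return sum(w[None, :] * p for w, p in zip(weights, normed))

# Toy usage with two models and 5 targets:
rng = np.random.default_rng(0)
y_oof = rng.normal(size=(100, 5))
oofs = [y_oof + rng.normal(scale=s, size=y_oof.shape) for s in (0.5, 1.5)]
preds = [rng.normal(size=(50, 5)) for _ in oofs]
print(blend(preds, oofs, y_oof).shape)
</code></pre>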
Open Problems - Multimodal Single-Cell Integration
Part of 21st place solutions: Methods driven by biological information
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all, I summarize the ideas I tried that were inspired by concepts from biology, bioinformatics and computational biology. I hope it helps you all understand this competition and the data more completely.</p> <p><a href="https://www.kaggle.com/code/llttyy/open-problem-biological-ideas/notebook" target="_blank">https://www.kaggle.com/code/llttyy/open-problem-biological-ideas/notebook</a></p> <p>For our best model, only two approaches worked: 1. selecting important genes, and 2. binarizing the multiome data.</p> <p>Hi, thanks for your comments! Yes, the important genes are the genes whose names appear in important_cols. These genes participate in the translation process, as indicated by their names.</p>
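<p>A small illustration of the "binarize multiome data" idea as I understand it (not the author's code): keep only presence/absence information in the sparse ATAC matrix before dimensionality reduction.</p>
<pre><code>import numpy as np
from scipy import sparse
from sklearn.decomposition import TruncatedSVD

X = sparse.random(1000, 5000, density=0.02, format="csr", random_state=0)  # toy multiome matrix

X_bin = X.copy()
X_bin.data = np.ones_like(X_bin.data)     # keep only open / not-open information

svd = TruncatedSVD(n_components=64, random_state=0)
Z = svd.fit_transform(X_bin)              # compact binary-presence features
print(Z.shape)
</code></pre>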
Open Problems - Multimodal Single-Cell Integration
Private 41st Solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for all kagglers and organizers for this competition.<br> This note is a record of my work on the competition.<br> This was a competition where the Private test data was from a future date that did not exist in the training data, and the so-called domain generalization performance was being tested.<br> On the other hand, there was an element of variation by date, and it was expected that it would be undesirable to completely ignore the date feature.<br> First, we conducted adversarial training (a task to classify training and test data) and found that Citeseq was capable of 99% classification, and we were concerned that training with this feature set would result in overtraining on the training data.<br> However, when we reduced the number of features to reduce the accuracy of Adversarial training, the score of Public LB also dropped significantly.<br> Therefore, we decided to devise some kind of biological features and to improve generalization performance through model variation.</p> <h2>✨ Result</h2> <ul> <li>Private: 0.769</li> <li>Public: 0.813</li> </ul> <h2>🖼️ Solution</h2> <h3>🌱 Preprocess</h3> <ul> <li>Citeseq<ul> <li>The input data was reduced to 100 dimensions by PCA.</li> <li>On the other hand, the data of important columns were preserved.</li> <li><a href="https://bering-ivis.readthedocs.io/en/latest/unsupervised.html" target="_blank">Ivis unsupervised learning</a> was used to generate 100 dimensional features.</li> <li>In addition, we added the sum of mitochondrial RNA cells to the features.</li> <li>Cell type in Metadata was added to the features.</li></ul></li> <li>Multiome<ul> <li>For each group with the same column name prefix, PCA reduced the number of dimensions to approximately 100 each.</li> <li>Ivis unsupervised learning was used to generate 100 dimensional features.</li></ul></li> </ul> <h3>🤸 Pre Training</h3> <ul> <li>Adversarial training (a task to classify training data and test data) is performed and the misjudged training data is used as good validation data.</li> <li>Prediction of Cell type for Multiome is performed and added to the features.</li> </ul> <h3>🏃 Training</h3> <ul> <li>StratifiedKFold with good validation data as positive labels.</li> <li>Pearson correlation coefficient was used for the Loss function. XGBoost was implemented as described below.</li> <li>TabNet also performed pre-training. (In this competition, pre-training was more accurate.)</li> </ul> <h3>🎨 Base Models</h3> <ul> <li>Citeseq<ul> <li>TabNet</li> <li>Simple MLP</li> <li>ResNet</li> <li>1D CNN</li> <li>XGBoost</li></ul></li> <li>Multiome<ul> <li>1D CNN<br> Citeseq scored well with an ensemble of various models.<br> On the other hand, Multiome had a strong 1D CNN and did not score well with ensembles of other models, so only the 1D CNN was used.</li></ul></li> </ul> <h3>🚀 Postprocess</h3> <ul> <li>Since the evaluation metric is the Pearson correlation coefficient, each inference result (including OOF results) was normalized before ensemble.</li> <li>Optuna was used to optimize the ensemble weights. 
Good validation data was used as the evaluation metric.</li> <li>Ensemble with Public Notebook x2 and teammate submissions.</li> </ul> <h2>💡 Tips</h2> <h3>Pearson Loss for XGBoost</h3> <p>XGBoost does not provide a Pearson Loss Function, so I implemented it as follows.<br> However, this implementation is slow in learning, and I have the impression that I would like to improve it a little more.</p> <pre><code>from functools import partial from typing import Any, Callable import numpy as np import torch import torch.nn.functional as F import xgboost as xgb def pearson_cc_loss(inputs, targets): try: assert inputs.shape == targets.shape except AssertionError: inputs = inputs.view(targets.shape) pcc = F.cosine_similarity(inputs, targets) return 1.0 - pcc # https://towardsdatascience.com/jax-vs-pytorch-automatic-differentiation-for-xgboost-10222e1404ec def torch_autodiff_grad_hess( loss_function: Callable[[torch.Tensor, torch.Tensor], torch.Tensor], y_true: np.ndarray, y_pred: np.ndarray ): """ Perform automatic differentiation to get the Gradient and the Hessian of `loss_function`. """ y_true = torch.tensor(y_true, dtype=torch.float, requires_grad=False) y_pred = torch.tensor(y_pred, dtype=torch.float, requires_grad=True) loss_function_sum = lambda y_pred: loss_function(y_true, y_pred).sum() loss_function_sum(y_pred).backward() grad = y_pred.grad.reshape(-1) # hess_matrix = torch.autograd.functional.hessian(loss_function_sum, y_pred, vectorize=True) # hess = torch.diagonal(hess_matrix) hess = np.ones(grad.shape) return grad, hess custom_objective = partial(torch_autodiff_grad_hess, pearson_cc_loss) xgb_params = dict( n_estimators=10000, early_stopping_rounds=20, # learning_rate=0.05, objective=custom_objective, # "binary:logistic", "reg:squarederror", eval_metric=pearson_cc_xgb_score, # "logloss", "rmse", random_state=440, tree_method="gpu_hist", ) # type: dict[str, Any] clf = xgb.XGBRegressor(**xgb_params) </code></pre> <h2>🏷️ Links</h2> <ul> <li><a href="https://github.com/IMOKURI/kaggle-multimodal-single-cell-integration" target="_blank">My Solution</a></li> </ul>
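<p>To make the adversarial-validation step above more concrete, here is a minimal sketch of one way to find "good validation data": train a classifier to separate train from test rows and treat the training rows it mistakes for test rows as validation candidates. This is my own illustration with a logistic regression on toy data; the team's classifier and threshold may differ.</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 50))
X_test = rng.normal(loc=0.3, size=(1000, 50))   # shifted, like a future-day test set

X_adv = np.vstack([X_train, X_test])
y_adv = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]   # 0 = train, 1 = test

clf = LogisticRegression(max_iter=1000)
p_test_like = cross_val_predict(clf, X_adv, y_adv, cv=5, method="predict_proba")[:, 1]

# Training rows that look like test rows become the validation pool.
good_valid_idx = np.where(np.greater(p_test_like[: len(X_train)], 0.5))[0]
print(len(good_valid_idx))
</code></pre>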
Open Problems - Multimodal Single-Cell Integration
Private 5th Solution (A Beginner part)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Intro</h1> <p>This is a scheme from a beginner. Public notebooks, the top scheme of last year, and ensemble have helped me a lot.</p> <h2>Citeseq</h2> <h3>Data preprocessing and feature engineering</h3> <p>I used two different methods<br> ①Preprocessing method of public notebook from <a href="https://www.kaggle.com/pourchot" target="_blank">@pourchot</a> . <br> ②Using PCA to reduce dimensions to 128 + Direct features based on absolute correlation to targets. <br> I have normalized the row after both methods.<br> The first method is more effective, and the second method is only used for ensemble. </p> <h3>Model</h3> <p>①MLP without BN and drop (Adam as the optimizer). I tried different activation functions and ensemble them, and this greatly improved CV. <br> ②LGBM. I trained two LGBM models (different data preprocessing).</p> <h3>CV</h3> <p>①groupkfold on donor<br> ②groupkfold on donor and day<br> The first one scored higher on public LB, but the second one performed slightly better on private LB.</p> <h1>ensemble</h1> <p>Through oof prediction, I selected 4 NN models to mix with 2 LGBM models, and determined the weight. The best CV score was 0.896(donor).</p> <h2>Multiome</h2> <h3>Data preprocessing and feature engineering</h3> <p>I used the top method of last year and made some adjustments.<br> This method has the following steps:<br> ①tf-idf<br> ②log1p<br> ③sklearn.preprocessing.Normalizer(norm="l2") or sklearn.preprocessing.Normalizer(norm="max")<br> ④PCA(512)<br> ⑤row normalization<br> ⑥Select the first 64 items in 512 and the first 100 items in 512 generated by direct dimension reduction as all features.</p> <h3>Model</h3> <p>MLP with drop(AdamW as the optimizer). I used different activation functions and ensemble them.</p> <h3>CV</h3> <p>①groupkfold on donor<br> ②groupkfold on donor and day</p> <h3>ensemble</h3> <p>I used 4 NN and ensembled them. The best CV score was 0.670(donor).</p> <p>In this way, I got the submission of 0.814 public LB and 0.772 private LB. I think this is probably the easiest way to win the gold medal.</p> <p>Finally, I would like to thank my two teammates <a href="https://www.kaggle.com/jcerpentier" target="_blank">@jcerpentier</a> <a href="https://www.kaggle.com/ahmedelfazouan" target="_blank">@ahmedelfazouan</a> . I have learned a lot from them. </p> <p>Looking forward to the next progress!</p>
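<p>For reference, a rough sketch of the Multiome preprocessing steps ① through ⑤ listed above, reconstructed on toy data (my own code, not the author's; the component count follows the write-up):</p>
<pre><code>import numpy as np
from scipy import sparse
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.preprocessing import Normalizer, normalize

X = sparse.random(2000, 10000, density=0.01, format="csr", random_state=0)  # toy counts

X = TfidfTransformer().fit_transform(X)          # step 1: tf-idf
X.data = np.log1p(X.data)                        # step 2: log1p on the non-zero entries
X = Normalizer(norm="l2").fit_transform(X)       # step 3: row normalization (or norm="max")
Z = TruncatedSVD(n_components=512, random_state=0).fit_transform(X)  # step 4: reduce to 512
Z = normalize(Z)                                 # step 5: row-normalize the components
print(Z.shape)
</code></pre>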
Open Problems - Multimodal Single-Cell Integration
Public 6th, Private 14th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Intro</h2> <ul> <li>I have been mainly working on the Cite part. I tried many things, </li> <li>multi part by <a href="https://www.kaggle.com/paragkale" target="_blank">@paragkale</a> <a href="https://www.kaggle.com/paragkale/private-14th-public-6th-multiome-portion" target="_blank">https://www.kaggle.com/paragkale/private-14th-public-6th-multiome-portion</a></li> <li>The following tricks gave me the most gain.</li> <li>I will update in this post about the code and what didn't work.</li> </ul> <h2>Extra Data</h2> <ul> <li>The raw count data released by the host.</li> </ul> <h2>Dimensionality Reduction</h2> <p>I think the most helpful one are:</p> <ul> <li>sklearn.decomposition.TruncatedSVD (128 comps)</li> <li>Self-made denosing auto encoder (128 hidden nodes)</li> </ul> <h2>Direct Features</h2> <ul> <li>Direct features based on matching the names</li> <li>Direct features based on absolute correlation to targets</li> <li>Direct features based on the list shared by the hosts in this thread <a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/366392" target="_blank">https://www.kaggle.com/competitions/open-problems-multimodal/discussion/366392</a></li> </ul> <h2>Use base models outcomes as NN inputs feature for ensembling</h2> <p>It is known that MSE is not a really good loss function for the competition metric. Therefore within each fold, we trained 4 base models and used their features for the NN input.</p> <ul> <li>sklearn.linear_model.Ridge</li> <li>sklearn.linear_model.MultiTaskElasticNet</li> <li>sklearn.kernel_ridge.KernelRidge</li> <li>sklearn.ensemble.HistGradientBoostingRegressor</li> </ul> <p>We added heavy noise to their predictions to make sure the NN can learn from other features as well</p> <pre><code> self.blender = torch.nn.Sequential( GaussianNoise(self.blend_noise), torch.nn.Linear(out_dim * 4, 128), torch.nn.LayerNorm(128), activation(), torch.nn.Dropout(self.blend_dropout), ) </code></pre> <h2>GroupK Cross Validation on Target Clusters</h2> <p>The tricks in this section increased both public and private LB, but we cannot compare the CV because it is a CV scheme change. Luckily it is (relatively, I guess?) performing well on both public and private.</p> <p>It is known that there are some subtle domain shifts between train, private and public test sets. However, the difficulty is that the shift is happening in at least 3 directions (donor, day, cell types). 
To create a hard but not too hard CV scheme, we find that clustering the target values performed very well on both of the public and private leaderboard.</p> <p>Let's consider the CV scheme selection as a spectrum:</p> <ul> <li>The easiest CV scheme: Random K fold (Downside: not representative of the test set)</li> <li>The hard CV scheme: GroupKfold by day/donor (Downside: too few fold to train)</li> <li>The hardest CV scheme 1: Time series split (Downside: wasting the last day data)</li> <li>The hardest CV scheme 2: Excluding the 1 day or 1 donor completely from the training set (Downside: too hard/defensive)</li> </ul> <p>Another reason of doing the clustering is that the day here is categorical, however in real life, time is continuous. GroupK CV by day is not that satisfying.</p> <p>The first image shows the target kmeans result (colors) visualized with the tsvd targets (points):<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F690886%2Fb5e02c4c3d9edc2029a1458d02976f34%2F1.png?generation=1668564701067041&amp;alt=media" alt=""></p> <p>Next, you can see the target clusters capture the cell type differences:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F690886%2Fa76444828dbe22e970e9af2a5909005a%2F2.png?generation=1668564719243229&amp;alt=media" alt=""></p> <p>And the shifts of day and donor are not that significant compared to the cell types in the context of target clustering:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F690886%2Fd061e807ed3799640c34629a225bc5cc%2F3.png?generation=1668564731817794&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F690886%2F53706c2ecd0ffc45cdf32d2517b71a8a%2F4.png?generation=1668564744081492&amp;alt=media" alt=""></p> <h2>Regularization/Augmentation</h2> <p>The tricks in this section increased both CV and LB.</p> <h3>Seed-bagging</h3> <p>I think most people have done this, we trained the same model a few more times with the different seeds for blending.</p> <h3>Mixup Augmentation and Stochastic Weight Averaging</h3> <p>The training is done on roughly 3 stages</p> <h4>1. Mix up augmentation stage</h4> <p>Since all features are numerical values, mixup worked well.</p> <pre><code>def mixup_augmentation(x: torch.Tensor, y: torch.Tensor, alpha: float = 5): lam = np.random.beta(alpha, alpha) rand_idx = torch.randperm(x.shape[0]) mixed_x = lam * x + (1 - lam) * x[rand_idx, :] target_a, target_b = y, y[rand_idx] return mixed_x, target_a, target_b, lam </code></pre> <h4>2. Normal training stage</h4> <h4>3. SWA stage</h4> <p><a href="https://pytorch.org/docs/stable/optim.html#putting-it-all-together" target="_blank">https://pytorch.org/docs/stable/optim.html#putting-it-all-together</a><br> This is similar to seed-bagging, I am not sure if they are overlapping or if they have their benefits here. </p> <p>For example, this is 4-seed groupk cv by donor result cv</p> <pre><code>cite_tsvd_50_torch_nn_oof_0.894400.npz 0.8943999611714102 cite_tsvd_50_torch_nn_oof_0.894471.npz 0.8944711303264004 cite_tsvd_50_torch_nn_oof_0.894500.npz 0.8945004071945007 cite_tsvd_50_torch_nn_oof_0.894614.npz 0.8946143443923297 0.8956681046650627 </code></pre> <p>I haven't compare the lb of seed bagging for so long time, so cannot give you a number now,.</p> <p>We have both noise and dropout. 
</p> <p>We have added this layer copied from the intenet with stddev ~ 0.8</p> <pre><code>class GaussianNoise(torch.nn.Module): def __init__(self, stddev): super().__init__() self.stddev = stddev def forward(self, din): if self.training: return din + torch.autograd.Variable( torch.randn(din.size(), device=din.device) * self.stddev ) return din </code></pre> <p>also 0.8 dropout as well</p> <p>To be clear, I am not saying we should ignore shifts in data.</p> <p>In the context of competition, the highest chance is that all participants cannot make a significantly closer public-private leaderboard score gap, which means the gap is more like a hidden difference between datasets, which is really hard to solve in a 3-month competition (or even years of research). </p> <p>Btw, I learned the concept and the difficulties of dataset shift in this paper: <a href="https://arxiv.org/abs/2007.00644" target="_blank">https://arxiv.org/abs/2007.00644</a></p> <blockquote> <p>Most research on robustness focuses on synthetic image perturbations (noise, simulated weather artifacts, adversarial examples, etc.), which leaves open how robustness on synthetic distribution shift relates to distribution shift arising in real data. …. most current techniques provide no robustness to the natural distribution shifts in our testbed. The main exception is training on larger and more diverse datasets</p> </blockquote> <p>For research, of course, we want to make the public-private leaderboard gap as close as possible. So, my reflection on this is we should have the leaderboard ranking based on the public-private leaderboard score gap in this type of competition. Then the competition ranking aligns with the host's objective of studying domain shift and domain adaptation. I think this is a feature request to <a href="https://www.kaggle.com/ryanholbrook" target="_blank">@ryanholbrook</a> , because it seems like people now care more about robustness then performance.</p> <p>Lastly, out of curiosity, I want to ask for the host's comment <a href="https://www.kaggle.com/danielburkhardt" target="_blank">@danielburkhardt</a> on the current finalized public-private leaderboard gap (0.81x ~ 0.77x). Is this gap "good enough", "can be improved" or "totally unacceptable"? Without domain knowledge, I think the gap is not crazily huge, is it expected?</p> <p>Thanks for you kind words.</p> <p>As always, I can't conclude it is really helpful based on only the final result; I think we need more experiments for most tricks reported in this comp to "conclude" what tricks are actually helpful to both unseen donors and unseen days.</p> <p>If you look at our sub sorted by private score in the below image, groupk by donor got the gold range private scores, but the public score is very low… target clustering seems to be ok-ish…</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F690886%2F86c6579d569842d93719098a799d146b%2FScreenshot%202022-11-26%20at%2010.56.25%20PM.png?generation=1669478203034830&amp;alt=media" alt=""></p> <p>If I could redo the entire competition again, I think I would do a simple K-fold, but validate/early-stop on the last day of the validation fold data. 
This trick has been used on kaggle many times to allow us to train on full data but still fit towards the latest data in time.</p> <ul> <li>You can see <a href="https://www.kaggle.com/AmbrosM" target="_blank">@AmbrosM</a> mentioned here as well <a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/366395#2031471" target="_blank">https://www.kaggle.com/competitions/open-problems-multimodal/discussion/366395#2031471</a> </li> <li>My old team has used this trick before as well (3 years ago): <a href="https://www.kaggle.com/competitions/nfl-big-data-bowl-2020/discussion/119395" target="_blank">https://www.kaggle.com/competitions/nfl-big-data-bowl-2020/discussion/119395</a></li> </ul> <p>I think I was overthinking about the "domain shift" here. There are always some domain shifts in kaggle data, if you compare the shakeup this time to other historical kaggle competitions, this time is not huge… And looking at the gold solutions, nothing crazy/fancy domain adaptation skills have been performed. Mostly is about careful feature generation/selection if I hasn't missed anything.</p> <p>I tried to rank the teams by 2 different "robustness measures" in this notebook<br> <a href="https://www.kaggle.com/code/kingychiu/robustness-on-open-problems-multimodal?scriptVersionId=112466254" target="_blank">https://www.kaggle.com/code/kingychiu/robustness-on-open-problems-multimodal?scriptVersionId=112466254</a></p> <p>haha, I realized it is tricky to use robustness for a leaderboard because, generally, underfit models / poorly performed teams got quite robust scores… </p>
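<p>To make the "GroupK cross-validation on target clusters" idea above concrete, here is a small sketch of one way to implement it (my own illustration on toy data; the numbers of SVD components and clusters are assumptions): reduce the targets with truncated SVD, cluster cells with K-means on the reduced targets, and use the cluster id as the group for GroupKFold.</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
Y = rng.normal(size=(5000, 140))      # toy target matrix (cells x proteins)

Y_red = TruncatedSVD(n_components=16, random_state=0).fit_transform(Y)
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Y_red)

gkf = GroupKFold(n_splits=5)          # each fold holds out one target cluster
for train_idx, valid_idx in gkf.split(Y, groups=clusters):
    print(len(train_idx), len(valid_idx))
</code></pre>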
Open Problems - Multimodal Single-Cell Integration
Up 580 positions on the LB and a bronze medal for a simple CatBoost solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I guess most people here understand that significant moves on the lb cause emotions. Right now I feel excited about the results.</p> <p>My plan was to select and engineer features using catboost models and then create NN models, but a week before the competition end I realized I was out of time and out of kaggle accelerator quota to test NN models anyway, so I decided to make the improvements I could and upload whatever result I was able to achieve, even if it was lower than the bulk of competition results. <br> This morning the private lb showed my models aren't that bad.</p> <p>Now a few points about the data, my models and how I worked on it.</p> <ul> <li>I spent a lot of time trying to make some useful features from raw data. All those efforts turned out to be fruitless, as the results were better without them.</li> <li>I could not understand why so many notebooks published here on kaggle have no features made from metadata, I mean donors and days. I made one-hot vectors for donors and initially also made one-hot vectors for days. But I read what <a href="https://www.kaggle.com/AmbrosM" target="_blank">@AmbrosM</a> wrote about the data being a time series and tested one-hot vectors for days versus just an int value for days (tested on Multiome data only, then changed the day feature to int for both cite and multiome). I guess that was an important reason why my private lb results turned out to be that much better than the public lb results.</li> <li>I put a lot of effort into selecting the best features for the CITE model. In 10 batches (2000 features per batch) I tried all non-constant CITE inputs and selected about 600 features total from all batches with the highest feature_importance_ (see the sketch at the end of this post).</li> <li>I also tried to select some meaningful features for multiome among those we were given as inputs, but here I had to pre-select correlating features. Out of about 1500 pre-selected features I found three that were as important as the 30 SVD components, while all the other features were less important than any SVD component. ['svd_x_chr1:630875-631689', 'svd_x_chr1:633700-634539', 'svd_x_chr17:22520955-22521852']</li> <li>I had an idea to fit a second-level linear model for each multiome target individually, but in the end didn't even try it.</li> <li>I divided all the cite targets into groups and used catboost models with different parameters: increased learning rates and iterations for the best models, and decreased these parameters for the worst ones. I also used stronger parameters for the catboost models predicting the first components of the target SVD in the multiome subtask.</li> <li>In a number of models published on kaggle I saw that people calculate SVD on the train inputs only and then just use transform for the test set, instead of combining the train and test inputs and running fit_transform on both. I guess those people saw their positions drop on the lb after the competition ended.</li> </ul> <p>Didn't notice your comment. Kaggle doesn't notify about replies, so I missed your question.
I understand the question is most likely no longer relevant, but I'll answer anyway.</p> <ol> <li>As for cross-validation I started with keeping out either one day or one donor, but very soon noticed that keeping out a female donor made no sense as results were so much worse, than in case if I kept out a male donor. So, most of time I worked with a 5-fold cross-validation, keeping out one of days or one of male donors. Close to the end of competition I used a 3-fold cross-validation for CITE, keeping one day out, and 2-fold cross-validation for multiome, keeping out either first or last day of train data.<br> The primary reason was to reduce the accelaration quota usage, and I also wanted to focus on getting better score, and this meant predicting the last day of test data better.</li> <li>I didn't use cell type as metadata since this information was not available for the test dataset. Final submission used information about all donors present in train as one-hot vector (so, one of those features actually was a sex feature as only one donor was female) and information about day as an integer. Day and sex metadata features were often among the most important features for most of 140 CITE targets. But there were some targets for which either day or sex were not important.</li> </ol>
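<p>As an illustration of the batched feature-selection bullet above, here is a rough sketch of my reconstruction (not the author's code): split the CITE inputs into batches of 2000 columns, fit a CatBoost model per batch, and keep the columns with the highest feature importance from each batch. The number kept per batch is an assumption chosen so that roughly 600 features survive in total.</p>
<pre><code>import numpy as np
from catboost import CatBoostRegressor

def select_features_in_batches(X, y, batch_size=2000, keep_per_batch=60):
    selected = []
    for start in range(0, X.shape[1], batch_size):
        cols = np.arange(start, min(start + batch_size, X.shape[1]))
        model = CatBoostRegressor(iterations=200, verbose=0, random_seed=0)
        model.fit(X[:, cols], y)
        importance = model.feature_importances_
        # Keep the most important columns of this batch (global column indices).
        selected.extend(cols[np.argsort(importance)[-keep_per_batch:]])
    return np.array(selected)

# Toy usage with a single target column (the author worked per group of targets):
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6000))
y = 0.5 * X[:, 0] + rng.normal(size=500)
print(select_features_in_batches(X, y, keep_per_batch=20)[:10])
</code></pre>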
Open Problems - Multimodal Single-Cell Integration
[6th private - 3rd public] Summary of our solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Feature selection first, then the reduction. We want the embedding feature learned from clean data</p> <p>The raw count dataset is this one: <a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/359355" target="_blank">https://www.kaggle.com/competitions/open-problems-multimodal/discussion/359355</a></p> <p>First of all, thanks to the Kaggle team and the host for providing this cutting-edge technology dataset and hosting this great challenge. I learned a lot from the competition. Many thanks to my teammates <a href="https://www.kaggle.com/mathormad" target="_blank">@mathormad</a> <a href="https://www.kaggle.com/nguyenvlm" target="_blank">@nguyenvlm</a> for hard-working days. This is a summary of what we did to get 3rd place in LB and 6th place in private</p> <h1>Local CV (for both cite and multiome task)</h1> <p>We use <strong>stratified-5-Fold</strong> (stratify by day/donor/celltype) CV strategy. We do not use GroupKFold to avoid overfitting in public, and the private dataset day is also quite "far" from the training dataset, so it's a bit risky to use a CV split by day (we've tried it, but both CV and LB drop). In our opinion, <strong>this is the key to our stable placement in both public and private leaderboards. Our best submission in CV are also the best on Public and Private leaderboard.</strong></p> <h1>Feature engineering (cite)</h1> <ul> <li>Feature selection: remove all 0 cols group by metadata (day/donor/cell_type/train/test) --&gt; remove ~4,500 features</li> <li>Dimension reduction: we use different methods to increase the diversity for the final ensemble: 240 n_components SVD/quantiledPCA and denoise Autoencoder (256 latent dims). For SVD, increase n_iter params slightly improve CV by 2e-4 (but run longer) (ours is 50 iter, the default of sklearn library is only 5). The denoise autoencoder helps us improve CV by 1e-3 (comparing to SVD/PCA)</li> <li>Feature importance<ul> <li>Use name matching from this discussion <a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/349242" target="_blank">https://www.kaggle.com/competitions/open-problems-multimodal/discussion/349242</a></li> <li>Search using xgb feature importance: for cite, we fit multiple xgb for each target (full 22k feature) then choose the top 5 important features of each target. In total, we get around 500 important features for our model.</li></ul></li> <li>TargetEncoder for xgb: we apply target encode the cell_type for each target (each cell_type will be represented by a 140-dims vector)</li> </ul> <h1>Feature engineering (multiome)</h1> <ul> <li>Feature selection: remove chY features, which is not correlated to our target. 
We also remove all 0 cols group by metadata (day/donor/cell_type/train/test) --&gt; remove ~ 500 features</li> <li>Dimension reduction: we use 256 n_components SVD fitted with 200iter</li> </ul> <h1>Training process</h1> <ul> <li>Use custom loss (weighted correlation and MSE loss)</li> <li>Use c-mixup to increase the diversity for tabnet</li> <li>SWA when training MLP, Denoise Autocoder, Tabnet and 1D-CNN</li> <li>Adam optimizer with high learning rate (1e-2)</li> <li>(Multiome) XGB is trained with the PCA of target as label</li> </ul> <h1>External data</h1> <p>For cite task, we <strong>apply the whole training process above for the <a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/359355" target="_blank">raw count dataset</a></strong>, then ensemble with the original one. This significantly boosts the performance by 1e-3<br> For multiome task, we do not use the raw count dataset because it's lacking some rows, we cannot match the OOF between the original and raw count so we do not have CV score to validate. I think it should work but we do not have enough time to rematch the OOF</p> <h1>Stacking feature &amp; Pseudo-labeling</h1> <p>We concatenate the prediction of 1 model with current features as input to another model (e.g. use output of xgb as input of MLP, and vice versa), which improves the performance of a single model around 5e-4. <br> We also use pseudo labeling (not much improvement, about 2-3e-4)</p> <h1>Post process</h1> <ul> <li>(Cite) Apply standard scaler (axis 1) for the output of each fold before taking the average</li> <li>(Multiome) Some of the target is all 0, so we remove them from the training process and replace 0 later. This helps us increase our CV by 1e-3</li> </ul> <h1>Ensemble</h1> <ul> <li>(Cite) Our final submission is a blending of the following models:<ul> <li>MLP</li> <li>XGB</li> <li>Tabnet</li> <li>1D-CNN<br> Each model is trained on the original and raw count dataset -&gt; 8 models in total</li></ul></li> <li>(Multiome) Blending of:<ul> <li>MLP</li> <li>XGB</li> <li>Tabnet</li></ul></li> </ul> <h1>Tryhard</h1> <p>We worked 6 hours/day, from the very beginning of the competition to the last hour. Especially for this competition when we have 2 tasks with 2 datasets, a lot of work and experiments to do. The most important thing I learned during this competition is that the more time you spend, the higher place you will be.</p> <h1>What does not work</h1> <ul> <li>Encoder for multiome task</li> <li>Use important features in multiome task</li> <li>Some bio techniques such as Ivis (dimension reduction), Magic (denoising)</li> <li>Use raw label for training</li> <li>…</li> </ul> <p>To my mind, any important features searching strategy works as well. I tried some of them (correlation features, shap, random selection, etc) and they all lead to a common subset of features (let's say all strategies result overlap more than 80%)</p>
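<p>As a small illustration of the custom loss mentioned in the training process above (weighted correlation plus MSE), here is an assumed form of such a loss in PyTorch. The row-wise correlation term and the 0.7 weight are my own assumptions, not the team's exact code:</p>
<pre><code>import torch
import torch.nn.functional as F

def corr_mse_loss(pred, target, corr_weight=0.7):
    # Row-wise Pearson correlation via cosine similarity of centered vectors.
    pred_c = pred - pred.mean(dim=1, keepdim=True)
    target_c = target - target.mean(dim=1, keepdim=True)
    corr = F.cosine_similarity(pred_c, target_c, dim=1)
    return corr_weight * (1.0 - corr).mean() + (1.0 - corr_weight) * F.mse_loss(pred, target)

# Toy usage:
pred = torch.randn(8, 140, requires_grad=True)
target = torch.randn(8, 140)
loss = corr_mse_loss(pred, target)
loss.backward()
print(float(loss))
</code></pre>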
Open Problems - Multimodal Single-Cell Integration
My summary (72nd place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you all very much.<br> I would like to share what I did in this task.</p> <p>Citeseq:<br> I did not apply PCA to the explanatory variables; I used the explanatory variables with a higher std than GAPDH (a housekeeping gene) for prediction.<br> The dropout rate was set to 0.75 in the first layer of the MLP to automate the selection of explanatory variables.<br> Catboost made predictions for each of the 140 targets separately; without hyperparameter tuning, Catboost was more accurate than lightgbm.</p> <p>Multi:<br> The explanatory variables were dimensionally compressed using SVD for each chromosome; the number of explanatory variables per chromosome was reduced to one twentieth of the original count.<br> The target variables were converted to int8. This was expected not only to compress the data but also to round off measurement error.</p> <p>Thanks to Dr. Alexander Chervov!</p> <p>Thank you very much!!!<br> I will share the catboost feature importances from the single-target models of the cite task.<br> Keras' permutation importance was too heavy for my computing resources, so I gave up on it…</p> <p><a href="https://www.kaggle.com/datasets/miyawakiyoshifumi/cite-cat-importance" target="_blank">https://www.kaggle.com/datasets/miyawakiyoshifumi/cite-cat-importance</a></p>
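<p>A small sketch of the Citeseq feature filter described above: keep only the input genes whose standard deviation exceeds that of the housekeeping gene GAPDH. The data and column names here are hypothetical; the real inputs use Ensembl-style gene ids.</p>
<pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.lognormal(size=(1000, 5)),
                 columns=["ENSG_A", "ENSG_B", "ENSG_GAPDH", "ENSG_C", "ENSG_D"])

gapdh_col = [c for c in X.columns if "GAPDH" in c][0]
stds = X.std()
keep = stds.index[stds.gt(stds[gapdh_col])]   # columns more variable than GAPDH
X_filtered = X[keep]
print(list(keep))
</code></pre>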
Open Problems - Multimodal Single-Cell Integration
private 29th place (public 4th) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems - Multimodal Single-Cell Integration <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and to everyone who shared their ideas in public notebooks and discussions. I also needed to gain experience in analyzing single-cell data, and this competition helped me gain a lot of knowledge and will remain a precious experience.</p> <p>I'm a postgraduate student and this is my first time joining a Kaggle competition, so if the organization of this notebook is not clear and you want to know more, you can comment below or send me a private message. Because I didn't manage the time very well, the final plans were not finished and submitted. My focus was on feature engineering; some methods such as DCA, Magic, TruncatedSVD, FA, LDA and so on had a positive effect on the final result. For cross-validation, I divided the data by batch, but I don't think it's a good method for the private score. For the ensemble, I selected models including NN, Lightgbm, Catboost, Xgboost and Kernel ridge (plain ridge for citeseq).</p> <p>Because the feature-engineering parts are not well organized, for now I will just share the NN model structure that gained the best score among my single models, though the final TruncatedSVD parameters were changed a little: <a href="https://www.kaggle.com/songqizhou/private-39th-public-4th-s-basical-single-model-nn" target="_blank">https://www.kaggle.com/songqizhou/private-39th-public-4th-s-basical-single-model-nn</a>.</p> <p>Thanks for your attention, it's my honour to do that. I'll publish everything on github and post a link on this page when I have organized it well.</p> <p>Thank you, Marília Prata. 😄</p>
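<p>As a toy sketch of combining several of the dimensionality-reduction methods mentioned above into one feature block (my own illustration, not the author's code; the component counts are arbitrary):</p>
<pre><code>import numpy as np
from sklearn.decomposition import FactorAnalysis, LatentDirichletAllocation, TruncatedSVD

rng = np.random.default_rng(0)
X = rng.poisson(0.5, size=(300, 500)).astype(float)   # toy count matrix

parts = [
    TruncatedSVD(n_components=16, random_state=0).fit_transform(X),
    FactorAnalysis(n_components=16, random_state=0).fit_transform(X),
    LatentDirichletAllocation(n_components=16, random_state=0).fit_transform(X),  # needs counts
]
features = np.hstack(parts)   # concatenated reduced representations
print(features.shape)
</code></pre>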
Open Problems – Single-Cell Perturbations
#13: U900 team - PYBOOST is what you need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We would like to express great thanks to Kaggle and the organizers for creating that exciting (and quite difficult) challenge which is devoted to cutting-edge questions in bioinformatics. Research community will surely benefit from that. And great thanks to all participants and those who shared their ideas, notebooks, datasets, insights…</p> <p>Here is the report on U900 team approach. We follow the guidelines of the report provided by the organizers. The detailed Kaggle-style write-up of the solution is placed in the section 3.2 "Model design. Details" - Kagglers may prefer to jump to that subsection directly. </p> <h1>Context</h1> <p>Competition Overview: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a><br> Open Problems: <a href="https://openproblems.bio/" target="_blank">https://openproblems.bio/</a></p> <h1>Table of contents</h1> <p>We follow <a href="www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview/judges-prizes-scoring-rubrics" target="_blank">organizer's guideline</a>:</p> <ol> <li>Integration of Biological Knowledge</li> <li>Exploration of the problem</li> <li>Model design</li> <li>Robustness</li> <li>Documentation &amp; code style</li> <li>Reproducibility</li> </ol> <h1>Highlights</h1> <ul> <li>Main innovative tool - new gradient boosting algorithm designed for MULTI-target tasks - PYBOOST - developed by team member A. Vakhrushev. Effectiveness to predict thousands targets at once - distinguishes it from XGBoost, etc. E.g. aftermath: <a href="https://www.kaggle.com/code/alexandervc/op2-explore-4th-place-magic" target="_blank">solo PYBOOST</a> model can achieve private score 0.718 - better than top1 - 0.728. </li> <li>Openness and knowledge sharing. Team shared dozens notebooks, posts, datasets during the challenge - obtained: hundreds forks, thousands views, among 10 upvoted code notebooks 4 from the team (in particular <a href="https://www.kaggle.com/code/alexandervc/op2-eda-baseline-s" target="_blank">top1</a>). <a href="https://www.kaggle.com/code/alexandervc/pyboost-secret-grandmaster-s-tool" target="_blank">PYBOOST approach</a> has been openly shared, medal winning solutions incorporate it and as well as all top scored publicly open solo-models. We also organized and shared on Youtube webinars around the challenge (<a href="https://youtu.be/dRG3qTaALp0?si=wruKSL2wu-DZb6D2" target="_blank">1</a>,<a href="https://youtu.be/6ySKxnjHX8Y?si=llQxil9FCY-NB5Mc" target="_blank">2</a>,<a href="https://youtu.be/lcc5vY-Pycs?si=94hhV9IOwcbLbZHP" target="_blank">3</a>,) (as well as the one in 2022: <a href="https://youtu.be/aqUOz3nFYm4?si=XLWxMsoef8l6OpVU" target="_blank">1</a>,<a href="https://youtu.be/dS0p3e-Je90?si=REmpRqLgY3pIOdhO" target="_blank">2</a>… ) - with thousand+ views. 
</li> <li>Not only PYBOOST: several neural networks, in depth analysis of cross-validation schemes, methods to carefully control the diversity for models ensemble, non-standard approach to ensemble - forms the solution.</li> <li>Stability: 1) our public and private leaderboard rankings are approximately the same 2) <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/458939" target="_blank">aftermath:</a> correlation between public and private scoring - 0.98. Thus our models are stable and generalize well on unseen data - thanks to careful cross-validation for solo models as well as diversity control of the entire ensemble.</li> <li>In-depth biological knowledge exploration: we performed and publicly shared standard single-cell pipelines analysis with <a href="https://www.kaggle.com/code/alexandervc/op2-rna-seq-data-scanpy-adata-cell-cycle" target="_blank">Scanpy</a> and <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-adata-analysis-with-seurat" target="_blank">Seurat</a>, <a href="https://www.kaggle.com/code/alexandervc/op2-rna-seq-data-scanpy-adata-cell-cycle" target="_blank">cell cycle analysis</a>, <a href="https://www.kaggle.com/code/alexandervc/op2-eda-baseline-s" target="_blank">top upvoted EDA notebook</a>, <a href="https://www.kaggle.com/datasets/antoninadolgorukova/op2-supplementary-calcs-for-ml" target="_blank">created</a>, <a href="https://www.kaggle.com/code/alexandervc/op2-category-encoders-chembert-fingerpints-moldes?scriptVersionId=150999986&amp;cellId=1" target="_blank">benchmarked</a> and analyzed <a href="https://www.kaggle.com/code/alexandervc/eda-morgan-fingerprint-features" target="_blank">1</a>,<a href="(https://www.kaggle.com/code/alexandervc/eda-molecular-descriptors-features" target="_blank">2</a> many features like ChemBert, molecular descriptors, Morgan fingerprints, etc…</li> </ul> <h1>1. Integration of Biological Knowledge</h1> <h2>1.1 Did you use the chemical structures in your model? Did you use other data sources? Which ones, why?</h2> <h4>Use of SMILES.</h4> <p>One of our key Neural Networks (see section “Family of Neural Networks based on NLP-like SMILES embedding”) use encoding for compounds based on their SMILES representation. It starts with Text Vectorization followed by Embedding layer and thus learns the embedding from the current data. We extended the training set with <a href="https://github.com/Ebjerrum/SMILES-enumeration" target="_blank">SMILES augmentation library</a>, unfortunately - no score uplift.</p> <h4>Use and benchmark Morgan Fingerprints and Molecular Descriptors, ChemBert embeddings.</h4> <p>We encoded compounds by these techniques (<a href="https://www.kaggle.com/code/antoninadolgorukova/op2-feature-engineering/notebook" target="_blank">Notebook</a>, <a href="https://www.kaggle.com/datasets/antoninadolgorukova/op2-supplementary-calcs-for-ml" target="_blank">Kaggle dataset</a>, <a href="https://www.kaggle.com/code/alexandervc/eda-morgan-fingerprint-features" target="_blank">EDA1</a>, <a href="https://www.kaggle.com/code/alexandervc/eda-molecular-descriptors-features" target="_blank">EDA2</a> ). Systematically compared these features with other encodings: ChemBert embeddings, pure machine learning encodings: one-hot, Helmert contrast encoding, Backward Difference. 
The tables in the <a href="https://www.kaggle.com/code/alexandervc/op2-category-encoders-chembert-fingerpints-moldes?scriptVersionId=150999986&amp;cellId=1" target="_blank">notebook</a> show a somewhat surprising outcome: the simplest one-hot encoding is the most effective among them - at least among the encodings that do not incorporate the targets; target encoding techniques are more effective and were <a href="https://www.kaggle.com/code/alexandervc/op2-target-encoders" target="_blank">benchmarked separately</a>. (All these notebooks and datasets were openly shared during the challenge.) The final ensemble did not include these models. </p> <h4>DrugBank</h4> <p>We also analyzed and shared on Kaggle the DrugBank database ( <a href="https://www.kaggle.com/datasets/antoninadolgorukova/op2-supplementary-calcs-for-ml/data?select=drug_bank" target="_blank">Kaggle dataset</a> ) with the idea of splitting compounds into similarity groups and using group indicators as additional features for our models. However, due to technical reasons (not all challenge compounds were found in DrugBank) and lack of time, that was not implemented. Aftermath: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/460567" target="_blank">team#43 reported</a> an uplift for Pyboost from a similar idea.</p> <p>Our other models relied on a pure ML technique for encoding compounds and cell types - target encoding. </p> <h2>1.2 What representation of the single-cell data did you use? Did you reduce genes into modules? Did you learn a gene regulatory network?</h2> <p>Mainly we worked directly with the pseudo-bulk differential expression train dataset provided by the organizers ('de_train.parquet'). Various target encoding techniques (see “model design” section) were employed. </p> <h3>Genes reduction by clustering - helps some models</h3> <p>Two of our models included reducing the genes into groups. The genes were clustered by K-means into 3 groups based on the input train dataset. Features were constructed by target encoding techniques for each group, and neural networks predicted each group independently; the predictions were concatenated at the final step. These models are among our top-scoring solo models (0.569, 0.570), and they also allowed us to increase diversity in that family of our models. See e.g. the <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-simple-mlp-part-of-13th-place-solution?scriptVersionId=154341265&amp;cellId=56" target="_blank">correlations clustermap</a> for that family of models - the two mentioned above are N3,4 ("3kmeans" in the id). </p> <h3>Use of raw scRNA-seq counts data</h3> <p>Another two of our models employed raw single-cell RNA sequencing data. The raw counts were aggregated by cell type and compound, followed by PCA and target encoding (see <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-adata-analysis-with-seurat#6.1.-Make-features" target="_blank">section 6.1. Make-features</a> ). Thus we created new features which were used for training the neural networks. These features were concatenated with the original ones - we did not gain performance, but we gained some diversity, so a blend with the original model brings an uplift. The performance of the original model and the one with raw-count features is described in the <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-simple-mlp-part-of-13th-place-solution?scriptVersionId=154412513&amp;cellId=49" target="_blank">table</a> - the second-to-last row (MLPv15 TE scaled_counts_features) - public score 0.583, similar to the other models. All the models from that table were averaged, giving a score of 0.573, and that average entered the final ensemble as a component (described in the <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-simple-mlp-part-of-13th-place-solution?scriptVersionId=154412513&amp;cellId=52" target="_blank">next table</a>). </p>
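<p>For readers who want to see the shape of the raw-count feature construction, here is a minimal, hedged Python sketch of the idea; our actual pipeline lives in the linked Seurat notebook, so the variable names below (an AnnData object <code>adata</code> with <code>cell_type</code> / <code>sm_name</code> columns in <code>.obs</code>) and the normalisation choices are assumptions for illustration, not our exact code.</p>
<pre><code># Hedged Python sketch of the raw-count features (our actual pipeline is in the linked Seurat
# notebook). Assumption: adata is an AnnData object whose .obs has "cell_type" and "sm_name".
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

X = adata.X.toarray() if hasattr(adata.X, "toarray") else adata.X
counts = pd.DataFrame(X, columns=adata.var_names)
counts[["cell_type", "sm_name"]] = adata.obs[["cell_type", "sm_name"]].to_numpy()

# pseudo-bulk: sum the raw counts over all cells of one (cell type, compound) pair
pseudobulk = counts.groupby(["cell_type", "sm_name"]).sum()
pseudobulk = np.log1p(pseudobulk.div(pseudobulk.sum(axis=1), axis=0) * 1e4)   # CPM-like + log1p

# low-dimensional representation, later target-encoded / concatenated with the other features
pca_feats = pd.DataFrame(PCA(n_components=50).fit_transform(pseudobulk),
                         index=pseudobulk.index)
</code></pre>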
<h2>1.3 How did you integrate the ATAC data? Which representation did you use?</h2> <p>Integration of single-cell ATAC data, or any other single-cell data (e.g. CITE-seq), can be done by exactly the same scheme as described and used above for the raw single-cell RNA sequencing count data - aggregation, dimensionality reduction (PCA), target encoding. We did not have enough time to explore these models. </p> <h2>1.4 If adding a particular biological prior didn’t work, how did you judge this and why do you think this failed?</h2> <p>Prior bio-knowledge will always contain a kind of "batch effect" - different types of cells, donors, conditions, technologies and so on… The batch effect problem is not really solvable or even well-defined, because what is an unwanted batch effect in one situation is the desired biological effect in another. During Open Problems 2022 we studied extensively how to use various kinds of biological prior knowledge - we and colleagues organized a kind of <a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/348293" target="_blank">crowd-sourced activity</a> and participants openly shared with the community solutions and datasets based on the <a href="https://www.kaggle.com/code/annanparfenenkova/ridge-with-reactome-features" target="_blank">Reactome pathway database</a>, <a href="https://www.kaggle.com/code/visualcomments/sim-ppi-corr-output" target="_blank">Protein-protein interaction networks</a>, and so on and so forth. The idea was to construct features by aggregating over biologically motivated groups of genes, pre-selecting those related to the targets based on prior knowledge, followed by a modified forward-selection addition of these features <a href="https://www.kaggle.com/code/visualcomments/mmscel-crossvalidation-schemes-features-select#Exploration-of-additional-features" target="_blank">if the cross-validation score increases</a>. However, the outcomes were less prominent than the pure ML approaches of the other teams. It resembles the situation in NLP, where the key successes of LLMs come from big models and large datasets, while prior-knowledge (linguistic) approaches are not so effective. As we can see from Open Problems 2021, 2022 and the current challenge, there are always teams on top who rely solely on ML methods. In some sense, ML models extract information from the train data more effectively than our prior-knowledge databases. </p> <h1>2 Exploration of the problem</h1> <h2>2.1 Are there some cell types it’s easier to predict across? What about sets of genes?</h2> <h3>Myeloid cells are more difficult to predict than B cells in the current challenge 
(which is not surprising biologically).</h3> <p>However, that is most probably specific to the current dataset.<br> It is quite natural from prior knowledge: B cells and all the cell types from the train set are lymphoid cells, while myeloid cells are a different branch of the blood cells, e.g. see <a href="https://en.wikipedia.org/wiki/Haematopoiesis" target="_blank">hematopoiesis</a>. So B cells are more similar to the train cells than myeloid cells are, and it is natural that prediction for B cells goes better. </p> <p>We can see something similar from the data alone (without prior knowledge): multiple pieces of evidence (<a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/458842" target="_blank">e.g. clustermap, umap, etc</a>) lead to the following picture - NK cells are the closest to the test set, and closer to B cells than to Myeloid cells; T-regs are the next closest, while T cells CD4+ and <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/458842" target="_blank">especially CD8+ are the least close</a>. So since NK cells a) are in the train set and b) are closer to B cells, we see that translation goes better for B cells. If the train set contained another cell type close to Myeloid cells, then it would be the opposite. By “closest” we mean with respect to the current data, not prior biological knowledge. </p> <p>The analysis comparing the predictability of B cells and Myeloid cells is the following:<br> There are 17 samples of each type in the train set, so one can compare local metrics for these samples and see that B cells are better predicted. <br> For test samples we do not have the ground truth, but we can compare the disagreement between different models' predictions - we see that the models disagree noticeably more often on Myeloid cells than on B cells. See e.g. <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-analysis-of-different-models-predictions#-Correlations-between-all-models-included-in-the-final-ensemble" target="_blank">https://www.kaggle.com/code/antoninadolgorukova/op2-analysis-of-different-models-predictions#-Correlations-between-all-models-included-in-the-final-ensemble</a> </p> <h3>Genes</h3> <p>The first-order effect controlling gene predictability is, of course, how big the gene values are (more precisely, how big the values of their differential expression are, since that is what we work with) - with bigger values everything is bigger: prediction errors, variations, etc… <br> The interesting question is what the other effects are. <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-analysis-of-different-models-predictions?scriptVersionId=154657444&amp;cellId=148" target="_blank">Figures here</a> show the analysis.<br> We see that, for each model, and especially Pyboost, there is a subset of genes with a big SD of the differential expression but quite low prediction variance, meaning that the model is quite confident in its prediction despite the high variability of the DE. Also, each model gives highly variable predictions for a subset of genes with quite low SDs. </p> <p>More details on the analysis are given in the <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/461663" target="_blank">post</a> and <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-analysis-of-different-models-predictions" target="_blank">notebook</a>. 
Some highlights:</p> <ul> <li>All models are less confident in their predictions for myeloid cells compared to B cells (medians of prediction variability across genes and samples are higher).</li> <li>However, the highest bias (differences between predicted and true values) and variability of gene expression change predictions are associated with individual drugs rather than cell types.</li> <li>These drugs are mostly outliers - with the lowest number of cells (≤10 cells), or drugs that affected the cells in such a way that they were misclassified (discovered by <a href="https://www.kaggle.com/ambrosm" target="_blank">@ambrosm</a> in his <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/458661" target="_blank">excellent EDA</a> ).</li> <li>GO enrichment analysis suggests that the hard-to-predict genes are often related to immune cell activities, cytotoxicity, and cell death, though this might be an artefact. </li> </ul> <h3>2.2 Do you have any evidence to suggest how you might develop an ideal training set for cell type translation beyond random sampling of compounds in cell types?</h3> <p>As we understand the question, it is about planning new experiments to cover a much higher number of cell types, compared to only 6 in the current challenge, with the goal of reducing expensive experimental costs in favor of a cheap computational approach. For that question, the experience of the current challenge suggests the following: </p> <p>Ideally we should take into account a similarity distance between the cell types. Given such a similarity, the strategy is the standard one - uniformly subsample the train set with respect to the similarity distance. In other words (simplified a bit): cluster the cell types with respect to the similarity distance and choose, say, 1 representative from each cluster - that would be the “ideal” training set. It ensures that every cell type has a “neighbor cell type” in the train set which is similar enough to it, so that “translation” goes smoothly. </p> <p>So the key question is what similarity relation between cell types to consider.</p> <p>We suggest first running a preliminary experiment with a SMALL number of drugs but a LARGE number of cell types - which allows defining similarity of cell types as the similarity of their response to drugs - and taking that similarity relation as the basis. </p> <p>The rationale and details behind that suggestion are the following. The <a href="https://www.kaggle.com/code/alexandervc/op2-eda-baseline-s?scriptVersionId=147818286&amp;cellId=21" target="_blank">clustermap of cell-types</a> clearly suggests the relations described above: NK-cells are close to B-cells and Myeloid cells, T-cells CD8+ are the most distinct, and the key points are the following:</p> <ul> <li>That similarity is consistent with the models' results. So it is defined without any modeling, but the models “respect” it: e.g. <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/458842" target="_blank">excluding T-cells CD8+</a> often improves modeling quality - which corresponds to the fact that CD8+ cells are the most different from the others on the clustermap; <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/460251" target="_blank">NK-cells are the best validation fold</a> for some models like Pyboost, etc. 
- and that corresponds to the fact that NK-cells are the closest to B-cells and Myeloid cells on the clustermap</li> <li>It is not evident from prior biological knowledge alone. </li> </ul> <p>It would of course be much more cost effective to define the similarity between cell types based on some prior biological knowledge (e.g. just the distance in some UMAP space of an atlas-scale single-cell dataset). But the experience of the current challenge makes us doubt that such a similarity would perform well on drug-response tasks. </p> <p>If experiments are planned “one by one” rather than “all at once”, it is worth considering an “active learning” strategy - analyzing the results after each step, and choosing for the next experimental step those cell types which lie in the worst-predicted clusters. </p> <h1>3 Model design.</h1> <p>We split this section into two parts: the first one is devoted to answers to the organizer's questions; the second part is the detailed write-up of the solution - Kagglers may prefer to jump directly to subsection 3.2.</p> <h2>3.1 Answers to organizer's questions</h2> <h3>3.1.1 Is there certain technical innovation in your model that you believe represents a step-change in the field?</h3> <h4>PYBOOST - a new innovative gradient boosting tool</h4> <p>PYBOOST, developed for MULTI-target tasks by team member A. Vakhrushev, is, we believe, an important step-change in the field. It is well known that for tabular data with a SINGLE target, gradient boosting packages (XGBoost, LightGBM, CatBoost) are the top performers - showing better results than e.g. Random Forest, SVR, etc. and even Neural Networks (neural networks perform best on images, audio and text - continuous rather than tabular data). However, these packages are not so effective when one needs to predict many targets simultaneously. PYBOOST resolves that issue, providing an effective strategy to predict even thousands of targets at once with a gradient boosting approach. </p> <p>The innovation in PYBOOST consists of two parts: Pyboost proper, which is a software library, and SketchBoost, which is an algorithmic innovation that improves the algorithmic part of gradient boosting on multi-target tasks (for brevity, by PYBOOST we typically mean both parts). The software part - Pyboost proper - is a library that allows the efficient realization of complicated boosting algorithms directly in Python utilizing the GPU; that means we can write easy-to-maintain Python code which is almost as efficient as low-level optimized C code, because of the GPU. The second part, the algorithmic innovation "SketchBoost", provides a new strategy to speed up the tree structure search in the multi-output setup by approximating ("sketching") the scoring function used to find optimal splits. The approximation is made by reducing the dimensions of the gradient and hessian matrices while keeping the other boosting steps unchanged, thus enabling a crucial speed-up of the main bottleneck in the boosting algorithm.</p> <p>For more details we refer to the <a href="https://openreview.net/forum?id=WSxarC8t-T" target="_blank">paper</a> and the <a href="https://youtu.be/5xRxuDh_cGk" target="_blank">webinar</a>. </p> <p>We openly shared the PYBOOST approach with the community during the challenge: <a href="https://www.kaggle.com/code/alexandervc/pyboost-secret-grandmaster-s-tool" target="_blank">Notebook</a>, <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/454700" target="_blank">Post</a>. 
It gained hundreds of forks, becoming a component of gold-zone solutions as well as other medal-winning ones. Moreover, the aftermath shows that a <a href="https://www.kaggle.com/code/alexandervc/op2-explore-4th-place-magic" target="_blank">solo Pyboost solution</a> combined with ideas from the other teams provides better results than the current top1. The recent top2 solution for the CAFA5 challenge - prediction of Gene Ontology terms - is also <a href="https://www.kaggle.com/competitions/cafa-5-protein-function-prediction/discussion/434064" target="_blank">based on Pyboost</a>. Thus PYBOOST is quite effective for this kind of MULTI-target biological task. </p> <h3>3.1.2 Can you show that top performing methods can be well approximated by a simpler model?</h3> <p>It depends on the meaning of “simpler”; let us try two variants of that meaning: </p> <h4>Answer 1. A production-ready solution is expected not to lose much compared to a huge Kaggle-style ensemble</h4> <p>1) One side of the question seems to be: what is the estimated performance loss between huge Kaggle-style ensembles (not production ready) and reasonable production-ready solutions?<br> In short, we think the performance loss would NOT be essential. A very rough and pessimistic estimate: let us say the top gives 0.558; then a production-ready solution with ~2 solo models gives 0.566, with 3 solo models 0.563, with 4 solo models 0.559. <br> We also think that an appropriate modification of the PYBOOST solution deserves to be considered as the production-ready solution: it is high performing and easy to use, maintain and modify. It is typically quite diverse from NN solutions, and a blend with any NN would uplift the scores. </p> <p>But … <br> But it seems we are not ready to give a more precise analysis, because strange and unusual things happened: just after the competition closure, new combined solutions based on published solutions and write-ups appeared that break the current top1 (we followed that route and demonstrated that a <a href="https://www.kaggle.com/code/alexandervc/op2-explore-4th-place-magic" target="_blank">solo PYBOOST model beats the top1</a> ). So in some sense we do not know what the real “top performing solutions” are - almost surely, by combining approaches we can go quite a bit further. Nevertheless, we hope that this does not change the basic answer: the difference between the huge ensembles and production-ready solutions is not expected to be essential. </p> <p>What does seem essential: the setup with the metric (MRRMSE) and preprocessing (LIMMA log-p-values) is not the perfect way, and we recommend updating that first, before making any further conclusions about production choices. To give some detail, it seems that the MRRMSE + log-p-values setup is very sensitive to outliers, and that is the reason we saw solutions like “Nothing but just multiplied a factor of 1.2”, the super-successful leaderboard probing during the challenge, and so on. </p> <h4>Answer 2</h4> <p>If we understand the question in a slightly different manner - is it possible to approximate top solutions by some conceptually “trivial” ones? - then the answer is NO. It is clear from the write-ups that teams incorporate models like Neural Networks and Pyboost and have non-trivial findings, so we would not call that “trivial”. Also, at the early stage of the competition we tried more than 50 simple model + feature encoding combinations: Ridge, SVR, KernelRidge, Catboost, etc… - but all of them showed results worse than 0.600, so to break that barrier one should already do something a bit non-trivial. 
(The predictions and the analysis were openly shared during the challenge: <a href="https://www.kaggle.com/datasets/alexandervc/open-problems-single-cell-perturbations-submitsetc" target="_blank">Kaggle dataset</a>.)</p> <h3>3.1.3 Is your model explainable? How well can you identify what is causing your model to respond to certain inputs?</h3> <p>PYBOOST has feature importance estimation like any other boosting or Random Forest algorithm. For the Neural Networks we can apply special techniques like activation maps to gain a certain level of interpretation.</p> <h2>3.2 Model design. Details</h2> <p>We constructed diverse models to gain stability and better performance. Each has been carefully cross-validated. While ensembling, we controlled diversity and preferred to rely on the most stable schemes. The main innovative part of the solution is PYBOOST - a new gradient boosting algorithm developed by team member (Kaggle grandmaster) Anton Vakhrushev. </p> <h3>3.2.0 Solution principal components:</h3> <p>1 Family of Pyboost/Catboost models<br> 2 Family of MLP-like Neural Networks employing target encoding<br> 3 Family of Neural Networks based on NLP-like SMILES embedding<br> 4 Analysis of several cross-validation schemes and CV-LB correspondence <br> 5 Multi-stage blend scheme with diversity control and weights equal to 0.5 at each stage</p> <p>Below we report on each item one by one. </p> <h3>3.2.1 Family of Pyboost/Catboost models</h3> <p>Here we describe the construction of the PYBOOST and CatBoost models - both built by the same scheme. PYBOOST performs better, but CatBoost is diverse enough and provides an uplift in the blend. The code: Pyboost: <a href="https://www.kaggle.com/code/alexandervc/pyboost-secret-grandmaster-s-tool" target="_blank">the basic baseline notebook</a>; other versions of the PYBOOST are in this <a href="https://www.kaggle.com/alexandervc/pyboost-u900" target="_blank">notebook</a>. Catboost: <a href="https://www.kaggle.com/code/alexandervc/fork-of-op2-oof-new-folds-v3-catboost" target="_blank">Notebook</a> , <a href="https://www.kaggle.com/code/alexandervc/fork-of-op2-oof-new-folds-v3-catboost?scriptVersionId=152513917" target="_blank">version 64, scores 0.584, 0.776</a></p> <h4>Highlights:</h4> <ol> <li>PYBOOST “out of the box” gives quite good results (better than our other models “out of the box”), but a couple of tricks improve it further:</li> <li>Target encoding by Quantile 80 - found by a systematic consideration of all target encoders and all their params</li> <li>Retraining on several “ALMOST ENTIRE” train subsets - the logic is simple: we have very few samples, so retraining on the entire train set helps the model, and we slightly improved on that: generate several “almost entire” train subsets, train on all of them, and average the results. Thus we gain from both larger train sets and diversity. </li> <li>CatBoost provides solutions diverse enough from Pyboost that, even with lower performance, it is useful in the blend. </li> </ol> <h4>Modeling organization:</h4> <p>The core Pyboost and Catboost models are organized as follows (TSVD + TargetEncoder scheme), with a minimal code sketch given right after the list:</p> <ul> <li>TSVD reduction of the targets to, say, 70 dimensions (components)</li> <li>Target encoding of cell type and compound by these components </li> <li>Train a model to predict these components (NOT the original targets). <br> (For PYBOOST one model predicts all components at once;<br> for CatBoost we train 70 models, one per component - time consuming, but feasible)</li> <li>Predict the TSVD components for the test set, and finally use the TSVD inverse transform to obtain the original (gene) targets from the predicted components. </li> </ul>
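<p>To make the scheme concrete, here is a minimal sketch rather than our exact notebook code. Assumptions: <code>de_train</code> holds the <code>cell_type</code> / <code>sm_name</code> columns plus the 18211 gene columns, <code>id_map</code> holds the test pairs, <code>QuantileEncoder</code> comes from the category_encoders package, and the py_boost call follows the pattern of the public baseline notebook (a GPU is required).</p>
<pre><code># Minimal sketch of the TSVD + TargetEncoder scheme (not our exact notebook code).
# Assumptions: de_train has "cell_type", "sm_name" and 18211 gene columns; id_map holds the test pairs.
import numpy as np
import pandas as pd
from sklearn.decomposition import TruncatedSVD
import category_encoders as ce          # provides QuantileEncoder (pip install category_encoders)

genes = [c for c in de_train.columns if c not in ("cell_type", "sm_name")]
Y = de_train[genes].to_numpy()

# 1) compress the 18211 targets into ~70 components
svd = TruncatedSVD(n_components=70, random_state=0)
Z = svd.fit_transform(Y)

# 2) target-encode cell_type and sm_name against every component (Quantile-80 worked best for us)
def make_features(train_cats, test_cats, Z):
    tr, te = [], []
    for k in range(Z.shape[1]):
        enc = ce.QuantileEncoder(cols=["cell_type", "sm_name"], quantile=0.8)
        tr.append(enc.fit_transform(train_cats, Z[:, k]).to_numpy())
        te.append(enc.transform(test_cats).to_numpy())
    return np.hstack(tr), np.hstack(te)

X_tr, X_te = make_features(de_train[["cell_type", "sm_name"]],
                           id_map[["cell_type", "sm_name"]], Z)

# 3) one multi-output GBM predicts all 70 components at once (py_boost needs a GPU;
#    constructor arguments follow the public baseline notebook and may need adjustment)
from py_boost import GradientBoosting
model = GradientBoosting("mse")
model.fit(X_tr, Z)

# 4) back to gene space via the inverse SVD transform
pred = svd.inverse_transform(model.predict(X_te))
submission = pd.DataFrame(pred, columns=genes)
</code></pre>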
<h4>The key findings:</h4> <h5>Quantile 80 target encoder</h5> <p>It brings a significant boost in performance, e.g. 0.602-&gt;0.586 for Pyboost and CatBoost; the default value, Quantile 50, is significantly worse. This was found by a systematic consideration of all possible category encoders and all their params. The (openly shared) notebook <a href="https://www.kaggle.com/code/alexandervc/op2-gentle-param-tuner" target="_blank">“Gentle tuner”</a> provides a framework to tune the params of the models and encoders together (employing several CV-schemes simultaneously). First we found that effect for CatBoost (<a href="https://www.kaggle.com/code/alexandervc/op2-gentle-param-tuner?scriptVersionId=150416991&amp;cellId=36" target="_blank">notebook v61 linked figures</a>) and then employed it for PYBOOST.</p> <h5>The subsets used for training critically affect the scores.</h5> <p>Idea: “train on multiple ALMOST ENTIRE train subsets”. <br> Motivation: due to the small number of samples, many of our models benefit if we retrain them on the ENTIRE train set (before the submission). But an even better way is to employ ALMOST ENTIRE train subsets, and SEVERAL of them:<br> i.e. retrain the models on 5-10 subsets of the train set (each sized 80-99% of the entire train set) and average the predictions of all these models to get the submission.<br> Thus the models benefit from both more information and diversity. <br> The trick uplifts Pyboost from 0.584 to 0.577 (a code sketch of this averaging is given a bit further below).</p> <p>Some details. Let us emphasize one moment: “CV tuning and submit preparation are DIVORCED”, in contrast to the usual Kaggle approach. The whole process is two-staged. The first stage is standard - we search for the optimal params of the model using cross-validation. At the second stage, submission preparation, we forget about the CV folds and generate new training subsets (the “almost entire train” subsets). We train the model with the SAME params (found by CV) on these subsets and average the predictions. It is important that we do not use early stopping - the number of trees/epochs was optimized by CV at the first stage and fixed at the second stage. That allows retraining on the (almost) entire train set, which is impossible with early stopping. So cross-validation and submission preparation are divorced, in contrast to the usual Kaggle approach. The strategy works most probably due to the small sample number. It is employed for the boostings and for one of our Neural Networks (the target-encoding based one). </p> <h5>Exclude T-cells CD8+</h5> <p>One small improvement, 0.586-&gt;0.584 (but stably seen for other variants of the PYBOOST as well): exclude T-cells CD8+ from the training set.</p> <h4>Notes:</h4> <p>Pyboost outperforms CatBoost by about 0.010 for that task in equal setups, but their predictions are diverse enough to get an uplift in the blend. </p> <p>The standard tuning experiments:<br> Tuning the standard params of the boostings - number of trees, max depth, learning rate, etc… as well as the number of TSVD components - brings an uplift from around 0.604 to 0.602, so it is not as crucial as the tricks above. We also tried PCA/ICA instead of TSVD but got worse scores. </p> <h4>Comments.</h4> <p>Comment on the TSVD scheme. Employing a TSVD (or PCA, or ICA) reduction of the targets is a more or less standard approach to treat multi-target tasks, e.g. widely used in Open Problems 2022. Its obvious benefit is simplification - direct prediction of 18211 targets is not feasible for many models (except NNs). A less obvious benefit (a bit surprisingly): it often improves the performance, despite the seeming loss of information in reducing 18211 targets to, say, 70. The reason is that what is lost is mostly noise, not useful information, so the reduction to around 70 components kind of denoises the data and helps the model. We also experimented with PCA/ICA, but TSVD seems better for the boostings, while for the NNs we used PCA. (See our first notebooks for some experiments. And of course, this is not universal - it depends on the data.) </p>
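<p>Here is the promised sketch of the “almost entire subsets” averaging. It reuses the names from the previous sketch (<code>X_tr</code>, <code>X_te</code>, <code>Z</code>, <code>svd</code>); the subset fraction (95%) and the number of subsets (8) are illustrative choices within the 80-99% / 5-10 ranges quoted above.</p>
<pre><code># Sketch of the "train on multiple almost-entire subsets" averaging, reusing X_tr, X_te, Z, svd
# from the previous sketch. No early stopping is used, so the hyper-parameters fixed by CV
# can simply be reused on ~all of the train data.
import numpy as np
from py_boost import GradientBoosting

rng = np.random.default_rng(0)
n = X_tr.shape[0]
preds = []
for _ in range(8):                                            # 5-10 subsets in practice
    idx = rng.choice(n, size=int(0.95 * n), replace=False)    # an "almost entire" subset
    model = GradientBoosting("mse")                           # same fixed params as tuned by CV
    model.fit(X_tr[idx], Z[idx])
    preds.append(svd.inverse_transform(model.predict(X_te)))

final_pred = np.mean(preds, axis=0)                           # average over the subset models
</code></pre>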
<p>Remark (other models): the TSVD scheme above can be applied to any model - we experimented a lot with Ridge, SVR, Kernel Ridge, LightGBM, Random Forest, ExtraTrees - but only Pyboost and Catboost showed good results for us. Somewhat surprisingly, LightGBM was not effective even though CatBoost was - typically it is not like that. See the public <a href="https://www.kaggle.com/code/alexandervc/op2-gentle-param-tuner" target="_blank">“Gentle tuner”</a> notebook. </p> <p>Comment (Target Encoders - pay attention to LeaveOneOutEncoder): <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.TargetEncoder.html" target="_blank">Target encoding</a> is a standard way to treat categorical features. The idea is to substitute the category by the mean (median, quantile, etc.) of the target with respect to that category. There are many modifications of target encoding, and they have several parameters: Quantile Encoder, LeaveOneOutEncoder, CatBoost Encoder, James-Stein Encoder. We did a systematic benchmarking of the encoders for this task for many models. As said above, the Quantile80 Encoder uplifts the boostings a lot. We should also note that LeaveOneOutEncoder deserves special attention - for linear and close-to-linear models (SVR, some Kernel Ridges) it stably outperforms the other encoders (<a href="https://www.kaggle.com/code/alexandervc/op2-target-encoders" target="_blank">tables</a>). For the boostings it is either the second best (after Quantile80) or even the first (depending on the training set configuration, e.g. the <a href="https://www.kaggle.com/code/madrismiller/copy-of-pyboost-secret-grandmaster-s-to-1d68b4?scriptVersionId=150557250" target="_blank">top public Pyboost 0.574</a> utilized LeaveOneOut and a tricky preparation of the train set). </p> <p>PS</p> <p>Not enough time: </p> <p>PYBOOST predicting the 18211 targets directly, i.e. not predicting TSVD components followed by tsvd.inverse_transform, but just directly. <br> We did not have enough time to tune the params; out of the box we got 0.594 (<a href="https://www.kaggle.com/code/alexandervc/fork-of-op2-oof-new-folds-v5-pyboost-no-tsvd" target="_blank">notebook</a>) - not a good enough score compared to our other models, so it was not included in the final ensemble. On the other hand, we checked that it is quite diverse from the TSVD-based PYBOOST, so we think it is promising to combine these two approaches. </p> <p>We planned to try feature engineering by target encoding not only from TSVD, but from biologically motivated groups of genes, or from the most important features (<a href="https://www.kaggle.com/competitions/open-problems-multimodal/discussion/366455" target="_blank">as grandmaster Silogram did in 2022</a>), but did not have enough time for that.</p> <h3>3.2.2 Family of MLP-like Neural Networks employing target encoding</h3> <p>We developed a Neural Network model whose features are target encodings of PCA components. 
We then developed a huge number of variations of that basic model. The key ensemble scored 0.566 and included 8 model variations. <br> The main notebook with the models: <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-simple-mlp-part-of-13th-place-solution" target="_blank">Notebook MLP with Target Encoding</a>. A rough Python sketch of the basic model is given below. </p> <h4>Highlights:</h4> <ol> <li>It is easy to diversify the basic model and benefit from an ensemble of the variations - by changing augmentation, noise levels, features, training subsets, activations etc. one obtains models with similar performance, but diverse enough to boost the ensemble (blend)</li> <li>Raw single-cell RNA-seq data is employed in the same scheme; the same can be done for ATAC-seq </li> <li>The model is very stable and easy to implement - various changes do not degrade the performance </li> <li>Gene clustering into groups is easily employed and boosts the performance</li> <li>The magic (simple) train duplicating trick improved the score significantly: 0.600+ -&gt; 0.580+</li> <li>Training on "almost entire" train subsets boosted 0.580+ -&gt; 0.570+; the blend boosted it to 0.566</li> </ol> <h4>Modeling organization and details:</h4> <ul> <li>Feature creation: target encoding of cell type and compound by PCA components, 100 components considered for both </li> <li>Architecture: multi-layer perceptron with 2 dense layers and layer sizes (200, 256, 18211); activation: “relu”</li> <li>Prediction scheme: 18211 targets directly (PCA is used for feature creation, but we do not predict PCA components here - in contrast to the Pyboost scheme)</li> <li>CV scheme: 5-fold cross-validation - only folds containing leaderboard drugs are used, split randomly into 5 groups. </li> <li>Training/Tuning: loss: MAE; optimizer: AdamW; batch size: 256; max learning rate: 0.01, weight decay wd = 0.5; one-cycle learning rate strategy <a href="https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/training_efficiency.html" target="_blank">lr_one_cycle</a>; the epoch number was tuned and fixed to 20. (A fixed epoch number allows retraining the model on the (almost) entire train set, which early stopping forbids.)</li> <li>Training/Submit: retrain the model on “almost entire” train subsets (i.e. the entire train set with 2-3-10 subsamples excluded)</li> <li>The magic (simple) train duplicating trick improved the score significantly: 0.600+ -&gt; 0.580+</li> </ul> <h4>The strategy to create variations of the basic model employed the following techniques:</h4> <ul> <li>Changing the training set: exclusion of the samples which originate from extremely low numbers (1 or 2) of single cells processed in the pseudo-bulk procedure. </li> <li>Clustering the genes into groups (e.g. 3 groups by K-means), processing each group separately and concatenating the predictions</li> <li>Augmentation techniques: varying the number of train duplicates; different noise levels for cell type and compounds; linear combinations of features + targets to create new samples </li> </ul> <p>Params used during the challenge: <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-simple-mlp-part-of-13th-place-solution?scriptVersionId=153981848" target="_blank">notebook version 52</a>. The precise description of all 10 variations of the basic model that entered the final submission is <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-simple-mlp-part-of-13th-place-solution?scriptVersionId=154412513&amp;cellId=50" target="_blank">here</a>. The diversity analysis of these variations is <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-analysis-of-different-models-predictions?scriptVersionId=154657444&amp;cellId=153" target="_blank">here</a> - one can see that some models are quite diverse from the others, with correlation scores around 0.9. </p>
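<p>The following is a rough Python/Keras sketch of the basic model described by the bullets above - an illustration under assumptions, not the code from our notebook. <code>X</code> (200 target-encoded PCA features) and <code>Y</code> (18211 targets) are placeholder arrays, a constant learning rate stands in for the one-cycle schedule, and the "duplicate + add noise" reading of the train duplicating trick is one plausible interpretation.</p>
<pre><code># Rough Keras sketch of the target-encoding MLP (an illustration, not the code from our notebook).
# Assumptions: X is (n_samples, 200) target-encoded PCA features, Y is (n_samples, 18211) targets;
# a constant learning rate stands in for the one-cycle schedule used in the real model.
import numpy as np
import tensorflow as tf

def build_mlp(n_in=200, n_out=18211):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_in,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(n_out),
    ])

# one plausible reading of the "train duplicating trick": repeat the train set several times
# and add fresh Gaussian noise to every copy of the features
def duplicate_with_noise(X, Y, n_copies=4, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    Xd = np.vstack([X + rng.normal(0.0, sigma, X.shape) for _ in range(n_copies)])
    Yd = np.vstack([Y] * n_copies)
    return Xd, Yd

Xd, Yd = duplicate_with_noise(X, Y)
model = build_mlp()
model.compile(optimizer=tf.keras.optimizers.AdamW(learning_rate=1e-2, weight_decay=0.5),
              loss="mae")
model.fit(Xd, Yd, batch_size=256, epochs=20, verbose=0)   # fixed epoch count, no early stopping
</code></pre>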
<p>We employed the same idea of training on "almost entire" train subsets as described for PYBOOST above. It boosted the scores by approximately 0.580+ -&gt; 0.570+. </p> <h3>3.2.3 Family of Neural Networks based on NLP-like SMILES embedding</h3> <p>We developed several NN models employing a direct encoding of SMILES by an embedding layer (a technique coming from NLP). The key single model achieved 0.574, another quite diverse model that entered the final ensemble achieved 0.587, and a last-hours combination achieved a 0.571 score by incorporating a pseudo-labeling technique (not included in the selected blend submit). These solutions originate from the <a href="https://www.kaggle.com/code/kishanvavdara/nlp-regression" target="_blank">public one</a> by Kishan Vavdara, though substantially reworked from the architectural and training points of view, uplifting the score from 0.607 (original) to 0.574 and further. A rough code sketch of this model is given below.</p> <h4>Highlights:</h4> <ol> <li>Lion - a new powerful optimizer - outperformed Adam </li> <li>The magic (simple) train duplicating trick improved the score 0.582 -&gt; 0.574</li> <li>SMILES encoding by the embedding layer</li> </ol> <h4>Modeling organization (key 0.574 model):</h4> <p>The <a href="https://www.kaggle.com/code/alexandervc/nlp-regression-custom-kfold-update1" target="_blank">main notebook</a>; the submission with the 0.574 (0.766 private) score is <a href="https://www.kaggle.com/code/alexandervc/nlp-regression-custom-kfold-update1?scriptVersionId=151529557" target="_blank">version 11</a>.</p> <ul> <li>Feature encoding: SMILES - by an Embedding layer, cell types - one-hot; both concatenated</li> <li>Architecture: 5-layer (1558, 512, 256, 128, 256, 18211) perceptron with carefully chosen BatchNorm and Dropout layer positions, activation: “elu”</li> <li>Preprocessing: Standard Scaler for the targets, additive Gaussian noise for the features</li> <li>Training: Lion optimizer; loss: the competition loss - MRRMSE (custom); 5 almost random folds; the best epoch (by validation score, out of 300) is restored for each fold - that appears to be quite important</li> <li>Prediction scheme: 18211 targets directly (TSVD not used at all) </li> <li>The trick of duplicating the train set for each fold yields a 0.582-&gt;0.574 uplift, similar to our other NN models. </li> <li>Tuning: params were optimized by CV</li> </ul> <p>The model is defined in the notebook section <a href="https://www.kaggle.com/code/alexandervc/nlp-regression-custom-kfold-update1#The-model" target="_blank">"The model"</a>; the next cell contains a <a href="https://www.kaggle.com/code/alexandervc/nlp-regression-custom-kfold-update1?scriptVersionId=154712856&amp;cellId=64" target="_blank">figure</a> with the graphical description.</p> <p>So the model is organized as follows: SMILES are encoded via the embedding layer and cell type via one-hot; both encodings are concatenated; that is followed by a 5-dense-layer perceptron (1558, 512, 256, 128, 256, 18211) carefully interleaved with BatchNorm and Dropout layers; the activation is “elu”. <br> We checked the stability of the model as follows: we reran it several times with similar params and compared the CV scores and submissions. We stably observed similar CV scores and, moreover, LB scores in the range 0.581-0.583 before adding the train duplication trick, and 0.574 after. That is quite in contrast to the original model, which has a larger score variance: at least 0.600 - 0.617 (see the experiments <a href="https://www.kaggle.com/code/erotar/fork-of-nlp-regression-12a31a?scriptVersionId=150057568" target="_blank">here</a> ). </p>
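<p>Here is the promised rough Keras sketch of the SMILES-embedding model. The dense layer widths follow the bullet list above; the character-level sequence length, embedding size, dropout rate, exact BatchNorm/Dropout placement and <code>train_smiles</code> (a placeholder list of training SMILES strings) are assumptions for illustration, and Lion is only available in recent Keras versions.</p>
<pre><code># Hedged Keras sketch of the SMILES-embedding network. Layer widths follow the bullet list above;
# sequence length, embedding size, dropout rate and the exact BatchNorm/Dropout placement are
# assumptions, and train_smiles is a placeholder list of training SMILES strings.
import tensorflow as tf

def mrrmse(y_true, y_pred):
    # competition metric: mean of per-row RMSEs
    return tf.reduce_mean(tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred), axis=1)))

vectorizer = tf.keras.layers.TextVectorization(standardize=None, split="character",
                                               output_sequence_length=128)
vectorizer.adapt(train_smiles)

smiles_in = tf.keras.Input(shape=(1,), dtype=tf.string)
cell_in = tf.keras.Input(shape=(6,))                 # one-hot encoded cell type
x = vectorizer(smiles_in)
x = tf.keras.layers.Embedding(input_dim=len(vectorizer.get_vocabulary()), output_dim=12)(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Concatenate()([x, cell_in])
for units in (512, 256, 128, 256):
    x = tf.keras.layers.Dense(units, activation="elu")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Dropout(0.1)(x)
out = tf.keras.layers.Dense(18211)(x)

model = tf.keras.Model([smiles_in, cell_in], out)
# Lion is available in recent Keras versions; AdamW is a reasonable fallback otherwise
model.compile(optimizer=tf.keras.optimizers.Lion(learning_rate=1e-4), loss=mrrmse)
</code></pre>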
<h4>What did not work well for that version of the NN:</h4> <ul> <li>The <a href="https://github.com/Ebjerrum/SMILES-enumeration" target="_blank">SMILES augmentation package</a> </li> <li>LSTM/CNN architectures, other optimizers, pseudolabeling, dropping noisy samples. </li> <li>The trick of retraining the model on the entire train set was also not successful for this NN (in contrast to the other models), because the epoch number determined by early stopping differed from fold to fold, and fixing it to some particular number degraded the CV scores, so we did not want to risk employing models that did not have good CV scores. We spent quite a lot of effort trying to resolve this, but were unsuccessful. </li> </ul> <h4>That family of models also included other variants:</h4> <p>It yielded a 0.587 score in the initial version (included in the final blend): <a href="https://www.kaggle.com/bejeweled/scp-blend-own" target="_blank">notebook1</a>, <a href="https://www.kaggle.com/bejeweled/scp-pseudo50-ct-strat-mrrmse-tf-smilesv" target="_blank">notebook2</a>. A last-hours change yielded 0.571, but did not give a significant boost to the entire blend construction (so it was not included in the chosen submits): </p> <p>Highlight:</p> <ul> <li>The 0.571 versions heavily employed pseudolabeling: <a href="https://www.kaggle.com/code/bejeweled/op2-u900-part-of-solution-pytorch-tf-nns" target="_blank">https://www.kaggle.com/code/bejeweled/op2-u900-part-of-solution-pytorch-tf-nns</a></li> </ul> <p>The other findings are the following: </p> <ul> <li>With an LSTM layer after the SMILES embeddings.</li> <li>With a sigmoidal range activation as the model output.</li> <li>With multiplication of the outputs by coefficients.</li> <li>With pseudolabels from blends of these models.</li> </ul> <p>PS</p> <p>What did not work: we spent quite some effort on a Neural Network based on one-hot encoding of the compounds, even achieving a local CV uplift, but the LB score still came out at 0.619 (<a href="https://www.kaggle.com/code/alexandervc/fork-of-op2-oof-new-folds-v4-nnohe" target="_blank">notebook</a>); changing the architecture, augmenting the train set, changing one-hot to similar encodings (Helmert, Backward Difference, etc.) - nothing worked. We got the same LB score as in the early version, which is just an average over many random seeds of the simple version of the net from the public: <a href="https://www.kaggle.com/code/alexandervc/op2-kishan-s-nn-streamlined-and-blended" target="_blank">notebook</a>. The <a href="https://www.kaggle.com/code/kishanvavdara/neural-network-regression" target="_blank">original net</a> seems to be quite unstable - the public score varies with the seed from 0.599 to 0.620, and the results on private are not so good. </p> <h3>3.2.4 Analysis of several cross-validation schemes and CV-LB correspondence</h3> <p>Here we describe our approaches to cross-validation and the analysis of the CV-LB correspondence.<br> More details (tables, figures, etc.) can be found in the separate <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/460251" target="_blank">post</a>. </p> <p>CV-LB correspondence is quite problematic in the current challenge, and understanding it better would be important for future work by the research community. 
Even the post-competition write-up analysis seems to reveal that a good solution for the CV-LB correspondence has not been found yet. During the challenge, several logical CV schemes were proposed - AmbrosM's: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/443395#2457831" target="_blank">discussion</a>, <a href="https://www.kaggle.com/code/ambrosm/scp-quickstart?scriptVersionId=144293041&amp;cellId=8" target="_blank">notebook</a>, or MT's scheme: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/444494#2466644" target="_blank">discussion</a>, <a href="https://www.kaggle.com/code/masato114/scp-quickstart-another-cv-strategy/notebook" target="_blank">notebook</a>. MT proposes to put in validation the SAME CELL TYPES as on the LB, while AmbrosM proposes the SAME COMPOUNDS. However, even early analysis showed a far from perfect correspondence to the LB for both schemes. Note: we developed and <a href="https://www.kaggle.com/code/alexandervc/op2-class-for-custom-cv-schemes" target="_blank">openly shared</a> a Python class which conveniently encapsulates these and other CV schemes.</p> <p>Here are some of our findings:</p> <h4>Highlights</h4> <ol> <li>The local (CV) row-wise correlation score is better related (0.5) to the LB (mrrmse score) than other metrics </li> <li>The local (CV) mrrmse score is near-zero correlated with the LB (mrrmse score) for all CV schemes considered</li> <li>NK-cells local mrrmse is better correlated with the LB (0.2+), while for T-cells CD8+ it is negative (-0.1+) </li> <li>NK-cells local mrrmse is well related to the LB for Pyboost models, but not for others, e.g. NN models</li> <li>Random folds are NOT worse than more logical and sophisticated CV schemes, and seem preferable for NN models</li> <li>Public and private LB scores are highly correlated: 0.98, despite the poor CV-LB correspondence</li> </ol> <p>So there seem to be many surprises: although the LB metric is mrrmse, the local metric best related to it is a DIFFERENT one - row-wise correlation - while the local mrrmse correlation is near zero. Another surprise: the CV-LB correspondence is poor, while the public-private LB correspondence is very good (0.98 correlation). It is also surprising that random folds perform no worse than more logical schemes.</p> <h3>Further notes/suggestions:</h3> <ol> <li>For models of the same nature/features, the CV-LB correspondence works to some extent (not so well, but still) for all CV schemes. So a strategy can be: tune each particular model by CV, verifying by LB - that is what we used. </li> <li>The main problem is comparing different models: even close models like Pyboost and Catboost, with the same public LB score of e.g. 0.584, may show quite different CV scores like 0.92 vs 0.89, and it is even worse for boosting vs NN. So for the final blend we decided to rely more on the LB score than on CV. </li> </ol> <p><strong>Setup. Potential bias.</strong> The analysis is based on more than 50 quite diverse models; still, it can be biased by the choice of models. We see very clearly that which local metrics correspond better to the LB is quite dependent on the model/features/etc.</p> <p>See further analysis in the separate <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/460251" target="_blank">post</a>.</p>
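<p>For reference, the two local metrics compared above are simple to compute; here is a minimal numpy sketch of the competition MRRMSE and the mean row-wise correlation (rows are cell type / compound pairs, columns are genes).</p>
<pre><code># Local metrics we compare across CV schemes: the competition MRRMSE and the row-wise
# correlation that (surprisingly) tracked the LB better. Plain numpy, rows = samples, cols = genes.
import numpy as np

def mrrmse(y_true, y_pred):
    # mean of per-row RMSEs
    return np.sqrt(np.mean((y_true - y_pred) ** 2, axis=1)).mean()

def mean_rowwise_corr(y_true, y_pred):
    # average Pearson correlation computed row by row
    t = y_true - y_true.mean(axis=1, keepdims=True)
    p = y_pred - y_pred.mean(axis=1, keepdims=True)
    num = (t * p).sum(axis=1)
    den = np.sqrt((t ** 2).sum(axis=1) * (p ** 2).sum(axis=1))
    return float(np.mean(num / den))
</code></pre>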
<p>PS</p> <p>To complement: here is a table (<a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/460251#2553943" target="_blank">from here</a>) showing the correlation of CV and LB for different metrics for a set of SIMILAR models (NNs based on target encoding) - we see it is quite high (better for public, lower for private): </p> <table> <thead> <tr> <th>metric</th> <th>corr_vs_public</th> <th>corr_vs_private</th> </tr> </thead> <tbody> <tr> <td>MRRMSE</td> <td>0.69</td> <td>0.39</td> </tr> <tr> <td>corr_rows</td> <td>-0.79</td> <td>-0.53</td> </tr> <tr> <td>corr_cols</td> <td>-0.74</td> <td>-0.48</td> </tr> <tr> <td>R2</td> <td>-0.64</td> <td>-0.42</td> </tr> </tbody> </table> <p>Note that for models of a diverse nature the correlations are much lower; even for this family of models, with bigger modifications of the feature construction the correlations become much lower (see <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-tricks-and-metrics?scriptVersionId=155213609&amp;cellId=53" target="_blank">table</a>). (See the <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/460251#2553943" target="_blank">post</a> for more details.)</p> <h3>3.2.5 Multi-stage blend scheme with diversity control and weights equal to 0.5 at each stage</h3> <p>Here we describe our approach to the ensemble (blend). More detailed explanations and working code are in the <a href="https://www.kaggle.com/code/alexandervc/op2-u900-team-blend" target="_blank">notebook</a>.</p> <h4>Highlights:</h4> <ul> <li>Main problem: the absence of a good CV-LB correspondence forbids the usual strategy of choosing the weights by CV</li> <li>The solution relies on how to increase and control diversification, and how to avoid an overfit-prone choice of weights - a scheme of multi-step blending with the only weight = 0.5 at each step.</li> <li>Measure of diversity: average target-wise correlation of the predictions</li> <li>Checked by various experiments: models with a correlation score of 0.8-0.9 consistently give a substantial uplift in the blend (about +0.01 - 0.006)</li> <li>To avoid the overfit-prone question of how to choose weights for models with DIFFERENT scores, we used the following scheme:</li> <li>Core scheme: a sequential blend of models with (almost) EQUAL scores, giving them EQUAL blend weight (=0.5):</li> <li>step1: LB score 0.575 = 0.5 Pyboost(0.584) + 0.5 <a href="https://www.kaggle.com/code/alexandervc/fork-of-op2-oof-new-folds-v3-catboost?scriptVersionId=152513917" target="_blank">Catboost(0.584)</a> </li> <li>step2: LB score 0.566 = 0.5 step1 (0.575) + 0.5 <a href="https://www.kaggle.com/code/alexandervc/nlp-regression-custom-kfold-update1" target="_blank">NN-NLP (0.574)</a></li> <li>step3: LB score 0.559 = 0.5 step2 (0.566) + 0.5 <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-simple-mlp-part-of-13th-place-solution" target="_blank">MLP_TargetEncEnsemble (0.566)</a></li> <li>final polishing: 0.558 - blend in more Pyboost (0.574, 0.577) and NN (0.569, 0.570, 0.572, 0.587) models (a small code sketch of this sequential blending is given at the very end of this write-up)</li> </ul> <h4>Experiments with other blend ideas:</h4> <ul> <li>Different weights for B-cells and Myeloid cells (partially successful)</li> <li>Tried, but did not have enough time to succeed:<ul> <li>Estimate the variance and correlations for each target (or each row) of the predictions and choose blend weights according to a modification of the classical statistical formula - weights inversely proportional to the variance: bigger variance - less confidence - lower weight in the blend</li></ul></li> 
</ul> <h1>4. Robustness</h1> <h2>4.1 How robust is your model to variability in the data? Here are some ideas for how you might explore this, but we’re interested in unique ideas too.</h2> <p>The robustness of our models can be advocated e.g. as follows. Our public and private leaderboard ranking is approximately the same, moreover it corresponds to our ranking during last weeks of challenge (ignoring the effect of public-LB-probing notebooks appearing at the end). Aftermath: we <a href="https://www.kaggle.com/code/alexandervc/op2-public-vs-private-scores" target="_blank">computed the correlation</a> between public and private scores of our submits and it is 0.98. So models well generalize on the unseen data. <br> Additionally we performed the following tests for most of our models during the challenge - changed params a bit and made submissions - the variations was always around 0.001-0.002. </p> <p>All the models have been optimized by local cross-validation scores and only then submitted to LB, we accepted only those changes which improve both CV and LB. </p> <h3>4.2 Add small amounts of noise to the input data. What kinds of noise is your model invariant to? Bonus points if the noise is biologically motivated.</h3> <p>Gaussian noise has been included at the feature generation stage for our neural network models. We tested several values of noise magnitude and chose the optimal values of the noise level. </p> <h1>5. Documentation &amp; code style</h1> <p>The code is documented in the notebooks.<br> The section 3.2 "Model Design - Details” here provides quite detailed description of the solution. </p> <h1>6. Reproducibility</h1> <p>Source code is here available in the notebooks:<br> Pyboost: <a href="https://www.kaggle.com/code/alexandervc/pyboost-secret-grandmaster-s-tool" target="_blank">the basic baseline notebook</a>, other version of the PYBOOST are in the <a href="https://www.kaggle.com/alexandervc/pyboost-u900" target="_blank">notebook</a>.</p> <p>Catboost <a href="https://www.kaggle.com/code/alexandervc/fork-of-op2-oof-new-folds-v3-catboost" target="_blank">Notebook</a> , <a href="https://www.kaggle.com/code/alexandervc/fork-of-op2-oof-new-folds-v3-catboost?scriptVersionId=152513917" target="_blank">version 64, scores 0.584, 0.776</a></p> <p>MLP-like Neural Networks employing target encoding <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-simple-mlp-part-of-13th-place-solution" target="_blank">Notebook</a>, params used during the challenge: <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-simple-mlp-part-of-13th-place-solution?scriptVersionId=153981848" target="_blank">notebook version 52</a>.</p> <p>Neural Networks based on NLP-like SMILES embedding - <a href="https://www.kaggle.com/code/alexandervc/nlp-regression-custom-kfold-update1" target="_blank">the main notebook</a>, the submission with 0.574 (0.766 private) score is <a href="https://www.kaggle.com/code/alexandervc/nlp-regression-custom-kfold-update1?scriptVersionId=151529557" target="_blank">version 11</a>.</p> <p>Blend: <a href="https://www.kaggle.com/code/alexandervc/op2-u900-team-blend" target="_blank">Notebook</a>, selected final submit is version 15 - <a href="https://www.kaggle.com/code/alexandervc/op2-u900-team-blend?scriptVersionId=153084243" target="_blank">direct link</a>.</p> <p>Most of our submissions can be found in the Kaggle datasets <a href="https://www.kaggle.com/datasets/alexandervc/open-problems-single-cell-perturbations-submitsetc" target="_blank">submits and out-of-fold 
predictions</a>, <a href="https://www.kaggle.com/datasets/alexandervc/open-problems-2-submits-etc" target="_blank">Open Problems 2 Submits, etc</a></p> <p>Information on all submits with public and private scores is in this <a href="https://www.kaggle.com/code/alexandervc/op2-public-vs-private-scores/output?select=df_stat_submissions.csv" target="_blank">file</a>.</p> <p>Our initial PYBOOST notebook has been openly shared and forked about 100 times, being a component of top public solo models as well as many medal-winning solutions - probably the best indication of reproducibility. </p> <h1>Concluding remarks</h1> <h2>MRRMSE and log-p-values - may not be the perfect choice</h2> <p>The metric mrrmse and the preprocessing - log-p-values by Limma - seem to cause certain problems. It seems that the combination of mrrmse and log-p-values is too sensitive to outliers. During the competition the leaderboard was probed too easily. Also quite unusual are the appearance of better-than-top1 late submits just 1-2 days after the end and medal-zone solutions like “Nothing but just multiplied a factor of 1.2”. All that indicates: a) we (as a community) do not fully understand the problem; b) the metric was not chosen perfectly. We are not fully convinced by the argument that p-values allow catching differences in distributions while log-fold change captures only the difference in averages between distributions - that would be the case for p-values of concordance criteria like KS or Chi2, but it seems the p-values by Limma capture only the difference in averages. What the proper choice of metric and processing should be seems to be an interesting and important question. </p> <h2>Small number of samples - but still stable - how to anticipate it?</h2> <p>It seems the small number of samples was really frightening and prevented the participation of many experienced Kagglers in the challenge. A small number of samples typically leads to high instability and a shake-up at the end, so people are not willing to invest their time with a big chance of being randomly ranked at the end. However, surprisingly, that fear seems to have turned out to be a mistake. There were only moderate changes in the ranking of the leaders, and the aftermath shows a quite high correlation of 0.98 between the public and private leaderboard scoring. So in some sense, the small number of samples was compensated by the large number of targets and overall ensured a certain stability. Could it have been anticipated from the beginning that, despite the small number of samples, the overall predictability is quite stable?</p> <p>Overall, the “Open Problems” team and the Kaggle team are doing a great job bringing cutting-edge datasets to the community's consideration and thus allowing it to contribute to cutting-edge scientific research. We are happy to be a part of that activity. </p> <p>Thanks for your question ! 
</p> <p>The main submits (in particular those in the final submit) are collected in the Kaggle dataset: <a href="https://www.kaggle.com/datasets/alexandervc/open-problems-2-submits-etc" target="_blank">https://www.kaggle.com/datasets/alexandervc/open-problems-2-submits-etc</a></p> <p>The blend (ensemble) notebook is here: <a href="https://www.kaggle.com/code/alexandervc/op2-u900-team-blend;" target="_blank">https://www.kaggle.com/code/alexandervc/op2-u900-team-blend;</a> the selected final submit is version 15 - direct link: <a href="https://www.kaggle.com/code/alexandervc/op2-u900-team-blend?scriptVersionId=153084243" target="_blank">https://www.kaggle.com/code/alexandervc/op2-u900-team-blend?scriptVersionId=153084243</a> </p> <p>Please advise me if you need something else. </p> <p>PS </p> <p>Some more submits (partly interseсting with the above mentioned, mainly by the "MLP-like Neural Networks employing target encoding") are collected in the datasets : <br> (with oof predicts): <a href="https://www.kaggle.com/datasets/antoninadolgorukova/op2-submits-and-yoof" target="_blank">https://www.kaggle.com/datasets/antoninadolgorukova/op2-submits-and-yoof</a><br> And in: <a href="https://www.kaggle.com/datasets/antoninadolgorukova/op2-submissions" target="_blank">https://www.kaggle.com/datasets/antoninadolgorukova/op2-submissions</a></p> <p>PSPS</p> <p>Information on ALL submits of out team with public and private scores is in the <a href="https://www.kaggle.com/code/alexandervc/op2-public-vs-private-scores/output?select=df_stat_submissions.csv" target="_blank">file</a> - see also the <a href="https://www.kaggle.com/code/alexandervc/op2-public-vs-private-scores" target="_blank">notebook</a>.</p> <p>To complement the main post:</p> <p>Here is PYBOOST/SKETCHBOOST Nips-poster attached (see under the post) and screenshotted:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2262596%2F870715f11b782d7f267e8fe0933afbf2%2FScreenshot%202023-12-14%20100724.png?generation=1702544987859125&amp;alt=media" alt=""></p> <p>Here is demonstration of speed-up Pyboost achieves comparing to CatBoost, XGBoost. (All three for GPU).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2262596%2F15781706c29462709c3717f796f31028%2Fphoto_2023-12-14_10-20-03.jpg?generation=1702545696077154&amp;alt=media" alt=""></p> <p>More details in the <a href="https://openreview.net/forum?id=WSxarC8t-T" target="_blank">paper</a> and <a href="https://youtu.be/5xRxuDh_cGk" target="_blank">webinar</a>. </p> <p><a href="https://www.kaggle.com/code/alexandervc/op2-explore-4th-place-magic" target="_blank">https://www.kaggle.com/code/alexandervc/op2-explore-4th-place-magic</a><br> That is 4-th place Magic applied to AmbrosM Pyboost on t-scores</p>
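<p>As promised in section 3.2.5, here is a minimal sketch of the multi-stage 0.5-weight blend together with the diversity check (average target-wise correlation). The file names below are placeholders for illustration, not our actual submission files; the working code is in the blend notebook linked above.</p>
<pre><code># Sketch of the section 3.2.5 blend: check diversity as the average target-wise correlation
# between two submissions, then blend sequentially with weight 0.5 at each step.
# File names are placeholders for illustration, not our actual submission files.
import pandas as pd

def diversity(sub_a, sub_b):
    # average per-gene (column-wise) correlation of two prediction frames
    return sub_a.corrwith(sub_b).mean()

def blend(sub_a, sub_b, w=0.5):
    return w * sub_a + (1.0 - w) * sub_b

pyboost  = pd.read_csv("pyboost_0584.csv", index_col="id")
catboost = pd.read_csv("catboost_0584.csv", index_col="id")
nn_nlp   = pd.read_csv("nn_nlp_0574.csv", index_col="id")
mlp_te   = pd.read_csv("mlp_te_0566.csv", index_col="id")

print(diversity(pyboost, catboost))   # correlations around 0.8-0.9 still gave a solid uplift

step1 = blend(pyboost, catboost)      # 0.584 + 0.584 -> 0.575 on the public LB
step2 = blend(step1, nn_nlp)          # 0.575 + 0.574 -> 0.566
step3 = blend(step2, mlp_te)          # 0.566 + 0.566 -> 0.559
step3.to_csv("blend_submission.csv")
</code></pre>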
Open Problems – Single-Cell Perturbations
#18: Py-boost predicting t-scores
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>HI <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a>, thanks for bringing py-boost to my attention, and congratulations to you, too!</p> <p>Sorry, <a href="https://www.kaggle.com/antoninadolgorukova" target="_blank">@antoninadolgorukova</a>, I overlooked them (maybe because I'm not fluent in R).</p> <p>Congratulations to you, too! </p> <p>Did you notice that in this competition few real EDA notebooks have been published? Besides explaining my machine learning model, I'd like to share some observations which help understand the data and the intricacies of Limma.</p> <h1>Integration of biological knowledge</h1> <h2>Don't trust the cell types!</h2> <p>Let's recapitulate the course of the experiment in a simplified form. We can imagine an experimenter who is in front of a large pot of human blood cells. The pot contains a mixture of six cell types in certain proportions. T cells CD4+ take the largest share (42 %), only 2 % are T regulatory cells:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2F26d3a6c5971cf2a5a441cb550424728e%2Fpie-chart.png?generation=1701390049641476&amp;alt=media" alt=""><br> The experimenter now takes 145 droplets out of the large pot. Every droplet contains 1550 ± 240 cells (normally distributed). If we counted the cells per cell type in the droplets, we'd see a multinomial distribution. The 145 droplets might be composed like in the following bar chart (fictitious data, sorted from smallest to largest droplet):</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2F838eb6fe789104225c5bf6cb0f73e060%2Fdrops-before.png?generation=1701390069789098&amp;alt=media" alt=""><br> In the next step, the experimenter adds 145 substances to the 145 droplets and waits 24 hours. After 24 hours the cells are analyzed. If we count the cells again, we get the following picture, as taken from the competition's training data (cell counts for the test data are hidden):</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2F9a57da671b14ec3198d61c54861cc27f%2Fdrops-after.png?generation=1701390083658870&amp;alt=media" alt=""><br> In this diagram we first see that some compounds are so toxic that in some droplets less than 100 cells survive. These droplets are represented by the leftmost bars in the bar chart.</p> <p>The second observation is much more important: The long red part in the bars for Oprozomib and IN1451 show that these droplets contain several hundred T regulatory cells — much more than at the start of the experiment. Other compounds (e.g., CGM-079) have too many T cells CD8+ (green bar). How can we interpret this observation?</p> <ol> <li>Does IN1451 incite the T regulatory cells to multiply so that we have five times more of them after 24 hours? No.</li> <li>Does IN1451 magically convert NK cells into T regulatory cells? No.</li> <li>Does IN1451 affect the cells in such a way that they are misclassified? 
Maybe.</li> </ol> <p>Discussing differential gene expression for specific cell types becomes pointless if the cells change their type during the experiment. For the Kaggle competition this means that we have to deal with many outliers: Beyond the at least five toxic compounds, there are at least seven compounds which change the cells' types. Differential expression for these outliers is hard to model. They make cross-validation unreliable, and the outliers in the private leaderboard can't even be predicted by probing the public leaderboard.</p> <h2>Cell count shouldn't affect differential gene expression</h2> <p>Does gene expression in a cell depend on how many cells are in the experiment? Theoretically, it doesn't. A cell behaves the same way whether there are 10 cells in the experiment or 10000. We'd expect, however, a difference in the significance of the experimental results: An experiment with 10000 cells should give more precise measurements than a 10-cell experiment: As the cell count grows, variance of the measurements should decrease, t-score should be farther away from zero, and pvalues should decrease.</p> <p>The competition data don't fulfill this expectation. If we plot the mean t-scores versus the cell count for the 602 cell type–compound combinations (excluding the control compounds), we see a linear relationship: For every cell type, compounds with lower cell counts have positive t-score means, and compounds with higher cell counts have negative t-score means. This correlation between cell counts and t-scores shouldn't exist. It is an artefact of Limma rather than a biological effect.</p> <p>You can plot the diagram with median or variance instead of mean — it will look similar. You can even compare the cell counts to the first principal component of the t-scores and see the same correlation. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2F50f71507c0241429e92b4ae50b092222%2Fcell-t-before.png?generation=1701390101863675&amp;alt=media" alt=""></p> <p>We can now put together a list of 20 compounds which are to be considered outliers because of low cell counts. Notice that we don't declare single rows of the dataset to be outliers, but all 86 rows related to the 20 compounds:</p> <pre><code>Outliers AT13387 T regulatory cells Alvocidib ≤ several cell BAY mean t-score CD8+ cells &gt; BMS T cells CD8+, Myeloid cells Belinostat control compound too many cells CEP (Delanzomib) ≤ several cell CGM too many T cells CD8+ CGP ≤ several cell Dabrafenib control compound too many cells Ganetespib (STA) T regulatory cells, too many NK cells I-BET151 too many T cells CD8+ IN1451 ≤ several cell LY2090314 T cells CD8+ MLN ≤ several cell Oprozomib (ONX ) ≤ several cell Proscillaridin A;Proscillaridin-A ≤ several cell Resminostat T cells CD8+ Scriptaid T regulatory cells UNII-BXU45ZH6LI T cells CD8+ Vorinostat T regulatory cell </code></pre> <p>After removing the outliers, the diagram looks much cleaner. The variance of the cell counts remains. It is a source of noise which impedes the correct interpretation (and prediction) of differential expressions. Maybe we'd get cleaner data if we equalized the cell counts before library size normalization. 
This would amount to throwing away a part of the measurements, which isn't desirable either.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2Ffd906cb718a3308dd60f9dc5e56e1c3d%2Fcell-t-after.png?generation=1701390162014901&amp;alt=media" alt=""></p> <p>After considering the small size of the dataset, the amount of noise and the Limma artefacts (more of them will be shown in the next section), I didn't try to integrate any external biological data into my model. </p> <h1>Exploration of the problem</h1> <h2>A mixture of probability distributions</h2> <p>A histogram of a single row of the training data (18211 t-scores for T cells CD8+ treated with Scriptaid) shows that the distribution is multimodal.</p> <p>The highest mode consists of 269 genes which all have an identical t-score of -3.769. It turns out that these are the 269 genes which are never expressed in T cells CD8+, neither with the negative control nor with any other compound. Isn't this strange? A gene which is never expressed in the whole experiment should have a log-fold change of zero and should not get a t-score at all (because t-score computation involves a division by the variance, and the variance of a never-expressed gene is zero).</p> <p>For Myeloid cells treated with Foretinib, 3856 genes are not expressed (RNA count of zero), yet most of them have a positive t-score. Their highest t-score is 6.228 (resulting in a pvalue of 4e-10 and a log10pvalue of 9.33). If an RNA count is zero, the corresponding log-fold-change (and t-score) should never be positive.</p> <p>We may say that the distribution of the values is a mixture of two distributions:</p> <ol> <li>The values for the genes which are expressed (blue) have a more or less bell-shaped distribution.</li> <li>The values for the genes which are not expressed (orange) have a distribution with an unusual shape, and it is strange that positive differential expressions are reported when not a single piece of RNA is counted.</li> </ol> <p>What we see here is an artefact of Limma, which affects every row of the datset. It suggests that Limma output can be biased and is not ideal for investigating cell-type translation of differential expressions.</p> <pre><code> expressed in T cells CD8+ Scriptaid: not expressed in T cells CD8+ Scriptaid: : -. for genes not expressed at in T cells CD8+ </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2F7134e68a91b05970eba9705209a7ed46%2Fmixture1.png?generation=1701390193470010&amp;alt=media" alt=""></p> <pre><code> expressed in Myeloid cells Foretinib: not expressed in Myeloid cells Foretinib: : . for genes not expressed at in Myeloid cells </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2Ff2d00ede4d7735ddf76f1c16aec6fe79%2Fmixture2.png?generation=1701390213955949&amp;alt=media" alt=""></p> <h2>An ideal training set</h2> <p>In the competition overview, the organizers ask: <em>Do you have any evidence to suggest how you might develop an ideal training set for cell type translation beyond random sampling of compounds in cell types? What is the relationship between the number of compounds measured in the held-out cell types and model performance?</em></p> <p>I think we are not yet ready to answer these questions. We first need cleaner data (and more of it):</p> <ul> <li>Cell types must be classified correctly. 
This may imply that we limit the scope of the work to compounds which do not hamper cell type classification.</li> <li>Samples containing too few cells must be eliminated from the dataset. These samples just add hay to the haystack where we want to find the needle.</li> <li>Even if we have many cells, genes with low rna counts may need to be eliminated. Otherwise they add even more hay to the haystack.</li> </ul> <p>Second, modeling strange t-scores of genes which are never expressed is a waste of time. We need to define a machine-learning task and a metric which reward biological insight rather than forcing people into modeling the noise created by upstream processing steps:</p> <ul> <li>As t-scores are always affected by cell counts and variance estimates, a metric based on less highly-processed data (i.e., log-fold changes or rna counts rather than log10pvalues or t-scores) may lead research into a better direction.</li> <li>Even with log-fold changes, genes with low rna count make more noise than genes with high rna count. A suitable metric should account for this fact.</li> </ul> <h1>Model design</h1> <h2>T-scores are better than log10pvalues</h2> <p>Limma performs t-tests. t-scores are (almost) normally distributed, which is good for machine learning inputs. For this competition, the t-scores were nonlinearly transformed to log10pvalues. The transformation squeezes the nice bell shape into a distribution with a much higher kurtosis.</p> <p>My machine learning models perform better if I transform the log10pvalues into t-score in a preprocessing step, predict t-scores, and transform the predictions back afterwards. Perhaps working with log-fold changes or RNA counts would be even better.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2Fad0c81caaf72f36d9a859296565e5882%2Ft-score-is-better.png?generation=1701390233521477&amp;alt=media" alt=""></p> <h2>The models</h2> <p>I developed four models:</p> <ul> <li>Py-boost</li> <li>A recommender system based on ridge regression</li> <li>A recommender system based on k nearest neighbors</li> <li>ExtraTrees</li> </ul> <p>I first implemented the Py-boost model, derived from <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a>'s public notebook.</p> <p>I then implemented the ExtraTrees model, which resembles <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a>'s Py-boost model. All the decision trees are fully grown (i.e., overfitted). The model gets its generalization capability from noise which is added to the target-encoded features deliberately.</p> <p>I then implemented the knn <a href="https://en.wikipedia.org/wiki/Recommender_system" target="_blank">recommender system</a> to have some diversity in the ensemble. Cell types and compounds are identified with users and items, respectively; gene expression is identified with item ratings by users.</p> <p>ExtraTrees and k-nearest-neighbors share the weakness that they cannot extrapolate. Even after dimensionality reduction, our training dataset essentially consists of 614 points in a high-dimensional space, so that most of the points will lie on the convex hull. Of the 255 test points, many will lie outside the convex hull of the training points, which means that the model must extrapolate. To bring the extrapolation capability into the game, I implemented the ridge regression model. </p> <p>The models have cv scores between 0.878 (ExtraTrees) and 0.906 (Py-boost). 
Py-boost, which was the worst in cross-validation, has the best public and private lb scores (0.572 and 0.748, respectively).</p> <h2>Data augmentation</h2> <p>One of the models (k nearest neighbors) is fed with <strong>data augmentation</strong>: If we know the differential expressions for two compounds, we may assume that a mixture of the two compounds will produce a differential expression which is the average of the two single-compound differential expressions.</p> <p>I experimented with another kind of data augmentation a well: Because there are more than twice as many T cells CD4+ as either Myeloid or B cells and I knew that the cell count biases the results of Limma, I reduced the cell count of the T cells CD4+, pseudobulked them, ran them through Limma and added the results to the training data as another cell type. This augmentation improved the scores of ExtraTrees, but not to the level of Py-boost. Perhaps I should have combined the additional cell type with Py-boost…</p> <h1>Robustness</h1> <p>The robustness of my models is demonstrated in two ways:</p> <p>(1) The models are fully cross-validated. The cross-validation strategy, first documented in <a href="https://www.kaggle.com/code/ambrosm/scp-quickstart" target="_blank">SCP Quickstart</a>, ensures that the model is validated on predicting cell_type–sm_name combinations so that it knows only 17 other compounds for the same cell type. This cross-validation strategy is more robust than the ordinary shuffled KFold, where the model knows 4/5 of all compounds for the same cell type. (And it is much more robust than a simple train-test-split.)</p> <p>I have to admit, though, that I'm not happy with the cv–lb correspondence.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2F424a99a8b518dd08b96b0026f598c557%2Fcv-scheme.png?generation=1701390265025626&amp;alt=media" alt=""></p> <p>(2) For all models the performance was tested after adding Gaussian noise to the input t-scores. All models are robust against small noise. When the noise gets stronger, the knn and ExtraTrees models suffer more than Py-boost and the ridge recommender system.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7917824%2F9f9538ca1d28e570ae930339c9490145%2Fnoise.png?generation=1701409934387211&amp;alt=media" alt=""></p> <h1>Documentation and code style</h1> <p>The code is documented in the notebooks.</p> <h1>Reproducibility</h1> <p>Source code is here:</p> <ul> <li><a href="https://www.kaggle.com/code/ambrosm/scp-eda-which-makes-sense" target="_blank">EDA which makes sense ⭐️⭐️⭐️⭐️⭐️</a></li> <li><a href="https://www.kaggle.com/code/ambrosm/scp-26-py-boost-recommender-system-and-et" target="_blank">SCP #26: Py-boost, recommender system and ET</a></li> <li><a href="https://github.com/Ambros-M/Single-Cell-Perturbations-2023" target="_blank">GitHub</a></li> </ul> <h1>Conclusion</h1> <p>Let me conclude by summarizing the four main messages of this post:</p> <ol> <li>Recommender systems are a promising starting point for developing models for cross-cell-type differential gene expression prediction. 
Because of commercial interests, recommender systems are a well-researched topic, and a lot of information is available.</li> <li>Data augmentation is useful, and mixtures of compounds are a natural approach to data augmentation.</li> <li>Although Kaggle competitions with data cleaning, outlier removal and unusual metrics are entertaining, the research objective would profit from another setting. Providing clean data and scoring with a well-understood metric would help participants focus on the real topic rather than the noise in the data.</li> <li>We have seen that Limma in certain situations produces biased outputs. I hope that professional Limma users are aware of these effects and account for them when interpreting results in their research.</li> </ol>
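<p>As a small appendix to the "T-scores are better than log10pvalues" section above: the forward and inverse transforms can be sketched as below, assuming the target values are sign(logFC) * (-log10 of a two-sided p-value) and assuming some fixed degrees of freedom <code>DF</code> for the t-distribution. The actual Limma moderated t-test has its own degrees of freedom, which are not given in the write-up, so this is an illustration of the direction of the transform rather than the exact pipeline:</p> <pre><code>import numpy as np
from scipy.stats import t as t_dist

DF = 100  # assumed degrees of freedom, for illustration only (not taken from the write-up)

def signed_log10p_to_t(x, df=DF):
    # x = sign(logFC) * -log10(two-sided p-value); recover |t| from p and keep the sign
    p = np.power(10.0, -np.abs(x))
    return np.sign(x) * t_dist.isf(p / 2.0, df)

def t_to_signed_log10p(t, df=DF):
    # inverse transform: two-sided p-value from |t|, then signed -log10(p)
    p = 2.0 * t_dist.sf(np.abs(t), df)
    return np.sign(t) * (-np.log10(p))

x = np.array([-3.769, 0.5, 6.228])   # toy inputs, values reused from the text above
print(t_to_signed_log10p(signed_log10p_to_t(x)))  # recovers x up to rounding
</code></pre> <p>With such a pair of transforms, the model is trained and predicts on the (roughly normal) t-score scale, and the predictions are mapped back to signed log10 p-values for submission.</p>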
Open Problems – Single-Cell Perturbations
#30: "Melting" the Data
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for keeping up the research on this problem! I am more than happy to try and clarify. </p> <p>As I wrote in "Exploration of the Problem" I converted all input data into long format such that genes actually turned into a feature. Here is a screenshot of the "melted" data frame to make it more concrete:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F15827820%2F52cac9c1ec9653eb971174281a0cef73%2FMeltedTrainData.png?generation=1703893251100013&amp;alt=media"></p> <p>You can find the additional gene features I extracted from the ENSEMBL database in the <code>geneinfo.parquet</code> file in the <a href="https://www.kaggle.com/datasets/frenio/op2-scp-additional-cell-gene-and-mol-features?select=geneinfo.parquet" target="_blank">OP2 Additional Features Dataset</a> that I've made public. I merged this geneinfo with the training data on <code>gene</code> like shown in the following code snippet:</p> <pre><code>train_df_ginf = pd.merge(train_df, selected_geneinfo, on=, how=) </code></pre> <p>Please note that the CPU RAM on Kaggle is too low for this step to work, so this needs to be done on a different platform. I just published a notebook that shows how I added all additional features to the data (<a href="https://www.kaggle.com/code/frenio/30-op2scp-randomforest-with-additional-features/notebook" target="_blank">Random Forest with Additional Features</a>), but due to limited RAM it will not run on Kaggle, unfortunately.</p> <p>I hope this helps and I am happy to explain more, so let me know if you have further questions.</p> <p>Haha, until you came in with your 0.546 score and gave me nightmares! And towards the end, before overfitting public notebooks took over, <a href="https://www.kaggle.com/eliork" target="_blank">@eliork</a> caused nightmares for us all I suppose. </p> <p>Thank you for pointing me to this! I'll look into his solution more closely.</p> <p>Great, thank you for finding the version that I had in mind!</p> <p>Thank you to the organizers of the Open Problems – Single-Cell Perturbations competition for this incredible opportunity. </p> <p>My approach combines four model types: Tabular Models with embeddings and dense layers, fine-tuned Transformer Models, Random Forests, and XGBoosted Forests, all trained on a "melted" format of the training data. </p> <p>In the following sections, I provide a comprehensive breakdown of my approach and the specific models employed. Links to models and other resources are provided throughout the text and can also be found in Section 6.</p> <p></p> <h2>1. Integration of biological knowledge</h2> <p>My initial approach involved training Tabular Neural Networks and Transformers specifically on the categorical features available in the training data. These models excel in encoding complex relationships within their embedding layers. Concurrently, I planned to augment the training data with information sourced from biological databases and web searches, and I intended to incorporate calculated molecular descriptors derived from SMILES. 
This enriched dataset would then be used to train tree-based models.</p> <h3>Molecular Descriptors</h3> <p>I calculated 1600+ molecular descriptors using the <a href="https://jcheminf.biomedcentral.com/articles/10.1186/s13321-018-0258-y" target="_blank">mordred</a> python library and trained a random forest using all molecular descriptors in order to select the most important features (importance &gt; 0.5 %) based on the feature importance results of the training run, which yielded a set of 23 molecular descriptors. All molecular descriptors as well as the list of most important descriptors can be found in the <a href="https://www.kaggle.com/datasets/frenio/op2-scp-additional-cell-gene-and-mol-features" target="_blank">OP2 Additional Features Dataset</a> in the files <code>mol_descriptors.parquet</code> and <code>important_mol_descriptors.csv</code>, respectively.</p> <pre><code> rdkit, rdkit.Chem mordred, mordred.descriptors calc = mordred.Calculator(mordred.descriptors, ignore_3D=) molecules = [rdkit.Chem.MolFromSmiles(smi) smi compounds[]] features = calc.pandas(molecules) </code></pre> <p>Here <code>compounds['SMILES']</code> contained a list of unique SMILES strings from the training data.</p> <h3>Gene Information</h3> <p>Gene information was obtained from the ENSEMBL database through the <a href="https://jrderuiter.github.io/pybiomart/index.html" target="_blank">pybiomart</a> python package. Note that gene info could only be obtained for 13,423 of the 18,211 genes in this way.</p> <pre><code> pybiomart Server server = Server(host=) dataset = server[][] results = dataset.query(attributes=[, , , , ], filters={: id_filter, : }) </code></pre> <p>From there I added additional features. For example, the number of each nucleobase (ACGT) obtained from the sequence data seemed particularly useful, according to feature importance results evaluated after training of Random Forests. A data frame containing added gene info can be found in the <a href="https://www.kaggle.com/datasets/frenio/op2-scp-additional-cell-gene-and-mol-features" target="_blank">OP2 Additional Features Dataset</a> as <code>geneinfo.parquet</code>.</p> <h3>Cell Features</h3> <p>Finding useful features that distinguish the 6 different cell types in the training data was a challenge for me. I ended up browsing Wikipedia, PubMed, and the <a href="https://www.immunology.org/public-information/bitesized-immunology/cells" target="_blank">website</a> of the British Society for Immunology for information that I could add as cell features. The resulting data frame can be found in the <a href="https://www.kaggle.com/datasets/frenio/op2-scp-additional-cell-gene-and-mol-features" target="_blank">OP2 Additional Features Dataset</a> as <code>cellinfo.csv</code>.</p> <h3>Result of Direct Integration of Biological Knowledge</h3> <p>The best result that I was able to obtain using a tree-based model after adding the features described above was a Random Forest that achieved scores of 0.628 and 0.836 on the public and private leaderboards, respectively. For comparison, my best Random Forest trained on embeddings learned by a Tabular Neural Network (TabMod NN) achieved scores of 0.588 and 0.780.</p> <h3>Embeddings</h3> <p>Due to the inadequacy of direct integration of molecular descriptors, gene, and cell information, I decided to train Random and XGBoosted Forests on embeddings learned by the TabMod Neural Network. 
The use of embeddings significantly improved my results and might reasonably be considered an indirect form of "integration of biological knowledge," because neural networks like TabMod NN are able to uncover and encode complex, often non-linear relationships inherent in biological systems into these embedding vectors. The embeddings obtained from TabMod NN can be found in the <a href="https://www.kaggle.com/datasets/frenio/op2-single-cell-perturbations-tabmodnn-embeddings" target="_blank">OP2 TabMod NN Embeddings Dataset</a>.</p> <p>Plots of the <code>cell_type</code>, <code>sm_name</code>, and <code>gene</code> embeddings after PCA using 2 components as well as the corresponding code can be found in the <a href="https://www.kaggle.com/code/frenio/30-op2scp-look-at-tabmod-nn-embeddings" target="_blank">Look at Embeddings Notebook</a>. The plot of the <code>gene</code> embeddings shows two large distinct clusters – one very dense, the other less dense. The red marks with labels show a random selection of gene names to be displayed.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F15827820%2Fa3a4ee89d9b9f0bc7d51f7c55e6faad6%2Fgene_embeds.png?generation=1702353451391406&amp;alt=media" alt="Gene Embeddings"></p> <h2>2. Exploration of the problem</h2> <p>At first glance, the training data seemed like a collaborative filtering problem, where cell types are the users, drugs are the items, and gene expression confidence values are the targets. However, thinking about it that way leads to 18211 collaborative filtering problems – one for every gene – and I was intimidated by the idea of training 18211 models to come up with a single submission. Then I realized that collaborative filtering is just a special case of a tabular modeling problem with two features and one target, which led to the idea to "melt" the training data and the test data (convert from wide to long format):</p> <pre><code>train_df = df_de_train.melt(id_vars=[, ], value_vars=df_de_train.iloc[:,:].columns, var_name=, value_name=) </code></pre> <p>Melting of the training data results in a data frame with 11,181,554 rows (11,181,554/614 = 18,211), three features (<code>cell_type</code>, <code>sm_name</code> and <code>gene</code>), and one target (signed -log(p-value) called <code>value</code>). I used this data format for all my models, which then had to predict only one target for each <code>cell_type</code>/<code>sm_name</code>/<code>gene</code> combination. Inference using a model trained on data in that format yielded a list of 4,643,805 prediction values, which I just reshaped back into the submission format of 255 x 18,211 (see e.g. my <a href="https://www.kaggle.com/code/frenio/30-op2scp-random-forest-with-tabmod-embeds" target="_blank">Random Forest Notebook</a>).</p> <h2>3. Model design</h2> <p>For final submission, I used an ensemble of 4 different model types: Tabular Model Neural Networks, fine-tuned Transformer Models, Random Forests, and XGBoosted Forests. These are described in more detail below. 
The ensemble that achieved the placement in position 30 of the private leader board (with scores of 0.553 and 0.753 in the public and private leaderboards, respectively) had the following structure:</p> <p>5 TabMod NNs using 600 dimensions for gene embeddings and random seeds 42, 55, 120, 457, and 736 and 5 TabMod NNs using 1000 dimensions for gene embeddings and random seeds 42, 199, 550, 855, and 970:</p> <pre><code>df1 = df1_1* + df1_2* + df1_3* + df1_4* + df1_5* + df1_6* + df1_7* + df1_8* + df1_9* + df1_10* </code></pre> <p>5 Random Forests using random seeds 209, 569, 739, 885, and 926:</p> <pre><code>df2 = df2_1* + df2_2* + df2_3* + df2_4* + df2_5* </code></pre> <p>5 XGBoosted Forests using random seeds 117, 150, 234, 624, and 804:</p> <pre><code>df3 = df3_1* + df3_2* + df3_3* + df3_4* + df3_5* </code></pre> <p>5 fine-tuned Transformer Models, TinyBioBert and DeBERTa-v3-small (see details below):</p> <pre><code>df4 = df4_1* + df4_2* + df4_3* + df4_4* + df4_5* </code></pre> <p>The weights used for the final submission were:</p> <pre><code>submission = df1* + df2* + df3* + df4* </code></pre> <p>The following figure shows a correlation heat map of the predictions of the different model types (see the <a href="https://www.kaggle.com/code/frenio/30-op2scp-correlation-of-predictions" target="_blank">Prediction Correlation Notebook</a> for the source code):</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F15827820%2F261951234c01103ce2cb74c09c7dad41%2Fprediction_correlation.png?generation=1702353628253144&amp;alt=media" alt="Prediction Correlation"></p> <p>In the subsequent sections I present detailed descriptions of each model type, along with links to the relevant Kaggle notebooks. These notebooks are designed to run on Kaggle. However, the fine-tuning of transformer models, utilizing the full training data, was conducted on SaturnCloud. To adapt these notebooks for Kaggle, the training data was substantially reduced. As a result, the transformer models showcased on Kaggle are for demonstrative purposes only and do not achieve scores indicative of their full potential in the competition.</p> <h3>TabMod NN</h3> <p>TabMod NN performance was best when the training data was denoised using PCA and 10 components prior to training (see the <a href="https://www.kaggle.com/code/frenio/30-op2scp-tabular-model-nn-with-pca10-denoising" target="_blank">TabMod NN Notebook</a> for details and code). The two kinds of tabular neural networks used for final submission were identical except for the dimensionality (600 and 1000) of gene embeddings used. The models were based on fast.ai’s <code>tabular_learner</code> with the following configuration:</p> <pre><code>learn = tabular_learner(dls, y_range=(y_min, y_max), emb_szs=emb_szs, layers=[, , ], n_out=, loss_func=F.mse_loss) </code></pre> <p>To achieve a gene embedding size of 1000 instead of the default 389, the <code>emb_szs</code> dictionary was customized.</p> <pre><code>emb_szs = {: , : , : } </code></pre> <p>The range of the final sigmoid layer was set to <code>y_min</code> to <code>y_max</code> which were obtained by determining the minimum and maximum target values in the training data after PCA denoising. The three dense layers of size 1000, 500, and 250 were optimized for appropriate expressivity. Finally, the mean squared loss function was chosen to best reflect the evaluation in the leaderboard. 
Experiments using mean absolute error, huber loss, and log-cosh did not lead to improvements of the model and instead reduced model performance.</p> <h3>Random Forest</h3> <p>Random Forests (see <a href="https://www.kaggle.com/code/frenio/30-op2scp-random-forest-with-tabmod-embeds" target="_blank">Random Forest Notebook</a>) were trained after adding embeddings for <code>cell_type</code>, <code>sm_name</code>, and <code>gene</code> to the training data. These embeddings were obtained from a TabMod NN similar to the one described in the previous section with gene embeddings of dimensionality 1000 but without the use of PCA denoising on the training data (see <a href="https://www.kaggle.com/code/frenio/30-op2scp-export-tabmod-nn-embeddings" target="_blank">Export Embeddings Notebook</a> for the code). Of those the <code>cell_type</code> embeddings of dimensionality 5 were left unchanged, whereas <code>sm_name</code> and <code>gene</code> embeddings (of dimensionality 26 and 1000, respectively) were reduced to 10 components each using PCA.</p> <p>All Random Forest models used in the final submission were identical except for the random seeds used. They were based on scikitlearn’s <code>RandomForestRegressor</code> using 100 trees on 66 % of samples in order to achieve a meaningful out-of-bag error and score. The square root of feature number was used as the maximum number of features per tree which has been <a href="https://scikit-learn.org/stable/auto_examples/ensemble/plot_ensemble_oob.html" target="_blank">shown</a> to be beneficial for the model’s ability to generalize. The minimum number of samples per leaf was set to 5 in order to achieve high expressivity. Such a Random Forest could be trained using the following example code:</p> <pre><code> sklearn.ensemble RandomForestRegressor m = RandomForestRegressor(n_jobs=-, n_estimators=, max_samples=, max_features=, min_samples_leaf=, oob_score=) m.fit(xs, y) </code></pre> <h3>XGBoosted Forest</h3> <p>The same embeddings used to train Random Forests were also used to train XGBoosted Forests using the same dimensionality reductions.</p> <p>The XGBoosted Forests used in the final submission were also identical to each other except for the random seed used. They were trained using the following code (see <a href="https://www.kaggle.com/code/frenio/30-op2scp-xgboosted-forest-with-tabmod-embeds" target="_blank">XGBoosted Forest Notebook</a> for details):</p> <pre><code> xgboost xgb m = xgb.XGBRegressor(device=, n_estimators=, learning_rate=, max_depth=, min_child_weight=, gamma=, subsample=, colsample_bytree=) m.fit(xs, y) </code></pre> <p>The model parameters were optimized using the cross validation scheme proposed by <a href="https://www.kaggle.com/code/ambrosm/scp-quickstart?scriptVersionId=144293041&amp;cellId=8" target="_blank">AmbrosM</a>, where all but 10 % of the data of one of four cell types is used as a validation set. It should be noted that the performance of the model could potentially be enhanced by configuring the <code>max_depth</code> parameter to values exceeding 20. This approach was not extensively explored due to the higher computational costs associated with larger values.</p> <h3>Transformer Models</h3> <p>Two kinds of Transformer models from the <a href="https://huggingface.co/docs/transformers/index" target="_blank">Huggingface Transformers</a> data base were fine-tuned to output a number when given a standardized input sentence. 
The general model configuration is shown in the following code snippet:</p> <pre><code> transformers TrainingArguments, Trainer args = TrainingArguments(, save_steps=steps, learning_rate=, warmup_ratio=, lr_scheduler_type=, fp16=, evaluation_strategy=, per_device_train_batch_size=bs, per_device_eval_batch_size=bs*, num_train_epochs=epochs, weight_decay=, report_to=, seed=random_seed) model = AutoModelForSequenceClassification.from_pretrained(model_nm, num_labels=) </code></pre> <p>The argument <code>num_labels=1</code> configures the final linear layer of the model to have a single output neuron which is appropriate for a regression task, as it leads to prediction of a single continuous value.</p> <p>The first model was based on the <a href="https://huggingface.co/microsoft/deberta-v3-small" target="_blank">deberta-v3-small</a> model (<a href="https://openreview.net/forum?id=XPZIaotutsD" target="_blank">link to article</a>) with 44M parameters pre-trained on general text data. The "sparse" input used for the DeBERTa model was:</p> <pre><code>trdf[] = + trdf.cell_type + + trdf.sm_name + + trdf.gene </code></pre> <p>Fine-tuning the DeBERTa for 20-25 epochs on an A10 GPU using an 80/20 train-valid-split and a batch size of 256 took about 48-60 hours (see <a href="https://www.kaggle.com/code/frenio/30-op2scp-transformer-deberta-v3-small-demo" target="_blank">Transformer DeBERTa (Demo) Notebook</a> for a training demo that runs on Kaggle in ~2 hours).</p> <p>The second model was based on the <a href="https://huggingface.co/nlpie/tiny-biobert" target="_blank">tiny-biobert</a> model (<a href="https://doi.org/10.48550/arxiv.2209.03182" target="_blank">link to article</a>) with 15M parameters pre-trained on citations and abstracts of biomedical literature using the PubMed dataset. The "verbose" input used for the TinyBioBert model was:</p> <pre><code>trdf[] = + trdf.gene + + trdf.cell_type + + trdf.sm_name + </code></pre> <p>Fine-tuning the TinyBioBert model for 20-30 epochs on an A10 GPU using an 80/20 train-valid-split and a batch size of 256 took about 24-30 hours (see <a href="https://www.kaggle.com/code/frenio/30-op2scp-transformer-tinybiobert-demo" target="_blank">Transformer TinyBioBert (Demo) Notebook</a> for a training demo that runs on Kaggle in ~1:30 hours).</p> <p>Interestingly, the "sparse" input used for the DeBERTa model did not lead to very good performance in the TinyBioBert model, whereas the more "verbose" input used for the TinyBioBert model led to slower training and worse results in the DeBERTa model. This is reflected in the following submission scores (all trained for 5 epochs using the same random seed of 42):</p> <pre><code> public LB privateLB DeBERTa sparse: DeBERTa verbose: TinyBioBert sparse: TinyBioBert verbose: </code></pre> <p>The transformer part of the final ensemble consisted of four TinyBioBert models. These were trained for 30 epochs using seed 42, 30 epochs using seed 546, 20 epochs using seed 419, and 5x5 epochs using different random seeds. Each model was assigned a weight of 0.15 in the Transformer sub-ensemble. The single DeBERTa model used was trained for a total of 22 epochs using different random seeds, and was assigned a weight of 0.4.</p> <h2>4. 
Robustness</h2> <p>The robustness of each individual model type was evaluated using triplicate submissions with different random seeds by calculating the mean and standard deviation of the scores achieved on the public and private leaderboard (in the case of transformers, the models also varied slightly in the number of epochs trained). </p> <p>TabMod NN (600): 0.567 ± 0.003 public, 0.777 ± 0.007 private.<br> TabMod NN (1000): 0.562 ± 0.002 public, 0.778 ± 0.003 private</p> <p>Random Forest: 0.592 ± 0.004 public, 0.781 ± 0.002 private.</p> <p>XGBoosted Forest: 0.588 ± 0.002 public, 0.780 ± 0.001 private.</p> <p>Transformer (DeBERTa) 22-25 epochs: 0.606 ± 0.013 public, 0.784 ± 0.007 private.<br> Transformer (TinyBioBert) 25-30 epochs: 0.607 ± 0.003 public, 0.777 ± 0.004 private.</p> <p>These results show that the fine-tuned DeBERTa model is the least robust of the models. The fine-tuned TinyBioBert model consistently performed as well as the TabMod NNs on the private leaderboard, but consistently looked worse on the public leaderboard.</p> <h2>5. Documentation &amp; code style</h2> <p>The code is documented in the notebooks linked throughout the notebook. Additionally, a list of links to all notebooks and datasets can be found in Section 6.</p> <h2>6. Reproducibility</h2> <p>The source code and further documentation for all models as well as additional datasets can be accessed through the following links:</p> <p>Model Notebooks:<br> <a href="https://www.kaggle.com/code/frenio/30-op2scp-tabular-model-nn-with-pca10-denoising" target="_blank">TabMod NN Notebook</a><br> <a href="https://www.kaggle.com/code/frenio/30-op2scp-random-forest-with-tabmod-embeds" target="_blank">Random Forest Notebook</a><br> <a href="https://www.kaggle.com/code/frenio/30-op2scp-xgboosted-forest-with-tabmod-embeds" target="_blank">XGBoosted Forest Notebook</a><br> <a href="https://www.kaggle.com/code/frenio/30-op2scp-transformer-deberta-v3-small-demo" target="_blank">Transformer DeBERTa (Demo) Notebook</a><br> <a href="https://www.kaggle.com/code/frenio/30-op2scp-transformer-tinybiobert-demo" target="_blank">Transformer TinyBioBert (Demo) Notebook</a></p> <p>Other Notebooks:<br> <a href="https://www.kaggle.com/code/frenio/30-op2scp-look-at-tabmod-nn-embeddings" target="_blank">Look at Embeddings Notebook</a><br> <a href="https://www.kaggle.com/code/frenio/30-op2scp-correlation-of-predictions" target="_blank">Prediction Correlation Notebook</a></p> <p>Datasets:<br> <a href="https://www.kaggle.com/datasets/frenio/op2-scp-additional-cell-gene-and-mol-features" target="_blank">OP2 Additional Features Dataset</a><br> <a href="https://www.kaggle.com/datasets/frenio/op2-single-cell-perturbations-tabmodnn-embeddings" target="_blank">OP2 TabMod NN Embeddings Dataset</a></p> <p>This comment used to contain Sections 3 to 6 of my solution write-up because a bug kept me from posting it in whole. Now that the bug is fixed, I replaced the original comment with this notice and added Sections 3 to 6 to the main topic.</p> <p>Thank you for reading and for providing so many useful public notebooks to get people started in this competition! </p> <p>I do not know how to interpret these two gene clusters, unfortunately, but do I recall correctly that you also found two clusters of genes in one of your earlier public EDA notebooks using a different clustering approach? (I wasn't able to find it just now.) 
Happy to discuss more if you have ideas.</p> <p>PS: I am not sure I understand what you mean by "key submit", but if you're interested in my final submission notebook, I just made it public and you can find it <a href="https://www.kaggle.com/code/frenio/open-problems-super-ensemble-submission/notebook" target="_blank">here</a>. I only used that notebook to load and manually blend the individual model predictions, though. Please let me know if I can provide any other resources that might be helpful.</p>
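<p>To complement the melting of the training data shown above, here is a minimal sketch of the matching inference step: build the long-format test frame from <code>id_map.csv</code>, predict one value per <code>cell_type</code>/<code>sm_name</code>/<code>gene</code> row, and reshape the flat vector back into the 255 × 18,211 submission layout. The prediction call is replaced by a placeholder so the sketch stays self-contained, and the exact way the gene columns are selected is an assumption:</p> <pre><code>import numpy as np
import pandas as pd

de_train = pd.read_parquet("de_train.parquet")
id_map = pd.read_csv("id_map.csv")              # columns: id, cell_type, sm_name
gene_cols = list(de_train.columns[5:])          # assumed: the 18,211 gene columns follow the metadata columns

# long-format test frame: one row per (cell_type, sm_name, gene), i.e. 255 * 18211 = 4,643,805 rows
test_long = id_map[["cell_type", "sm_name"]].merge(pd.DataFrame({"gene": gene_cols}), how="cross")

preds = np.zeros(len(test_long))                # placeholder for model.predict(test_long)

# reshape back to the wide submission format (255 x 18,211); row order follows id_map
submission = pd.DataFrame(preds.reshape(len(id_map), len(gene_cols)), columns=gene_cols)
submission.insert(0, "id", id_map["id"].values)
submission.to_csv("submission.csv", index=False)
</code></pre> <p>With a real model, <code>preds</code> would come from predicting on <code>test_long</code> in exactly this row order, so that the reshape restores the intended (combination, gene) layout.</p>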
Open Problems – Single-Cell Perturbations
... & I'd like to thank Kaggle, the challenge host, and everyone who made their notebooks public.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>If the columns with negative correlation are ignored and the combination factor is considered equal to 0.10: (best)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4288268%2Fef7cd49e7d2de235c50a95e56878716d%2Fjj103.png?generation=1701635101435479&amp;alt=media" alt=""></p> <p>If the columns with negative correlation are ignored and the combination factor is considered equal to 0.15:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4288268%2Fba47781ff043bc86bfcb253ecc46c706%2Fjj104.png?generation=1701635185955055&amp;alt=media" alt=""></p> <p>But if all columns are considered with a combination factor equal to 0.10:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4288268%2Fd883d0f0b8965c4ca8f9731868383330%2Fjj105.png?generation=1701635231696726&amp;alt=media" alt=""></p> <p>I think the best general Py-boost notebook is from <a href="https://www.kaggle.com/ambrosm" target="_blank">@ambrosm</a>. which obtained these results:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4288268%2F7f3096cde929117e24e7f9a73d484bde%2Fjj101.png?generation=1701628945608551&amp;alt=media" alt=""></p> <p>If you are careful when Ensembling the results of this notebook with 0.720 and do not consider the columns that have a negative correlation and only combine the rest of the columns with a factor of 0.05, the results will improve. I just did this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4288268%2Fbcc464865525ec68ddb05de07b057ece%2Fjj102.png?generation=1701628983539562&amp;alt=media" alt=""></p> <p>We named this type of results "Results of impure golden". If you are interested in this topic, take a look at the notebooks I mentioned above. Also, the three notebooks below are exactly about the same topic. I hope that we will publish all these topics in one coherent article soon.</p> <p><a href="https://www.kaggle.com/code/mehrankazeminia/1-tps22nov-pseudo-genetic-algorithm" target="_blank">https://www.kaggle.com/code/mehrankazeminia/1-tps22nov-pseudo-genetic-algorithm</a></p> <p><a href="https://www.kaggle.com/code/mehrankazeminia/2-tps22nov-results-of-impure-golden-eda" target="_blank">https://www.kaggle.com/code/mehrankazeminia/2-tps22nov-results-of-impure-golden-eda</a></p> <p><a href="https://www.kaggle.com/code/mehrankazeminia/3-tps22nov-golden-results-knn-lgbm" target="_blank">https://www.kaggle.com/code/mehrankazeminia/3-tps22nov-golden-results-knn-lgbm</a></p> <p>Hi; thank you<br> We usually follow two rules:</p> <p>1- When two columns have a negative correlation:<br> Instead of their Ensemble, choose one of them alone. (Probably the column that is used as a support and maybe has a lower general score will be selected)</p> <p>2- When two columns have a correlation close to one:<br> Instead of combining them, choose one of them. 
(Probably the column that is used as the main one and maybe has a higher general score will be selected)</p> <p>Please note that:</p> <p>In many cases, only the final results of the calculations are accessible, that is, there is no access to the validation data. In these cases, Ensembling is possible by trial and error method, and actually Ensembling should be done in the dark.</p> <p>But when the number of answer columns is more than one, the darkness increases. Because it is not known that after finding the right coefficient for ensembling the first columns, the same coefficient is optimal for ensembling the next columns.</p> <p>In this challenge, the predict contains more than eighteen thousand columns. So if a coefficient is chosen for the ensembling of the first column, it cannot be sure that the same coefficient is optimal for more than eighteen thousand ensembling.</p> <p>Can the benefits of ensemble be ignored? Certainly not. So what to do?</p> <p>This is a complex problem and there is no unique answer for all challenges. For example, in this challenge, due to the large number of columns, we decided to make the most of the correlation value of the columns.</p> <p>We previously used Comparative Method and Snap to Grid in the Indoor Location &amp; Navigation challenge:</p> <p><a href="https://www.kaggle.com/code/mehrankazeminia/1-3-indoor-navigation-cost-minimization-floor/notebook" target="_blank">https://www.kaggle.com/code/mehrankazeminia/1-3-indoor-navigation-cost-minimization-floor/notebook</a></p> <p><a href="https://www.kaggle.com/code/mehrankazeminia/2-3-indoor-navigation-comparative-method" target="_blank">https://www.kaggle.com/code/mehrankazeminia/2-3-indoor-navigation-comparative-method</a></p> <p><a href="https://www.kaggle.com/code/mehrankazeminia/3-3-g6-snap-to-grid-fix-the-timestamps" target="_blank">https://www.kaggle.com/code/mehrankazeminia/3-3-g6-snap-to-grid-fix-the-timestamps</a></p> <p>In the Tabular Playground Series challenge - Jul 2021, we used the Smart Ensembling method:<br> <a href="https://www.kaggle.com/code/mehrankazeminia/2-tps-jul-21-smart-ensembling" target="_blank">https://www.kaggle.com/code/mehrankazeminia/2-tps-jul-21-smart-ensembling</a></p> <p>In the Tabular Playground Series - Jul 2022 challenge, we used the Clustering-Ensembling method:<br> <a href="https://www.kaggle.com/code/mehrankazeminia/3-3-tps22jul-clustering-ensembling" target="_blank">https://www.kaggle.com/code/mehrankazeminia/3-3-tps22jul-clustering-ensembling</a></p> <p>Despite the public score, private score, mistake in selection or mistake in the categories of samples, etc., it is clear that looking at the rows can be misleading. But looking independently at each prediction column, for their Ensembling, there is no risk. Even if the prediction columns are not independent of each other. We tried this issue thousands and thousands of times and even at the end of this competition, when the private scores were determined, the issue became clear to us once again.</p> <p>We are looking for a coefficient for the linear combination of two lists (a pair of lists). But to combine two pairs of lists, we need to look for two coefficients. For eighteen thousand pairs of lists, we should not expect a coefficient to be the best coefficient and the optimal coefficient.</p> <p>Let me give you an example: you may remember that for the initial Ensembling, we even used a general score of 0.720, because it would have made the result better. Why? Because it worked well for only a few columns. 
If in this match, instead of eighteen thousand columns, we were to predict only one column, this would never have happened and we should not have used the general score of 0.720.</p> <p>The prediction in this competition included more than eighteen thousand columns, which may have less history in machine learning competitions, while the number of features was also very limited. These themes made every detail seem very important. Even regardless of the result of the match, it was definitely a good experience for us.</p> <p>In this challenge, there are only two features, namely "cell_type" and "sm_name". That's why we used "Feature Augmentation". We added two new columns (two new features) separately for each prediction column as follows:</p> <p>If we separate the cells based on 'cell_type' and assume that the drugs will usually have similar responses on each of these divisions, we can hope that by finding the average effects, we have obtained a new feature. For example, for y0 and the new feature of zero column, the correlation coefficient is 0.24. Of course, this amount is repeated for other columns as well.</p> <p>Also, if we separate the cells based on 'sm_name', we get a new feature by finding the average effects. In this case, for y0 and the new feature of column zero, the correlation coefficient is 0.62, and this value is almost repeated for other columns.</p> <p>In addition, we added other features by using "SMILES", which were used for all prediction columns at the same time. At first, we added about five hundred binary features using "fragments of SMILES" and then we added about two thousand new binary features using "morgan fingerprint from SMILES".</p> <p>We have publicly released two notebooks that cover the above topicss:</p> <p><a href="https://www.kaggle.com/code/mehrankazeminia/1-op2-eda-linearsvr-regressorchain" target="_blank">https://www.kaggle.com/code/mehrankazeminia/1-op2-eda-linearsvr-regressorchain</a></p> <p><a href="https://www.kaggle.com/code/mehrankazeminia/3-op2-feature-augment-fragments-of-smiles" target="_blank">https://www.kaggle.com/code/mehrankazeminia/3-op2-feature-augment-fragments-of-smiles</a></p> <p>Finally, the results of LinearSVR and neural network and NLP and PYBOOST were combined and we used "Separately Ensembling for Each Column". That is, Ensembling was done with different coefficients (based on the correlation value).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4288268%2Ffc0fcd5fe67f70542c4e5409a76b81ad%2Fphoto_2023-12-03_01-15-24.jpg?generation=1701553548981440&amp;alt=media" alt=""></p>
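<p>A minimal sketch of the "Separately Ensembling for Each Column" rule described above, assuming two wide-format prediction DataFrames with identical columns (a main prediction and a supporting one): columns where the two predictions correlate negatively keep the main prediction, and the remaining columns are blended with a small factor such as 0.10. The function and variable names are illustrative, not taken from the notebooks:</p> <pre><code>import pandas as pd

def columnwise_blend(main: pd.DataFrame, support: pd.DataFrame, factor: float = 0.10) -> pd.DataFrame:
    # per-column Pearson correlation between the two predictions
    corr = main.corrwith(support)
    # blend only the positively correlated columns; all other columns keep the main prediction
    blend_cols = corr[corr.fillna(-1.0) > 0].index
    blended = main.copy()
    blended[blend_cols] = (1 - factor) * main[blend_cols] + factor * support[blend_cols]
    return blended
</code></pre> <p>In practice the factor (and whether a column is blended at all) can be tuned per pair of submissions, which is what the screenshots above illustrate with factors of 0.05, 0.10 and 0.15.</p>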
Open Problems – Single-Cell Perturbations
10th Place Solution for the Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! Your feature engineering ideas are great! And I really appreciate you sharing your ideas and implementation process with us. I also have a lot to learn. I just updated the solution page and added a simple instruction notebook about training Part A: <a href="https://www.kaggle.com/code/mori123/single-cell-perturbations-part-a-model-training" target="_blank">"Single-Cell Perturbations(Part A-Model Training)"</a>.</p> <p>Thanks to Kaggle for organizing such an interesting competition.<br> Thanks to the teammates who fought side by side. And other Kagglers who share various ideas.</p> <h1>Context</h1> <p>• <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">Competition Overview</a></p> <p>• <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">Competition Data</a></p> <h1>Overview of the approach</h1> <p>Overall, our final submitted result is an ensemble of two parts. </p> <p><strong>Final submission = Part A×0.7 + Part B×0.3</strong></p> <p>We will explain the composition of Part A and Part B respectively.</p> <h1>Part A</h1> <p>It is an ensemble composed of neural networks with different structures.</p> <h3>Feature engineering</h3> <p>After many attempts, we finally adopted the following two features from the public notebook <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-feature-engineering" target="_blank">"OP2: feature engineering"</a> as our training and testing feature.</p> <p><em>(1). PCA followed by target encoding (cell type + drug) without noise (pca_target_encoded_features)</em> <br> <em>(2). PCA followed by target encoding (cell type + drug) with noise (pca_target_encoded_features_s0.1)</em></p> <p>(1) is subjected to PCA on 18,211 target variables and produced features representing cell type means for drugs and cell type means for compounds. And by using features (2) that add random noise, we believe this will make the model more generalizable.</p> <h3>Models</h3> <ul> <li>NN with Fully Connected Layers, as well as BatchNormalization, Dropout, ReLU, and Linear Activation Functions.</li> <li>The initial seed is set to 42 and is fixed. The loss function is mae, the optimizer is Adam.</li> <li>The structure of NN is shown below.</li> </ul> <pre><code>tf.random.set_seed() model = Sequential([ Dense(), BatchNormalization(), Activation(), Dropout(), Dense(), BatchNormalization(), Activation(), Dropout(), Dense(, activation=), Dropout(), Dense(, activation=), BatchNormalization(), Dropout(), Dense(, activation=), Dropout(), Dense(,activation= ) ]) model.(loss=, optimizer=tf.keras.optimizers.Adam(), metrics=[custom_mean_rowwise_rmse]) history = model.fit(full_features, labels, epochs=, verbose=) </code></pre> <p>A simple Part A model training process is shown in this notebook <a href="https://www.kaggle.com/code/mori123/single-cell-perturbations-part-a-model-training" target="_blank">"Single-Cell Perturbations(Part A-Model Training)"</a> .</p> <h3>Model ensemble</h3> <p>We use feature (1) and feature (2) to train the network respectively. 
By changing the feature, the number of network layers, the number of nodes, and the number of training epochs, we successfully obtained a set of individual models scoring 0.567-0.582 on LB. <br> Subsequently, we ensembled 7 models and use it as <strong>Part A</strong> (LB: 0.556/PB 0.741).</p> <p>In fact, if the weight of each model is determined based on CV during ensemble, we find that the above results can be further optimized to LB0.556/PB0.74.</p> <p>Moreover, the highest score we obtained was (LB: 0.557/PB 0.737) after combining 8 models through this method.</p> <h1>Part B</h1> <p>This part is also composed of NN and an ensemble of different models.</p> <h3>Feature engineering</h3> <p>The following two approaches are used to perform feature engineering.</p> <p>・One-hot encoding on cell_type and sm_name <br> ・SMILES(ChemBERTa-77M-MLM)</p> <h3>Models</h3> <pre><code> (nn.Module): (): (DnnV5, self).__init__() self.conv1d1 = nn.Conv1d( in_channels=, out_channels=, kernel_size=, stride=, padding=, bias=) self.batch_norm1 = nn.BatchNorm1d() self.dense1 = nn.utils.weight_norm(nn.Linear(, )) self.batch_norm2 = nn.BatchNorm1d() self.dropout2 = nn.Dropout() self.dense2 = nn.utils.weight_norm(nn.Linear(, )) self.batch_norm4 = nn.BatchNorm1d() self.dropout4 = nn.Dropout() self.dense4 = nn.utils.weight_norm(nn.Linear(, num_targets)) (): b,w = x.shape x = x.reshape(b,w,) x = self.conv1d1(x) x = x.reshape(b,) x = self.batch_norm1(x) x = F.leaky_relu(self.dense1(x)) x = self.batch_norm2(x) x = self.dropout2(x) x = F.leaky_relu(self.dense2(x)) x = self.batch_norm4(x) x = self.dropout4(x) y = self.dense4(x) y </code></pre> <p>The highest score on LB when using model = DnnV5() for each fold is 0.579. <br> For an ensemble of 5 fold + 3 seed, the best LB 0.568 for the above model structure can be obtained.</p> <p>Additionally, <a href="https://www.kaggle.com/code/ambrosm/scp-quickstart" target="_blank">"SCP Quickstart"</a> was referred to for fold creation.</p> <h3>Model ensemble</h3> <p>Multiple models were created with different smiles and model structures, and the final ensemble was created with the following ratio.</p> <pre><code>ensemble_submission = * (sub0568* +sub0570*+ sub0571*+ sub0571_2*+ sub0573*+sub0576_1* + sub0576_2*)+ lbsub567* </code></pre> <p>Among them, lbsub567 referred to the public notebook <a href="https://www.kaggle.com/code/misakimatsutomo/blend-for-single-cell-perturbations" target="_blank">"Blend for Single-Cell Perturbations"</a>. <br> With the above model ensemble, we get a score of 0.560 on LB. We use this ensemble as <strong>part B</strong>.</p> <p>Based on the above process, our final submission is <strong>Part A×0.7 + Part B×0.3</strong>, and <strong>(LB: 0.554/PB 0.741)</strong> is obtained.</p> <h1>Things that didn't work</h1> <p>• Pseudo labels</p> <p>• Feature normalization</p>
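<p>The Part A features come from the public "OP2: feature engineering" notebook linked above; as a rough, hedged sketch of the idea described there (PCA on the 18,211 targets, then cell-type and compound means of the PCA scores used as target-encoded features, optionally with added noise), something like the following could be used. The number of components, the column prefixes and the noise level are assumptions for illustration, not the notebook's exact settings:</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def pca_target_encode(de_train: pd.DataFrame, n_components: int = 35,
                      noise_std: float = 0.0, seed: int = 42) -> pd.DataFrame:
    # PCA on the numeric target columns (the 18,211 signed -log10 p-values)
    targets = de_train.select_dtypes(include="number")
    scores = pd.DataFrame(
        PCA(n_components=n_components, random_state=seed).fit_transform(targets.values),
        index=de_train.index)
    # target encoding in PCA space: per-row means over the same cell type and over the same compound
    cell_means = scores.groupby(de_train["cell_type"]).transform("mean").add_prefix("cell_pc")
    drug_means = scores.groupby(de_train["sm_name"]).transform("mean").add_prefix("sm_pc")
    feats = pd.concat([cell_means, drug_means], axis=1)
    if noise_std > 0:  # the "_s0.1"-style variant adds Gaussian noise for regularization
        feats = feats + np.random.default_rng(seed).normal(scale=noise_std, size=feats.shape)
    return feats

# feature set (1): no noise; feature set (2): with noise
# features_1 = pca_target_encode(de_train)
# features_2 = pca_target_encode(de_train, noise_std=0.1)
</code></pre>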
Open Problems – Single-Cell Perturbations
118th Place Solution for the Open Problems – Single-Cell Perturbations Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to express gratitude to Kaggle for hosting the competition and Open Problems in Single-Cell Analysis scientific collaboration for a single-cell perturbational dataset and for select 144 compounds from the Library of Integrated Network-Based Cellular Signatures (LINCS) Connectivity Map dataset (<a href="https://pubmed.ncbi.nlm.nih.gov/29195078/" target="_blank">PMID: 29195078</a>) <a href="https://www.cell.com/action/showPdf?pii=S0092-8674%2817%2931309-0" target="_blank"> L1000</a></p> <h1>1. Integration of biological knowledge</h1> <h2>Context</h2> <p>• Business context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a><br> • Data context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a></p> <h1>2. Exploration of the problem</h1> <p><a href="https://ru.wikipedia.org/wiki/%D0%9A%D0%BE%D0%BB%D0%B8%D1%87%D0%B5%D1%81%D1%82%D0%B2%D0%B5%D0%BD%D0%BD%D1%8B%D0%B9_%D0%B0%D0%BD%D0%B0%D0%BB%D0%B8%D0%B7_%D1%8D%D0%BA%D1%81%D0%BF%D1%80%D0%B5%D1%81%D1%81%D0%B8%D0%B8_%D0%B3%D0%B5%D0%BD%D0%BE%D0%B2" target="_blank">Gene expression profiling</a></p> <p>The wiki contains models such as the Generalized Linear Model, which is used by the … place :</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16791068%2F3e1cc01e2e176b1984ddebf6537d029b%2FIMG_20231220_155835.jpg?generation=1703080145948652&amp;alt=media"></p> <p>Train</p> <table> <thead> <tr> <th>Compound</th> <th>Gene 0</th> <th>1</th> <th>..</th> <th>18 211</th> </tr> </thead> <tbody> <tr> <td>1, ABT-199 (GDC-0199)</td> <td>2547</td> <td>2006</td> <td>..</td> <td>3387</td> </tr> <tr> <td>2, ABT737</td> <td>780</td> <td>381</td> <td>..</td> <td>2093</td> </tr> <tr> <td>3, AMD-070 (hydrochloride)</td> <td>3335</td> <td>3424</td> <td>..</td> <td>1355</td> </tr> <tr> <td>..</td> <td>..</td> <td>..</td> <td>..</td> <td>..</td> </tr> <tr> <td>613,Myeloid cells,YK 4-279</td> <td>3373</td> <td>1433</td> <td>..</td> <td>1618</td> </tr> </tbody> </table> <p>Inference</p> <table> <thead> <tr> <th>Compound</th> <th>Gene 0</th> <th>1</th> <th>..</th> <th>18 211</th> </tr> </thead> <tbody> <tr> <td>254,Myeloid cells,YK 4-279</td> <td>?</td> <td>?</td> <td>?</td> <td>?</td> </tr> </tbody> </table> <p>Predict: signed -log10(p-values)</p> <h1>3. Model design</h1> <p>3.1 Gene order model<br> 3.2 Plot line model<br> 3.3 Marker point on plot model<br> 3.4 Multiplier model</p> <h3>3.1 Gene order model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16791068%2F66047a0cc4015b2b6e3280bf543d6a68%2F2023-12-20%20%2015.34.41.png?generation=1703075796711416&amp;alt=media"></p> <p>Search Queries: 19,B cell, BMS-387032<br> Search Results: </p> <table> <thead> <tr> <th>Gene</th> <th>Rank 0 .. 
18 211</th> </tr> </thead> <tbody> <tr> <td>AL1173282</td> <td>2547</td> </tr> <tr> <td>AC2397982</td> <td>2006</td> </tr> <tr> <td>AC0118992</td> <td>682</td> </tr> <tr> <td>AP0056711</td> <td>3470</td> </tr> <tr> <td>…</td> <td>…</td> </tr> <tr> <td>ACAP1</td> <td>2072</td> </tr> <tr> <td>ARHGAP15</td> <td>3590</td> </tr> </tbody> </table> <p>Predict rank. Not predict value.</p> <h3>3.2 Plot line model</h3> <p>Curve class prediction Attachments compound.pdf :</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16791068%2F59437c342447b7260b22e2020552a7b0%2F2023-12-20%20%2018.47.27.png?generation=1703088550144370&amp;alt=media"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16791068%2F9f5120b3fbb36e68e40ac8feef6dfa0e%2F2023-12-20%20%2018.45.03.png?generation=1703088579086403&amp;alt=media"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16791068%2Fc043c2cd488217adf1a1fabb95f71dbe%2F2023-12-20%20%2018.40.33.png?generation=1703088612243004&amp;alt=media"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16791068%2F805ea55569935020bf6bb5a1e68b03b6%2F2023-12-20%20%2018.53.40.png?generation=1703088628570177&amp;alt=media"></p> <h3>3.3 Multiplier model</h3> <p>Predict point:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16791068%2F859c65d626ecc5312a34953294675b9a%2F2023-12-20%20%2019.11.37.png?generation=1703089042788063&amp;alt=media"></p> <p>Predict R for points.</p> <h3>3.4 Multiplier model</h3> <p>Predict zoom:<br> Magic of 4th place:</p> <p>Examine the multiplier coefficient in the attachments: Multiplier.xlsx</p> <table> <thead> <tr> <th>Gene</th> <th>Cell</th> <th>Multiplier</th> </tr> </thead> <tbody> <tr> <td>BMS-387032</td> <td>B cells</td> <td>3 ,09</td> </tr> <tr> <td>Lamivudine</td> <td>B cells</td> <td>2 ,9</td> </tr> <tr> <td>AZD-8330</td> <td>Myeloid cells</td> <td>2 ,031</td> </tr> <tr> <td>Perhexiline</td> <td>Myeloid cells</td> <td>1 ,870</td> </tr> <tr> <td>AT13387</td> <td>Myeloid cells</td> <td>1 ,675</td> </tr> </tbody> </table> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16791068%2Ffb4c98e53f29150d2bbe8c6cb587e618%2F2023-12-20%20%2021.47.14.png?generation=1703098200209837&amp;alt=media"></p> <h2>Overview of the approach</h2> <h2>Data preprocessing, feature engineering:</h2> <h2>The models</h2> <h2>Validation Strategy</h2> <h2>Details of the submission</h2> <h2>What was impactful about the submission.</h2> <h2>What was tried and didn’t work.</h2> <h1>4. Robustness</h1> <table> <thead> <tr> <th>Model</th> <th>Private</th> <th>Public</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td></td> </tr> </tbody> </table> <h1>5. Documentation &amp; code style</h1> <h2>Code samples feature engineering:</h2> <h2>Code samples model training:</h2> <h2>Code samples model inference:</h2> <h1>6. Reproducibility</h1> <table> <thead> <tr> <th>name</th> <th>link</th> </tr> </thead> <tbody> <tr> <td>118th Place notebook</td> <td><a href="https://www.kaggle.com/emmawilsonev/118th-place-solution-for-the-open-problems-singl" target="_blank">https://www.kaggle.com/emmawilsonev/118th-place-solution-for-the-open-problems-singl</a></td> </tr> </tbody> </table> <h2>Helpful notebooks:</h2>
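<p>Returning to the gene order model in 3.1 ("Predict rank. Not predict value."): here is a minimal sketch of how the differential-expression table can be turned into per-row gene ranks. The column slicing and the rank-to-value back-mapping are assumptions added for illustration and are not spelled out in the draft above.</p> <pre><code>import numpy as np
import pandas as pd

# de_train: one row per (cell type, compound), columns hold 18,211 signed -log10 p-values
de = pd.read_parquet("de_train.parquet")
gene_cols = de.columns[5:]                        # assumption: the leading columns are metadata

# Training target becomes each gene's rank within its own row instead of its raw value
ranks = de[gene_cols].rank(axis=1, method="first").astype(int)

# To map a predicted ordering back to values, one simple option is the per-rank median
# of the sorted training rows (this back-mapping is an assumption, not part of the draft).
sorted_rows = np.sort(de[gene_cols].to_numpy(), axis=1)
per_rank_value = np.median(sorted_rows, axis=0)   # value assigned to rank r (use index r-1)
</code></pre>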
Open Problems – Single-Cell Perturbations
16th Place Solution Writeup for the Open Problems – Single-Cell Perturbations (Los Rodriguez)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We finally had some time to writeup our strategy for the OP2 challenge. It was a super engaging competition, and we're really thankful to both the organizers and fellow competitors for making it such a blast! The whole experience taught us a ton, and we're happy to share what we did/discover along the way. Can't wait for the next challenge!</p> <h2>Context</h2> <p>In this competition, the main objective was to predict the effect of drug perturbations on peripheral blood mononuclear cells (PBMCs) from several patient samples. For convenience, we have created a Python package with the model here <a href="https://github.com/scapeML/scape" target="_blank">https://github.com/scapeML/scape</a>. </p> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a></li> </ul> <h2>Overview of the approach</h2> <p>Similar to most problems in biological research via omics data, we encountered a high-dimensional feature space (~18k genes) and a low-dimensional observation space (~614 cell/drug combinations) with a low signal-to-noise ratio, where most of the genes show random fluctuations after perturbation. The main data modality to be predicted consisted of signed and log-transformed P-values from differential expression (DE) analysis. In the DE analysis, pseudo-bulk expression profiles from drug-treated cells were compared against the profiles of cells treated with Dimethyl Sulfoxide (DMSO). In addition, challenge organizers also provided the raw data from the single-cell RNA-Seq experiment and from an accompanying ATAC-Seq experiment, conducted only in basal state.</p> <p>At the beginning of the challenge, we tested different models using the signed log-pvalues (“de_train” data) alone, such as simple linear models, ensembles of gradient boosting with drug and cell features, conditional variational autoencoders, etc. We soon realized that a simple Neural Network using only a small subset of genes to compute drug and cell features (median of the genes grouped by drug and cell) was enough to have a competitive model.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F311361%2F72933c3ebc89980311c9824a42bffde2%2Fnn-architecture.png?generation=1702418417955250&amp;alt=media" alt=""></p> <p>The figure above shows the final architecture used for our submission. We used a Neural Network that takes as inputs drug and cell features and produces signed log-pvalues. Features were computed as the median of the signed log-pvalues grouped by drugs and cells, calculated from the de_train.parquet file. Additionally, we also estimated log fold-changes (LFCs) from pseudobulk expression, to produce a matrix of the same shape as the de_train data but containing LFCs instead. 
We also computed the median per cell/drug as features.</p> <p>Similar to a Conditional Variational Autoencoder (CVAE), we used cell features both in the encoding part and the decoding part of the NN. Initially, the model consisted of a CVAE that was trained using the cell features as the conditional features to learn an encoding/decoding function conditioned on the particular cell type. However, after testing different ways to train the CVAE (similar to a beta-VAE with different annealing strategies for the KL term), we finally considered a non probabilistic NN since we did not find any practical advantage in this case with respect to a simpler non-probabilistic NN. </p> <h3>Neural Net</h3> <p>We created a method to parametrize the architecture of the NN and the feature extraction from different data sources. This is the code needed to create the NN through the scAPE library specifically created for this challenge: <a href="https://github.com/scapeML/scape/blob/222b19f47a32afb8d157aecd5e46b23d90b73e9d/scape/_model.py#L678" target="_blank">https://github.com/scapeML/scape/blob/222b19f47a32afb8d157aecd5e46b23d90b73e9d/scape/_model.py#L678</a></p> <p>We used <code>n_genes=64</code> (top 64 genes sorted by variance across conditions).This generates a NN with 9637475 parameters (36.76 MB). The inputs are computed from de_train and from log-fold changes calculated from pseudobulk. Cell features are duplicated both in the encoder and decoder. We did some permutations to estimate the distribution of CV errors, permuting drug and cell features, also in the encoder and decoder part. Drug features have more impact on the final error (something to expect since there are 146 datapoints per cell type + B/Myeloid drugs). For the cell features, in general we observed that when used through the encoder and decoder, the NN places more importance in the cell features on the decoder rather than the encoder. This might suggest that cell features are more important for performing a conditional decoding of the drug features which is cell-type specific.</p> <h3>Model selection</h3> <p>Using the previous NN, we did a leave-one-drug-out for NK cells, which resulted in 146 models. We used the median of the predictions from the 146 models on the cell/drugs for the submission to generate what we call <em>base predictions</em>.</p> <p>The idea of using this strategy is motivated by the fact that NK cells are the most similar ones to B/Myeloid cells. The other advantage of adopting a leave-one-drug-out approach for NK-cells is that it allows us to estimate how well the model generalizes to unseen drugs, on a per-drug basis per cell type. We also observed that in general, the median was much better than the mean for aggregating the results of the 146 models, since it is more robust to outliers (some models did not generalize well on some drugs, and early stopping selected bad models in those situations).</p> <p>We also trained a second neural network with the same hyperparameters, but this time using only the top 256 most variable genes and focusing on the 60 most variable drugs. In this second set of predictions, instead of predicting the 18211 genes, the NN predicts only the top 256 genes used as inputs. We did this because we realized the NN was learning to decide if there was an effect on a given cell type from a small set of genes (essentially, determining where to place values close to 0 in the matrix). 
We reasoned that training again on only a subset of the data, where most of the changes were concentrated, would help increase performance for that subset of genes. We generated 60 models and computed the median of the predictions, which we referred to as <strong>enhanced predictions</strong>.</p> <p>We finally replaced the base predictions with the enhanced predictions (on the subset of genes/drugs). For the final submission, to be more conservative, we mixed the predictions in 0.80/0.20 proportions (0.80 given to the enhanced predictions). <strong>We tested this strategy with several different base predictions, and it always resulted in a boost of performance, which was also the case on the private leaderboard.</strong> A reproducible notebook for the submission is available at: <a href="https://github.com/scapeML/scape/blob/main/docs/notebooks/solution.ipynb" target="_blank">https://github.com/scapeML/scape/blob/main/docs/notebooks/solution.ipynb</a>.</p> <p>One limitation of this strategy is that most of the trained models are very similar, and the blending with the median is very conservative. We also tested different CV strategies, and we found that using blendings of models trained on a 4-CV setting with handpicked drugs on both B/Myeloid cells provided better results in the private leaderboard. However, we didn't trust this strategy that much since it was not very stable and it was hard to understand how well those models were performing in particular cell/drugs combinations.</p> <h3>Baselines</h3> <p>We think that having simple baselines is important to understand 1) if the model works, and 2) how good it is. </p> <p>We decide to use two simple baselines: predicting zeros (as the baseline used in the competition, which achieves a 0.666 error in the public LB), and the median of the genes grouped by drugs (computed on the training data). The second baseline is more informative.</p> <p>We combined those baselines with our leave-on-drug-out strategy to produce plots per drug, so we could have an upper bound estimation of the generalization for each drug. Here is an example for NK and Prednisolone:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F311361%2Fba8aa85e97594aed0ab09f324528e0fa%2Fexample-nk-prednisolone.png?generation=1702418460653383&amp;alt=media" alt=""></p> <h2>Specific questions</h2> <h3>1. Use of prior knowledge</h3> <p>We decided against using LINCS data in our model because it primarily focuses on cancer cell data, which tends to have a molecular state quite distinct from the PBMCs we were investigating. Despite our exploration of published work and datasets related to predicting drug-induced changes in single-cell states, none of them encompassed the vast array of drug perturbations examined in the challenge. Additionally, we chose not to integrate external data into our approach due to concerns about handling batch effects caused by differences in laboratory settings, protocols, and other related factors.</p> <p>We've also tried to use ATAC-seq with no success. We believe that this data would be useful in the case of not having any measurement for B/Myeloid. 
However, more informative than ATAC-seq are the actual perturbational profiles on the small subset of drugs on those cells.</p> <p>Here is a summary of different features we tested:</p> <ol> <li>Dummy binary variables for cell types and drugs.</li> <li>Basal omics features, including average expression in DMSO and average accessibility per the ATAC-Seq data.</li> <li>Summary statistics of the drug response after grouping by cell type and drug, including standard deviation, mean and median.</li> <li>A “raw” fold-change computed over the raw counts of the single-cell RNA-Seq data (this is, without the corrections applied by limma).</li> <li>Centroids of the principal component space of the drug response data, using cell-type and drug as grouping variables.</li> </ol> <p>And we obtained the best results using the median of the response after grouping by cell type and drug in combination with the raw fold changes, using only a subset with the most variable genes in the dataset.</p> <h3>2. Exploration of the problem</h3> <p>We found that the error distribution for the drugs was more or less even except for the first four drugs, which accounted for 15% of the total error. As expected, we also found out that the response of drugs that were harder to predict was very different from training cell-types in comparison to the test cell-type. For instance, the drug that accounted for the maximum proportion of the error (IN1451), produced a strong response in NK cells, but seemed to have little effect in T cells CD4+, T cells CD8+ and T regulatory cells.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F311361%2Fec079268e6ac589644cf28ae1921517f%2Ffig6.png?generation=1702418626577518&amp;alt=media" alt=""></p> <p>Our approach was refined to better understand cell-type errors, aiming to identify the most challenging cell type for accurate prediction. We evaluated 15 drugs across all cell types, selecting 4 at random for testing. This test set was used for cell-type cross-validation, where the model was trained on data from all 15 drugs, excluding the 4 test drugs within a specific cell type. Our method facilitated evaluation of predictive performance for each cell type. Our findings, illustrated in the figure below, indicated that myeloid cells were more difficult to predict than others, corroborating RNA-Seq PCA analysis results.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F311361%2F86646a0d4e453e58e2856a74b029b11d%2Ffig7.png?generation=1702418748195660&amp;alt=media" alt=""></p> <p>Regarding genes, we investigated if specific biological functions were harder to predict. An enrichment analysis of the top 5% genes with the highest average error in our local CV setup was conducted using MSigDB hallmarks and <a href="https://www.kaggle.com/code/pablormier/op2-biologically-aware-dimensionality-reduction" target="_blank">decoupleR</a>. This revealed that certain hallmarks, such as epithelial mesenchymal transition and TNF alpha signaling, had a significant number of genes with high error rates:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F311361%2Fcc674087faeccb96578b85de39123a7a%2Ffig8.png?generation=1702418832195389&amp;alt=media" alt=""></p> <h3>3. Model design</h3> <p>We wanted to check if simpler models could perform just as well. So, we cut down the input features in our models. 
Considering that our architecture was simple already, we aimed to find the fewest input features that could match the local CV performance of using the top 128 genes with the highest variance.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F311361%2Ff29dbc0d67029908ca8bcf86be308426%2Ffig9.png?generation=1702418909501736&amp;alt=media" alt=""></p> <p>Interestingly, we found that models with 8 to 64 input features would achieve similar performance that the model that employed 128 features.</p> <p>Regarding explainability, even though the model is not easily interpretable, we put some extra care in understanding better how the NN behaved through the leave-on-drug-out + baselines, and by doing permutations on the input data after training a model, to asses the impact on the validation loss.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F311361%2F309451c68977bac8fa6e310a788103df%2Ffig10.png?generation=1702418986950956&amp;alt=media" alt=""></p> <p>We observed that while both components had a direct impact on the performance of our model, the mean of the errors after drug features permutation was higher compared to the average error after cell features permutation. This is something to expect, as we have more data points of gene values per drug (146 data points per cell type except B/Myeloid), but we only have 6 data points grouping by cell type. We used this type of permutation tests to estimate the importance that different features had in the CV error.</p> <h3>4. Robustness</h3> <p>Our model included a Gaussian Noise layer from Keras to perturb the input data. We used this to test CV errors for different noise levels. The following figure shows that a gaussian noise of std=0.01 w. This is the value we selected for training the final models:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F311361%2F21ba9f226003e9246391f7181ba834bb%2Ffig11.png?generation=1702419081567894&amp;alt=media" alt=""></p> <h3>5. Documentation &amp; code style</h3> <p>For convenience, we refactored the code and created a package called “scape” (<a href="https://github.com/scapeML/scape" target="_blank">https://github.com/scapeML/scape</a>) using <a href="https://pdm-project.org/" target="_blank">https://pdm-project.org/</a>, which contains the code that we finally used for the submission. The code is documented using numpydoc docstrings, and we included a series of notebooks using the scape package to learn how to use it and how to manually create the setup for generating our submission. We have put effort into developing a library that allows for the comfortable configuration and parameterization of neural networks, with an automatic mode for calculating diverse features from drugs and cell lines.</p> <h3>6. Reproducibility</h3> <p>In order to improve reproducibility, we show how the tool package can be installed and used directly from Google Colab <a href="https://colab.research.google.com/drive/1-o_lT-ttoKS-nbozj2RQusGoi-vm0-XL?usp=sharing" target="_blank">https://colab.research.google.com/drive/1-o_lT-ttoKS-nbozj2RQusGoi-vm0-XL?usp=sharing</a>. 
</p> <p>We also included an environment.yml file to exactly recreate the environment we used for testing using conda.</p> <h2>Sources</h2> <ul> <li><a href="https://github.com/scapeML/scape" target="_blank">https://github.com/scapeML/scape</a></li> <li><a href="https://academic.oup.com/bioinformaticsadvances/article/2/1/vbac016/6544613?login=false" target="_blank">https://academic.oup.com/bioinformaticsadvances/article/2/1/vbac016/6544613?login=false</a></li> <li><a href="https://www.kaggle.com/code/pablormier/op2-biologically-aware-dimensionality-reduction" target="_blank">https://www.kaggle.com/code/pablormier/op2-biologically-aware-dimensionality-reduction</a></li> </ul>
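<p>As an appendix to the description above, here is a minimal sketch of the two core ingredients in code: the drug-wise and cell-wise median features (with n_genes=64, as in the write-up) and the conservative 0.80/0.20 mix of enhanced and base predictions. The column handling and variable names are assumptions; the full implementation lives in the scape package linked above.</p> <pre><code>import pandas as pd

de = pd.read_parquet("de_train.parquet")
gene_cols = de.columns[5:]                                 # assumption: leading columns are metadata
top_genes = list(de[gene_cols].var().nlargest(64).index)   # 64 most variable genes

# Drug and cell features: median signed log-pvalue of the selected genes,
# grouped by compound and by cell type respectively.
drug_features = de.groupby("sm_name")[top_genes].median()
cell_features = de.groupby("cell_type")[top_genes].median()

def mix_predictions(base, enhanced, rows, cols, w=0.80):
    """Replace base predictions on the enhanced subset with a 0.80/0.20 mix."""
    out = base.copy()
    out.loc[rows, cols] = w * enhanced.loc[rows, cols] + (1.0 - w) * base.loc[rows, cols]
    return out
</code></pre> <p>Here the base predictions correspond to the median over the 146 leave-one-drug-out NK models, and the enhanced predictions to the median over the 60 models restricted to the top 256 genes and the 60 most variable drugs.</p>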
Open Problems – Single-Cell Perturbations
178th Place Solution for the Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to the organizers for hosting the Open Problems – Single-Cell Perturbations competition. Congratulations to the winners and everyone who participated, I really learnt alot from the discussions and great notebooks!</p> <p>I thought particularly great contributions and insights were from the following:</p> <p><a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a><br> <a href="https://www.kaggle.com/antoninadolgorukova" target="_blank">@antoninadolgorukova</a><br> <a href="https://www.kaggle.com/mehrankazeminia" target="_blank">@mehrankazeminia</a><br> <a href="https://www.kaggle.com/somayyehgholami" target="_blank">@somayyehgholami</a></p> <p>Context:</p> <p>Business context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a><br> Integration of Biological Knowledge:</p> <p>Even though I was not at the top of the leader-board I wanted to share my experience in this competition and my approach! </p> <p>One of the objectives of this competition is to determine how compounds are involved in changes in gene expression within cells. For this competition, the organizers designed and generated a novel single-cell perturbational dataset in human peripheral blood mononuclear cells (PBMCs). 144 compounds were selected from the Library of Integrated Network-Based Cellular Signatures (LINCS) Connectivity Map dataset (PMID: 29195078) and measured single-cell gene expression profiles after 24 hours of treatment. The experiment was repeated in three healthy human donors, and the compounds were selected based on diverse transcriptional signatures observed in CD34+ hematopoietic stem cells.</p> <p>Before I started to use ML models, I was interested in exploring the dataset within this competition to determine the compounds involved and the distribution of cell types associated with them as outlined in the following notebook:</p> <p><a href="https://www.kaggle.com/code/insiyajafferji/op-single-cell-dataset-and-cell-type-distribution" target="_blank">https://www.kaggle.com/code/insiyajafferji/op-single-cell-dataset-and-cell-type-distribution</a></p> <p>In this notebook some initial dataset exploration helped me to understand cell type differences within PBMC samples treated with the compounds. I found that NK, CD4+ T cells, CD8+ T cells and Tregs are in all PBMC samples of the compounds being treated and there are a subset of PBMCs where compounds treated also have B and myeloid cells. 
PBMCs have been reported to contain a multitude of distinct multipotent progenitor cell populations and therefore treatment of certain drug compounds could have the ability to affect cell type distribution aswell as gene expression which can have an impact on the DE/DGE analysis.</p> <p>Interestingly some notebooks have suggested that the highest bias (differences between predicted and true values) and variability of gene DE predictions are related to individual drugs rather than cell types. <br> <a href="https://www.kaggle.com/code/antoninadolgorukova/op2-analysis-of-different-models-predictions" target="_blank">https://www.kaggle.com/code/antoninadolgorukova/op2-analysis-of-different-models-predictions</a></p> <p>Based on this information, the specific embeddings that I thought would help my ML models would be based on cell type, compound (sm_name and SMILES), and DE of genes (gene name).</p> <p>Exploration of the problem:</p> <p>When I was exploring the ways to approach the challenges associated with this competition I found the following paper useful:<br> <a href="https://genomebiology.biomedcentral.com/articles/10.1186/s13059-021-02544-3" target="_blank">https://genomebiology.biomedcentral.com/articles/10.1186/s13059-021-02544-3</a></p> <p>Here, feature selection an essential technique for single-cell data analysis in high-dimensional datasets.<br> Importantly, feature selection is an effective strategy to reduce the feature dimension and redundancy and can alleviate issues such as model overfitting in downstream analysis. Different from dimension reduction methods (e.g. principal component analysis) where features in a dataset are combined and/or transformed to derive a lower feature dimension, feature selection methods do not alter the original features in the dataset but only identify and select features that satisfy certain pre-defined criteria or optimise certain computational procedures. Some of the most popular research directions include selecting genes that can discriminate certain cell types. </p> <p>In this competition setup, participants were tasked with modelling differential expression (DE), which enables us to estimate the impact of an experimental perturbation on the expression level of every gene in the transcription (18211 genes in this dataset). The Limma model was used to determine ‘differential expression’ (DE) methods for biological data analysis in this dataset.</p> <p>The cell type proportion differs in PBMCs treated with different compounds and would therefore have different differential gene expression. For example T cells have a very distinct gene expression profile (i.e express CD3E) compared to B cells (CD79a, MSA41) and Myeloid cells (CD14, FCGR3A(CD16)). Also T cell sub-types also express differences, for example Regulatory T cells are classified by the expression of FOXP3, IL2RA and CTLA4 and Cytotoxic CD8 T cells are enriched in cytotoxic-related genes including GNLY, CCL5, NKG7, GZMH, LYZ, GZMB, and GZMK. CD4 T cells will express the CD4 gene whereas CD8 T cells will express CD8A and CD8B genes. 
Typically in single cell analysis the result is displayed as TSNE or UMAP plot showing distinct clusters with ideally one cell type and marker plots and DGE analysis can be used to confirm cell types such as in the following paper (<a href="https://pubmed.ncbi.nlm.nih.gov/34911770/" target="_blank">https://pubmed.ncbi.nlm.nih.gov/34911770/</a>)</p> <p>The following paper shows the combined protein and transcript analysis of single-cell RNA sequencing in human peripheral blood mononuclear cells (<a href="https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-022-01382-4" target="_blank">https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-022-01382-4</a>)</p> <p>Supervised ensemble classification models are popular among bioinformatics applications and have recently seen their increasing integration with deep learning models. Ensemble feature selection methods, typically, rely on either perturbation to the dataset or hyperparameters of the feature selection algorithms for creating ‘base selectors’ from which the ensemble could be derived. Generally, hybrid methods are motivated by the aim of taking advantage of the strengths of individual methods while alleviating or avoiding their weaknesses.</p> <p>Model design:</p> <p>I tried a number of approaches for the ML model and found that ensembeling models gave the best approach as described in section 2 of exploration of the problem. I used a combination of blends and ridge models. Selection of certain cell types such as B cells, T regulatory cells and NK cells did help improve the model and including CD8 T cells gave a worse score.</p> <p>I used the features "cell_type" and "sm_name" and used certain a 'cell_type' with a selection of compounds that will have similar responses on each of these cell divisions. </p> <p>RMSE was calculated for each line and then to select the best model for each compound there were two submissions for every line that has RMSE &gt;1</p> <p>I experimented with a number of blends and I would like to acknowledge the following notebooks from the following participants:<br> -Daphne Anga <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/457081" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/457081</a><br> -Mehran Kazeminia, Somayyeh Gholami <a href="https://www.kaggle.com/code/mehrankazeminia/3-op2-feature-augment-fragments-of-smiles" target="_blank">https://www.kaggle.com/code/mehrankazeminia/3-op2-feature-augment-fragments-of-smiles</a></p> <p>For including the Ridge model thanks to AMBROSM and MT for the following:<br> <a href="https://www.kaggle.com/code/masato114/scp-quickstart-another-cv-strategy/notebook" target="_blank">https://www.kaggle.com/code/masato114/scp-quickstart-another-cv-strategy/notebook</a></p> <p>Robustness:<br> When using the cross-validation it did help in improving the robustness of the model in this competition and could help with over fitting. I included the cross-validation strategy that makes four folds.</p> <p>Documentation, Code and Reproducibility:<br> Please see the Kaggle link to my approach below.<br> <a href="https://www.kaggle.com/code/insiyajafferji/code-178th-priv-pub-0-759-0-531" target="_blank">https://www.kaggle.com/code/insiyajafferji/code-178th-priv-pub-0-759-0-531</a></p>
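<p>As a footnote to the model design section above, the per-line RMSE model selection could be sketched as follows. The use of out-of-fold predictions, the per-compound aggregation and the variable names are assumptions, since the write-up only outlines the idea (row-wise RMSE per cell type/compound line, then choosing between two candidate blends).</p> <pre><code>import numpy as np
import pandas as pd

def rowwise_rmse(pred, truth):
    """RMSE of each row (one row = one cell type / compound combination)."""
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2, axis=1))

def compounds_where_b_wins(oof_a, oof_b, truth, meta):
    """Return the compounds for which candidate blend B beats blend A on validation RMSE."""
    scores = pd.DataFrame({
        "sm_name": meta["sm_name"].to_numpy(),
        "rmse_a": rowwise_rmse(oof_a, truth),
        "rmse_b": rowwise_rmse(oof_b, truth),
    })
    per_drug = scores.groupby("sm_name")[["rmse_a", "rmse_b"]].mean()
    return set(per_drug.index[per_drug["rmse_b"].lt(per_drug["rmse_a"])])
</code></pre>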
Open Problems – Single-Cell Perturbations
17th Place Solution for the Open Problems – Single-Cell Perturbations (ADEFR)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We are thrilled to share our solution with the community. Thanks to Kaggle and the Organizers for the opportunity to participate in such an interesting challenge.</p> <h2>1 Context</h2> <ul> <li><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">Competition Overview</a></li> <li><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">Competition Data</a></li> </ul> <h2>2 Overview</h2> <ul> <li>Our solution comprises an ensemble of <strong>four gradient boosting based regression models</strong>, all trained on a shared feature set. Each model is endowed with distinctive hyperparameters and tailored training settings.</li> <li>Regression models are trained individually for each of the 129 compounds in the public/private test set, with the exception of instances where multi-regression is performed using <a href="https://github.com/sb-ai-lab/Py-Boost/blob/master/tutorials/Tutorial_2_Advanced_multioutput.ipynb" target="_blank">SketchBoost</a>.</li> <li>Local training and evaluation follow a "one-vs-rest" style cross-validation approach across four cell types with known target variables: <code>['NK cells', 'T cells CD4+', 'T cells CD8+', 'T regulatory cells']</code>.</li> <li>Important features used:<ul> <li>Gene-wise PCA derived from the differential gene expression (DGE) table.</li> <li>Mean single-cell gene expression averaged over cell types and compounds, with particular emphasis on the control compound expression levels to enhance results.</li> <li>Gene-wise PCA derived from downsampled single-cell transcriptomics data.</li></ul></li> <li>We averaged feature values of nearest neighbor genes (neighbors were calculated based on the DGE data). This potentially reduced noise and improved downstream modeling.</li> <li>Moreover, we realized the impact of the number of cells used for DGE per <code>(cell type, compound)</code> on the mean target value per <code>(cell type, compound)</code>. 
Consequently, we incorporated the number of cells listed in the training data per <code>(cell type, compound)</code> to define importance weights for each training record using the formula: <code>w = 1-1/np.sqrt(1+num_cells['obs_id'][cell_type][compound])</code></li> <li>Our final submission is a blend of our solution (more details in the <code>4 Models</code> section) and a <a href="https://www.kaggle.com/code/olegpush/op2-eda-lb" target="_blank">public one</a> with 0.5-0.5 equal weights.</li> </ul> <h3>2.1 Things that did not work</h3> <ul> <li>Inclusion of multiome data based features</li> <li>Inclusion of external knowledge based on<ul> <li>SMILES (we were experimenting with <a href="https://www.rdkit.org/docs/GettingStartedInPython.html" target="_blank">RDKit</a>)</li> <li>the <a href="https://string-db.org/" target="_blank">STRING database</a></li> <li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7289078/" target="_blank">sci-Plex</a>, <a href="https://pubmed.ncbi.nlm.nih.gov/29195078/" target="_blank">LINCS data</a></li></ul></li> </ul> <h3>3 Feature engineering</h3> <h4>3.1 Input data</h4> <ul> <li>We extract features from the differential gene expression (DGE) data (<code>de_train.parquet</code>) and the single cell transcriptomics data (<code>adata_train.parquet</code>, <code>adata_obs_meta.csv</code>):</li> </ul> <pre><code> pandas pd tr = pd.read_parquet(data_path + , engine=) genes = (tr.columns[:]) feats = (tr[tr[]==][].unique()) trx = pd.read_pickle(data_path + ) tro = pd.read_csv(data_path + ) genesx = (trx.index) obs = pd.DataFrame({:trx.columns}).reset_index().set_index() obs = obs.join(tro.set_index()).sort_values() X = trx.transpose() X = X.join(obs[[,]]) ... </code></pre> <ul> <li>We exclude single-cell records based on the file <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/454107" target="_blank">provided by Kaggle</a> </li> </ul> <pre><code>excluded_ids = pd.read_csv(data_path + ) excluded_ids[] = excluded_ids = excluded_ids.pivot(index=, columns=, values=) excluded_ids = excluded_ids.fillna() excluded_ids = excluded_ids[trx.columns] = trx[[]] = .join(excluded_ids).fillna() = (-) = np.array(.astype()) trx_ = np.array(trx) trx_ = * trx_ trx_ = pd.DataFrame(trx_) trx_.columns = trx.columns trx_.index = trx.index trx = trx_ ... </code></pre> <ul> <li>Importantly, we downsample the single-cell data to get an <strong><em>equal number of cells per cell type and perturbation</em></strong> and we use this subsampled data for some of the downstream generated features</li> </ul> <pre><code>D = [] sample_num = X.groupby([,]).count().().() (sample_num, end = ) sm_name X[].unique(): (sm_name,end = ) ct X[].unique(): x = X[X[]==sm_name] x = x[x[]==ct] x = x[genesx] x = x.head(sample_num) D.append(x) D = pd.concat(D,axis=) </code></pre> <h4>3.2 Single-cell transcriptomics data based PCA</h4> <ul> <li>We generate gene representations by performing PCA on this subsampled dataset</li> </ul> <pre><code> sklearn.decomposition PCA n_components = pca = PCA(n_components=n_components) d = D[genes].transpose() features_pca_genes_sc = pca.fit_transform(d) ... 
</code></pre> <h4>3.3 Mean gene expression based on the single-cell transcriptomics data</h4> <ul> <li>Mean gene expresison is calculated per cell type and compound</li> </ul> <pre><code>X = trx.transpose() X = X.join(obs[[,,]]) X = X[X[]] X[] features_mean_expression = {} cell_type X[].unique(): (cell_type, end = ) features_mean_expression_ = X[X[]==cell_type] features_mean_expression_ = features_mean_expression_.groupby()[genesx].mean() ... </code></pre> <h4>3.4 DGE based PCA</h4> <pre><code>cell_types_train = [, , , ] D = tr[tr.apply( x: x[] cell_types_train, axis = )][genes] D = D.transpose() D = (D - D.mean())/D.std() n_components = pca = PCA(n_components=n_components) features_pca_genes = pca.fit_transform(D) </code></pre> <h4>3.5 Raw DGE values of the 17 compounds known for all six cell types</h4> <pre><code>features_dge = {} cell_type cell_types: (cell_type, end = ) d = tr[tr[]==cell_type] d = d.set_index() d = d[genes].transpose() features_ = pd.DataFrame() feat feats: feat d.columns: features_[feat] = d[feat] : features_[feat] = features_dge[cell_type] = features_ ... </code></pre> <h4>3.6 Nearest Neighbor based 'smoothing' over genes:</h4> <pre><code> sklearn.neighbors NearestNeighbors feats = (tr[tr[]==][].unique()) tr_genes = tr[tr.apply( x: x[] feats, axis = )][genes] tr_genes = tr_genes.transpose() tr_genes = (tr_genes - tr_genes.mean())/tr_genes.std() gene_nn = nbrs = NearestNeighbors(n_neighbors=gene_nn+).fit(tr_genes) nn_distances, nn_indices = nbrs.kneighbors(tr_genes) cell_type cell_types: (cell_type, end = ) d = features_dge[cell_type] d_ = np.array(d) d_nn = np.zeros(d.shape) ii ((d)): n_neighbors = nn_indices.shape[]- jj= neighbor nn_indices[ii,:]: jj+= d_nn[ii,:] += d_[neighbor,:] d_nn[ii,:] = d_nn[ii,:]/n_neighbors ... </code></pre> <h4>3.7 Smoothed DGE values:</h4> <p>We use Ridge regression to reduce the noise for DGE values</p> <pre><code> numpy np (): eps = eps * np.var(data, axis=) nmat = data + np.random.normal(size=data.shape) * eps[, ...] solvemat = np.linalg.inv(nmat.T.dot(nmat) + lambd * np.diag(np.ones(nmat.shape[]))).dot(nmat.T) coefs = solvemat.dot(data) coefs features_dge_smoothed = {} cell_type cell_types: (cell_type, end = ) d = features_dge[cell_type] d_ = np.array(d).transpose() coefs = get_smoothing_coefs(d_) d_smoothed = (d_.dot(coefs)).transpose() d_smoothed = pd.DataFrame(d_smoothed) d_smoothed.index = d.index cols = [] col feats: cols.append(+col) d_smoothed.columns = cols features_dge_smoothed[cell_type] = d_smoothed </code></pre> <h3>4 Models</h3> <h4>4.1 Blending different boosting models</h4> <p>In our modeling framework, we use various boosting libraries with different settings.</p> <ul> <li><strong>LGBM:</strong> We predict DGE values for each of the 129 target compounds with different <a href="https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html" target="_blank">LightGBM regressors</a>. This model achieved a public LB score of 0.571. </li> <li><strong>LGBM_LIN:</strong> We added the prediction of a simple Linear Regression as extra features to the previous setting. Public LB score is 0.573.</li> <li><strong>SketchBoost:</strong> We performed multi-regression for the 129 target compounds with a single <a href="[https://github.com/sb-ai-lab/Py-Boost/blob/master/tutorials/Tutorial_2_Advanced_multioutput.ipynb]" target="_blank">SketchBoost</a> model. 
We trained this model with the following parameters: <code>{"ntrees":4096, "lr"=0.01, "subsample"=0.5, "colsample"=0.5, "min_data_in_leaf"=5, "max_depth"=5}</code></li> <li><strong>DART:</strong> We also use LightGBM with DART trained for 130 iterations with the following parameters: <code>{"feature_fraction": 0.7, "bagging_fraction": 0.6, "data_sample_strategy":'goss', 'boosting_type': 'dart'}</code>. Furthermore, we average out the predictions of five DART models trained with different random seeds.</li> </ul> <p>In our solution, we blend these models with equal (0.25) weights and then combine our prediction with a <a href="https://www.kaggle.com/code/olegpush/op2-eda-lb" target="_blank">public one</a>, as described above in the Overview section.</p> <h4>4.2. Feature importance</h4> <p>Below, we show some of our feature importance measurements for the SketchBoost component.<br> Observe that almost half of all importance is attributed to DGE-based gene PCA features (3.4). Ridge regression smoothing (3.7) proved to be slightly more effective than KNN-based smoothing (3.6). The remaining importance goes to raw compound DGE features (3.5) and mean expression values (3.3). </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F611446%2Ff6b0a8cb4592370dd6a3a59d726fb68b%2Fcategory_imp.png?generation=1702845405654725&amp;alt=media" alt="Top 20 most important features for SketchBoost"></p> <p>Finally, we show the top 20 most important features for the SketchBoost component.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F611446%2F52ad6d64f8ab92a9cc0d1252b5c89399%2Ftop20.png?generation=1702845427700935&amp;alt=media" alt="Top 20 most important features for SketchBoost"></p>
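<p>As a compact recap of the weighting and blending described above, here is a minimal sketch. The weight formula, the equal 0.25 weights over the four boosting models and the final 0.5/0.5 blend with the public notebook come from the write-up; the function signatures are assumptions for illustration only.</p> <pre><code>import numpy as np

def record_weight(num_cells):
    """Importance weight of a (cell type, compound) training record, as given in the write-up."""
    return 1.0 - 1.0 / np.sqrt(1.0 + num_cells)

def final_prediction(pred_lgbm, pred_lgbm_lin, pred_sketchboost, pred_dart, pred_public):
    """Equal-weight blend of the four boosting models, then a 0.5/0.5 blend with the public one."""
    own = 0.25 * (pred_lgbm + pred_lgbm_lin + pred_sketchboost + pred_dart)
    return 0.5 * own + 0.5 * pred_public
</code></pre>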
Open Problems – Single-Cell Perturbations
1st Place Solution Writeup for Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Makio. Indeed, there is an intersection, which is interesting!</p> <p>I would like to thank the organizers and Kaggle for hosting this exciting competition. I am also grateful to the participants who shared starter notebooks, datasets, and insightful ideas. Below is a more detailed writeup of my solution, including late findings.</p> <h1>Competition Page</h1> <p><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a></p> <p><a href="https://openproblems.bio/" target="_blank">https://openproblems.bio/</a></p> <h1>1. Integration of Biological Knowledge</h1> <p>Since the input features only consisted of pairs of short keywords, that is, cell types and small molecule names, and given the large size of the target variable, I was quickly convinced that I needed to somehow enrich the input feature space. I therefore dedicated my first days of the competition to this task. First, I searched for biological word/term embeddings in the literature, and found the paper ''BioWordVec, improving biomedical word embeddings with subword information and MeSH’’ by Zhang et al [1]. The paper directed me to the code on Github where I could find pretrained embeddings for biological terms. This was motivated by the fact that 1) I would be able to find most cell types, and small molecule names in such embeddings, and 2) The embeddings would encode rich information about the general meaning of each term. With these embeddings, I created larger input features and trained a regression model. This achieved 0.767 on the public leaderboard. With a better hyperparameter search and feature engineering, I improved the score to 0.614. As this seemed to be a good direction to go, I decided to further enrich the input features. This time, I searched for the definition of each cell type and small molecule name on wikipedia. For this, I used the python library <code>wikipedia</code> <a href="https://pypi.org/project/wikipedia/" target="_blank">https://pypi.org/project/wikipedia/</a>. I then represented each cell type and small molecule name by a few sentences describing it, then I bootstrapped an embedding from the descriptions. For example, Nk cells were described by: ''Natural killer cells, also known as NK cells or large granular lymphocytes (LGL), are a type of cytotoxic lymphocyte critical to the innate immune system that belong to the rapidly expanding family of known innate lymphoid cells (ILC) and represent 5–20% of all circulating lymphocytes in humans. The role of NK cells is analogous to that of cytotoxic T cells in the vertebrate adaptive immune response. NK cells provide rapid responses to virus-infected cell and other intracellular pathogens acting at around 3 days after infection, and respond to tumor formation.’’ I also explored different numbers of sentences to describe each cell type and small molecule.<br> While this is interesting from the biological point of view, it did not improve the leaderboard score. In fact, the score became worse (0.656 vs. 
0.614 previously). This can be explained by the fact that such natural language descriptions came with some noise, and pretrained embeddings were probably not computed to deal with this. Fine-tuning the embeddings on natural language descriptions of biological terms also fell short.</p> <p>Because my initial idea about input feature enrichment did not meet my expectations, I decided to look for alternatives. Thanks to the discussions in the forum, I came across a notebook proposing to use SMILES to encode chemical structures of small molecules. I immediately decided to use ChemBERTa embeddings of SMILES encodings and observed a significant improvement in the evaluation metric MRRMSE on the validation data splits (I used a 5 fold cross-validation setting throughout the competition). With this, I developed additional data augmentation techniques, including the mean, standard deviation, and (25%, 50%, 75%) percentiles of differential expressions per cell type and small molecule in the training data.</p> <h1>2. Exploration of the Problem</h1> <p>As mentioned in the previous section, I started the competition by trying to build rich features for the input pairs (cell_type, sm_name). Ultimately, the use of ChemBERTa features of small molecules’ SMILES appeared to be an important step towards this goal. Combined with the mean, standard deviation, and (25%, 50%, 75%) percentiles per cell type and small molecule, I achieved an optimal input feature representation.</p> <p>In my experiments, I used a 5-fold cross-validation setting with a fixed seed (42). It was hard to achieve a good score on the validation sets of the 2nd and 4th folds. On these folds, the MRRMSE on the validation set was approximately 1.19, and 1.15 on average, respectively. On the 1st, 3rd and 5th folds the average scores were 0.86, 0.86, and 0.90, respectively. The scores are the average across different model architectures (LSTM, 1d-CNN, GRU) and different input feature combinations (''initial’’, ''light’’, ''heavy’’). The three different input feature representations are as follows:<br> “initial”: ChemBERTa embeddings, 1 hot encoding of cell_type/sm_name pairs, mean, std, percentiles of targets per cell_type and sm_name<br> “light”: ChemBERTa embeddings, 1 hot encoding of cell_type/sm_name pairs, mean targets per cell_type and sm_name<br> “heavy”: ChemBERTa embeddings, 1 hot encoding of cell_type/sm_name pairs, mean, 25%, 50%, 75% percentiles of targets per cell_type and sm_name<br> The figure below shows the training curves (MRRMSE) per fold averaged over all three model architectures.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3937164%2F8e7a5107e722900173e206496e4e8ee6%2FMRRMSE.png?generation=1702286999246540&amp;alt=media" alt=""></p> <p>The differences in the validation MRRMSE in the above figure motivated me to take a closer look into the validation sets, where I found different distributions of cell types. The figure below shows the predominant cell types per fold and the corresponding average (across models and different input feature representations) validation MRRMSE. 
<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3937164%2F8e816183ed88332f09fb95731adcf775%2FHardCelltypes.png?generation=1702287123989655&amp;alt=media" alt=""></p> <p>In the validation sets of the 1st, 3rd, and 5th folds, the predominant cell types (in terms of percentage) are ''T regulatory cells’’, ''B cells’’, and ''Nk cells’’, respectively. On the 2nd and 4th folds, ''T cells CD8+’’ and ''Myeloid cells’’ are the most represented cell types in the validation sets, respectively. The percentage is computed as the number of occurrences of a cell type in the validation set divided by the number of occurrences of that cell type in the training set.<br> From the bar-plots above, the cell types that are easier to predict are ''T regulatory cells’’, ''B cells’’, and ''Nk cells’’, while ''T cells CD8+’’ and ''Myeloid cells’’ are the hardest to predict. Based on this observation, an ideal training set should include more ''T cells CD8+’’ and ''Myeloid cells’’ than the rest of the cell types. In this way, trained ML models would be able to generalize to other cell types.</p> <h1>3. Model Design</h1> <h2>Model Architecture</h2> <p>I tried different model architectures, including gradient boosting models, MLP, and 2D CNN which did not work so well. I finally selected LSTM, GRU, and 1-d CNN architectures as they performed better on the validation sets. Below I show a rough implementation of the GRU model.</p> <pre><code>dims_dict = {: {: , : , : }, : {: {: , : , : }, : {: [,], : [,], : [,]} }} (nn.Module): (): (GRU, self).__init__() self.name = self.scheme = scheme self.gru = nn.GRU(dims_dict[][][self.scheme][], , num_layers=, batch_first=) self.linear = nn.Sequential( nn.Linear(dims_dict[][][self.scheme], ), nn.Dropout(), nn.ReLU(), nn.Linear(, ), nn.Dropout(), nn.ReLU()) self.head = nn.Linear(, ) self.loss1 = nn.MSELoss() self.loss2 = LogCoshLoss() self.loss3 = nn.L1Loss() self.loss4 = nn.BCELoss() (): shape1, shape2 = dims_dict[][][self.scheme] x = x.reshape(x.shape[],shape1,shape2) y : out, hn = self.gru(x) out = out.reshape(out.shape[],-) out = torch.cat([out, hn.reshape(hn.shape[], -)], dim=) out = self.head(self.linear(out)) out : out, hn = self.gru(x) out = out.reshape(out.shape[],-) out = torch.cat([out, hn.reshape(hn.shape[], -)], dim=) out = self.head(self.linear(out)) loss1 = *self.loss1(out, y) + *self.loss2(out, y) + *self.loss3(out, y) yhat = torch.sigmoid(out) yy = torch.sigmoid(y) loss2 = self.loss4(yhat, yy) *loss1 + *loss2 </code></pre> <p>In my late experiments, I realized that 1d-CNN and GRU are actually the best architectures as they achieve the best scores alone (0.733 for GRU and 0.745 for 1d-CNN on Private LB). LSTM alone achieves 0.839 on Private LB. With 0.25xLSTM + 0.65xCNN the Private LB is 0.725, and with 0.25xLSTM + 0.65xGRU the Private LB is 0.723.</p> <h2>Loss Functions and Optimizer</h2> <p>I simultaneously optimized 4 loss functions via weighted averaging: MSE, MAE, LogCosh, and BCE. The weights are 0.32, 0.24, 0.24, and 0.2, respectively.<br> This was found to enhance the predictive performance of models. The Adam optimizer with learning rate 0.001 for LSTM and CNN, and 0.0003 for GRU was used to train the models. LogCosh is defined as:</p> <pre><code> (nn.Module): (): ().__init__() (): ey_t = (y_t - y_prime_t)/ torch.mean(torch.log(torch.cosh(ey_t + ))) </code></pre> <p>LogCosh is similar to MAE with the difference being that it is a softer version that can allow smoother convergence. 
It was adapted from <a href="https://github.com/tuantle/regression-losses-pytorch" target="_blank">https://github.com/tuantle/regression-losses-pytorch</a>.</p> <p>The BCE loss is indeed special as it is often used for classification tasks. However, I argue that it sends better signals to the models and optimizers when the target values are close to zero. To demonstrate this, consider the following two pieces of code:</p> <pre><code>m1 = nn.Sigmoid() loss = nn.BCELoss() = torch.tensor([], requires_grad=).unsqueeze() target = torch.sigmoid(torch.tensor([-], requires_grad=).unsqueeze()) output1 = loss(m1(), target) (output1.item()) m2 = nn.Identity() loss = nn.MSELoss() = torch.tensor([], requires_grad=).unsqueeze() target = torch.tensor([-], requires_grad=).unsqueeze() output2 = loss(m2(), target) (output2.item()) </code></pre> <p>With this example, one can observe that the MSELoss tells the model and optimizer that "it is ok, there is no mistake here". Obviously, there is a mistake, and BCELoss can see it as it returns a high loss value (0.694 compared to 0.010 for MSELoss). My choice of the BCELoss in this competition is motivated by the fact that most target values are from a Gaussian distribution with mean 0 as can be seen in the figure below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3937164%2F7d9566cc821cac514ad9ebc4fd4e658d%2FGaussian.png?generation=1702287278847099&amp;alt=media" alt=""></p> <h2>Hyperparameters</h2> <ul> <li>250 epochs of training</li> <li>Learning rate 0.001 for LSTM and CNN, and 0.0003 for GRU</li> <li>Gradient norm clip value: [5.0, 1.0, 1.0] for the three schemes ''initial'', ''light'', and ''heavy''</li> </ul> <h1>4. Robustness</h1> <p>I conducted 4 experiments using different subsets of the training data, and monitored the private leaderboard score. I considered subsets of the initial training data (de_train) with sizes 25%, 50%, 75%, and 100%. Below 25%, we cannot cover all small molecules in the test set (id_map) even with a stratified split on sm_name, and hence the one hot encoding algorithm cannot run. With 25%, I achieved 0.946. With 50%, I achieved 0.815. With 75% of the training data, it is 0.769, and with the full data the private leaderboard is 0.719 (which is better than my winning submission because I removed padding in the ChemBERTa model). The figure below shows the robustness of my approach as a decreasing curve, i.e., improvement of the MRRMSE with increasing training data amount.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3937164%2F1eeb2e54c1c23a317402cb5c8ede967d%2Frobustness.png?generation=1702287304367920&amp;alt=media" alt=""></p> <p>My second data augmentation technique can be regarded as noise addition. I randomly replace 30% of the input features’ entries with zeros, and add the resulting input feature together with the correct target as a new training datapoint. This has proven to improve the predictive performance of my models. In this sense, my models are robust to the noise as their performance is not hindered but rather improved. The biological motivation here is that we might not need to know the complete chemical structure of a molecule (assuming the dropped input features are from sm_name) to know its impact on a cell. 
Similarly, there might be a biological disorder in a cell, and we would still expect that cell to respond to a molecule (drug) in the same way as a normal cell.</p> <p>Below is the data augmentation function</p> <pre><code> (): copy_x = x_.copy() new_x = [] new_y = y_.copy() dim = x_.shape[] k = (*dim) i (x_.shape[]): idx = random.sample((dim), k=k) copy_x[i,:,idx] = new_x.append(copy_x[i]) np.stack(new_x, axis=), new_y </code></pre> <h1>5. Documentation and Code Style</h1> <p>The documentation and software dependencies are available on Github at <a href="https://github.com/Jean-KOUAGOU/1st-place-solution-single-cell-pbs" target="_blank">https://github.com/Jean-KOUAGOU/1st-place-solution-single-cell-pbs</a></p> <h1>6. Reproducibility</h1> <p>Code is available and well documented on Github at <a href="https://github.com/Jean-KOUAGOU/1st-place-solution-single-cell-pbs" target="_blank">https://github.com/Jean-KOUAGOU/1st-place-solution-single-cell-pbs</a>. Reproduction scripts are added.</p> <h1>Sources</h1> <p><a href="https://www.nature.com/articles/s41597-019-0055-0" target="_blank">[1] BioWordVec, improving biomedical word embeddings with subword information and MeSH</a><br> <a href="https://github.com/tuantle/regression-losses-pytorch" target="_blank">Pytorch Regression Loss Functions</a><br> [ChemBERTa](<a href="https://huggingface.co/DeepChem/ChemBERTa-77M-MTR" target="_blank">https://huggingface.co/DeepChem/ChemBERTa-77M-MTR</a></p> <p>Thanks for the question. Maybe I was not precise enough.<br> In Section "Loss Functions and Optimizers", I wrote <code>The BCE loss is indeed special as it is often used for classification tasks. However, I argue that it helps models better learn the signs of the predicted values as they are mostly close to zero.</code></p> <p>I actually meant that the BCE loss applied on the sigmoid of predictions and targets returns a higher loss (i.e., sends a better signal to the optimizer) when the values involved are close to zero; indeed, most values are close to zero, see plot above. To demonstrate this, let's consider the two pieces of code below:</p> <pre><code>m1 = nn.Sigmoid() loss = nn.BCELoss() = torch.tensor([], requires_grad=).unsqueeze() target = torch.sigmoid(torch.tensor([-], requires_grad=).unsqueeze()) output1 = loss(m1(), target) (output1.item()) m2 = nn.Identity() loss = nn.MSELoss() = torch.tensor([], requires_grad=).unsqueeze() target = torch.tensor([-], requires_grad=).unsqueeze() output2 = loss(m2(), target) (output2.item()) </code></pre> <p>With this, one can observe that the MSELoss tells the model and optimizer that "it is ok, there is no mistake here"<br> Oppositely, the BCELoss considers this to be a huge mistake.</p> <p>Thank you Miraj. I appreciate </p> <p>Thank you <a href="https://www.kaggle.com/abhishekgupta18895" target="_blank">@abhishekgupta18895</a> </p> <p>Hi <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a>. Thanks. 
Here are the submissions: <a href="https://www.kaggle.com/code/jeannkouagou/submit-best" target="_blank">https://www.kaggle.com/code/jeannkouagou/submit-best</a></p> <p>Thanks a lot <a href="https://www.kaggle.com/manavtrivedi" target="_blank">@manavtrivedi</a> </p> <p>Reproduce 1st Place Private Leaderboard Notebook: <a href="https://www.kaggle.com/code/jeannkouagou/1st-place-solution" target="_blank">https://www.kaggle.com/code/jeannkouagou/1st-place-solution</a><br> Note: Run ChemBERTa Model with random LM head (600-d) and no padding to obtain 0.719 on Private LB</p> <p>Hi <a href="https://www.kaggle.com/jingfengou" target="_blank">@jingfengou</a>. Adding 1e-12 is not necessary. But it can act as a small regularizer</p> <p>Thank you <a href="https://www.kaggle.com/zechengyin" target="_blank">@zechengyin</a> </p>
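<p>Because the code snippets above lost their numeric literals in formatting, here is a minimal, hedged reconstruction of the loss setup described in the text: a LogCosh loss adapted from the linked repository, and a weighted combination of MSE, MAE, LogCosh, and BCE with the stated weights 0.32, 0.24, 0.24, and 0.2. The epsilon inside LogCosh follows the later comment about 1e-12; the exact grouping of the four terms in the original snippet is not fully recoverable, so a flat weighted sum is shown. The restored literals in the BCE-vs-MSE illustration are inferred from the quoted outputs (about 0.69 vs 0.010).</p> <pre><code>import torch
import torch.nn as nn

class LogCoshLoss(nn.Module):
    # Smooth, MAE-like loss; reconstruction based on the linked regression-losses-pytorch repo.
    def forward(self, y_pred, y_true):
        diff = y_pred - y_true
        return torch.mean(torch.log(torch.cosh(diff + 1e-12)))

class CombinedLoss(nn.Module):
    # Weighted sum of MSE, MAE, LogCosh and BCE using the weights quoted in the write-up.
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()
        self.mae = nn.L1Loss()
        self.logcosh = LogCoshLoss()
        self.bce = nn.BCELoss()

    def forward(self, y_pred, y_true):
        reg = (0.32 * self.mse(y_pred, y_true)
               + 0.24 * self.mae(y_pred, y_true)
               + 0.24 * self.logcosh(y_pred, y_true))
        # BCE on sigmoids of predictions and targets: penalizes near-zero/sign mistakes
        # much more strongly than MSE does.
        cls = self.bce(torch.sigmoid(y_pred), torch.sigmoid(y_true))
        return reg + 0.2 * cls

# BCE-vs-MSE illustration with literals restored (assumed from the quoted 0.694 / 0.010):
pred = torch.tensor([[0.0]])
target = torch.tensor([[-0.1]])
print(nn.BCELoss()(torch.sigmoid(pred), torch.sigmoid(target)).item())  # about 0.69
print(nn.MSELoss()(pred, target).item())                                # 0.010
</code></pre> <p>The 30% feature-dropout augmentation can be reconstructed in the same spirit; the 3-D input shape and the zero fill value are read off the garbled snippet, and the 0.3 fraction comes from the prose.</p> <pre><code>import random
import numpy as np

def augment_with_feature_dropout(x_, y_, drop_frac=0.3):
    # x_ is assumed to have shape (n_samples, n_channels, n_features); targets are kept unchanged.
    copy_x = x_.copy()
    new_x = []
    new_y = y_.copy()
    dim = x_.shape[2]
    k = int(drop_frac * dim)
    for i in range(x_.shape[0]):
        idx = random.sample(range(dim), k=k)
        copy_x[i, :, idx] = 0.0      # zero out a random 30% of the input entries
        new_x.append(copy_x[i])
    return np.stack(new_x, axis=0), new_y
</code></pre>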
Open Problems – Single-Cell Perturbations
207th Place Solution for the Open Problems – Single-Cell Perturbations. Late submission 0.576 private LB.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>1. Introduction</h2> <p>I want to thank Kaggle and everyone involved for hosting such an intriguing competition.</p> <p>I describe two topics:<br> 1.1. late submission<br> 1.2. solution</p> <p>It seems that one model can be strong for one compound but weak for another.</p> <p>The question remains: how do we choose the right model for each drug (compound)?</p> <p>We also have to consider that different cell types may have different best models.</p> <p>Could you please clarify why a given model is good for a given compound?<br> Why is the '24 place model LS RG' the best model for Vorinostat in Myeloid cells, with a private score improvement of +0.003?</p> <p>Initially, it is necessary to cover a larger portion of the error with the models.<br> With an error of 0.661, and without knowing which compounds have large errors, it is difficult to say whether a model is suitable for a compound.</p> <h2>2. Context</h2> <p>Business context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a></p> <h2>3. Overview of the approach ( late submission)</h2> <p>I wanted to discover which model works better for which compounds.<br> The result improves if you select one model for each drug and cell type.<br> It's possible that there's a chemical explanation for the model selection.</p> <p>We lacked a reliable way to select from all the models available,<br> so all models were added to the blend.</p> <table> <thead> <tr> <th>Blend</th> <th>Private score</th> </tr> </thead> <tbody> <tr> <td>Weighted sum of 10 models</td> <td>0.718</td> </tr> <tr> <td>Select one model for each compound</td> <td>0.657</td> </tr> <tr> <td>Multiply the compound by a constant (between 1.4 and 3.09)</td> <td>0.576</td> </tr> </tbody> </table> <h3>Late submission 0.577:</h3> <table> <thead> <tr> <th>Submission id</th> <th>Model</th> <th>Multiplier (post-processing)</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>1</td> <td>2nd place result_11 776 575 *50% + 50% train <strong>B cells, Alvocidib</strong></td> <td></td> </tr> <tr> <td>2</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>3</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>4</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>5</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td></td> </tr> <tr> <td>6</td> <td>m4_775_LS_T4 775</td> <td>1.2</td> </tr> <tr> <td>7</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>8</td> <td>24plase LS RG 834</td> <td></td> </tr> <tr> <td>9</td> <td>24plase LS NK 784</td> <td>1.2</td> </tr> <tr> <td>10</td> <td>4th_Place_Ensembling_submission 746.
564</td> <td>1.2</td> </tr> <tr> <td>11</td> <td>m18_790_submission_calibrated_max_d 790</td> <td></td> </tr> <tr> <td>12</td> <td>m18_790_submission_calibrated_max_d 790</td> <td></td> </tr> <tr> <td>13</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>14</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>15</td> <td>4th_Place_Ensembling_submission 746. 564</td> <td>1.2</td> </tr> <tr> <td>16</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>17</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>18</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>19</td> <td>1plase 729 569</td> <td>3.09</td> </tr> <tr> <td>20</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>21</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>22</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>23</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td>1.3</td> </tr> <tr> <td>24</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>25</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>26</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>27</td> <td>2nd place result_11 776 575</td> <td></td> </tr> <tr> <td>28</td> <td>JK-PIECE Submit-best 719 592</td> <td></td> </tr> <tr> <td>29</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>30</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td>1.2</td> </tr> <tr> <td>31</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>32</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>33</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>34</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>35</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>36</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>37</td> <td>2nd place result_9 750. 559</td> <td></td> </tr> <tr> <td>38</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>39</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>40</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>41</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>42</td> <td>4th_Place_Ensembling_submission 746. 564</td> <td>1.2</td> </tr> <tr> <td>43</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>44</td> <td>m18_790_submission_calibrated_max_d 790</td> <td>2</td> </tr> <tr> <td>45</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>46</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>47</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>48</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>49</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>50</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td>1.2</td> </tr> <tr> <td>51</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>52</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>53</td> <td>24plase LS RG 834</td> <td></td> </tr> <tr> <td>54</td> <td>24plase LS NK 784</td> <td>1.3</td> </tr> <tr> <td>55</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>56</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>57</td> <td>24plase LS RG 834</td> <td>1.2</td> </tr> <tr> <td>58</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>59</td> <td>2nd place result_9 750. 
559</td> <td></td> </tr> <tr> <td>60</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>61</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>62</td> <td>1plase 729 569</td> <td>1.2</td> </tr> <tr> <td>63</td> <td>2nd place result_9 750. 559</td> <td></td> </tr> <tr> <td>64</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td>1.3</td> </tr> <tr> <td>65</td> <td>24plase LS NK 784</td> <td>2.9</td> </tr> <tr> <td>66</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>67</td> <td>m4_775_LS_T4 775</td> <td>1.2</td> </tr> <tr> <td>68</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>69</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>70</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>71</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>72</td> <td>3rd place submission 732,2 548</td> <td>1.2</td> </tr> <tr> <td>73</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>74</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>75</td> <td>24plase LS RG 834</td> <td>1.3</td> </tr> <tr> <td>76</td> <td>4th_Place_Ensembling_submission 746. 564</td> <td>1.3</td> </tr> <tr> <td>77</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>78</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>79</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>80</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td>1.2</td> </tr> <tr> <td>81</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>82</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>83</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>84</td> <td>24plase LS NK 784</td> <td>1.3</td> </tr> <tr> <td>85</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>86</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>87</td> <td>m18_790_submission_calibrated_max_d 790</td> <td>1.3</td> </tr> <tr> <td>88</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>89</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>90</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>91</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>92</td> <td>3rd place submission 732,2 548</td> <td>1.2</td> </tr> <tr> <td>93</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>94</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td>1.3</td> </tr> <tr> <td>95</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>96</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>97</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>98</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>99</td> <td>2nd place result_15 758 551</td> <td>1.2</td> </tr> <tr> <td>100</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>101</td> <td>2nd place result_15 758 551</td> <td></td> </tr> <tr> <td>102</td> <td>2nd place result_9 750. 
559</td> <td></td> </tr> <tr> <td>103</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>104</td> <td>3rd place submission 732,2 548</td> <td>1.2</td> </tr> <tr> <td>105</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>106</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>107</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>108</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>109</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>110</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>111</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>112</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>113</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>114</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td></td> </tr> <tr> <td>115</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>116</td> <td>m4_775_LS_T4 775</td> <td>1.2</td> </tr> <tr> <td>117</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>118</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>119</td> <td>3rd place submission 732,2 548</td> <td>1.2</td> </tr> <tr> <td>120</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>121</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>122</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>123</td> <td>m4_775_LS_T4 775</td> <td>1.3</td> </tr> <tr> <td>124</td> <td>24plase LS RG 834</td> <td></td> </tr> <tr> <td>125</td> <td>2nd place result_9 750. 559</td> <td></td> </tr> <tr> <td>126</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>127</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>128</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>129</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>130</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>131</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>132</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>133</td> <td>m18_790_submission_calibrated_max_d 790</td> <td>1.675</td> </tr> <tr> <td>134</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>135</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>136</td> <td>24plase LS NK 784</td> <td>2.031</td> </tr> <tr> <td>137</td> <td>2nd place result_15 758 551</td> <td>1.2</td> </tr> <tr> <td>138</td> <td>2nd place result_15 758 551</td> <td></td> </tr> <tr> <td>139</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>140</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>141</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>142</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>143</td> <td>3rd place submission 732,2 548</td> <td>1.2</td> </tr> <tr> <td>144</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>145</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>146</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>147</td> <td>JK-PIECE Submit-best 719 592</td> <td>1.21</td> </tr> <tr> <td>148</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td></td> </tr> <tr> <td>149</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>150</td> <td>JK-PIECE Submit-best 719 592</td> <td></td> </tr> <tr> <td>151</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>152</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>153</td> <td>1plase 729 
569</td> <td></td> </tr> <tr> <td>154</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>155</td> <td>2nd place result_15 758 551</td> <td></td> </tr> <tr> <td>156</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>157</td> <td>m18_790_submission_calibrated_max_d 790</td> <td></td> </tr> <tr> <td>158</td> <td>JK-PIECE Submit-best 719 592</td> <td></td> </tr> <tr> <td>159</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>160</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>161</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>162</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>163</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>164</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>165</td> <td>JK-PIECE Submit-best 719 592</td> <td>1.65</td> </tr> <tr> <td>166</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>167</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>168</td> <td>2nd place result_9 750. 559</td> <td></td> </tr> <tr> <td>169</td> <td>m18_790_submission_calibrated_max_d 790</td> <td>1.2</td> </tr> <tr> <td>170</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>171</td> <td>pyBoost 760. 586</td> <td>1.2</td> </tr> <tr> <td>172</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>173</td> <td>2nd place result_9 750. 559</td> <td></td> </tr> <tr> <td>174</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>175</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>176</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>177</td> <td>m4_775_LS_T4 775</td> <td></td> </tr> <tr> <td>178</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>179</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>180</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>181</td> <td>2nd place result_15 758 551</td> <td></td> </tr> <tr> <td>182</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>183</td> <td>24plase LS RG 834</td> <td>1.5</td> </tr> <tr> <td>184</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>185</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>186</td> <td>2nd place result_9 750. 559</td> <td></td> </tr> <tr> <td>187</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>188</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>189</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>190</td> <td>4th_Place_Ensembling_submission 746. 564</td> <td></td> </tr> <tr> <td>191</td> <td>24plase LS NK 784</td> <td>1.3</td> </tr> <tr> <td>192</td> <td>2nd place result_9 750. 559</td> <td></td> </tr> <tr> <td>193</td> <td>2nd place result_15 758 551</td> <td></td> </tr> <tr> <td>194</td> <td>2nd place result_15 758 551</td> <td>1.2</td> </tr> <tr> <td>195</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>196</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>197</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>198</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>199</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>200</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>201</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>202</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>203</td> <td>4th_Place_Ensembling_submission 746. 
564</td> <td></td> </tr> <tr> <td>204</td> <td>2nd place result_15 758 551</td> <td></td> </tr> <tr> <td>205</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>206</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>207</td> <td>JK-PIECE Submit-best 719 592</td> <td>1.2</td> </tr> <tr> <td>208</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>209</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>210</td> <td>24plase LS RG 834</td> <td>1.870</td> </tr> <tr> <td>211</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>212</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>213</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>214</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>215</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>216</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>217</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>218</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>219</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>220</td> <td>n9_736_3rd_place_psevdo_submission 736</td> <td></td> </tr> <tr> <td>221</td> <td>2nd place result_15 758 551</td> <td></td> </tr> <tr> <td>222</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>223</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>224</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>225</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>226</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>227</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>228</td> <td>2nd place result_15 758 551</td> <td></td> </tr> <tr> <td>229</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>230</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>231</td> <td>2nd place result_9 750. 559</td> <td></td> </tr> <tr> <td>232</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>233</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>234</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>235</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>236</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>237</td> <td>pyBoost 760. 586</td> <td></td> </tr> <tr> <td>238</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>239</td> <td>JK-PIECE Submit-best 719 592</td> <td>1.351</td> </tr> <tr> <td>240</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>241</td> <td>m18_790_submission_calibrated_max_d 790</td> <td>1.3</td> </tr> <tr> <td>242</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>243</td> <td>24plase LS NK 784</td> <td></td> </tr> <tr> <td>244</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>245</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>246</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>247</td> <td>2nd place result_9 750. 559</td> <td></td> </tr> <tr> <td>248</td> <td>4th_Place_Ensembling_submission 746. 564</td> <td></td> </tr> <tr> <td>249</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>250</td> <td>1plase 729 569</td> <td></td> </tr> <tr> <td>251</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> <tr> <td>252</td> <td>pyBoost 760. 
586</td> <td></td> </tr> <tr> <td>253</td> <td>24plase LS RG 834</td> <td></td> </tr> <tr> <td>254</td> <td>2nd place result_15 758 551</td> <td></td> </tr> <tr> <td>255</td> <td>3rd place submission 732,2 548</td> <td></td> </tr> </tbody> </table> <h2>4. Data preprocessing ( solution)</h2> <p>The models employed are NN and pyboost.</p> <p>Cross validation is not used</p> <h2>5. Method ( late submission)</h2> <h2>Ensembling <strong>late submissions</strong></h2> <p>I evaluate for various compounds the models:</p> <table> <thead> <tr> <th>model</th> </tr> </thead> <tbody> <tr> <td>1th place</td> </tr> <tr> <td>2nd place</td> </tr> <tr> <td>3rd place</td> </tr> <tr> <td>4th place</td> </tr> <tr> <td>24th place NK</td> </tr> <tr> <td>24th place RG</td> </tr> <tr> <td>pyBoost</td> </tr> </tbody> </table> <p>Green select indicates that the model has the lowest error among all.</p> <table> <thead> <tr> <th>late submissions</th> <th>private LB</th> <th>public LB</th> </tr> </thead> <tbody> <tr> <td>1th place+2nd place+3rd place+4th place+24th place NK+24th place RG+pyBoost+50%+mult</td> <td><strong>0,597</strong></td> <td>0,548</td> </tr> </tbody> </table> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4172517%2F1dadc65287d17a1469ebbd71e0e95364%2F2023-12-13%20%2023.15.41.png?generation=1702499882391676&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4172517%2F887cf9a7156a8a35b83850cdf7168c8f%2F2023-12-13%20%2023.16.16.png?generation=1702499901312868&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4172517%2F8600e331c1e6304df98f7bcaab85a41e%2F2023-12-13%20%2023.16.44.png?generation=1702499920405715&amp;alt=media" alt=""></p> <p>Why is Vorinostat's '24 place model LS RG' the best model for Myeloid cells, with a private score improvement of +0.003?</p> <h2>6. 
Final result ( solution, late submission)</h2> <table> <thead> <tr> <th>solution</th> <th>private LB</th> <th>public LB</th> <th>solution or late submission</th> </tr> </thead> <tbody> <tr> <td>bland</td> <td><strong>0,577</strong></td> <td>0,53</td> <td>late submission</td> </tr> <tr> <td>Pyboost max_depth 10, ntrees 5000, lr 001, colsample 035</td> <td>0,763</td> <td>0,586</td> <td>solution</td> </tr> <tr> <td>Pyboost max_depth 12, ntrees 5000, lr 001, colsample 075, n_components 49, T8 b7t17</td> <td><strong>0,760</strong></td> <td>0,586</td> <td>solution</td> </tr> <tr> <td>Pyboost max_depth 12, ntrees 1000, lr 001, colsample 035, n_components 50, T8 b7t1 bcell7 mie6</td> <td>0,761</td> <td>0,578</td> <td>solution</td> </tr> <tr> <td>quantile(0.7)</td> <td>1,003</td> <td>0,797</td> <td>solution</td> </tr> </tbody> </table> <h2>What help ( solution)</h2> <p><strong>Drop T8 cell</strong><br> <strong>Repeat 17 drug - 7 time</strong> +0,001 to Pyboost</p> <h2>What help ( late submission)</h2> <p>The main score improvement is in <strong>4th Place Magic Postprocessing</strong> - <br> <strong>multiply the compound by a constant</strong> (between 1.4 and 3.09).</p> <p>Base score 3rd place submission Private LB 732 , public LB 548</p> <table> <thead> <tr> <th>Algorithm</th> <th>Private LB improve</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>Base 3rd place submission</td> <td></td> <td>0.732</td> </tr> <tr> <td>select model from 10 for compound</td> <td>+0.075</td> <td>0.657</td> </tr> <tr> <td><strong>B cells, ABT-199</strong> = 50% model <strong>B cells, ABT-199</strong> + 50% train <strong>B cells, Alvocidib</strong></td> <td>+0.015</td> <td></td> </tr> <tr> <td>Add constant (+0.1, -0.1, +3 B cells BMS-387032)</td> <td>+0.008</td> <td></td> </tr> <tr> <td>50% model + 50% compound</td> <td>+0.008</td> <td>0.626</td> </tr> <tr> <td>4th Place Magic Postprocessing of 1 compound: <strong>B cells, BMS-387032</strong> MULT 3.09</td> <td>+0.027</td> <td>0.609</td> </tr> <tr> <td>4th Place Magic Postprocessing of 55 compounds MULT 1.2, 1.3, 2</td> <td>+0.009</td> <td><strong>0.600</strong></td> </tr> </tbody> </table> <h2>+0.008 Private LB detail:</h2> <p><strong>Compound A</strong> = <strong>Compound A</strong> model * 50% + <strong>Compound B</strong> train * 50%</p> <table> <thead> <tr> <th>50% model submission Cell, Compound</th> <th>50% train Cell, Compound</th> </tr> </thead> <tbody> <tr> <td>27 B cells, CGM-097</td> <td>train row 3. T regulatory cells, Clotrimazole</td> </tr> <tr> <td>65 B cells, Lamivudine</td> <td>train row 7. T regulatory cells, Mometasone Furoate</td> </tr> <tr> <td>147 Myeloid cells, BMS-536924</td> <td>train row 9. Myeloid cells, Idelalisib</td> </tr> <tr> <td>152 Myeloid cells,CC-401</td> <td>train row 1. T cells CD4+, Clotrimazole</td> </tr> <tr> <td>155 Myeloid cells,CGM-097</td> <td>train row 0. NK cells, Clotrimazole</td> </tr> <tr> <td>157 Myeloid cells,Canertinib</td> <td>train row 9. Myeloid cells, Idelalisib</td> </tr> <tr> <td>207 Myeloid cells,PF-03814735</td> <td>train row 9. Myeloid cells, Idelalisib</td> </tr> <tr> <td>214 Myeloid cells,Prednisolone</td> <td>train row 6. T cells CD8+, Mometasone Furoate</td> </tr> <tr> <td>229 Myeloid cells,SLx-2119</td> <td>train row 2. T cells CD8+, Clotrimazole</td> </tr> <tr> <td>241 Myeloid cells,Tacalcitol</td> <td>train row 234. Myeloid cells, Porcn Inhibitor III</td> </tr> <tr> <td>247 Myeloid cells,Tosedostat</td> <td>train row 1. 
T cells CD4+, Clotrimazole</td> </tr> <tr> <td>248 Myeloid cells,Trametinib</td> <td>train row 9. Myeloid cells, Idelalisib</td> </tr> <tr> <td>251 Myeloid cells,Vanoxerine</td> <td>train row 1. T cells CD4+, Clotrimazole</td> </tr> </tbody> </table> <h2>7. Sources ( solution)</h2> <p><a href="https://www.kaggle.com/code/liudacheldieva/blend-for-single-cell-perturbations-dd4fc8" target="_blank">https://www.kaggle.com/code/liudacheldieva/blend-for-single-cell-perturbations-dd4fc8</a></p> <p><strong>0.760</strong> <a href="https://www.kaggle.com/code/jeffreylihkust/copy-of-pyboost-secret-grandmaster-s-to-1d68b4/notebook?scriptVersionId=150765833" target="_blank">https://www.kaggle.com/code/jeffreylihkust/copy-of-pyboost-secret-grandmaster-s-to-1d68b4/notebook?scriptVersionId=150765833</a><br> 0.761 <a href="https://www.kaggle.com/code/emmawilsonev/copy-of-pyboost-secret-grandmaster-s-to-1d68b4?scriptVersionId=150583811" target="_blank">https://www.kaggle.com/code/emmawilsonev/copy-of-pyboost-secret-grandmaster-s-to-1d68b4?scriptVersionId=150583811</a><br> <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/458750" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/458750</a><br> <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/460191" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/460191</a></p> <h2>8. Sources ( late submission)</h2> <p>Private score <strong>0.577</strong> <a href="https://www.kaggle.com/code/liudacheldieva/fork-of-ensembling" target="_blank">https://www.kaggle.com/code/liudacheldieva/fork-of-ensembling</a> <br> summation.csv 0.577: <a href="https://www.kaggle.com/datasets/liudacheldieva/op-0-670" target="_blank">https://www.kaggle.com/datasets/liudacheldieva/op-0-670</a> </p> <h2>9. 
Used submission.csv ( late submission)</h2> <p>import_nn = pd.read_csv('/kaggle/input/4th-place-neural-net/submission.csv', index_col='id')<br> import_lgbm = pd.read_csv('/kaggle/input/lgbm-with-gene-aggregation-4th-place-writeup/submission.csv', index_col='id')<br> import_nlp = pd.read_csv('/kaggle/input/nlp-regression/submission.csv', index_col='id')<br> import_760pyboost = pd.read_csv('/kaggle/input/copy-of-pyboost-secret-grandmaster-s-to-1d68b4/submission.csv', index_col='id')<br> n1_776_result_11 = pd.read_csv('/kaggle/input/2-pla-submit/result (11).csv', index_col='id')<br> n5_760_submission_pyboost = pd.read_csv('/kaggle/input/copy-of-pyboost-secret-grandmaster-s-to-1d68b4/submission.csv', index_col='id')<br> n6_776_submission_nn = pd.read_csv('/kaggle/input/4th-place-neural-net/submission.csv', index_col='id')<br> n7_792_submission_lgbm = pd.read_csv('/kaggle/input/lgbm-with-gene-aggregation-4th-place-writeup/submission.csv', index_col='id')<br> n8_813_submission_nlp = pd.read_csv('/kaggle/input/nlp-regression/submission.csv', index_col='id')<br> n3_750_result_9 = pd.read_csv('/kaggle/input/2-pla-submit/result (9).csv', index_col='id')<br> n4_767_result_8 = pd.read_csv('/kaggle/input/2-pla-submit/result (8).csv', index_col='id')<br> n2_758_result_15 = pd.read_csv('/kaggle/input/2-pla-submit/result (15).csv', index_col='id')<br> n9_736_3rd_place_psevdo_submission = pd.read_csv('/kaggle/input/3rd-place/3rd place psevdo submission(14).csv', index_col='id')<br> n12_746_OP_pub530_myModels_exp1 = pd.read_csv('/kaggle/input/4-pl-sub-late/OP_pub530_myModels_exp1.csv', index_col='id')<br> import_732 = pd.read_csv('/kaggle/input/3rd-place/3rd place submission.csv', index_col='id')<br> import_746 = pd.read_csv('/kaggle/input/4-pl-sub-late/4th_Place_Ensembling_submission.csv', index_col='id')<br> n13_729_1_pl = pd.read_csv('/kaggle/input/submit-best/submission.csv', index_col='id')<br> n14_719_sub_b = pd.read_csv('/kaggle/input/1st-place-solution/submission.csv', index_col='id')<br> import_670 = pd.read_csv('/kaggle/input/op-0-670/submission732_some_746_some760_750_558_776_736_719_729.csv', index_col='id')<br> import_657 = pd.read_csv('/kaggle/input/op-0-670/submission732_some_746_some760_750_558_776_736_719_729_784_834_785.csv', index_col='id')<br> import_626 = pd.read_csv('/kaggle/input/op-0-670/submission732_some_746_some760_750_558_776_736_719_729_784_834_785_add.csv', index_col='id')<br> import_600 = pd.read_csv('/kaggle/input/op-0-670/submission732_some_746_some760_750_558_776_736_719_729_784_834_785_add_mult.csv', index_col='id')<br> import_600 = pd.read_csv('/kaggle/input/op-0-670/submission732_some_746_some760_750_558_776_736_719_729_784_834_785_add_mult.csv', index_col='id')<br> m1_792_4th_submission = pd.read_csv('/kaggle/input/lgbm-with-gene-aggregation-4th-place-writeup/submission.csv', index_col='id')<br> m2_784_LS_NK = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/LS_NK.csv', index_col='id')<br> m3_834_LS_RG = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/LS_RG.csv', index_col='id')<br> m4_775_LS_T4 = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/LS_T4.csv', index_col='id')<br> m5_959_LS_T8 = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/LS_T8.csv', index_col='id')<br> m6_784_SVD_NK = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/SVD_NK.csv', index_col='id')<br> m7_834_SVD_RG = 
pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/SVD_RG.csv', index_col='id')<br> m8_775_SVD_T4 = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/SVD_T4.csv', index_col='id')<br> m9_959_SVD_T8 = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/SVD_T8.csv', index_col='id')<br> m10_773_SVD_b_nk_m_t4 = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/SVD_b_nk_m_t4.csv', index_col='id')<br> m11_786_SVD_b_t4_m_nk = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/SVD_b_t4_m_nk.csv', index_col='id')<br> m12_768_SVD_nk_t4 = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/SVD_nk_t4.csv', index_col='id')<br> m13_771_SVD_nk_t4_t8_rg = pd.read_csv('/kaggle/input/24th-using-linear-algebra-priv-pub-0-768-0-582/SVD_nk_t4_t8_rg.csv', index_col='id')<br> m14_746_op_late_submission = pd.read_csv('/kaggle/input/op-late-sub-pub530-mymodels-exp1/submission.csv', index_col='id')<br> m15_863_plase30_submission = pd.read_csv('/kaggle/input/30-op2scp-transformer-deberta-v3-small-demo/submission.csv', index_col='id')<br> m17_769_submission_Pyboost_max_depth10_ntrees5000_lr001_subsample1_colsample035_n_components50 = pd.read_csv('/kaggle/input/pyboost-u900/submission_Pyboost_max_depth10_ntrees5000_lr001_subsample1_colsample035_n_components50.csv', index_col='id')<br> m18_790_submission_calibrated_max_d = pd.read_csv('/kaggle/input/pyboost-u900/submission_calibrated_max_depth10_ntrees5000_lr001_subsample1_colsample035_n_components50.csv', index_col='id')<br> M19_785_13th_place_mlp = pd.read_csv('/kaggle/input/13th-place-solution-mlp/submission_example_MLP.csv', index_col='id')<br> import_755_10pl = pd.read_csv('/kaggle/input/10th-place/sub_submission.csv', index_col='id')<br> import_748_26pl = pd.read_csv('/kaggle/input/op-26-place/submission.csv', index_col='id')<br> import_577 = pd.read_csv('/kaggle/input/op-0-670/submi_mu_0.577.csv', index_col='id')</p> <p>Thank you for taking the time to read. Feel free to ask any questions you may have.</p>
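<p>To make the model-selection and multiplier recipe described above concrete, here is a small, hedged sketch of how the final submission could be assembled: for every test row (cell type, compound) one candidate submission is selected and, for some rows, the predicted gene vector is multiplied by a constant ("magic post-processing"). The file names, labels, and example choices are illustrative only; the real mapping is the per-row table above.</p> <pre><code>import pandas as pd

# Candidate submissions, keyed by the short labels used in the table (paths are illustrative).
candidates = {
    "3rd_place": pd.read_csv("3rd_place_submission.csv", index_col="id"),
    "m4_775_LS_T4": pd.read_csv("LS_T4.csv", index_col="id"),
    "24plase_LS_NK": pd.read_csv("LS_NK.csv", index_col="id"),
}

# Row id mapped to (chosen model, optional multiplier); a tiny excerpt-style example only.
choices = {
    0: ("3rd_place", None),
    6: ("m4_775_LS_T4", 1.2),
    9: ("24plase_LS_NK", 1.2),
}

default_model = "3rd_place"
final = candidates[default_model].copy()

for row_id, (model, mult) in choices.items():
    row = candidates[model].loc[row_id]
    if mult is not None:
        row = row * mult   # scale the whole gene vector for that test row
    final.loc[row_id] = row

final.to_csv("submission.csv")
</code></pre>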
Open Problems – Single-Cell Perturbations
20th Place Solution Writeup For Open Problems - Single-cell Perturbations Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Please find attached our detailed solution write-up, submitted for consideration for the Judge's reward. For your convenience, we have included all necessary citations within the attached PDF document.</p> <p>Additionally, the individual notebooks referenced in our write-up are accessible at the following GitHub repository: <a href="https://github.com/AntoinePassemiers/Open-Challenges-Single-Cell-Perturbations/tree/master" target="_blank">https://github.com/AntoinePassemiers/Open-Challenges-Single-Cell-Perturbations/tree/master</a></p> <p>Thank you very much, <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a>, for your deeper look.</p> <p>These different analyses are all pointing in the same direction. Thanks for sharing your point of view.</p> <p>Re "By the way those highlighted is it MLN2238 or Porcn Inhibitor ? (Color seems to be the same)": it's MLN 2238 (the color is confusing, I agree). </p> <p>Thank you </p> <p>Thank you for the comment. The code for this section is here: <a href="https://github.com/AntoinePassemiers/Open-Challenges-Single-Cell-Perturbations/blob/master/op2-de-dl.ipynb" target="_blank">https://github.com/AntoinePassemiers/Open-Challenges-Single-Cell-Perturbations/blob/master/op2-de-dl.ipynb</a>. </p> <p>np.log(0.001/18206) = -16.7, as we correct for multiple testing. </p> <p>Thank you very much for the comment, and my bad for mistyping your name! I will certainly fix it. Best </p>
Open Problems – Single-Cell Perturbations
27th Tabtransformer
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>🫡 to the shake-up survivor</p> <p>Used an ensemble of FT-Transformer models: 12 models (equivalent to TabTransformer here because there are no numerical features) trained on different seeds, 5 folds per model, all folds used.</p> <p>Config picked based on MAE (dim range 48 to 64, depth 4 to 8, 8 heads)<br> Very high dropout of 0.4<br> No augmentation, no use of the blend resources (I just checked one and I was lost)</p> <p>I had a hard time understanding how to start and how to approach the competition.<br> I did not take the competition seriously until I dropped from the top 10 of the public leaderboard because of the blend fog, but I was too lazy to do/think anything fancy to get back on track.</p> <p>1st tabular competition (I am not a fan of that!)</p> <p>Bronze to silver; next goal is gold!</p>
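<p>As a small, hedged sketch of the seed-and-fold ensembling described above (12 seeds, 5 folds per model, every fold kept and predictions simply averaged): the make_model factory is a placeholder for an FT-Transformer wrapper with fit/predict methods, and none of the names below come from the original code.</p> <pre><code>import numpy as np
from sklearn.model_selection import KFold

def train_seed_fold_ensemble(make_model, X, y, n_seeds=12, n_folds=5):
    # Train one model per (seed, fold) pair and keep all of them.
    members = []
    for seed in range(n_seeds):
        kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
        for train_idx, _ in kf.split(X):
            model = make_model(seed)   # placeholder factory, e.g. an FT-Transformer wrapper
            model.fit(X[train_idx], y[train_idx])
            members.append(model)
    return members

def ensemble_predict(members, X_test):
    # Simple average over all 12 x 5 = 60 members.
    return np.mean([m.predict(X_test) for m in members], axis=0)
</code></pre>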
Open Problems – Single-Cell Perturbations
2nd Place Solution for the Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi <a href="https://www.kaggle.com/songqizhou" target="_blank">@songqizhou</a>, I've uploaded the code to the Git repository. Please feel free to explore it and don't hesitate to reach out if you have any questions.</p> <p>Hi, I've uploaded the code to the Git repository, check it out :)</p> <p>I employed K-means to create samples from each cluster. A simple analogy is when dealing with significantly imbalanced data, it's crucial to ensure a representation of categories in both the training and validation sets. The optimal model was attained without truncated SVD. However, if you choose to apply this transformation, remember to also perform the inverse transform.</p> <p>I hope I didn’t disappoint :)</p> <p>Thanks! I tried xgboost and simple mlp but the results were the best for this model. </p> <p>I am thrilled to finally unveil my solution to this competition!</p> <h2>Context</h2> <ul> <li><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a></li> <li><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a></li> </ul> <h2>Overview of the approach</h2> <p>My optimal model emerged as a composite of four models, where three were employed as a normalized sum, and the fourth acted as an amplifier, showcasing discernible impact, potentially on labels with alternating signs:</p> <ul> <li><p>weight_df1: 0.5 (utilizing std, mean, and clustering sampling, yielding 0.551)</p></li> <li><p>weight_df2: 0.25 (excluding uncommon elements, resulting in 0.559)</p></li> <li><p>weight_df3: 0.25 (leveraging clustering sampling, achieving 0.575)</p></li> <li><p>weight_df4: 0.3 (incorporating mean, random sampling, and excluding std, attaining 0.554)<br> The resultant model is expressed as the weighted sum of these components: resulted_model = weight_df1 * df1 + weight_df2 * df2 + weight_df3 * df3 + weight_df4 * df4. All models were trained using the same transformer architecture, albeit with varying dimensional models (with the optimal dimension being 128).</p></li> </ul> <p><strong>Data preprocessing and feature selection</strong><br> To train deep learning models on categorical labels, I employed one-hot encoding transformation. To address high and low bias labels, I utilized target encoding by calculating the mean and standard deviation for each cell type and SM name. During experimentation, I compared models with and without uncommon columns and discovered that including uncommon columns significantly improved the training of encoding layers for mean and standard deviation feature vectors, resulting in noticeable performance enhancement.<br> <strong>EDA</strong></p> <p>In my feature exploration, I performed truncated Singular Value Decomposition (SVD) specifically on the target variables. 
Despite experimenting with different SVD sizes, the models generated from this approach consistently fell short of replicating the performance observed in the full targets regression. Notably, as part of this analysis, I identified certain targets with significant standard deviation values.</p> <p>To leverage this insight, I strategically enriched the feature set by introducing the standard deviation as a dedicated feature for each cell type and SM name. This involved concatenating these standard deviation values into a unified vector format (std_cell_type, std_sm_name).</p> <p>Furthermore, as I delved into the dataset, I conducted a comprehensive examination, revealing a remarkable cleanliness characterized by the absence of NaN values and duplicates. This meticulous exploration not only extended to model training but also encompassed a nuanced analysis of the target variables, contributing to the informed enhancement of the feature set for improved model performance.</p> <p><strong>Sampling strategy</strong><br> In devising an effective sampling strategy for partitioning the data into training and validation sets, I employed a sophisticated approach rooted in the clusters derived from a K-Means clustering analysis on the target values. The rationale behind this strategy lies in the premise that data points within similar clusters share inherent patterns or characteristics, thus ensuring a more nuanced and representative split.</p> <p>For each distinct cluster identified through K-Means, I executed the data partitioning using the train_test_split function from the scikit-learn library. This method facilitated a controlled allocation of data points to both the training and validation sets, ensuring that the models were exposed to a diverse yet stratified representation of the data.</p> <p>The validation percentage, a crucial parameter in this process, was meticulously chosen to strike a balance between model robustness and computational efficiency. By experimenting with validation percentages ranging from 0.1 to 0.2, I systematically assessed the impact on model performance. Ultimately, for the optimal model configuration, a validation percentage of 0.1 was deemed most effective. 
This decision was guided by a careful consideration of the trade-off between the need for a sufficiently large training set and the importance of a robust validation set to gauge model generalizability.</p> <p>In summary, this sampling strategy, grounded in cluster-based data splitting, not only acknowledges the underlying structure within the target values but also optimizes the partitioning process to enhance the model's ability to capture diverse patterns and generalize effectively to unseen data.</p> <p><strong>Modeling</strong></p> <p>This is my best architecture</p> <pre><code> (nn.Module): (): (CustomTransformer_v3, self).__init__() self.num_target_encodings = * self.num_sparse_features = num_features - self.num_target_encodings self.sparse_feature_embedding = nn.Linear(self.num_sparse_features, d_model) self.target_encoding_embedding = nn.Linear(self.num_target_encodings, d_model) self.norm = nn.LayerNorm(d_model) self.concatenation_layer = nn.Linear( * d_model, d_model) self.transformer = nn.TransformerEncoder( nn.TransformerEncoderLayer(d_model=d_model, nhead=num_heads, dropout=dropout, activation=nn.GELU(), batch_first=), num_layers=num_layers ) self.fc = nn.Linear(d_model, num_labels) (): sparse_features = x[:, :self.num_sparse_features] target_encodings = x[:, self.num_sparse_features:] sparse_features = self.sparse_feature_embedding(sparse_features) target_encodings = self.target_encoding_embedding(target_encodings) combined_features = torch.cat((sparse_features, target_encodings), dim=) combined_features = self.concatenation_layer(combined_features) combined_features = self.norm(combined_features) x = self.transformer(combined_features) x = self.norm(x) x = self.fc(x) x </code></pre> <p>I modularized my model into two distinct segments to efficiently handle the complexity of the input data:</p> <p>Sparse Feature Encoding:</p> <p>For encoding sparse features, I utilized an embedding layer tailored to handle the sparsity inherent in certain feature types.<br> Considering the nature of sparse features, I initially explored utilizing the nn.Embedding layer. However, due to computational constraints on my laptop GPU, I opted for an alternative approach using a linear layer (nn.Linear) to convert sparse features into a dense representation.<br> Target Encoding Feature Encoding (Dense Features):</p> <p>Concurrently, I addressed the encoding of target encodings, which are inherently dense features.<br> To accomplish this, I employed a separate linear layer (nn.Linear) designed specifically for target encodings, ensuring an effective transformation into a meaningful latent space.<br> Following these individual encodings, I concatenated the resulting embeddings into a unified latent vector, fostering a comprehensive representation of both sparse and dense feature information. 
To further enhance the model's capacity to capture nuanced patterns, I employed normalization (nn.LayerNorm) on the concatenated features.</p> <p>Considering the computational challenges posed by sparse feature embedding using nn.Embedding, I strategically utilized nn.Linear for efficient processing on my laptop GPU.</p> <p>In addition, I introduced a nuanced approach for encoding dense features by breaking them into four distinct encoding layers, each dedicated to capturing nuanced patterns related to mean and standard deviation for both 'sm_name' and 'cell_type'.</p> <p>Furthermore, the model architecture leveraged the state-of-the-art Lion optimizer, recognized for its efficacy in optimizing transformer-based models. The specific model implementation, as exemplified in the CustomTransformer_v3 class, showcases a transformer architecture with multiple layers, heads, and dropout for optimal learning.</p> <p>In summary, this modularized and intricately designed model not only optimizes computational efficiency but also demonstrates a keen understanding of the unique characteristics of sparse and dense features, contributing to the overall effectiveness of the learning process.</p> <p><strong>Hyperparameters</strong><br> The model training regimen was characterized by a judicious selection of hyperparameters, featuring an initial learning rate of 1e-5 and a weight decay of 1e-4, the latter serving as an additional regularization mechanism, particularly for dropout layers within the model architecture. This meticulous choice of hyperparameters aimed at striking a balance between effective training and prevention of overfitting.</p> <p>To dynamically adjust the learning rate during training, a learning rate scheduler of the ReduceLROnPlateau type was employed. The scheduler, configured with a mode of "min" and a reduction factor of 0.9999, adeptly adapted the learning rate in response to plateaus in model performance, ensuring an efficient convergence towards optimal results. The patience parameter, set to 500, determined the number of epochs with no improvement before triggering a reduction in the learning rate.</p> <p>The training process spanned 20,000 epochs, incorporating an early stopping mechanism triggered after 5,000 epochs of stagnation. This comprehensive training strategy underscored a thoughtful balance between fine-tuning the model's parameters and preventing overfitting, contributing to the robustness and efficiency of the learning process.</p> <p><strong>Loss function</strong></p> <p>While Mean Absolute Error (MAE) and Mean Squared Error (MSE) individually prove effective for gauging bias or variance, it's crucial to recognize their distinct roles in assessing model performance. To strike a balance between these metrics, I opted for Huber loss, amalgamating the strengths of both MAE and MSE. Despite utilizing the Mrrmse metric, the ultimate model selection hinged on the performance demonstrated by the optimal loss function on the validation set.</p> <h2>Preventing overfitting</h2> <p>In addressing the challenges posed by the limited size of the dataset, a critical consideration was ensuring the model's ability to effectively regularize. To achieve this, I incorporated Dropout layers within the transformer architecture and applied weight decay for L2 penalty on the model weights. 
Additionally, to mitigate the impact of exploding gradients, I implemented gradient norm clipping with a maximum norm of 1.</p> <h2>Validation Strategy</h2> <p>In pursuit of an optimal model configuration, I adopted a robust validation strategy by training with different seeds and employing k-fold cross-validation. The diverse set of performance metrics for each fold included:</p> <ul> <li>Achieving a score of 0.551 through the incorporation of standard deviation, mean, and clustering sampling.</li> <li>Attaining a score of 0.559 by excluding uncommon elements from the dataset.</li> <li>Realizing a score of 0.575 through clustering sampling.</li> <li>Securing a score of 0.554 by integrating mean, random sampling, and excluding standard deviation from the features.<br> This validation approach not only facilitated the identification of the best-performing model but also guided the fine-tuning of hyperparameters to optimize overall model performance.</li> </ul> <h2>Sources</h2> <ul> <li><a href="https://github.com/lucidrains/lion-pytorch" target="_blank">https://github.com/lucidrains/lion-pytorch</a></li> <li><a href="https://www.kaggle.com/code/ayushs9020/understanding-the-competition-open-problems" target="_blank">https://www.kaggle.com/code/ayushs9020/understanding-the-competition-open-problems</a></li> <li><a href="https://www.kaggle.com/code/alexandervc/op2-eda-baseline-s" target="_blank">https://www.kaggle.com/code/alexandervc/op2-eda-baseline-s</a></li> <li><a href="https://github.com/Eliorkalfon/single_cell_pb" target="_blank">https://github.com/Eliorkalfon/single_cell_pb</a></li> </ul> <p>Thanks! The files are in my GitHub repository under submissions folder also check seq.py for more info. </p> <p>The models were trained separately, with the same general architecture but with different features (mean,std,etc), I just ensembled the results using weighted sum, I hope it clarifies my approach. </p> <p>The difference between df1 to df3 is the sampling method.<br> weight_df3: I clustered all of the data using k-means, and I sampled from each cluster instead of random sampling.</p>
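<p>Below is a minimal sketch of the cluster-aware split described above: K-means is fit on the target matrix, and train_test_split is then applied within each cluster so that every target cluster is represented in both the training and validation sets. The 0.1 validation fraction comes from the write-up; the number of clusters and the seed are placeholders. The final blend with the quoted weights is also shown.</p> <pre><code>import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

def cluster_based_split(Y, n_clusters=10, val_frac=0.1, seed=42):
    # Cluster the rows in target space, then split each cluster separately.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Y)
    train_idx, val_idx = [], []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) == 1:                 # singleton clusters go entirely to training
            train_idx.extend(idx)
            continue
        tr, va = train_test_split(idx, test_size=val_frac, random_state=seed)
        train_idx.extend(tr)
        val_idx.extend(va)
    return np.array(train_idx), np.array(val_idx)

def blend(df1, df2, df3, df4):
    # Weighted sum quoted in the write-up; the weights intentionally sum to more than 1,
    # with the fourth model acting as an "amplifier".
    return 0.5 * df1 + 0.25 * df2 + 0.25 * df3 + 0.3 * df4
</code></pre>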
Open Problems – Single-Cell Perturbations
34th Place Solution Writeup for Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to express my gratitude to the organizers and kagglers of this competition. I learned a lot from the open notebooks and discussions.<br> Especially, I would like to express my big thanks to the following people.<br> <a href="https://www.kaggle.com/mehrankazeminia" target="_blank">@mehrankazeminia</a><br> <a href="https://www.kaggle.com/somayyehgholami" target="_blank">@somayyehgholami</a><br> <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a><br> <a href="https://www.kaggle.com/antoninadolgorukova" target="_blank">@antoninadolgorukova</a><br> <a href="https://www.kaggle.com/kishanvavdara" target="_blank">@kishanvavdara</a><br> <a href="https://www.kaggle.com/pablormier" target="_blank">@pablormier</a></p> <p><strong>34th Solution</strong></p> <p>We blended the results of three tasks to find one result. Being different approaches, blending these models was very effective.</p> <p>The three approaches<br> Task1 : test set as categorical variable<br> Task2 : test set as continuous variable ( genes as sampled)<br> Task3 : test set as continuous variable ( compounds as sample)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2646279%2Fb2df8d249afa3fa8af194ba5d14f3438%2FOC2_34th.png?generation=1702317451136015&amp;alt=media" alt="34th_solution"></p> <p>Basically, it is a simple solution that blends public notebook's score and sklearn's model. If you combine it with a better performing model (like PY-BOOST), the performance will be even better.</p> <ol> <li>integration of biological knowledge<br> In Task 3, we incorporated the decoupler. A small improvement in public scores was obtained. </li> </ol> <p>2.Exploration of the problem<br> In Task 2 and Task 3, CD8 data were excluded. <a href="https://www.kaggle.com/code/yoshifumimiya/op2-about-positive-control" target="_blank">op2-about-positive-control</a></p> <p>3.Model design<br> I used sklearn's MLP, lightgbm, and Ridge. </p> <p>4.Robustness<br> Robustness is considered to be high, because I sought a single result from three different perspectives. </p> <p>5.Documentation &amp; code style<br> In preparation</p> <p>6.Reproducibility<br> In preparation</p>
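<p>As a minimal, hedged sketch of the blending step above: the three task outputs are read as full submission tables and combined with a weighted average. The file names are illustrative, and the equal weights are placeholders (the write-up does not state the actual blend weights).</p> <pre><code>import pandas as pd

task1 = pd.read_csv("task1_categorical.csv", index_col="id")        # test set as categorical variable
task2 = pd.read_csv("task2_genes_sampled.csv", index_col="id")      # genes as samples
task3 = pd.read_csv("task3_compounds_sampled.csv", index_col="id")  # compounds as samples

# Placeholder weights; in practice they would be tuned on validation or the public LB.
w1, w2, w3 = 1/3, 1/3, 1/3
blend = w1 * task1 + w2 * task2 + w3 * task3
blend.to_csv("submission.csv")
</code></pre>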
Open Problems – Single-Cell Perturbations
3rd Place Solution for the Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The analysis and github code are added. </p> <h1>1. Integration of Biological Knowledge</h1> <p>Generally, I treated this problem as a regression with 2 feature columns and 18211 targets, but I had tried to utilize SMILES sequences in neural network with LSTM unit. Both the sm_name and SMILES columns can be encoded exactly in the same way, so the sm_name column can be replaced with the SMILES column. Moreover, SMILES column is more informative, because every single character of the sequence can be encoded (not only a single value like in sm_name) and the order of these characters provides extra information. Theoretically, In the worst case, the performance of the neural network using SMILES column instead of sn_name should be not worse than using original columns. Unfortunate, I have reached an unsatisfactory public score with this neural network and I stopped further research. </p> <h1>2. Exploration of the problem</h1> <p>For simplicity, the analysis is done for the single best model without pseudolabeling. Since mrrmse metric is sensitive to outliers, a distribution of ranges for columns should be checked.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6882624%2F523b491c0054eae38b031b945b7eb168%2FScreenshot%20from%202023-12-05%2009-36-00.png?generation=1701849069283853&amp;alt=media" alt=""><br> The majority of columns have a range of values in an interval (4, 50). Naturally, the columns with high range lead to high mae or mse, so in order to determine genes which are easy and hard to predict, the standardized (divided by std) colwise mse is applied. The table below shows the hardest and easiest genes for prediction.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6882624%2Fd8bf3ee03e6d6154593170a5db43cab3%2FScreenshot%20from%202023-12-05%2009-45-05.png?generation=1701849594906441&amp;alt=media" alt=""><br> The scheme of the applied cross validation. Every fold contains one cell type chosen from NK cells, T cells CD4+, T cells CD8+, T regulatory cells and only sm_names being in public and private test was involved. The lowest value of this validation split corresponds to the lowest value on public and private dataset. In my opinion this is a reliable split and the perfect split depends on the model architecture, so every model can have its perfect validation split. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6882624%2Fb4a580c8349343ab853a40cf6ee8b43e%2FScreenshot%20from%202023-12-05%2009-49-01.png?generation=1701849988973830&amp;alt=media" alt=""><br> The easiness of learning per cell types is shown below. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6882624%2F64d7f90af54659c0e8477d4faa3a3c51%2FScreenshot%20from%202023-12-05%2009-56-05.png?generation=1701850210915961&amp;alt=media" alt=""><br> The values of loss are different, because they are calculated in truncated space. T cells CD4+ and NK cells are learning well. 
T regulatory cells are harder for training. The T cells CD8 are weakly improving on validation dataset. This split uses about 25% of the dataset for validation, so I believe more reliable splits exist. My new proposition of split is: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6882624%2F718b580e65a5654a92880f8a6bcf6d1b%2FScreenshot%20from%202023-12-05%2010-08-16.png?generation=1701850452329548&amp;alt=media" alt=""><br> This split is similar to the previous one, but for each validation fold, randomly selected examples are moved to training. Let's check an impact of the new split for training. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6882624%2F8a68fc6a4b3025b48d37dabc67a50c05%2FScreenshot%20from%202023-12-05%2010-12-19.png?generation=1701850788086187&amp;alt=media" alt=""><br> For each cell type, more data improved performance on this same, constant and small validation set. Metric is calculated on full dimension this time. The first value at 480 training examples corresponds to the previous split. Theoretically, decreasing the size of validation set to one example can lead to the best performance. Since it will be very similar to training on whole dataset, what is done finally.</p> <h1>3. Model design</h1> <h2>Solution</h2> <p>The prediction system is two staged, so I publish two versions of the notebook. <br> The first stage predicts pseudolabels. To be honest, if I stopped on this version, I would not be the third. <br> The predicted pseudolabels on all test data (255 rows) are added to training in the second stage.</p> <h3>Stage 1 preparing pseudolabels</h3> <p>The main part of this system is a neural network. Every neural network and its environment was optimized by optuna. Hyperparameters that have been optimized:<br> a dropout value, a number of neurons in particular layers, an output dimension of an embedding layer, a number of epochs, a learning rate, a batch size, a number of dimension of truncated singular value decomposition.<br> The optimization was done on custom 4-folds cross validation. In order to avoid overfitting to cross validation by optuna I applied 2 repeats for every fold and took an average. Generally, the more, the better. The optuna's criterion was MRRMSE. <br> Finally, 7 models were ensembled. Optuna was applied again to determine best weights of linear combination. The prediction of test set is the pseudolabels now and will be used in second stage.</p> <h3>Stage 2 retraining with pseudolabels</h3> <p>The pseudolabels (255 rows) were added to the training dataset. I applied 20 models with optimized parameters in different experiments for a model diversity.<br> Optuna selected optimal weights for the linear combination of the prediction again.<br> Models had high variance, so every model was trained 10 times on all dataset and the median of prediction is taken as a final prediction. The prediction was additionally clipped to colwise min and max. </p> <p><strong>History of improvements:</strong></p> <ol> <li>a replacing onehot encoding with an embedding layer</li> <li>a replacing MAE loss with MRRMSE loss</li> <li>an ensembing of models with mean</li> <li>a dimension reduction with truncated singular value decomposition</li> <li>an ensembling of models with weighted mean</li> <li>using pseudolabeling</li> <li>using pseudolabeling and ensembling of 20 models and weighted mean. 
</li> </ol> <p><strong>What did not work for me</strong>:</p> <ul> <li>a label normalization, standardization</li> <li>a chained regression</li> <li>a denoising dataset</li> <li>a removal of outliers</li> <li>an adding noise to labels</li> <li>a training on selected easy / hard to predict columns</li> <li>a huber loss.</li> </ul> <h1>4. Robustness</h1> <p>I have tested 3 types of the robustness: increasing dataset size, adding noise to labels and inputs. Adding the noise to inputs failed totally, it is logical for me, because the nominal and categorical values are changed and are behaving like the continues values, what is not beneficial.<br> Let's see the performance on 40%, 50%,…, 100% of training dataset. It started from 40%, because singular values decomposition is limited by number of examples.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6882624%2Fb684812f2feb70c2b9f28f2df5714cd7%2FScreenshot%20from%202023-12-05%2013-11-10.png?generation=1701852018611279&amp;alt=media" alt=""><br> The experiment was 5 times repeated, so the interval of uncertainty is visible. More data improves the performance significantly. <br> The last test of robustness is adding noise to the labels. The random gaussian (a distribution with 0 mean and scale * std) noise was added to the labels.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6882624%2F8625a5cb1af874f4986277c98e76ab8e%2FScreenshot%20from%202023-12-05%2011-45-22.png?generation=1701852528532795&amp;alt=media" alt=""><br> Adding some noise (0.01 * std) can even improve the model's performance. Generally, the model is robust to the noise. </p> <h1>5. Documentation &amp; code style</h1> <p>The code on GitHub is documented. </p> <h1>6. Reproducibility</h1> <p>GitHub code:<br> <a href="https://github.com/okon2000/single_cell_perturbations" target="_blank">repo</a><br> Notebook. The version 264 is first a stage and 266 the second one:<br> <a href="https://www.kaggle.com/code/jankowalski2000/3rd-place-solution" target="_blank">notebook</a>.</p> <p>The code runs in approximately 1 hour using CPU Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz and 8GB RAM.</p> <p>I am not sure, I have to provide a lot of documents and refactor my code to fulfill all requirements for a leaderboard award. </p>
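<p>As a small illustration of the robustness tricks from Stage 2 (training every model several times, taking the median prediction, and clipping each gene to the range seen in training), a sketch could look as follows; <code>train_and_predict</code> is a stand-in for one full training run and is not code from the author's repository.</p> <pre><code># Median over repeated trainings, then column-wise clipping to the training range.
import numpy as np

def robust_prediction(train_and_predict, y_train, n_runs=10):
    preds = np.stack([train_and_predict(seed=s) for s in range(n_runs)])  # (runs, rows, genes)
    pred = np.median(preds, axis=0)       # median tames the high run-to-run variance
    col_min = y_train.min(axis=0)         # per-gene minimum seen in training
    col_max = y_train.max(axis=0)         # per-gene maximum seen in training
    return np.clip(pred, col_min, col_max)
</code></pre>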
Open Problems – Single-Cell Perturbations
43rd Place Solution for the Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Context</h1> <p><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a><br> <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a></p> <h1>Overview of the approach</h1> <p>Our optimal model is blend of two models:</p> <ol> <li>weight_1: 0.5 Pyboost model</li> <li>weight_2: 0.5 NN model</li> </ol> <h1>Data preprocessing and feature selection</h1> <h1>Features</h1> <h3>1. Pyboost features</h3> <p>Our main improvement from public pyboost implementation was that we explored and added as a feature categories for each drug:</p> <blockquote> <p>drug_cls = {<br> "Antifungal": ["Clotrimazole", "Ketoconazole"], <br> "Corticosteroid": ["Mometasone Furoate"],<br> "Kinase Inhibitors": ["Idelalisib", "Vandetanib", "Bosutinib", "Ceritinib", "Crizotinib", <br> "Cabozantinib", "Dasatinib", "Selumetinib", "Trametinib", "Lapatinib", <br> "Canertinib", "Palbociclib", "Dabrafenib", "Ricolinostat","Tamatinib", "Tivozanib", <br> "Quizartinib","Sunitinib","Foretinib","Imatinib","R428","BMS-387032","CGP 60474",<br> "TIE2 Kinase Inhibitor","Masitinib","Saracatinib","CC-401","RN-486","GO-6976",<br> "HMN-214","BMS-777607","Tivantinib","CEP-37440","TPCA-1","AZ628","PF-03814735",<br> "PRT-062607","AT 7867", "BI-D1870", "Mubritinib", "GLPG0634","Ruxolitinib", "ABT-199 (GDC-0199)",<br> "Nilotinib"],<br> "Antiviral": ["Lamivudine", "AMD-070 (hydrochloride)", "BMS-265246"],<br> "Sunscreen agent" : ["Oxybenzone"],<br> "Antineoplastic": ["Vorinostat", "Flutamide", "Ixabepilone", "Topotecan", "CEP-18770 (Delanzomib)",<br> "Resminostat", "Decitabine", "MGCD-265", "GSK-1070916","BAY 61-3606","Navitoclax", "Porcn Inhibitor III","GW843682X","Prednisolone","Tosedostat",<br> "Scriptaid", "AZD-8330", "Belinostat","BMS-536924","Pomalidomide","Methotrexate","HYDROXYUREA",<br> "PD-0325901","SB525334","AVL-292","AZD4547","OSI-930","AZD3514","MLN 2238","Dovitinib","K-02288",<br> "Midostaurin","I-BET151","FK 866","Tipifarnib","BX 912","SCH-58261","BAY 87-2243",<br> "YK 4-279","Ganetespib (STA-9090)","Oprozomib (ONX 0912)","AT13387","Tipifarnib","Flutamide","Perhexiline","Sgc-cbp30","IMD-0354",<br> "IKK Inhibitor VII", "UNII-BXU45ZH6LI","ABT737","Dactolisib", "CGM-097", "TGX 221","Azacitidine","Defactinib",<br> "PF-04691502", "5-(9-Isopropyl-8-methyl-2-morpholino-9H-purin-6-yl)pyrimidin-2-amine"],<br> "Selective Estrogen Receptor Modulator (SERM)": ["Raloxifene"],<br> "Antidiabetic (DPP-4 Inhibitor)": ["Linagliptin","Alogliptin"],<br> "Antidepressant": ["Buspirone", "Clomipramine", "Protriptyline", "Nefazodone","RG7090"], <br> "Antibiotic": ["Isoniazid","Doxorubicin"],<br> "Antipsychotic": ["Penfluridol"],<br> "Antiarrhythmic": ["Amiodarone","Proscillaridin A"],<br> "Alkaloid": ["Colchicine"],<br> "Antiviral (HIV)": ["Tenofovir","Efavirenz"],<br> "Allergy": ["Desloratadine","Chlorpheniramine","Clemastine","GSK256066","SLx-2119", "TR-14035", "Tacrolimus"],<br> 
"Anticoagulant": ["Rivaroxaban"],<br> "Alcohol deterrent":["Disulfiram"],<br> "Cocaine addiction":["Vanoxerine"],<br> "Erectile dysfunction":["Vardenafil"],<br> "Calcium channel blocker":["TL_HRAS26"],<br> "Anti-endotoxemic":["CGP 60474"],<br> "Acne treatment":["O-Demethylated Adapalene"],<br> "Stroke":["Pitavastatin Calcium","Atorvastatin"],<br> "Stem cell work":["CHIR-99021"],<br> "Hypertension":["Riociguat"],<br> "Heart failure":["Proscillaridin A;Proscillaridin-A", "Colforsin"],<br> "Regenerative":["LDN 193189"],<br> "Psoriasis":["Tacalcitol"],<br> "Unknown_1": ["STK219801"],<br> "Unknown_2": ["IN1451"]</p> </blockquote> <p>Another features were 'cell_type' and 'sm_name' encoded with QuantileEncoder(quantile =.8)<br> TruncatedSVD(n_components=50) was applied to target. </p> <h3>2. NN features</h3> <p>For neural network we used just SMILES and cell_type colums </p> <h1>Modeling</h1> <h3>1. Pyboost</h3> <blockquote> <p>params = {<br> n_components = 50, <br> ntrees = 5000,<br> lr = 0.01, <br> max_depth = 10 , <br> colsample = 0.35,<br> subsample = 1}</p> </blockquote> <h3>2. NN Model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F862202%2F2fe3b3b85c7bf180061171cab7ede651%2F__results___74_1.png?generation=1702165001166005&amp;alt=media" alt=""></p> <h1>Loss function</h1> <p>We used Mean Absolute Error (MAE) for train and Mrrmse metric for validation</p> <h1>Validation Strategy</h1> <p>Our team used k-fold cross-validation strategy. Our goal was to get stable score on train and do not fit our scores to public leaderboard. </p> <p>We had a few submissions with very high public score but these submissions were blends of many models with different coefficient. We were not sure that their scores would be stable on private part. As a result, we chose a 0.5-0.5 blends from two our own models which didn't show great result on public but score very stable on cross validation.</p> <p>Sources<br> <a href="https://www.kaggle.com/code/alexandervc/pyboost-secret-grandmaster-s-tool" target="_blank">https://www.kaggle.com/code/alexandervc/pyboost-secret-grandmaster-s-tool</a><br> <a href="https://www.kaggle.com/code/asimandia/kfold-simple-nn-refactored" target="_blank">https://www.kaggle.com/code/asimandia/kfold-simple-nn-refactored</a></p>
Open Problems – Single-Cell Perturbations
45th Place Solution for the Open Problems – Single-Cell Perturbations Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Competition Pages</h1> <p><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a></p> <p><a href="https://openproblems.bio/" target="_blank">https://openproblems.bio/</a></p> <h1>Overview of the Approach</h1> <p>We tried CatBoost in multi-target mode, but it performed worse than a constant submission. <br> <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=&amp;cad=rja&amp;uact=8&amp;ved=2ahUKEwiPs9e4jvmCAxUSDRAIHRIaC48QFnoECAoQAQ&amp;url=https%3A%2F%2Fgithub.com%2Fsberbank-ai-lab%2FPy-Boost&amp;usg=AOvVaw35jZARIhArmT7tiDnuD6BO&amp;opi=89978449" target="_blank">PyBoost</a> is a great framework for multi-target regression.</p> <h1>Details of the submission</h1> <h1>Our blend:</h1> <p>public kernels [: 128] 0.574 * 0.2 + 0.577 * 0.8<br> [128: ] 0.584 PyBoost + 0.574 public</p> <h1>Not working:</h1> <p>Ridge regression<br> ChemBert<br> CatBoost</p>
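<p>Going back to the blend above, here is a rough sketch of how such an index-dependent blend could be assembled, assuming the ranges refer to submission rows; the file names are illustrative, and since no explicit weights are given for the second part, equal weights are used there.</p> <pre><code># Row-dependent blend of three aligned submission files (names are placeholders).
import pandas as pd

sub_574 = pd.read_csv("public_0574.csv", index_col="id")
sub_577 = pd.read_csv("public_0577.csv", index_col="id")
sub_584 = pd.read_csv("pyboost_0584.csv", index_col="id")

blend = sub_577.copy()
blend.iloc[:128] = 0.2 * sub_574.iloc[:128] + 0.8 * sub_577.iloc[:128]
blend.iloc[128:] = 0.5 * sub_584.iloc[128:] + 0.5 * sub_574.iloc[128:]   # weights assumed
blend.to_csv("blend_submission.csv")
</code></pre>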
Open Problems – Single-Cell Perturbations
4th Place Solution for the Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>1. Context</h1> <ul> <li><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a></li> <li><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a></li> </ul> <h1>2. Overview of the approach</h1> <p>Our approach is principally based on trial and error. Our optimal model is derived from several different models, of which the first model, RAPIDS SVR, was used to provide pseudo-labels but did not participate in the final ensembling:</p> <ol> <li>RAPIDS SVR - Used for training based on ChemEMB features to obtain pseudo-label results.</li> <li>Pyboost - Based on RAPIDS SVR pseudo-label features and other features. Public LB-0.581, Private LB-0.768.</li> <li>nn - Only utilized TruncatedSVD and Leaveoneout Encoder. Public LB-0.577, Private LB-0.731. (Yes, only nn can achieve 0.731 in Private LB)</li> <li>Open-source solution 0720 and open-source solution 0531 (late submission indicates that 0531 is not necessary) .<br> The final solution's result is Public LB-0.566, Private LB-0.733~0.735. <br> <code>((0.3*Pyboost + 0.7*nn)*0.9 + open-source solution 0720*0.1)*0.95 + open-source solution 0531*0.05</code><br> There is a score zone because our code has been saved in different versions in a messy manner, and there are some subtle parameter differences when reviewing our proposal, which makes it difficult to fully reproduce the final submission plan. This is our first time participating in the Kaggle competition, and in the future, we will be more meticulous in doing a good job of code version control.</li> </ol> <p>Different models employed different feature engineering strategies, which we will detail below.</p> <h1>3. Modeling</h1> <h2>3.1 Cross-validation</h2> <p>We believe that the method of cross-validation is a very important point for scoring improvement in this competition. A reasonable validation set allows us to trust our local CV instead of an overfitted LB. <br> Initially, we used random k-fold cross-validation. This method's CV maintained a consistent trend with the LB in the early stages. However, as we further incorporated nn models and added more training strategies, we noticed a discrepancy between the CV and LB. Consequently, we experimented with public two other forms of cross-validation which come from <a href="https://www.kaggle.com/code/ambrosm/scp-quickstart?scriptVersionId=144293041&amp;cellId=8" target="_blank">AmbrosM</a> and <a href="https://www.kaggle.com/code/masato114/scp-quickstart-another-cv-strategy/notebook" target="_blank">MT</a>. Thanks for their sharing. Ultimately, we chose MT's method of cross-validation, which showed excellent consistency between CV and LB in our models.</p> <h2>3.2 RAPIDS SVR</h2> <p>The first model we experimented with was RAPIDS SVR. It is frequently used in Kaggle competitions and has been part of multiple winning solutions. 
It is renowned for its fast training capabilities.</p> <h3>3.2.1 Feature Engineering</h3> <p>The feature engineering for the RAPIDS SVR model primarily included generating embedding features from ChemBERTa-10M-MTR for SMILES and statistical features obtained by aggregating target data. During the competition, we noticed in the discussion forums that many mentioned the embedding features generated by ChemBERTa-10M-MTR did not yield positive results. In our trials, we found that these features had a negative impact on models like LGBM and CatBoost, but they improved performance in RAPIDS SVR. Therefore, we decided to use RAPIDS SVR to generate pseudo-labels as a basis for subsequent model training.</p> <p>Here are the features we used in our RAPIDS SVR model:</p> <ol> <li>One-hot encoding features of cell_type and sm_name.</li> <li>Embedding features generated by ChemBERTa-10M-MTR. We used a mean pooling method comes from <a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x?scriptVersionId=105321484&amp;cellId=12" target="_blank">Chris</a> and set the maximum length for SMILES based on our analysis.</li> </ol> <pre><code>def mean_pooling(model_output, attention_mask): token_embeddings = model_output.last_hidden_state.detach().cpu() input_mask_expanded = ( attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() ) return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp( input_mask_expanded.sum(1), =1e-9 ) class EmbedDataset(torch.utils.data.Dataset): def __init__(self,df): self.df = df.reset_index(=) def __len__(self): return len(self.df) def __getitem__(self,idx): text = self.df.loc[idx,] tokens = tokenizer( text, None, =, =, =, =150, =) tokens = {k:v.squeeze(0) k,v tokens.items()} return tokens def get_embeddings(de_train, =, =150, =32, =): global tokenizer # Extract unique texts unique_texts = de_train[].unique() # Create a dataset unique texts ds_unique = EmbedDataset(pd.DataFrame(unique_texts, columns=[])) embed_dataloader_unique = torch.utils.data.DataLoader(ds_unique, =BATCH_SIZE, =) = model = AutoModel.from_pretrained( MODEL_NM ) tokenizer = AutoTokenizer.from_pretrained( MODEL_NM ) model = model.(DEVICE) model.eval() unique_emb = [] batch tqdm(embed_dataloader_unique,=len(embed_dataloader_unique)): input_ids = batch[].(DEVICE) attention_mask = batch[].(DEVICE) with torch.no_grad(): model_output = model(=input_ids,attention_mask=attention_mask) sentence_embeddings = mean_pooling(model_output, attention_mask.detach().cpu()) # Normalize the embeddings sentence_embeddings = F.normalize(sentence_embeddings, =2, =1) sentence_embeddings = sentence_embeddings.squeeze(0).detach().cpu().numpy() unique_emb.extend(sentence_embeddings) unique_emb = np.array(unique_emb) verbose: (,unique_emb.shape) text_to_embedding = {text: emb text, emb zip(unique_texts, unique_emb)} train_emb = np.array([text_to_embedding[text] text de_train[]]) test_emb = np.array([text_to_embedding[text] text id_map[]]) return train_emb, test_emb MODEL_NM = all_train_text_feats, te_text_feats = get_embeddings(df_de_train, MODEL_NM) </code></pre> <p>3.Statistical features for cell_type and sm_name by aggregated from the target column. We retained features like ['mean', 'min', 'max', 'median', 'first', 'quantile_0.4'].</p> <p>Interestingly, when experimenting with different features, we did not train on all targets but selected only the first target, A1BG, for training and validation. 
This approach of feature selection allowed us to screen all features in just a few minutes, recording the CV scores for different features.</p> <h2>3.3 Pyboost</h2> <p>Our Pyboost solution is based on an open-source approach from <a href="https://www.kaggle.com/code/alexandervc/pyboost-secret-grandmaster-s-tool" target="_blank">Alexander Chervov</a>. Thanks for your sharing.</p> <h3>3.3.1 Feature Engineering</h3> <p>In our Pyboost model, we used four types of features:</p> <ol> <li>Pseudo-label features from RAPIDS SVR.</li> <li>Leaveoneout encoding features for cell_type and sm_name. We found in the Pyboost model that leaveoneout encoding was more effective than onehot encoding.</li> <li>Embedding features generated by ChemBERTa-10M-MTR for SMILES.</li> <li>Aggregated features for cell_type and sm_name against the target column, where we retained ['mean', 'max'].<br> We reduced the dimensionality of 18,211 targets to 45 using TruncatedSVD. Similarly, we also reduced the dimensions of the above features to the same 45 dimensions. This dimensional reduction provided a certain improvement in our CV scores.</li> </ol> <h3>3.3.2 modeling</h3> <pre><code>model = GradientBoosting( ,=1000 ,=0.01 ,=10 ,=1 ,=0.2 ,=1 ,=0 ,=100) </code></pre> <h2>3.4 nn</h2> <h3>3.4.1 Feature Engineering</h3> <p>In our nn model, we only used leaveoneout encoding features for cell_type and sm_name, as other features caused a decrease in CV scores. We also performed dimensionality reduction on the target data based on TruncatedSVD.</p> <h3>3.4.2 modeling</h3> <p>Our nn model consisted of a 3-layer 1D convolutional layer + 1 fully connected layer + a non-pretrained ResNet18 network. This structure allowed us to achieve a LB score of around 0.57 based solely on leaveoneout encoding features for cell_type and sm_name, which seems quite tricky. <br> Our initial nn model aimed to convert SMILES expressions into image data using the rdkit.Chem library and then input these images along with other features into the network for training, thus employing the ResNet network for image processing. However, we found that introducing SMILES image data did not improve training results. Despite this, we retained parts of the network structure and ended up with the aforementioned network.</p> <pre><code>class ResNetRegression(nn.Module): def __init__(self, input_size, output_size, =, =32, =16, =0.2): super(ResNetRegression, self).__init__() self.reshape_size = reshape_size self.conv1d_layers = nn.Sequential( nn.Conv1d(1, num_channels, =3, =1, =1), nn.ReLU(), nn.Dropout(dropout_rate), nn.Conv1d(num_channels, num_channels, =3, =1, =1), nn.ReLU(), nn.Dropout(dropout_rate), nn.Conv1d(num_channels, num_channels, =3, =1, =1), nn.ReLU(), nn.Dropout(dropout_rate), ) self.fc_layers = nn.Sequential( nn.Linear(num_channels * 4 * input_size, self.reshape_size * self.reshape_size), ) self.resnet = models.resnet18(=pretrained) self.resnet.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), =) self.resnet.fc = nn.Linear(self.resnet.fc.in_features, output_size) def forward(self, x): x = x.unsqueeze(1) # Reshape x Conv1d x = self.conv1d_layers(x) x = x.view(x.size(0), -1) # Flatten the linear layer x = self.fc_layers(x) x = x.view(x.size(0), 1, self.reshape_size, self.reshape_size) x = self.resnet(x) return x </code></pre> <h2>3.5 The 0720 Open-source Solution</h2> <p>In our final model, we assigned a weight of 0.05 to the 0720 open-source solution for ensembling. 
This is the result of a great notebook that uses the "Autoencoder" method. This integration resulted in an increase of 0.001 in our scores on both the Public LB and Private LB. We are grateful for the contribution shared by <a href="https://www.kaggle.com/vendekagonlabs" target="_blank">vendekagonlabs</a> and <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/discussion/452515" target="_blank">discussion</a>.</p> <h1>4 Parameter Tuning</h1> <p>We are not enthusiasts of parameter tuning, as, in our experience, tuning does not bring qualitative improvements to the model. In this competition, we only experimented with tuning towards the end, using Optuna to try out different settings for 'n_components' and 'n_iter' in TruncatedSVD, as well as 'sigma' in LeaveOneOutEncoder. Ultimately, we selected a few sets of parameters that yielded the best CV scores.</p> <h1>5 Things That Did Not Work</h1> <ol> <li>Normalization of target data.</li> <li>Converting SMILES into image features.</li> <li>Tree models such as LGBM and Catboost yielded average training results.</li> </ol> <p>Code<br> <a href="https://github.com/paralyzed2023/4st-place-solution-single-cell-pbs.git" target="_blank">https://github.com/paralyzed2023/4st-place-solution-single-cell-pbs.git</a></p>
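<p>For clarity, the final blend formula quoted in the overview can be written out directly; the file names below are placeholders for the aligned prediction tables of the individual models.</p> <pre><code>import pandas as pd

pyboost = pd.read_csv("pyboost_pred.csv", index_col="id")
nn      = pd.read_csv("nn_pred.csv", index_col="id")
os_0720 = pd.read_csv("open_source_0720.csv", index_col="id")
os_0531 = pd.read_csv("open_source_0531.csv", index_col="id")

# ((0.3*Pyboost + 0.7*nn)*0.9 + open-source 0720*0.1)*0.95 + open-source 0531*0.05
final = ((0.3 * pyboost + 0.7 * nn) * 0.9 + os_0720 * 0.1) * 0.95 + os_0531 * 0.05
final.to_csv("submission.csv")
</code></pre>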
Open Problems – Single-Cell Perturbations
58th solution: Single-Cell Perturbations (Blending only)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Disclaimer:</h1> <p>I don't mean any disrespect to anyone; I know you have all put hard work into this competition, and most of the competitors are genuinely more skilful than me.</p> <h1>How I ended up with ensemble only:</h1> <p>I joined this competition at an early stage, but I had forgotten about it until the day before the deadline.<br> I got the reminder notification but had nothing to submit. So I ensembled the best versions of the top-scoring public notebooks, which are:</p> <ul> <li><a href="https://www.kaggle.com/code/liudacheldieva/submit-only?scriptVersionId=152457701" target="_blank">https://www.kaggle.com/code/liudacheldieva/submit-only?scriptVersionId=152457701</a></li> <li><a href="https://www.kaggle.com/code/nikolenkosergei/pyboost-secret-grandmaster-s-tool-106cac?scriptVersionId=152757460" target="_blank">https://www.kaggle.com/code/nikolenkosergei/pyboost-secret-grandmaster-s-tool-106cac?scriptVersionId=152757460</a></li> </ul> <p>Normally, when I do this in other competitions, I get pushed out of the medal tier.<br> It seems luck was on my side this time: I unexpectedly took 58th place on the private leaderboard.</p> <h1>Link to the solution</h1> <p><a href="https://www.kaggle.com/code/batprem/58th-private-lb-ensemble-only?scriptVersionId=153068055" target="_blank">https://www.kaggle.com/code/batprem/58th-private-lb-ensemble-only?scriptVersionId=153068055</a></p>
Open Problems – Single-Cell Perturbations
6th Place Solution for the Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>1. Integration of biological knowledge</h2> <p>I tried integrating different biological knowledge, but unfortunately, most didn't work very well.</p> <p>Firstly, I came up with the idea of using the GO term to reduce genes into modules. I adopted the KEGG 2016 genes set and filtered out gene sets with a P-value less than 0.05. However, genes from the same set exhibit considerably different differential expression (DE) values. As a result, I feel the hypothesis that genes from the same GO term would have similar DE values cannot stand. Then, I resorted to finding co-expressed genes based on the gene expression matrix. I found the DE values between co-expressed genes do have a high Pearson correlation. However, I checked the predicted DE values of co-expressed genes and found they yield strong Pearson correlations as well. In other words, adding the correlation constraints to the optimization process won't have much effect.</p> <p>Next is about the molecule representation. I tried using the features from the pre-trained molecule models including ChemBERT and MolCLR, as well as the TFIDF-transformed element count proposed in the public notebooks. In my experiments, using the pre-trained features led to slightly inferior results, while the TFIDF-transformed element count and simply learnable molecule features give similar performance. I wonder if it is because the pre-training models capture more coarse-grained differences between molecules, or because the molecules in the contest are quite different from those used for pre-training.</p> <p>Finally is the integration of ATAC data. I did not have a good idea of integrating ATAC data into the model. All that I had come up with is that genes corresponding to a small ATAC count would be less affected by molecules. Anyway, I am not sure about such a hypothesis, and in practice, I simply concatenated the ATAC feature and found it slightly improves the results.</p> <h2>2. Exploration of the problem</h2> <p>According to the problem definition, I think three targets could be predicted to achieve the task. The first target is directly the DE values. The variables in DE value prediction would be cell types and compounds. A model could be trained to predict the DE value given cell type-compound pairs. The second target is the bulk expression values, namely, the input to the Limma model. The variables in bulk prediction would be only the compounds (as I will elaborate on later). I feel such a paradigm is the most promising and robust solution. The most important reason is that the results could be precisely validated. To be specific, we could send the predicted and provided bulk expressions to the Limma model, and check how close the predicted DE values are to the provided ones. I feel such a validation scheme would be much less risks in overfitting. The third target is the single-cell gene expression. Though the training data is most sufficient in such a paradigm, it is the most challenging paradigm since we do not have the paired before-perturbation and after-perturbation gene expression values.</p> <h2>3. 
Model design</h2> <p>In this competition, I mainly explored the first two prediction paradigms, namely the DE values and bulk expression values.</p> <h3>3.1 DE value prediction</h3> <h4>3.1.1 Preprocess</h4> <p>No data preprocessing is applied. Common preprocessing strategies such as normalizing and scaling even harm the performance in my experience.</p> <h4>3.1.2 Model architecture</h4> <p>The plainest model learns the cell type and compound representation, as well as makes the predictions at the same time. The model could be written within a few lines, namely,</p> <pre><code> (nn.Module): (): (Net, self).__init__() self.type_num = type_num self.compound_num = compound_num self.gene_num = gene_num self.type_embedding = nn.Embedding(self.type_num, ) self.compound_embedding = nn.Embedding(self.compound_num, ) self.predictor = nn.Sequential( nn.Linear(, ), nn.BatchNorm1d(), nn.Dropout(), nn.ReLU(), nn.Linear(, ), nn.BatchNorm1d(), nn.Dropout(), nn.ReLU(), nn.Linear(, self.gene_num), ) (): type_embedding = self.type_embedding() compound_embedding = self.compound_embedding(compound) embedding = torch.cat([type_embedding, compound_embedding], dim=) self.predictor(embedding) </code></pre> <p>Optionally, we could replace the learnable cell type representation with RNA or ATAC counts averaged by cell type, and replace the compound representation with features from pre-trained models such as ChemBERT and MolCLR. Training such a vanilla model gives 0.586 MRRMSE on the public test split and 0.785 MRRMSE on the private test split.</p> <p>I also tried Transformer architecture like the Performer used in the scBERT paper. The idea is to treat each gene as a token and replace the positional embedding with cell type and compound representation. The token of each gene is learned together with the Transformer model. Due to the space limitation, I will not attach the model configuration code here, please refer to the GitHub repository attached below. Sadly, though the Transformer architecture generally gives better performance, in my practice, I just cannot successfully train the model (i.e., the training loss stays very high), and achieved 0.611 MRRMSE on the public test split and 0.767 MRRMSE on the private test split. It was not until the competition ended that I found the Transformer architecture achieved a much better score on the private test split, despite its poor performance on the public test split. Such an interesting result is worth exploring (or it is simply due to randomness and luck).</p> <h4>3.1.3 Learning objective</h4> <p>To directly predict the DE values, I tried the MSE loss, L1 loss, and MRRMSE loss. The MRRMSE loss gives the best performance in practice. I did not use fancy tuning on the optimizer but simply used a standard Adam optimizer without any learning rate scheduler.</p> <h3>3.2 Bulk expression prediction</h3> <h4>3.2.1 Preprocess</h4> <p>The bulk count data is first normalized following the classic scRNA-seq preprocessing pipeline, namely, i) scaled to have 1e4 counts per bulk, ii) applied the log1p transformation, and iii) scaled to have zero mean and unit variance.</p> <h4>3.2.2 Model architecture</h4> <p>After many tries on the DE value prediction paradigm, I felt a bit frustrated since the performance gain only comes from the fancy tuning of the model architecture. Hence, I turned to try the bulk expression prediction paradigm in the later stage of the competition. According to the experiment design, I noticed that there is a negative control spot in each row on the plate. 
In other words, the only difference between the negative spot and the remaining spots lies in the added compound (please correct me if that is wrong). Consequently, we could predict the perturbed bulk expression values based on the baseline counts and the compound. To this end, I built a conditional autoencoder as follows,</p> <pre><code> (nn.Module): (): (Net, self).__init__() self.compound_num = compound_num self.gene_num = gene_num sm_feature : self.sm_emb = nn.Embedding(self.gene_num, ) self.sm_enc = : self.sm_emb = sm_feature self.sm_enc = nn.Sequential( nn.Linear(self.sm_emb.shape[], ), nn.BatchNorm1d(), nn.ReLU(), nn.Linear(, ), ) self.type_atac = type_atac self.atac_enc = nn.Sequential( nn.Linear(self.type_atac.shape[], ), nn.BatchNorm1d(), nn.ReLU(), nn.Linear(, ), ) self.encoder = nn.Sequential( nn.Linear(self.gene_num, ), nn.BatchNorm1d(), nn.ReLU(), nn.Linear(, ), ) self.decoder = ConditionalNet( dim_feature=, dim_cond_embed=, dim_hidden=, dim_out=self.gene_num, n_blocks=, skip_layers=(), ) (): self.sm_enc : sm = self.sm_emb(sm_name) : sm = self.sm_enc(self.sm_emb[sm_name]) encode = self.encoder(x) atac = self.atac_enc(self.type_atac[]) x = sm cond = torch.cat([encode, atac], dim=) pred = self.decoder(x, cond) pred </code></pre> <p>Notably, here I unnaturally chose the negative count and cell type-averaged ATAC count as conditions, while letting the compound be the input. In practice, such a configuration leads to better results than reversely setting negative count as input and compound as the condition. Such a result could probably be attributed to the over-fitting of compound representation for the later configuration. Likewise, I also tried the Transformer architecture but it does not work very well.</p> <h4>3.2.3 Learning objective</h4> <p>The model was trained to predict the normalized bulk count value change after adding the compound. The predicted normalized value is then recovered to the raw count according to the previous scaling and normalization factors. I tried the MSE loss, L1 loss, Smooth L1 loss, and MRRMSE loss, and found the Smooth L1 loss leads to the best performance. The predicted bulk value was then concatenated with the known bulk value and fed into the Limma model to compute the DE values.</p> <h4>3.2.4 Post preprocessing</h4> <p>Though the bulk expression prediction paradigm seems technically sound, its performance is not that satisfying. In my experiments, the prediction DE values differ a lot on the known compounds. Here, I would like to point out that comparing the DE values computed by Limma on the predicted bulk expression and the provided DE values is a solid validation metric, which would be less affected by the over-fitting problem. Anyway, I found the computed DE values have a larger mean compared with the provided ones. I wonder if it is because the Limma is applied on different sets of compounds (i.e., partially on the provided DE values but fully on the private DE values). Thus, I scaled the DE values computed by Limma to have the same mean on the known compounds. Such a paradigm ends up in the best MRRMSE of 0.587 on the public test split and 0.809 on the private test split.</p> <h3>3.3 Ensembling</h3> <p>Ensembling the results of the above two different paradigms led to 0.563 MRRMSE on the public test split and 0.755 on the private test split. By further ensembling the EDA results of 0.567 public score, the performance further improved to have 0.552 MRRMSE on the public test split and 0.728 on the private test split. 
The performance gain from ensembling results of different paradigms is surprising. Yes, my best past submission is even better than the 1st place on the leaderboard. However, it did not correspond to the best public score, so I did not choose it as the final submission.</p> <h2>4. Robustness</h2> <p>For the DE value prediction paradigm, I tried removing compounds with extreme values such as "MLN 2238", but did not observe performance improvements. Adding noise to the inputs is less effective than adding Dropout layers to the network. For the bulk expression paradigm, I tried removing the most dissimilar cell type "T regulatory cells" when training the model. Interestingly, the results were barely affected. The stronger robustness of the bulk expression paradigm can probably be attributed to the larger number of training samples. For directly predicting the DE values, hundreds of samples are too few to predict thousands of genes.</p> <h2>5. Documentation, code style, and reproducibility</h2> <p>The code and documentation can be accessed from <a href="https://github.com/Yunfan-Li/PerturbPrediction" target="_blank">https://github.com/Yunfan-Li/PerturbPrediction</a>.</p>
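<p>As an appendix to the learning objectives discussed in Sections 3.1.3 and 3.2.3: the MRRMSE objective (mean row-wise root mean squared error, the competition metric) can be written in PyTorch as below. This is my own transcription of the metric, not code taken from the author's repository.</p> <pre><code>import torch

def mrrmse_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean row-wise RMSE over a (batch, n_genes) tensor."""
    per_row = torch.sqrt(torch.mean((pred - target) ** 2, dim=1) + 1e-12)  # eps keeps sqrt stable at 0
    return per_row.mean()
</code></pre>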
Open Problems – Single-Cell Perturbations
710th Place Solution for the Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This might be the lowest-scoring entry to still post a solution, but I learned a lot during my first competition and want to share it with the community. Hope you find it useful. :) <a href="https://github.com/Arkarachai/Kaggle_OpenProblems_SingleCellPerturbations_YellowAvocado" target="_blank">https://github.com/Arkarachai/Kaggle_OpenProblems_SingleCellPerturbations_YellowAvocado</a> </p>
Open Problems – Single-Cell Perturbations
7th Place Solution for the Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank the organizers for hosting this interesting competition and congrats to the Winners.<br> It was a big surprise for me to see my resolution finally achieve 7th place. </p> <h2>Context</h2> <p><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a><br> <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a></p> <h2>Overview of the approach</h2> <p>My approach has two major steps: i) learn embeddings for all cell types, small molecular, and genes, ii) use embedding as features to train a deep learning model to predict the target while accounting for overfitting. Throughout the competition, I only used a fully connected network with three layers. My best submission is also based on a single FC model.</p> <h3>Learning embeddings</h3> <p>In this step, the goal is to learn a specific embedding for each cell type, molecular, and gene. I was strongly inspired by <a href="https://genomebiology.biomedcentral.com/articles/10.1186/s13059-020-01977-6" target="_blank">this paper</a>, where the authors used deep tensor factorization to learn a dense, information-rich representation for cell type, experimental assay, and genomic position. </p> <p>Basically, I used the same approach but played around with the model architectures and parameters, such as the number of latent factors, and the number of dimensions of the network, as well as how to combine the features (e.g., concatenate vs. additive). 
</p> <p>In the end, my mode is the following:</p> <pre><code> (torch.nn.Module): (): ().__init__() self.cell_types = cell_types self.compounds = compounds self.genes = genes self.n_cell_types = (cell_types) self.n_compounds = (compounds) self.n_genes = (genes) self.n_cell_type_factors = n_cell_type_factors self.n_compounds_factors = n_compounds_factors self.n_gene_factors = n_gene_factors self.cell_type_embedding = torch.nn.Embedding(self.n_cell_types, self.n_cell_type_factors) self.compound_embedding = torch.nn.Embedding(self.n_compounds, self.n_compounds_factors) self.gene_embedding = torch.nn.Embedding(self.n_genes, self.n_gene_factors) self.n_hiddens = n_hiddens self.dropout = dropout self.n_factors = n_cell_type_factors + n_compounds_factors + n_gene_factors self.model = nn.Sequential(nn.Linear(self.n_factors, self.n_hiddens), nn.BatchNorm1d(self.n_hiddens), nn.ReLU(), nn.Dropout(self.dropout), nn.Linear(self.n_hiddens, self.n_hiddens), nn.BatchNorm1d(self.n_hiddens), nn.ReLU(), nn.Dropout(self.dropout), nn.Linear(self.n_hiddens, )) (): cell_type_vec = self.cell_type_embedding(cell_type_indices) compound_vec = self.compound_embedding(compound_indices) gene_vec = self.gene_embedding(gene_indices) x = torch.concat([cell_type_vec, compound_vec, gene_vec], dim=) x = self.model(x) x </code></pre> <p>To train this model, I used all the data from <code>de_train.parquet</code> and converted the table as follows:</p> <pre><code> df = pd.read_parquet() df = df.sort_values([, ]) df = df.drop([, , ], axis=) df = pd.melt(df, id_vars=[, ], var_name=, value_name=) </code></pre> <p>The training data looks like:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F642483%2Ffcf2b04f2255dc9fe2f0382320346feb%2FScreenshot%202023-12-05%20at%2021.02.43.png?generation=1701828192377214&amp;alt=media" alt=""></p> <p>Here, I trained the model with 100 epochs without validation, because I will only the embedding layer for Step 2.</p> <p>To check if the model learns meaningful embedding for cell types, molecules, and genes, I also visualized the embedding with UMAP. For example, below is the 2D UMAP of gene embeddings:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F642483%2F59782ef056d394c5deb99376c0af2a05%2FScreenshot%202023-12-05%20at%2021.05.28.png?generation=1701828348397738&amp;alt=media" alt=""></p> <p>By eyeballing, it looks like there are some structures for the genes.</p> <h3>Predicting target</h3> <p>Once I obtained the embeddings, I trained another model to predict the target, and this model was also used to generate the final submission. Again, I used a FC network as follows:</p> <pre><code> (torch.nn.Module): (): ().__init__() self.n_input = n_input self.n_hiddens = n_hiddens self.dropout = dropout self.model = nn.Sequential(nn.Linear(self.n_input, self.n_hiddens), nn.BatchNorm1d(self.n_hiddens), nn.ReLU(), nn.Dropout(self.dropout), nn.Linear(self.n_hiddens, self.n_hiddens), nn.BatchNorm1d(self.n_hiddens), nn.ReLU(), nn.Dropout(self.dropout), nn.Linear(self.n_hiddens, )) (): x = self.model(x) x </code></pre> <p>To prevent overfitting, I used each of the cell types as validation data, which means my model was based on 4-fold cross-validation. 
For the compounds, I followed the notebook here to select the private test compounds for validation:</p> <pre><code> key, cell_type cell_type_names.items(): (cell_type) df_train = df[(df[] != key) | ~df[].isin(privte_ids)] df_valid = df[(df[] == key) &amp; df[].isin(privte_ids)] df_train = df_train.sort_values([, ]) df_valid = df_valid.sort_values() df_train = convert_to_long_df(df_train) df_valid = convert_to_long_df(df_valid) df_train.to_csv() df_valid.to_csv() </code></pre> <p>For model training, I used the following strategies:</p> <pre><code> criterion = torch.nn.MSELoss() optimizer = AdamW(model.parameters(), lr=args.lr, weight_decay=) scheduler = ReduceLROnPlateau(optimizer, , min_lr=) </code></pre> <p>I trained one model by using each of the cell types as validation data, and for the final submission, I just averaged the predictions.</p> <h2>What didn't work for me</h2> <p>During the competition, I spent a lot of time including the features based on prior knowledge, such as single-cell data, molecular structure embedding, and gene embedding for other large models, such as geneFormer. However, they all didn't work out. So my final model was just based on the features learned from Step 1.</p> <h2>Source</h2> <p><a href="https://github.com/lzj1769/7th_place_solution_Single-Cell-Perturbations" target="_blank">https://github.com/lzj1769/7th_place_solution_Single-Cell-Perturbations</a><br> <a href="https://genomebiology.biomedcentral.com/articles/10.1186/s13059-020-01977-6" target="_blank">https://genomebiology.biomedcentral.com/articles/10.1186/s13059-020-01977-6</a><br> <a href="https://github.com/jmschrei/avocado" target="_blank">https://github.com/jmschrei/avocado</a></p> <p>Hi <a href="https://www.kaggle.com/songqizhou" target="_blank">@songqizhou</a> </p> <p>Thanks for your comments, and I am glad that we converged on a similar approach.</p> <p>Regarding 'encode cell types using the principal components of RNA expression levels for each cell type in a multimodal dataset', this also didn't work out for me. I tried two different ways, i.e., 1) generate PCA for each single cell and then use average PCs the each cell type, 2) first get pseudo-bulk profiles for each cell type and then do the PCA. It turns out they just didn't boost my model.</p> <p>Sorry for the bad organization of my code repository; I am working on it to have a clean version, which should be much easier to understand.</p> <p>Finally, good luck with your submission! </p>
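<p>Coming back to the validation scheme described above, here is a simplified sketch of the leave-one-cell-type-out folds and the averaging of fold predictions; <code>df</code>, <code>test_df</code> and <code>train_model</code> are placeholders, and the actual code additionally restricts the validation rows to the private-test compounds.</p> <pre><code>import numpy as np

held_out_types = ["NK cells", "T cells CD4+", "T cells CD8+", "T regulatory cells"]

def cv_predict(df, test_df, train_model):
    fold_preds = []
    for cell_type in held_out_types:
        train_df = df[df["cell_type"] != cell_type]
        valid_df = df[df["cell_type"] == cell_type]
        model = train_model(train_df, valid_df)   # monitors loss on the held-out cell type
        fold_preds.append(model.predict(test_df))
    return np.mean(fold_preds, axis=0)            # average the four fold models
</code></pre>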
Open Problems – Single-Cell Perturbations
8th Place Solution for the Open Problems – Single-Cell Perturbations Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners, and thank you to Kaggle for organizing such an interesting competition. And also thanks to the other kagglers who shared their ideas and notebooks.</p> <h1>Context</h1> <ul> <li><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">Competition Overview</a></li> <li><a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">Competition Data</a></li> </ul> <h1>Overview of the Approach</h1> <h3>Model</h3> <p>For model development, I designed and fine-tuned a simple neural network through a series of experiments, aiming to reduce both CV and LB scores. I used augment features from the <a href="https://www.kaggle.com/code/mehrankazeminia/3-op2-feature-augment-fragments-of-smiles?scriptVersionId=150423767&amp;cellId=24" target="_blank">notebook - [3] OP2 - Feature Augment &amp; Fragments of SMILES</a> as model inputs.</p> <p>Here is the model architecture:</p> <pre><code> (nn.Module): (): ().__init__() self.ce_layer = nn.Linear(labels, dim_size) self.sm_layer = nn.Linear(labels, dim_size) hidden_size = dim_size * self.fc1 = nn.Linear(hidden_size, hidden_size*mul_ratio) self.fc2 = nn.Linear(hidden_size*mul_ratio, hidden_size) self.act = nn.GELU() self.out = nn.Linear(hidden_size, labels) (): x1 = self.act(self.ce_layer(cell_type)) x2 = self.act(self.sm_layer(sm_name)) x = torch.concat([x1, x2], dim=-) x = self.act(self.fc1(x)) x = self.act(self.fc2(x)) x = self.out(x) x </code></pre> <h3>Data Augmentation</h3> <p>The training process invloved a strategic approach to data augmentation. Initially, I employed only mean values for the cell_type and sm_name, respectively. Subsequently, I explored various statistical values such as median, min, max and quantiles. And I found out that median values significantly improves the LB score.</p> <p>Moreover, I experimented with combinations of these features. I implemented 50% random selection between mean and median, and 25% random selection among mean, median, Q1 and Q2 for both cell_type and sm_name. </p> <h3>Validation Strategy</h3> <p>I used K-Fold cross-validation stratify on cell_type, trained 5, 10, 15, 20 splits. I reviewed that LB score increases with 10 and 15 splits.</p> <h1>Details of the submission</h1> <p>The results of augmented models are summarized below, along with their respective LB scores.</p> <ol> <li>median / 0.549</li> <li>mean and median / 0.549</li> <li>mean, median, Q1 and Q2 / 0.551</li> </ol> <p>The final submission was a weighted average of these models by 0.35/0.35/0.3, which boosted up the LB score to 0.547. Since they had different prediction distributions with similiar LB scores, I believed that the ensemble would generalize well in the private. 
</p> <h3>Things that didn't work</h3> <ul> <li>pseudo labels</li> <li>dropout</li> <li>normalization</li> <li>data selection (control, etc.)</li> </ul> <h1>Sources</h1> <ul> <li><a href="https://www.nature.com/articles/s41592-023-01969-x" target="_blank">Learning single-cell perturbation responses using neural optimal transport</a></li> <li><a href="https://www.kaggle.com/code/mehrankazeminia/3-op2-feature-augment-fragments-of-smiles" target="_blank">https://www.kaggle.com/code/mehrankazeminia/3-op2-feature-augment-fragments-of-smiles</a></li> </ul> <p>No problem. Here's the <a href="https://www.kaggle.com/code/todaya/op2-notebook" target="_blank">link</a> to the notebook.</p> <p>Thanks, glad you find it interesting.</p>
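<p>Returning to the data augmentation section above, here is a hedged sketch of the statistic-mixing idea: for each training example, the cell_type/sm_name feature vectors are looked up from one of several pre-computed target aggregations chosen at random. The table layout and quantile choices are my own illustration, not the exact configuration used.</p> <pre><code>import random

STATS = ["mean", "median", "q25", "q75"]   # aggregations precomputed per cell_type and per sm_name

def sample_features(cell_type, sm_name, agg_tables):
    stat = random.choice(STATS)            # uniform choice -> 25% per statistic
    cell_vec = agg_tables["cell_type"][stat].loc[cell_type].to_numpy()
    sm_vec = agg_tables["sm_name"][stat].loc[sm_name].to_numpy()
    return cell_vec, sm_vec
</code></pre>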
Open Problems – Single-Cell Perturbations
9th solution write-up: Pure NN model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>9th solution: pure NN model</h1> <p>Hi everyone! I am Dave, this is my first time completing a kaggle competition, and I feel honored to win a gold. I want to use a pure NN model to solve this problem. I would be happy if you find this solution interesting and helpful.</p> <h2>Problem definition</h2> <p>I have seen many good solutions using each row of the data as one sample, however, I view this problem in a different way. I extracted (cell,sm,gene,value) pairs from the dataset and for each time, the model will predict the object value for a given cell type, sm type, and gene type.</p> <h2>The architecture of Model</h2> <p>The overview of the model is shown as follows:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4090067%2F244e8db36e50bc5eeb7ae04d1b4657b1%2F2023-12-12%204.07.55.png?generation=1702368531932768&amp;alt=media" alt=""><br> You can see that I used three kinds of features: sm features, gene features and cell features. Let’s first check how to get the sm features:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4090067%2F0c1cf4496cea2ad183a03910b46793ce%2F2023-12-12%202.30.23.png?generation=1702365244366440&amp;alt=media" alt=""><br> Here I involved many different kinds of features:</p> <ul> <li>MACC: Molecular ACCess System keys, are one of the most commonly used structural keys, check details <a href="https://chem.libretexts.org/Courses/Intercollegiate_Courses/Cheminformatics/06%3A_Molecular_Similarity/6.01%3A_Molecular_Descriptors" target="_blank">here</a>.</li> <li>ECFP: extended-connectivity fingerprints, are generated using a variant of the Morgan algorithm, check details <a href="https://chem.libretexts.org/Courses/Intercollegiate_Courses/Cheminformatics/06%3A_Molecular_Similarity/6.01%3A_Molecular_Descriptors" target="_blank">here</a>.</li> <li>WHIM: Weighted Holistic Invariant Molecular descriptors, are geometrical descriptors based on statistical indices calculated on the projections of the atoms along principal axes, check details <a href="https://chemgps.bmc.uu.se/help/dragonx/WHIMdecriptors1.html" target="_blank">here</a>.</li> <li>sm type: the types of small molecules.</li> <li>sm hba: the number of H-bond acceptors for a molecule.</li> <li>sm hbd: the number of H-bond donors for a molecule.</li> <li>sm rotb: the number of rotatable bonds for a molecule.</li> <li>sm mw: the molecular weight for a molecule.</li> <li>sm psa: the Polar surface area for a molecule.</li> <li>sm logp: the log of the partition coefficient of a solute between octanol and water.</li> </ul> <p>Now let's check how to get the gene features:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4090067%2Ffcadf4701630dbb7774bc4d4616ba1c7%2F2023-12-12%202.46.18.png?generation=1702365793278185&amp;alt=media" alt=""><br> I have involved the PCA of the genes and the additional features. 
To calculate the PCA features, I set n_components to 10, so each gene is represented by a 10-dimensional vector.</p> <p>Then let's check how to get the cell features:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4090067%2Fd0c22eba36b90129dd4ff4053bd52e88%2F2023-12-12%202.50.06.png?generation=1702365903775022&amp;alt=media" alt=""><br> Something new here is that I used a GCN layer to extract the features contained in the relationships between different cells. The graph used here is very simple: cells are denoted as nodes, so there are only 6 nodes in this graph, and every pair of nodes is connected by an undirected edge. I found it helpful on the LB.<br> What's more, given that this problem is unbalanced, we need to improve generalization across cell types, which is why I added random noise to the cell features.</p> <p>Another important part is the attention layers; the structure is shown as follows:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4090067%2F7ab9b5fff9e6588b8d3b3d17020a87e0%2F2023-12-12%203.43.25.png?generation=1702367027176571&amp;alt=media" alt=""></p> <p>Now we know the structure of the model. Before training it directly, to improve performance I built a larger model based on 3 models with the same structure mentioned earlier:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4090067%2F9269c6e48312dcde4897c6a95fe4c783%2F2023-12-12%204.17.34.png?generation=1702369066405973&amp;alt=media" alt=""></p> <h2>Others</h2> <p>Training setup:</p> <ul> <li>loss function: MSE</li> <li>optimizer: AdamW</li> <li>learning rate: 3e-4</li> <li>weight decay: 1e-3</li> <li>batch size: 128</li> <li>CV: 5-fold cross-validation</li> </ul> <p>Thank you very much! In this competition, I found that using unconstrained weights and no bias in the linear mixture layer helped improve the score, and the score dropped if I used softmax to constrain the weights; that's interesting😆</p> <p>Thank you! I'm glad that we share the same view on this problem; I think this framing gives more freedom, don't you think? Actually, by checking the private scores of my past submissions, I found I could have gotten a prize with a score of 0.732, but I didn't choose that submission… </p>
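<p>As a footnote to the mixture-layer remark above, here is a minimal, hypothetical PyTorch sketch of such an unconstrained linear mixture head over the three sub-models. The module and variable names are mine, not the author's code; treat it as an illustration of the idea rather than the actual implementation.</p>
<pre><code>import torch
import torch.nn as nn

class LinearMixture(nn.Module):
    """Blend the outputs of k sub-models with free (unconstrained) weights and no bias."""
    def __init__(self, k: int = 3):
        super().__init__()
        # initialised to a plain average; no softmax and no bias term
        self.w = nn.Parameter(torch.full((k,), 1.0 / k))

    def forward(self, preds):
        # preds: list of k tensors of identical shape, e.g. (batch, 1) predicted values
        stacked = torch.stack(preds, dim=-1)      # (batch, 1, k)
        return (stacked * self.w).sum(dim=-1)     # weighted sum; weights may be any real numbers

# usage sketch: out = LinearMixture(3)([model_a(x), model_b(x), model_c(x)])
</code></pre>
<p>A softmax over the weights would force a convex combination of the sub-models; leaving the weights free (and dropping the bias) also lets the ensemble rescale its output, which matches the author's observation that the unconstrained version scored better.</p>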
Open Problems – Single-Cell Perturbations
A model used in the 24th solution - pure linear algebra!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Intro</h1> <p>I share a linear algebra method, an unbiased and reproducible approach, resulting in decent Private/Public scores of 0.768/0.582. My final submission is an ensemble of this <a href="https://www.kaggle.com/code/makio323/24th-using-linear-algebra-priv-pub-0-768-0-582" target="_blank">Linear Algebra</a> approach along with AE NN mimicking the linear algebra approach by NN, whose joint weight is 0.70, and the combination of the public notebooks, <a href="https://www.kaggle.com/code/makio323/pyboost-secret-grandmaster-s-tool-0-592" target="_blank">Pyboost</a>, <a href="https://www.kaggle.com/code/makio323/fork-of-nlp-regression-12a31a-0-594" target="_blank">NN</a>, and <a href="https://www.kaggle.com/code/mehrankazeminia/1-op2-eda-linearsvr-regressorchain" target="_blank">Linear SVR</a>, with the total weight of 0.3. It turns out that the pure Linear Algebra model gets the best private leaderboard score.</p> <p>In the first half of this competition, I struggled to overcome the wall of a 0.600 public score. Some lucky runs of some NN models got over the wall, but not always. There was also the second formidable wall of 0.585, which blended the results of public models.</p> <p>After the first half, I came up with this linear algebra approach and overcame the walls; the prediction is deterministic and reproducible and helped me to move on. </p> <p>The code of this linear model is available at <a href="https://www.kaggle.com/code/makio323/24th-using-linear-algebra-priv-pub-0-768-0-582" target="_blank">my code notebook</a>. </p> <h1>Biological Hypothesis</h1> <p>It is a bit old, before the deep neural network, I had research experience using linear algebra to predict the missing values in a matrix - <a href="https://www.cs.uic.edu/~mtamura/MakioTamuraMasterProject.pdf" target="_blank">Missing Value Expectation of Matrix Data by Fixed Rank Approximation Algorithm</a>, and it may inspire me.</p> <p>An assumption behind this method is that the differential expressions (DEs) of 18,211 genes at one cell line (e.g. NK cells) can be linearly transferable to those of another cell line (e.g. B Cells) on the same chemical perturbation.</p> <p>A chemical perturbation triggers a complex activity interaction among the 18,211 genes, and different chemical perturbations make different activity patterns, resulting in various DEs from the same baseline condition. However, there would be an unseen “master rule” to govern these interactions on each cell line. If the master rule of one cell line (e.g. NK cells) could be similar to another cell (e.g. B cells), DEs on the same chemical perturbation could be predictable from one to another. 
Even without knowing the master rule of each cell line, the relationship among cell lines could be captured such that</p> <ul> <li><em>f</em>(DE<em>_i_c</em>) = DE<em>_j_c</em></li> </ul> <p>where DE<em>_i_c</em> and DE<em>_j_c</em> are the differential expressions of cell line <em>i</em> (NK cells, T cells CD4+, T cells CD8+, T regulatory cells) and <em>j</em> ('B cells', 'Myeloid cells') under perturbation by chemical <em>c</em>, and <em>f</em> is some unknown function.</p> <p>My approach is to assume that a linear system can act as a proxy for <em>f</em>, so that solving the system provides the "transfer" such that</p> <ul> <li>DE<em>_i_core</em> x T = DE<em>_j_core</em></li> </ul> <p>where DE<em>_i_core</em> and DE<em>_j_core</em> are <em>m</em> x <em>n</em> matrices, <em>m</em> is the number of chemicals shared by cell line <em>i</em> (NK cells, T cells CD4+, T cells CD8+, T regulatory cells) and <em>j</em> ('B cells', 'Myeloid cells') as shown below (including positive controls), and <em>n</em> is the number of genes (18,211). T is an <em>n</em> x <em>n</em> matrix, considered a transformer matrix from one cell line to another.</p> <table> <thead> <tr> <th>base cell line <em>i</em></th> <th>target cell line <em>j</em></th> <th>shared chemicals <em>m</em></th> </tr> </thead> <tbody> <tr> <td>NK cells</td> <td>B cells</td> <td>17</td> </tr> <tr> <td>NK cells</td> <td>Myeloid cells</td> <td>17</td> </tr> <tr> <td>T cells CD4+</td> <td>B cells</td> <td>17</td> </tr> <tr> <td>T cells CD4+</td> <td>Myeloid cells</td> <td>17</td> </tr> <tr> <td>T cells CD8+</td> <td>B cells</td> <td>15</td> </tr> <tr> <td>T cells CD8+</td> <td>Myeloid cells</td> <td>15</td> </tr> <tr> <td>T regulatory cells</td> <td>B cells</td> <td>17</td> </tr> <tr> <td>T regulatory cells</td> <td>Myeloid cells</td> <td>17</td> </tr> </tbody> </table> <p>Once the transformer T is solved on the core chemicals, it may be applied to predict the DEs of the target cell (e.g. B cells) from the known cell (e.g. NK cells)</p> <ul> <li>Prediction of DE<em>_j_target</em> = DE<em>_i_target</em> x T</li> </ul> <p>where DE<em>_j_target</em> is the DEs of the prediction cell line <em>j</em> (B cells and Myeloid cells) on the target chemicals, 128 for B cells and 127 for Myeloid cells, and DE<em>_i_target</em> is the DEs of the known cell line <em>i</em> (NK cells, T cells CD4+, T cells CD8+, T regulatory cells) on the target chemicals.</p> <p>This approach may provide a robust and unbiased prediction.</p> <h1>Observation</h1> <p>I used SVD to get the 1st and 2nd projected expression components of the shared chemicals across the 6 cell lines and visualized the dispersion among them (17 chemicals including positive controls; chemicals missing in T cells CD8+ are replaced with those of T cells CD4+). Well, it is a bit difficult to recognize a clear pattern.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F577034%2Fb9f7d6b1595bae80513bbcd2fa786bb4%2Fs_plot.png?generation=1701966557440410&amp;alt=media" alt=""></p> <p>Here is a grid plot of the previous one, broken out by chemical. For most of the chemicals, there is not much difference among the 6 cell lines. However, the NK cells seem to be similar to the B cells and Myeloid cells on the chemical perturbations that create the larger differences among the 6 cell lines, such as Belinostat (one of the positive controls), MLN 2238, and Oprozomib. In the end, accuracy in predicting such chemicals may be important.
</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F577034%2Fa5aac54f6b18ab37d2cdaa3e3c4c921e%2Fsg_plot.png?generation=1701966669627694&amp;alt=media" alt=""></p> <h1>Model</h1> <h3>Simpler model, as in the background section</h3> <ol> <li><p>Solve the linear system (the transformer) from a base cell line (NK cells, T cells CD4+, T cells CD8+, T regulatory cells) to a target cell line ('B cells', 'Myeloid cells') on the DE of the chemicals tested in both cell lines (15 plus 2 positive controls).</p> <p>The transformer can be computed by multiplying DE<em>_j_core</em> from the left by a pseudo-inverse of DE<em>_i_core</em>:</p> <p>DE<em>_i_core</em> x T = DE<em>_j_core</em><br> T = DE<em>_i_core</em><sup>+</sup> x DE<em>_j_core</em></p> <p>where DE<em>_i_core</em><sup>+</sup> is the (Moore-Penrose) pseudo-inverse of DE<em>_i_core</em>.</p></li> <li><p>Apply the transformer to the DE of the base cell line on the target chemicals to get the DE of the target cell line on those chemicals.</p></li> </ol> <p>The simpler model consumes a lot of memory (&gt; 20 GB) and cannot be run on the free tier of Saturn Cloud, which I had used for convenience, so I also propose an alternative solution using SVD with a projection space.</p> <h3>SVD projection</h3> <p>This variant first reduces the gene dimension (18,211) to the full-rank dimension of the entire data set (614) by SVD, creates the transformer in the projected space, applies it in that space from the known cell to the target cell on the target chemicals, and finally reconstructs the prediction in the original gene dimension.</p> <ol> <li>Project the DE data by SVD, keeping the whole dimension.</li> <li>Solve the linear system from a base cell line (NK cells, T cells CD4+, T cells CD8+, T regulatory cells) to a target cell line ('B cells', 'Myeloid cells') on the projected DE of the chemicals tested in both cell lines (15 plus 2 positive controls).</li> <li>Apply the transformer to the projected DE of the base cell line on the target chemicals to get the projected DE of the target cell line.</li> <li>Invert the projection of the target cell line's DE back to the original gene space to obtain the predicted DE.</li> </ol> <h1>Robustness, Code, and Reproducibility</h1> <p>Please refer to <a href="https://www.kaggle.com/code/makio323/24th-using-linear-algebra-priv-pub-0-768-0-582" target="_blank">my notebook</a>: the code runs in the notebook environment, and the result is deterministic, so it is reproducible. </p> <h1>Key Findings from the results</h1> <p>This method can indeed be used to diagnose the similarity of the "master rule" among the cell lines for scientific insight, and my results suggest that NK cells, along with T cells CD4+, can be the stronger predictor cell lines for B cells and Myeloid cells.
Interestingly, T cells CD4+ is a better predictor for B cell while NK cell is a better predictor for Myeloid cells.</p> <p>This finding may align with the biological findings - <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3072878/" target="_blank">NK cells can be derived from the myeloid lineage, Blood - The Journal of the American Society of Hematology 2011 3548 </a> and <a href="https://www.cell.com/trends/immunology/fulltext/S1471-4906(21)00117-4" target="_blank">T follicular helper cells cognately guide differentiation of antigen primed B cells in secondary lymphoid tissues - Trends in immunology 42.8 2021</a>.</p> <h3>Prediction by single base cell line</h3> <table> <thead> <tr> <th>base cell</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>NK cells</td> <td>0.784</td> <td>0.596</td> </tr> <tr> <td>T cells CD4+</td> <td>0.775</td> <td>0.607</td> </tr> <tr> <td>T cells CD8+</td> <td>0.959</td> <td>0.706</td> </tr> <tr> <td>T regulatory cells</td> <td>0.834</td> <td>0.680</td> </tr> </tbody> </table> <h3>Prediction by two base cell lines</h3> <table> <thead> <tr> <th>base cell/target cell</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>Predict B Cell by NK, Myeloid cells by T4</td> <td>0.786</td> <td>0.616</td> </tr> <tr> <td>Predict B Cell by T4, Myeloid cells by NK</td> <td>0.773</td> <td>0.587</td> </tr> </tbody> </table> <ol> <li><p>Differences in DE among certain cell lines can be captured linearly very well, and the linear relation is well transferable across different chemical responses, 0.768 in private and 0.582 in public</p></li> <li><p>DE of NK cell and T cells CD4+ cells are quite predictable for that of B Cell and Myeloid cells</p></li> <li><p>T cells CD4+ is a better predictor for B cell, and NK cells is a better predictor for Myeloid cell</p></li> </ol>
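<p>For readers who want to see the mechanics, here is a compact NumPy sketch of the transformer idea under my own function and variable names (the author's actual implementation is in the linked notebook; the <code>rank</code> default and the shapes are illustrative assumptions).</p>
<pre><code>import numpy as np

def fit_transformer(de_base_core, de_target_core):
    """Solve de_base_core @ T = de_target_core in the least-squares sense.

    de_base_core   : (m, n) DE of the base cell line on the m shared chemicals
    de_target_core : (m, n) DE of the target cell line on the same chemicals
    Returns T with shape (n, n).
    """
    return np.linalg.pinv(de_base_core) @ de_target_core

def predict_target(de_base_new, T):
    """Map the base cell line's DE on new chemicals to the target cell line."""
    return de_base_new @ T

def fit_predict_in_svd_space(de_all, de_base_core, de_target_core, de_base_new, rank=614):
    """Memory-saving variant: learn and apply T in an SVD-projected space."""
    _, _, vt = np.linalg.svd(de_all, full_matrices=False)   # de_all: all training rows x genes
    V = vt[:rank].T                                          # (n_genes, rank) projection basis
    T = fit_transformer(de_base_core @ V, de_target_core @ V)
    return (de_base_new @ V) @ T @ V.T                       # back to the 18,211-gene space
</code></pre>
<p>With n = 18,211 genes, the full transformer T alone is an 18,211 x 18,211 matrix (roughly 2.7 GB in float64), which helps explain the memory pressure mentioned above and why the SVD-projected variant is the practical one.</p>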
Open Problems – Single-Cell Perturbations
Kaggle Solution Write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>import pandas as pd<br> import numpy as np</p> <p>df1 = pd.read_csv("/kaggle/input/open-problems-single-cell-perturbations/sample_submission.csv")</p> <p>df1.to_csv("submission.csv", index=False)</p>
Open Problems – Single-Cell Perturbations
Our Solution write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://github.com/kavaryan/sc-pertb/blob/master/report.pdf" target="_blank">https://github.com/kavaryan/sc-pertb/blob/master/report.pdf</a></p>
Open Problems – Single-Cell Perturbations
Predicting Gene Expression Changes
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Write-Ups Guide</h1> <p><strong>Small Molecule Impact on Gene Expression</strong></p> <p><strong>1. Problem Statement:</strong></p> <ul> <li>The goal of this competition is to predict how small molecules influence gene expression in various cell types. Given a dataset of small molecules and their effects on gene expression in different cell lines, the objective is to develop a model that accurately predicts the impact of a given small molecule on gene expression in unseen data.</li> <li>Our objective is to predict the differential expression values for Myeloid and B cells based on a majority of compounds. The training data consists of measurements from 144 compounds in T cells (CD4+, CD8+, regulatory) and NK cells. However, only 10% of compounds have measurements in Myeloid and B cells. This scenario simulates a scientific context where predictions are needed for new cell types, but only limited measurements are available.</li> </ul> <p><strong>Exploratory Data Analysis (EDA):</strong></p> <ul> <li>Investigated data distribution, missing values, and statistical properties.</li> <li>Explored relationships between molecular descriptors and gene expression.</li> <li>Visualized the distribution of gene expression levels across different cell types.</li> </ul> <p><strong>Model Architecture:</strong></p> <ul> <li>Selected the model that showed promising results in initial experiments.</li> <li>Experimented with different architectures, considering the complex relationships between small molecules and gene expression.</li> </ul> <h3>Training:</h3> <p><strong>Training Strategy:</strong></p> <ul> <li>Trained the model on the available T cell data (CD4+, CD8+, regulatory) and NK cell data, which comprises the majority of compounds.</li> </ul> <h3>Transfer Learning:</h3> <ul> <li>Utilized transfer learning techniques to adapt the model to Myeloid and B cell predictions using the limited available data for these cell types.</li> </ul> <p><strong>Future Improvements:</strong></p> <ul> <li>Considered potential enhancements, such as fine-tuning the model architecture or incorporating external data.</li> </ul> <h1>Write-Ups Implementation</h1> <h1>Title: Predicting Gene Expression Changes in Different Cell Types due to Small Molecules</h1> <h2>Introduction:</h2> <ul> <li>Describe the problem and the dataset. Importance of understanding how small molecules impact gene expression in various cell types.</li> </ul> <h2>Dataset Overview:</h2> <ul> <li>Dataset key features:<ul> <li>cell_type: The annotated cell type of each cell based on RNA expression.</li> <li>sm_name: The primary name for the parent compound in a standardized representation.</li> <li>sm_lincs_id: The global LINCS ID for the parent compound.</li> <li>SMILES: Simplified molecular-input line-entry system.</li></ul></li> </ul> <h1>Exploratory Data Analysis (EDA):</h1> <ul> <li>Analysis of the dataset, including visualizations and insights. Example:</li> </ul> <pre><code># Import necessary libraries
import matplotlib.pyplot as plt
import seaborn as sns

# 'data' refers to the training dataframe (differential expression plus metadata columns)
plt.figure(figsize=(10, 6))
sns.histplot(data['target_variable'], bins=50, kde=True)
plt.title('Distribution of Target Variable')
plt.xlabel('Differential Expression Values')
plt.ylabel('Frequency')
plt.show()

# Summary statistics
print(data.describe())

# Distribution of cell types
plt.figure(figsize=(12, 6))
sns.countplot(x='cell_type', data=data)
plt.title('Distribution of Cell Types')
plt.show()

# Relationships between variables
plt.figure(figsize=(12, 8))
sns.scatterplot(x='sm_name', y='gene_A1BG', hue='cell_type', data=data)
plt.title('Gene Expression vs Small Molecule for A1BG')
plt.show()
</code></pre> <h4>EDA - Feature Analysis:</h4> <ul> <li><p>Distribution of gene expression features in T cells, NK cells, and the limited set of Myeloid and B cells.</p></li> </ul> <pre><code>t_cell_genes = data[data['cell_type'].isin(['CD4+', 'CD8+', 'regulatory'])]['gene_expression']
nk_cell_genes = data[data['cell_type'] == 'NK']['gene_expression']
myeloid_b_cell_genes = data[data['cell_type'].isin(['Myeloid', 'B'])]['gene_expression']

plt.figure(figsize=(14, 8))
sns.kdeplot(t_cell_genes, label='T Cells (CD4+, CD8+, Regulatory)')
sns.kdeplot(nk_cell_genes, label='NK Cells')
sns.kdeplot(myeloid_b_cell_genes, label='Myeloid and B Cells (Subset)')
plt.title('Distribution of Gene Expression Features Across Cell Types')
plt.xlabel('Gene Expression Values')
plt.ylabel('Density')
plt.legend()
plt.show()
</code></pre> <pre><code>from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Feature engineering on SMILES data (example: convert to molecular fingerprints)
# …
# (the categorical and SMILES columns would need a numeric encoding before fitting)

# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(data[['sm_name', 'cell_type', 'SMILES']], data['gene_A1BG'], test_size=0.2, random_state=42)

# Random Forest Regressor
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Predictions on test set
predictions = model.predict(X_test)

# Model Evaluation
mse = mean_squared_error(y_test, predictions)
print(f'Mean Squared Error: {mse}')
</code></pre>
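<p>The fingerprint step that the template above leaves as "# …" is commonly done with RDKit Morgan fingerprints. Below is a hypothetical helper (function name and defaults are mine, not part of the original write-up):</p>
<pre><code>import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def morgan_fingerprint(smiles, radius=2, n_bits=2048):
    """Turn a SMILES string into a fixed-length Morgan (ECFP-like) bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                       # unparseable SMILES -> all-zero vector
        return np.zeros(n_bits, dtype=np.uint8)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(fp, dtype=np.uint8)

# e.g. one fingerprint row per training sample, to be concatenated with
# one-hot encoded cell_type before fitting the regressor:
# X_fp = np.stack(data['SMILES'].map(morgan_fingerprint).to_numpy())
</code></pre>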
Open Problems – Single-Cell Perturbations
Public 7th and Private 15th solution (Nothing but just multiplied a factor of 1.2)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, many thanks go to those who created the 0.574 and 0.577 public notebooks.</p> <p>When I started this competition 2 months ago, I found the public LB to be very noisy and very different from CV. Like most people, I found that ensembling in results with a worse LB (such as a 0.702 LB submission) would boost the score. So I thought this would be a shake-up competition. Hence, I didn't spend much time on building a more diverse or robust model (I didn't think I could), and instead focused on LB probing and tricks. For instance, multiplying any of my results by a constant factor (the best was 1.2 on the public LB) boosted the score. This made me more confident that there would be a big shake-up, but I also believed some of the gold medal teams would be quite stable and stay where they were.</p> <p>I used the public 0.574 and 0.577 results + some of my own models (public LB 0.578).</p> <p>For my own models:<br> • Conv1D NN<br> • LSTM<br> • MLP<br> • LGBM</p> <p>Features:<br> • Standard-scaled the train label columns for the NN models<br> • One-hot encoded cell_type and sm_name<br> • Split SMILES into characters and used TF-IDF to get embeddings.</p> <p>Final results:<br> Step 1<br> • sub_pub[:128] = 0.55 * public_0.574[:128] + 0.45 * public_0.577[:128]<br> • sub_pub[128:] = 0.6 * public_0.574[128:] + 0.4 * public_0.577[128:]<br> • then post-process it following what's done in <a href="https://www.kaggle.com/code/jeffreylihkust/op2-eda-lb" target="_blank">https://www.kaggle.com/code/jeffreylihkust/op2-eda-lb</a> </p> <p>Step 2<br> • final_sub = 1.2 * (0.95 * sub_pub + 0.05 * my_0578)</p> <p>About the factor:<br> I tried factors of 0.95, 1.05, 1.1, 1.15, 1.2, 1.25, 1.3, 1.4, and 1.5, but 1.2 gave the best public LB.</p> <p>A bit of a pity: I have a few results in the gold area, but their public LBs are 0.02 worse.</p>
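<p>For concreteness, the two blending steps can be sketched in pandas as below. The file names and the row split are illustrative assumptions (the first 128 rows and the remaining rows correspond to the two held-out cell types in the submission ordering), and the extra post-processing from the referenced notebook is omitted.</p>
<pre><code>import pandas as pd

# hypothetical file names for the two public submissions and the author's own model
pub_574 = pd.read_csv("submission_public_0574.csv", index_col="id")
pub_577 = pd.read_csv("submission_public_0577.csv", index_col="id")
my_0578 = pd.read_csv("submission_my_0578.csv", index_col="id")

# Step 1: blend the two public submissions with different weights per row block
sub_pub = pub_574.copy()
sub_pub.iloc[:128] = 0.55 * pub_574.iloc[:128] + 0.45 * pub_577.iloc[:128]
sub_pub.iloc[128:] = 0.60 * pub_574.iloc[128:] + 0.40 * pub_577.iloc[128:]
# (the write-up additionally post-processes sub_pub following a public EDA notebook)

# Step 2: mix in the author's 0.578 blend and scale everything by the probed factor 1.2
final_sub = 1.2 * (0.95 * sub_pub + 0.05 * my_0578)
final_sub.to_csv("submission.csv")
</code></pre>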
Open Problems – Single-Cell Perturbations
SCP 21st Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>But if ur using MAE to rows to upsample, then yes you would do the reduction across the columns (for each drug-cell pair, average the elementwise MAEs across all genes observed for that pair)</p> <p><strong>Feature Engineering</strong></p> <p>Broadly two different sets of features were used for different models used in the final ensemble:</p> <p>Feature Set 1</p> <ol> <li>Cell type - one-hot encoded</li> <li>SMILES - converted to 2048 bit vectors using RDKit Morgan fingerprints</li> <li>Drug properties: for each SMILE, log P and log of Molar Refractivity, standard scaled</li> <li>Control - whether or not drug is a control (1) or not (0)</li> </ol> <p>Feature Set 2</p> <ol> <li>Same features as in Feature Set 1</li> <li>Average SVD embedding values for drug effects on all cell types AND average SVD embedding values for cell response to various drugs</li> <li>Exclusive of 2), or usage of average SVD embedding values for log fold-change for drug effects on all cell types AND average SVD emedding values for log fold-change for cell response to various drugs (either use 2 or 3)</li> </ol> <p>Note that the number of SVD singular values to keep was chosen according to the Gavish-Donohoe (GD) SVD hard threshold method: <a href="https://arxiv.org/abs/1305.5870" target="_blank">https://arxiv.org/abs/1305.5870</a></p> <p>Target</p> <ol> <li>GD SVD criterion used for singular value cutoff for low dimensional modes to keep</li> <li>SVD applied to target matrix and the singular value cutoff determined according to the Gavish-Donohoe threshold</li> <li>All models were trained against the SVD embedding of the original target, and model predictions were transformed back using the transpose of the V matrix</li> </ol> <p><strong>Model Architectures and Data Upsampling</strong></p> <p><strong>Model 1:</strong> </p> <p>Simple direct regression on SVD embedding targets</p> <ol> <li>8 layers Dense feed-forward neural netwok (5128 neurons per layer), output layer 114 neurons </li> <li>SELU activation each layer except for output layer (no activation, linear regression output)</li> <li>Output is in the SVD embedding space (114 columns)</li> <li>Loss: MAE or Pseudo-huber</li> <li>Epochs: 800</li> <li>Batch size: 16</li> <li>Cosine training schedule with warm restart every 200 epochs (alpha = 0.01, t_mul = 1.0, m_mul = 0.9)</li> <li>Stochastic weight averaging (SWA): SWA start from epoch 2</li> <li>Predictions for 18211 genes:<ol> <li>Let output be the predicted SVD embedding</li> <li>Take predicted SVD embedding and multiply by transpose of V matrix from SVD to get back to original 18211 representation</li> <li>Number of singular values to keep chosen according to Gavish-Donohoe threshold (see above)</li></ol></li> </ol> <p><strong>Model 2:</strong></p> <p>Same architecture as Model 1 however sample weights introduced to loss function.</p> <p>Sample weight scheme:</p> <ol> <li>From training set filter out drug-cell pairs where B cells / myeloid cells were exposed to same compounds</li> <li>From 1), exposure to same set of compounds but observed difference in target ( - log10(p_val) * sign(LFC)) should be attributable 
to cellular difference</li> <li>For each cell type not B cells / myeloid cells calculate a notion of "distance" from the filtered and observed targets for exposure under same drugs using a distance metric of choice, e.g. Frobenius norm of difference of target matrices</li> <li>For each cell type not B cells / myeloid cells average out this "distance" metric calculated in 3) and then subtract from 1 i.e. distance to B cell or myeloid cell would be 0 s.t. one minus this amount would give each cell type of prediction interest a score of 1, whilst cell types further away gets a lower score</li> <li>Divide each cell type by the minimum score of the 6 cell types as calculated in step 4), and use this number as a weight for each row in training based on cell type used in the experiment</li> <li>Model is trained on this weighted loss inclusive of each row's weight</li> </ol> <p><strong>Model 3:</strong></p> <p>Skip connections architecture:</p> <ol> <li>8 or 9 Dense layers</li> <li>Skip connections:<ol> <li>Input dimension: 2056</li> <li>Concatenate layer: Input concatenated with layer 2 pre-activation output (3072 neurons) leading to 5128 output dimension <br> (3072+2056) before feeding into SELU activation layer</li> <li>Additive skip connections: SELU output of concatenate layer (5128) + pre-activation output of layer 4 (5128), SELU<br> output of layer 4 + pre-activation output of layer 6 (5128), SELU output of layer 6 + pre-activation output of layer 8<br> (5128 / used where network has 9 hidden layers)</li></ol></li> <li>Other details similar to Model 1</li> </ol> <p><strong>Model 4:</strong></p> <p>Model 1 architecture but using training error to identify hard to predict drug-cell pairs for upsampling. Upsampling was done by identifying index of training samples (rows) which were at or below at certain training error threshold and then amplified by making a new copies (integer multiples) of these rows to be concatenated to original training set. </p> <p>The thinking here was that since the problem for predicting interactions for B / myeloid cells is potentially underspecified and to be extrapolated from observed interactions of other cells, the drug-cell pairs that have high row-wise accuracy or low MAE (or other regression metric) are not as important and performance on these rows can be sacrificed for better performance on the rows in training which have low row-wise accuracy or low MAE (or other regression metric). The amplified set was also manually checked for inclusion of the small number of B / myeloid cell observations in training.</p> <p>Broadly three types of this upsampling procedure were used with various models</p> <p><strong>Upsampling procedure 1:</strong> Regression based row-wise metric (MAE) for determining cut-off threshold</p> <ol> <li>Simpler smaller neural network trained for 200 epochs on original training set</li> <li>Row-wise MAE computed for each sample</li> <li>Take median of 614 row-wise MAE metrics</li> <li>Take a positive multiple of this median (e.g. 3x or 15x) to select the base set of training rows to be upsampled</li> <li>Make K times more (e.g. 
7x) copies of the training subset in 4) and concatenate to original training set</li> <li>Re-train larger model (could be any model architecture) on this upsampled training set</li> </ol> <p><strong>Upsampling procedure 2:</strong> Sign classification using logistic loss for determining cut-off threshold</p> <p>The thinking behind this approach is that sign may be important to get right as an individual prediction where the magnitude (-log10(p_val)) is correct but where sign is not is very consequential for RWRMSE metric.</p> <ol> <li><p>Same procedure as in prior upsampling procedure, except neural network with regression output is trained against the sign of<br> the log fold-change (i.e. target matrix is composed of +1/-1)</p> <p>Logistic Loss = (1/n) * Sum(i from 1 to n) L(y, t) where<br> L(y, t) = ln(1 + exp(-y * t))</p> <p>With t being in {-1, +1} i.e. the sign of the log fold change</p></li> <li><p>Row-wise accuracy (%) is computed on the training set</p></li> <li><p>Choose a cutoff below which the training rows are to be upsampled. I used arbitrary cutoffs such as 75% or accuracy cutoffs<br> 3 standard deviations below the mean row-wise accuracy</p></li> <li><p>Repeat upsampling procedure as in the previous procedure amplifying this subset an integer number of times and retrain a<br> larger model on this exapnded training set</p></li> </ol> <p><strong>Upsampling procedure 3:</strong> Sign classification but focussed on rows with bad sign classification for small p-values</p> <p>Small p-values (e.g. less than 0.1) leads to large magnitudes when -log10 transformed, so intuition is get sign more correct for these as a bad sign classification flips these magnitudes to other side of real number line. </p> <p>Similar procedure to upsampling procedure, however we calculate accuracy only on subset of genes for each drug-cell pair where p-values are below a chosen threshold. Once these row-wise accuracy figures are computed, the same process as in the prior sign upsampling procedure is used to upsample a subset for retraining.</p> <p><strong>Model 5:</strong></p> <p>Triple regression head model with upsampling procedure and contrastive loss. The idea behind this architecture is to have share layers (e.g. 5 layers) between 3 different regression outputs. A "contrastive" loss (see below) was used to incentivise each regression head to learn a different hypothesis to the other 2 heads. This model architecture was mostly trained with sign upsampling procedure 2 as described in Model 4.</p> <p>Architecture:</p> <ol> <li><p>Shared weight layers: 5 Dense layers</p></li> <li><p>Activation: SELU for shared layers and regression heads, linear activation for regression outputs</p></li> <li><p>3 regression heads: [3072, 2048, 1024] neurons before output layer for training y_train SVD embeddings</p></li> <li><p>Contrastive loss: Sum(head 1 to 3) of regression loss for each head + contrast_weight * average_pairwise_dissimilarity</p> <p>K = n_head choose 2</p> <p>Average_Pairwise_Dissimilarity = 1/K * Sum(i from 1 to K) (Average Row-wise Cosine Similarity + 1.)</p> <p>If two non-zero vectors are exactly opposite, row-wise cosine similarity evaluates to -1. If they are exactly the same, we <br> get +1 and if they are orthogonal we get 0. Adding 1 to the average row-wise cosine similarity ensures the minimization<br> objective goes to 0 (instead of -1).</p> <p>Contrastive loss essentially balances between each regression driving down bias but also learning distinctive hypotheses<br> from the data. 
The amount of contrast between the heads is controlled by the contrast_weight</p></li> <li><p>Each regression heads' output is multiplied by the transpose of the V matrix from SVD to get back predictions for original<br> 18211 genes. </p></li> <li><p>Some submissions used the best head's predictions as determined by training error. Other predictions ensembled the 3 heads'<br> predictions by equal or training loss derived weights (lower loss -&gt; higher weight)</p></li> </ol> <p><strong>Final Submission</strong></p> <p>The two final submissions were LB RWRMSE weighted ensembles of the 16 best and 80 best submissions.</p> <p>For each submission I took the LB RWRMSE error, cubed them and subtracted from 1. to derive a score. These scores were then normalized against each other for the final weighted addition of the submissions.\</p> <p><strong>Code</strong></p> <p><a href="https://github.com/maxleverage/kaggle-scp" target="_blank">https://github.com/maxleverage/kaggle-scp</a></p> <p>No actually, the overall reduction to a scalar is the same (sum of elementwise MAE, divide by mxn)</p>
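<p>To make the "contrastive" term in Model 5 concrete, here is a small NumPy illustration of the arithmetic described above (function names and the default <code>contrast_weight</code> are mine; in training this would be written with differentiable tensor ops in the deep learning framework):</p>
<pre><code>import numpy as np
from itertools import combinations

def mean_row_cosine(a, b, eps=1e-8):
    """Average row-wise cosine similarity between two (batch, dim) prediction matrices."""
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    return float((num / den).mean())

def contrastive_objective(head_preds, y_true, contrast_weight=0.1):
    """Sum of per-head regression losses (MAE here) plus the average pairwise
    dissimilarity term: 1/K * sum over head pairs of (row-wise cosine + 1)."""
    reg = sum(np.abs(p - y_true).mean() for p in head_preds)
    pairs = list(combinations(range(len(head_preds)), 2))
    dissim = np.mean([mean_row_cosine(head_preds[i], head_preds[j]) + 1.0 for i, j in pairs])
    return reg + contrast_weight * dissim
</code></pre>
<p>The regression terms keep every head accurate, while the pairwise term (which only reaches 0 when head outputs point in opposite directions row-wise) pressures the heads toward distinct hypotheses, with <code>contrast_weight</code> controlling the trade-off.</p>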
Open Problems – Single-Cell Perturbations
SMILES😘 Data Science Competition: A Deep Dive
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Introduction:</h1> <p>In the pursuit of advancing single-cell data science and catalyzing drug discovery, the SMILES😘 competition introduces a groundbreaking dataset. Developed for the competition, this dataset features human peripheral blood mononuclear cells (PBMCs) and includes 144 compounds from the Library of Integrated Network-Based Cellular Signatures (LINCS) Connectivity Map. The experiment, replicated in three healthy human donors, provides meticulous measurements of single-cell gene expression profiles following 24 hours of treatment.</p> <h1>Exploratory Data Analysis (EDA):</h1> <h2>Distribution of Donors:</h2> <ul> <li>Visualized the frequency of each donor in the dataset using a count plot.</li> <li>The plot offers an overview of how samples are distributed across different donors.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F75a8c97f33b9542e9d1e14f9ff873d54%2FDistDonors.png?generation=1702342943516621&amp;alt=media" alt="image"></li> </ul> <h2>Distribution of Dose (uM):</h2> <ul> <li>Utilized a histogram to showcase the distribution of doses in microMolarity.</li> <li>This plot provides insight into how doses are spread across the dataset.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F0fa42111ed6b467ad42bbc28cb55a711%2FDist-Dose(uM).png?generation=1702344900095659&amp;alt=media" alt="image"></li> </ul> <h2>Average Dose at Different Timepoints:</h2> <ul> <li>Presented a bar plot illustrating the average dose (uM) at different timepoints.</li> <li>This plot helps visualize how doses vary with time.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F70275fa6f5b4d479338c4d4d9555932d%2FAveDoseAtDiff.png?generation=1702343097669094&amp;alt=media" alt="image"></li> </ul> <h2>Distribution of Dose (uM) Summary:</h2> <ul> <li>Provided a summary of the distribution of doses using a box plot.</li> <li>The plot includes quartiles, median, and potential outliers.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2Ff826df73e40e1c01448aba6a23d33bfc%2FDist-dose.png?generation=1702343026198358&amp;alt=media" alt="image"></li> </ul> <h2>Dose (uM) and Timepoint (hours) Relationship:</h2> <ul> <li>Visualized the relationship between dose (uM) and timepoint (hours) using a pair plot.</li> <li>The diagonal shows kernel density estimates.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F54fb101ac78660f4934bf3b46bbd5ab4%2FDose%20and%20Timep.png?generation=1702345034468959&amp;alt=media" alt="image"></li> </ul> <h2>Distribution of Cell Types:</h2> <ul> <li>Displayed a count plot to visualize the occurrences of each cell type.</li> <li>This plot gives an overview of the distribution of cell types in the dataset.<br> <img 
src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F8e094f4cf2aa946d695e1dc43eaf2a56%2FDistCellType.png?generation=1702343745263825&amp;alt=media" alt="image"></li> </ul> <h2>Correlation Heatmap:</h2> <ul> <li>Presented a heatmap visualizing the correlation between dose (uM) and timepoint (hours).</li> <li>Values closer to 1 indicate a stronger positive correlation.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F33c5b64b658ff3e6749524c57f7993e9%2FCorHeatmap.png?generation=1702343800299076&amp;alt=media" alt="image"></li> </ul> <h2>Control Distribution:</h2> <ul> <li>Represented the distribution of 'control' values (True or False) using a pie chart.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F601d850c48e2ae4a88c4c3107f6e51f9%2FDist-pie.png?generation=1702343949783969&amp;alt=media" alt="image"></li> </ul> <h2>Dose (uM) vs. Timepoint (hours):</h2> <ul> <li>Illustrated the relationship between dose (uM) and timepoint (hours) with a scatter plot.</li> <li>Different colors represent different control values.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F3e5768bd41279903c51770bb626b5998%2FDose%20vs%20Timepoint.png?generation=1702344027609287&amp;alt=media" alt="image"></li> </ul> <h2>Dose (uM) by Control:</h2> <ul> <li>Used a violin plot to show the distribution of doses for each control category.</li> <li>Allows for a comparison of dose distributions.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F6ace487693a5a1c21cfd07bca506abd6%2FDosebycontrol.png?generation=1702344109580758&amp;alt=media" alt="image"></li> </ul> <h1>Feature Extraction:</h1> <h2>Morgan Fingerprints:</h2> <ul> <li>Introduced Morgan fingerprints as a numerical representation suitable for T-SNE and PCA.</li> <li>Provided insights into the structure and application of Morgan fingerprints in chemoinformatics and drug discovery.</li> </ul> <h2>T-SNE and PCA Visualization:</h2> <p>Applied T-SNE and PCA to the features for dimensionality reduction and visualization.</p> <h2>PCA Visualization:</h2> <ul> <li>Utilized PCA for dimensionality reduction and visualization.</li> <li>The scatter plot displays the reduced features in two dimensions.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2Fb8c9af44df865b3fb9e7da392cc29e8e%2FPCA-vis.png?generation=1702344215751186&amp;alt=media" alt="image"></li> </ul> <h2>Molecule Visualizations:</h2> <ul> <li>Generated images for a subset of molecules using RDKit.</li> </ul> <h2>Molecule Visualizations:</h2> <ul> <li>Added a new column 'Molecule' with RDKit Mol objects.</li> <li>Visualized a subset of molecules with RDKit.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2Fe6bb7c2f447b811fe06151aad047f3ac%2FRIDkit.png?generation=1702344347921123&amp;alt=media" alt="image"></li> </ul> <h1>Prediction Results:</h1> <h2>Prediction Results for Various Models:</h2> <ul> <li>Presented results for linear regression, logistic regression, decision tree, random forest, SVM, KNN, K-Means, naive Bayes, and neural network models.</li> <li>Included metrics such as mean absolute error, mean squared error, and R-squared (R2).<br> <img 
src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F58a4cc1d79d614594e8e98f1bb411ebd%2FResults.png?generation=1702344308542403&amp;alt=media" alt="image"></li> </ul> <h1>Conclusion:</h1> <p>In conclusion, this comprehensive analysis of the SMILES😘 competition dataset provides valuable insights into the distribution of donors, doses, timepoints, and cell types. The exploration of Morgan fingerprints, T-SNE, PCA, and molecule visualizations adds depth to the understanding of the dataset. The prediction results offer a benchmark for various models, highlighting their performance in the context of the competition's objectives. This write-up serves as a resource for researchers and data scientists engaged in single-cell data analysis and drug discovery.</p> <p>SMILES😘: <a href="https://www.kaggle.com/code/jocelyndumlao/smiles/notebook" target="_blank">https://www.kaggle.com/code/jocelyndumlao/smiles/notebook</a></p>
Open Problems – Single-Cell Perturbations
The 440th solution of the 'Open Problems – Single-Cell Perturbations' competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Business context:<br> <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a></p> <p>Data context: <br> <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a></p> <p>Code link:<br> <a href="https://github.com/900Step/Open-challenge" target="_blank">https://github.com/900Step/Open-challenge</a></p> <h2>Overview</h2> <p>This solution considers the impact of drugs on the gene expression of unseen cell types from three aspects: dimensionality reduction, feature construction, and regression algorithms. Due to the high dimensionality of the gene space, PCA is used to reduce the dimensions of the response variables. Natural language processing applied to the SMILES chemical structures is employed to encode the drugs. For the regression algorithm, we attempt a simple regression model (SVR) for prediction, and a tensor model is used to conduct a stratified analysis on the plate data. Finally, we experiment with weighted linear combinations of some public submission files.</p> <h2>Data Processing</h2> <p>Since the project provides both public and private test sets, we do not perform any additional train-test splitting during the modeling process.</p> <h3>Response variable</h3> <p>Our objective is to predict high-dimensional gene expression data using the available cell type and drug information. Initially, this requires altering the data dimensions and structure. Given over 18,000 genes as response variables and the limited sample size, it becomes challenging to draw reliable conclusions. Furthermore, this scenario often leads to model overfitting and weak interpretability. Hence, analogous to the independent variables, it is imperative to apply dimensionality reduction to the response variables. We employ Principal Component Analysis (PCA), preserving 10-20 dimensions to maintain a cumulative explained-variance ratio of 95%, thus constructing new response variables.</p> <h3>Feature construction</h3> <p>The column "SMILES" (Simplified Molecular Input Line Entry System [2]) encodes the chemical structure of each drug; for instance, Clotrimazole is encoded as "Clc1ccccc1C(c1ccccc1)(c1ccccc1)n1ccnc1". We utilize natural language processing techniques to encode these chemical expressions as strings. For example, "c1ccccc1" represents a benzene ring, and "Cl" signifies a chlorine atom. Certain punctuation marks also bear specific meanings, such as "=" for a double bond and "#" for a triple bond. The S and R configurations of chiral centers are denoted by "@" and "@@", respectively. Through encoding the chemical structures, we extract structural features pertinent to the drugs.</p> <p>After these transformations, we convert the boolean values of "control" to {0,1}, remove the original three columns of drug information from the dataset, and thus obtain a preliminarily processed dataset.</p> <h2>Modeling</h2> <h3>SVR</h3> <p>We use the SVM-based regression algorithm (SVR) from scikit-learn for prediction. Given the low dimensionality of the independent variables and the complex relationships among the response variables, a significant prediction error can be anticipated. This method does not incorporate any processing specific to the biological background of the problem.</p> <h3>Tensor</h3> <p>We adapted Algorithm 3 from the article "Tensor Completion with Noisy Side Information" for tensor-based analysis. This approach involved transforming the original dataset into a 3-dimensional tensor object with the dimensions (146, 18211, 6), corresponding to (drug, gene, cell type). To address missing values in the gene expression data, a "warm start" matrix is employed. This matrix is initially populated with zeros or randomized values for the absent data points. This technique provides a preliminary structure, aiding in more effectively estimating and filling in the missing gene expression values.</p> <h3>CPA method (not implemented, but we think it is a good method)</h3> <p>I explored the CPA (Compositional Perturbation Autoencoder) method, which aims to uncover latent variables that influence response prediction. This algorithm converts the gene expression data, along with all other features, into the 'adata' (AnnData) data type. By using a richer data structure, it enables fuller utilization of the information. Our goal is to gather more gene-related information, such as gene length, expression characteristics, and whether certain genes exhibit synergistic or antagonistic effects with specific cells or drugs. Enriching the feature information of the existing data could significantly reduce the prediction error for differential expression.</p> <p>On Kaggle, there are some public datasets related to this competition. Building upon the corresponding files, I attempted to apply a weighted linear combination to them. The results of this approach were superior to those of each individual file used in the process.<br> /kaggle/input/open-problems-2-submits-collection<br> /kaggle/input/op2-603<br> /kaggle/input/op2-604<br> /kaggle/input/op2-607<br> /kaggle/input/op2-720</p> <p>References:<br> Daniel Burkhardt, Andrew Benz, Richard Lieberman, Scott Gigante, Ashley Chow, Ryan Holbrook, Robrecht Cannoodt, Malte Luecken. (2023). Open Problems – Single-Cell Perturbations. Kaggle. <a href="https://kaggle.com/competitions/open-problems-single-cell-perturbations" target="_blank">https://kaggle.com/competitions/open-problems-single-cell-perturbations</a><br> Deutsch, Francine M., Dorothy LeBaron, and Maury March Fryer. "What is in a smile?." Psychology of Women Quarterly 11.3 (1987): 341-352.<br> Bertsimas, Dimitris, and Colin Pawlowski. "Tensor completion with noisy side information." Machine Learning 112.10 (2023): 3945-3976.<br> Lotfollahi, Mohammad, et al. "Predicting cellular responses to complex perturbations in high-throughput screens." Molecular Systems Biology (2023): e11517.</p>
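<p>For readers new to the dimensionality-reduction idea above, here is a minimal sketch of the PCA-on-targets plus SVR pipeline (the function name, parameter values, and shapes are my own illustrative assumptions, not the authors' code):</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def fit_predict_svr_in_pca_space(X_train, Y_train, X_test, n_components=15):
    """Compress the ~18,211-dimensional targets with PCA, fit one SVR per component,
    then map the predictions back to gene space."""
    pca = PCA(n_components=n_components)
    Z_train = pca.fit_transform(Y_train)              # (n_samples, n_components)
    model = MultiOutputRegressor(SVR(C=1.0, epsilon=0.1))
    model.fit(X_train, Z_train)                       # X: encoded cell type + drug features
    Z_pred = model.predict(X_test)
    return pca.inverse_transform(Z_pred)              # back to the original gene columns
</code></pre>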
Open Problems – Single-Cell Perturbations
XGBoost in Compressed Space | 425th Place Solution Writeup for Open Problems – Single-Cell Perturbations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Problems – Single-Cell Perturbations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We appreciate the organizers and Kaggle for hosting this interesting competition. We are also grateful to the participants who shared notebooks and ideas. Below is a detailed write-up of our solution, which did not achieve a high score, but we believe it gives a distinctive perspective on the problem. </p> <h1>Context</h1> <ul> <li><p>Competition context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/overview</a></p></li> <li><p>Data context: <a href="https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data" target="_blank">https://www.kaggle.com/competitions/open-problems-single-cell-perturbations/data</a></p></li> <li><p>Our notebook with details: <a href="https://www.kaggle.com/popovanastya/scp-xgboost-in-compressed-space-for-single-cell-p" target="_blank">https://www.kaggle.com/popovanastya/scp-xgboost-in-compressed-space-for-single-cell-p</a></p></li> </ul> <h1>Introduction</h1> <p>In this project, we built an XGBoost model to tackle a multi-output regression challenge<em>. Noteworthy for its computational efficiency and robustness to dataset noise, our approach operates effectively within a compressed space.</em> Our model allows estimation of compound impact on gene expression in a target cell type by leveraging averages from other cells in the compressed space. This approximation can serve as a valuable baseline for training advanced models without prior biological knowledge.</p> <h2>Goal</h2> <p>In this project, we aimed to build a simple and accurate ML model to predict differential gene expression for different cell types affected by various chemical substances (referred to further as "drugs"). </p> <h2>Feature Selection</h2> <p>We did not integrate any prior biological knowledge for feature augmentation, i.e. feature engineering was done using only cell type, chemical compound name, and 18211 gene differential expression (DE) from <code>de_train.parquet</code>. </p> <h2>Performance</h2> <p><strong>Without ensembling</strong> with other models, our method gives <code>0.594</code> for public and <code>0.777</code> for private scores (time of computing is around 3 min). Bagging of XGB will give an additional but insignificant improvement in the scores. </p> <h1>Brief Exploratory Data Analysis</h1> <p>At first glance, the idea of predicting 18,211 genes using only 2 features and 614 observations seems unsolvable (the combinations of cell and drug do not repeat, we have 6 cells and 146 drugs, and 4 drugs for T cells CD8+ are missed), however, the distributions of most genes are Gaussian with about zero means. These distributions also show numerous outliers that resemble noise, making it challenging to determine their biological relevance. 
Since the metric is very sensitive to them by definition, we lost hope of figuring this out (for example, for 3 points from zero-mean Gaussian distribution and 1 outlier the error <em>RMSE = (0.99-0.98)^2 + (0.01-0.03)^2 + (170 - 168)^2 = 4.0005</em>, which is mostly caused by the outlier even in the case that we predicted it almost correctly). Thus this metric would push any ML model to reward noise prediction instead of signal. Even if we clean the dataset, our prediction will be evaluated using the noisy test set. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2Fae23cf95314e73cd203cdbd3eee0d9fd%2Ftypical.png?generation=1702332388281071&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2F161b35727ba1a49208002e737cc038cb%2Fnon-typical2.png?generation=1702332414333546&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2F525c2f2711afaf85984e9beabf1d46ca%2Fnon-typical3.png?generation=1702332076289987&amp;alt=media" alt=""></p> <p>Based on this logic we decided to work in compressed space (we applied TruncatedSVD). The surprisingly significant (in terms of the public score) result gave the simple averaging cell types for each drug after decompressing (median), and we continued working within compressed space. </p> <p>The most useful visualization for us was considering the data as a "signal" along rows or columns. It was also a way to assess what exactly was going on with a model's prediction for the test data.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2F316867dcb963e504f03567120a747411%2Frow_num.png?generation=1702332457069523&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2F35a138ef2baa99eb1c93c53d454bc6fb%2Fgene_num.png?generation=1702332475260578&amp;alt=media" alt=""></p> <h1>Modelling</h1> <p>Originally, we believed that some form of averaging over clusters must work in compressed space, and most of the time of the competition was spent on this approach. Sadly, the biggest score was obtained by a simple approximation: <strong>again average (mean) of all cells for each drug, but now in the compressed space</strong>. After decompression, it gives a suspiciously good prediction (<code>0.615</code> was the high score for public notebooks with much more complex models), and the figures of DE look like this averaging captures the main features of the dataset. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2F73e2fa374df7c0af73dbf8a86e0bbf56%2Fav_pred.png?generation=1702332573559453&amp;alt=media" alt=""></p> <h2>Data Preprocessing</h2> <p>The final model just exploits these observations about average in compressed space. What the average could obviously lose is characteristics specific to a cell type. So we decided to do a feature augmentation in the following way: For each observation, we take the average over the rest of the cells (excluding the cell type for the observation) for a specific drug and use it as features (if we have 36 dimensions after compressing, then we obtain 36 new features). The target was the compressed DE signal for this cell and the drug for the XGBRegressor (also 36 data points). 
</p> <p>It worked well, but still, the model could not capture the difference in cell types treated with different drugs (signal along columns). Therefore, we added more features using TargetEncoder, which gives the numerical representation of the categorical features. Finally, we added one-hot encoding (we put 1 for cell types across which we did averaging and 0 for the target cell), providing information on what types of cells were averaged. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2Fcaa2ab99b6c21dd9e3de0fb98a6a8ebe%2Fdemo_1.png?generation=1702332625709048&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2Fb6eee63ad3ef4969780a39f54b923592%2Fdemo_2.png?generation=1702332643555645&amp;alt=media" alt=""></p> <h2>Hyperparameters</h2> <p><code>tree_method</code> specifies the tree construction algorithm, we set it to 'hist', which means XGBoost will use a histogram-based algorithm for tree construction. </p> <p><code>eval_metric</code> defines the evaluation metric used to assess the model's performance during training, we set it to 'rmse' (Root Mean Squared Error). </p> <p><code>max_depth</code> controls the maximum depth of each tree in the ensemble, it's set to 2 to prevent overfitting. </p> <p><code>learning_rate</code> is set to 0.2.</p> <p><code>n_estimators</code> determines the number of boosting rounds or trees to be built. In our case, the XGBoost model will create an ensemble of 1000 trees.</p> <h2>Validation</h2> <p>We used 6-fold cross-validation to test our model, but we reduced space before train-test split. Our CV score correlates with public/privite scores. </p> <p>The accuracy of our model does not depend on cell types in the dataset. To show this, we computed the metric for cell types excluding one of 4 cell types from the training set ("NK cells", "T cells CD4+", "T cells CD8+", "T regulatory cells"). We believe that it happens because our prediction is based on the average of cell types in compressed space, therefore training on data with more cell types gives a better prediction. </p> <h2>Scores</h2> <p><strong>Baseline model #1:</strong> gives all zeros. 
</p> <p><strong>Model #2:</strong> TruncatedSVD (35 components) -&gt; Inverse TruncatedSVD -&gt; Aggregate Drugs -&gt; <code>.quantile(0.54)</code></p> <p><strong>Model #3:</strong> TruncatedSVD (36 components) -&gt; Average for each drug -&gt; inverse TruncatedSVD </p> <p><strong>Model #4:</strong> TruncatedSVD (n_components=36, n_iter=7) -&gt; TargetEncoder (smoothing=8) -&gt; XGBoost (1000 estimators, feature augmentation with average for each drug) -&gt; inverse TruncatedSVD </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2F9b2fc4af41a944a01761f23c21218ced%2Fscores.png?generation=1702332495533180&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2Fc7db57c6b82932593aa4a26b829f37b8%2Fxgb1.png?generation=1702332971425025&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13819485%2Feed8526c3cb7504eb6b79e0354cf7ccc%2Fxgb2.png?generation=1702332986435903&amp;alt=media" alt=""></p> <h1>Conclusions</h1> <p>We employed an XGBoost model to address a high-dimensional multi-output regression problem to predict the differential expression of 18,211 genes across 6 cell types affected by 146 chemical substances. Our approach stands out for its computational efficiency and robustness to noise in the dataset, owing to its operation within a compressed space. Simultaneously, it is important to note that the accuracy of our method is constrained by information loss due to compression.</p> <p>As demonstrated, the impact of compounds on gene expression in a target cell type can be estimated by leveraging the average values for the other cells within the compressed space. This approximation can serve as a baseline model and be used for training advanced models in subsequent experiments, and it doesn't require any prior biological knowledge.</p> <h2>Reproducibility</h2> <p>Code will be available and documented on GitHub at <a href="https://github.com/anastasiia-popova/kaggle_OP_SCP_2023" target="_blank">https://github.com/anastasiia-popova/kaggle_OP_SCP_2023</a>.</p> <h1>References</h1> <p>We appreciate the authors of the following notebooks for sharing their work.</p> <p>[1] <a href="https://www.kaggle.com/code/ambrosm/scp-eda-which-makes-sense/notebook" target="_blank">https://www.kaggle.com/code/ambrosm/scp-eda-which-makes-sense/notebook</a></p> <p>[2] <a href="https://www.kaggle.com/code/alexandervc/op2-eda-baseline-s" target="_blank">https://www.kaggle.com/code/alexandervc/op2-eda-baseline-s</a></p> <p>[3] <a href="https://www.kaggle.com/code/alexandervc/op2-models-cv-tuning" target="_blank">https://www.kaggle.com/code/alexandervc/op2-models-cv-tuning</a></p> <p>[4] <a href="https://www.kaggle.com/code/ambrosm/scp-quickstart" target="_blank">https://www.kaggle.com/code/ambrosm/scp-quickstart</a></p>
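<p>To make the pipeline above concrete, here is a minimal, hedged sketch of the compressed-space averaging baseline (Model #3) together with the XGBoost configuration from the Hyperparameters section. The DataFrame name <code>de_train</code> and the column names <code>cell_type</code> / <code>sm_name</code> are assumptions for illustration only, not the exact code of this solution.</p> <pre><code>import pandas as pd
from sklearn.decomposition import TruncatedSVD
from xgboost import XGBRegressor

def compressed_space_average(de_train: pd.DataFrame, n_components: int = 36) -> pd.DataFrame:
    """TruncatedSVD -> average all cell types per drug in compressed space -> inverse SVD."""
    genes = de_train.drop(columns=["cell_type", "sm_name"])   # remaining columns = per-gene DE
    svd = TruncatedSVD(n_components=n_components, n_iter=7, random_state=0)
    z = svd.fit_transform(genes.values)                       # 18,211 genes -> 36 components
    z_mean = pd.DataFrame(z, index=de_train["sm_name"]).groupby(level=0).mean()
    decompressed = svd.inverse_transform(z_mean.values)       # back to gene space
    return pd.DataFrame(decompressed, index=z_mean.index, columns=genes.columns)

# XGBoost settings quoted in the Hyperparameters section above
xgb = XGBRegressor(tree_method="hist", eval_metric="rmse",
                   max_depth=2, learning_rate=0.2, n_estimators=1000)
</code></pre>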
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
#39 Solution soon. Congrats to everyone. Thanks to Hengck23
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>, I am still learning after the competition ( <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189171" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189171</a> ). Now exploring how you used the masked layer in the transformer. Because of you, we learned transformer basics from your discussions/code and public kernels.</p> <p>Thanks <a href="https://www.kaggle.com/tarique7" target="_blank">@tarique7</a>.</p> <p><a href="https://www.kaggle.com/returnofsputnik" target="_blank">@returnofsputnik</a> Thanks and happy to see you here. Hope we will work together soon as a team.</p> <p>But all credit goes to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>; we followed his directions and ideas, along with his awesome public kernels.</p> <p>Thanks to my team ( <a href="https://www.kaggle.com/gopidurgaprasad" target="_blank">@gopidurgaprasad</a>, <a href="https://www.kaggle.com/aayushchou" target="_blank">@aayushchou</a>, <a href="https://www.kaggle.com/jagadish13" target="_blank">@jagadish13</a>), the Kaggle team and mainly <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>: without you, our team wouldn't have been able to reach this goal. We will share our writeup in a few hours. Time to sleep now.</p> <p>The magic for this competition was Hengck23's discussions, shared code and new learnings<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F761268%2F4dd3de1dd275e9e039f4877e44a36240%2FScreenshot%202020-10-07%20at%201.25.09%20AM.png?generation=1602014149917679&amp;alt=media" alt=""><br> coming soon…</p> <p><a href="https://github.com/seshurajup/kaggle-stanford-covid-vaccine" target="_blank">https://github.com/seshurajup/kaggle-stanford-covid-vaccine</a> ( Papers, discussions, Local CV )</p> <p>Will share the code soon.</p> <h2>Background Material</h2> <ul> <li><p><a href="https://www.youtube.com/watch?v=v1UbIUZ8k9o&amp;ab_channel=TheW.C.RayLab" target="_blank">Youtube: Understanding RNA folding energy dot-plots</a></p></li> <li><p><a href="https://www.nature.com/articles/s41467-019-13395-9.pdf" target="_blank">RNA secondary structure prediction using an ensemble of two-dimensional deep neural networks and transfer learning</a></p> <ul> <li>The majority of our human genome transcribes into noncoding RNAs with unknown structures and functions. Obtaining functional clues for noncoding RNAs requires accurate base-pairing or secondary-structure prediction. However, the performance of such predictions by current folding-based algorithms has been stagnated for more than a decade. Here, we propose the use of deep contextual learning for base-pair prediction including those noncanonical and non-nested (pseudoknot) base pairs stabilized by tertiary interactions. Only about 10,000 nonredundant RNAs have been made available through comparative analysis. The resulting method achieves large, statistically significant improvement in predicting all base pairs, noncanonical and non-nested base pairs in particular. 
The proposed method (SPOT-RNA), with a freely available server and standalone software, should be useful for improving RNA structure modeling, sequence alignment, and functional annotations.</li> <li><a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/182303" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/182303</a></li> <li><a href="https://dash.plotly.com/dash-bio/fornacontainer" target="_blank">https://dash.plotly.com/dash-bio/fornacontainer</a></li> <li>In my previous competition on DNA, encoding k-mers gave better results than encoding single nucleotides (a short sketch of such an encoding is given at the end of this post), e.g.:<br> single nucleotide: GACGACG --&gt; 1,2,3,1,2,3,1<br> k-mers: [GAC][GAC][G…] --&gt; 47, 47, 56, …</li></ul></li> <li><p><a href="https://github.com/DasLab/arnie" target="_blank">Arnie Software</a>:</p> <ul> <li>Python API to compute RNA energetics and do structure prediction across multiple secondary structure packages.</li> <li><a href="https://github.com/DasLab/arnie/blob/master/notebooks/start_here.ipynb" target="_blank">Jupyter Notebook</a></li></ul></li> <li><p>RNA secondary structure packages ranked and improved by high-throughput experiments</p> <ul> <li>The computer-aided study and design of RNA molecules is increasingly prevalent across a range of disciplines, yet little is known about the accuracy of commonly used structure prediction packages in real-world tasks. Here, we evaluate the performance of current packages using EternaBench, a dataset comprising 23 in vitro structure mapping and 11 riboswitch activity datasets involving 18,509 synthetic sequences from the crowdsourced RNA design project Eterna. We find that CONTRAfold and RNAsoft, packages with parameters derived through statistical learning, achieve consistently higher accuracy than more widely used packages like the ViennaRNA software, which derive parameters primarily from thermodynamic experiments. Motivated by these results, we develop a multitask-learning-based model, EternaFold, which demonstrates improved performance that generalizes to diverse external datasets, including complete viral genomes probed in vivo and synthetic designs modeling mRNA vaccines.</li></ul></li> <li><p><a href="https://www.biorxiv.org/content/10.1101/2020.08.22.262931v1" target="_blank">"Theoretical basis for stabilizing messenger RNA through secondary structure design" - Hannah K. Wayment-Steele</a></p> <ul> <li>RNA hydrolysis presents problems in manufacturing, long-term storage, world-wide delivery, and in vivo stability of messenger RNA (mRNA)-based vaccines and therapeutics. A largely unexplored strategy to reduce mRNA hydrolysis is to redesign RNAs to form double-stranded regions, which are protected from in-line cleavage and enzymatic degradation, while coding for the same proteins. The amount of stabilization that this strategy can deliver and the most effective algorithmic approach to achieve stabilization remain poorly understood. Motivated by the need for stabilized COVID-19 mRNA vaccines, we present simple calculations for estimating RNA stability against hydrolysis, and a model that links the average unpaired probability of an mRNA, or AUP, to its overall rate of hydrolysis. 
To characterize the stabilization achievable through structure design, we compare optimization of AUP by conventional mRNA design methods to results from the LinearDesign algorithm, a new Monte Carlo tree search algorithm called RiboTree, and crowdsourcing through the OpenVaccine challenge on the Eterna platform. Tests were carried out on mRNAs encoding nanoluciferase, green fluorescent protein, and COVID-19 mRNA vaccine candidates encoding SARS-CoV-2 epitopes, spike receptor binding domain, and full-length spike protein. We find that Eterna and RiboTree significantly lower AUP while maintaining a large diversity of sequence and structure features that correlate with translation, biophysical size, and immunogenicity. Our results suggest that increases in in vitro mRNA half-life by at least two-fold are immediately achievable and that further stability improvements may be enabled with thorough experimental characterization of RNA hydrolysis.</li></ul></li> <li><p><a href="https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-018-2018-4" target="_blank">Capturing alternative secondary structures of RNA by decomposition of base-pairing probabilities" -Taichi Hagio</a></p> <ul> <li>Background: It is known that functional RNAs often switch their functions by forming different secondary structures.<br> Popular tools for RNA secondary structures prediction, however, predict the single ‘best’ structures, and do not produce<br> alternative structures. There are bioinformatics tools to predict suboptimal structures, but it is difficult to detect which<br> alternative secondary structures are essential.<br> Results: We proposed a new computational method to detect essential alternative secondary structures from RNA<br> sequences by decomposing the base-pairing probability matrix. The decomposition is calculated by a newly<br> implemented software tool, RintW, which efficiently computes the base-pairing probability distributions over the<br> Hamming distance from arbitrary reference secondary structures. The proposed approach has been demonstrated on<br> ROSE element RNA thermometer sequence and Lysine RNA ribo-switch, showing that the proposed approach<br> captures conformational changes in secondary structures.<br> Conclusions: We have shown that alternative secondary structures are captured by decomposing base-paring<br> probabilities over Hamming distance. Source code is available from <a href="http://www.ncRNA.org/RintW" target="_blank">http://www.ncRNA.org/RintW</a>.<br> Keywords: RNA secondary structure, Dynamic programming, Base-pairing probability, Partition function</li></ul></li> </ul> <hr> <h2>Architectures</h2> <ul> <li><p><a href="https://arxiv.org/pdf/1901.10526.pdf" target="_blank">Comprehensive evaluation of deep learning architectures for prediction of DNA/RNA sequence binding specificities</a></p> <ul> <li>Motivation: Deep learning architectures have recently demonstrated their power in predicting DNA- and RNA-binding specificities. Existing methods fall into three classes: Some are based on Convolutional Neural Networks (CNNs), others use Recurrent Neural Networks (RNNs), and others rely on hybrid architectures combining CNNs and RNNs. However, based on existing studies it is still unclear which deep learning architecture is achieving the best performance. Thus an in-depth analysis and evaluation of the different methods is needed to fully evaluate their relative. 
Results: In this study, We present a systematic exploration of various deep learning architectures for predicting DNA- and RNA-binding specificities. For this purpose, we present deepRAM, an end-to-end deep learning tool that provides an implementation of novel and previously proposed architectures; its fully automatic model selection procedure allows us to perform a fair and unbiased comparison of deep learning architectures. We find that an architecture that uses k-mer embedding to represent the sequence, a convolutional layer and a recurrent layer, outperforms all other methods in terms of model accuracy. Our work provides guidelines that will assist the practitioner in choosing the best architecture for the task at hand, and provides some insights on the differences between the models learned by convolutional and recurrent networks. In particular, we find that although recurrent networks improve model accuracy, this comes at the expense of a loss in the interpretability of the features learned by the model. Availability and implementation: The source code for deepRAM is available at this <a href="https://github.com/MedChaabane/deepRAM" target="_blank">https://github.com/MedChaabane/deepRAM</a></li></ul></li> <li><p><a href="https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-019-3039-3" target="_blank">A classification model for lncRNA and mRNA based on k-mers and a convolutional neural network</a></p> <ul> <li>We established a classification model for lncRNA and mRNA based on k-mers and the convolutional<br> neural network. The classification accuracy of the model with 1-mers, 2-mers and 3-mers was the highest, with an<br> accuracy of 0.9872 in humans, 0.8797 in mice and 0.9963 in chickens, which is better than those of the random forest,<br> logistic regression, decision tree and support vector machine.</li></ul></li> </ul>
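<p>As a small illustration of the k-mer encoding mentioned in the background list above, here is a hedged sketch that maps overlapping 3-mers of an RNA sequence to integer ids. The vocabulary construction is a generic example and not the exact encoding used in the referenced DNA competition.</p> <pre><code>from itertools import product

def kmer_encode(seq: str, k: int = 3) -> list:
    """Map overlapping k-mers of an RNA sequence to integer ids."""
    alphabet = "ACGU"
    vocab = {"".join(p): idx for idx, p in enumerate(product(alphabet, repeat=k))}
    return [vocab[seq[i:i + k]] for i in range(len(seq) - k + 1)]

# e.g. "GACGACG" -> ids for GAC, ACG, CGA, GAC, ACG
print(kmer_encode("GACGACG"))
</code></pre>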
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
11th Lightweight Single Architecture [Conv-GCN]-> RNN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Introduction</strong><br> We had been using this lightweight architecture while sitting 2nd on the LB one week before the end of the competition. It is about 2, if not 3, times faster than the shared model from <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>.<br> Just to comment, <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> did release the gold range secret at that time. </p> <p><strong>High-Level Idea</strong><br> How do we pass information from one sequence position to another, given that they are dependent? </p> <p><em>Passing information from position i to j</em><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F690886%2F83da03ea10245b6867febd170f6188f4%2FScreenshot%202020-10-08%20at%207.51.09%20AM.png?generation=1602114976893373&amp;alt=media" alt=""></p> <p>Our model achieves this in 3 steps, as shown in the image above.</p> <ol> <li>Gathering local information with 1D Conv.</li> <li>Passing remote information with the Paired/Skip connections across the sequence dimension.</li> <li>Gathering local information with 1D Conv. again</li> </ol> <p>Finally, pushing the information to a longer range with Bi-directional RNNs.</p> <p><strong>Notebook</strong><br> Here is the notebook for this solution; note that it is a simplified version, with some blending and optimizations removed.<br> <a href="https://www.kaggle.com/kingychiu/mrna-base-degradation-keras-cnn-gcn-rnn" target="_blank">https://www.kaggle.com/kingychiu/mrna-base-degradation-keras-cnn-gcn-rnn</a></p> <p><strong>Model</strong><br> Conv. step (1+2+3), where the GraphConv is simply a dot product with a learnable weight + filter, from<br> <a href="https://github.com/danielegrattarola/spektral/blob/a2cd265a9440831afc441c1774dd1b7d080a59f8/spektral/layers/convolutional/graph_conv.py#L96" target="_blank">https://github.com/danielegrattarola/spektral/blob/a2cd265a9440831afc441c1774dd1b7d080a59f8/spektral/layers/convolutional/graph_conv.py#L96</a></p> <pre><code>import tensorflow as tf
from spektral.layers import GraphConv  # GraphConv as in the spektral version linked above

# As_in and bpp_inputs are graph inputs (adjacency / BPP matrices) defined elsewhere in the
# notebook; hidden_dim and graph_channels are model hyperparameters.
def conv_block(x):
    conv = tf.keras.layers.Conv1D(hidden_dim*2, 5, padding='same', activation='tanh')(x)
    gcn = GraphConv(graph_channels)([conv, As_in])
    gcn_2 = GraphConv(graph_channels)([conv, bpp_inputs])
    gcn = tf.keras.layers.Conv1D(graph_channels, 5, padding='same', activation='tanh')(gcn)
    conv = tf.keras.layers.Concatenate()([conv, gcn, gcn_2])
    conv = tf.keras.layers.Activation("relu")(conv)
    conv = tf.keras.layers.SpatialDropout1D(0.1)(conv)
    return conv
</code></pre> <p>Putting it together with RNNs:</p> <pre><code>hidden = conv_block(reshaped)
hidden = gru_layer(hidden_dim, dropout)(hidden)
hidden = conv_block(hidden)
hidden = gru_layer(hidden_dim, dropout)(hidden)
</code></pre> <p><strong>Acknowledgment</strong></p> <ol> <li>Thanks <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> for the solid validation in this notebook <a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation</a></li> </ol>
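<p>The snippet above calls a <code>gru_layer</code> helper that is defined in the linked notebook but not shown here. As a hedged sketch (the exact helper may differ), a bidirectional GRU wrapper in the style of the public OpenVaccine Keras kernels looks like this:</p> <pre><code>import tensorflow as tf

def gru_layer(hidden_dim, dropout):
    # bidirectional GRU that returns the full sequence so later conv blocks can reuse it
    return tf.keras.layers.Bidirectional(
        tf.keras.layers.GRU(hidden_dim,
                            dropout=dropout,
                            return_sequences=True,
                            kernel_initializer="orthogonal"))
</code></pre>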
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We ( <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a>, <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>, <a href="https://www.kaggle.com/rishigami" target="_blank">@rishigami</a>, <a href="https://www.kaggle.com/copasta" target="_blank">@copasta</a> ) would like to thank Kaggle &amp; the host for the interesting competition and all the participants for giving us a lot of ideas. And congrats to the winners!</p> <h1>Solution overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1807118%2F6c9e9e79b259cecb81d3c15f1bcdeedd%2Foverview.png?generation=1602163661261585&amp;alt=media" alt=""></p> <h1>models</h1> <p>As the team name suggests, we developed many types of models.<br> Here’s a list of models.</p> <ul> <li>Pytorch RNN</li> <li>Keras One-Hot RNN</li> <li>Tree models based on NN extracted features</li> <li>MPNN</li> <li>DeepGNN</li> <li>Pytorch RNN + Domain Adversarial NN (DANN)</li> <li>DeepGNN + DANN</li> <li>Neighborhood Attention Architecture model by <a href="https://www.kaggle.com/reppic" target="_blank">@reppic</a></li> <li>AE pretrained Transformer model by <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a></li> <li>AE pretrained GraphTransformer by <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a></li> </ul> <p>We also combined different architectures (e.g. MPNN + BERT-Encoder).<br> For the final submission, we used 67 models.</p> <h1>features</h1> <ul> <li>bpps sum/max/nb</li> <li>additional structure/loop type sampled with ViennaRNA</li> <li>weighted one-hot encoding of additional structure/loop type with probability</li> <li>position Shannon entropy proposed <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/187560" target="_blank">here</a></li> </ul> <h1>training strategy</h1> <ul> <li>GroupKFold with kmeans</li> <li>weighted MCRMSE loss, np.log1p(1/error) for weights</li> <li>validation with SN_filter=1 data</li> </ul> <h1>stacking &amp; blending</h1> <p>We prepared NN x7, LGB x1, XGB x1, CAT x1 stacking predictions and blended them using <code>sp.optimize.minimize</code> for the final predictions. This gives a worse CV than a simple blend of models, but it performed better on the LB: if we blend models directly, some models get zero weight and we lose diversity. So we decided to prepare some stacking models and blend them for diversity.</p> <h1>Domain Adversarial Neural Networks (DANN)</h1> <p>To accommodate the differences between the train and test datasets, we came up with a strategy to bring the distribution of features closer. This would work if train and test follow the same y (<code>reactivity</code>, <code>deg_Mg_pH10</code>, <code>deg_Mg_50C</code>…) generation rules, and if unrelated elements (e.g. the length of sequences) cause changes in the distribution. 
To do this, we use Domain Adversarial Neural Networks [1], which tries to minimize the loss of original task(s) while learning a representation invariant to train/test difference through adversarial training.<br> Most of the technical details are the same as that of the paper: here’s the difference in our implementation</p> <ul> <li>Exponential warm up of lambda parameter</li> <li>Two domain classifiers: train/public domain classifier + train/private domain classifier</li> </ul> <p>We used this in some of the models used for stacking, and also for NN stacking. Our second submission was NN stacking + DANN. However, it turned out that this didn’t have a positive effect on any of CV/public/private.</p> <p>[1]: Ganin, Yaroslav, et al. "Domain-adversarial training of neural networks." The Journal of Machine Learning Research 17.1 (2016): 2096-2030.</p> <h1>Things that did not work</h1> <p>・Data augmentation<br> ・Distillation<br> ・Pseudo labeling<br> ・Fine-tuning multi-task model to single-task model</p>
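<p>For readers unfamiliar with DANN, the core building block is a gradient reversal layer between the shared encoder and the domain classifier, with the reversal strength lambda warmed up during training. Below is a minimal PyTorch sketch of that block and the exponential warm-up schedule from Ganin et al.; it is illustrative only and not the team's exact implementation (which also used two domain classifiers, train/public and train/private).</p> <pre><code>import math
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

def dann_lambda(progress, gamma=10.0):
    # exponential warm-up from 0 to 1 as training progress goes from 0 to 1 (Ganin et al., 2016)
    return 2.0 / (1.0 + math.exp(-gamma * progress)) - 1.0

# usage sketch: domain_logits = domain_classifier(grad_reverse(features, dann_lambda(p)))
</code></pre>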
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, many thanks to the Kaggle team and Stanford team for hosting this competition, and for all the re-scoring efforts. Many thanks to people who shared their kernels and ideas - I learnt a lot from you and was inspired by you. And congrats to all winners!</p> <p>My ideas are mainly on the data side (especially the way to use pseudo labeling), as well as some feature engineering. I'm still a beginner/learner in architecture - my initial versions were based on <a href="https://www.kaggle.com/xhlulu" target="_blank">@xhlulu</a>'s <a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">solid gru baseline</a>. I added distance embedding (to be elaborated later in the 'features' section), as <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> did <a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119430" target="_blank">for NFL</a>. Then after <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> published <a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">the excellent ae+gnn kernel</a>, I merged my frame work with this one, and it's pretty much as simple as adding a GRU layer before the final dense. (Personally I always enjoyed reading kernels by <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>. I started learning NN by reading his kernel in the NFL game, and I also learnt a lot from this one!)</p> <p><strong>Data preprocessing and weights</strong></p> <ol> <li>I didn't use the SN_filter. Instead, I give Nan value to individual positions with large errors (started with error&gt;10 and value/error&lt;1.5 - and diversity can be added by varying this). I edited the loss function so that the Nan targets won't contribute to loss during training. About 20K values are labeled as Nan across all 5 targets, which is much less than the data we'd lose by using the SN_filter.</li> <li>I calculated edit distance between sequences, and did clustering based on it (trying to reverse engineer what the organizer did to the data). I found that many clusters only contain 1 sequence, but some clusters contain as many as 60 sequences. Thus, I decided to give sample weights proportional to <code>1/sqrt(count_in_cluster)</code> </li> <li>My sample weight also favors sequences that are "closer" to the private dataset.</li> <li>I assign column weights [0.3,0.3,0.3,0.05,0.05] to favor the scored columns </li> </ol> <p><strong>Data augmentation</strong></p> <ol> <li>Reverse the sequence both train and test time. This has better effects when you also encode the start and the end of the sequence.</li> <li>Random perturb of targets. Since targets have measurement errors, I randomly perturb the targets proportionally to the error.</li> <li>Some of my versions used <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s <a href="https://www.kaggle.com/its7171/how-to-generate-augmentation-data" target="_blank">arnie augmented data</a>. 
I use this mainly for diversity.</li> </ol> <p><strong>Features</strong></p> <ol> <li>Some standard bpp/pairing based features discussed by many others</li> <li>The "distance" matrix to be used in the distance embedding - it can be calculated as the position difference in the sequence as <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> did in his kernel, or it can also be calculated as the position difference after adjusting for primary pairs: for example, if positions (5,20) are a pair, then 5 has distance 1 to 4,6,20, and distance 2 to 3,7,19,21 - this can be calculated iteratively.</li> <li>The distance to the closest paired position, as well as the distance to the closest unpaired position. This is a very strong feature, which may even deteriorate performance in complex models because (I guess) it prevents the model from learning this feature using Attention layers. However in simple models this helps a lot, and it provides diversity.</li> </ol> <p><strong>Pseudo Labeling (of private test set, as well as a randomly generated data set)</strong><br> Finally my favorite part. I got at least 80 bps from this, and maybe more if time / computing power permits. </p> <ol> <li>I started off trying pseudo labeling the private dataset and positions 68-107 of the train set. It didn't work, which pushed me to do some EDA. Then I realized my predictions (from different single models) have much larger variance for positions 68-107 in train and positions 91-130 in test. I would have been very worried if my predictions were more noisy for positions 68-130, but the std didn't start to increase until position 91. Which made me wonder, are the last 39 positions special/less stable/different from others, so that the hosts also leave them out when scoring? Anyway, I decided only to use the first 91 from private to do pseudo labeling.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3652230%2F3d862334265e6423cfb64ccef6bd6878%2Fprivate_noise.png?generation=1602133417143543&amp;alt=media" alt="private data error as function of position"></li> <li>Similar to what I did to train data, I also Nan the pseudo labels at positions where my predictions have large STDs. I also do random perturb of targets proportional to STD.</li> <li>Then I train train set and the pseudo label set in turns (for example, 5 epoch of train as a "train section", following by 2 epochs of PL as a "PL section"). I observed that the PL set makes convergence faster initially, but continue to point to a worse direction at later stages. So I used a "save and load" method (many thanks to the RPG games I've played - from which I learnt this ugly but useful method). Such that whenever a PL section deteriorate the val score by more than 20 bps, I reload the weight before this PL section, and continue from there to another train section (effectively skipped that PL section). This works, and immediately increase the public score of my single model by 30 bps! (even 10 bps higher than the blend solution used as PL labels).</li> <li>This is very dangerous, because using blending as PL labels leads to leaks among folds, thus my CV is no longer trust worthy (I thought about doing foldwise-PL, but didn't do so due to time constraint). Also this save&amp;load method is overfitting the val. So basically after started using PL, the public LB is the only validation set I've got. 
Then after I used public LB for too many times, I started to be skeptical about it (thus I had a "overfitting?" suffix in my team name if anyone noticed).</li> <li>Being aware of the danger in 4, I didn't do any feature picking or model picking after I started doing PL. Instead, I focus on model correlation a lot, and that's why I mentioned "diversity" several times in this post. Moreover, one of my 2 subs (private score 0.34198) was based on best public score, but there's another one based only on correlation (private 0.34453) - I simply run clustering methods on all my single models, and do simple mean within each cluster, and assign weights to aggregate these clusters based on my knowledge/intuition about each cluster of models. - This "safe sub" got me 300+ place before the rescoring - otherwise I would have been 800+ place then.</li> </ol> <p><strong>Pseudo Labeling (from a randomly generated data set)</strong></p> <ol> <li>I thought another potential danger of doing PL on the private dataset (without doing foldwise PL) is that, all folds may overfit to the PL labels, thus have less diversity in the private set predictions compared with public ones. That's another reason why I had a "overfitting?" suffix to my name. To overcome this, I randomly generated a set of len=130 sequences, and did pseudo label based on them just like I did with the private set.</li> <li>BPP matrix and the pairing structures of the randomly generated set are calculated using <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s excellent kernel <a href="https://www.kaggle.com/its7171/how-to-generate-predicted-loop-type" target="_blank">for arnie library</a></li> <li>It turned out I my private set PL didn't overfit, but these random data PL contributed a lot of diversity in the final blending.</li> <li><strong>Personally, I think using PL random dataset, together with different/amazing architectures contributed by other people, have potential of huge improvements</strong> (if we still need better predictions to fight COVID now). The reason is, with PL random dataset we now have access to unlimited "data" - labeled with blending of forecasts from different architectures (better than any single model). This is then used as PL input to improve all single models. After several turns, all single models will be much better. I got at least 80 bps with only 1 main architecture, so I think improvement may be a couple of percent if we use more diverse architectures contributed by my fellow Kagglers.</li> </ol> <p>Last, I'd say I didn't expect this… My goal (which felt like a dream only 2 weeks ago) was getting a solo gold. I'm still a beginner in DL, and I've got a lot to learn from many of you. So thank you to all Kagglers who made this community beginner/learner friendly.</p> <p>Thank you Chris for the encouragement! The creating random sequences part is my personal favorite as well :-)</p> <p>Thank you! Congrats on your results as well!</p> <p>Thanks! Yeah actually I think we could try the recursive train/PL save&amp;load thing next time. May give better results than simply blending PL with train, considering that PL's are not as accurate as real data, and may point to wrong directions at later stages.</p> <p>Thank you! I really like your solutions as well :-) And congrats to you on getting the solo gold, too!</p> <p>Thanks! I'll try to clean up my code and post a readable version later.</p> <p>Thank you for sharing this method! 
Very elegant (and pythonic)!</p> <p>Pseudo labels are actually just predictions from my models. I use my blend predictions as pseudo labels, and since they're better than any of my single models, they can help to improve my single models.</p> <p>Thanks! I only tried clustering on edit distance, but intuitively I think the two would give very similar results. I only used edit distance because this is what the hosts said that they used - then I can get (what I think is almost) the same clustering based on their hint that "public data were chosen from clusters with 3 or fewer members." (This can be achieved by adjusting the distance_threshold without pre-setting the number of clusters.)</p> <p>You're right - I used edit distance for different sequence lengths as well. So the min distance from train to private is at least 23. </p> <p>I trained on Google Cloud JupyterLab with 30GB of RAM and a P100 GPU. With pseudo labeling, all kinds of TTA, and 5-fold CV, each single model takes up to 6 hours. The final submission is an ensemble of about 30 models.</p>
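<p>One concrete piece of the data handling described above is giving NaN to unreliable target positions and editing the loss so that those positions do not contribute. Here is a minimal TensorFlow sketch of such a masked MCRMSE loss for targets shaped <code>(batch, seq_len, n_targets)</code>; it illustrates the idea and is not the author's exact code.</p> <pre><code>import tensorflow as tf

def masked_mcrmse(y_true, y_pred):
    # positions labelled NaN in y_true are excluded from the loss
    mask = tf.math.is_finite(y_true)
    y_true = tf.where(mask, y_true, tf.zeros_like(y_true))
    mask_f = tf.cast(mask, y_pred.dtype)
    sq_err = tf.square(y_true - y_pred) * mask_f
    # per-column MSE over valid (batch, position) entries, then root, then mean over columns
    col_mse = tf.reduce_sum(sq_err, axis=[0, 1]) / (tf.reduce_sum(mask_f, axis=[0, 1]) + 1e-8)
    return tf.reduce_mean(tf.sqrt(col_mse))
</code></pre>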
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
23rd Place write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! It seems that many people already knew I was using a Transformer when I was on top :)</p> <p>I forgot to make my model public; here it is: <a href="https://www.kaggle.com/wimwim/modellib" target="_blank">https://www.kaggle.com/wimwim/modellib</a></p> <p>Sure! But I need some time to clean my code. Here is my dirty code, if you don't mind: <a href="https://www.kaggle.com/wimwim/covid19-cnn-transformer" target="_blank">https://www.kaggle.com/wimwim/covid19-cnn-transformer</a></p> <p><strong>Model:</strong><br> Transformer + CNN + RNN</p> <p>Transformer block:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1904629%2F87390c6d22d2f21710e0c95331f09874%2FUntitled%20Diagram%20(2).png?generation=1602124003960935&amp;alt=media" alt=""><br> The bpps attention is inspired by <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s graph transformer (<a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119430" target="_blank">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119430</a>).<br> I added a single-layer bidirectional LSTM before and after the stacked transformer blocks, and I also added positional encoding before the first LSTM layer.</p> <p><strong>Ensemble:</strong></p> <ul> <li>30+ models with different hyperparameters (number of layers, convolution kernel size, sample weighting, loss function…) in addition to some public models</li> <li>Stacking (Ridge + backward elimination)</li> </ul> <p><strong>Shakedown:</strong><br> It seems that my model does not generalize to longer sequences very well; the predictions for positions 68-91 look different from those for 0-67, and I guess this is the main reason for my shakedown.</p>
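<p>The "bpps attention" mentioned above injects the base-pair probability matrix into the attention mechanism. One plausible way to do this, sketched below under my own assumptions (it is not necessarily the author's exact formulation), is to add the BPP matrix as a bias to the scaled dot-product logits before the softmax:</p> <pre><code>import tensorflow as tf

def bpp_biased_attention(q, k, v, bpp):
    # q, k, v: (batch, heads, seq, dim); bpp: (batch, seq, seq) base-pair probabilities
    scale = tf.cast(tf.shape(q)[-1], q.dtype) ** -0.5
    logits = tf.matmul(q, k, transpose_b=True) * scale
    logits += tf.expand_dims(bpp, axis=1)     # broadcast the BPP bias over all heads
    weights = tf.nn.softmax(logits, axis=-1)
    return tf.matmul(weights, v)
</code></pre>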
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was preparing this as a 1st place solution, but unfortunately I got <strong>5th</strong> 2nd place again.<br> Who cast the curse???</p> <h1>LB progress</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F7839efcb3651f2aedaa830116f096133%2F2020-10-19%207.52.26.png?generation=1603061606818239&amp;alt=media" alt=""></p> <h1>Pipeline</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2Fb13f002dcf298c871e273638e6930c43%2F2020-10-17%2016.50.40.png?generation=1602921081920687&amp;alt=media" alt=""><br> I made predictions of SN_filter and pseudo labels, but otherwise it's almost the same methodology as my other <a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/162765" target="_blank">2nd place solution</a> from TReNDS.</p> <h1>Stacking</h1> <p>I'm really good at stacking with xgboost. However, stacking using the OOF of a pseudo-label model is a little bit hard due to the drastic OOF. So I added Gaussian noise to the OOF and the test predictions, and then I succeeded in avoiding overfitting. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F0f51b7b8c34d1a85170217132173549a%2F2020-10-19%2015.42.57.png?generation=1603112091717651&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2Ff7d8997d7beac2371a450165963b9980%2F2020-10-19%2015.44.59.png?generation=1603112114562030&amp;alt=media" alt=""></p> <h1>CV vs LB</h1> <p>Also, our CV and LB are super-correlated. We could completely predict the LB score using CV, so we didn't submit much.</p> <h1>LB simulation</h1> <p>Now I think this works:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F42216ecab6efd6b771802b927a2149ae%2F2020-10-19%2021.53.01.png?generation=1603112007528427&amp;alt=media" alt=""><br> <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189196#1040074" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189196#1040074</a></p> <h1>Reference</h1> <p><a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">AE GNN</a><br> <a href="https://www.kaggle.com/its7171/how-to-generate-augmentation-data" target="_blank">Data augmentation</a><br> <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189196" target="_blank">First penguin who had suspicions about the LB</a></p> <p>Yes, I used both w/ filter and w/o filter.<br> Not only that, I used signal_to_noise and mean error as filters.<br> This way I could generate diversity.</p> <p>If I remember correctly, it's about 60 or so.</p>
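<p>The noise trick above is easy to reproduce: before fitting the XGBoost stacker, jitter the out-of-fold and test predictions with Gaussian noise so the second-level model cannot latch onto the overly sharp pseudo-label OOF values. A minimal sketch follows; the noise level <code>0.05</code> is a made-up placeholder to be tuned on CV.</p> <pre><code>import numpy as np

def add_stacking_noise(oof_preds, test_preds, noise_std=0.05, seed=42):
    """Jitter first-level predictions before feeding them to the second-level model."""
    rng = np.random.default_rng(seed)
    oof_noisy = oof_preds + rng.normal(0.0, noise_std, size=oof_preds.shape)
    test_noisy = test_preds + rng.normal(0.0, noise_std, size=test_preds.shape)
    return oof_noisy, test_noisy

# usage sketch with dummy shapes: 5 base models, 1000 training rows, 500 test rows
oof_noisy, test_noisy = add_stacking_noise(np.random.rand(1000, 5), np.random.rand(500, 5))
</code></pre>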
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
33rd Place - RNA Folding Analysis
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had a few varieties of transformed <code>BPPS</code>. You could add another column that indicated the chosen bond type <code>C-G</code>, <code>A-U</code>, <code>G-C</code> that corresponds with our <code>structure</code> and <code>predicted_loop</code> features. This <code>BPPS</code> variety increased GRU model CV LB by 0.01. The basic variety of <code>BPPS</code> presented in discussion above increases GRU CV LB by 0.005</p> <p>Also instead of transforming <code>BPPS</code> into 3 channels, we can encode the bases of <code>A, C, G, U</code> down the middle (the sequence) into 4 extra colors. Then the <code>BPPS</code> "images" become 7 channels. After doing this, you can remove the feature <code>sequence</code> from your model elsewhere.</p> <p>Using only the 7 channel <code>BPPS</code> "image" and no other features, you could get under CV 0.25 (if I remember correctly).</p> <p>I did not compare the benefit of <code>BPPS</code> images versus using the benefit of popular <code>bpps_mean</code>, <code>bpps_max</code>, and <code>bpps_nb</code> features. It would be interesting to see how they compare. Presumably, using <code>BPPS</code> images will allow the model to extract whatever it wants from the <code>BPPS</code> instead of just <code>mean</code>, <code>max</code>, and <code>nb</code>. Because i would build the <code>BPPS</code> convolutions directly into the model so it can learn the weights during training.</p> <p>To get alternate foldings, I used ViennaRNA. (Tutorial <a href="https://www.tbi.univie.ac.at/RNA/ViennaRNA/doc/html/examples_python.html" target="_blank">here</a>, see "Suboptimal Structure Prediction" recommended by <a href="https://www.kaggle.com/linshokaku" target="_blank">@linshokaku</a> in discussion <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/187560#1031776" target="_blank">here</a>). And I used ViennARNIE. (Notebook <a href="https://www.kaggle.com/its7171/how-to-use-arnie-on-kaggle-notebook" target="_blank">here</a> by <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> ).</p> <p>I created another <code>test2.json</code> with the same number of rows as the original but all the structures are the second best. Then I created a third <code>test3.json</code> where all structures are third best. The structures contained in <code>test3.json</code> are different from <code>test2.json</code> and <code>test.json</code>. And the structures in <code>test2.json</code> are different than <code>test.json</code>.</p> <p>For half of my models, I predicted <code>test2.json</code> and <code>test3.json</code> along with <code>test.json</code>. I also made <code>train2.json</code> and <code>train3.json</code>. Occasionally I would add these to train data for variety.</p> <h1>OpenVaccine: COVID-19</h1> <p>I'm excited to win solo silver. Thank you Eterna and Kaggle for hosting this fun and important competition. Five days ago, I knew nothing about RNA. Now I know the basics and I'm curious to learn more about this mysterious molecule.</p> <h1>Unique Competition</h1> <p>This is unique competition. 
When we call <code>model.fit(X, y)</code>, we use features <code>X</code> to predict targets <code>y</code>. Many Kaggle competitions have noisy targets <code>y</code> due to human labeling. This competition had noisy targets <strong>and</strong> noisy features! The provided features <code>X</code> were only estimates and not truth and the targets <code>y</code> were estimates too!</p> <h1>Private Test <code>id_01cd826d9</code></h1> <p>Here's private test RNA <code>id_01cd826d9</code>. The provided structure is display below. (Plots use <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> <a href="https://www.kaggle.com/its7171/how-to-use-arnie-on-kaggle-notebook" target="_blank">notebook</a>)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F0f4190923b04ac5103045f5e9bf30d85%2Fimg0.jpg?generation=1602037311050479&amp;alt=media" alt=""></p> <h1>RNA Folding <code>id_01cd826d9</code> Uncertainty</h1> <p>The above provided structure is only a <strong>guess</strong>. We see in the picture that there are attractions <code>(1)</code> and <code>(2)</code>. The bases <code>A</code> and <code>U</code> each have a probability of 14% to bond as indicated by the <code>bpps</code> matrix. Therefore private test <code>id_01cd826d9</code> may actually have the following structure. Note, I have changed the color scheme below. Blue indicates high probability of bonding while yellow indicates low probability.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fa293c37ca79657fa55bcb2a22152b297%2Fimg2.jpg?generation=1602037322557672&amp;alt=media" alt=""></p> <h1>RNA Folding <code>id_01cd826d9</code> Uncertainty</h1> <p>We still don't know if the structure is the above. We notice that there are attractions <code>(3)</code>, <code>(4)</code>, <code>(5)</code>. Therefore the RNA may fold into the following<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F135cdd09c9f07df86b39f69197a8163c%2Fm3.png?generation=1602037807933315&amp;alt=media" alt=""></p> <h1>Dealing With Uncertainty</h1> <p>Since the structure is uncertain, I did 2 things. </p> <ul> <li>Convert the <code>bpps</code> matrices into color images and convolute them into embeddings for my models</li> <li>Predict targets for the top 3 folding of each test RNA and average according to probability</li> </ul> <p>If I had more time, I would train with all the different folding of train images and set train sample weight according to their probability.</p> <h1>BPPS Matrix</h1> <p>Below is an example BPPS matrix. To input these into a CNN, first skew the matrix. Then each number in the matrix represents one of three types of bond (1) C-G (2) A-U (3) G-U. Change that cell to red, green, blue respectively. Set the intensity of the color equal to the value in the matrix. 
Next, convolve, maintaining the <code>seq_length</code> but using <code>strides</code> to reduce the other dimension.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Ff4371b3166cf5ffa5ac6cea423814dde%2Fimg3.jpg?generation=1602040512839574&amp;alt=media" alt=""></p> <pre><code>import tensorflow as tf
L = tf.keras.layers  # layers alias used in the snippet; seq_len, WIDTH, DIM are defined elsewhere

inp2 = L.Input(shape=(seq_len, WIDTH, DIM))
x2 = L.Conv2D(24, 3, padding='same', activation='relu')(inp2)
x2 = L.Conv2D(24, 3, strides=(1, 2), padding='same', activation='relu')(x2)
x2 = L.Conv2D(24, 3, padding='same', activation='relu')(x2)
x2 = L.Conv2D(24, 3, strides=(1, 2), padding='same', activation='relu')(x2)
x2 = L.Conv2D(24, 3, padding='same', activation='relu')(x2)
x2 = L.Conv2D(24, 3, strides=(1, 2), padding='same', activation='relu')(x2)
# flatten the reduced width and channel dimensions into a per-position embedding
x2 = tf.reshape(x2, shape=(-1, x2.shape[1], x2.shape[2]*x2.shape[3]))
</code></pre> <h1>Example Foldings</h1> <p>For the <code>BPPS</code> matrix above, below are some sample foldings<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F17f5e7e66a48ee6c202639376136dbea%2Fimg4.jpg?generation=1602039065083154&amp;alt=media" alt=""></p> <h1>Starter Notebooks</h1> <p>I studied the following 4 starter notebooks. Thanks, these notebooks are great! </p> <h3>GRU/LSTM <a href="https://www.kaggle.com/xhlulu" target="_blank">@xhlulu</a></h3> <p><a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model</a></p> <h3>Deep GCN <a href="https://www.kaggle.com/symyksr" target="_blank">@symyksr</a></h3> <p><a href="https://www.kaggle.com/symyksr/openvaccine-deepergcn" target="_blank">https://www.kaggle.com/symyksr/openvaccine-deepergcn</a></p> <h3>AE Pretrain, GNN Attn CNN - TensorFlow <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a></h3> <p><a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn</a></p> <h3>AE Pretrain, GNN Attn CNN - PyTorch <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a></h3> <p><a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain</a></p> <h1>Build 109 Models!</h1> <p>I built 109 different models by modifying the above starter notebooks and then predicted the top 3 foldings for each test RNA (reported by <code>ViennaRNA</code> and <code>ViennARNIE</code>). Next I plotted the correlations of their predictions. I removed models whose predictions looked strange and ensembled the rest with models of similar type. Then I ensembled the ensembles into a final model with <code>(GRU + GCN + 2 * TF + 2 * PY)/6</code>.</p> <p>With LB probing, I discovered that multiplying the public LB predictions by <code>1.04</code> improved the LB score. So, for my final private submission, I also multiplied predictions by <code>1.04</code>. This proved to be helpful.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F4adadba9f7f2f71edd8dea41cc738eea%2Fcorr_mat.png?generation=1602039482879603&amp;alt=media" alt=""></p> <h1>Thank You</h1> <p>Thank you everyone for sharing your discussions and notebooks. I could not have made my models without your help.</p> <p>To get alternate foldings, I used ViennaRNA. 
(Tutorial <a href="https://www.tbi.univie.ac.at/RNA/ViennaRNA/doc/html/examples_python.html" target="_blank">here</a>, see "Suboptimal Structure Prediction" recommended by <a href="https://www.kaggle.com/linshokaku" target="_blank">@linshokaku</a> in discussion <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/187560#1031776" target="_blank">here</a>). And I used ViennARNIE. (Notebook <a href="https://www.kaggle.com/its7171/how-to-use-arnie-on-kaggle-notebook" target="_blank">here</a> by <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> ).</p> <p>As for models, there was nothing special about 109. I was surprised to observe that every time I added a new trained model it increased CV and LB. Even models with minor changes. It appears there is lots of variability in these models. So I just kept making as many models as I could the last few days. I started with those 4 diverse notebooks, and did changes like add / remove features, modify architecture, adjust losses, learning schedules, batch sizes, change seeds, modify train data with ViennARNIE etc etc</p> <p>I started out using 5 KFold random seed 42 with <code>train.loc[train.signal_to_noise&gt;=1]</code>. This gave me a sense of what increased CV and LB.</p> <p>Then I started training models with different folds 5, 11, 15, 25, different seeds, and some models using all the train data at once. Afterward, I grouped models into 4 categories (GRU, GCN, GNN-CNN-TF, GNN-CNN-PY) and averaged each category with np.mean() no weighting. Then i did (1xGRU + 1xDCN + 2xTF + 2xPY)/6</p> <p>I suspected that private LB would be different than CV and LB. So to be more general, I decided to average a bunch of models equally instead of stacking or learning ensemble weights per model based on the train data or public LB.</p> <p>Weighted average. I grouped models into 4 categories (GRU, GCN, GNN-CNN-TF, GNN-CNN-PY), then averaged each category with <code>np.mean()</code> no weighting. Then i did <code>(1xGRU + 1xDCN + 2xTF + 2xPY)/6</code></p> <p>I didn't want to be more precise than that because I thought private test would be different than train CV and public test LB.</p> <p>Probing a multiplier from public LB is a trick for regression competitions. If the test data comes from a different distribution than train, each test target (i.e. all reactivity or all deg_Mg_50C) may have a different overall mean than the respective overall train target and your model may not predict it correctly. Through trial and error, you can determine if an offset (add/subtract a number) or a multiplier (multiply a number) improves public LB. (Similarly there are probe tricks for AUC and other metrics).</p> <p>If (1) both private and public are from a different distribution than train and (2) private and public come from the same distribution, then you want to use your probed multiplier or offset from public LB and apply to private LB.</p> <p>In this competition there was no reason to believe that private comes from the same distribution as public. Therefore the multiplier for private will most likely be different than the multiplier for public. (As it turns out they were different, the multiplier for private was less than one, while the multiplier for public was greater than one).</p> <p>For one submission I applied the public multiplier of <code>1.04x</code> and for the other submission, I applied no multiplier. I covered both scenarios to be safe. 
The no multiplier one was the best private LB.</p> <p>Ok, I updated my post to include the links <a href="https://www.kaggle.com/lftuwujie" target="_blank">@lftuwujie</a> </p> <p>Thanks Gabriel</p> <p>After the LB rescore, the private submission with <code>1.04x</code> multiplier is now the best. So it appears that both the public LB and private LB benefited from multiplying predictions by <code>1.04x</code>.</p> <p>(Before rescore, the private submission without multiplier was best)</p> <p>Yes, WIDTH defines how many neighbors to include. The train and public test have modified BPPS size of 107x214. And the private test have 130x260. Using WIDTH we extract the BPPS image <code>WIDTH//2</code> left of center and <code>WIDTH//2</code> right of center. Then train and public have 107xWIDTH and private has 130xWIDTH</p> <p>I tried a few different WIDTHs but didn't optimize it completely. My final models just used WIDTH=214</p> <p>Thanks. I agree that pictures make explanations easy to understand. And I enjoy making pictures, so I like to include them in my discussion posts.</p>
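<p>To make the BPPS-image construction above more concrete, here is a small sketch of the skew/window step: extracting a band of <code>WIDTH</code> columns around the diagonal of the base-pair probability matrix, one band row per sequence position. This is my own illustrative implementation under those assumptions, with the bond-type colouring left out for brevity.</p> <pre><code>import numpy as np

def skew_bpps(bpp: np.ndarray, width: int = 214) -> np.ndarray:
    """Turn an (L, L) BPP matrix into an (L, width) band centred on the diagonal."""
    L = bpp.shape[0]
    half = width // 2
    padded = np.pad(bpp, ((0, 0), (half, half)))   # zero-pad columns so every window fits
    band = np.zeros((L, width), dtype=bpp.dtype)
    for i in range(L):
        band[i] = padded[i, i:i + width]           # roughly width//2 left and right of position i
    return band

# example: a train/public sequence of length 107 becomes a (107, 214) single-channel "image"
band = skew_bpps(np.random.rand(107, 107))
</code></pre>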
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
3rd Place Write-up [UPDATED]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's actually simple using Selenium: for a single txt file containing information following RNAcomposer's format, the following code can submit one batch job. </p> <pre><code>import os
from selenium import webdriver

# txt_path, i, USERNAME and PASSWORD are defined elsewhere
file = open(os.path.join(txt_path, f"{i}.txt"), "r")
driver = webdriver.Chrome(executable_path=r'C:\Users\chromedriver.exe')
driver.get("http://rnacomposer.cs.put.poznan.pl/")
username = driver.find_element_by_name("j_username")
username.clear()
username.send_keys(USERNAME)
password = driver.find_element_by_name("j_password")
password.clear()
password.send_keys(PASSWORD)
driver.find_element_by_class_name("button").click()
driver.find_element_by_id("input").clear()
driver.find_element_by_id("input").send_keys(file.read())
driver.find_element_by_id("Submit1").click()
driver.close()
</code></pre> <p>Truly want to thank Eterna and Kaggle for hosting this meaningful competition. Thanks to all the Kagglers for sharing great ideas and notebooks; I learnt a lot in this competition. And congratulations to all winning teams, especially my former teammate <a href="https://www.kaggle.com/nullrecurrent" target="_blank">@nullrecurrent</a> for the 1st place!! </p> <p>These two days were like a roller coaster: I dropped from 4th place on the LB to 720th place on the first evaluation, and climbed back to 3rd place today. I think my shake was among the largest in this competition… which means my solution is not that robust in different situations, compared to <a href="https://www.kaggle.com/youhanlee" target="_blank">@youhanlee</a> and <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>. There are still many things to learn. </p> <p>Before this competition, I knew little about mRNA and degradation. So most techniques I used were from NLP competitions. I will summarize my approaches below: </p> <h3>Data</h3> <ul> <li>I filtered out data based on <code> *_error_* </code> columns larger than [6,8,10] and train labels less than -0.5</li> <li>The first augmentation of data I did was reversed sequences (you also need to reverse the other features and labels too). This is natural, since as an augmentation in NLP I would change the order of sentences in a paragraph. I noticed the LB benefitted a lot from this data, and the correlation of private predictions decreased. Here the correlation is only calculated on the first 91 positions. </li> <li>Then I thought adding more augmented data to train and test would make sense, so I used <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s great <a href="https://www.kaggle.com/its7171/how-to-use-arnie-on-kaggle-notebook" target="_blank">notebook</a> to generate possible structures and predicted loop types for each sample, using them as training and test augmentation. </li> <li>So my final training data includes (1) original data (2) reversed original data (3) augmented data from Arnie (4) reversed augmented data. My fold prediction is also an average of these four. 
</li> </ul> <h3>Model</h3> <ul> <li>I mostly used the AE-pretrained GNN notebook by <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> (thanks so much for this great, great notebook; I wouldn't have been able to get 3rd without your work). </li> <li>I added different LSTM/GRU/wavenet layers at the end with different units/parameters for diversity.</li> <li>In general, 2 x 128-unit LSTM or GRU layers at the end work the best for me. </li> <li>Wavenet has very low correlation with the other structures but its score is generally worse; good for blending though. </li> <li>I tried to slightly increase the units of the multi-head attention, change the dropout of different layers, etc. But in general the original structure is already very good. </li> </ul> <h3>Features</h3> <ul> <li>I again used the backbone of <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s code :)</li> <li>In addition to his original node features, I added bpp max, bpp second max, diff between max and 2nd max, bpp sum, pair type etc. </li> <li>In addition to <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s structure adjacency matrix, I also added two matrices to specify the neighbors of each node's pair. This feature alone gave about a 20 bps improvement on the LB. </li> </ul> <h3>3D distance</h3> <ul> <li>I managed to get 3D structures for all samples from <a href="http://rnacomposer.cs.put.poznan.pl/" target="_blank">http://rnacomposer.cs.put.poznan.pl/</a>, as discussed by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> and <a href="https://www.kaggle.com/shujun717" target="_blank">@shujun717</a>. This website can calculate predicted 3D structures given the sequence and the original structure. But the problem is that even the batch mode only takes a maximum of 10 sequences. So I wrote a simple <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189574#1041899" target="_blank">script</a> using Selenium to call their server 600+ times to get all the 3D structures in .pdb files. </li> <li>Now I published the 3D data in this post, along with a starter notebook: <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189604" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189604</a></li> <li>I parsed these pdb files to get the 3D distance between all C1 atoms, to replace the original distance matrix from <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s notebook (a hedged parsing sketch appears at the end of this post). I noticed about a 10-15 bps improvement on the LB and a decrease in prediction correlation. </li> <li>I feel that I didn't use the full power of these .pdb files, but I didn't have enough time to dig deeper. </li> </ul> <h3>Training Strategy</h3> <ul> <li>5-fold stratified CV based on sequence edit distance. </li> <li>Based on <a href="https://www.kaggle.com/xhlulu" target="_blank">@xhlulu</a>'s loss function (version v9), I used different weights for each target column (a sketch of such a column-weighted loss appears at the end of this post). I tried [0.2, 0.3, 0.3, 0.1, 0.1] and [0.2, 0.4, 0.2, 0.1, 0.1]. </li> </ul> <h3>Pseudo Labeling</h3> <ul> <li>As also mentioned in <a href="https://www.kaggle.com/nullrecurrent" target="_blank">@nullrecurrent</a>'s great write-up, using the first 91 positions is the key here. After reading her write-up, I have to say that the save-and-load methodology is genius :). But I found that in most situations, even when PL temporarily worsens the validation score (even by 30~50 bps), it helps the next epoch (trained on the training samples) converge much faster than not using PL. 
</li> <li>This methodology was tricky because if I added too much PL, the CV score became unreliable. So I only added a few epochs of PL at the beginning of each fold to help the model converge. </li> </ul> <h3>Blending</h3> <ul> <li>The final model is a simple blend of 20+ models. </li> <li>My strongest single model uses all of the above, with LB 0.23125 and PB 0.34429</li> <li>I also had a safe submission, which blended 4 models that only used the GNN and very little PL, because I felt that an RNN may still be affected by the length of the sequence, and using blended PL might cause overfitting too. This blend ended up with PB 0.34540. </li> </ul> <h3>Final Thoughts</h3> <ul> <li>For me, <strong>the key to climbing the LB was 4x augmented data and PL</strong>. Adding some RNN layers at the end of <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s model, together with some basic bpp features, will get you into the 0.241x range. The reversed sequences will help by about 20 bps, Arnie augmentation by about 10 bps, and PL by about ~50 bps if used correctly. </li> </ul> <p>For me it seems to happen at a relatively early stage (after 5 epochs of training), not the very beginning. Since I'm using 4x training data to train, it might be the case that the model is already heading toward a wrong local optimum and PL tries to correct it, hence the temporary worsening of the val score - only my personal guess :) </p> <p>Thanks! :)</p> <p><a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>, <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189604" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189604</a> I uploaded the dataset :) </p> <p>And I also want to thank you, for all your discussion and sharing in this competition. I followed most of your threads and truly learnt a lot from them! Thank you so much! </p> <p>Yes, every batch only allows 10 sequences, so I made 600+ calls to the server :) <br> You are right, I didn't spend too much time on the pdb files other than using the 3D distance as a diversity input. But other people may find effective ways of using them :) I'm uploading the files and will let you know once done.</p> <p>Yeah :) Congrats on the 1st place!!</p> <p>Thank you Youhan, congratulations to you too for becoming GM :) well deserved! </p> <p>Thanks Kamal, we all learnt a lot from this community :) congrats on bronze!</p> <p>Thanks! and congrats on the bronze medal!</p> <p>Thank you! Congrats to you too :)</p> <p>Thanks :) yours is coming :)</p> <p>Thanks a lot <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, congrats to your team too! I published the 3D data in this <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189604" target="_blank">post</a>. I read some solutions from the molecular competition, but didn't have enough time to explore different approaches. I agree with you that there should be more potential here. </p> <p>Also, thanks for pushing a rescoring, otherwise I would still be crying looking at my 700+ ranking.. </p>
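<p>To make the reversed-sequence augmentation from the Data section above concrete, here is a minimal sketch (my own illustration, not the author's exact code). It assumes the competition's dataframe layout, with per-sample strings in <code>sequence</code>, <code>structure</code>, <code>predicted_loop_type</code> and per-position lists in the target columns; note that reversing a dot-bracket structure also swaps '(' and ')', the bpp matrices have to be flipped along both axes, and aligning the partially labeled positions with the reversed sequence needs extra care that is omitted here.</p> <pre><code>import pandas as pd

TARGET_COLS = ["reactivity", "deg_Mg_pH10", "deg_pH10", "deg_Mg_50C", "deg_50C"]

def reverse_sample(row):
    """Return a copy of one training row with sequence, structure,
    loop type and per-position labels reversed (illustrative sketch)."""
    row = row.copy()
    row["sequence"] = row["sequence"][::-1]
    # reversing a dot-bracket string turns every '(' into ')' and vice versa
    row["structure"] = row["structure"][::-1].translate(str.maketrans("()", ")("))
    row["predicted_loop_type"] = row["predicted_loop_type"][::-1]
    for col in TARGET_COLS:
        if col in row and isinstance(row[col], (list, tuple)):
            row[col] = list(row[col])[::-1]
    return row

# usage: train_rev = train.apply(reverse_sample, axis=1), then concatenate with train
</code></pre>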
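<p>The per-column loss weights mentioned in the Training Strategy section can be implemented as a small variation of the usual column-wise RMSE loss. The sketch below uses TensorFlow/Keras (the backbone kernel referenced above is Keras-based); it is an illustration of the idea rather than the author's exact modification of the public loss, and the column order in <code>COLUMN_WEIGHTS</code> is an assumption.</p> <pre><code>import tensorflow as tf

# assumed column order: reactivity, deg_Mg_pH10, deg_pH10, deg_Mg_50C, deg_50C
COLUMN_WEIGHTS = tf.constant([0.2, 0.3, 0.3, 0.1, 0.1], dtype=tf.float32)

def weighted_columnwise_rmse(y_true, y_pred):
    # y_true, y_pred: [batch, scored_len, 5]
    mse = tf.reduce_mean(tf.square(y_true - y_pred), axis=(0, 1))  # per-column MSE, shape [5]
    rmse = tf.sqrt(mse + 1e-8)
    # weighted sum over the 5 target columns instead of a plain mean
    return tf.reduce_sum(COLUMN_WEIGHTS * rmse)

# model.compile(optimizer="adam", loss=weighted_columnwise_rmse)
</code></pre>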
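<p>For the 3D-distance feature, the .pdb files from RNAcomposer can be reduced to a pairwise distance matrix between the C1' atoms of the nucleotides. The author did not share the parsing code, so the snippet below is only a sketch of one possible way to do it, assuming Biopython is available; the resulting L x L matrix could then replace the distance matrix in the GNN kernel.</p> <pre><code>import numpy as np
from Bio.PDB import PDBParser  # assumption: Biopython is used for parsing

def c1_distance_matrix(pdb_path):
    """Pairwise distances (Angstroms) between the C1' atom of every nucleotide."""
    structure = PDBParser(QUIET=True).get_structure("rna", pdb_path)
    # one C1' atom per residue, in chain order
    atoms = [res["C1'"] for res in structure.get_residues() if "C1'" in res]
    coords = np.array([atom.coord for atom in atoms])   # shape [L, 3]
    diff = coords[:, None, :] - coords[None, :, :]      # shape [L, L, 3]
    return np.sqrt((diff ** 2).sum(axis=-1))            # shape [L, L]

# dist = c1_distance_matrix("some_sample_id.pdb")  # hypothetical file name
</code></pre>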
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
4th place solution: code & blog published
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! I've changed it.</p> <p>Thank you and most welcome! I hope it can be of some use!</p> <p>I finally got around to cleaning up the code and writing a more elaborate write-up in the form of a blog post!</p> <p>The code can be found <a href="https://www.kaggle.com/group16/covid-19-mrna-4th-place-solution" target="_blank">here on Kaggle</a> or on <a href="https://github.com/GillesVandewiele/covid19-mrna-degradation-prediction" target="_blank">Github</a>.</p> <p>A blog post can be found on <a href="https://gillesvandewiele.medium.com/predicting-mrna-degradation-using-gnns-and-rnns-in-the-search-for-a-covid-19-vaccine-b3070d20b2e5" target="_blank">Medium</a>. As always, I would greatly appreciate any kind of feedback. If there is anything unclear or badly written, please do tell me.</p>
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
51th place solution. Single Model. Private Score 0.35175, Public Score 0.24019.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Score update!<br> no crop and pad: 0.35215<br> crop only: 0.35274<br> crop and pad: 0.35175</p> <hr> <p>I have uploaded my code.</p> <p>I built on top of <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s code.<br> Thanks for sharing the nice code.</p> <p>The part where my code differs most from other kagglers' code is that I used padding and cropping to prevent overfitting to the sequence lengths of the public and private data (a minimal sketch of this idea appears at the end of this post).</p> <p><a href="https://www.kaggle.com/ttagu99/16th-place-single-model-private-score-0-40760" target="_blank">https://www.kaggle.com/ttagu99/16th-place-single-model-private-score-0-40760</a></p> <p>I want to go back to the days before Covid.<br> It's been too long since I last went on a trip.<br> I hope the vaccine comes out soon.</p>
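<p>The linked notebook contains the full implementation; the sketch below only illustrates the general idea of random cropping/padding so that the network never learns to rely on one absolute sequence length (public sequences are 107 long, private ones 130). Shapes and the helper name are my own assumptions, and in practice the per-position labels and bpp matrices must be cropped/padded consistently with the features.</p> <pre><code>import numpy as np

def random_crop_or_pad(feats, target_len, pad_value=0.0, rng=np.random):
    """feats: [seq_len, n_features] to [target_len, n_features] (illustrative only)."""
    seq_len, n_feat = feats.shape
    if seq_len &gt;= target_len:
        # crop a random window of length target_len
        start = rng.randint(0, seq_len - target_len + 1)
        return feats[start:start + target_len]
    # otherwise pad with a constant value at a random offset
    out = np.full((target_len, n_feat), pad_value, dtype=feats.dtype)
    start = rng.randint(0, target_len - seq_len + 1)
    out[start:start + seq_len] = feats
    return out

# e.g. mix several target lengths during training so the model generalizes across lengths
augmented = random_crop_or_pad(np.random.rand(107, 20), target_len=130)
</code></pre>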
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
6th place solution (UPDATED)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>any comparison results using original data and vienna data? e.g. from 0.2xx --&gt; 0.2xx?</p> </blockquote> <p>In my model, <strong>eternafold gave me ~0.005 improvement by just replacing the data</strong>. To see if this improvement is generalizable, I also did a late submission replacing the data in the public kernel.</p> <h4>my baseline model</h4> <table> <thead> <tr> <th></th> <th>default</th> <th>eternafold</th> </tr> </thead> <tbody> <tr> <td>public</td> <td>0.24888</td> <td>0.24313 (-0.00575)</td> </tr> <tr> <td>private</td> <td>0.41873</td> <td>0.41410 (-0.00463)</td> </tr> </tbody> </table> <h4>public kernel</h4> <p>result: <a href="https://www.kaggle.com/nyanpn/eternafold-open-waccine-pytorch-ae-pretrain" target="_blank">https://www.kaggle.com/nyanpn/eternafold-open-waccine-pytorch-ae-pretrain</a></p> <table> <thead> <tr> <th></th> <th>default</th> <th>eternafold</th> </tr> </thead> <tbody> <tr> <td>public</td> <td>0.24557</td> <td>0.23894 (-0.00663)</td> </tr> <tr> <td>private</td> <td>0.41681</td> <td>0.41497 (-0.00184)</td> </tr> </tbody> </table> <p>According to this result, the accuracy on both the public and private data indeed improved in the other model too. I'm curious to see whether it can improve other participants' models as well.</p> <p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> </p> <blockquote> <p>Do we know the sequence length? It is my understanding that the data are just partial sequences.</p> </blockquote> <p>You're right, I meant the length of the partial sequences :)</p> <p><a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> </p> <blockquote> <p>I tried to use eternabench to look at the effect of sequence length and to predict the effect of scaling"<br> i tried to use this too but I did not find the function to do this? how did you do this?</p> </blockquote> <p>"Mar23_unfiletered_cloud_lab_rounds.json" in eternabench (see <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/183791" target="_blank">this thread</a>) contains 11591, 2987 and 2267 RNAs with length=107, 130 and 131 respectively. This file also contains "reactivity" (maybe SHAPE reactivity) columns.</p> <blockquote> <p>We used the Eterna “Cloud Labs” for this purpose: 23 datasets of player-designed constructs, ranging from 80-130 nucleotides in length<br> <a href="https://www.biorxiv.org/content/10.1101/2020.05.29.124511v1.full.pdf" target="_blank">https://www.biorxiv.org/content/10.1101/2020.05.29.124511v1.full.pdf</a></p> </blockquote> <p>What I did was to train a model that predicts this "reactivity" with eternabench. The model was trained on data with a length of 107 in eternabench and validated on data with a length of 130/131 to ensure that the model was not affected by length.</p> <p>BTW thanks for a lot of helpful discussions, <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> ! I could not have reached this place without your posts.</p> <p><em>2020-10-08 14:22 UTC: The first write-up was written in a hurry, so I've updated the details a bit. 
I'm really glad I got the gold medal at the end!</em></p> <h4>Model</h4> <p>I trained various versions of the model with different losses and data, based on the following four architectures.</p> <ul> <li>Model 1-a: CNN+GNN (private 0.35045, public 0.24371)</li> <li>Model 1-b: CNN+GNN+GRU (private 0.35192, public 0.24085)</li> <li>Model 2: Modified version of <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> 's great <a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">kernel</a></li> <li>Model 3: Modified version of <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a> 's great <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">kernel</a></li> </ul> <p>My "Model 1-a" is a bit similar to the solution of <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a>; I arrived at it after thinking about how to propagate the long-range interactions via pairing. Here is the architecture of my model:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1532198%2Fb1daa7f96e24d96f932f802cc57d324c%2FCNN-GCN.png?generation=1602165368336895&amp;alt=media" alt=""></p> <p><code>Residual Graph Attention</code> is a simple combination of <code>torch.bmm</code> and conv1d. The SE-Residual block is the same as the block of SE-ResNet, only the convolution has been changed from 2d to 1d. The kernel size of the 1d convolutions is kept small (3 to 7) to prevent overfitting (a toy usage example of this block appears at the end of this post).</p> <pre><code>class ResidualGraphAttention(nn.Module):
    def __init__(self, ch, dropout):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, kernel_size=7, padding=3),
            nn.BatchNorm1d(ch),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Conv1d(ch, ch, kernel_size=7, padding=3),
            nn.BatchNorm1d(ch),
            nn.ReLU(),
            nn.Dropout(dropout)
        )
        self.relu = nn.ReLU()

    def forward(self, src, attn):
        """
        :param src: feature vector. [batch, features, seq]
        :param attn: attention matrix. [batch, seq, seq]
        :return: updated feature vector. [batch, features, seq]
        """
        # aggregate features across paired positions via the (bpp-based) attention matrix
        h = self.conv(torch.bmm(src, attn))
        # residual connection followed by ReLU
        return self.relu(src + h)
</code></pre> <p>The last ReLU layer corresponds to what is described in the postprocessing part. The primary and secondary bpps vary among models (contrafold, rnasoft…) to add diversity. Pre-training like <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> 's kernel was applied to all of my models, but I'm not sure how it affected the private score (the public score was not improved at all by pre-training).</p> <p>I was initially unsure of how robust GRU and Transformer were to changes in sequence length, so I focused on modeling CNN-GNN. 
Later on, I noticed that adding a GRU layer improved the public LB a bit (Model 1-b), so I started to focus on the GRU model, but the first model was better after all.</p> <h4>Data</h4> <ul> <li>Replaced the original structure, predicted_loop_type and bpps files with eternafold &amp; its mfe structure<ul> <li>this gave me a big boost (~0.005 in both public and private, see my comment below)</li> <li>mea-based structure didn't help</li></ul></li> <li>Added bpps from vienna (T=20, 50, 70), contrafold and rnafold as additional channels of the attention layer</li> </ul> <h4>Feature</h4> <ul> <li>max, sum, nb_count, 2nd max, 3rd max of bpps</li> <li>no distance matrix</li> </ul> <h4>Validation</h4> <ul> <li>7-fold StratifiedGroupKFold</li> <li>Grouped by sequence similarity (agglomerative clustering with hamming distance; a clustering sketch appears at the end of this post)</li> </ul> <h4>Loss</h4> <ul> <li>training with all data</li> <li>used various sampling weights like log(snr + 1.1)/2, log(snr + 0.5)/2, …</li> </ul> <h4>Training</h4> <ul> <li>Adam with default parameters</li> <li>The first 30 epochs are trained using all 5 columns, and then the weight of the 2 columns not used for scoring is reduced to 0.01</li> </ul> <h4>Ensemble</h4> <ul> <li>Ridge stacking</li> <li>Use all five columns as features for each column's prediction</li> </ul> <h4>Postprocessing</h4> <ul> <li>Clip all predictions to [-0.5, 100] (according to the data description, all ground truths should be greater than -0.5)</li> </ul> <h4>Others</h4> <ul> <li>I tried to use eternabench to look at the effect of sequence length and to predict the effect of scaling, but I did not apply scaling based on these experiments because the results didn't seem clear to me. I should have looked more carefully.</li> </ul> <h4>Result</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1532198%2F53ce2be1a5172288889ba72360a41330%2F22b3ff0b8f92a0f6d126e2ebed329864.png?generation=1602042636806898&amp;alt=media" alt=""></p> <p>I finally did it! </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1532198%2Fe6909d4e5628e74654bda2bf6f9a8fdf%2Fcdb350b981b5515cbd56c48614920847.png?generation=1602167470518665&amp;alt=media" alt=""></p> <p>Thank you! But what I've got (so far) is a silver medal 😂</p> <p>Thanks, and yes, we can learn a lot from great kagglers.</p> <p>Thanks! According to the other thread, it may be too early to be depressed :)</p> <p>Thanks and congratulations too! About 30 models are used in the ensemble. Maybe I should blend more :)</p> <p>Thanks! I hope my model performs well on the filtered data too…</p> <p>I don't remember it well, but I'm sure it's not very big. Maybe around 0.001 in CV.</p> <p>Thanks, I don't have a clear idea why my model worked well on noisy data, but I'm satisfied with the final rank :)</p> <p>I suspect that my model is biased a bit toward zero (and this is why it performed well on non-filtered data), so upscaling may have improved my score. But I wanted to try more essential ideas like pseudo-labeling, weighting by cluster size, etc. before starting to gamble.</p> <p>Thanks! I'll release at least a part of my code this weekend :)</p> <p>"The most robustness award" is a great honor for me :) But to be honest, I'm not sure why my ranking was so stable. 
My guess is:</p> <ul> <li>All training data, including SN_filter = 0, was used for training</li> <li>I avoided using risky features</li> <li>Pseudo labeling was not performed (it was actually a good idea, but I simply didn't think of it)</li> <li>The 1D-CNN was accidentally robust to noise</li> <li>The use of Eternafold was advantageous because the properties of the RNA data used in the Eternafold training were similar to those of the private test (maybe)</li> </ul> <p>Thank you! And congratulations on your Gold and GM too!</p> <p>Thank you! I'd be happy to team up with you for future competitions ;)</p> <p>Thanks, maybe I'm too lucky ;)</p> <p>Thanks! Maybe it's because I'm not used to writing code with pytorch, but I seem to tend to create small models.</p> <p>I don't fully understand the process of measuring the actual RNA reactivity, but certainly, given a longer sequence, I think the bpps would be completely different. I had assumed that they were actually measuring RNA fragments that were 107 or 130 in length…</p>
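<p>To make the <code>ResidualGraphAttention</code> block from the Model section a little more concrete, here is a toy usage example (shapes are illustrative assumptions: batch of 2, 256 channels, length-107 sequences). The point is that <code>torch.bmm</code> with a bpp-style [seq, seq] matrix mixes each position's features with those of the positions it is likely paired with, before the residual 1D convolutions and the skip connection.</p> <pre><code>import torch

# illustrative shapes only
feats = torch.randn(2, 256, 107)   # [batch, features, seq]
bpps = torch.rand(2, 107, 107)     # base-pairing probabilities used as the "graph"

layer = ResidualGraphAttention(ch=256, dropout=0.1)  # class defined in the write-up above
out = layer(feats, bpps)
print(out.shape)                   # torch.Size([2, 256, 107]) -- same shape as the input
</code></pre>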
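<p>The Validation section above groups sequences by similarity with agglomerative clustering on hamming distance before running a 7-fold StratifiedGroupKFold. The exact code was not shared, so the following is only one straightforward way to build such groups; the number of clusters and the "ACGU" encoding are my assumptions, and it relies on the training sequences all having the same length (107).</p> <pre><code>import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import AgglomerativeClustering

def sequence_groups(sequences, n_clusters=200):
    """Cluster labels usable as the `groups` argument of StratifiedGroupKFold."""
    # integer-encode nucleotides so hamming distance compares the sequences position by position
    enc = np.array([["ACGU".index(c) for c in seq] for seq in sequences])
    dist = squareform(pdist(enc, metric="hamming"))
    clustering = AgglomerativeClustering(
        n_clusters=n_clusters,
        metric="precomputed",   # older scikit-learn versions use affinity="precomputed"
        linkage="average",
    )
    return clustering.fit_predict(dist)
</code></pre> <p>The resulting labels can then be passed as <code>groups</code> to <code>StratifiedGroupKFold(n_splits=7)</code> so that near-duplicate sequences always end up in the same fold.</p>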