M5 Forecasting - Accuracy
Top 4% Solution (221st) - Four Insights
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Private Score: .63026</p> <p><a href="https://github.com/npa02012/kaggle_walmart_sales">GitHub Repo</a></p> <p>Written in R, using primarily data.table and LightGBM.</p> <h2>Insight 1</h2> <p>I first want to give credit to the following notebook: <a href="https://www.kaggle.com/mayer79/m5-forecast-attack-of-the-data-table">https://www.kaggle.com/mayer79/m5-forecast-attack-of-the-data-table</a></p> <p>I used a similar feature-engineering set; however, my strategy for calculating the features was quite different.</p> <p>Namely, I kept my data wide and used column names to calculate features. For me this approach is much more intuitive and easier to read. Take, for example, the code to calculate rolling means: <code>
# Get lagged rolling means
lags = c(7, 28)
windows = c(7, 28)
for(l in lags) {
  for(w in windows) {
    cols = paste0('d_', (pred_day_num - l - w + 1):(pred_day_num - l))
    col_name = paste0('rmean_', l, '_', w)
    X[, (col_name) := d_sales_train[, rowMeans(.SD), .SDcols = cols]]
  }
}
</code> Suppose we are predicting Day 1914. For the lag-7/window-7 feature, we take the row-means of the ('d_1907', 'd_1906', ..., 'd_1901') columns. </p> <p>A total of 21 features were used. Price features were not used (lack of time to implement). My feature engineering functions can be found in <a href="https://github.com/npa02012/kaggle_walmart_sales/blob/master/scripts/fe_functions.R">/scripts/fe_functions.R</a></p> <h2>Insight 2</h2> <p>Keeping the day numbers (i.e. 1 --&gt; 1969) as a predictor variable helped my public score a lot. Also, a very low learning_rate (e.g. .01) actually reduced my score quite significantly.</p> <p>Both of these insights puzzled me - I did not have the time to look more closely into them.</p> <h2>Insight 3</h2> <p>I made three models. The only difference between the models is the random selection of validation rows for training. I held out days 1914 --&gt; 1941 as a final validation set in all cases. The public scores for the 3 models were .59, .60, and .61. The final public score was .59.</p> <p>I do believe creating 3 very similar models helped boost my score. In theory, I am centering around the mean of all possible predictions (across the different validation sets) using that model structure.</p> <h2>Insight 4</h2> <p>I set aside a significant amount of time to ensure my final submission was sound. This is mostly why I held out Days 1914 --&gt; 1941 from the training set - the slight decrease in information was worth the increased certainty that my evaluation predictions were as intended.</p> <p>I also did some simple plotting and distribution checks to make sure everything looked normal before making my final submission.</p>
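<p>The same wide-format, column-name-driven rolling-mean idea translates directly to Python. Below is a minimal pandas sketch under my own assumptions (a toy wide table with columns d_1 ... d_1913, one row per series); it illustrates the trick and is not the author's R code: <code>
import numpy as np
import pandas as pd

# Toy wide sales frame: one row per series, columns d_1 ... d_1913.
rng = np.random.default_rng(0)
sales_wide = pd.DataFrame(
    rng.poisson(1.2, size=(5, 1913)),
    columns=[f"d_{i}" for i in range(1, 1914)],
)

pred_day_num = 1914
features = pd.DataFrame(index=sales_wide.index)

# Lagged rolling means computed directly from column names, as in the write-up:
# e.g. the lag-7/window-7 feature averages columns d_1901 ... d_1907.
for lag in (7, 28):
    for window in (7, 28):
        cols = [f"d_{d}" for d in range(pred_day_num - lag - window + 1, pred_day_num - lag + 1)]
        features[f"rmean_{lag}_{window}"] = sales_wide[cols].mean(axis=1)

print(features.head())
</code></p>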
M5 Forecasting - Accuracy
[109th place] No Dark Magic, Just ML
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, glad you like it</p> <p>No problem! Hope you've learned something from this post.</p> <p><strong>Calendar</strong> 1) Uniting event features, count encode them with rerank, add <em>days to event</em> feature and <em>next event</em> indicator; 2) Dropping date features like <em>month</em>, <em>year</em> (they are inconsistent between train/test); 3) Adding only <em>day of the month</em> and <em>day of the week</em> features.</p> <p><strong>Sell prices</strong> 1) The only good <em>sell price</em> features were <em>sell price max pct</em> ((<em>sell price cumax</em> - <em>sell price</em>) / <em>sell price cumax</em>) and <em>sell price min pct</em> ((<em>sell price</em> - <em>sell price cumin</em>) / <em>sell price</em>), but I decide to not include them in final solution (my other kernel with them did a little bit better on private in the end); 2) No other features were used.</p> <p><strong>Demand</strong> 1) <em>Cumulative (mean and low median)</em>, <em>rolling (mean week/month/two months)</em> and <em>exponentially weighted (mean)</em> features with 29 day shift; 2) Uniting all <em>snap</em> features into one bool feature; 3) CatBoost encoding for <em>dept</em> (w rerank), <em>store</em> (w rerank), <em>item</em>, OHE for <em>state</em> and <em>category</em>; 4) Adding <em>CPI</em> values (external data, used census region values) with 2 month shift (CPI is usually updated in the middle of the next month); 5) <em>Expected item revenue</em> (CatBoost encoded <em>item</em> multiplied by <em>sell price</em>) was the only revenue feature that worked for me; 6) Cutting first year from the dataframe (to save RAM and get rid of nans from shifts).</p> <p><strong>Models</strong> 1) Basic Time Train-Test split (full year of sales in valid right before test months); 2) <em>PassiveAgressiveRegressor</em> to get that yearly increasing trend and add ability to extrapolate. Trained with MSE, tuned on validation, then weights were updated with validation df; 3) <em>CatBoost</em> with a lot of tuned hyper-parameters and optimizations for time series. Trained with RMSE, then trained model was updated with validation df; 4) Weighted mean of "relu activated" predictions in the submission (to get rid of negative values that could come from LR).</p> <p><strong>What did not work for me</strong> 1) Sell price features (noisy, different distribution in train and test, except maybe two, see above); 2) Revenue features (noisy, except one, see above); 3) Moon age/index (tried 5 different approaches and after checking results with historical data find a lot of mistakes), other date features (different distribution in train and test); 4) Bunch of basic statistics calculated on rolling/ewm/cumulative demand (min, max, q1, q3, std and so on); 5) Day-by-day prediction - you will always have some error in your prediction, and with this approach it will only grow with each prediction since your new prediction rely on previous. I guess it could work but with some serious validation, postprocessing and feature penalties. 
6) Tagging new product or counting days of product being in sale (different distributions between train and test, also there are no new items in test); 7) Encoding year/quarter/month trends with mean/max demand; 8) Tagging and counting duplicates (it works but with good feature engineering you don't need it), tagging nans and outliers; 9) Gasoline price (raw and normalized by US price), number of employees (raw and normalized by state population). Bimodal (TX &amp; CA vs WI ) and inconsistent between train/test.</p> <p>Overall 108 kernel versions, 27 submissions, min private 0.59492, final solution private 0.60446, no multipliers or other cheeky stuff.</p> <p><strong>A bit altered code is <a href="https://www.kaggle.com/altprof/109th-place-code-only">here</a></strong></p>
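<p>The two sell-price features described above have simple closed forms, so here is a minimal pandas sketch of them. The toy frame and column names are my own illustration (mirroring the competition's sell_prices table, grouped per item and ordered in time), not the author's code: <code>
import pandas as pd

# Hypothetical per-item price history, ordered by week within each item.
prices = pd.DataFrame({
    "item_id": ["A"] * 5 + ["B"] * 5,
    "sell_price": [2.0, 2.5, 2.5, 2.2, 1.8, 5.0, 4.5, 4.5, 5.5, 5.0],
})

g = prices.groupby("item_id")["sell_price"]
cumax = g.cummax()   # running maximum price per item
cumin = g.cummin()   # running minimum price per item

# "sell price max pct": how far the current price sits below its running maximum.
prices["sell_price_max_pct"] = (cumax - prices["sell_price"]) / cumax
# "sell price min pct": how far the current price sits above its running minimum.
prices["sell_price_min_pct"] = (prices["sell_price"] - cumin) / prices["sell_price"]

print(prices)
</code></p>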
M5 Forecasting - Accuracy
[112th Place] How I missed Rank 2 of M5-Accuracy by a factor!!!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to all the Kagglers and toppers of the M5-Accuracy challenge. It was a wonderful journey where I got to learn a lot from fellow colleagues and Kagglers. The topics discussed and kernels shared gave a good understanding of the problem statement and the solutions proposed.</p> <p>From the start, it was clear that LGB was one of the best approaches to solve the problem. However, things were unclear regarding the use of WRMSSE, hierarchical modeling, and the need for a stable CV for the time series in order to get to the top of the LB. Hence, keeping the complexity of the problem in mind and, at the same time, trying to avoid overfitting, we used a standard and stable approach: LGB, with CV on future data (1.5 yrs), and retraining with the learnt hyperparameters on the entire dataset.</p> <p>It was surprising to see that it led us to Rank 112 in M5-Accuracy. However, soon after that, we learnt from the <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/163047">post</a> of <a href="/kyakovlev">@kyakovlev</a> that a scaling factor of 0.955 made all the difference in the private LB. After we applied that calibration to our submission file, it was even more shocking to learn that it gave a private LB score of 0.52123 ….. Rank 2.</p> <p>Well, to sum it up, the challenge reminded me once again that the learning is never over.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F649545%2F2036f33e81d9f2bf45576300bc5b4e41%2Fkaggle.PNG?generation=1593622286701781&amp;alt=media" alt=""></p>
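<p>For concreteness, the post-hoc calibration described above amounts to a couple of lines of pandas on the submission file. A hedged sketch, assuming the usual M5 submission layout with an id column plus F1 ... F28 forecast columns (file names are placeholders): <code>
import pandas as pd

sub = pd.read_csv("submission.csv")   # placeholder path

# Multiply every forecast by the 0.955 scaling factor discussed in the post.
f_cols = [c for c in sub.columns if c.startswith("F")]
sub[f_cols] = sub[f_cols] * 0.955

sub.to_csv("submission_calibrated.csv", index=False)
</code></p>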
M5 Forecasting - Accuracy
[178th Place] Recursive Model with Regression & Classification
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Overview</strong></p> <p>I have to say that although I was expecting a massive shake-up, I think this competition took it to a new level. It was really hard to choose a final submission that I thought might generalize to the private LB. First, I want to take a second to thank Kaggle and the competition sponsors for such an awesome competition. I also want to thank the community for their awesome sharing and insights, this is what Kaggle is all about afterall.</p> <p>I am actually happy with this solution. I learned a lot and I tried to make my model as generalized as possible without using multipliers. I would like to take a second to talk about the multipliers. It seems that the sentiment has been expressed that they are not fair or "garbage" (I read somewhere) but the truth is, the are used quite frequently in many forecasting applications in the business world. They can be referred to as multipliers or growth. My concern with these multipliers here were that I just felt like I wanted my model to try and pick it up in some way so I chose to go completely free of them.</p> <p><strong>Cross-Validation Scheme</strong></p> <p>My cross-validation scheme is a 3-fold time series cross validation, where the length of the series in the validation are the same length as the submissions for the public and private LB. For a long time I struggled with the metric. It was a really interesting metric to implement and until the release of the public LB data I was unsure of whether I was calculating it correctly or not. As you can imagine, this led to a lot of frustration for validating any of my models. I mostly experienced gains in CV but decreases in LB. When the public LB data was released I was able to verify that my CV was set up correctly all along. Once I knew that I could at least see the correct scores, it was all experimentation from there.</p> <p><strong>Final Model</strong></p> <p>My final model is a mix of regression + classification and is a recursive model. After a lot of the literature that I read I really wanted to try and make either a recursive model or a direct model work. I used the same features for both my regression model and classification model. The features I used are mainly found in the public kernels, with the exception of using different lagged values for rolling features. After verifying that my CV was set up correctly, it was obvious that any lag 1 features were highly optimized for the public LB fold (I am calling it a fold because I ended up using it as my third fold in the final month). The lag 1 features performed exceptionally well on this fold and not the other folds. The scores were between 0.65X and 0.7X on other folds. Afterwards, I started to experiment with MANY different features. I finally converged on rolling features at lag 4. This generalized well to most folds but some were still stubborn. By generalized I was able to get my 3-fold set up all below 0.6. I also tried on other folds just to see and results were the same between 0.50 - 0.60 (still a lot of variance). The public LB score was still decent in the low 0.52X - 0.54X range. 
This was far from the best score on the public LB, but I stuck to trusting my CV because I could not replicate a 0.4X performance without other folds really suffering performance.</p> <p>I got more generalization between my folds when I implemented a classification model with the regression model. I actually used the classification model in two steps. First, I multiplied the outputs of the classification model by the outputs of the regression model; essentially scaling or smoothing the outputs of the regression by whether we expected it to have a transaction or not. In retrospect, I believe this is may be what helped my model not fall all the way down the LB. It is essentially a multiplier against the original predictions except that I used another model to build them. Secondly, once all of the predictions were created I set to zero any classification prediction that was under 5%. The best score I saw was 0.50X but I felt good about the generalization.</p> <p>Finally, I ran the model 3 times and shuffled the rows and features to help prevent overfitting. I averaged the outputs of the 3 models. If anyone is interested I would be happy to provide a list of my features.</p> <p><strong>After Thoughts</strong></p> <p>I have to say I was really disappointed in the metric that was used here. The metric felt so convoluted that there were times that I felt like I spent more time trying to figure out if my implementation was right compared to actually modeling. </p> <p>Working through all the ideas in the time series domain in a solo fashion is REALLY hard. There are so many aspects to time series. There are statistical models (ARIMA, ES), machine learning models (LGB, XGB) &amp; deep learning solutions (spent a month on this alone just trying to figure out how to make it work). Not to mention should I implement a recursive or direct or a mixture of both? Then there are different hierarchical methods, transformations, outliers, etc. It felt really hard to pick one direction and make it work.</p> <p>That’s really all there is to my solution. If anyone has questions, I would be happy to answer.</p> <p><a href="/gavinli232">@gavinli232</a> </p> <ol> <li><p>I didn't remove any of the periods with long zero values. I felt pretty overwhelmed already with the sheer volume of information on time series so I didn't give everything a try.</p></li> <li><p>For experimenting, what I did was take a subset of the last 1 or two years and experimented with that. For the most part I was able to see whether features were important or not and my model ran pretty quick. The other thing that I noticed about the metric (at least in my case) was that it hit a "minimum" pretty quick within 500 iterations. So for experimenting I used between 250-500 iterations on LGBM. If I got an increase in score then I would try larger data/larger iterations. For the most part most of my features did not work.</p></li> </ol> <p>Hey <a href="/gavinli232">@gavinli232</a> thank you for the question here is an example of the two step process:</p> <p>Let's I have a regression output:</p> <p><code>regression = [20, 21, 25]</code></p> <p>and an output from the classification model:</p> <p><code>classification = [0.95, 0.73, 0.04]</code>,</p> <p>Then the first step is:</p> <p><code>regression * classification = [19, 15.33, 1]</code></p> <p>then the last step would set the last element to zero:</p> <p><code>[19, 15.33, 0]</code></p> <p>Thank you for the question <a href="/cnicault">@cnicault</a>. 
It was really difficult at first as I was trying to navigate through the competition. To choose new features I built them in blocks and then evaluated on the previous 2 years of the data. It was still quite a bit of data but ran a lot faster than all of the data.</p> <p>I did evaluate them on 3-folds each time (so the process was lengthy). For me though my machine has 128GB of ram and 32 cores to use so model was pretty quick. As a matter of fact it only took my 45 mins to train the whole model on all of the data.</p> <p>Another, thing that I noticed that helped me speed up my experiments is that the WRMSSE hit a minimum pretty quickly. In most experiments I really only needed 300 iterations anything else was just fighting for small gains to I ran the model on 350 - 550 iterations.</p> <p>I tried a lot: - I was not able to get ARIMA or ES working at all. - I got prophet to work decent in the upper 0.5X region, but in my work I actually use prophet quite a bit so I didn't really spend a lot of time here because I wanted to learn something new - I built different linear regression models using <code>gblinear</code> and could not get that to work properly - Tried neural networks, particularly wavenet and I could not get this to work at all - Tried the approaches from <a href="https://www.kaggle.com/c/favorita-grocery-sales-forecasting">https://www.kaggle.com/c/favorita-grocery-sales-forecasting</a>. I think there could be something here but I didn't proceed with it </p>
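<p>The two-step regression-times-classification blend in the worked example above is easy to state in code. A minimal NumPy sketch with the same toy numbers (an illustration, not the author's implementation): <code>
import numpy as np

regression = np.array([20.0, 21.0, 25.0])        # expected sales per item/day
classification = np.array([0.95, 0.73, 0.04])    # predicted probability of any sale

# Step 1: scale the regression output by the probability of a transaction.
blended = regression * classification

# Step 2: zero out predictions whose sale probability is under 5%.
blended[classification < 0.05] = 0.0

print(blended)   # [19.   15.33  0.  ]
</code></p>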
M5 Forecasting - Accuracy
[181st place] Silver medal at the lottery (no multiplier)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>About 3 months ago I invested quite some time on this competition, because I thought one could find a training procedure that would optimise the metric (WRMSSE). I tried to: <br> - build my own loss <br> - reweight training samples for W and S <br> - add all aggregation levels to the training set <br> none of the above improved my validation. </p> <p>Then I tried to do some feature engineering and tested against a 5-fold cross-validation + LB (4 + 1) but I soon realised that nothing worked to improve the WRMSSE on my CV. Albeit quite a few my engineered features would improve RMSE. <br> At this point I decided I would not spend more time on this and I focused on Twitter, which gave me much more satisfaction. I decided that I would come back to this competition only in the last week to put together the final submission and play the lottery, and so I did.</p> <p>At the end my simple solution is based on the following: <br> - RMSE as loss <br> - around 20 features, mainly calendar (weekday, month, etc.), events, snap and rolling target mean <br> - predict sales * price and divide by price afterwards <br> - average of 5 folds (GroupKFold) trained on the last 3 years <br> - stitch together 28 models, one per day </p> <p>I prepared only 2 submissions using the updated file, in order to avoid regrets for not choosing the right submission :) <br> My first submission was just a sanity check, using only the model at horizon=28 to verify that I did not screw up the submission file. After that I found a small bug, I prepared the 28 models (one per day), and submit. And guess what? The first submission (1 model only + bug) was worth the 29th place. But again, no regret because I would have never chosen that one. </p> <p>Finally many thanks to many people who inspired me a lot: <a href="/kyakovlev">@kyakovlev</a> , <a href="/sibmike">@sibmike</a> , <a href="/ragnar123">@ragnar123</a> </p> <p>P.S.: I did not use any multiplier</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F862519%2Fb1e79ad4a82b4581c426854bbdfe50cc%2FScreenshot%202020-07-01%20at%2003.21.23.png?generation=1593566728712969&amp;alt=media" alt=""></p> <p>It helps aligning with the metric definition, in particular with the W part which is computed using sell_prices * sales. It marginally improved my validation (~0.01)</p>
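<p>The "predict sales * price and divide by price afterwards" idea is just a target transformation. Below is a minimal LightGBM sketch of it on toy data; the frame, feature names and hyperparameters are my own assumptions, not the author's pipeline: <code>
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sell_price": rng.uniform(1.0, 10.0, 1000),
    "rolling_mean_7": rng.gamma(2.0, 1.0, 1000),   # stand-in engineered feature
    "sales": rng.poisson(2.0, 1000),
})

# Train on revenue = sales * price (this aligns the target with the W part of WRMSSE).
df["revenue"] = df["sales"] * df["sell_price"]
features = ["rolling_mean_7", "sell_price"]

model = LGBMRegressor(objective="regression", n_estimators=100)
model.fit(df[features], df["revenue"])

# At inference time, convert predicted revenue back to unit sales.
pred_sales = model.predict(df[features]) / df["sell_price"]
</code></p>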
M5 Forecasting - Accuracy
[36th place] New forecasting Package: tsboost
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm glad to hear it ;)</p> <p>First, congrats to all winners and participants!</p> <p>I would like to introduce a new open-source package I created a while ago for forecasting time series with a discrete time step, installable through PyPI and from my GitHub:</p> <p><a href="https://github.com/franck-durand/tsboost">https://github.com/franck-durand/tsboost</a></p> <p>The goal of this framework is to give data scientists a new tool that is ready to go to production (like Prophet), so that they can prototype faster.</p> <p>I will publish my full solution for M5 on my GitHub, as it uses this package plus some data pre-processing &amp; some feature engineering. But as I am a generally lazy person, I will do it... in a more or less near future.</p> <p>I also took 15th place in M5 Uncertainty, and I plan to implement the methodology I used into this time series framework in "a near future". That way it will be possible, via a meta-parameter of the package, to generate confidence intervals along with the point forecasts.</p> <p>It was a really interesting double competition. And I really like the testing methodology for time series:
- 100% of the score comes from private LB data (no bias from the public leaderboard in the final evaluation)
- we could submit only one submission (it's like being in a production context for time series)
- unfortunately it was still possible to introduce bias with future external data not available at the time the forecast was supposed to be made (for example US GDP of June 2016, not available at the end of May 2016)
- maybe the metric was not an easy one, but still, I believe it's a good one for hierarchical time series problems.</p> <p>I'm also a little bit sad for @kyakovlev; I believe he has taught a lot of good things and ideas to Kagglers, and he really deserved to be at least in the gold zone.</p> <p>Regards!</p> <p>Hi marketneutral! Indeed, the framework creates 28 independent forecasts (no recursive forecasts - that has never worked in production on my past projects - and of course I didn't use fancy multiplier coefficients).</p> <p>To summarize, the main features of the framework (meta-parameters can modify or deactivate each effect):
- forecasts independent steps
- different time steps possible (yearly, monthly, daily)
- putting many time series in the same problem (identified by one or more categorical variables)
- automatically adding date features like day, month, year, week, etc.
- custom stationarization process depending on seasonality &amp; other distribution modifications
- we are more interested in cross-correlation (time-delayed correlation)
- automatically adding lag features (stationarized ones and non-stationarized ones)
- sliding-window cross-validation integrated
- no leaks created (except if you put leaky features directly in the data)
- any upstream features can be added to the data
- retraining at every step to avoid target distribution shifts
- ready to go to production</p> <p>What will come in V2 in a "near future":
- manage future features easily
- time series stacking
- hourly time step forecasts
- easy and fast grouped forecasts
- confidence intervals
- more loss function minimizer choices
- bug fixes</p> <p>To summarize, it's just an accumulation of best practices that worked for my clients' time series projects in the past.</p>
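<p>To make the "28 independent forecasts" point concrete, here is a generic toy sketch of the direct multi-step strategy: one model per horizon step, each trained on a target shifted that many days ahead, so no model ever consumes its own predictions. This is my own illustration of the idea, not tsboost's internals or API: <code>
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor

# Toy daily series; in practice the features would be engineered date/lag features.
rng = np.random.default_rng(0)
y = pd.Series(rng.poisson(3, 400).astype(float))
X = pd.DataFrame({"lag_28": y.shift(28), "lag_35": y.shift(35)}).dropna()
y = y.loc[X.index]

# Direct strategy: one independent model per horizon step h = 1..28.
models = {}
for h in range(1, 29):
    target = y.shift(-h).dropna()          # sales h days ahead of each row
    model = LGBMRegressor(n_estimators=50)
    model.fit(X.loc[target.index], target)
    models[h] = model

# Forecast the next 28 days from the last available feature row.
last_row = X.iloc[[-1]]
forecast = [models[h].predict(last_row)[0] for h in range(1, 29)]
</code></p>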
M5 Forecasting - Accuracy
[64th Place] A simple low computation model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey Kagglers!</p> <p>First off, I want to thank the whole community for sharing their findings - this is a great platform to learn because of the people! </p> <p>Secondly, I was shocked to have gotten such a high placement; I know I don't have even close to the competencies of so much of the community. </p> <p>I wanted to share one part of my model, since I haven't seen or read anyone doing something similar, and it made things easier and quicker to compute. For predicting 28 days into the future, I've seen different options such as recursion (predicting day 1, then using that prediction to create the features for day 2, and so on) or having 28 different models, each predicting x days ahead. I used a simpler version which just repeated the last 28 days of the training data for the prediction period and built the features off that. </p> <p>More specifically, I used 3 years of data to make a training set with 2 years of data and up to 365-day rolling features. So my training set had data from d1183 to d1913 (when the item had enough sales data). Then, to predict d1914 to d1941, I took the amount of sales from d1886 to d1913 and copied it over as the sales amounts for d1914 to d1941. Those copied sales amounts were used to create all the rolling and shifting features of d1914 to d1941. My entire model only had 3 training sessions: one for hobbies, one for household and one for foods. </p> <p>With my own WRMSSE calculation, I tried both a 28-day repeat and also repeating the last 7 days 4 times, but got better results repeating the 28 days.</p> <p>I was quite surprised a simple method would get a good result. </p> <p>Thanks again to everyone out there</p>
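<p>The "repeat the last 28 days" trick described above can be sketched in a few lines of pandas. The wide toy frame below is my own assumption (columns d_1 ... d_1913, one row per series), used only to illustrate the idea: <code>
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
sales = pd.DataFrame(
    rng.poisson(1.0, size=(3, 1913)),
    columns=[f"d_{i}" for i in range(1, 1914)],
)

# Copy the last 28 observed days (d_1886 ... d_1913) as stand-in sales for the
# prediction period d_1914 ... d_1941, then build rolling/shift features on the
# extended table exactly as for training -- no recursion, no per-day models.
last_28 = sales[[f"d_{i}" for i in range(1886, 1914)]].to_numpy()
future = pd.DataFrame(
    last_28,
    columns=[f"d_{i}" for i in range(1914, 1942)],
    index=sales.index,
)
sales_extended = pd.concat([sales, future], axis=1)
</code></p>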
M5 Forecasting - Accuracy
[68th place] Stable solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!. With "LightGBM using default encoding" I mean using the default categorical encoding of lgbm (using the <code>categorical_feature</code> param). This differs from the other approach in that the <code>item_id</code> was encoded using the GLMM encoder. </p> <p>Congratulation to the winners! And thanks to all of you who shared your knowledge during this competition. I share my solution because I haven't seen nothing similar yet, and because proved to be very stable over my CV framework and the private leaderboard.</p> <h2>Validation framework</h2> <p>I used 4 folds for validation:</p> <p>F1: <code>2015-04-25</code> - <code>2015-05-22</code> F2: <code>2015-05-23</code> - <code>2015-06-19</code> F3: <code>2016-03-28</code> - <code>2016-04-24</code> F4: <code>2016-04-25</code> - <code>2016-05-22</code></p> <h2>Base models</h2> <p>Only used single LightGBM with recursive one step ahead prediction trained over the entire dataset as base model.</p> <p>These are the hyperparameters: <code> model_params = { 'objective':'tweedie', 'tweedie_variance_power': 1.1, 'metric':'None', 'max_bin': 127, 'bin_construct_sample_cnt':20000000, 'num_leaves': 2**10-1, 'min_data_in_leaf': 2**10-1, 'learning_rate': 0.05, 'feature_fraction':0.8, 'bagging_fraction':0.8, 'bagging_freq':1, 'lambda_l2':0.1, 'boost_from_average': False, } </code></p> <p>And these are the features: * time-attributes (year, month, year week, week day) and cyclical transformations. * Rolling mean, median, std and kurtosis. window shifts: [1, 7, 28], window sizes: [7,14,28]. Kurtosis is a great idea to stabilize the effect of outliers. * Rolling count of consecutive zeros. * Rolling sales frequency (number of days with sales / size of the window). * Standar price features + rolling transformations to detect the effect of price changes. * For important events (christmas, new year, 4 July, superbowl, easter, NBA finals and thanks giving) I created separate binary features. Additionally I created an extra binary feature for events where the sales are affected the day before.</p> <p>Important considerations: 1. When using tweedie as loss, the model is unable to predict 0. This is a problem for features like rolling count of consecutive zeros or rolling sale frequency. Then, for each step-ahead prediction I truncated to zero all predictions below <code>rho</code>, before predicting the next time step. The value of <code>rho</code> was estimated observing how the model predicted on the periods with long runs of zeros. 2. By varying the <code>seed</code> value I observed changes of 0.02X in the WRMSEE, i.e., is not a good idea to use a single model with a single seed (unless you are a lucky person). 3. LightGBM spent almost all its splits in <code>item_id</code> when using default categorical handling (left figure). 
On the other side, if you encode <code>item_id</code> instead of the default encoding, more features are used for split (right figure): <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F350756%2F8ea5de2184701018474057dfc0618561%2FScreen%20Shot%202020-07-02%20at%2010.33.40.png?generation=1593704069070764&amp;alt=media" alt=""></p> <p>I built 60 models in total, 35 models with <code>item_id</code> default encoding and 25 models with <code>item_id</code> <a href="http://contrib.scikit-learn.org/category_encoders/glmm.html">GLMM encoded</a>. To give more diversity to the ensemble, each model was trained with a different seed and the number of iterations was set randomly between 500 and 700 (this range was obtained through cross-validation). </p> <p>For ensembling I simply aggregated all the predictions using the <a href="https://en.wikipedia.org/wiki/Trimean">trimean</a>, which is a robust central tendency statistic. I decided not to use a more complex aggregation strategy after reading this thread: <a href="https://stats.stackexchange.com/questions/47950/ensemble-time-series-model">https://stats.stackexchange.com/questions/47950/ensemble-time-series-model</a>.</p> <p>For each of the 60 models I applied the trend correction and hierarchical reconciliation procedures (described below) <strong>before aggregating</strong>.</p> <h2>Trend Correction</h2> <p>The problem with recursive models is that the error propagation produces over estimation on the aggregated time series. I observed this behavior in all the folds except the public leaderboard fold where this over estimation was beneficial: <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/155770">discussion</a>.</p> <p>Looking at the predictions of my model on top Levels (level 1 &amp; level 3) I noticed a great deviation. However, the shape of the sales curve was right. From here comes the idea to build a trend correction method on the top levels. This method must have the following properties: <strong>properly correct the trend when it is deviated, and be the identity when the trend is right</strong>.</p> <p>The trend correction method I developed works by fitting 2 trend models for each time series to be corrected: 1. Fit a trend model to the training time series concatenated with the 28 step ahead predictions. This model should be very adjusted in order to capture the change in the last 28 time steps. 2. Fit a trend model to the training time series only. 
This models should generalize well the trend for the next 28 time steps.</p> <p>In this figure you can see the method applied to fold F3 and store <code>CA_1</code> (were predictions are over estimated on level 3): <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F350756%2F7505a5e0aab18ec5058d1d9eca7fbd0b%2FScreen%20Shot%202020-07-02%20at%2009.36.36.png?generation=1593700730273178&amp;alt=media" alt=""> And in this figure you can see the method applied to fold F3 and store <code>WI_3</code> (were predictions are well estimated on level 3): <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F350756%2Fdd453ffa7ca98c2e745a123b1cec7aec%2FScreen%20Shot%202020-07-02%20at%2009.37.20.png?generation=1593700921331810&amp;alt=media" alt=""></p> <p>The trend correction model I use was <a href="https://www.slac.stanford.edu/cgi-bin/getdoc/slac-pub-3477.pdf">Friedman's supersmoother</a>, with this <a href="https://github.com/jakevdp/supersmoother">great python implementation</a>. I applied the trend correction on level 3 (store_id), and tuned the hyperparameters of supersmoother over the 4-folds using Optuna. </p> <p>Here is the script I used to tune supersmoother: <a href="https://github.com/mavillan/m5-forecasting-accuracy/blob/master/scripts/trend_correction.py">trend_correction.py</a>.</p> <h2>Hierarchical reconciliation</h2> <p>Once the trend are corrected at store level, it is necessary to perform forecast reconciliation at store-item level. </p> <p>I tried almost all method described here: <a href="https://otexts.com/fpp2/hts.html">hierarchical time series</a>. I spent a lot of time with the optimal combination approach with weighted least-squares, but could't find an appropriate way of weighting the predictions. At the end I used the forecast proportions approach, which gave me the best results in CV. </p> <h2>Final words</h2> <p>Finally, this is how the trend correction method at store level looks like at level 1 for the predictions in the test period: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F350756%2Ffb2462d3815d3004447d10687a6e0b46%2FScreen%20Shot%202020-07-02%20at%2015.15.16.png?generation=1593721195401858&amp;alt=media" alt=""> the orange line (<code>reconc</code>) is the store-level trend corrected predictions, aggregated at level 1. In discrepancy with some Kagglers, I think that the decrease in sales in the test period was predictable with the data at hand. However, is something very hard to capture with a model trained at store-item level, where the time series are quite noisy. </p> <h2>Contributions</h2> <p>During this competition I built two python packages that I would like to share with the Kaggle community:</p> <p>1) <strong>tsforest</strong> (I have been working for a while on this one) <a href="https://github.com/mavillan/tsforest">github repo</a>. This package provides a complete framework for efficiently handle multiple time-series datasets and building a GBRT forecast models, using any of LightGBM, XGBoost, CatBoost and H2O GBM as backends. Using this package in a 16 cores VM I was able to: * Build the lagged/rolling window training features in ~10 min (including custom rolling window statistics). * Training LGBM over the entire dataset in ~30 min. * Perform the recursive one step ahead predictions in ~10 min. 
</p> <p>Here is the script I used to built the 60 models using <strong>tsforest</strong>: <a href="https://github.com/mavillan/m5-forecasting-accuracy/blob/master/scripts/forecast.py">forecast.py</a>. As you can see, using tsforest makes the code clean and easy to read.</p> <p>2) <strong>mahts</strong> (model agnostic hierarchical time series) <a href="https://github.com/mavillan/mahts">github repo</a>. This package implements algorithms for hierarchical times series reconciliation, based on the <a href="https://cran.r-project.org/web/packages/hts/index.html">hts R package</a> but has the benefit of being agnostic to the forecasting model.</p> <p>Almost right. The trend correction was done only at level 3 (store_id), and then the hierarchical reconciliation is performed store by store using forecast proportions approach to get reconciled forecasts at level 12. </p> <p>This is done for each of the 60 models, and the result is averaged using the trimean. </p> <p>Yes, I always performed the reconciliation starting from level 3. </p> <p>I used optimal combination through weighted least squares. The problem here is the difference in scale between the level 3 and level 12, which require to weight each time-step prediction according to its scale and how confident you are about each prediction. I tried different weighting methods:</p> <ol> <li>Use the variance of the model residual for each time series as done in <a href="https://cran.r-project.org/web/packages/hts/index.html">hts</a>. </li> <li>The scale value used in WRMSSE.</li> <li>log(scale) where scale is the same as 2.</li> <li>And different variants of transformations on the scale value.</li> </ol> <p>In general, the WRMSSE improvement between forecast proportions and optimal combination approaches was similar, but the latter was more difficult to tune, and so I decided to use forecast proportions.</p> <p>I still think that optimal combination approach should work better, but I didn't have enough time and expertise to make it.</p> <p>I didn't build 60 models for each fold... but these are the errors with 3 models by fold: * F1: 0.5065 * F2: 0.5878 * F3: 0.50130 * F4: 0.49076</p>
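<p>Of the pieces above, the trimean ensembling is the simplest to show in isolation. A minimal NumPy sketch on a toy prediction stack (not the author's tsforest pipeline): <code>
import numpy as np

def trimean(preds, axis=0):
    # Tukey's trimean: (Q1 + 2 * median + Q3) / 4, a robust central tendency.
    q1, med, q3 = np.percentile(preds, [25, 50, 75], axis=axis)
    return (q1 + 2.0 * med + q3) / 4.0

# Toy stack of predictions: 60 models x 28 forecast days.
rng = np.random.default_rng(2)
all_preds = rng.gamma(2.0, 1.0, size=(60, 28))

ensemble = trimean(all_preds, axis=0)   # one robust aggregate per day
</code></p>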
M5 Forecasting - Accuracy
[84th place] No multiplier
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners! Looking forward to the write up.</p> <p>In this competition, I relied solely on my CV and my best score private LB score corresponds to my best CV - no regrets, playing it conservatively with no multiplier paid off somewhat as it got me my second silver medal and expert status :)</p> <h2>CV setup</h2> <p>6 Time Series folds: 250 weeks of training + 4 weeks validation</p> <h2>Model</h2> <ul> <li>LGBM</li> <li>6x4 models: 6 folds x 1 model per 7 days so that for the first 7 days I have 6 folds, the next 7 days another 6 folds etc</li> <li>Tweedie loss (I tried to build my own tweedie loss with different rho for each product but could not make it work due to how LGBM computes the <a href="https://github.com/microsoft/LightGBM/issues/3155" target="_blank">log link</a>)</li> <li>Gradient using weights equal to the level 12 aggregation weights</li> <li>Early stopping using WRMSSE loss. Thanks <a href="https://www.kaggle.com/sibmike" target="_blank">@sibmike</a> for your <a href="https://www.kaggle.com/sibmike/fast-clear-wrmsse-18ms" target="_blank">notebook</a>!</li> </ul> <h2>Feature engineering</h2> <h3>Common ideas - can be seen in most public notebooks</h3> <ul> <li>Calendar features: day of week, month, events, snap day as 0/1</li> <li>Price: normalized price (normalization done for each product separately), price momentums (ratio of today's price and yesterday's price + ratio of today's price over mean of last 7 days)</li> <li>Rolling averages of sales over last 7/14/28/52 days</li> <li>Rolling averages of sales over last 3/5 same day of week</li> </ul> <h3>Other ideas</h3> <ul> <li>Forward and backward lags on events and snap days so that for every row, I have not only the event that happens in the current day but as well the events that happened or will happen over a period of 3 days =&gt; significant boost in CV (0.05)</li> <li>Out of stock handling: I created a small algorithm to identify if a zero is due to out of stock or if it is just a regular no sales day. The model was then trained with this information (all rows with a out of stock flag would have zero sales). The inference is done by assuming no out of stock day which obviously creates a bias that is corrected in the post processing, see below =&gt; 0.15 boost in CV. <a href="https://www.kaggle.com/ismailbbm/out-of-stock-detection" target="_blank">Notebook for reference.</a></li> </ul> <h3>Discarded ideas - no improvements on my CV</h3> <ul> <li>Standard deviation, kurt, skew of sales</li> <li>Mean encoding on item_id, dept_id…</li> <li>Store wise models</li> </ul> <h2>Post-processing</h2> <ul> <li>Out of stock correction: to correct for the bias introduced by this feature, I have to try to predict out of stock zeroes in the inference. I am doing so by looking at the patterns of consecutive zeroes of each id. For example if a product has almost never zero sales and then for 2 consecutive days, it has a zero, we can assume that this is due to out of stock and that it is likely that the next days are zeros as well. 
<a href="https://www.kaggle.com/ismailbbm/out-of-stock-detection" target="_blank">Notebook</a></li> <li>Very small predictions (less than 0.1) are transformed to zero. To compensate, predictions between 0.9 and 1 are rounded to 1 =&gt; 0.03 boost in CV</li> <li>Did not use any kind of arbitrary multiplier as none improved my CV</li> </ul> <p>Anyway, I enjoyed working on this time series problem and I learned a lot from the community! Hopefully I can give back a bit.</p> <p><a href="/sibmike">@sibmike</a> thank you for your comments. I added comments to the notebook and tried to explain as best as I could what I had in mind. Let me know what you think!</p> <p><a href="/sibmike">@sibmike</a> Not that I know of but I would suggest looking at other Poisson or Tweedie distribution phenomena that would have as well an equivalent for OOS. <a href="https://en.wikipedia.org/wiki/Poisson_distribution#Occurrence_and_applications">https://en.wikipedia.org/wiki/Poisson_distribution#Occurrence_and_applications</a></p> <p><a href="/salonsai">@salonsai</a> OK so the idea behind the one model per week is to try to use the most recent data whenever possible. Optimally, I would have had one model per day e.g. one model to predict day 1, another to predict day 2 etc. The reason is that in this case you can make use of the most recent data available (i.e. last day of training data). But due to practical reason, I had to make a compromise and train 4 models instead of 28 models.</p> <p>The 6 folds is just for the sake of slight diversity and increased stability.</p> <p>So now coming back to the weekly model. Let's take week 1. When I try to predict week 1, I know that for day 7, the most recent information I have is of day 0 (a 7 day lag). So for week 1 model, all features I use have a 7 days delay. So for model week 1, I use in this case the rolling averages of sales over the last 7/14/28/52 + 7 days lag. For week 2, the most recent data I can use is 14 days old, so for feature engineering, I use the rolling averages of sales over the last 7/14/28/52 + 14 days lag. Etc.</p> <p>Here are the scores which I kept track of:</p> <p>| Model iteration | CV score | LB public | LB private | | --- | --- | --- | --- | | 26 | 0.625 | 0.524 | 0.629 | | 27 | 0.592 | 0.520 | 0.616 | | 28 | 0.595 | 0.553 | 0.595 | | 31 | 0.583 | 0.517 | 0.599 | | 32 | 0.577 | 0.522 | 0.593 |</p> <p>I did not submit very often as given the significant variations I was seeing in my out of fold scores, I came to the conclusion that using public LB as a validation score could be deceiving.</p> <blockquote> <p>The train data begin date and end date.</p> </blockquote> <p>First fold would have as validation set, the last 28 days of the competition available training dataset, and as training, I used the 250 weeks just before that. The second fold, I would shift back 28 days. The third fold, I would shift again back by 28 days etc</p> <blockquote> <p>each week have 6 model and make aggregate mean of 6 predictions</p> </blockquote> <p>Exactly</p> <blockquote> <p>The first 7 days have 6 folds, And how to build another 6 folds for next 7 days?</p> </blockquote> <p>For each week, I train 6 new models over the same train/test folds. The only difference is of course of the features that are used. So that the first week, I use data up to 7 days prior, the second week up to 14 days prior...</p>
M5 Forecasting - Accuracy
[Place 99] A few insights (no multiplier)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi,</p> <p>as expected, we had a huge shakeup in the leaderboard. I just wanted to share my insights:</p> <p>Multiplier: I did not use multipliers because I think they have no practical use.</p> <p>Memory: I had huge problems with the memory restrictions and the amount of data because I transformed the data into long format. That's why I switched to Google Colab as my working environment and used Kaggle just for parallel data processing. I only used day 900+ for training. Besides pragmatic considerations, the upward trend in the mean value was an indicator that our time series is not stationary. Therefore, I think it is easier for our model to detect patterns in the data if the targets are within a range.</p> <p>Day-by-day: I chose a day-by-day model despite worse scores, because I was afraid of continuing a possibly wrong trend with a recursive model. Since we only had one submission, I chose a conservative approach. I saved a separate training dataset for each day to reduce the amount of data.</p> <p>Features - Lags:
- Rolling lags (7, 14 - I just realized that I forgot to add the other rolling lags such as 28 and 180, etc. in my final model...)
- Lags (1-7)</p> <p>Features - Prices:
- Price max, min, std, mean
- New item (yes/no - I created 20 categories and labeled the last two as yes)
- Time since release
- Difference of each price to the mean price
- Price normalization
- Unique prices for each item
- Max, min, std, mean seasonal prices (for each year)
- Difference of each price to the seasonal mean price
- Rolling price over the last 15 days
- Difference to this average</p> <p>Features - Calendar:
- Day, week and month of the year
- Day of the week
- Week of the month
- Month of the year
- End of the week yes/no
- Start/end of the month yes/no
- Start/end of the quarter yes/no
- Start/end of the year yes/no
- Lags to the events (many people shop the day before ;) )</p> <p>Mean Encoding:
- Id cat, Id dept, Id item mean encoding
- Id cat, Id dept, Id item std encoding</p> <p>Loss/objective:
- Standard RMSE and Tweedie, since I didn't get anything else working well for me</p> <p>Things which didn't work for me:
- Target transformation: dividing the sales of each item by the mean value of the item
- Target transformation: using revenue as target and revenue as lags, then dividing by the sell price afterwards
- Target transformation: normalization of each target by item id
- Target transformation: using the mean of the last month as a multiplier for the other sales</p> <p>I invested a lot of time in target transformations to make the time series stationary. I wanted to test with the deltas (t - t-1), but unfortunately I didn't have any more time, because I joined very late (<a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/155950">https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/155950</a>).</p> <p>I just wanted to say thank you <a href="/kyakovlev">@kyakovlev</a> for the very valuable input over the whole competition period. You did an awesome job! 
And a special thanks to the people who answered my questions in such detail: <a href="/chrisrichardmiles">@chrisrichardmiles</a> <a href="/tobiit">@tobiit</a> <a href="/miguelpm">@miguelpm</a> </p> <p>I really enjoyed this competition. Best wishes!</p> <p>No, I didn't transform the target. I tried out the different approaches that I describe at the end of my post, but always received very bad scores.</p> <p>I only cut the first part of the time series to have a more even level in the data (at the beginning there is a strong upward trend in sales on level 12).</p>
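<p>As a concrete illustration of the mean/std encodings listed above, here is a minimal pandas sketch on a toy frame (my own column names; in practice the statistics should be computed on training folds only to avoid target leakage): <code>
import pandas as pd

df = pd.DataFrame({
    "item_id": ["A", "A", "B", "B", "C", "C"],
    "dept_id": ["D1", "D1", "D1", "D2", "D2", "D2"],
    "sales": [3, 5, 0, 2, 7, 4],
})

# Mean and std encoding of the categorical ids by the target.
for col in ["item_id", "dept_id"]:
    df[f"{col}_mean_enc"] = df.groupby(col)["sales"].transform("mean")
    df[f"{col}_std_enc"] = df.groupby(col)["sales"].transform("std")

print(df)
</code></p>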
M5 Forecasting - Uncertainty
11th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners! Below is a summary of our solution. Hope it will be useful to you in some way.</p> <h2>Overview of Solution</h2> <p>Our solution combines three types of models:</p> <ol> <li>Autoregressive RNNs that model the probabilistic distribution of the sales directly.</li> <li>Autoregressive RNNs to obtain the “inherent” uncertainty of the forecasted sales.</li> <li>Tree-based models to obtain bottom-up point estimates of the sales.</li> </ol> <p>The reason for having three types of models stems from the hierarchical nature of the time series -- certain models are more capable of modeling the uncertainty for a given level in the hierarchy.</p> <h2>Our Solution (with a little more detail)</h2> <p>As mentioned, we are dealing with hierarchical time series. On page 3 of the M5 competitors' guide, we are given a breakdown of the aggregation levels. Based on the behavior of the data, it seemed natural to us to divide the hierarchies into two types:</p> <ol> <li>The aggregated sales of individual products (series levels 10, 11, and 12) exhibit more “count-like” behavior. They are also generally more intermittent. For these series, modeling the uncertainty as a poisson or negative binomial seemed to make sense.</li> <li>The higher-level sales aggregations (series levels 1 through 9) exhibit a more continuous behavior. Modeling the sales distribution using some continuous probability distribution (e.g. Gaussian) seemed natural in this case.</li> </ol> <h3>- 1st Hierarchy Type</h3> <p>For the first hierarchy type, we directly modeled the distribution of the sales using an autoregressive RNN, specifically <a href="https://arxiv.org/pdf/1704.04110.pdf" target="_blank">DeepAR</a>. In a nutshell, DeepAR attempts to model the conditional distribution of the future of a time series given its past. During training, the model learns to parametrize a likelihood function of a chosen probability distribution (in our case, we chose a negative binomial distribution given the count-like nature of the data). Then, during inference, we can draw samples from the selected probability distribution parametrized by the model and compute quantiles from the drawn samples. </p> <p>We developed our DeepAR model largely from this great <a href="https://www.kaggle.com/steverab/m5-forecast-compet-uncert-gluonts-template" target="_blank">kernel</a>. </p> <h3>- 2nd Hierarchy Type</h3> <p>For the second hierarchy type, we initially tried to model the distribution directly using DeepAR with a Gaussian distribution. While the results were decent, we found that there was a high sensitivity towards the random seed used and performance occasionally varied substantially across folds. It turned out that we were unable to get the DeepAR model to capture the center (median) of the distribution consistently.</p> <p>We opted instead for a bottom-up approach, in which we predict the individual product sales and sum the respective predictions to get the predicted median of the aggregated sales. We found that LightGBM models from the M5 accuracy competition worked well for this. 
In particular, we adapted the LightGBM model from <a href="https://www.kaggle.com/kneroma/m5-first-public-notebook-under-0-50" target="_blank">this kernel</a>. Key changes made were:</p> <ul> <li>Removed “magic” multipliers</li> <li>Tuned hyperparameters to mitigate overfitting</li> <li>Feature engineering pertaining to price volatility</li> </ul> <p>Clearly, the bottom-up approach only gives us part of what we want (the median). To obtain the rest of the required quantiles, we turned back to our DeepAR (with Gaussian distribution) approach, where we used the ratio of the quantile prediction to the median prediction as a multiplier. For example, for a given series and prediction day, if the ratio of (say) the 25th percentile to the median in the DeepAR model is 0.95, then to get the 25th percentile of the corresponding LightGBM prediction, we simply multiply that prediction with 0.95. And we do this individually for all of the aggregated series across each time step in the horizon.</p> <p>We felt that this approach made sense in that the DeepAR model captures the “inherent uncertainty” in the sales, which should “carry over” to the predictions from the LightGBM model. Admittedly, this justification is hand-wavy and lacks any statistical rigor. Empirically, however, this approach resulted in stable improvements across folds and was certainly preferable to manually tuning “quantile multipliers”.</p> <p>Our approach can be summarized as follows:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1197762%2F3f97598e043e1828b9ba27994767d5ee%2Fm5_sol.png?generation=1593598292136374&amp;alt=media" alt=""></p> <h2>Validation Strategy</h2> <p>We realized early on that it was unwise to rely on a single 28-day validation horizon given the volatility of the competition metric, coupled with the ease of overfitting. Intuitively, we also felt that the recency of the folds matters. Bearing these findings/thoughts in mind, we ended up using the three most recent 28-day periods as our validation “folds” for our model development.</p> <h2>Some Stats and Findings</h2> <p>We report the weighted scaled pinball loss (WSPL) on our validation folds for some of our experiments:</p> <table> <thead> <tr> <th>Experiment</th> <th>D1858 - D1885</th> <th>D1886 - D1913</th> <th>D1914 - D1941</th> </tr> </thead> <tbody> <tr> <td>LightGBM + “point to uncertainty kernel”</td> <td>0.1967</td> <td>0.1739</td> <td>0.1672</td> </tr> <tr> <td>+ DeepAR (Gaussian) ratio multipliers for 2nd hierarchy type series</td> <td>0.1935</td> <td>0.1656</td> <td>0.1567</td> </tr> <tr> <td>+ DeepAR (Negative Binomial) for 1st hierarchy type series</td> <td>0.1744</td> <td>0.1512</td> <td>0.1414</td> </tr> <tr> <td>+ Ensemble LightGBM with modified “darker magic” kernel from accuracy competition</td> <td>0.1683</td> <td>0.1499</td> <td>0.1408</td> </tr> </tbody> </table> <p>The model in the last row was our final submission, yielding a private LB score of 0.16721.</p> <p>Some noteworthy findings:</p> <ul> <li>The magnitude of the WSPL can vary quite noticeably across different folds. However, the directionality of the loss is generally quite consistent across all folds (i.e., improvement in one fold is typically accompanied by the same relative improvements in all other folds).</li> <li>We found that training separate DeepAR networks for each individual product aggregation (series levels 10, 11, and 12) resulted in consistent small improvements in WSPL across all validation folds (~0.002). 
This could be due to DeepAR being a global model, and so by separating the series by their hierarchies, the separate models can better capture the nuances within each hierarchy. </li> <li>Training the same models using different seeds and ensembling them did not give any noticeable improvement. Neither the naive way of simply averaging the quantile predictions of each model, nor drawing samples simultaneously across each model and then computing the quantiles from the pool of samples worked.</li> <li>In the final days of the competition, we ensembled our LightGBM bottom-up model with the one from <a href="https://www.kaggle.com/kyakovlev/m5-three-shades-of-dark-darker-magic" target="_blank">this kernel</a>, with slight modifications to reduce overfitting. As seen in the table above, this led to slight improvements in score.</li> </ul> <h2>Stuff that did not quite work/didn’t get to try</h2> <p>We attempted to model the distribution non-parametrically using a network called <a href="https://arxiv.org/pdf/1906.04397.pdf" target="_blank">DeepTCN</a>. While performance was decent, we couldn’t find a way to ensemble the predictions with our existing solution.</p> <p>Modeling the different hierarchies separately discards potential information gain from combining higher and lower aggregation levels. Unfortunately, we did not have the time to explore forecast reconciliation methods.</p> <h2>Closing Remarks</h2> <p>Special thanks to <a href="https://www.kaggle.com/steverab" target="_blank">@steverab</a>, <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a>, and <a href="https://www.kaggle.com/kyakovlev" target="_blank">@kyakovlev</a> for the aforementioned kernels that we leveraged. And of course, we would like to express our gratitude to Kaggle and the competition host for the opportunity to participate in this competition.</p> <p>Hi <a href="/gmcaixeta">@gmcaixeta</a>, thanks! We used <code>gluonts==0.4.2</code> and <code>mxnet==1.4.1</code>. We faced some issues getting newer versions of <code>gluonts</code> to work, so we just stuck with <code>0.4.2</code>. We trained on GPU.</p> <p>Hope this helps!</p>
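<p>To make the ratio-multiplier step above concrete, here is a minimal Python sketch. It assumes we already have a DeepAR quantile forecast and a bottom-up LightGBM median forecast for one aggregated series; the array layout and names are illustrative, not the team's actual code.</p> <pre><code>import numpy as np

# the 9 quantiles required by the competition
QUANTILES = [0.005, 0.025, 0.165, 0.25, 0.5, 0.75, 0.835, 0.975, 0.995]

def scale_quantiles_by_ratio(deepar_quantiles, lgbm_median):
    # deepar_quantiles: shape (9, horizon) for one aggregated series
    # lgbm_median: shape (horizon,), the bottom-up point forecast
    deepar_median = deepar_quantiles[QUANTILES.index(0.5)]
    ratios = deepar_quantiles / np.maximum(deepar_median, 1e-6)  # quantile / median per day
    return ratios * lgbm_median  # carry the "inherent uncertainty" over to the LightGBM forecast

# toy example with a 28-day horizon
rng = np.random.default_rng(0)
deepar_q = np.sort(rng.normal(100.0, 10.0, size=(9, 28)), axis=0)
lgbm_med = rng.normal(100.0, 5.0, size=28)
scaled_quantiles = scale_quantiles_by_ratio(deepar_q, lgbm_med)
</code></pre>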
M5 Forecasting - Uncertainty
18th place (silver medal) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to all the selfless public kernel authors! My work is basically built on your notebooks. It's more important to focus on the knowledge than on the result, right? This sharing is my way of thanking all community members; it's you who made Kaggle such a friendly, wise community.</p> <h3>Accuracy-Competition</h3> <ul> <li><p>FE: adding a <code>weekends</code> feature.</p></li> <li><p>I used 4 single models:</p> <ul><li><p>time series split 3 fold cv - rmse loss function</p></li> <li><p><a href="https://www.kaggle.com/rikdifos/timeseriessplit-cv-poisson">time series split 3 fold cv - poisson loss function</a></p></li> <li><p>time series split 3 fold cv - self-defined loss function</p></li> <li><p>rolling prediction</p></li></ul></li> <li><p>Then I take the mean of all four models' predictions and recode very tiny predictions to 0. Finally, I apply a magic multiplier close to 1.</p></li> </ul> <h3>Uncertainty-Competition</h3> <ul> <li><p>Use the <a href="https://www.kaggle.com/szmnkrisz97/point-to-uncertainty-different-ranges-per-level">public kernel</a> released by <a href="https://www.kaggle.com/szmnkrisz97">KrisztianSz</a> to convert point predictions to uncertainty estimates. I used my final accuracy submission as input to obtain single submission 1.</p></li> <li><p>Use the <a href="https://www.kaggle.com/ulrich07/quantile-regression-with-keras">public kernel</a> released by <a href="https://www.kaggle.com/ulrich07">Ulrich GOUE</a> as single submission 2.</p></li> <li><p>Finally, I averaged the two submission files above and recoded all negative predictions to 0.</p></li> </ul> <p>All code can be downloaded from my GitHub repo: <a href="https://github.com/songxxiao/m5_compete">https://github.com/songxxiao/m5_compete</a></p>
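<p>As a rough illustration of that final blending step (averaging the two submission files and recoding negative predictions to 0), here is a minimal pandas sketch. It assumes both files follow the competition submission format (an <code>id</code> column followed by <code>F1</code> to <code>F28</code>); the file names are placeholders.</p> <pre><code>import pandas as pd

sub1 = pd.read_csv("point_to_uncertainty_submission.csv")  # placeholder file name
sub2 = pd.read_csv("keras_quantile_submission.csv")        # placeholder file name

# align the second file to the first by id before averaging
sub2 = sub2.set_index("id").loc[sub1["id"]].reset_index()

blend = sub1.copy()
value_cols = [c for c in blend.columns if c != "id"]
blend[value_cols] = (sub1[value_cols].values + sub2[value_cols].values) / 2.0
blend[value_cols] = blend[value_cols].clip(lower=0)  # recode negative predictions to 0
blend.to_csv("submission.csv", index=False)
</code></pre>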
M5 Forecasting - Uncertainty
1st student - 12th overall solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>After missing the gold medal zone by picking the wrong submission in the 2019 Data Science Bowl, here I missed it again by just one place, what a pity. Anyway, if this will be my 2nd first student prize after the DSB, I'm more than happy.</p> <p><strong>Here a short wrap up of my solution:</strong></p> <p>I focussed on the Accuracy part of the competition (where my final private LB rank isn't that good though (274/5558)) and decided to go for day-to-day LGBM models. The notebook to create my uncertainty forecasts for horizon h=8 can be found here: <a href="https://www.kaggle.com/tobiit/m5-uncertainty-lgbm-f8-eval/">https://www.kaggle.com/tobiit/m5-uncertainty-lgbm-f8-eval/</a> The notebooks for other forecast horizons look similar.</p> <p><strong>The following steps are carried out for each horizon:</strong></p> <p><strong>1. Preprocessing:</strong> - Process calendar dataframe to get some more features - Process selling prices dataframe: first, I built selling prices for aggregate time series by just taking the mean of the aggregated time series (e.g. for the Total aggregation, I put as sell_price the mean of all sell prices); second, I built some more features as sell_prices_std and so on - Process sales dataframe: first, I built the aggregate time series for all levels. For the state_id columns etc. I used the following strategy: If all aggregated time series had the same value, I used it, otherwise I put nan (e.g. for the CA_1 aggregate, all aggregated time series have state_id CA, so I left it; for the FOODS aggregate on the other hand, the aggregated time series have different state_ids, so I put state_id nan). Next, I removed some outliers (Christmas, Thanksgiving and something else), normalized each of the time series (details see below), and computed further features as rolling means, target encodings of calendar features etc. </p> <p><strong>2. Modelling:</strong> For each quantile q, I trained a LGBM model with objective 'quantile' and 'alpha'=q on all time series. The last 28 days were left out as validation set for early stopping. The WSPL weights were used during training passing them to the LGBM dataset as train = lgb.Dataset(train[features],train[['demand']],weight=train['weight'])</p> <p><strong>Normalization of sales time series</strong> Based on some CV tests in my work on the accuracy part, I came up with the following normalization for all sales time series (aggregated ones and normal ones): First, I divided each time series by their nonzero mean. Second, to remove the trend, I considered some form of differences and set a_trendRemoved[t]=a[t]+maxRollMean(28)-laggedRollMean(28,h) for each mean normalized time series a[t]. Here maxRollMean(28) is the maximum rolling mean over a 28 days period that the time series had anywhere during the 1941 provided days. The laggedRollMean(28,h) is the mean of a[t-28-h+1],...,a[t-h]. So how did I come to this? Actually I wanted to use something like a_trendRemoved[t]=a[t]-rollMean(28) meaning how the sales at time t differ from the mean of the last 28 days. 
However, undoing this preprocessing after the predictions requires using the predictions for the days F1,...,F(h-1) to build the rollMean(28), and I wanted the forecasts for each time horizon to be independent from all other forecasts to prevent error propagation. Therefore, I decided to replace the rollMean(28) by the laggedRollMean(28,h). Adding the maxRollMean(28) term further ensures that all values stay positive. I thought that this shouldn't matter, but in my CV experiments it gave better results, so I used it. </p> <p><strong>Sidenotes:</strong></p> <ul> <li>As I used only Kaggle kernels and no other resources, it was quite a struggle to fit everything into RAM and into the 9-hour wall-clock limit. Therefore, I used only days after dstart=1100 for training and only considered the forecast weekday and one similar weekday for training (e.g. F8 is a Monday, so for the horizon h=8 models, I used only Mondays and Fridays for training and discarded all other days).</li> <li>Tuning LGBM parameters didn't work out for me at all; I simply used the parameters that I found in some public kernel in the Accuracy part (except for objective=quantile and alpha=q of course).</li> <li>I tried feature selection for the accuracy part: always leaving one feature out, retraining my model and checking whether my CV score improved. As I didn't observe any stable improvements, I simply used all features here.</li> <li>I refer to some CV tests above, which worked like this: I considered only the accuracy part and picked 2 horizons, namely 8 and 22 days, for which I used the last three 28-day periods for cross validation. All parameter, feature and normalization decisions that I made were based on the results obtained there. However, I'm not sure if this was such a good idea, as the CV obtained in those experiments indicated way higher scores than I obtained in the end on the accuracy private LB. Therefore, I think that I might have overfitted to the 2 specific horizons. Setting up a CV using all 28 horizons with day-to-day models (-&gt; always training 28 models * 3 validation sets) doesn't seem feasible either though, so I don't know how I could have done better.</li> <li>I said I use the WSPL weights for my LGBM model. Actually, I do not use the original weights, but normalize them s.t. all weights have mean 1. Then I clip them s.t. all weights lie between 0.2 and 30. The first step ensures that everything still works fine with normal learning rates. The clipping was done to make it a bit more stable, ensuring that there aren't time series with weights in the order of 100 and others with weights in the order of 0.001.</li> </ul> <p>Happy Kaggling Tobias</p> <p>Edit: Unfortunately, <a href="/david1352413524">@david1352413524</a> in front of me has now appended _stu to his team name as well, s.t. I guess this here is only the 2nd student place :( congrats to him though. Would you mind sharing your solution as well <a href="/david1352413524">@david1352413524</a> ?</p> <p>Thanks! I got to know the pandas memory issue and the patch during the data science bowl, where it was very helpful, so here I just used it right from the start. Not sure whether this still helps or whether the issue in pandas was fixed by now. Memory management was definitely still a big headache for me as well. A lot of time went into data type conversions, restructuring code such that e.g.
dataframe merges happen at the right time - when they don't need too much memory yet, and trying out which dstart actually fits into memory when I use one, two, three or all wdays for training, and so on. It would have been nicer, if all this wouldn't have been necessary. However, maybe it's as well good to learn to keep track of one's memory and computing time usage and so on...</p>
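<p>For readers who want to see the trend-removal normalization above in code, here is a minimal sketch for a single mean-normalized series <code>a</code> (a pandas Series ordered by day) and forecast horizon <code>h</code>. This is my reading of the formula in the write-up, not the author's actual code.</p> <pre><code>import pandas as pd

def remove_trend(a: pd.Series, h: int):
    # a_trendRemoved[t] = a[t] + maxRollMean(28) - laggedRollMean(28, h)
    roll_mean_28 = a.rolling(28).mean()
    max_roll_mean_28 = roll_mean_28.max()      # max 28-day rolling mean anywhere in the history
    lagged_roll_mean = roll_mean_28.shift(h)   # mean of a[t-28-h+1], ..., a[t-h]
    return a + max_roll_mean_28 - lagged_roll_mean
</code></pre>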
M5 Forecasting - Uncertainty
24th position solution - with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank for your kind word, we will do better next time! :) </p> <p>Was is PP? sorry ^^</p> <p>NGBOOST was worse than LGB-Quantile regression, but due to lack of documentations I'm not 100% sure to use it correctly. However the main drawback was speed, it was extremely painful training. So even if predictive performance would have been better it weren't worth it due to time complexity. Making it work was a bit hard to understand how to use the package and predict quantiles.<a href="https://gitlab.com/JacquesPeeters/m5-forecasting/-/blob/master/src/modeling.py#L342"> My implementation here. </a></p> <p>Yeah I probably overfitted, it is my bad. Congratz for your place and thank you for sharing your approach, it was interesting to read! </p> <h1>Global thoughts</h1> <p>I was aiming for gold and i’m extremely disappointed for not getting it. When seeing topics such <a href="https://www.kaggle.com/c/m5-forecasting-uncertainty/discussion/163368">as this one</a>, and people with basic solutions getting top ranks, i’m jealous. Due to private datasets was different, models with wider quantiles might have been advantaged. </p> <p>At the same time when reading approach like <a href="https://www.kaggle.com/c/m5-forecasting-uncertainty/discussion/163368">1st Place Solution</a> I can only admit they are better, and avoid being a sore loser :) </p> <p>Anyway, since few people published their codebase, benchmark and that i’ve a different approach than most people, i think it might be worth sharing it with you. I was expecting to publish a more detailed write up with plots (publish my first paper maybe?) and since not getting gold I’m demotivated :) </p> <p>I also spent 1.5 months optimizing pinball loss in the wrong direction (&lt; instead of &gt;=). It was not really clever.</p> <p>After writing this I just submitted my score with a 0.97 adjustments (didn’t try other adjustments). It would have placed me first place. I’m even more frustrated ^^ <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F562227%2F1e516821f059bd266f989f12ff8a9ec8%2Fcoef_adjustement_lb.png?generation=1593722614422025&amp;alt=media" alt=""></p> <h1>Code</h1> <p><a href="https://gitlab.com/JacquesPeeters/m5-forecasting">My gitlab repository. </a></p> <h1>Bias in the data</h1> <p>By plotting the number of active items over time, we can see that the global growth is not due to growth of walmarts stores’ but probably due to the bias on how the dataset was constructed. </p> <p>Therefore when forecasting an aggregated serie, actually it was not historical total sales of the series but the sales of selected products active at that time. </p> <p>Interestingly total sales are going higher but average sales per product is going lower. It can probably be explained by survival biais. Items sold from multiple years/a long time are only “survivors'' because they are BestSellers. Items which had lower performance in early years (2012/13) were killed and replaced with newer products in our dataset. </p> <h1>Data augmentation</h1> <p>A date can be predicted from different horizons (between 1 and 28). 
Each time series is small data therefore I expected significant better results with data augmentation. </p> <p>Note : I performed this only for granularities not related to item’s.</p> <h1>Data downsampling</h1> <p>I assumed item granularities can be seen as a pointwise approach. For performance reason I allowed myself to downsample these granularities by half. We prefer to sample randomly rather than removing part of the data (between two dates) because we assume that unexplained variance is mostly due to seasonal factors rather than item’s behaviour.</p> <h1>Datasets</h1> <p>I often focus a lot on FE, during this competition i wanted to focus more on the benchmark part and software engineering first, then on FE. I still performed a lot of FE. <a href="https://medium.com/manomano-tech/a-framework-for-feature-engineering-and-machine-learning-pipelines-ddb53867a420">You can find a medium post where I describe my philosophy about horizontal scaling if you are interested in. </a></p> <p>Note : When hierarchical adjective is used, it means that features are “propagated” to lower granularities. Eg : event_trend of total_id can be added to category_id granularities. </p> <h2>Weather_[state_id, total_id]</h2> <p>I’m tried to unbiased past sales data with temperature from the date from we are predicting / average temperature from the past. Did not seems to help much. I was not using temperature from predicted dates and therefore complaint with rules. </p> <h2>Fe_sales_historical</h2> <p>Historical average sales of the serie on differents time windows [3, 7, 14, 28, 56, 112] Historical quantiles [the 9 quantiles to be predicted, target encoding in a way] on different time windows again [28, 56, 112] Historical average sales of the serie on the same day of week on windows [1, 2, 3, 4] Historical average prices multiples windows</p> <h2>&nbsp;Dayofyear_trend_hierarchical</h2> <p>Out-of-fold (each year is considered a fold) target encoding of the average trend of date_to_predict / day_of_year of date_from</p> <p>It was funny to see the trend of hobbies categories pre/post christmas. </p> <p>Predicting date t depends of : 1) on which days of the month historical sales were computed (including the first saturday of the month have a huge impact) 2) the horizon of the forecast (do we predict the first or last saturday of the month)</p> <p>It is a 2-D variable problem which is extremely difficult to deal for trees. This trick makes it 1-D.</p> <h2>Yearly_trend_day_hierarchical</h2> <p>Same as previous but based on [date_from_month, date_from_day, date_month, date_dayofweek, date_month_week] which is a bit different and expected to be a bit more robust. </p> <h2>Monthly_trend_day_hierarchical</h2> <p>Same but based on day of month</p> <p>It was sad to see the end of the month was way more extreme in Wisconsin &gt; Texas &gt; California and that it is correlated to GDP … So from data we know that more people in Wisconsin struggle at the end of the month. Number of days since last sales Didn’t help much</p> <h2>Event modeling</h2> <p>The fun part, greatly help local validation.</p> <p>Use a two-stage kind of causal inference approach. * Train a model with all features available. * Compute out-of-fold median trend of an event trend = pred/real</p> <p>Median was preferred over mean, due to the lack of data, i felt it was more “robust” but it is subjective choice</p> <p>However, there are multiple events on the same date. We need to rank events and assign one and only one to a given date. 
To do so the same approach is used : * Train a model with all features available * Find event with strongest absolute trend abs(1 - pred/real) * Assign strongest event to the date </p> <p>This trick allowed me to detect that Fathers’ day have a strongest impact than NBA finals therefore was prioritized</p> <p>Find below the more extreme events for serie total_id. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F562227%2Fe2ba11d7c77d70299f3957e99190cec0%2Fevent_trend.png?generation=1593724278664540&amp;alt=media" alt=""></p> <p>Edit : i introduced a bug while moving from mean to median this w-e, and it was not out of fold...</p> <h1>Training</h1> <h2>Hyper-parameter tuning</h2> <p>Due to the amount of models trained, it is difficult to optimize each approach. For a lightgbm approach a single parameter is over-written, namely the number of observations set-up to 100. Given we have a data-augmentation of 28, 5 years of data, we want to avoid overfitting events with single leafs. Eg : creating one leaf for each New-years year. </p> <h2>Normalizing</h2> <p>Once again as the item's granularities can be seen a pointwise approach, normalizing doesn’t make sense. On the opposite side, time series approach benefits from it. </p> <p>I normalized by sales_mean_[14, 28, 56, 112]. Only variables related to sales and the target were normalized. Variables related to price weren’t.</p> <h2>Train/validation split - Retraining on full data</h2> <p>Last XX days of data was used for validation. </p> <p>Training/validation was used to train and predict on out-of-fold_validation set. This set was used for stacking. Validation/out-of-fold_validation sets were randomly sampled and not in a timewise fashion. I might be over-fitting because of this random sampling.</p> <p>Retraining on full data was natural due to the nature of the competition. </p> <p>The train/validation training was used solely for validation purposes, out-of-fold stacking, analysis of features importance, and finding the right number iterations of LightGbm models.</p> <h2>Benchmark of approaches</h2> <p>Due to the fact that multiple solutions were possible, my initial goal was to set up a benchmark framework for testing the maximum of them. What i tried : * LightGbm Quantile regression * Tensorflow quantile regression * Regression_l2-to-quantile (regression l2 then quantile regression on top of predictions) * Tweedie-to-quantile regression (tweedie regression then quantile regression on top of predictions) * Ngboost-distribution (compute quantile based thanks to prediction + standard deviation) but it was way too slow and not good</p> <p>Each approach performs differently on granualirities.</p> <p>Quantiles from Regression_l2-to-quantile were learned on validation set, and not from training set nor out of fold. Therefore I might be overfitting there :( <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F562227%2F5d896ef6ab84bf101b2d38a3835b2c41%2Fbecnhmark_m5.png?generation=1593722518091417&amp;alt=media" alt=""></p> <p>Overall performance after stacking. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F562227%2Fd5333abc5f3c196bf66e639f3becb87f%2Foveral_perf.png?generation=1593722491787991&amp;alt=media" alt=""></p> <p>There are huge improvements, but I think a bit over-estimated due to having a random validation/out_of_fold validation split as already explained.</p> <p>See you next time! </p>
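<p>A minimal sketch of the two-stage event-trend idea described above, assuming the out-of-fold predictions and actual sales are already collected in a DataFrame; the column names and the zero-division guard are my own illustration, not the repository code.</p> <pre><code>import pandas as pd

def event_trends(oof):
    # oof columns: "event_name", "pred" (out-of-fold prediction), "real" (actual sales)
    oof = oof[oof["real"] != 0].copy()                  # hypothetical guard against division by zero
    oof["trend"] = oof["pred"] / oof["real"]
    return oof.groupby("event_name")["trend"].median()  # median preferred over mean for robustness

def rank_events(trends):
    # strength of an event = abs(1 - median trend); used to assign one event per date
    return (1.0 - trends).abs().sort_values(ascending=False)
</code></pre>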
M5 Forecasting - Uncertainty
36th Position Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would first like to thank <a href="/muhakabartay">@muhakabartay</a> for his constant motivation to improve with every competition. Our solution is based on 5 different models.</p> <p><strong>Preprocessing</strong></p> <p>We used preprocessing from the kernels below.</p> <ol> <li>Ulrich GOUE's Kernel : <a href="https://www.kaggle.com/ulrich07/do-not-write-off-rnn-cnn">https://www.kaggle.com/ulrich07/do-not-write-off-rnn-cnn</a></li> <li>Ulrich GOUE's Kernel : <a href="https://www.kaggle.com/ulrich07/quantile-regression-with-keras">https://www.kaggle.com/ulrich07/quantile-regression-with-keras</a></li> </ol> <p><strong>Modelling</strong></p> <p><em>Model 1 Architecture</em></p> <pre><code>x = L.GRU(400, return_sequences=True, name="d1")(x)
x = L.Dropout(0.5)(x)
x = L.Concatenate(name="m1")([x, context])
x = L.GRU(400, return_sequences=True, name="d2")(x)
x = L.Dropout(0.5)(x)
x = L.Concatenate(name="m2")([x, context])
x = L.GRU(400, return_sequences=True, name="d3")(x)
preds = L.Dense(9, activation="linear", name="preds")(x)
model = M.Model(inp, preds, name="M1")
model.compile(loss=wqloss, optimizer="adam")
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.001)
</code></pre> <p><em>Model 2 Architecture</em></p> <pre><code>x = L.GRU(410, return_sequences=True, name="d1")(x)
x = L.Dropout(0.5)(x)
x = L.Concatenate(name="m1")([x, context])
x = L.GRU(410, return_sequences=True, name="d2")(x)
x = L.Dropout(0.5)(x)
x = L.Concatenate(name="m2")([x, context])
x = L.GRU(410, return_sequences=True, name="d3")(x)
preds = L.Dense(9, activation="linear", name="preds")(x)
model = M.Model(inp, preds, name="M1")
model.compile(loss=wqloss, optimizer="adam")
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.01)
</code></pre> <p><em>Model 3 Architecture</em></p> <pre><code>x = L.Concatenate(name="x1")([context, num])
x = L.Dense(500, activation="relu", name="d1")(x)
x = L.Dropout(0.2)(x) # 0.3
x = L.Concatenate(name="m1")([x, context])
x = L.Dense(500, activation="relu", name="d2")(x)
x = L.Dropout(0.2)(x) # 0.3
x = L.Concatenate(name="m2")([x, context])
x = L.Dense(500, activation="relu", name="d3")(x)
preds = L.Dense(9, activation="linear", name="preds")(x)
model = M.Model(inp, preds, name="M1")
model.compile(loss=qloss, optimizer="adam")
return model
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.001)
</code></pre> <p><em>Model 4 Architecture</em></p> <pre><code>x = L.Concatenate(name="x1")([context, num])
x = L.Dense(400, activation="relu", name="d1")(x)
x = L.Dropout(0.25)(x) # 0.3
x = L.Concatenate(name="m1")([x, context])
x = L.Dense(400, activation="relu", name="d2")(x)
x = L.Dropout(0.25)(x) # 0.3
x = L.Concatenate(name="m2")([x, context])
x = L.Dense(400, activation="relu", name="d3")(x)
preds = L.Dense(9, activation="linear", name="preds")(x)
model = M.Model(inp, preds, name="M1")
model.compile(loss=qloss, optimizer="adam")
return model
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.01)
</code></pre> <p><em>Model 5</em>: Baseline model <a href="https://www.kaggle.com/ulrich07/quantile-regression-with-keras">https://www.kaggle.com/ulrich07/quantile-regression-with-keras</a></p> <p>Further, we ensembled our 5 submissions. Here is our winning ensemble.</p> <p><a href="https://www.kaggle.com/roydatascience/silver-medal-solution-stacking-m5-submissions">https://www.kaggle.com/roydatascience/silver-medal-solution-stacking-m5-submissions</a></p> <p>Congratulations to all the winners. Happy Kaggling everyone.</p> <p>Thanks, I will make that change.</p>
M5 Forecasting - Uncertainty
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The form of the equation as well as the coefficients were calculated by extensive optimization on the last 28 days.</p> <h2><strong>Edit</strong></h2> <p>Finally our joint paper (with my colleague <a href="https://www.kaggle.com/voglinio" target="_blank">Costas Voglis</a>) is published by <a href="https://www.sciencedirect.com/science/article/pii/S0169207022000012" target="_blank">IJF</a>. Preprint can be downloaded <a href="https://www.researchgate.net/profile/Konstantinos-Vogklis/publication/358384908_Blending_gradient_boosted_trees_and_neural_networks_for_point_and_probabilistic_forecasting_of_hierarchical_time_series/links/620f897e08bee946f38ae1ff/Blending-gradient-boosted-trees-and-neural-networks-for-point-and-probabilistic-forecasting-of-hierarchical-time-series.pdf?_sg%5B0%5D=r6g6AvzEhj6_qRn6ABHj1aeUTxNzwqQ-HtA8IS0-fugvZt_pAYiPmJw6p0MqWmVp-4LM-UfjKR2DKxqXdpW7eg.qFxWw0NEbdLkFygkRNhfdd6W8L8xYJNTu0wwAP1_pIPOxtGsjQhciK5NxlRNYbewoFhcueVqfCHLOoYEPHh1fg&amp;_sg%5B1%5D=IhTK6Cj3EgiJrJvROVZgvg5Q7N2nofb2-aW8IrOzzKlxORwdvsRZZFwvi_VHUhjpA_FAAwx5E69ZNX6yZ4b2WEAp7EeGTg0ZHpeFtrxQ9PT9.qFxWw0NEbdLkFygkRNhfdd6W8L8xYJNTu0wwAP1_pIPOxtGsjQhciK5NxlRNYbewoFhcueVqfCHLOoYEPHh1fg&amp;_sg%5B2%5D=xcZL4-Ogyb4bZpCB_W5Pix-Yk-p8dx7bf_f8Rgbencr-bwcUPjvl2rM0CDa9FXslXLG5aMvJ9EhVKAc.HbdPkuNdOrYa8DPxIKxV0zexUyd3eShvk6Btv7r8lPgl_QYsZNTL7FVjvG_mlr44foKRNTcz7KUgjjI1mpobQA&amp;_iepl=" target="_blank">here</a>.</p> <h2><strong>Thanks</strong></h2> <p>I would like to thank the organizers for putting out such an exceptional competition on this scale. I also need to mention that my involvement in the M5 competition was both tasked and sponsored by my employer (Nodalpoint Systems, Athens, Greece). Due to other engagements, including a <a href="https://medium.com/the-downlinq/spacenet-6-announcing-the-winners-df817712b515" target="_blank">3rd prize in the SpaceNet 6 challenge</a> hosted at Topcoder (team SatShipAI), I was able to get involved in both M5 tracks (Accuracy &amp; Uncertainly) no sooner that the beginning of May, i.e. more than 2 months after the competitions had started.</p> <h2><strong>Background</strong></h2> <p>Together with my colleague <a href="https://www.kaggle.com/voglinio" target="_blank">Costas Voglis</a>, we formed a team (Nodalpoints) for the Accuracy track of M5, ending up in the <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/164685" target="_blank">21st position</a>. For several reasons, we preferred to participate in the Uncertainty track separately, having first <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/133467#864904" target="_blank">confirmed</a> that this was indeed not a violation of the competition terms. Costas himself ended up 26th in the Uncertainty track.</p> <h2><strong>Solution</strong></h2> <p>My solution is an extension of my own contribution to the team Nodalpoints submission to the M5 Accuracy track; in other words, the <strong>starting point</strong> of my solution is <strong>my own part</strong> of the predictions submitted to the M5 accuracy track by the team Nodalpoints. 
</p> <p>Having many failures at the beginning trying to tackle this competition with a classic RNN approach, probably due to the hierarchical nature of the outcome, I realized that I wouldn’t get good results if I continued treating it as a time series problem, and I should treat / transform it to a regression problem instead (as already suggested in several forum threads). </p> <p>All necessary code is now available at <a href="https://github.com/IoannisNasios/M5_Uncertainty_3rd_place" target="_blank">GitHub repo</a></p> <h2><strong>Accuracy Modeling:</strong></h2> <p>• <strong>1 lightgbm model per store</strong> (10 stores) training for different number of rounds for every store (700-1600) using a total of 70 features and all available data. <br> • <strong>3 keras models</strong> (with categorical embeddings) of almost similar architecture training on the last 17*28=476 days data using only 19 features.<br> • Ensembling, weighted geometric mean:<br> ((Lgbm ** 3) * KerasSimpleAverageOf3Models) ** (1/4)<br> <strong>Keypoint:</strong> Keras models fails on last day’s prediction (outlier – see note below) probably due to NBA finals on that day (and because I only used 1 year’s data for training – mainly to speed up). For that day, I have just let Lightgbm predictions alone (no ensemble).</p> <p>(Note: from sales_train_evaluation.csv file, mean sales per day over the last 2 years (28*26 days) is 1.25 with a standard deviation of 0.22. Keras last day mean prediction is 3.9, which is over 6 sigma away from the mean, thus an outlier.)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F219107%2F1e1d28c6d07b48e924d4dc3e18f510d1%2Fsales_per_day.png?generation=1594722839639810&amp;alt=media" alt=""></p> <p>This M5 accuracy solution is the outcome of a regression problem. But there is another dimension which is not mined: for every item we get the predictions for 28 days; these regression predictions can benefit from the fact that this is actually a time-series problem, utilizing a simple <strong>exponential weighted mean</strong> (row-wise):</p> <p>Acc_subm.iloc[:,1:]=Acc_subm.iloc[:,1:].ewm(com=0.04, axis=1).mean().values</p> <p>This is a post-process that should be done in the Accuracy track, but because it was a last minute finding (2 hours before competitions closing), I only inserted it in the Uncertainty track (it cost us 3 places in the Accuracy one).</p> <h2><strong>Uncertainty Predictions:</strong></h2> <p>Furthermore, for the Uncertainty track, we had to calculate the median and four prediction intervals, not just for every item but for all 12 levels. Having the median from the Accuracy track as a starting point for level 12, <strong>with simple aggregations we obtain the median for the other 11 levels</strong>. Going from the median to complete 9 quantiles estimations was done mostly by <strong>tuning coefficient multipliers</strong>. Tuning was done using only 1 fold (trying to overfit the public LB), but for more accurate results more folds should be used.</p> <p>The higher the aggregation level, the more confident we are in the point prediction and thus we use lower (closer to 1) coefficient multipliers. For multipliers estimation for each level the <strong>normal distribution was used in levels 1 – 9 and a skew-normal distribution for levels 10-12</strong>. 
Also, <strong>due to right-skewness</strong> in our sales data on every aggregation level, the last of the 9 quantiles (=99.5%) was furthermore multiplied by a factor (1.02 or 1.03).</p> <p>In the Accuracy track, I worked on 3 different folds, and the final model ensembling weights were selected from the mean score of those folds. Looking at every fold alone, I noticed that there was a multiplier that could be used to maximize accuracy. These multipliers were 1.04, 0.97 and 1.0 for each fold. The meaning of this is that the <strong>final submission in the Accuracy track will be very sensitive to multipliers</strong>, and this will affect the Uncertainty track, too. Needing to minimize this volatility for the Uncertainty track, I decided to amplify the above tuned coefficient multipliers by a really small amount. <strong>All tuned multipliers (except for the median) were moved away from 1 by 0.5%.</strong></p> <p><strong>Level 12</strong> (the only non-aggregated one) was the most difficult to estimate accurately, and the above calculations were not enough here. Statistical calculations played a major role for this level. <strong>Sales quantiles</strong> were calculated over the last 364 days and over the last 28 days for every item and later weighted averaged (weights = [1, 1.75], respectively). Also, <strong>weekly sales quantiles</strong> were calculated over the last 13*28 days and 3*28 days, and these results were simply averaged. Finally, Level 12 was calculated as:<br> all quantiles excluding the median<br> <strong>0.2*coefficient tuning + 0.7 * sales quantiles + 0.1 * weekly sales quantiles</strong><br> median (not sure if I should have done this)<br> 0.8*coefficient tuning + 0.2 * sales quantiles</p> <p>As for Level 12, so too for <strong>level 11 of the item_id – state_id aggregation</strong>: sales quantiles were calculated as well, and the final predictions were calculated as:<br> all quantiles excluding the median (which was left as-is)<br> <strong>0.91*coefficient tuning + 0.09 * sales quantiles</strong></p> <h2><strong>Flowchart</strong></h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F219107%2F5f625ee90b2de7015c1a7eddcce93ae7%2FFlowchart.jpeg?generation=1594723487531854&amp;alt=media" alt=""></p> <h2><strong>Late submissions</strong></h2> <p>From late submissions to both the M5 Accuracy and Uncertainty competitions, there is, as expected, a strong correlation between the scores.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F219107%2F9f61b260d3b094b1135e3dd67b023035%2Facc-unc.png?generation=1594723317372193&amp;alt=media" alt=""></p> <h2><strong>Summary</strong></h2> <ul> <li>I used accuracy predictions as a starting point for uncertainty estimations.</li> <li>Better accuracy results give better uncertainty estimations.</li> <li>Weaker Keras models strongly increase overall accuracy performance.</li> <li>Ensembling of statistically calculated sales quantiles with tuned coefficient multipliers for level 12 was the key in the uncertainty competition.</li> <li>Talking about multipliers: if I had used the magic number 0.97, I would have ended up at the top of the leaderboard in the uncertainty track and in 6th place in the accuracy track.</li> <li>I played around with recurrent implementations, but they never reached an appropriate maturity level in order to include them in the final blend.</li> </ul>
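<p>A small sketch of the level-12 statistical sales quantiles described above (per-series quantiles over the last 364 and last 28 days, blended with weights [1, 1.75]). The items-by-days layout of <code>sales</code> is an illustrative assumption, not the actual implementation.</p> <pre><code>import numpy as np
import pandas as pd

QUANTILES = [0.005, 0.025, 0.165, 0.25, 0.5, 0.75, 0.835, 0.975, 0.995]

def level12_sales_quantiles(sales):
    # sales: rows = item/store series, columns = days, most recent day in the last column
    q_364 = np.quantile(sales.iloc[:, -364:], QUANTILES, axis=1).T  # (n_series, 9)
    q_28 = np.quantile(sales.iloc[:, -28:], QUANTILES, axis=1).T
    w_364, w_28 = 1.0, 1.75
    return (w_364 * q_364 + w_28 * q_28) / (w_364 + w_28)
</code></pre>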
M5 Forecasting - Uncertainty
48th place solution - seq2seq
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to all the winners and thanks to everyone for sharing their insights during and after the competition. This was my first forecasting competition/project so I learned a lot of new things.</p> <p>My best solution, which got me the 48th rank, was a simple ensemble of two architectures - a Dilated LSTM Sequence2Sequence model and a Sequence2Sequence LSTM model with attention (applied from all encoder outputs to hidden state of each step in the decoder).</p> <p>I have shared my code for both the accuracy and uncertainty competitions on github (<a href="https://github.com/arshjot/Kaggle-M5-Forecasting">https://github.com/arshjot/Kaggle-M5-Forecasting</a>) and will share some more details below.</p> <h3><strong>Features</strong></h3> <ul> <li><p>Calendar features (42):</p> <ul><li>Cyclic encoding for month, day and weekday (as explained here - <a href="https://www.kaggle.com/avanwyk/encoding-cyclical-features-for-deep-learning">https://www.kaggle.com/avanwyk/encoding-cyclical-features-for-deep-learning</a>) - 6</li> <li>Relative year (2016 - year) - 1 </li> <li>Embedding of size 16 for event_name. Both the event fields were passed through the same embedding layer and later concatenated - 32</li> <li>Snap fields with values as 0 or 1 - 3</li></ul></li> <li><p>Series IDs (63): Embeddings for all id fields - item_id (50), dept_id (4), cat_id (2), store_id (5), and state_id (2). In case of aggregated series (Levels 1-11), the grouped fields are kept as separate categories ('All'). For example - Level 2 series of state CA is categorized as:</p></li> </ul> <p>| item_id | dept_id | cat_id | store_id | state_id | | -- | -- | -- | -- | -- | | All | All | All | All | CA </p> <ul> <li>Previous day sales</li> <li>Sell Price (for levels 1-11, mean sell price of their constituent series was used)</li> </ul> <h3><strong>Normalization</strong></h3> <p>Only price and sales features were normalized. This was done by dividing the sales and price of each series by their means in the input window</p> <h3><strong>Training</strong></h3> <p>A sliding window of length 28*13 days on last 2 years data (with a step size of 28 days) was used for training. 
This meant that for each window, 28*13 timesteps were given as input to the encoder, and the succeeding 28 days served as the prediction horizon for the decoder.</p> <p>SPL loss was used for training, with the RMSProp optimizer, a reducing learning rate, and early stopping.</p> <p>I also ended up using some lag/rolling features and the addition of noise during training, though these modifications did not result in a significant improvement in the validation score.</p> <h3><strong>Validation</strong></h3> <p>A simple 3-fold conventional time-series validation split (last 3 periods of 28 days) was used, with the predictions from each fold averaged for the final submission.</p> <hr> <p>Interestingly, my models performed better in the uncertainty stream than the accuracy one, both in terms of validation and leaderboard scores.</p> <p>I also wanted to try representation learning and graph learning algos (to account for hierarchy) but could not devote enough time to make them work.</p> <p>I hope my post/code turns out to be useful for someone. Please let me know if there are any questions :) </p> <p>Ah ok, that's alright. It would be interesting to see if we can somehow modify the fast.ai model to use it for uncertainty as well.</p> <p>Thank you and congrats to you as well!</p> <p>The only differences between my accuracy and uncertainty models were these:</p> <ol> <li>The final layer for uncertainty had 9 outputs (to predict each quantile) instead of 1.</li> <li>The WRMSSE loss was replaced by the SPL loss for uncertainty.</li> <li>In uncertainty, I trained on all 42k series, whereas in accuracy I trained only on the Level 12 series.</li> </ol> <p>I did not try to generate quantiles from my accuracy model. Rather, I created a separate model for uncertainty with the above modifications. The SPL loss took care of everything else, as it adjusts the loss according to the quantile being predicted.</p> <p>Pyro also seemed like a viable option and the Pyro starter kit (<a href="https://github.com/pyro-ppl/Pyro-M5-Starter-Kit">https://github.com/pyro-ppl/Pyro-M5-Starter-Kit</a>) shared earlier seemed pretty good. I considered Pyro as well but later decided to go with what I already knew.</p> <p>I went through your solution for the accuracy competition as well - it seems quite interesting, as I have never used fast.ai's tabular model. Did you use the same model for uncertainty?</p>
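<p>The cyclic encoding mentioned in the features section can be sketched as below, following the sin/cos idea from the linked notebook; the column names and toy values are illustrative.</p> <pre><code>import numpy as np
import pandas as pd

def add_cyclic(df, col, period):
    # map a cyclical feature onto the unit circle so that e.g. December and January stay close
    df[col + "_sin"] = np.sin(2 * np.pi * df[col] / period)
    df[col + "_cos"] = np.cos(2 * np.pi * df[col] / period)
    return df

calendar = pd.DataFrame({"month": [1, 6, 12], "weekday": [0, 3, 6], "day": [1, 15, 31]})
calendar = add_cyclic(calendar, "month", 12)
calendar = add_cyclic(calendar, "weekday", 7)
calendar = add_cyclic(calendar, "day", 31)
</code></pre>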
M5 Forecasting - Uncertainty
4th Place Solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I published my solution according to <a href="https://mofc.unic.ac.cy/m5-competition/">the guidelines of this competition</a>.</p> <p><a href="https://github.com/marisakamozz/m5">https://github.com/marisakamozz/m5</a></p> <p>If you would like to know more about my solution, please see <a href="https://github.com/marisakamozz/m5/blob/master/ModelSummary.md">Model Summary</a>.</p> <p>Here is my <a href="https://www.slideshare.net/MasakazuMori2/kaggle-m5-forecasting-uncertainty-4th-place-solution">Presentation Slide</a>.</p> <p>This is my model structure.</p> <p><img src="https://github.com/marisakamozz/m5/raw/master/model.png" alt="Model Structure"></p> <p>Thank you.</p>
M5 Forecasting - Uncertainty
6th solution - Pure statistical forecast
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello Casper, we did not use any other libraries other than pandas/numpy and maybe one function from probability from scipy (which we did not have time to recode). We tend to avoid using libraries because they cost us a lot to maintain in Prod. (And we mostly code in C#) For the seasonlities, we had a Monthly seasonality (defined from 1 to 31) for each month (which is unusual), a weekly seasonality an events linear coefficient on top. </p> <p>We filmed a small LokadTv episode to talk about the competition:</p> <p><a href="https://www.youtube.com/watch?v=bw7fCukUVkA&amp;t=898s">https://www.youtube.com/watch?v=bw7fCukUVkA&amp;t=898s</a></p> <p>Do nor hesitate to reach out.</p> <p>PyMC3 is a great option indeed.</p> <p>Hello Ashwin, I think the easiest way to learn Monte Carlo methods is to start with simulation softwares like Arena AnyLogic, Simulink (We use those a lot in Industrial Engineering). However, if your goal is to specifically do time series forecasting with Monte Carlo, I would recommend looking for examples in the Finance domain (Medium has plethora of examples of MC used for forecasting). </p> <p>We are really glad to reach top 10 in our first Kaggle Competition. (We did invest countless hours in this project)</p> <p>One thing to know about our team is that we are all Supply Chain professionals first, data scientists second. We used a lot of domain knowledge to crack the problem, and many of the techniques we've used were recycled from things we do on a daily basis at Lokad (where we all work or used to work). </p> <p><strong>Summary</strong></p> <p>Our solution is a multi-stage state-space model, states being active or inactive. (A good reference here: <a href="https://gluon.mxnet.io/chapter12_time-series/issm-scratch.html">https://gluon.mxnet.io/chapter12_time-series/issm-scratch.html</a>)</p> <p>Our predictions were generated through Monte Carlo simulation at level 12, we considered demand at different stores to be independent so we aggregated demand trajectories from level 12 up to level 10. Levels 1 to 9 time-series were handled individually with simple innovation state space models. We modeled emission with a Negative Binomial distribution to represent the fact that demand is discrete. </p> <p>Seasonality factors were carefully hand-crafted and events were modeled with simple linear coefficients. (Calculated at store department level)</p> <p><strong>Good properties</strong></p> <ol> <li><p>Our solution is fully explainable, (white-box approach) - so we can easily describe each forecast in case we are challenged. </p></li> <li><p>It is linearly scalable, you can give us the whole WalMart dataset and it will still run without a problem (of course we would need some extra cpu =). (And we could still get a factor 10x by moving some core parts of the code to a compiled language).</p></li> <li><p>It clearly differentiates demand from sales, (how much it would sell if we were never stock-out)</p></li> <li><p>It can output full demand distributions for horizons, example: how much will it sell next week ? 
Note that this is different from how much it will sell day by day.</p></li> <li><p>It can be used for training agent-based models (used for our MEIO solutions).</p></li> </ol> <p><strong>Not so good properties</strong></p> <ol> <li><p>Time consuming. Adding new categories would require additional human analysis.</p></li> <li><p>Does not account for pricing effects (could not properly model it with the given dataset).</p></li> <li><p>Does not forecast new SKUs (we would require a separate solution for this case).</p></li> </ol> <p><strong>Conclusion</strong></p> <p>We do believe that tailored statistical models were a good fit to the problem at hand. We had a fairly stable score across the years and periods and, above all, full control of the solution.</p> <p><a href="https://www.youtube.com/watch?v=bw7fCukUVkA&amp;t=904s">https://www.youtube.com/watch?v=bw7fCukUVkA&amp;t=904s</a></p> <p>Thanks R. Aswin ! =)</p> <p>Thanks Vinoth ! =)</p> <p>Thanks! Hey, we will probably do a code clean-up in the near future and eventually share it after that. I will keep you posted.</p>
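<p>To illustrate the simulation idea described above (negative binomial demand per level-12 series, with independent series summed into aggregates), here is a minimal sketch assuming the per-series mean and dispersion parameters have already been fitted; it is not Lokad's production code.</p> <pre><code>import numpy as np

QUANTILES = [0.005, 0.025, 0.165, 0.25, 0.5, 0.75, 0.835, 0.975, 0.995]
rng = np.random.default_rng(42)

def simulate_aggregate(means, dispersions, horizon=28, n_sims=10000):
    # means, dispersions: one value per level-12 series belonging to the aggregate
    means = np.asarray(means, dtype=float)
    r = np.asarray(dispersions, dtype=float)
    p = r / (r + means)                            # chosen so the negative binomial mean equals `means`
    sims = rng.negative_binomial(r[:, None], p[:, None],
                                 size=(n_sims, len(means), horizon))
    total = sims.sum(axis=1)                       # sum trajectories across independent series
    return np.quantile(total, QUANTILES, axis=0)   # (9, horizon) quantile forecast for the aggregate
</code></pre>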
M5 Forecasting - Uncertainty
7th Placed Solution - seq2seq LSTM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am very surprised that I can get 7th place &amp; 1st place in students in this M5 forecasting competition. It is my first year that I participate in Kaggle competitions, and the first time that I get ranked here. It is more than happy that I can share my solution here (although it may be a bit too late).</p> <h2>Model Summary</h2> <p>In this competition, I first focused on the Accuracy competition (i.e. producing the mean forecast), and then leveraged on the model built on Accuracy competition to produce any uncertainty estimate (i.e. quantile prediction).</p> <p>I built two seq2seq LSTM model to produce the mean forecast on the next 28D. One is trying to forecast on the store-item level, another attempts to forecast on the dept-store level. The final accuracy forecast consists of simple averaging of both model. The uncertainty estimate is built using the estimated residuals of both models, while assuming the forecast residuals are i.i.d. normal.</p> <h2>Takeaway</h2> <p>I think the key takeaway in this competition is the importance of a reliable local validation scheme, together with the model ensembling technique, which makes my model to deliver stable performance both locally and in public / private LB. At the early stage of the competition where people are optimizing for the public LB, my model performance on public LB is actually not comparable to those top solutions. However, the key benefit of my solution is that it delivers stable performance both locally and in public / private LB.</p> <p>The WRMSSE / WSPL metric can be volatile given only 28D of validation window (i.e. public LB), because it is equally weighted on all aggregate levels (meaning that the top aggregate levels forecast, even with a few observations, already give a huge proportion of the loss). It would be potentially overfitting if we focus too much on optimizing for the public LB, without a reliable local validation scheme.</p> <h2>Model Architecture</h2> <p>The seq2seq LSTM model takes an input window of 28D, and output window of 28D. Basically it is similar to an encoder-decoder model for machine translation except that I add some exogenous features to the model, and avoid using any forecast information in the decoder input (i.e. trained &amp; forecasted unconditionally). The rationale behind training unconditionally is to ensure that the model performs similarly at inference time as at training time. In the following graph, y denotes the time series (i.e. sales), z denotes the exogenous features (e.g. weekday, holiday event, is snap, item id), h denotes the hidden state (as well as cell state).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2272940%2Fb8835694a5168bdd3d5ac5d8ac8b59c6%2Fmodel%20architecture.png?generation=1595125709755384&amp;alt=media" alt=""></p> <h2>Features</h2> <p>I do not use any special features. Basically the features are calendar features (e.g. holiday event, holiday type, weekday), id features (item-id, dept-id, store-id) and price features. By SHAP value analysis, the id features and weekday are significant to the model performance. 
The categorical features with high cardinality are post-processed using embedding layers.</p> <p>Some important feature processing steps are: 1. Re-scale each time-series into [0, 1] using its max. 2. Train the model to forecast the increment of sales value since the first day of input window (i.e. remove trend).</p> <h2>Training Method</h2> <p>The LSTM models are trained using Adam algorithm. The dept-store model is trained with weighted MSE loss. The store-item model is trained with unweighted zero-inflated poisson loss (to deal with cases with many 0s).</p> <h3>Improving Model Stability</h3> <p>During training, an issue of model instability is observed, similar to the observations <a href="https://www.kaggle.com/c/web-traffic-time-series-forecasting/discussion/43795">here</a>. The following graph shows how the WRMSSE metric (used for accuracy measure) changes during training epochs.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2272940%2Fb6609e390dfd94cb8eda7fe9d162913f%2FTraining%20instability.png?generation=1595126453982905&amp;alt=media" alt=""></p> <p>To resolve the instability issue, each seq2seq LSTM model actually consists of models trained at 20 different checkpoints (i.e. training epochs) and is a simple average of the 20 checkpoint models. The submission model is trained blindly without any early stopping, and is observed to have stable performance.</p> <h2>Hierarchical Forecast</h2> <p>The dept-store level forecast is first converted into store-item level forecast using the 28D historical average of ratio. Then both models' forecasts are simple averaged and converted to forecast on all aggregate level by bottom-up approach.</p> <h2>CV Scheme</h2> <p>My local validation scheme consists of 4 folds of data, each is of 28D. Particularly: Fold 1: (training) [train st, d1829] --&gt; (validation) [d1830, d1857] Fold 2: (training) [train st, d1857] --&gt; (validation) [d1858, d1885] Fold 3: (training) [train st, d1885] --&gt; (validation) [d1886, d1913] Fold 4: (training) [train st, d1913] --&gt; (validation) [d1914, d1941] where train st is 2.5 yrs before the end of training data when training store-item model, and is 0 when training dept-store model (i.e. training on full dataset). The model is trained once only on fold 1 so as to speed up the local validation process. It is observed that the score in each fold has large variance, indicating that it may be unreliable / overfitting to optimize on any 28D data (i.e. the public LB). Therefore, I focus on the local CV score which also gives close alignment with both public LB and private LB score.</p> <p>Thanks a lot for the host who gave us a chance to join such a rewarding competition. I learnt a lot from this competition. Also special thanks to <a href="/kyakovlev">@kyakovlev</a> who built a lot of great discussion thread &amp; great notebook (which are super useful for us as a newbie to Kaggle).</p>
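<p>A minimal sketch of turning the ensembled mean forecast into the 9 competition quantiles under the i.i.d. normal residual assumption described above; <code>residual_std</code> would come from the models' estimated forecast residuals, and the clipping at 0 is my own addition.</p> <pre><code>import numpy as np
from scipy.stats import norm

QUANTILES = [0.005, 0.025, 0.165, 0.25, 0.5, 0.75, 0.835, 0.975, 0.995]

def quantiles_from_normal(mean_forecast, residual_std):
    # mean_forecast: (horizon,) point forecast; residual_std: scalar or (horizon,) residual std
    z = np.asarray(norm.ppf(QUANTILES))[:, None]            # (9, 1) z-scores of the quantiles
    q = np.asarray(mean_forecast)[None, :] + z * np.asarray(residual_std)
    return np.clip(q, 0.0, None)                            # sales cannot be negative
</code></pre>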
M5 Forecasting - Uncertainty
Should have been 18th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Hello!</h1> <p>This was a wonderful competition, so thanks to the Kaggle team and the University of Nicosia for hosting it.</p> <p>To explain the topic of this discussion, I had one of my submissions scoring 0.16959, which was not chosen because it had a lower public score than one of my other submissions (wish they had the 2-submission system). Here is the screenshot: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2014978%2F816716719e72a00d1e204827a797419e%2Flb.png?generation=1593576683579204&amp;alt=media" alt=""></p> <p>(More about version 8 and version 9 below.)</p> <h3>With that said, let's dive into the solution:</h3> <h2><a href="https://www.kaggle.com/akashsuper2000/concatenate-model-with-grus">Notebook</a></h2> <p>Based on @ulrich07 's notebook <a href="https://www.kaggle.com/ulrich07/do-not-write-off-rnn-cnn">here</a>.</p> <h2>Architecture</h2> <p>Single model with GRU layers and one final Dense layer.</p> <p>Highly simplified representation:</p> <pre><code>Concatenate()
GRU(300)
Dropout(0.3)
Concatenate()
GRU(300)
Dropout(0.3)
Concatenate()
GRU(300)
Dropout(0.3)
Dense(9, activation='linear')
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2014978%2F6e894bbcdf51c6e07f65dd886818c4bf%2Fm5model.png?generation=1593578997970288&amp;alt=media" alt=""></p> <p>Epochs: 200 (version 8). Training took me 6 hours on a GPU. Training for more epochs would further diminish the loss, but the risk of overfitting is high; hence, the model trained for 400 epochs performed poorly on the private leaderboard (see version 9 of my notebook).</p> <p>Link to my notebook: <a href="https://www.kaggle.com/akashsuper2000/concatenate-model-with-grus">https://www.kaggle.com/akashsuper2000/concatenate-model-with-grus</a></p>
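<p>For reference, the stack listed above could be wired up in Keras roughly as follows. The input shapes, feature counts, and the loss are placeholders (the actual notebook feeds embedded categorical features as <code>context</code> and trains with a quantile loss), so treat this as a sketch rather than the exact model.</p> <pre><code>import tensorflow.keras.layers as L
import tensorflow.keras.models as M

TIMESTEPS, N_NUM, N_CTX = 28, 20, 16   # placeholder dimensions

num = L.Input(shape=(TIMESTEPS, N_NUM), name="num")          # numeric sequence features
context = L.Input(shape=(TIMESTEPS, N_CTX), name="context")  # embedded categorical features

x = num
for _ in range(3):
    x = L.Concatenate()([x, context])
    x = L.GRU(300, return_sequences=True)(x)
    x = L.Dropout(0.3)(x)
preds = L.Dense(9, activation="linear", name="preds")(x)

model = M.Model([num, context], preds)
model.compile(loss="mse", optimizer="adam")  # placeholder; the notebook uses a pinball/quantile loss
</code></pre>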
MLB Player Digital Engagement Forecasting
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLB Player Digital Engagement Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>[update]</strong><br> In the end, I got 11th place and I was able to stay within the gold medal range. <br> I'd like to thank the people running this competition and all the people I competed with! Thank you very much.</p> <p>a first week of RE-RUN is over. <br> I was lucky enough to have a successful rerun.<br> I would like to thank the admin and all the participants for organizing this event.<br> There is still a month to go before it ends, but before I forget, I'll describe my solution.<br> (I'm not very good at English, so I basically translate Japanese at DeepL. I apologize if you have trouble reading.)<br> <br></p> <hr> <h4>Difficult points of this competition</h4> <ol> <li>The target is scaled from 0 to 100 on a daily basis.</li> <li>There is a difference in the information available to pitchers and batters.(The roles of the pitcher and batter are fundamentally different.)</li> </ol> <p><br></p> <h4>My approach to the above points</h4> <ul> <li><p><strong>Use only in-season data for training</strong><br> When I checked the daily median values, I found that there was a considerable difference in the median values between the on-season and off-season.<br> I thought that on-season engagement would be determined by performance and success during the season, while off-season engagement would be determined by media exposure and social networking regardless of the season(and we can't use this kind of data).<br> In this competition, we predicts the engagement during the season, so I judged that data from the off-season would be noisy and did not use it for training and validation.</p></li> <li><p><strong>Create features that represent what happened on any given day and the game</strong><br> I thought that daily median engagement scores might differ depending on what happened in the game as a whole and what happened in other games on that day.<br> Therefore, I created the features related to the actions that occurred throughout the day.</p> <ul> <li><p>Number of games played that day, total runs scored, home runs, hits, etc.</p></li> <li><p>The average pct of both teams in each game (whether there was an exciting game on the same day between teams with high winning percentages)</p></li> <li><p>The total twitter followers of the teams playing in the game (home and away) as a percentage of the total followers of all teams (whether the game was played between teams that are popular on social networking sites)</p></li> <li><p>The total number of twitter followers of the players who played in the game that day as a percentage of the total number of followers of all players (how influential are the players who played that day on SNS)</p></li> <li><p>How long has the season elapsed (what percentage of the schedule has been completed)?<br><br> In addition to the above, I added the player's performance.<br><br></p></li> <li><p>Daily performance</p></li> <li><p>Season performance</p></li> <li><p>Previous season performance (last season's final performance)</p></li></ul></li> <li><p><strong>Built three different models</strong><br> one for pitchers, one for batters, and one according to no play 
for the 2021 season.<br> The features used in each model is slightly different from each other. <br> The features for any given day or game described above are the same for all models, <br> but in the pitcher's model, the only feature used in the player's performance was the performance related to pitching. in the batter's model, only the performance related to batting was used. <br> For the model of a player without a season of play, we built the model without using any features related to performance.<br><br> The reason for this, as mentioned earlier, is that pitchers and batters have distinctly different roles. For example, even if a pitcher gets three hits, his reputation will not be high if he is scored on by his crucial pitches.<br> In addition, for players who did not play in the season, we could not obtain their performance in the current season at all and had no material to judge them, so we built a separate model with fewer features.(no use performance features) <br> <br><br> As a side note, I don't use target-lag-feature.<br> When I was testing the three models separately, the accuracy of the pitcher and batter models decreased when using past targets, so I decided not to use variables related to past targets this competition.</p></li> </ul> <p><br></p> <h4>validation scheme</h4> <p>For training and validation, I used data from each season.<br> In addition, I did not use the data for 2020 because the number of games was small and we thought that there might be abnormal movements due to the corona disaster.<br> All models are made with lightgbm.</p> <p><br></p> <h5>Before the train data was updated (LB:1.3104(Previous leaderboard))</h5> <ul> <li>Training : 4/1/2019 to 4/30/2019</li> <li>Validation : 5/1/2019 to 5/30/2019 → check accuracy and obtained parameters in this period</li> <li>Re-training: re-train with parameters obtained in the above period on data from 4/1/2021 to 4/30/2021</li> </ul> <p><br></p> <h5>After the train data is updated</h5> <p>After the train data was updated, I tested both.<br> a. whether August of the previous regular season could be predicted well (2019. except for 2020) <br> b. 
whether the last month of the same season could be predicted well.<br> The parameters obtained for each were used to make each prediction, and the final submission was weighted a and b.</p> <ul> <li><p>Validation a: </p> <ul> <li>Training : 4/1/2019 to 7/30/2019 </li> <li>Validation : 8/1/2019 to 8/31/2019</li> <li>check accuracy and obtaining parameters in this period</li></ul></li> <li><p>Validation b: </p> <ul> <li>Training : 4/1/2021 to 6/17/2021</li> <li>Validation : 6/18/2021 to 7/17/2021</li> <li>check accuracy and obtaining parameters in this period</li></ul></li> <li><p>Re-training: <br> Re-train on data from 2021/4/1 to 7/17 with parameters obtained in the above period, respectively.</p></li> </ul> <p><br></p> <p>Final sub 1: <br> Predicted value of the model made with the parameters obtained in verification a × 0.7 +<br> Predicted value of the model made with the parameters obtained in verification b × 0.3</p> <p>Final sub 2:<br> Predicted value of the model created with the parameters obtained in verification a × 0.3 + Predicted value of the model created with the parameters obtained in verification b × 0.7</p> <p><br></p> <hr> <h5>postscript</h5> <p>Since my solution does not use lag-target, I believe that it is inferior to models that use lag-target for training and inference if we only look at the results of the first week.<br> However, I imagine that the accuracy of these models using lag-targets may gradually decrease in future re-runs, and I expect that there will be a few shake-up/down.<br> (But first, I need to get the next rerun running correctly…)</p> <p>Thank you for reading this far!</p>
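<p>As an illustration of the "influence of the teams playing that day" features above, a small pandas sketch; column names such as homeId/awayId and the table layout are assumptions.</p> <pre><code>import pandas as pd

def daily_team_follower_share(games, team_followers):
    # games: one row per game with columns ['date', 'homeId', 'awayId']
    # team_followers: Series mapping teamId to its current Twitter follower count
    total = team_followers.sum()
    teams_playing = pd.concat([
        games[["date", "homeId"]].rename(columns={"homeId": "teamId"}),
        games[["date", "awayId"]].rename(columns={"awayId": "teamId"}),
    ])
    teams_playing["followers"] = teams_playing["teamId"].map(team_followers)
    # share of all league followers accounted for by the teams on that day's schedule
    return teams_playing.groupby("date")["followers"].sum() / total
</code></pre>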
MLB Player Digital Engagement Forecasting
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLB Player Digital Engagement Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, many thanks to MLB and Kaggle for hosting this great competition! It was a very funny and interesting competition with a really surprising ending for me. I completed the first solution that I was happy with almost at the latest possible moment and thus my first submission was on the day before the deadline. In fact, I seriously considered not submitting at all…🙈 <br> I provided the link to the final notebook used for my best submission at the end of this post. Unfortunately it is certainly not as clean and nicely documented as a typical public notebook. But I hope the following documentation will be sufficient to get the basic ideas used for the solution.</p> <h1>Overview</h1> <p>Frist a very short overview for those who don't have time or are not interested in the details.<br> I used a neural network consisting of a GRU and several processing layers which was trained from scratch (the first 5 hours of the 6 hours inference time) in the Kaggle kernel (with GPU runtime). The main idea was to build a model that sequentially processes the provided “raw“ input information related to a certain time span to gather the relevant information for the final prediction (so that no noteworthy feature engineering is needed). <br> Since I didn't have any other models for a real ensemble, I used at least a snapshot ensemble for the submitted prediction.</p> <h1>Modeling approach</h1> <p>Heuristically one would expect that the past targets of a player coupled with the player performance features (related to the “playerBoxScores“ - information) would be the most crucial information to predict future values. However, while the "performance features" are available for the complete inference time span, the targets are only available until the last day of the training data. One of the basic assumptions was that a suitable and consistent simulation/treatment of this time-gap during training would be crucial for a well performing model.<br> The model uses features related to a 96 day interval previous to the target day as input. <br> First of all the past targets (scaled with log(1+x) to reduce their partly extremely "peaked nature") were provided as input. To simulate the time gap without target information the target values for the last n days (where n is chosen randomly and n&lt;=32) are simply set to 0. In addition a target mask was added to mark if the target information was provided on the respective day or not:<br> <a href="https://postimg.cc/SncW68s4" target="_blank"><img src="https://i.postimg.cc/ht5pq1RX/MLB.png" alt="MLB.png"></a><br> Moreover, (like in almost all approaches) the daily performance features were added (e.g. “playerBoxScores“ like “groundOuts“, “runsScored“ and “homeRuns“) as well as some other features like the “positionCode“, “teamId“, “statusCode“, "numberOfFollowers" and the "month". 
Non categorical features were scaled to roughly the same order and missing values were replaced by -1 (otherwise the features take non-negative values).<br> <a href="https://postimg.cc/QKBcW0Sq" target="_blank"><img src="https://i.postimg.cc/brggW5t4/MLB-model-2.png" alt="MLB-model-2.png"></a><br> First the input features are fed to an Embedding Block consisting of several Dense layers with leaky relu activation and processed by 2 (linear) Residual Blocks so that for each of the input days reasonable embedding vectors can be created (So up to this point there was no exchange of information between different days).<br> Afterwards, the day-wise feature information is processed in positive time direction by the GRU layer with hidden size 128 (motivated by the picture of an analyst who observes the performance of a player and the fan engagement over a certain time span to gather the relevant information for the prediction of future values). <br> To simplify the conservation of the day-wise input information, the output of the GRU layer is concatenated with its input. Finally, the output of the previous steps is processed by a sequence of 3 Dense layers and returned with relu activation.<br> For regularization Dropout layers were added.</p> <h1>Model training</h1> <p>The model was trained from scratch in the Kaggle kernel (GPU) for a period of 5 hours so that the remaining hour could be used for the inference process (no data apart from the competition data was added). The epoch length was defined such that the training time corresponds to roughly 30 epochs and for validation May 2021 was used.<br> Adam optimizer was used for the model training and the learning rate was reduced with exponential decay after each epoch where the base was chosen such that the learning rate starts at 2e-3 and ends at 5e-4. The batch size was set to 64 and for regularization weight decay of 6e-5 was added.<br> For each training sample a random player is chosen as well as a random time gap for the time span between the day for which the targets shall be predicted and the last day where the information of past targets is added to the model input. Corresponding to the announced length of the evaluation period the maximal value for this time gap was set to roughly one month (32 days). For a more efficient usage of the data also the targets for the period previous to the actually considered day belonging to this time gap were predicted. Probably it would have been smarter to predict all days of the maximal time span (i.e. 32 days) in each iteration. Otherwise model bias to shorter time gaps (where the correlation to the past target information is probably larger) might be introduced because for the applied approach targets with a time gap of a few days are predicted almost in each iteration while those related to a time gap of 32 days are only predicted when the random interval takes its largest possible value. <br> Moreover, the assumption that the model is able to reliably capture the time span between the last day with target information and the day for which the targets shall be predicted, is possibly too optimistic. Perhaps the addition of positional encodings (providing information about the length of the respective time gap) would be helpful in this regard and could lead to a further improvement of the model.</p> <h1>Inference</h1> <p>The inference is done in the final hour of the kernel runtime. 
Since only one model architecture and type was trained, at least a snapshot ensemble was used to introduce a bit of diversity. To this end, after each training epoch the model weights and the related validation loss were saved. The 6 best “snapshot-models“ (with the lowest validation loss) were used for predicting the targets during the test period. These predictions were simply averaged, clipped to the relevant range (0-100) and submitted as final prediction.</p> <h1>Final comments</h1> <p>Many other architectures with alternatives to the GRU layer were used like LSTM, CNNs or a transformer. A direct comparison is hard because obviously the optimal parameters (layer units, number of layers, learning rate,…) are typically different for different architectures but the GRU layer outperformed the alternatives at least under the chosen circumstances. <br> Also the addition of different normalization methods (batch norm, layer norm,…) was tested, but without success.<br> Most time was clearly invested for the optimization of the model and only few time was used for feature selection. So there is probably still potential for improvement by dropping or adding certain features.<br> Finally, many thanks to the great input from many participants in this competition. 🙏E.g. this <a href="https://www.kaggle.com/ryanholbrook/getting-started-with-mlb-player-digital-engagement" target="_blank">notebook</a> was a great help to get started with this competition and some stuff in my final notebook is certainly still based on it. </p> <p>My final notebook: <a href="https://www.kaggle.com/ph0921/mlb-final-1" target="_blank">https://www.kaggle.com/ph0921/mlb-final-1</a></p>
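<p>A minimal numpy sketch of the target-masking step described in the training section above. Array shapes and the exact gap distribution are assumptions consistent with the text (96-day window, gap of up to 32 days).</p> <pre><code>import numpy as np

def mask_recent_targets(target_seq, max_gap=32, rng=np.random):
    # target_seq: (96, n_targets) log1p-scaled targets for the 96-day input window
    seq = target_seq.copy()
    gap = rng.randint(1, max_gap + 1)            # random gap of 1..32 days
    mask = np.ones((len(seq), 1), dtype=seq.dtype)
    seq[-gap:] = 0.0                             # hide the targets inside the gap
    mask[-gap:] = 0.0                            # ...and tell the model they are hidden
    return np.concatenate([seq, mask], axis=1)   # masked targets + availability mask as model input
</code></pre> <p>At inference time the same mask is built from the real gap between the last day with known targets and the prediction day, so training and test inputs look alike.</p>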
MLB Player Digital Engagement Forecasting
22nd solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLB Player Digital Engagement Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for comments.</p> <p>This is the analyzed result of no hitter.</p> <p><img src="https://raw.githubusercontent.com/chumajin/test/main/nohitter.JPG" alt=""></p> <p>if no hitter is 1, target is very high. So, I adopted this median values when no hitter is 1.</p> <p>This is the validation result of 2021 April. The difference of predictions and grand truth for Shohei Ohtani was too big. So I add the difference of it to prediction. But this is maybe not good because the score was getting better only in the case of Shohei Ohtani. Honestly, I tried to making models for Shohei Ohtani, but I could not.</p> <p><img src="https://raw.githubusercontent.com/chumajin/test/main/Shohei.JPG" alt=""> </p> <p>First of all, thank you very much to those who supported me, upvoted EDA, and excited me! I enjoyed experiencing various things. The top ones are really amazing. I respect them.</p> <p>I shared my solution in detail. (日本語解説ありです). <br> This is the 12th solution before the train data is updated(leaked).</p> <p><a href="https://www.kaggle.com/chumajin/12th-before-train-update-solution-english" target="_blank">https://www.kaggle.com/chumajin/12th-before-train-update-solution-english</a></p> <p>But I don't know the actual result. If I got worse score or submission error, please laugh me… ( I will laugh at my own)</p> <p>[Short summary]</p> <ul> <li><p>I used only optuna and LGBM. It was Public LB1.3019 by ensemble what was created by changing the features.</p></li> <li><p>CV is 5 kfold by the average of from target1 to 4.</p></li> <li><p>In my 1st phase, I used the merged code that was published by kaggle staff. This LGBM model got 1.3490 score.</p></li> <li><p>In my 2nd phase, I added the features about the statics of target values more than 31 days ago because we know the correct answer. This model got better to 1.3373. I used GCP because of memory insufficient.</p></li> <li><p>In my 3rd phase, I used the log scale of target value and omitted 0 and 100 value. I fount the clean histogram if I use the log scale of target values. This model got better to 1.3256.</p></li> <li><p>I ensemble models made with other features and models made for each position.<br> This models got better to 1.3144.</p></li> <li><p>Moreover, no hitter is very high targets value. So I correct it(1.3073). And I found Shohei Ohtani found that LGBM's predictions did not match, and corrected the difference(maybe this is overfit).<br> Finally I got to 1.3019.</p></li> </ul> <p>Thank you so much, good luck for everyone !</p>
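<p>A small sketch of the target log-scaling described above. The write-up only says the targets were log-scaled with the exact 0 and 100 values omitted, so log1p and the clipping on the way back are assumptions.</p> <pre><code>import numpy as np

def to_log_target(y):
    # drop rows whose target is exactly 0 or 100, log-scale the rest
    y = np.asarray(y, dtype=float)
    keep = ~np.isin(y, (0.0, 100.0))
    return np.log1p(y[keep]), keep

def from_log_target(pred_log):
    # invert the transform and clip back into the valid 0-100 range
    return np.clip(np.expm1(pred_log), 0.0, 100.0)
</code></pre>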
MLB Player Digital Engagement Forecasting
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLB Player Digital Engagement Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our final model was a blend of 6 GBM models (5 Lightgbm, 1 Xgboost) all trained on the same features. We validated on the last month of data and then retrained the model on the entire dataset.</p> <ul> <li>LightGBM - objective=”regression_l1”</li> <li>LightGBM - objective=”regression_l2” - targets scaled by double square root</li> <li>LightGBM - objective=”regression_l1” - boost_from_average=True</li> <li>LightGBM - objective=”regression_l1”, 'max_depth': -1</li> <li>LightGBM DART - object=’regression_l1’, boosting=’dart’</li> <li>Xgboost - targets scaled by double square root</li> </ul> <p>Our final feature set had over 1000 features that pretty fell under 3 categories:</p> <ul> <li>Target aggregates - All-time and rolling 12-month historical mean/variance of target for each player.</li> <li>Last 20 days/games of various stats for each player. For example, <code>strikeouts_1_day_ago</code>, <code>strikeouts_2_days_ago</code>, …, <code>strikeouts_20_days_ago</code> for each player. </li> <li>Features based on our baseball knowledge/what gets people to tweet. For example, a walk-off hit/home run will often trigger some engagement. Other features were no-hitters, win probability added by a player (a proxy for if they played a good game or were involved in "high leverage" plays), ranking in the home run race, was a player ejected, ERA and ERA ranking, and more.</li> </ul> <p>Some of our most important features were:</p> <ul> <li>numberOfFollowers - most recent value for a players number of Twitter followers</li> <li>numberOfFollower_delta - the change in Twitter followers between the most recent 2 months</li> <li>monthday - integer value representing current month and day (mmdd) e.g. June 4th would have a value of <code>604</code>.</li> <li>{target}_p_var - Historical variance of {target} for player</li> <li>{target}_p_gameday_mean - Historical mean of {target} for player on gamedays</li> <li>roll12_{target}_p_mean - {target} mean for player for the previous 12 months</li> <li>roll12_{target}_p_var - {target} variance for player for the previous 12 months</li> <li>wpa_daily_max - League-wide daily maximum value of Win Probability Added. An approximate (exact base-out states are not available, so calculation is approximate) WPA value is calculated for each player. </li> <li>homeRuns_rank - Players’ home run ranking </li> <li>walk_off_league - Was there a walk off hit in the league that day?</li> <li>days_since_last_start - number of days since a player last pitched. Useful for the model to learn if a player is likely to pitch on the day engagement is measured.</li> </ul> <p>Submission Notebook: <a href="https://www.kaggle.com/brandenkmurray/mlb-predict-final" target="_blank">https://www.kaggle.com/brandenkmurray/mlb-predict-final</a></p>
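<p>For the two models above whose targets were "scaled by double square root" (the regression_l2 LightGBM and the Xgboost model), the transform and its inverse are simply the following; the clipping range is an assumption based on the 0-100 target scale.</p> <pre><code>import numpy as np

def double_sqrt(y):
    # y ** 0.25: compresses the heavy right tail of the engagement targets
    return np.sqrt(np.sqrt(y))

def inv_double_sqrt(pred):
    # raise back to the 4th power and clip to the valid target range
    return np.clip(pred ** 4, 0.0, 100.0)
</code></pre>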
MLB Player Digital Engagement Forecasting
5th Place Solution: My First Competition and First Gold
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLB Player Digital Engagement Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>thanks<br> <a href="https://www.kaggle.com/zacchaeus" target="_blank">@zacchaeus</a> </p> <p>Hi all</p> <p>I would like to thank kaggle and the organizers for such a good competition. <br> I also thank my teammate(<a href="https://www.kaggle.com/shinnyayoshida" target="_blank">@Hyper-Positive-Yancy</a>), he did some EDA and tuned NN.</p> <h1>Models Used</h1> <p>Our final ensemble consisted of</p> <ul> <li>Lightgbm X 8</li> <li>CATBOOST X 4</li> <li>ANN X1</li> </ul> <h1>Data usage period</h1> <ul> <li><strong>in-season sampling</strong><br> I only use in-season data.<br> Even if I extracted out-of-season data from our data, we could not confirm any deterioration in accuracy.<br> But, In 2019 data , I eliminated a lot of data. In this year, The retirement match of the great Ichiro Suzuki was held in Japan. He didnt play well in the retirement game, but he had a high engagement.<br> I considered this data outlier and deleted it. From this fact, I Concluded, Special matches(like retirement match) should be removed from the data</li> </ul> <h1>Feature engineering</h1> <ul> <li><p><strong>gamesStartedPitching lag feature</strong><br> As I mentioned <a href="https://www.kaggle.com/c/mlb-player-digital-engagement-forecasting/discussion/261357#1446144" target="_blank">here</a><br> I used "gamesStartedPitching lag feature". Starting pitcher sometimes pitch (one time per 4~5days) but they get high target1~4. I made sub model of predicting "this pitcher will pitch tomorrow". If this pitcher will pitch tomorrow, he will get high target.<br> "This pitcher will pitch tomorrow" is easy to predict, because MLB pitchers have rotation (almost of them pitch one time per 4~5days)<br> when I use this feature with "target-lag-feature", my public LB score became better. </p></li> <li><p><strong>tomorrow game exists or not</strong><br> In this competition, we predict "tomorrow" engagement, but stats of player is today stats. Even if the players did well today, we could see that the engagement value would be low if there was no match tomorrow.<br> So, Whether or not there will be a match tomorrow will be an important factor in predicting engagement</p></li> <li><p><strong>target-lag-feature</strong><br> As I mentioned before, this feature is effective when used with "gamesStartedPitching lag feature". 
Therefore, I thought it would be very meaningful to use the target-lag-feature of 3 to 7 days.<br> But, Second half of the evaluation period, I have to use predicted value of target data.<br> It may be one of the factors that worsen our second half model performance.</p></li> <li><p><strong>batter contribution</strong></p></li> <li><p><strong>pitching contribution</strong><br> In order to evaluate all athletes fairly, we used the evaluation index for athletes.<br> I mainly use two evaluation index " batter contribution" and "pitching contribution"<br> I found " batter contribution" <a href="http://maddog31.xyz/baseball-web/contribution_degree/contribution_degree2/" target="_blank">here</a> sorry this page is written in japanese.<br> I found "pitching game score" <a href="https://www.mlb.com/glossary/advanced-stats/game-score" target="_blank">here</a><br> As a problem, even with the same position pitcher, there are types of relief and starting pitcher, and the pitching game score of the relief tends to be low. So the probability that the player will throw as starting pitcher has also been added as a feature.</p></li> </ul> <h1>validation scheme</h1> <p>For training and validation, I used data from each in-season.<br> After the train data was updated, I changed validation scheme.</p> <p>Training : 4/1/2018 to 6/30/2021<br> Validation : 7/1/2021 to 7/31/2021</p>
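<p>The write-up actually trains a sub-model to predict "this pitcher will start tomorrow"; the rule-based proxy below only illustrates the underlying rotation logic (a starter usually pitches again after 4-5 rest days), and the column names are assumptions.</p> <pre><code>import pandas as pd

def add_rotation_features(df):
    # df: one row per (playerId, date), with a 0/1 'gamesStartedPitching' column and datetime 'date'
    df = df.sort_values(["playerId", "date"]).copy()
    started = df["date"].where(df["gamesStartedPitching"] > 0)
    df["last_start_date"] = started.groupby(df["playerId"]).ffill()
    df["days_since_last_start"] = (df["date"] - df["last_start_date"]).dt.days
    # a starter on a 5-man rotation usually pitches again after 4-5 rest days
    df["likely_starts_tomorrow"] = (df["days_since_last_start"] >= 4).astype(int)
    return df
</code></pre>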
MLB Player Digital Engagement Forecasting
6th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLB Player Digital Engagement Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a> </p> <p>Thank you, congratulations to you too! Your solution was very interesting for me.<br> Regarding this competition, Implementing a source code is very complicated as you pointed out..<br> We decided to create classes with a common rules, and restricted using global area.</p> <p>I list some typical rules below.</p> <ul> <li>The Global area must be limited to the part of bellow.<ol> <li>loading data</li></ol><ul> <li>We could pprevent mistakes to use common table.</li> <li>We could get the correct target by 31 July.</li></ul><ol> <li>instantiating models</li> <li>preprocess</li> <li>inference</li></ol></li> <li>We created methods with simple rules. The following is image.</li> </ul> <pre><code>data = read_data() c1 = Class1() c2 = Class2() c3 = Class3() # cumulative features (targetLag, yesterdayScore, sinceLastGames, ..) data1 = c1.preprocess(data) data2 = c2.preprocess(data) data3 = c3.preprocess(data) for test_df, sample in iter_test: p1, data1 = c1.pred(test_df, sample, data1) p2, data2 = c2.pred(test_df, sample, data2) p3, data3 = c3.pred(test_df, sample, data3) p_final = p1 * x1 p2 * x2 + p3 * x3 </code></pre> <ul> <li>Probably "cumulative features" are the most of difficult part in this competition, and I think many people made mistakes when they updated this datas.</li> <li>tea and kaido choose to keep it as classes variables, and I decided to update at global area. (e.g. above)</li> <li>In any case, we made many mistakes..</li> <li>I think it was good that we were checked the behavior in various patterns (earlier or later start date of the test period, missing data, ..etc.) finally.</li> </ul> <p>First of all, I also would like to thank this competition host and all competitors. It has been about two years since I got my first silver medal in the APTOS competition, and I'm finally a Kaggle master. I'm very pleased now. I'm still new, but I'll continue to learn from Kaggle with a sincere attitude.<br> I am also happy that I could get a gold medal in a competition related to my favorite sport what is baseball. Baseball may be a minor sport on the global stage, but it's very popular in Japan, and I watch NPB (Nippon Professional Baseball Organization) games every week. I wish this fascinating and strategic would become more popular around the world!</p> <h1>To member (<a href="https://www.kaggle.com/tea1013" target="_blank">tea</a> &amp; <a href="https://www.kaggle.com/nomorevotch" target="_blank">sqrt4kaido</a>)</h1> <p>I've been participated in competitions as a solo, and this MLB competition was my first experience participating as a team. I learned a lot of things when I participated competitions by solo, but I learned even more this time. 
I could get a gold medal thank to your a lot of ideas.</p> <h1>tea's part</h1> <h2>Models</h2> <ul> <li>LGBM lag / no lag</li> <li>CatBoost lag / no lag<br> Use <strong>optuna</strong> to optimize hyper-parameters.</li> </ul> <h2>Train &amp; Valid</h2> <ul> <li>Use 2018-01-01 ~ 2021-05-31 data for train, and 2021-06-01 ~ 2021-07-17 for validation</li> <li>During the test period, I use 2018-01-01 ~ 2021-06-30 data for train</li> <li>Limit data to in-season data only</li> </ul> <h2>Features mainly effective</h2> <ul> <li>target lags (for 45 days)</li> <li>player target statistics (mean, median, max, min, var, skew, kurt)<ul> <li>Use the respective statistics for April, May, and June 2021.</li> <li>Also use the respective statistics for game day and no game day.</li></ul></li> <li>team target statistics (mean, median, max, min, var, skew, kurt)<ul> <li>Use statistics for June 2021 only</li></ul></li> <li>daysSinceLastGame / Roster</li> <li>days from the beginning of the year / month</li> <li>day of week</li> <li>years from the debut year</li> <li>age</li> <li>position</li> <li>player status</li> <li>playerBoxScores features<br> <br></li> </ul> <h1>sqrt4kaido part</h1> <h2>Models</h2> <ul> <li>LGBM only no lag</li> </ul> <ol> <li>The first seed, only the players in test are used for training.</li> <li>Second seed, only the players in test are used for training.</li> <li>The first seed. All the players are used for training.<br> Use <strong>optuna</strong> to optimize hyper-parameters.<br> Before update train.csv, the best score for my single model was 1.3146.</li> </ol> <h2>Train &amp; Valid</h2> <ul> <li>training phase<br> Use 2018-01-01 ~ 2021-03-31 data for train, and 2021-04-01 ~ 2021-04-30 for validation</li> <li>after update train.csv<br> Use 2018-01-01 ~ 2021-05-31 data for train, and 2021-06-01 ~ 2021-06-30 for validation<br> I did not use the data for the first half of July because, unlike the other months, the values were very small.</li> <li>Limit data to in-season data only &lt;- important</li> </ul> <h2>Features mainly effective</h2> <ul> <li>player target statistics (mean, median, max, min, var, skew, kurt)<br> Use the respective statistics for June 2021. This feature is leaked to the valid data, but we used it because it was also valid for public testing.</li> <li>team target statistics (mean, median, max, min, var, skew, kurt)<br> same as above</li> <li>season info<br> pre/post season? regular season? alltardate? etc.</li> <li>award flag</li> <li>daysSinceLastGame / Roster &lt;- important</li> <li>day of week</li> <li>years from the debut year</li> <li>age</li> <li>position</li> <li>player status</li> <li>playerBoxScores features</li> </ul> <h2>did not work</h2> <ul> <li>Create separate models for the data with and without matches.<br> After update train.csv, it stopped working for some reason.</li> <li>Undo the scale and learn<br> This time the target is scaled from 0 to 100, but there was a <a href="https://www.kaggle.com/c/mlb-player-digital-engagement-forecasting/discussion/253940" target="_blank">discussion</a> that it seems to be max-scaled for each day.<br> So I tried training with the scale back, and the CV results improved. 
However, when we actually applied it to the test data, the score worsened because the wrong scaling was applied when the max prediction was shifted, so all the predictions for the day were shifted.<br> I also tried scaling to the average of the same month in the previous year, but this did not improve the accuracy.<br> <br></li> </ul> <h1>Makabe's part</h1> <h2>Models</h2> <ul> <li>I used three models.<ul> <li>Simple NN (add lags)</li> <li>Light GBM (add lags)</li> <li>Light GBM (none lags)</li></ul></li> <li>The reason to use three type models is the following.<ul> <li>To ensemble is useful making robust models.</li> <li>Regarding the lag feature, <strong>which is the key to this competition</strong>, I noticed the following fact in the middle of the competition.</li> <li>(If we can get the correct target information as a lag early in the test period.)<ul> <li>Light GBM (add lags) &gt; Simple NN (add lags) &gt; Light GBM (none lags)</li></ul></li> <li>(If we have to use the target information predicted by model as lag late in the test period.)<ul> <li>Light GBM (none lags) &gt; Simple NN (add lags) &gt; Light GBM (add lags)</li></ul></li> <li>From the above, I can see that lag feature is very effective if it contains the correct values. However, because of its effectiveness, there is also a possibility of overfitting. This tendency was especially for LGB.</li> <li>I decided to control the blend ratio of the three models according to the number of days in test period to balance lag features (effectiveness &amp; overfitting).</li> <li>The method of controlling the size of lag features depending on test time period was pointed out by <a href="https://www.kaggle.com/c/mlb-player-digital-engagement-forecasting/discussion/256620" target="_blank">other Kagglers</a> too. It was very effective method and we could this knowledge for our entire team ensemble too.</li></ul></li> </ul> <h2>Training / Validation</h2> <ul> <li>I used 5 folds.</li> </ul> <ol> <li>&lt; 20200801(training) &amp; 2020/08/01 - 2020/09/30(validation)</li> <li>&lt; 20200901(training) &amp; 2020/09/01 - 2020/10/31(validation)</li> <li>&lt; 20210401(training) &amp; 2021/04/01 - 2021/05/31(validation)</li> <li>&lt; 20210501(training) &amp; 2021/05/01 - 2021/06/30(validation)</li> <li>&lt; 20210601(training) &amp; 2021/06/01 - 2021/07/xx(validation)</li> </ol> <ul> <li>I decided on period with the following two points in mind.<ul> <li>Season with irregular schedule due to Covid-19 (2020).</li> <li>The trend of target feature is very different between the first and second half of the season.</li></ul></li> <li>However, I hadn't been able to determine the period above based on a deep consideration of this impact. I think there is room for reconsideration.</li> </ul> <h2>Features</h2> <p>The following is a list of feature what is helped improve accuracy. (There were many useful features than the below, but most of them were already in other Kaggler's notebooks, so I omit them.)</p> <ul> <li>stats of target feature<ul> <li>I labeled the data by period as follows, and I added stats that is the one before as features.</li> <li>2018.3 &amp; 2018.4 (=label1), 2018.5 &amp; 2018.6 (=label2), …</li> <li>I thought that by doing this, and I could use stats of target until September 2021. However, I knew after the competition, test period was until August 31, so I didn't need to use the stats of target two months earlier.</li> <li>I used mean, median, distribution,.. 
and more.</li> <li>I calculated the stats of targets by defense position, roster status, and team, respectively.</li> <li>There was difference of targets by roster status. (The difference was especially noticeable in target1.)</li></ul></li> <li>number of days what have elapsed since the last time player participated in game.<ul> <li>This ideas was from <a href="https://www.kaggle.com/nomorevotch" target="_blank">@sqrt4kaido</a>. This variable was based on the fact that the engagement of players who haven't played for a long time tends to decline over time, and contributed to improving accuracy.</li></ul></li> <li>score from the day before yesterday<ul> <li>I thought the success of a game is continuous, and player's popularity is not necessarily based only on the previous day's performance, but past success is a factor.</li> <li><a href="https://www.kaggle.com/c/mlb-player-digital-engagement-forecasting/discussion/266596" target="_blank">Team AutoMLB seems to this point</a>, and incorporated player scores into the model for a considerable period of time. I probably should have included more time periods than just the day before yesterday.</li></ul></li> <li>A lot of stats<ul> <li>I used various features based on <a href="https://sabr.org/sabermetrics" target="_blank">Sabermetric</a>. (K/BB, RC, OPS, FIP.. and more)</li> <li>This is especially for pitchers, I thought the ratio divided by the number of innings pitched or the number of pitches thrown tends to be a better representation of the player than the absolute number. (It's no surprise that starting pitchers strike out more than relief pitchers.) For this reason, I also incorporate many indicators divided by the number of innings or throw of pitches(K9, HR9.. and more).<br> <br></li></ul></li> </ul> <h1>Doing by team</h1> <ul> <li>In order to ensemble easily, it was necessary to improve the reproducibility and reusability of each member’s source code. Therefore, we unified the specifications and developed the prediction classes with common methods.</li> <li>In this competition, it was very important to finsh the process correctly, because the Time Series API is very complicated. Therefore, we created a LocalTestClass to reproduce the Time Series API in local environment.</li> <li>By using this class, we could verify the accuracy in a larger period.</li> <li>We controlled the ensemble ratio for the fact that the effectiveness of lag feature weakens over time.<br> <a href="https://postimg.cc/gwwHCDJm" target="_blank"><img src="https://i.postimg.cc/g0st6tbX/MLB-sub.png" alt="MLB-sub.png"></a></li> </ul>
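<p>A sketch of how the blend ratio can be shifted away from the lag-feature models as the test period progresses, which is the idea behind the ensemble-ratio control described above. The actual weight schedule used by the team is not published; the linear ramp below is an assumption.</p> <pre><code>import numpy as np

def blend_weights(days_since_last_known_target, max_days=31):
    t = np.clip(days_since_last_known_target / max_days, 0.0, 1.0)
    w_lgb_lag = 0.5 * (1.0 - t)        # strongest right after the last known targets
    w_lgb_nolag = 0.2 + 0.4 * t        # takes over late in the test period
    w_nn_lag = 1.0 - w_lgb_lag - w_lgb_nolag
    return {"lgb_lag": w_lgb_lag, "nn_lag": w_nn_lag, "lgb_nolag": w_lgb_nolag}

# daily prediction = weighted sum of the three models' outputs, e.g.
# w = blend_weights(gap_in_days)
# p_final = w["lgb_lag"] * p1 + w["nn_lag"] * p2 + w["lgb_nolag"] * p3
</code></pre>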
MLB Player Digital Engagement Forecasting
8th place solution (Moro & taksai)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLB Player Digital Engagement Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for the fun competition and everyone who participated.<br> And thank you to my teammate ( taksai <a href="https://www.kaggle.com/tsaito21219" target="_blank">@tsaito21219</a> ).</p> <p>I share our team's solution.</p> <h1>summary:</h1> <ul> <li>LSTM model</li> <li>valid: 2021-05, 2021-06, 2021-07</li> <li>ensemble LSTM:MLP:LightGBM=5:1:1</li> </ul> <h1>1. validation</h1> <ul> <li>private-data(2021-08) is in-season, so we use in-season data as validation, and select the month near 2021-08.</li> <li>valid: 2021-05, 2021-06, 2021-07</li> <li>prepared some patterns as training-data while shifting the period. </li> <li>trained 10-fold for each model</li> </ul> <p><a href="https://postimg.cc/nCds5N1C" target="_blank"><img src="https://i.postimg.cc/pLvD9HhY/img1-validation.png" alt="img1-validation.png"></a></p> <h1>2. preprocess data</h1> <ul> <li>features: total number of 232<ul> <li>mean/median/std/min/max of target per playerId last month</li> <li>join each table by key(date/playerId/teamId), use the almost feature of each table</li> <li>use all tables except events.csv</li> <li>not use target-lag-feature</li></ul></li> <li>didn't use target-lag-feature, because it's risky. However, if spliting the model according to forecast date and use only fixed values, we may have improved the score.</li> </ul> <h1>3. model</h1> <ul> <li>LSTM (keras)<ul> <li>input: features in forecast-day + past-days(to 5 days ago)</li> <li>output: multi-output (target1, target2, target3, target4)</li> <li>model: Input(6days) &gt; TimeDistributed(Dense) &gt; LSTM &gt; LSTM &gt; Dense(256&gt;128&gt;64&gt;4)</li></ul></li> <li>other model:<ul> <li>MLP (input: features in only forecast-day)</li> <li>GBDT (input: features in only forecast-day)</li></ul></li> </ul> <p><a href="https://postimg.cc/dDwL0JHh" target="_blank"><img src="https://i.postimg.cc/T2WmCwnJ/img1-lstm.png" alt="img1-lstm.png"></a></p> <h1>4. ensemble</h1> <ul> <li>LSTM:MLP:GBDT = 5:1:1</li> <li>score in public LB (evaluation data: 2021-05)<ul> <li>LSTM: LB=1.28</li> <li>MLP : LB=1.32</li> <li>GBDT: LB=1.36</li> <li>ensemble: LB=1.26</li></ul></li> </ul> <h1>5. measures to avoid submission-errors</h1> <ul> <li>add a lot of exception handling so that it works even if the table or data is missing.</li> <li>use same function in both training and predicting</li> <li>create dummy data of 2021-08 and 2021-09, and confirmed to work using dummy data on local PC and Kaggle'notebook.</li> <li>use API Emulator ( <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a> ). Thank you !<br> <a href="https://www.kaggle.com/nyanpn/api-emulator-for-debugging-your-code-locally" target="_blank">API Emulator for debugging your code locally</a></li> </ul> <p>Thank you to my teammate. Let's do it together again!</p> <p>Thank you. Congratulations to you! <br> API simulator is very helpful for us. 
Thanks for sharing.</p> <p>I used the Keras Embedding layer.<br> <a href="https://keras.io/ja/layers/embeddings/" target="_blank">link</a></p> <ol> <li>Transform each category name to an integer id (using LabelEncoder).</li> <li>Convert the id to a vector with an Embedding layer.</li> <li>Concatenate it with the numeric features.</li> </ol> <p>For example, when I have 30 numeric variables and 2 categorical variables, I process them with code like the following.</p> <pre><code>from tensorflow.keras.layers import Input, Dense, Embedding, Concatenate
from tensorflow.keras.models import Model

# 32 input columns: 30 numeric features followed by 2 label-encoded categorical ids
input_x = Input(shape=(32, ))

# numeric part
x_num = input_x[:, :30]
x_num = Dense(20)(x_num)

# categorical feature 1: 100 distinct ids, embedded into 30 dimensions
x_cat1 = input_x[:, 30]
x_embed1 = Embedding(input_dim=100, output_dim=30)(x_cat1)

# categorical feature 2: 200 distinct ids, embedded into 50 dimensions
x_cat2 = input_x[:, 31]
x_embed2 = Embedding(input_dim=200, output_dim=50)(x_cat2)

# concatenate the numeric and embedded categorical branches
x = Concatenate()([x_num, x_embed1, x_embed2])
out = Dense(4)(x)

model = Model(inputs=input_x, outputs=out)
model.summary()
</code></pre>
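<p>As a reference for the LSTM architecture listed above (input over 6 days, TimeDistributed(Dense), two LSTMs, then a 256-128-64-4 head), a compact Keras sketch; hidden sizes other than the head are assumptions.</p> <pre><code>from tensorflow.keras import layers, Model

def build_lstm_model(n_days=6, n_feat=232):
    inp = layers.Input(shape=(n_days, n_feat))          # forecast day + 5 past days
    x = layers.TimeDistributed(layers.Dense(128, activation="relu"))(inp)
    x = layers.LSTM(128, return_sequences=True)(x)
    x = layers.LSTM(128)(x)                             # keep only the last day's state
    for units in (256, 128, 64):
        x = layers.Dense(units, activation="relu")(x)
    out = layers.Dense(4)(x)                            # multi-output: target1..target4
    return Model(inp, out)
</code></pre>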
MLB Player Digital Engagement Forecasting
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLB Player Digital Engagement Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was one heck of a competition. There was no shortage of new things to try at any point given the extent of the data. For that, I'd like to thank both the MLB and Kaggle.</p> <h2>High-level strategy</h2> <p>It's no secret that lagged features were very important for this competition. I found this, too, and for a while my solutions were heavily reliant on lagged target variables. I suspect most high scoring submissions were, as well. However, there was a lot of uncertainty about what the gap between the ground truth and the inference period would be. I had considered doing some augmentations for robustness, like dropping out the n most recent days of target lags randomly, but this just killed my model's performance. So, I ended up abandoning this strategy. While I do have some target feature aggregations in my solution, as far as lags go, my final submission relies <strong>only on lags of known features</strong> like box score, standings, and games features.</p> <h2>Models used</h2> <p>I tried out a ton of different models that didn't work. Here's a quick summary:</p> <p>What didn't work at all:</p> <ul> <li>Denoising autoencoders</li> </ul> <p>What worked decently, but not well enough:</p> <ul> <li>LightGBM, CatBoost</li> <li>1D CNN (from the <a href="https://www.kaggle.com/c/lish-moa/discussion/202256" target="_blank">MoA competition</a>)</li> <li>Time series models on lagged targets (GRU, Transformer, DeepAR, Temporal Fusion Transformer)</li> </ul> <p>My final submission ended up being an ensemble of a GRU and a Transformer that rely on non-target lags. The GRU was my best model, achieving ~1.25 on its own. The Transformer scored about ~1.27 on its own, but it proved to be very useful in an ensemble.</p> <h2>Validation</h2> <p>I did a simple time series split. Before the training data was updated, I used July and August in 2020, and then April in 2021. After the updated data was released, I used August 2020 and June and July in 2021.</p> <h2>Features used</h2> <p>Below is a list of all the types of features that I used. All lagged features go back 2 weeks.</p> <ul> <li>Mean/median/std/min/max of each target from the previous month</li> <li>Embeddings of playerId, teamId, player position, and player status</li> <li>Player box score lags<ul> <li>'gamesPlayedBatting', 'hits', 'doubles', 'triples', 'runsScored',<br> 'homeRuns', 'hitByPitch', 'totalBases', 'rbi', 'stolenBases', 'assists', <br> 'gamesPlayedPitching', 'completeGamesPitching', 'shutoutsPitching', 'earnedRuns', 'winsPitching', 'strikeOutsPitching', 'hitsPitching', 'saveOpportunities', 'saves', 'holds', 'inningsPitched'</li></ul></li> <li>Lagged game features<ul> <li>'gameTimeUTC', 'wasSigned', 'wasTraded', 'teamWins', 'teamLosses', 'teamScore', 'isHome', 'teamWon', 'scoreDiff'</li> <li>The game time feature only includes the hour the game was played. 
The idea here was a lot of the digital engagement resulting from a, say, 1 PM game would manifest itself on the day the game was played, whereas the engagement from a night game would all happen on the next day.</li></ul></li> <li>Lagged standings features<ul> <li>'wins', 'losses', 'pct', 'xWinLossPct', 'divisionRank', 'lastTenWins', 'lastTenLosses'</li></ul></li> <li>Lagged cumulative features (all box score features summed up by season)</li> <li>Lagged transactional features, which were just a flag indicating if the player had either been traded or released on the particular day.</li> <li>Player followers, team followers</li> </ul> <p>I tried to use features that exploited the scaling of the data, such as the figuring out the minimum increment between target features for a player and rescaling their target features to the non-scaled "actual" feature, but it didn't really help my model at all. I was pretty surprised by this.</p> <h2>Final submissions</h2> <p>I noticed that including as much data as possible helped immensely, so I knew that I wanted to make at least one of my final solutions have models that were trained on 100% of the available data. However, my training curves for both the Transformer and GRU were <a href="https://drive.google.com/file/d/1OR2vXYhGv8W3VoDWyLDicFef79UTXjs-/view?usp=sharing" target="_blank">quite bumpy</a>. So, I took checkpoints every 200 epochs starting from step 1500 while training blind on 100% of the data. I also ensembled across several seeds.</p> <p>To be safe, my second submission used models that used the last 30 days of data available as validation data. This is really just a hedge, but I'm interested to see how it does.</p> <h2>Other notes</h2> <ul> <li>All models were trained in PyTorch using PyTorch Lightning</li> <li>Editor was VSCode and I made heavy use of the TensorBoard integration</li> <li><a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>'s submission emulator was invaluable near the end of the competition</li> <li>I used Optuna for hyperparameter tuning.</li> <li>To reduce submission bugs, I used a dataset that has example versions of roster data, game data, box score data, etc. This allowed me to have a "default" version of the data to return when it is null.</li> <li>As some other competitors noted, the predictions from even my best models look somewhat suspect. I was thrilled whenever I saw my model nail an upward spike in engagement. However, I think the MLB is acutely aware that some major variations in engagement happen because of exogenous factors like news stories. Given the MLB's fairly recent expansion of their social media endeavors, it makes sense that they would want to focus on purely the game-level factors that drive engagement, as this is all they really can act on. Maybe this is a faulty argument, but because of this I think MAE actually made pretty decent sense as a competition metric -- MSE would penalize you too much for factors that are completely unpredictable from game-level data.</li> </ul> <p>If you made it this far, thanks for reading! I had a lot of fun during this competition and can't wait to see how things shake out.</p> <p>EDIT: I've made my inference notebook public here: <a href="https://www.kaggle.com/marktenenholtz/mlb-rnn-transformer-final-public" target="_blank">https://www.kaggle.com/marktenenholtz/mlb-rnn-transformer-final-public</a></p> <p>A basic run-down of the models is that they're essentially the same, with the GRU and Transformer swapped in as different backbones. 
Other than that, they both take in the embedded categorical features as well as the lagged features, feed those into the backbone as a sequence, and then there's 3 convolutions that are applied to the output (for whatever reason, this worked better than a simple head). After the convolution, there's a set of linear layers that comprises the head of the model, and there's a unique head for each target. There are some minor differences from this in the Transformer model, but this is more or less how it works.</p> <p>Glad I could help. My code is a bit of a mess from all the craziness involved in this competition, but I'll do my best to clean it up and post it.</p> <p>EDIT: I decided to just make my inference notebook public. I added the link in the writeup.</p> <p>I just made my inference notebook public, and the link is in the writeup.</p>
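<p>A rough PyTorch sketch of the shared structure described above: embedded categoricals plus a 2-week sequence of lagged known features feeding a GRU backbone, a small convolutional block over the output, and one head per target. Layer sizes and depths are assumptions, not the tuned values.</p> <pre><code>import torch
import torch.nn as nn

class GRUBackbone(nn.Module):
    def __init__(self, n_feat, n_players, emb_dim=16, hidden=128, n_targets=4):
        super().__init__()
        self.player_emb = nn.Embedding(n_players, emb_dim)
        self.gru = nn.GRU(n_feat + emb_dim, hidden, batch_first=True)
        # convolutional block applied over the time axis of the GRU output
        self.conv = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        # a separate small head for each of target1..target4
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))
            for _ in range(n_targets)
        )

    def forward(self, x_seq, player_id):
        # x_seq: (batch, 14, n_feat) lagged features; player_id: (batch,)
        emb = self.player_emb(player_id).unsqueeze(1).expand(-1, x_seq.size(1), -1)
        h, _ = self.gru(torch.cat([x_seq, emb], dim=-1))
        h = self.conv(h.transpose(1, 2)).transpose(1, 2)
        last = h[:, -1]                                  # state at the most recent day
        return torch.cat([head(last) for head in self.heads], dim=-1)
</code></pre>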
MLB Player Digital Engagement Forecasting
[3rd place] Summary of my work
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLB Player Digital Engagement Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I’ve shared the notebook about cumcount leakage.<br> <a href="https://www.kaggle.com/nyanpn/cumcount-leakage" target="_blank">https://www.kaggle.com/nyanpn/cumcount-leakage</a></p> <p><strong><em>2021-09-10 updated</em></strong><br> The final ranking is 3rd! Thank you all :) My source code is a bit messy, but you can see it here: <a href="https://github.com/nyanp/mlb-player-digital-engagement" target="_blank">https://github.com/nyanp/mlb-player-digital-engagement</a> and <a href="https://www.kaggle.com/nyanpn/3rd-place-solution-inference-only" target="_blank">https://www.kaggle.com/nyanpn/3rd-place-solution-inference-only</a> .</p> <hr> <p>First of all, I would like to thank the hosts for organizing this competition.<br> It was a very tough competition but I learned a lot from it.</p> <p>I have no idea what my final standings will be, but I will share with you what I did.</p> <h2>Model strategy</h2> <p>Lag features are very useful in this competition, but we cannot use target information in the test data period. So we need to set an appropriate "gap" between the prediction date and the lag feature data.</p> <p>For example, if you want to make a prediction for 8/4, <br> you need to create a gap of at least 3 days because you cannot use the target information from 8/1 to 8/3.</p> <p>I divided the test data into 8 periods and used different models with different gaps in each period (gap: 0, 3, 7, 14, 21, 28, 35 ,45days).</p> <p><img src="https://raw.githubusercontent.com/nyanp/mlb-player-digital-engagement/main/docs/img/time%20series%20gap.png" alt="https://raw.githubusercontent.com/nyanp/mlb-player-digital-engagement/main/docs/img/time%20series%20gap.png"></p> <h2>Model</h2> <p>The three models were ensembled with different weights for each target.</p> <ul> <li>LightGBM</li> <li>MLP</li> <li>1DCNN</li> </ul> <p>1DCNN is the same as <a href="https://www.kaggle.com/c/lish-moa/discussion/202256" target="_blank">the 2nd place solution of MoA Competition</a>.</p> <h3>Validation</h3> <p>Time based split.</p> <ul> <li>validation before update: 2019/8, 2020/8, 2021/4</li> <li>validation after update: 2020/8, 2021/6, 2021/7</li> </ul> <p>Basically, I only adopted ideas that improved the score in all periods.</p> <h2>Features</h2> <p>I used ~440 features. 
In addition to joins and asof merge of basic tables, the following features were used:</p> <ul> <li>lag features per player<ul> <li>Average of the last 7/28/70/360/720 days</li> <li>Average over the on-seasons</li> <li>Average for the same period in the previous year</li> <li>Average of days with/without a game</li></ul></li> <li>number of events, pitch events, action events</li> <li>days from last rosters, awards, transactions and box scores</li> <li>sum of box scores in the last 7/30/90 days</li> <li>number of games and events in the day</li> <li>event-level meta feature<ul> <li>aggregation of predictions of model trained on event table</li> <li>group by (date, playerId), (date, teamId) and (date)</li></ul></li> </ul> <h3>Cumcount Leakage</h3> <p>There is a strange correlation between the cumcount of the dataframe retrieved from the Time-Series API and the target.</p> <p>I noticed this problem 3 days before the competition ended. I did not post it in the discussion as it might confuse the participants, but contacted the host immediately.<br> Adding this cumcount to the features only improves the CV a little bit, so it's probably some kind of artifact or something, but even if it doesn't improve the CV much, it's better to shuffle the test data since it's nonsense that the order of the rows makes sense.</p> <p>I did not end up using this leak for final submission.</p> <h3>Implementation Note</h3> <p>Building a complex data pipeline in Jupyter Notebook with the Time Series API can be a big pain. I'll share some of my efforts.</p> <ul> <li>Maintain the source code on GitHub and paste the BASE64-encoded code into the jupyter notebook<ul> <li>see: <a href="https://github.com/lopuhin/kaggle-imet-2019" target="_blank">https://github.com/lopuhin/kaggle-imet-2019</a></li></ul></li> <li>The inference notebook is also maintained on GitHub and automatically uploaded as the Kaggle Kernel through GitHub Actions</li> <li>Avoid the use of pandas and instead use a dictionary of numpy arrays to manage state updates</li> <li>Use the same feature generation function for training data and inference<ul> <li>Both training and test are treated as streaming data, and features were generated using for-loop.</li> <li>This is the most important point to get a stable and bug-free data pipeline</li> <li>see: <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196942" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196942</a></li></ul></li> <li>Debug code locally using the API emulator<ul> <li>Test the robustness of my inference pipeline by "dropout" some of the data returned by the emulator (a kind of "Chaos Engineering")</li></ul></li> <li>Catch exceptions in various functions and convert them to appropriate "default" values</li> </ul> <p>Thanks to all of this, I was able to finish the competition 1st stage without making a single submission error.</p>
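<p>A tiny sketch of the gap-to-model routing described at the top of the write-up: the test period is split into ranges, and each prediction date is served by the model trained with the matching gap so that no unavailable lag information is ever used. The boundary rule below is illustrative.</p> <pre><code>import bisect

GAPS = [0, 3, 7, 14, 21, 28, 35, 45]   # gaps (in days) the eight model variants were trained with

def pick_gap(days_without_targets):
    # choose the smallest trained gap that is still at least as large as the real gap,
    # so the selected model never relies on lag values that are unknown at inference time
    i = bisect.bisect_left(GAPS, days_without_targets)
    return GAPS[min(i, len(GAPS) - 1)]

# e.g. predicting 8/4 when targets are known through 7/31 leaves 3 unknown days (8/1-8/3),
# so pick_gap(3) == 3 and the gap-3 model is used for that date
</code></pre>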
MLSP 2013 Bird Classification Challenge
Segmentation system performance
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLSP 2013 Bird Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I forgot to start this thread around the end of the competition, but hopefully at least Forrest will read it...</p> <p>The segmentation system seems to be missing segments on a lot of calls. When I ran a crosstab on recordings that have segments and those that have labels, I got:</p> <ul> <li>140 recordings with segments and labels</li> <li>129 with neither</li> <li>14 with segments but no labels</li> <li>39 with labels but no segments</li> </ul> <p>Drilling down a bit, I get that there were 82 recordings with 1 label, and of those 31 had no segments (38%). Also, of the 50 recordings with 2 labels, 7 had no segments (14%). Note that 0.14 is about 0.38^2. It suggests that we're getting no segments for up to 38% of the calls, I think.</p> <p>Obviously there will be a tradeoff between false positives (segments when there is no call) and false negatives (no segments when there is a call), but I think the data above indicates that the segmentation system is mis-tuned or mis-configured. This segmentation system operates in a pipeline with a classifier-based scoring component downstream of it. The classifier should be able to deal with a reasonable amount of noise (false positives, segments when there is no call). However, if the segmentation system outputs a false negative, then the classifier gets a zero vector when there is a call to label, and it can't do much with that.</p> <p>I did read the paper about the segmentation system, and I think the problem is most likely not the choice of threshold (at least not directly). Instead, I think it may be found here:</p> <p style="padding-left: 30px">&quot;The smallest time-frequency regions identifiable as bird syllables had a duration of approximately 160ms and a frequency range of approximately 300hz. Any regions in the binary mask less than 90% of this size are discarded from the final segmentation.&quot;</p> <p>The consequence of this could be that some segments where a piece falls below the threshold, or where the segment gets bifurcated (rendered as two smaller segments), would be rejected outright instead of being retained as deformed segments. The classifier should be able to deal with misshaped or split segments if it has enough data. But if it gets a zero vector, then all it can do is fall back on the prior and the location. That may be why we all found that location was so important.</p>
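<p>A quick numeric check of the independence argument above, using only the counts quoted in the post:</p> <pre><code># If the segmenter misses each call independently with probability p, a
# two-call recording ends up with no segments with probability p**2.
p_miss_1label = 31 / 82   # 1-label recordings with no segments
p_miss_2label = 7 / 50    # 2-label recordings with no segments
print(p_miss_1label, p_miss_1label ** 2, p_miss_2label)
# ~0.378, ~0.143, 0.140 -&gt; consistent with a per-call miss rate near 38%
</code></pre>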
MLSP 2014 Schizophrenia Classification Challenge
1st position (solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLSP 2014 Schizophrenia Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,</p> <p>I've been away on vacation so making this post has been delayed a bit. I'm sharing the codes and&nbsp;<a href="http://becs.aalto.fi/~asolin/documents/pdf/Solin-kaggle-2014.pdf">a short documentation PDF</a> of the model that was used in the winning submission:</p> <p><a href="https://github.com/asolin/MLSP2014-kaggle-challenge">https://github.com/asolin/MLSP2014-kaggle-challenge</a></p> <p>The solution is based on Gaussian process classification. The model is actually very simple (really a 'Solution draft'), but it did show promising performance using LOOCV. The score on the public leaderboard was, however, only 0.70536, discouraging any further tuning of the model. In the end, it turned out to perform well on the private leaderboard.</p> <p>See the report and the code readme for more details.</p>
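<p>The report describes a Gaussian process classification model evaluated with LOOCV. A rough scikit-learn analogue (not the author's code; the data shapes and the kernel below are placeholders) looks like this:</p> <pre><code>import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
X, y = rng.randn(86, 410), rng.randint(0, 2, 86)   # stand-in for the FNC+SBM feature matrix

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=10.0) + WhiteKernel())
# Leave-one-out predicted probabilities, mirroring the LOOCV evaluation mentioned above
proba = cross_val_predict(gpc, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, proba))
</code></pre>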
MLSP 2014 Schizophrenia Classification Challenge
2nd position (solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLSP 2014 Schizophrenia Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Just uploaded more formal description of my solution. For details, please see:&nbsp;<strong>Lebedev_AV__MLSP2014_Sch_version_2.*.pdf</strong></p> <p>Hi Dear Colleagues,</p> <p>I just wanted to share my solution that&nbsp;led me to the 2nd place:</p> <p><a href="https://github.com/alex-lebedev/Kaggle-MLSP-2014">https://github.com/alex-lebedev/Kaggle-MLSP-2014</a></p> <p>I had some problems during submissions. I was first receiving errors, although my csv-files looked just fine. I tried different formats, one of which has finally worked.</p> <p>I implemented &quot;feature trimming&quot; in my solution, which consists of&nbsp;1) introducing a random vector into the feature set, 2) calculating feature importance, 3) removing the features with importance below the&nbsp;&quot;dummy feature&quot;.</p> <p>Yes, it's as simple as that...</p> <p>As I mentioned <a href="https://www.kaggle.com/c/mlsp-2014-mri/forums/t/9834/is-there-still-an-opportunity-to-submit-my-code">PREVIOUSLY</a>, I did not observe substantial progress and eventually gave up (after the third try). This is why I didn't&nbsp;even try more advanced techniques&nbsp;like boosting, and SCAD-SVM, Elastic Net, recursive feature elimination. Neither did I try building hierarchical models, which I had in mind, but did not implement for&nbsp;the same reasons.</p>
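<p>The "feature trimming" idea above translates naturally into a random-probe selection loop. The sketch below is not the author's R code; the choice of a random forest for importances and the repetition over several rounds are my own assumptions:</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def trim_features(X, y, n_rounds=10, seed=0):
    # 1) append a random "dummy" feature, 2) compute feature importances,
    # 3) keep only real features that beat the dummy (majority vote over rounds)
    rng = np.random.RandomState(seed)
    votes = pd.Series(0, index=X.columns)
    for _ in range(n_rounds):
        Xp = X.copy()
        Xp["__dummy__"] = rng.randn(len(Xp))
        rf = RandomForestClassifier(n_estimators=500, random_state=rng.randint(10**9))
        rf.fit(Xp, y)
        imp = pd.Series(rf.feature_importances_, index=Xp.columns)
        votes[imp.drop("__dummy__").gt(imp["__dummy__"])] += 1
    return votes[votes.gt(n_rounds // 2)].index.tolist()

# Example with random stand-in data
X = pd.DataFrame(np.random.randn(86, 50), columns=[f"f{i}" for i in range(50)])
y = np.random.randint(0, 2, 86)
print(len(trim_features(X, y)))
</code></pre>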
MLSP 2014 Schizophrenia Classification Challenge
3rd position (solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLSP 2014 Schizophrenia Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello Everyone,</p> <p>Sorry for the late posting. I was on holiday and could not access my main computer for a few weeks.</p> <p>So my 3rd place entry is quite simple. I used what I think is a lesser known method called Distance Weighted Discriminant (DWD). This is a method designed to improve over LDA and SVM in high-dimensional low sample size setting. You can find a few quick explanations on how it achieves that in the pdf and further in the references cited therein.</p> <p>DWD has one main parameter to tune: penalizing constant "C" (similar as in SVM case). I ran 10-fold cross validation with several values of C and found out that the performance (ROC area) saturated after around C=300.</p> <p>The submission was done running out-of-the-box DWD on the data with C=300 and no feature selection. That was my winning entry. Which is a bit of a downer since I used this competition to test the performance of my LDA variant and one of the methods I compare it to is DWD.</p> <p>Anyway I think it's a cool method and in my opinion too few people know about it. So hopefully it will reach larger audience and maybe someone will remember it if he/she will ever run into d&gt;&gt;n classification task in the future.</p> <p>Here is the GIThub link: <a href="https://github.com/KarolisKoncevicius/Kaggle-MLSP-Schizo-3rd">https://github.com/KarolisKoncevicius/Kaggle-MLSP-Schizo-3rd</a> (I moved all the steps into a single R-script file).</p> <p>The documentation is attached.</p>
MLSP 2014 Schizophrenia Classification Challenge
8th-9th Position Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: MLSP 2014 Schizophrenia Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone, Here's all the code I used to get 89.6% AUC.</p> <p>Though the actual code was quite short, I've also included exploratory analysis in this repository. Comments / forks welcome!</p> <p>Basically all I did was use a standard support vector machine classifier with default parameters in python's sci-kit learn. As you can see, I tried a bunch more classifier-combinations.</p> <p>A few notes:</p> <ul> <li><span style="line-height: 1.4">the SVC didn't do the best in terms of cross-validation scoring (naive bayes with bernoulli priors did better) on the test set.</span></li> <li>It also didn't appear to do as well on the mid-term learderboard.</li> <li>It did do substantially better, though, on the actual dataset.</li> </ul> <p><a href="https://github.com/gabegaster/kaggle_schizophrenia_2014">Check out this github repo here.</a></p> <p>All the best!</p> <p>Gabe</p>
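<p>For reference, the "default parameters" setup described above is only a few lines of scikit-learn; the data below is a random stand-in for the competition's feature matrix:</p> <pre><code>import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X, y = rng.randn(86, 410), rng.randint(0, 2, 86)   # placeholder features/labels

# Out-of-the-box SVC (RBF kernel, C=1.0); probability=True so predict_proba
# can be used for the AUC metric
clf = SVC(probability=True)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
</code></pre>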
Machinery Tube Pricing
Solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Machinery Tube Pricing <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you all for the reply. I've heard many times about leaks, but this is the most real one. I will try the data index as a feature from now on! (What's next, the file name?)</p> <p>Congratulations to everyone! We hope to see some solution sharing by top players. Thank you all in advance. My team only tuned the benchmark a little bit. We really don't have much experience with regression problems.</p>
Mapping Dark Matter
Piles of data for data lovers
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mapping Dark Matter <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The first one contains only the most discriminant features (with regard to linear fit).</p> <p>Hi all,</p> <p>I'd like to post all my predictors here. I combined several methods: some are inspired by my PhD thesis (soundtrack restoration), others are taken from my current research (signature verification and writer identification), and there are also some methods which were developed specifically for this problem. Each method comes with hundreds of predictors.</p> <p>So in total there are thousands of predictors. I only combined them using a linear fit, and I am sure that the predictive power of these predictors goes far beyond what I obtained. I contacted Eu Jin Lok 5 days before the end of the competition, and because of time issues we could not improve the results that much.</p> <p>For those of you who might be interested, you can download these predictors from the following links:</p> <p><a href="http://goo.gl/JbEBa">http://goo.gl/JbEBa</a></p> <p><a href="http://goo.gl/GkojD">http://goo.gl/GkojD</a></p> <p>If you come up with interesting ways of combining them, I'll be happy to hear from you.</p> <p>Thanks,</p> <p>Ali</p>
March Machine Learning Mania 2022 - Men’s
#1 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Men’s <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, I hope you earn a free one too😄</p> <p>Yes, that model made me think maybe it's not 100% gambling.</p> <p>Well, there is no new solution: I just used <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>'s 2018 women's solution<br> <a href="https://www.kaggle.com/competitions/womens-machine-learning-competition-2018/discussion/53597" target="_blank">https://www.kaggle.com/competitions/womens-machine-learning-competition-2018/discussion/53597</a><br> That's all :-)<br> <a href="https://postimg.cc/gwnDtLr0" target="_blank"><img src="https://i.postimg.cc/W47H3Mv0/6bhhxs.jpg" alt="6bhhxs.jpg"></a></p> <p>Just one, and it didn't work for the women's this year (log loss = 1.00673).</p>
March Machine Learning Mania 2022 - Men’s
#2 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Men’s <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Competition Name: March Machine Learning Mania 2022 – Men’s<br> Team Name: shin koshi<br> Private Leaderboard Score: 0.55415<br> Private Leaderboard Place: 2</p> <ol> <li><p>Background<br> ・An undergraduate student at the faculty of information and computer science, Chiba Institute of Technology.<br> ・This is my first time participating in Kaggle's MLMM competition.<br> ・I spent two weeks on this competition.</p></li> <li><p>Summary<br> My #2 place solution code is <a href="https://www.kaggle.com/code/shinkoshi/2nd-place-solution" target="_blank">here</a>.<br> I used LGBM as a model. The model was implemented using AutoLGBM. It takes two hours to train.</p></li> <li><p>Features<br> I used only some of the datasets distributed in this competition.<br> The following features were used.<br> ・MNCAATourneyDetailedResults.csv<br> ・MNCAATourneySeeds.csv<br> ・MMasseyOrdinals_thruDay128.csv</p></li> <li><p>Model<br> ・Training LGBM with AutoLGBM library.<br> ・I trained with 10-fold cross-validation.<br> ・The notebook was forked from <a href="https://www.kaggle.com/PIXYZ0130" target="_blank">@PIXYZ0130</a> <a href="https://www.kaggle.com/code/pixyz0130/eng-mmlm-men-autolgbm-baseline" target="_blank">notebook</a>, and most of the places were just copied. The grouping and other small details are slightly different.</p></li> </ol>
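<p>AutoLGBM automates LightGBM parameter tuning; as a rough stand-in (not the author's AutoLGBM call, and with random placeholder data instead of the three CSVs listed), plain LightGBM with the same 10-fold structure looks like this:</p> <pre><code>import numpy as np
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold

rng = np.random.RandomState(0)
X, y = rng.randn(2000, 20), rng.randint(0, 2, 2000)   # stand-in for seed/ordinal features

oof = np.zeros(len(y))
for tr, va in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    model = lgb.LGBMClassifier(n_estimators=1000, learning_rate=0.03)
    model.fit(X[tr], y[tr], eval_set=[(X[va], y[va])],
              callbacks=[lgb.early_stopping(50, verbose=False)])
    oof[va] = model.predict_proba(X[va])[:, 1]   # out-of-fold probabilities for CV scoring
</code></pre>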
March Machine Learning Mania 2022 - Men’s
#3 Solution Explained
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Men’s <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>NCAAM Tournament 2022<br> Introduction, Patrick Enwright (penwright)<br> • Previously worked in Canada’s oil and gas sector; transitioned to aviation sector with a focus on commercial – network planning, pricing, scheduling. <br> • Currently an executive for a publicly traded Bitcoin mining company, and continue to consult for the aviation sector. <br> • Bachelor of Management, University of Lethbridge; MBA, University of Calgary<br> • Husband, father and professional shuttle driver of five boys that play hockey, baseball, volleyball, golf. I’ve been hooked on March Madness since God Shammgod and Providence’s Elite 8 run in 1997.<br> Preface<br> I want to mention a few things before I jump into my model summary.<br> • Thank you to everyone that is involved in organizing this event, preparing the datasets, and participating in the discussion forum.<br> • Thank you to all participants and the continued growth of the Kaggle competition format. I believe I participated in the second March Machine Madness Kaggle competition 7-8 years ago and I really enjoy reading the tournament recaps and the solutions provided by the top performers.</p> <p>And with that… </p> <p>I would like to thank last year’s “Team Beat Navy”, which included Dusty Turner and Jim Pleuss. Dusty and Jim provided a solution recap and I incorporated their metrics into my model. </p> <p>Summary<br> I built my model using Excel, and applying a regression analysis of pre-tournament metrics against the historical results using the ESPN points scheme. for the previous three (3) tournaments (2021, 2019, 2018). After compiling the metrics and standardizing, I then determined the weighting of each metric through the coefficients from the regression and arrived at my power rating for each team. </p> <p>Key factors in my model<br> • Offensive efficiency, defensive efficiency, percentage of points from 3s, possessions per game, assist to turnover ratio<br> • Winning percentage<br> • Free throw rate<br> • Average ranking across all Massey Ordinals<br> • COL and AP Ranking from Massey Ordinals<br> • Quad 1 wins and Quad 4 losses<br> • Tournament Seed of Team<br> • Schedule Ranking</p> <p>The following average statistics from the last 3 games of the season<br> • Offensive efficiency<br> • Defensive efficiency<br> • Possessions per game</p> <p>External Data<br> I used data from the link below for team efficiency, percentage of points from 3s, possessions per game, and assist to turnover ratio, specific to the last 3 games prior to the March Madness tournament. <br> • <a href="https://www.teamrankings.com/ncb/rankings/" target="_blank">https://www.teamrankings.com/ncb/rankings/</a></p> <p>Hi ManOverMachines, <br> Yes, the last three games included the conference tournaments/end of regular season. </p> <p>As for the visual on how my bracket played out, I had Kansas, Duke, Purdue and Tennessee as my final four teams. I would add my log loss visual bracket, but I can’t copy/paste the screenshot with the restrictions on the forum submittal. </p>
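<p>The workflow above (standardize the metrics, regress them on ESPN-scheme points, use the coefficients as weights) can be sketched outside Excel in a few lines; the arrays below are random placeholders, not the real team metrics:</p> <pre><code>import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
metrics = rng.randn(68 * 3, 12)            # teams x pre-tournament metrics (3 past years)
espn_points = rng.poisson(40, 68 * 3)      # ESPN-scheme tournament points actually earned

Z = StandardScaler().fit_transform(metrics)
reg = LinearRegression().fit(Z, espn_points)
power_rating = Z @ reg.coef_               # weighted sum of standardized metrics
</code></pre>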
March Machine Learning Mania 2022 - Men’s
#5 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Men’s <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I'd like to thank the competition hosts, the Kaggle team ( <a href="https://www.kaggle.com/addisonhoward" target="_blank">@addisonhoward</a> ) and <a href="https://www.kaggle.com/jeffsonas" target="_blank">@jeffsonas</a>, for the organization of this competition.<br> I would also like to congratulate <a href="https://www.kaggle.com/amirghazi" target="_blank">@amirghazi</a> for winning and encourage those that lost top spots or medals by one or two places or games.</p> <p>About me:<br> It's my first time participating in this competition &amp; I must say it's been really fun. It's also my first time building basketball prediction models. I have had some experience in the past building models for European football home/away team total <a href="https://www.kaggle.com/code/tomy4reel/jarvis" target="_blank">full-time</a>/<a href="https://www.kaggle.com/code/tomy4reel/jarvis-ht" target="_blank">half-time</a> goals prediction using linear regression/other regression models &amp; a variety of data/meta-data across seasons (it's usually quite easy to overfit &amp; simpler models tend to perform better over time/seasons).<br> Anyway, I took <a href="https://www.kaggle.com/zachmayer" target="_blank">@zachmayer</a>'s <a href="https://www.datacamp.com/courses/advanced-deep-learning-with-keras-in-python" target="_blank">course on DataCamp</a>, where I got the basic idea and motivation for this competition.</p> <p>Public Solution:<br> My #5 place solution Python code can be found <a href="https://www.kaggle.com/code/tomy4reel/5th-place-solution-men-s-competition" target="_blank">here</a>.</p> <p>Datasets and Features:<br> I used only some of those provided by the competition hosts.<br> I used the current seeds, historical win stats, and the number of matches played in previous seasons, which can indicate how far teams went.</p> <p>Model selection:<br> CatBoostClassifier with a logloss objective and a few categorical features.</p> <p>Cross validation:<br> Was done with the last 2 available seasons only; I didn't make any submission on the initial leaderboards.</p> <p>What I didn't try: </p> <ol> <li>I didn't clip or modify my predicted probabilities</li> <li>I didn't use any of <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>'s models or datasets (because I didn't see them in time)</li> </ol> <p>Model performance:<br> I didn't aim to win gold. I only tried to build a good model &amp; mostly judged my performance initially by comparing my leaderboard positions with those of <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> and <a href="https://www.kaggle.com/zachmayer" target="_blank">@zachmayer</a>; later on with <a href="https://www.kaggle.com/koki25ando" target="_blank">@koki25ando</a>, <a href="https://www.kaggle.com/fritzcremer" target="_blank">@fritzcremer</a>, <a href="https://www.kaggle.com/dynamic24" target="_blank">@dynamic24</a>.</p> <p>Limitations:</p> <ol> <li>Time: I started working on the competition late</li> <li>Knowledge: I had little understanding of some datasets and features, but the data descriptions provided by the hosts were very helpful. Keep it up.👍</li> </ol>
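<p>A minimal CatBoost sketch matching the description (Logloss objective, a couple of categorical features); the frame below is a random stand-in and the column names are illustrative only:</p> <pre><code>import numpy as np
import pandas as pd
from catboost import CatBoostClassifier

rng = np.random.RandomState(0)
X = pd.DataFrame({"Seed1": rng.randint(1, 17, 500).astype(str),
                  "Seed2": rng.randint(1, 17, 500).astype(str),
                  "WinPctDiff": rng.randn(500)})
y = rng.randint(0, 2, 500)

# Logloss objective with the seed columns declared as categorical features
model = CatBoostClassifier(loss_function="Logloss", iterations=300, verbose=0)
model.fit(X, y, cat_features=["Seed1", "Seed2"])
pred = model.predict_proba(X)[:, 1]
</code></pre>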
March Machine Learning Mania 2022 - Men’s
17th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Men’s <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to share my solution as I believe it uses a somewhat unconventional strategy. This year is the first time I participate in this competition and I have to say, it was really exciting to see the games play out. Sadly for me, I didn't make Gold or top 5, mainly because of the St. Peters upset (predicted 92%), and of the comeback of Kansas in the final (predicted only 55% for Kansas). I did not look at any discussions in the submission phase, as I wanted to make a unique model and not get biased by other ideas. I came up with the idea of using different Elo systems, and feeding the Elo values into a small fully connected neural network. The Elo systems were calculated with the k values [10, 15, 20, 30, 50] (for details on the k-values: <a href="https://en.wikipedia.org/wiki/Elo_rating_system)" target="_blank">https://en.wikipedia.org/wiki/Elo_rating_system)</a>, like this performance on different timescales can be measured. To account for close match results, which a plain Elo system won't capture, I also calculated Elo values using the final points divided by the total points as a game outcome. Lastly, I passed a value in indicating whether it is a tournament game or not, passed the day of the season in and concatenated the input with the output of an <strong>LSTM</strong>, that takes in the last 10 Elo changes from the last 30 days. And that's basically it. My other submission also used the logarithmic mean of the official rankings, but this solution had a worse score of 0.59573 (compared to my 17th with 0.58903). I used the same as my 17th place solution in the women's competition and placed 95th there. My model generally profited from upsets compared to the median expert submission.<br> To conclude:</p> <p><strong>What I have used:</strong></p> <ul> <li>Elo systems</li> <li>LSTM</li> <li>Small neural net</li> </ul> <p><strong>What can be improved:</strong><br> I didn't think of calculating in what round a match up happens. I believe using this would have favored underdogs even more in later rounds.</p> <p>This is the notebook:<br> <a href="https://www.kaggle.com/code/fritzcremer/17th-place-solution/notebook" target="_blank">https://www.kaggle.com/code/fritzcremer/17th-place-solution/notebook</a></p> <p>The code isn't cleaned up well and not commented, as I have written it in two sessions and was a bit short on time, but I might add some comments here and there in later versions. Lastly I want to thank the host for this fun competition!</p> <p>Hmm that's interesting. As I use Elo, I could just simulate the Elo changes for getting into a certain round in order to prevent overfitting. But something needs to be done to not underestimate an Underdog that has gotten into the finals.</p>
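<p>For readers unfamiliar with the Elo building blocks mentioned above, here is the core update in a few lines. This is a generic sketch: the k values and the points-ratio variant follow the description, the rest is standard Elo.</p> <pre><code>def expected_score(r_a, r_b):
    # Standard Elo win expectancy of team A against team B
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_elo(r_a, r_b, outcome_a, k):
    # outcome_a = 1/0 for the plain win/loss system, or
    # points_A / (points_A + points_B) for the "close result" variant above
    delta = k * (outcome_a - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# Several systems in parallel, one per k, to capture different timescales
ratings = {k: {} for k in (10, 15, 20, 30, 50)}
for k in ratings:
    ratings[k].setdefault("TeamA", 1500.0)
    ratings[k].setdefault("TeamB", 1500.0)
    ratings[k]["TeamA"], ratings[k]["TeamB"] = update_elo(
        ratings[k]["TeamA"], ratings[k]["TeamB"], outcome_a=78 / (78 + 70), k=k)
</code></pre>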
March Machine Learning Mania 2022 - Men’s
1st Place Magic Explained
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Men’s <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/manovermachines" target="_blank">@manovermachines</a> Good spot about Murray State, I noticed in the comments for the <a href="https://www.kaggle.com/code/axotcc/6th-place-solution-team-embedding/comments#1746813" target="_blank">6th place notebook</a> that model seems very <em>light red</em> for Murray State vs St Peters - suggesting it agreed MS were overseeded…</p> <p>A quick calculation of the <em>St Peters effect</em> is, if we assume many gave them a ~5% chance in each match: -ln(0.05) = ~3, and each match is 1/63 of the total score; so the three St Peter's wins <em>alone</em> would add (3*3)/63 = 0.143 logloss; with top scores at ~0.57 the three matches account for (0.143/0.57) ~25% of the entire loss! Put another way, 0.143 is the gap between 1 → 500 of the leaderboard! No-one would get close to perfect so 0.143 is an upper limit but the effect is very non-linear.</p> <p><a href="https://www.kaggle.com/teeyee314" target="_blank">@teeyee314</a> I've run out of time to try making that edit, but raddar <a href="https://www.kaggle.com/competitions/mens-machine-learning-competition-2019/discussion/84664#493584" target="_blank">reported CV scores by year in 2019 here</a>; I remember 2017 being one of the most <em>straightforward</em> years, few upsets…</p> <p><a href="https://www.kaggle.com/kdnune" target="_blank">@kdnune</a> suggested using 2019 data next year, which made me curious… since the code is already there and easy to change I tried <a href="https://www.kaggle.com/code/jtrotman/winning-submission-improved-to-0-498" target="_blank">a simple loop over sub$Season</a>, and…</p> <pre><code>sub$Season Score LB (not including Amir) 2000 0.66297 522 2001 0.64062 345 2002 0.65195 428 2003 0.74616 771 2004 0.70173 694 2005 0.63439 291 2006 0.65904 497 2007 0.58445 10 2008 0.56261 3 2009 0.61380 130 2010 0.68728 635 2011 0.68832 642 2012 0.68068 613 2013 0.60923 85 2014 0.62251 202 2015 0.65589 449 2016 0.58181 8 2017 0.49811 1 2018 0.51618 1 2019 0.64801 410 2020 0.66294 521 2021 0.72200 740 2022 0.66197 517 </code></pre> <p>Could it be that St Peters run of three wins has a dominant effect on the scores? This reminds me of a paper on <a href="https://www.econstor.eu/bitstream/10419/195190/1/1662796994.pdf" target="_blank">Measuring competitive balance in Formula One Racing</a> - these submissions are far from random - the team strengths do not shake-up that much year to year. 
Simply being more optimistic about St Peters is enough contrariness to win!?</p> <p>The submission files are all in the <a href="https://www.kaggle.com/code/jtrotman/winning-submission-improved-to-0-498" target="_blank">notebook</a> if anyone wants to investigate further…</p> <p><a href="https://www.kaggle.com/jeffsonas" target="_blank">@jeffsonas</a> the features list is:</p> <p><code>X1_WinRatio14d, X1_PointsMean, X1_PointsMedian, X1_PointsDiffMean, X1_FgaMean, X1_FgaMedian, X1_FgaMin, X1_FgaMax, X1_AstMean, X1_BlkMean, X1_OppFgaMean, X1_OppFgaMin, X2_WinRatio14d, X2_PointsMean, X2_PointsMedian, X2_PointsDiffMean, X2_FgaMean, X2_FgaMedian, X2_FgaMin, X2_FgaMax, X2_AstMean, X2_BlkMean, X2_OppFgaMean, X2_OppFgaMin, Seed1, Seed2, SeedDiff, quality_march_T1, quality_march_T2</code></p> <p>X1=Team1, X2=Team2 - all are generated from the regular season results. The XGBoost models have those input columns, the rows they are trained on are tournament results 2003..2021 (predicting score diff). So it reads team IDs from the 2022 sample submission, merges in the above features from the 2018 season, and predicts 2022!</p> <p><a href="https://www.kaggle.com/jeffsonas" target="_blank">@jeffsonas</a> Interesting! Five38 <a href="https://fivethirtyeight.com/features/the-stats-led-our-brackets-astray-this-march-madness-that-doesnt-happen-often/" target="_blank">agree this was an odd year</a>: "<em>this was a year to forget for all of us NCAA number-crunchers</em>", and finished with "<em>It’s not supposed to be predictable; the madness is the point</em>" :-)</p> <p>It's also the worst year for ranking below the sample submission (all predictions 0.5):</p> <pre><code>Year Teams ScoreUnder(0.693) 2014 248 44 17.7% 2015 340 26 7.6% 2016 596 137 23.0% 2017 441 39 8.8% 2018 933 206 22.1% 2019 862 125 14.5% 2021 707 132 18.7% 2022 930 269 28.9% </code></pre> <p>The last five features do use seeds from MNCAATourneySeeds.csv but they are grouped on the Season, then merged with <code>sub$Season</code> as the key, so in fact they are seeds from past years too! If the team was not in the tournament in the past year, the Seed1 or Seed2 (or corresponding quality_march_TX) become NA (missing value). If either Seed1 or Seed2 are NA, SeedDiff is NA. When XGBoost sees an NA in prediction it will choose a left or right branch but since it was trained on rows without NA values it's hard to sum up what effect this has… though the heatmap shows it faithfully :)</p> <p>Thank you <a href="https://www.kaggle.com/littlesaplings" target="_blank">@littlesaplings</a> - if people want to dive further into the past - Kaggle used to release all the submission files before the tournament started and I made <a href="https://www.kaggle.com/competitions/march-machine-learning-mania-2017/discussion/30333" target="_blank">heatmaps of them all in 2017</a>, there is an impressive diversity of approaches taken :) I also made collections of <a href="https://www.kaggle.com/jtrotman/beautiful-mania-2015" target="_blank">2015</a>, <a href="https://www.kaggle.com/jtrotman/beautiful-mania-2016" target="_blank">2016</a>, <a href="https://www.kaggle.com/jtrotman/beautiful-mania-2017" target="_blank">2017</a>, but this is the first time I've actually used them to illustrate an interesting story.</p> <p><a href="https://www.kaggle.com/kdnune" target="_blank">@kdnune</a> And 2007! 
(Three years "B.K.", Before <a href="https://en.wikipedia.org/wiki/Kaggle" target="_blank">Kaggle</a>…)</p> <p>Thanks very much to Amir Ghazi to for <a href="https://www.kaggle.com/competitions/mens-march-mania-2022/discussion/317145" target="_blank">sharing his approach</a> (which is to re-run <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>'s 2018 <a href="https://www.kaggle.com/competitions/womens-machine-learning-competition-2018/discussion/53597" target="_blank">womens submission</a>! <em>Edit</em>: <a href="https://github.com/fakyras/ncaa_women_2018/blob/master/win_ncaa.R" target="_blank">Github code here</a>)</p> <p>I noticed <a href="https://www.kaggle.com/competitions/mens-march-mania-2022/discussion/317143" target="_blank">on his bracket</a> that Baylor's first match is a real outlier, for a top seed vs 16 seed it should surely be a darker green? His winning score is 0.51577 but when I run the code I get 0.66334, and a bracket like this:</p> <p><img src="https://i.postimg.cc/h40R1cLN/raddar-rerun-season-2022-bracket-66334.png" alt=""></p> <p>Here is a heatmap view:</p> <p><img src="https://i.postimg.cc/T1TMv46k/raddar-rerun-season-2022.png" alt=""></p> <p>This looks similar to raddar's 2018 (Women's) winning solution:</p> <p><img src="https://i.postimg.cc/26QRJfPb/ncaaw-winning-2018.png" alt=""></p> <p>So, where did I go wrong? There is a line near the end to select season statistics to be merged before scoring the xgboost models, I used:</p> <pre><code>sub$Season = 2022 </code></pre> <p>If we re-use the 2018 code <strong><em>exactly</em></strong> as it was:</p> <pre><code>sub$Season = 2018 </code></pre> <p>We merge in 2018 season statistics, to get 0.51577:</p> <p><img src="https://i.postimg.cc/hjWRpG3R/raddar-rerun-season-2018.png" alt=""></p> <p>That explains why Amir's submission does not favour Baylor at all, but does rate teams like both Michigan's, who were better in 2018 (simple five38 baseline for 2018):</p> <p><img src="https://i.postimg.cc/kXX3jzwG/five38-mens-2018.png" alt=""></p> <p>So… Continuing the meme theme, for anyone reading in 2023 :)</p> <p><img src="https://i.postimg.cc/hvMFD14K/win-mm-2022.png" alt=""></p> <p>(Or is it <a href="https://www.kaggle.com/competitions/mens-march-mania-2022/discussion/317316#1748185" target="_blank">Year-5</a>?)</p> <p>There is a rich history of <em>lucky accidents</em> in winning submissions for March Madness competitions, <a href="https://www.kaggle.com/competitions/womens-machine-learning-competition-2018/discussion/53597" target="_blank">raddar himself admitted</a> using slightly old data for 2018, using the "correct" data would have put him 10th.</p> <p>Similarly <a href="https://www.kaggle.com/competitions/march-machine-learning-mania-2017/discussion/28375#170657" target="_blank">Andrew Landgraf in 2017</a>, and I remember a user call <em>Net Prophet</em> who released a strong submission (2016?) for others to use, but added noise to it - the noisy version beat his "clean" version :-D</p> <p>Indeed, <a href="https://github.com/fakyras/ncaa_women_2018/blob/master/win_ncaa.R" target="_blank">here is raddar's code</a>, the only data from 2022 is the team IDs to predict for…</p> <p>But on the plus side, Amir's $/hour is possibly an all time record :)</p> <p>Thanks! 
It's these little data-detective stories I enjoy the most - seeing Baylor vs Norfolk St in round 1 as 52% was deeply mysterious :)</p> <p>Here's an animation of the brackets, one frame is <code>sub$Season = 2022</code> and the other is <code>sub$Season = 2018</code></p> <p><img src="https://i.postimg.cc/k4Sxgv00/mm-2022-win.gif" alt=""></p> <p>(1) Yes, I go for red/blue, and often wonder about the other direction: <a href="https://en.wikipedia.org/wiki/Tetrachromacy" target="_blank">tetrachromacy</a> - could some people be getting more from these plots than others?!<br> (2) Kaggle clips to 1e-15, so worst case score is 34.538776394910684 (which is mighty impressive as it means getting 63 matches perfectly wrong! And, hey, the fractional part starts <a href="https://projects.fivethirtyeight.com/2022-march-madness-predictions/" target="_blank">538</a>!)<br> (3) Good spot, I hadn't looked in detail at the other matches, I'll try matching screenshots and a gif that flips between (<a href="https://www.kaggle.com/competitions/mens-march-mania-2022/discussion/317316#1747463" target="_blank">see below</a>).</p> <p>Hmmm, or a new hyper-parameter is called for: YearOffset, or perhaps a weighted blend of previous years stats :)</p> <p>There are probably other teams who ran exactly the same code with <code>sub$Season = 2018</code> but averaged it with other methods, weakening the potent magic :-D</p> <p>Either way, raddar's code has won <strong>three</strong> (↓ thanks Amir!) competitions, the latter one <a href="https://youtu.be/l4ejXyzawaE" target="_blank">a bit like this</a>!</p> <p><a href="https://www.kaggle.com/jbomitchell" target="_blank">@jbomitchell</a> They're all 2022 scores &amp; leaderboard positions - it's just that a 2022 submission is made feeding in the regular season statistics &amp; seed for each team in those years… It would be possible to try even earlier years since we have data back to 1985 (the pre Web era!) <em>Great Scott</em> :-)</p> <p>Someone asked (then deleted):</p> <blockquote> <p>how to make a heatmap of my submission predictions, as shown in the post?</p> </blockquote> <p>It's a good question! The latest public copy is in this <a href="https://www.kaggle.com/code/jtrotman/blend-ncaam-with-2020-vision" target="_blank">2020 notebook</a>; it's essentially just a diagonally symmetric 2D grid of the submission data, sorted by seed on each axis, the matplotlib API it uses is <code>pcolormesh</code>. I'll try to share a more re-usable utility-script version of it next year.</p> <p><a href="https://www.kaggle.com/thomashopkins32" target="_blank">@thomashopkins32</a> congrats on the solo-silver and front page leaderboard finish!</p> <p>That's an interesting idea, and seems from your notebook it may have made the predictions a bit more conservative? I did a similar thing in the 2017 competition, using a neural net with team embeddings, trained on regular season results (a bit like this year's <a href="https://www.kaggle.com/code/axotcc/6th-place-solution-team-embedding" target="_blank">6th place notebook</a>, but simpler), and added previous seasons with a lower sample_weight, using (1-Age^N), where N=2 looked best on past data:</p> <pre><code> sample_weight 2017 1/1^2 1 2016 1/2^2 0.25 2015 1/3^2 0.111111111 2014 1/4^2 0.0625 ... 
2010 1/8^2 0.015625 </code></pre> <p><a href="https://fivethirtyeight.com/features/how-our-march-madness-predictions-work-2/" target="_blank">538 also mix in some preseason rankings</a>, they say:</p> <blockquote> It’s not a typo, by the way, to say that we look at preseason rankings. The reason is that a 30- to 35-game regular season isn’t all that large a sample. Preseason rankings provide some estimate of each team’s underlying player and coaching talent. </blockquote> <p>So, my "weighted blend" comment's meta-weights were 25% joke, 75% serious ;-)</p> <p><a href="https://www.kaggle.com/satoshidatamoto" target="_blank">@satoshidatamoto</a> There's a copy of the code in question <a href="https://www.kaggle.com/code/jtrotman/winning-submission-improved-to-0-498" target="_blank">here</a> you can fork that and try it out (takes only ~10 minutes to run). By <strong>scoring the xgboost models</strong> I mean calling <code>predict</code> on the xgboost models, which is the <code>preds = predict(submission_model[[i]], dtest)</code> line, and just before that the lines with <code>left_join</code> are adding some selected season statistics to the Z dataframe.</p>
March Machine Learning Mania 2022 - Men’s
54th place solution aka close but no silver cigar
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Men’s <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Nice catch! Just a good risk/reward ratio I found when testing on previous competitions' data, so I left it with those settings. As for the extra tricks and fixes: I thought that since the clipping and old top solutions are known by many, maybe that strange clipping stands out if it's different down to the last digits ;)</p> <p>Almost the same as my WMM solution <a href="https://www.kaggle.com/competitions/womens-march-mania-2022/discussion/316863" target="_blank">https://www.kaggle.com/competitions/womens-march-mania-2022/discussion/316863</a> but with a somewhat different approach.</p> <p>Different models in the ensemble:<br> <a href="https://www.kaggle.com/jonbown/creating-sequential-data-for-random-forest-in-r" target="_blank">https://www.kaggle.com/jonbown/creating-sequential-data-for-random-forest-in-r</a><br> <a href="https://www.kaggle.com/imoore/2019m-1st-solution-with-parameter-optimization" target="_blank">https://www.kaggle.com/imoore/2019m-1st-solution-with-parameter-optimization</a><br> And one stacking regressor of my own with 10 regressor models: AdaBoost, XGB, LGBM, CatBoost, RFR, LR, RidgeCV, GBR, HGBR and LSVR.</p> <p>In contrast to WMM, the "weighted moving target ensemble" described in the post above worked better than a mean ensemble; a strategy that needs some luck, as many other tricks and fixes do, I guess.</p> <p>Final clipping:</p> <pre><code>finalsubmission['Pred'][finalsubmission['Pred'] &lt;= 0.2] = 0.05
finalsubmission['Pred'][finalsubmission['Pred'] &gt;= 0.9] = 0.95
</code></pre>
March Machine Learning Mania 2022 - Men’s
6th place solution, team embedding
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Men’s <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone, here is my work: <a href="https://www.kaggle.com/code/axotcc/6th-place-solution-team-embedding?scriptVersionId=92171964" target="_blank">Notebook</a></p> <p>Congrats to me on getting my first gold medal; it was so close to the prize though🥺. However, my method only got 472nd/659 in <a href="https://www.kaggle.com/competitions/womens-march-mania-2022" target="_blank">March Machine Learning Mania 2022 - Women's</a>.</p> <p>To my knowledge, my method is different from previous works, most of which do feature engineering to add and aggregate features as inputs and targets for common machine learning models. In my work, I train an embedding for each given team_id and have the model predict the competition result from the two team embeddings.</p> <p>The following describes my method, copied from my notebook.</p> <h3>Model</h3> <p><img src="https://i.imgur.com/pY1b0mu.png" alt="drawing"></p> <ul> <li>The goal of the model:<ul> <li>Given 2 team_ids, predict the competition result</li></ul></li> <li>Input:<ul> <li><strong>team_emb</strong>: trained embedding representing the given team_id, initialized with torch.nn.Embedding().</li></ul></li> <li>Output:<ul> <li><strong>team1_win</strong>: whether team1 wins the game</li> <li><strong>team_info</strong>: abbreviation of the detailed competition result, containing ['Score','FGM','FGA','FGM3','FGA3','FTM','FTA','OR','DR','Ast','TO','Stl','Blk','PF']</li></ul></li> </ul> <h3>Data</h3> <ul> <li><strong>Training data</strong>: <ul> <li>RegularSeasonDetailedResults.csv</li></ul></li> <li><strong>Validation data</strong>: <ul> <li>NCAATourneyDetailedResults.csv</li></ul></li> </ul> <h3>Preprocess</h3> <ol> <li>Duplicate the data and make it symmetrical to get rid of the winner/loser distinction.</li> <li><strong>Add feature</strong> "<strong>is_win</strong>", which represents whether team1 wins the game.</li> <li>Apply a <strong>quantile transformation</strong> to transform the <strong>team_info</strong> to a normal distribution.</li> </ol> <h3>Training</h3> <ol> <li>Train for one epoch along the timeline</li> <li>A minibatch is the data that has the same 'Season' and 'DayNum'.</li> <li>The team embeddings are updated every day.</li> </ol> <h3>Future work</h3> <ul> <li>Apply a GNN to let a single game result between two teams affect other teams and enhance the model's ability to exploit the relationships between the teams.</li> </ul>
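<p>Below is a compact PyTorch sketch of the architecture in the diagram above; the hidden sizes and embedding dimension are illustrative, not the notebook's exact values:</p> <pre><code>import torch
import torch.nn as nn

class TeamEmbeddingNet(nn.Module):
    def __init__(self, n_teams, emb_dim=32, n_info=14):
        super().__init__()
        self.emb = nn.Embedding(n_teams, emb_dim)              # one learned vector per team_id
        self.body = nn.Sequential(nn.Linear(2 * emb_dim, 128), nn.ReLU(),
                                  nn.Linear(128, 64), nn.ReLU())
        self.win_head = nn.Linear(64, 1)        # team1_win logit
        self.info_head = nn.Linear(64, n_info)  # Score, FGM, FGA, ... (quantile-transformed)

    def forward(self, team1_id, team2_id):
        h = self.body(torch.cat([self.emb(team1_id), self.emb(team2_id)], dim=-1))
        return self.win_head(h).squeeze(-1), self.info_head(h)

model = TeamEmbeddingNet(n_teams=400)
win_logit, info = model(torch.tensor([0, 1]), torch.tensor([2, 3]))
</code></pre>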
March Machine Learning Mania 2022 - Women's
1st place solution/approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I'd like to thank kaggle admins for hosting this year's competition. This is my second time participation into NCAA Basketball competitions hosted on kaggle. As well as last year's competition, it was quite enjoyable to watch games and check how my models work every day. I'd like to thank kaggle admins for organizing this competition.</p> <p>In this repository/forum, I'd like to share my approach to this year's competition. However this type of competition is heavily influenced by luck, so my approach would not work next year by high chance. But, I hope some of you find my approach useful.</p> <p>Basically, I submitted two types of models. One of them is "passive" model, the other is "aggressive/gambling" model. In short, my aggressive/gambling model ended up being the most accurate model throughout the competition (final leaderboard score was 0.35438). Although my passive model didn't work as well as the other one, still brought me better result than I expected. (In fact the final leaderboard score was 0.43574, bronze medal.)</p> <p>In "passive" model, I submitted predictions calculated using only LightGBM without any post-processing. In "aggressive/gambling" model, I did override predictions of three teams.</p> <p>I shared my R-scripts and Python notebook in <a href="https://github.com/koki25ando/March-Machine-Learning-Mania-2022---Women-s" target="_blank">my GitHub repository</a>.<br> Please let me know if you have any questions.</p> <h1>Data Preparation</h1> <p>I wrote R scripts to set up datasets. <br> I did no tricky things at all, I setup very basic features such as Seed info, teamid, team information…<br> For further complete information, please read my <a href="https://github.com/koki25ando/March-Machine-Learning-Mania-2022---Women-s/tree/master/R_code" target="_blank">R codes</a> which exist in 'R' folder in this repo.</p> <h1>Feature Engineering</h1> <p>To improve my cross validation score, I did very basic categorical feature engineering, such as count encoding, label encoding, target encoding…<br> See <a href="https://github.com/koki25ando/March-Machine-Learning-Mania-2022---Women-s/blob/master/py/LightGBM_stage2.ipynb" target="_blank">this code</a> to check what's being done. </p> <h1>Modeling</h1> <p>I used only one model, LightGBM. That's it. I didn't even stacked any other models in the end.<br> I tried to make it as simple as possible. So I decided to use only LightGBM.<br> See <a href="https://github.com/koki25ando/March-Machine-Learning-Mania-2022---Women-s/blob/master/py/LightGBM_stage2.ipynb" target="_blank">this notebook</a> for more information.</p> <h1>Post-Processing</h1> <p>Here comes aggressive/gambling part. 
Basically, I overrode the predictions for 3 teams, as many other participants in this competition do.<br> I've read some articles and chose three teams: "Stanford", "UConn", "South Carolina".</p> <p>・<a href="https://dknation.draftkings.com/2022/3/14/22976763/ncaa-womens-basketball-tournament-odds-2022-march-madness-national-championship-uconn-memphis" target="_blank">Full list of odds to win the 2022 NCAA Women’s Basketball Tournament</a></p> <p>In the end, all three of them survived until the FINAL FOUR. I guess I was just this year's lucky boy.</p>
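<p>The override itself is a tiny piece of post-processing; a sketch of the idea is below. The team IDs and the 0.97 override value are placeholders, not the exact values used:</p> <pre><code>import pandas as pd

favourites = {3163, 3376, 3390}   # hypothetical WTeamIDs for the three picked teams
sub = pd.DataFrame({"ID": ["2022_3163_3234", "2022_3234_3390"], "Pred": [0.55, 0.48]})

def override(row, p=0.97):
    t1, t2 = (int(x) for x in row["ID"].split("_")[1:])   # ID format: Season_Team1_Team2
    if t1 in favourites and t2 not in favourites:
        return p
    if t2 in favourites and t1 not in favourites:
        return 1 - p
    return row["Pred"]

sub["Pred"] = sub.apply(override, axis=1)
</code></pre>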
March Machine Learning Mania 2022 - Women's
2nd Place Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all, including the South Carolina Gamecocks! Thanks to Kaggle for running the contests again and for prompt leaderboard updates! Thanks also to Mark McClure for surely the best user-generated submission scoring UI ever! Providing this tool and updating the scores on time is certainly worth some kind of <em>community-leader</em> or <em>forum hero</em> prize 👏.</p> <p>My submissions this year are exactly the same as my 2021 code (10th Mens, 12th Womens).</p> <h3>Base Model</h3> <p>For the Women's tournament it is an approximation of the <a href="https://fivethirtyeight.com/features/how-our-march-madness-predictions-work-2/" target="_blank">Five38 match probabilities</a>:</p> <pre><code>sub["Pred"] = 1. / (1 + 10**(rdiff * (30.464 / 400))) </code></pre> <p><code>rdiff</code> is the difference in Five38 team ratings; and using their more precise predictions for round 1:</p> <pre><code>idx = sub.Round == 1 sub.loc[idx, "Pred"] = sub.loc[idx, "LTeamID"].map(five38.rd2_win) </code></pre> <p>That would score 0.42217 ~ 31st place. It tends to be too confident for later matches, which might be important when you see what happens to these predictions next…</p> <h3>Overrides</h3> <p>There are a couple of <em>safe</em> ways to split two submissions:</p> <ul> <li>split 0/1 for all possible final games, but then what if if the final is unbalanced? e.g. 
2021.</li> <li>split the first round match closest to 0.5 → 1 in one sub and 0.5 → 0 in the other</li> </ul> <p>The latter guarantees -ln(0.5)/63 = 0.011 saving of log loss.</p> <p>Instead I used the same <a href="https://www.kaggle.com/competitions/mens-machine-learning-competition-2019/discussion/90254" target="_blank">Madtown</a> technique as <a href="https://www.kaggle.com/competitions/ncaam-march-mania-2021/discussion/231068" target="_blank">last year</a>, which I now realise is a generalisation of the second option:</p> <ul> <li><code>preds[(abs(preds-0.5)&lt;T)]</code> → [ <code>P</code> , <code>(1-P)</code> ]</li> </ul> <p>For the second "safe R1 gamble" (R1 matches only) T is nearly zero (such that only one match is affected) and P is 0.</p> <p>For "Madtown" gambling for the Women's tournament, I found more conservative settings appear to work better, and used T=0.16 and P=0.36, so all predictions between 0.34 - 0.66 are set to 0.36 in one sub &amp; 0.64 in the other.</p> <p>Here are my scored predictions:</p> <p><img src="https://i.postimg.cc/J4NFtsFv/womens-march-mania-2022-2nd.png" alt=""></p> <ul> <li>Y scale is the probability for the favourite</li> <li>X axis shows round and match winner</li> <li>X is sorted [round 1, round 2, other rounds], then by prob</li> <li>blue = model predictions</li> <li>black = unused original model (0.36 in one sub, 0.64 in another)</li> <li>red stars = log loss scores (right Y axis)</li> <li>black line = original model score 0.42217</li> <li>red line = best submission score 0.38864</li> </ul> <p>Instead of uncertain predictions around ~0.5, which will all score ~0.693, we get -ln(0.36)=0.446285 or -ln(0.64)=1.021679, hopefully more 0.446285's in one submission, pushing average logloss down.</p> <p>It turns out 13 <em>close</em> matches were scored that had these diverging predictions, and interestingly, <strong>any</strong> set of 0.36/0.64 overrides would have resulted in a gold medal:</p> <p><img src="https://i.postimg.cc/TwjbnXXd/womens-march-mania-2022-scores.png" alt=""></p> <p>So over <strong>all</strong> possibilities for a random seed to reassign the ~0.5 → [ 0.36, 0.64 ] matches, the worst rank would be 8th, with a 42% chance of 8th and 58% chance of top 3! My actual score of 0.38864 is from having a 9:4 split. Of course, those particular 13 matches being scored was not a given, it does not mean I was guaranteed a top 8 finish from the outset.</p> <p>I'm guessing the impressive-looking rankings above are partly due to other teams gambling with a 0.99+ override on Baylor (seeded 2), who lost in round 2.</p> <p>Thanks for reading! Any questions would be welcome…</p> <h3>P.S.</h3> <p>Same approach in Men's (but with Pystan Bayesian model as baseline) is currently a 16:14 split, 0.67280 logloss, 542nd place…</p> <h3>Code</h3> <p><strong>Updated 7th April</strong>: <a href="https://www.kaggle.com/code/jtrotman/2nd-place-ncaaw-2022" target="_blank">2nd Place NCAAW 2022</a> notebook.</p>
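<p>For completeness, the T/P override described above is a small transform; here is a sketch with the same settings (T=0.16, P=0.36):</p> <pre><code>import numpy as np

def madtown_split(preds, T=0.16, P=0.36):
    # Every prediction within T of 0.5 is pushed to P in one submission
    # and to 1-P in the other; everything else is left untouched.
    close = np.abs(preds - 0.5) &lt; T
    sub_a, sub_b = preds.copy(), preds.copy()
    sub_a[close], sub_b[close] = P, 1 - P
    return sub_a, sub_b

preds = np.array([0.12, 0.45, 0.58, 0.91])
a, b = madtown_split(preds)   # a = [0.12, 0.36, 0.36, 0.91], b = [0.12, 0.64, 0.64, 0.91]
</code></pre>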
March Machine Learning Mania 2022 - Women's
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Google, Kaggle, and the Kaggle admins for hosting this competition!</p> <h1>Model</h1> <p>My model started with a baseline that used inferred win probabilities from 538’s march madness projection. It then generated 1,200 different linear transformation of those win probabilities, skewed toward small changes to mimic models clustering around 538’s baseline probabilities. It then simulated 25,000 different tournaments, and selected the transformation that most often placed in the top 5 of all the transformations.<br> The top transformation f(p) for probability p was:<br> <img src="https://i.postimg.cc/htsGtJ5T/equation.png" alt=""></p> <p>This transformation exaggerates win probabilities above .93, and below .93 it discounts the probabilities to the extent that it becomes contrarian between .36 and .64.<br> <img src="https://i.postimg.cc/2688WCgM/graph.png" alt=""></p> <p>The model ended up being 10-3 in games where it was contrarian to the baseline and had an overall log loss improvement of .036.<br> <img src="https://i.postimg.cc/FR91xykP/Screenshot2.png" alt=""></p> <h1>Code</h1> <p><a href="https://github.com/johndoe-jr/NCAA2022" target="_blank">https://github.com/johndoe-jr/NCAA2022</a></p> <h1>Notes</h1> <ul> <li>The 538 implied probabilities were calculated by using the probability a team would make it past a round, adjusted by the probability they would make it to that round and comparing that to the same calculation for their opponent.</li> <li>The transformations were made up of transforms that were solely translational, transforms solely rotational, and transforms that were a combination of the two. The skewing was done by starting at the maximal transformation and multiplying it by a factor less than 1, so that neighboring transformation continually get closer together as the transformation neared the baseline.</li> <li>The simulations used the default baseline probabilities to calculate the winners of each tournament game.</li> <li>This model is the one that I’ve been using for years, and I haven’t adjusted it yet to output two orthogonal transformations. Both submissions were from near identical runs of the model and produced similar results: 0.39429 and 0.39662. The identical model produced a 0.74286 for the men’s tournament.</li> </ul>
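<p>The selection loop in the notes above (simulate tournaments from the baseline, score every candidate transform, keep the one most often in the top 5) can be sketched as follows. This ignores the bracket structure and uses a toy family of transforms, so it only illustrates the mechanism, not the author's 1,200 candidates:</p> <pre><code>import numpy as np

rng = np.random.RandomState(0)
baseline = rng.uniform(0.5, 0.99, 63)                  # favourite win probabilities per game
transforms = [lambda p, a=a: np.clip(0.5 + a * (p - 0.5), 0.01, 0.99)
              for a in np.linspace(0.6, 1.4, 41)]      # toy rotational transforms

def log_loss(pred, outcome):
    return -np.mean(outcome * np.log(pred) + (1 - outcome) * np.log(1 - pred))

top5_counts = np.zeros(len(transforms))
for _ in range(2000):
    outcome = (rng.rand(63) &lt; baseline).astype(float)  # one simulated "tournament"
    scores = [log_loss(t(baseline), outcome) for t in transforms]
    top5_counts[np.argsort(scores)[:5]] += 1

best = transforms[int(np.argmax(top5_counts))]
</code></pre>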
March Machine Learning Mania 2022 - Women's
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I’d like to thank the Kaggle team for hosting this exciting competition for another year. Congratulations to everyone, as we all gained experience this year that would make our models better next year. It’s the third year for me to participate in the NCAA competitions. 2020 was a big disappointment as the tournament got canceled at the last minute due to COVID. This year, I had the luck to end up in the top 1%. The journey to the top was full of excitement.</p> <p>My model is a carry-forward over raddar’s well-known 2018 1st place solution. Shout out to raddar! Your code has inspired many over the years. My work mainly focused on the following aspects:</p> <ol> <li><p>Feature Engineering<br> To give the model more opportunities to extract signals from historic data, my first step was to create more compound features using various evaluation metrics. Some past winners went against this method, saying that simple metrics worked the best, while I thought they may not have found those that truly worked. I found a full page of team evaluation metrics from NBAstuffer.com via the link below. After blending most of them in, I ended up having more than 100 features to choose from.<br> <a href="https://www.nbastuffer.com/analytics-101/team-evaluation-metrics/" target="_blank">https://www.nbastuffer.com/analytics-101/team-evaluation-metrics/</a></p></li> <li><p>Feature Selection<br> Feature selection is always a tedious but rewarding process for me. It was easy to get rid of those correlated features, but choosing from what was left wasn’t fun. I started by using SHAP to explain the feature importance of the XGBoost model to reduce the number of features to 50. Then through trial and error, I manually removed another 30 before seeing the CV loss at the bottom.</p></li> <li><p>Games Overriding<br> One shouldn’t be surprised if he/she learned that the winners had a lot of games overridden. Overriding the 1st round games has become a must for everyone who wants to stay in the game. This year the #14 Jackson St. almost upset the #3 LSU. They led by 10 points in the 4th quarter but didn’t hold up. If that had happened, many would have disappeared on the LB. The other game was #2 Baylor vs #10 South Dakota. I only had it overridden in one of my submissions, as in the past 11 years, there was only one upset between #2 and #10 seeds. But this year alone two more had been added to the record, the other being #10 Creighton beat #2 Iowa. Next year you can’t be more cautious with #2 vs #10. I took more risks by betting over a few more games. For an entire week, I was anxious about #1 NC State vs #5 Notre Dame, as Notre Dame's 108 points in their last game were no kidding. Raina Perez's final steal saved NC State as well as sent me to #1 on the LB after the Sweet 16.</p></li> <li><p>Final Rounds<br> Since the Elite Eight, my model had been on auto-pilot, without overrides. The battle for a position in the top 5 in the last few games was more fierce than I expected. 
I was jealous of others who had an insane 0 loss in the Elite Eight and Final Four, while my model gave me considerable losses over UConn alone. I dropped to #6 before the title. No complaints cause those bets between strong seeds were way too risky for me. I ended up going back to the 4th place as some all-in bets hurt two unlucky friends.</p></li> </ol> <p>Again, congrats to everyone. For those who happened to miss a better medal or prize, don’t be disappointed. You are as good as the next level, just a bit short of luck this year. May luck with us all in 2023!</p> <p>Please upvote if you found this post helpful. Thanks.</p> <p><a href="https://github.com/wii365/NCAAW2022" target="_blank">code</a></p>
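<p>As an illustration of the SHAP-based screening step described above, a sketch along these lines could be used; the data, model settings, and the top-50 cut-off are placeholders, not the original pipeline:</p> <pre><code>import numpy as np
import pandas as pd
import shap
import xgboost as xgb

# Placeholder data standing in for the ~100 engineered team-evaluation features.
X = pd.DataFrame(np.random.rand(500, 100),
                 columns=[f"feat_{i}" for i in range(100)])
y = np.random.randint(0, 2, 500)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

# Rank features by mean absolute SHAP value and keep the top 50.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
top_50 = X.columns[np.argsort(importance)[::-1][:50]]
print(list(top_50))
</code></pre>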
March Machine Learning Mania 2022 - Women's
5th Place Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle for hosting this competition again. This competition is part of what makes March my favorite time of year and I appreciate all of the work it takes on the back end to set up. </p> <p>Here's a bunch of info on my submission with a link to the code and a tldr at the end, please let me know if you have any questions.</p> <h1>Background on you/your team</h1> <p>My name is Taylor Merry and I am a data scientist living in Seattle, WA.</p> <p><strong>What is your academic/professional background?</strong><br> BS in Statistics from University of Washington. 2 years experience as data scientist in industry.</p> <p><strong>Did you have any prior experience that helped you succeed in this competition?</strong></p> <p>Yes, I have participated in many of the previous iterations of the Kaggle March Madness competitions. I am also a big college basketball fan and follower of sports analytics.</p> <p><strong>What made you decide to enter this competition?</strong></p> <p>I enjoyed the past iterations of this competition</p> <p><strong>How much time did you spend on the competition?</strong></p> <p>This year, I had a lot of the code already set up from previous entries, so only around 20 hours or so</p> <h1>Summary</h1> <p><strong>Training method(s) you used</strong></p> <p>Normal distribution (parametric model) from publicly available rating systems</p> <p><strong>The tool(s) you used</strong><br> I used publicly available rating systems and a normal distribution to convert from points differential to win probability</p> <p><strong>How long it takes to train your model</strong></p> <p>No training, notebook runs quite quickly</p> <h1>Features Selection / Engineering</h1> <p><strong>What were the most important features?</strong></p> <p>The publicly available rating systems</p> <p><strong>How did you select features?</strong></p> <p>Looked through a few rating systems and picked the ones that looked the best</p> <p><strong>Did you make any important feature transformations?</strong></p> <p>Not really unless you would count making manual adjustments to some of the teams’ ratings</p> <p><strong>Did you use external data? 
(if permitted)</strong></p> <p>Yes, see references</p> <h1>Training Method(s)</h1> <p><strong>What training methods did you use?</strong></p> <p>None, just used a parametric model using publicly available rating systems and research to set the parameters</p> <p><strong>Did you ensemble the models?</strong></p> <p>I used two different rating systems and did a weighted average</p> <p><strong>If you did ensemble, how did you weight the different models?</strong></p> <p>I decided to weight the Moore power ratings as 60% and the Talisman Red ratings as 40% because I had used the Moore ratings before and knew they were reliable</p> <h1>Interesting findings</h1> <p><strong>What was the most important trick you used?</strong></p> <p>Knowing that UConn played most of the season injured so they’d be highly underrated</p> <p><strong>What do you think set you apart from others in the competition?</strong></p> <p>Boosting UConn, not giving heavy favorites 100% win probability (Baylor, Iowa)</p> <p><strong>Simple Features and Methods</strong></p> <p>My model was already quite simple. I think for this competition, it’s best to have a simple prediction model and then use some intuition on how you should adjust your probabilities.</p> <h1>Model Execution Time</h1> <p>My predictions used no training, the notebook only takes a few seconds to run.</p> <h1>References</h1> <p>Ratings I used:</p> <ul> <li><a href="https://sonnymoorepowerratings.com/w-basket.htm" target="_blank">Sonny Moore</a></li> <li><a href="https://talismanred.com/ratings/whoops/" target="_blank">Talisman Red</a></li> </ul> <p>Sources for Kenpom using 11 points as standard deviation for men’s college basketball (<a href="https://www.reddit.com/r/CollegeBasketball/comments/5xir8t/calculating_win_probability_and_margin_of_victory/" target="_blank">source 1</a>, <a href="https://www.google.com/url?q=http://practicallypredictable.com/2018/03/13/ken-pomeroy-ratings-and-first-round-upset-picks/&amp;sa=D&amp;source=docs&amp;ust=1649541627318853&amp;usg=AOvVaw1DMKq_kvBc8Ax-paTOXuSV" target="_blank">source 2</a>)</p> <h1>Tldr:</h1> <p>I would say the biggest factors leading to my success in this competition were (in order):</p> <ol> <li>Not giving heavy favorites 100% win probability which helped me survive the Baylor and Iowa losses</li> <li>Luck (my men’s entry finished in the 500s and I guess the winner of the men’s competition accidentally used 2018 data???)</li> <li>Knowing UConn was injured and thus would be underrated (I don’t think I would have placed if UConn didn’t make the title game)</li> </ol> <p><a href="https://www.kaggle.com/code/tmerry/2022-women-s-march-mania-predictions" target="_blank">Link to code</a></p> <p>I went through the ESPN pages for every team seeded 12 or higher after the tournament was announced. Then I looked at the player stats to see if one of the team's better players had been missing for some portion of the season or would be missing for the tournament. If so, I tried to guess exactly how that injury would affect the team's power rating. It was mostly just guess work.</p>
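<p>A minimal sketch of the parametric step described above might look like this; the example ratings are made up, and the 11-point standard deviation comes from the referenced sources:</p> <pre><code>from scipy.stats import norm

# Blend two public power ratings (60% Moore, 40% Talisman Red), then convert
# the rating gap to a win probability with a normal distribution (sd = 11 pts).
def blended_rating(moore, talisman):
    return 0.6 * moore + 0.4 * talisman

def win_probability(rating_a, rating_b, sd=11.0):
    return norm.cdf((rating_a - rating_b) / sd)

team_a = blended_rating(moore=92.3, talisman=90.8)   # made-up ratings
team_b = blended_rating(moore=85.1, talisman=86.0)
print(win_probability(team_a, team_b))               # probability team A wins
</code></pre>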
March Machine Learning Mania 2022 - Women's
6th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>For the third year in a row, being aggressive in predictions worked out <strong>much</strong> better in the women's competition than in the men's competition. This probably shouldn't come as a surprise, considering the relative lack of parity in women's basketball compared to men's over the past decade.</p> <p>I used a model exclusively based in XGBoost, and used a good chunk of my time hyperparameter tuning to optimize the results of my CV. </p> <ul> <li><p>The target feature in the train CV was margin of victory</p></li> <li><p>The accuracy measure was mean absolute error</p></li> <li><p>Historic win percentages based on final point spreads were used to map the margin of victory to a win probability</p></li> <li><p>Aggressive override #1: map any game with a win probability &gt;= 92.5% to 99% likelihood. This <em>almost</em> burned me in the NC State/Notre Dame game, but Raina Perez was my hero! I didn't do the math, but I may have finished in the top 5 if I had set this to 99.999% instead. Honestly, I'd rather not go through this exercise because it'll probably cause me too much anguish if my loose hypothesis is true :)</p></li> <li><p>Aggressive override #2: All 1's and 2's were set to a 99.9% likelihood in the first round, and 3's were set to a 99% likelihood</p></li> <li><p>Aggressive override #3: All 1's were set to a 99% likelihood of winning against whatever 8/9 seed they played in the second round. At the last second, I decided to stay away from doing this with 2's against the 7/10 seed - relatively lucky given Iowa and Baylor both going down.</p></li> </ul> <p>At the end of the day, finishing just outside the top 5 was primarily a result of UConn going to the championship game, as my model had them as underdogs against NC State and Stanford. Lesson learned: I need to implement a Geno Auriemma manual override next season!</p>
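<p>A sketch of that post-processing is below; a normal CDF stands in for the historical point-spread lookup actually used, so the mapping is an assumption, while the override thresholds follow the write-up:</p> <pre><code>from scipy.stats import norm

def margin_to_prob(predicted_margin, sd=11.0):
    # Stand-in for the historical spread-to-win-percentage mapping.
    return norm.cdf(predicted_margin / sd)

def apply_overrides(prob, seed, round_1=False):
    if prob >= 0.925:               # override #1: lock in heavy favourites
        prob = 0.99
    if round_1 and seed in (1, 2):  # override #2: 1 and 2 seeds in round 1
        prob = 0.999
    elif round_1 and seed == 3:
        prob = 0.99
    return prob

print(apply_overrides(margin_to_prob(14.5), seed=2, round_1=True))
</code></pre>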
March Machine Learning Mania 2022 - Women's
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Similar to others, being aggressive with predictions is what brought me up the leaderboard. It backfired in the Men's competition (I placed 899th, thanks Kentucky and Iowa!), but it worked in my favor here. Ultimately to place at the top of these March Madness competitions, you need to introduce more volatility to your universe of potential outcomes - you might have a core model that can consistently place around the top 25-30%, but I don't believe you are going to find some magic math to consistently place in the top 10% (if you have, let's talk! :D)</p> <p>I use an ELO ranking approach, very similar to 538's rankings. I shared a notebook back in the 2019 competition for parameter tuning the ranking - I use a version of this each year to think about how to set parameters for what I ultimately use in the competition. <a href="https://www.kaggle.com/code/travisbuhrow/elo-scenarios-simulator-womens/notebook" target="_blank">https://www.kaggle.com/code/travisbuhrow/elo-scenarios-simulator-womens/notebook</a></p> <p>From there, I made the following "aggressive" manual adjustments:</p> <ul> <li>Increased UConn's rating 175 points - this moved them from my 4th rated team to my 1st rated team, but really the intended effect was to move them into the top "tier" of my rankings, so that they were considered mostly equal to Stanford and South Carolina. </li> <li>Gave 1s, 2s, and 3s a 99.9999999% chance of winning round 1. LSU came very, very close to blowing this up! I'm also glad in hindsight that I didn't move on to doing this for round 2 games, because I would have ultimately given the 2s the auto-win… and Iowa/Baylor would have ruined the submissions.</li> <li>Took the Round 1 Nebraska/Gonzaga game, which my model had as a 50/50 game, and submitted one entry assuming Nebraska would win, and the other assuming Gonzaga would win.</li> </ul> <p>Thanks to the admins for hosting another year of these March Madness competitions - some of my first exposure to data science and Python came with competing in the 2018 competition. It's a wonderful starter competition for newbie data scientists with interest in sports, as the volatility in outcomes means there's a little less importance placed on technical data science/coding skills, and a little more importance placed on domain knowledge and submission strategy. These competitions got me hooked on learning more about data science, and I'll be forever grateful for that! </p>
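<p>For readers unfamiliar with Elo, the core of such a rating system boils down to an expected score and an update rule; the constants below are illustrative, not the tuned values behind this entry:</p> <pre><code># Bare-bones Elo: expected score from the rating gap, then a post-game update.
def expected_score(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, a_won, k=32):
    e_a = expected_score(r_a, r_b)
    return r_a + k * (a_won - e_a), r_b - k * (a_won - e_a)

# Manual boost in the spirit of the +175 UConn adjustment described above.
uconn, opponent = 1700 + 175, 1750           # made-up ratings
print(expected_score(uconn, opponent))       # win probability used in the entry
</code></pre>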
March Machine Learning Mania 2022 - Women's
My 56th place solution made me a Kaggle Expert
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2022 - Women's <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>🙏 thanks </p> <p>My solution is very simple. I used just a random forest model and fine-tuned it. The CV for the model is 0.5. A similar solution was used for the men’s competition, but it didn’t work out well there. With this medal, I am now a Kaggle Expert. Thanks for the competition and community support.</p>
March Machine Learning Mania 2023
1st Place Submission - Another victory for raddar's code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>For anyone interested I may have stumbled on the merging error. I realized the notebook only calculates Team Quality from 2010 and after. When adding in the prior periods 380 rows get added to training data (all of these are in 2009 for a total of 508 data points for 2009). </p> <p>Haven't had a chance to look at the detail at all but it gets added with the following merge. </p> <pre><code>tourney_data = pd.merge(tourney_data, seeds_T1, on = ['Season', 'T1_TeamID'], how = 'left')
tourney_data = pd.merge(tourney_data, seeds_T2, on = ['Season', 'T2_TeamID'], how = 'left')
</code></pre> <p>As the data is added to 2009 I'm assuming this is why the Team Quality calculations started in 2010. For this year's tournament it would have actually worked out in a better result (got a score of 0.1688 with Team Quality used in all periods). </p> <p>** I may be a bit early with this post but will update if anything ends up changing.</p> <p>As I had noted in a prior post my submission was essentially the <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> code from a few years back (the python version instead of the R version that is widely used). Initially I was planning on using the base code and updating the features, however I ended up not having the time. </p> <p>I haven't had much time for competitions over the last year or two, however the March Madness one is always a lot of fun to come back to. Ultimately I was more lucky than anything else as I didn't add much past updating the code to work with the current year's data. Basically I just verified the resulting predictions seemed appropriate and went with the results. </p> <p>For fun I used the same predictions in the ESPN Bracket Challenge and ended up in the 50th percentile in the Men's and the 97th percentile in the Women's. </p> <p>The notebook for my submission can be found here (<a href="https://www.kaggle.com/code/rustyb/paris-madness-2023" target="_blank">Paris Madness 2023</a>). </p> <p>In keeping with my submission maybe I should have gone with less effort and had ChatGPT provide some commentary. Note: I actually did this asking for some witty commentary that ended up being far more interesting (<em>make sure to check out the bottom of this post</em>). </p> <h3>Submissions</h3> <p>Best Submission: My contribution was limited to 1) commenting out the np.exp() line when calculating Team Quality as it ended up returning quite a few inf values and 2) No overrides in match-ups of seeds 1-4 against seeds 13-16.</p> <p>Without the changes above the model still performed really well and ended up with a score of 0.17629 which would have been good enough for 6th place on the public leaderboard at the end of the competition. </p> <p>Although the code was pretty similar to the R version that has resulted in top finishes over the last few years I thought I'd provide a bit of commentary on the Python version. Guess I feel a bit guilty about the amount of effort I put into my submission that somehow ended up winning. </p> <h3>Features</h3> <p>Mean of the following regular season stats for each team (where T2 is the opponent of T1).
So basically these are repeated four times (team 1, team 1 opponents, team 2 and team 2 opponents). Of the features below only <code>'PointDiff'</code> was a calculated variable. </p> <p><code>['T1_FGM', 'T1_FGA', 'T1_FGM3', 'T1_FGA3', 'T1_OR', 'T1_Ast','T1_TO', 'T1_Stl', 'T1_PF', 'T2_FGM', 'T2_FGA', 'T2_FGM3', 'T2_FGA3', 'T2_OR', 'T2_Ast', 'T2_TO', 'T2_Stl','T2_Blk','PointDiff']</code></p> <p>The only other features used in the model were the following with the win ratio over the 14 days being common in many of the models made public. <br> <code>[‘T1_win_ratio_14d’, ‘T2_win_ratio_14d’, ‘T1_quality’, ‘T2_quality’, ‘Seed_diff’, ‘T1_seed’, ‘T2_seed’]</code></p> <h3>Team Quality</h3> <p>One of the main items in the R version of the code was the random effects GLMM. It looks like the Python version doesn’t use random effects which would be a difference between the two. I can’t say I’m that familiar with R or the from_formula based GLM so I’m not sure how important this difference was. Also even with removing the np.exp() there were quite a few NaN values (about 25% of the training data). </p> <h3>Model</h3> <p>XGB was used with xgb.cv(folds = KFold(n_splits = 5, shuffle = True)) that was repeated 3 times. This is different from the original R code which used a recommended repeat_cv of 10</p> <h1>ChatGPT WItty Commentary</h1> <h1>Title: A Slam Dunk of a Model: XGBoost's Swish Predictions for March Madness 2023</h1> <h2>Introduction:</h2> <p>March Madness 2023 proved to be a wild ride full of upsets, nail-biting finishes, and buzzer-beaters. But while the on-court action left fans on the edge of their seats, our trusty XGBoost model coolly calculated the winners behind the scenes. With only a few minor tweaks, our predictive powerhouse once again clinched the top spot in the Kaggle competition, leaving the rivals in the dust like an ankle-breaking crossover.</p> <h2>Model Overview:</h2> <p>XGBoost, the LeBron James of machine learning algorithms, is an ensemble learning technique famous for its slam-dunk performance across diverse tasks. Our winning model harnessed the power of XGBoost to tackle the NCAA tournament, predicting outcomes with such accuracy that even the most seasoned sports analysts would be left speechless.</p> <h2>Data and Features:</h2> <p>The model feasted on a buffet of historical NCAA data, devouring team stats, player performances, and even the odd tidbits of info like a hungry freshman at an all-you-can-eat dining hall. Our feature set comprised the crème de la crème of data points, cherry-picked for their impact on game outcomes:</p> <ol> <li>Team statistics: win-loss ratios, offensive and defensive wizardry, and average margin of victory.</li> <li>Player performance: on-court heroics such as points per game, rebounds, assists, and shooting percentages.</li> <li>Miscellaneous factors: the elusive home-court advantage, battle-hardened experience, coaching prowess, and the all-important team momentum leading up to the tournament.</li> </ol> <h2>Model Training and Validation:</h2> <p>Like a well-coached team running drills, our model underwent rigorous training and validation procedures. The dataset was divided into training and validation sets, and a cross-validation strategy was adopted to fine-tune the model's hyperparameters. 
By repeatedly putting our model through its paces, we pinpointed the winning combo of parameters that delivered the most accurate predictions.</p> <h2>Key Changes and Improvements:</h2> <p>Though our XGBoost model has been a perennial contender, we couldn't resist tweaking it to keep it ahead of the pack. The changes we made for this year's competition were akin to swapping out our playbook's dog-eared pages for fresh new strategies:</p> <ol> <li>Updated data: We retrained our model on the freshest NCAA tournament data, ensuring it was up to speed with the latest college basketball shenanigans.</li> <li>Hyperparameter tuning: A fine-tuning of our model's hyperparameters, based on the most recent competition data, turned it into an unstoppable force.</li> </ol> <h2>Conclusion:</h2> <p>Our XGBoost model's triumphant performance in the 2023 Kaggle March Madness competition is a testament to its predictive prowess. With just a few strategic adjustments and an eye for the latest trends, our model swished its way to the top, earning the title of MVP in the world of college basketball predictions.</p> <p>I wouldn't be surprised if there was an unintended merge or join in there someplace. I don't know R myself so I'm not easily able to follow through the columns of data through each step. Might just have to chalk it up to one of life's great mysteries. </p> <p>I'm hoping to python refactor the code to make it a bit easier to use prior to next year (there is likely a version out there already but I'll make mine public as well) I'm likely not going to get that in depth into the R version as I simply don't use it at all. </p> <p>This is so interesting and unfortunate/unlucky. If I had more time I'm assuming I would have made changes that dropped me way down the leaderboard. Looking forward to trying them now that late submissions are open. </p>
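<p>For anyone trying to reproduce the repeated cross-validation mentioned in the write-up, the setup is roughly the following; the data, objective, and parameters here are placeholders rather than the notebook's actual configuration:</p> <pre><code>import numpy as np
import xgboost as xgb
from sklearn.model_selection import KFold

# Placeholder data standing in for the engineered tournament features.
X = np.random.rand(1000, 20)
y = np.random.rand(1000) * 30 - 15          # e.g. a point-differential target

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squarederror", "eta": 0.05, "max_depth": 3}

best_rounds = []
for repeat in range(3):                      # repeat_cv = 3
    folds = KFold(n_splits=5, shuffle=True, random_state=repeat)
    cv = xgb.cv(params, dtrain, num_boost_round=3000, folds=folds,
                early_stopping_rounds=25, verbose_eval=False)
    best_rounds.append(len(cv))              # rounds kept after early stopping
print(np.mean(best_rounds))                  # averaged best iteration count
</code></pre>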
March Machine Learning Mania 2023
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I think there's probably something to be said for simplifying the model and just using all differentials of differentials. So like, instead of (1) home points for (last X weeks), (2) home points against, (3) away points for, and (4) away points against, you just have (1) [home point differential - away point differential]. </p> <p>For rebounding, you might want to go the "Four Factors" approach and compare the home team's offensive rebounding directly to the away team's defensive rebounding + vice versa. Like maybe [home OREB% + home DREB% - away OREB% - away DREB%)]. </p> <p>FWIW, I think I made something of a mistake in my feature engineering. For tournament games, I included information from earlier in the tournament to predict games later in the tournament. For example, for the 2022 men's championship game played on April 4th, I allowed the rolling window feature engineering to look at any game before 4/4/2023. That's great if you're making a betting model, but for this competition I really should have fixed all tournament game dates to the first date of the tournament. Clearly it wasn't a huge deal, but methodologically I'd like to update that for next year.</p> <p><strong>Private leaderboard place:</strong> 2nd<br> <strong>Private leaderboard score:</strong> 0.17522</p> <p>UPDATE: Notebooks now available. Each notebook is seeded with public data, built from the prior notebook, to allow anyone to run any notebook. This keeps RAM requirements under the 30GB limit for non-paying accounts. </p> <ol> <li><strong><a href="https://www.kaggle.com/code/matthiask/01-prep-march-mania-2023-data-round-1" target="_blank">Prep men's and women's data</a></strong></li> <li><strong><a href="https://www.kaggle.com/code/matthiask/02-march-mania-2023-feature-engineering" target="_blank">Feature engineering</a></strong></li> <li><strong><a href="https://www.kaggle.com/code/matthiask/03-march-mania-2023-modeling" target="_blank">Modeling</a></strong></li> <li><strong><a href="https://www.kaggle.com/code/matthiask/04-march-mania-2023-smooth-predictions" target="_blank">Smoothing predictions</a></strong></li> <li><strong><a href="https://www.kaggle.com/code/matthiask/05-march-mania-2023-submission-evaluation" target="_blank">Model evaluation technique</a></strong></li> </ol> <h3>Intro</h3> <p>My original solution used a blend of pre- and post-processing steps in R, and API calls to DataRobot's AutoML platform to leverage feature engineering and AutoML functionality. I happen to work at DataRobot and have access to their most recent software, but I understand that many do not. For that reason I have attempted to replicate the entire process in R, where I have achieved outputs with &gt; 0.993 correlation to my original final submission and a very similar leaderboard score. I have uploaded a series of notebooks linked above.</p> <p>I have a BA in mathematics and an MS in statistics, with 10+ years of professional experience as a university professor, actuary, and data scientist. I would be remiss not to point out that with just 126 scoring observations, I very well could have been lucky. 
But I am optimistic about some of these ideas working year-over-year. Here is the basic pipeline:</p> <p>R: pre-processing -&gt; R: conference effects offset -&gt; DataRobot time-aware feature engineering -&gt; DataRobot AutoML -&gt; R: smoothing probability predictions and outputting in Kaggle format.</p> <h3>Primary objective and methodology</h3> <p>I wanted to find a way to leverage information from entire seasons, while also allowing the model(s) to focus on how teams are playing leading up to the tournament, and focus on games that are more like tournament games. I leveraged AutoML to save time, because my 5- and 7-year-olds are my real bosses, and they are demanding ;-) To that end, here are the most unique things about my approach.</p> <p>1) Derived conference effects for every season from the regular season non-conference schedule, i.e., information that would be available when the tournament starts. This came in iteration 2 or 3, when I saw the South Dakota State women as my #2 overall team. Sure they went 18-0 in conference, but they also play in the Summit conference, which isn't very good. Clearly my model was not picking up conference effects very well before this addition.</p> <p>For this, I used R's glmnet on only non-conference, regular season games, and I leveraged four primary features: </p> <ul> <li>each team's in-conference point differential to control for how good the team is within their conference</li> <li>home court indicator</li> <li>the season (as a factor), as many conferences have gone up and down over the years</li> <li>conference labels (and interaction between season and conference). This is the main effect I'm looking for, really. The rest are just controls.</li> </ul> <p>Maybe the real magic here is in the creation of the model matrix, where instead of duplicating conference features for home and away, I made a single column for each conference. The home team's conference was labeled +1 and the away team's conference was labeled -1. This allowed me to isolate an entire conference's information into one column, rather than splitting it up between home[conf] and away[conf]. I used two targets: home team's point differential in the game and home team win indicator. Then I extracted the season-conference coefficients for each of home margin and home win models (log-odds), and appended them onto each game in my training dataset, to the home and away teams, respectively. I took the difference between the home team's conference effects and away team's conference effects, and these became offsets in my main models to help control for conference effects that are hard to pick up in the relatively few tournament games that are played.</p> <p>2) Related to the above, I had to do something about neutral site games, as the entire NCAA tournament is played on neutral site courts (for the men, and some for the women, as well). So from the Kaggle-provided data, I duplicated all neutral site rows flipping "home team" and "away team", and used a "home" indicator = 0 for when games were on neutral courts. This made all neutral site games have similar impacts on the influence of "home" and "away" features, as well as doubling the weight of most NCAA tournament games in model training. When I went to predict tournament games, I predicted two rows per every matchup and then averaged the results. </p> <p>3) I built four models in the main training portion of the work. Point differential and home win models for each of men and women. 
There was more information given for men's tournaments, and I thought it best to just separate out the genders. I also believe you can extract more information from differentials--as a 20-point win isn't the same as a 1-point win--so I wanted to make sure I adjusted my win probabilities by margins. To do so, using the outputs of the four models described above, I built a <code>gam()</code> with the <code>mgcv</code> package in R that regressed log-odds of predicted home win probability on the predicted home point differentials to "smooth" the probabilities out and leverage those point differentials. Anecdotally, this helped the predictions for the Iowa women, as their predicted point differentials were relatively greater than their original predicted win probabilities. </p> <p>4) Time-aware feature engineering: I noted that I wanted to identify teams that may be getting relatively better or worse, and this is how I achieved it using DataRobot's Feature Discovery tool:</p> <ul> <li>Derived rolling 2-week averages, medians, maxes, mins, and standard deviations of each Massey ranking (for men)</li> <li>Derived rolling averages, medians, maxes, mins, and standard deviations for box score statistics, going back 6 months (effectively all season), 3 months, 1 month, and 2 weeks. </li> <li>In addition to the box score stats provided, I derived team differentials for each game that were aggregated up. So, for example, for the designated home team in an upcoming game I would have their "average offensive rebounding differential over the past 2 weeks" as one such feature.</li> <li>For all these box score statistics, I normalized them before aggregation on a per-67 possessions basis. </li> </ul> <p>5) AutoML revealed that elastic net regressors did very well on this data. I think that makes some sense in sports, as typically effects are pretty monotonic and linear. You can't really be "too good" at something, such that you'd need xgboost or a comparable algorithm to detect non-monotonicity or non-linearity. </p> <h3>Top features</h3> <ul> <li>The home effect was quite large, equivalent to between 3-4 points per game. For example, the men's (un-standardized) coefficient effect was +1.72, which represents the gap between playing at home and playing on a neutral court. Thus the home-vs.-away effect is more like 2x1.72 = 3.44, which jives with what you can typically infer from betting lines.</li> <li>Conference effects were very large, especially in early round games where you often see power conferences playing mid-majors.</li> <li>Lagged, aggregated offensive rebounding differential: showed up highest for men, less so for women.</li> <li>Lagged, aggregated versions of score differential (points, FGM, FTM): showed up high for men <em>and</em> women. In general, longer windows were more valuable (e.g., whole season over 2-week lag), but FTM for men was important in the last month. I hypothesize that getting to the line is something a team has a fair amount of control over, and there is signal in shorter rolling windows. </li> <li>Individual Massey ranking systems showed up early and often for men. I would have thought some of my aggregations (like median rank across systems) would have been top contributors, but instead it was DOK and 7OT that both made the top 10. </li> </ul> <p>Sure! First off, I trained each season starting on Feb 1, so I had a fair number of home-site games. 
For these typical home/away games, I converted Kaggle-Massey's Winning team and Losing team into Home and Away, and I set a "home" indicator to 1. I then derived rolling, lagged features for the home and away teams separately. So like, home_pts_differential_average_two_weeks would be the home team's prior average winning margin over the past two weeks. I derived a whole set of these rolling lags for home and for away. And when I say I did that, I actually let an AutoML program do that, and then I let that same AutoML program derive and test pairwise differences between every pair of numeric features. A few of those "secondary differences" made it into modeling. </p> <p>For neutral site games, I did basically the same thing as above to start, but I arbitrarily selected the winning team to be home and the losing team to be away. Of course, this introduced bias, so I duplicated each neutral site row, allowing the losing team to be home and the winning team to be away, thus creating a 50% win percentage for neutral site games, and doubly weighting neutral site games (all men's tournament games are neutral site, and women's starting in the sweet 16). For these games, I set the "home" indicator to 0. </p> <p>I'm not typically a fan of upsampling, but in this case it felt like the best way to unbias the model in neutral site games. I could have also randomly sampled which of the winning or losing team would get to be the home team. </p>
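<p>To make the +1/-1 conference encoding concrete, here is a small pandas sketch of the design-matrix construction; the data is a toy example, and the real pipeline fed a matrix like this (plus the controls listed above) to glmnet:</p> <pre><code>import pandas as pd

# One column per conference: +1 for the home team's conference,
# -1 for the away team's conference, 0 otherwise.
games = pd.DataFrame({
    "home_conf": ["acc", "summit", "big10"],
    "away_conf": ["sec", "acc", "summit"],
})

home = pd.get_dummies(games["home_conf"]).astype(int)
away = pd.get_dummies(games["away_conf"]).astype(int)
conf_matrix = home.sub(away, fill_value=0)   # align columns, subtract away dummies

print(conf_matrix)
</code></pre> <p>Regressing the home margin (or the home-win log-odds) on these columns then yields one coefficient per season-conference, and the home-minus-away difference of those coefficients becomes the per-game offset described above.</p>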
March Machine Learning Mania 2023
3th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Private Leaderboard Score:</strong> 0.17525<br> <strong>Private Leaderboard Place:</strong> 3rd<br> <strong><a href="https://www.kaggle.com/tihonby/march-madness-3th-place-solution/" target="_blank">Notebook link</a></strong></p> <h4>Introduction</h4> <p>Hello Kaggle Members! <br> I want to thank the organizers for hosting this contest, Kaggle for providing a platform for the contest. Thanks to the colleagues at Kaggle for the public codes that I learned a lot from. Thanks to <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> and <a href="https://www.kaggle.com/verracodeguacas" target="_blank">@verracodeguacas</a> for their codes that I used in my work. My congratulations to the finalists and winners of this wonderful competition!</p> <h4>Background</h4> <p>I am an ML Engineer with 3 years of professional experience. This is my first time participating in this competition; I previously predicted the results of other sports.</p> <h4>Datasets</h4> <p>In the final, I only used data from NCAATourneyDetailedResults, NCAATourneySeeds, RegularSeasonDetailedResults. I experimented with MasseyOrdinals data, but in the end I abandoned this data, as I did not find a clear increase. I think that it is necessary to return to them, since I have not studied them enough.</p> <h4>Features</h4> <p>Features that were used in training the final model: 'Score', 'FGM', 'FGA', 'FGM3', 'FGA3', 'OR', 'Ast', 'TO', 'Stl', 'PF', 'seed', and derivatives from them. When testing, I excluded other features, as they did not give an increase in the quality of predictions. With an increase in the number of features over 70, the quality of forecasts began to deteriorate.<br> I added a data extraction function for the last 3 seasons. The function retrieves the value of the feature in the current season and in the previous two. It assigns weights in descending order, so the current season carries more weight and contributes more, and writes the weighted average to a new feature. This feature allows you to smooth out outliers when a team performs differently this season than in previous years. In some cases, this can improve the forecast by 0.001-0.003. The function is little studied and is applied to the parameters 'Score', 'FGA', 'FGA3', 'quality', 'seed'.</p> <h4>Scaling</h4> <p>I used Normalizer, MinMaxScaler, StandardScaler. The best results are from Normalizer, which was used in the final.</p> <h4>Models</h4> <p>XGBoost, ElasticNet, LogisticRegression have been tested. XGBoost had the best performance, with final settings: xgb.cv, repeat_cv=4, max_depth=3, folds=KFold, n_splits=5.<br> Testing with repeat_cv = 7-10 gave an improvement of 0.003-0.005 over the multi-season average. But if a team that the algorithm was sure of had lost, the score could have been worse, which happened in this tournament. For this tournament, the result for repeat_cv=7 was worse than for 4. 
I compared forecasting the probability of winning (1 or 0) with forecasting by points and did not find a significant difference.</p> <h4>Results processing</h4> <p>I did not post-process the final results of the model prediction; the predictions were kept in their original form. Later I rechecked overrides for seeds 1-3 vs 14-16, but this did not noticeably improve the results. I am not a fan of manual adjustment, because in this way you can reduce all predictions to a manual forecast based on rating and personal preferences. Although I think that it can improve the forecast and add emotions during the competition.<br> The search for optimal parameters was carried out by training on all seasons, except for the last one, which was used for validation. The training was done separately for men's and women's teams. Prediction results are better for women's matches: approximately 0.145-0.151 versus 0.187-0.192. I think that the men's teams are more equal in strength, and it is more difficult to predict the outcome of a match in advance. The average result across seasons is about 0.172-0.173, with individual seasons ranging 0.169-0.177. I expected my result in the competition to be in the range of 0.174 - 0.176, although I thought that there would be those who could predict better than 0.169.<br> The 2 most successful options were selected for submission; their scores were 0.17525 (3rd place) and 0.1763 (6th place). Some predictions for strong teams turned out to be incorrect, but others had the same predictions for those games, so the place in the rankings did not change much. The results obtained are very pleasing.</p> <h4>Source</h4> <p><a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>: <a href="https://github.com/fakyras/ncaa_women_2018/blob/master/win_ncaa.R" target="_blank">Here's the original R from 2018</a>. The guidelines of <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> - Darius Barušauskas, <a href="https://www.youtube.com/watch?v=KmhGNc7gcCM&amp;t=18s&amp;ab_channel=Kaggle" target="_blank">youtube presentation</a><br> <a href="https://www.kaggle.com/verracodeguacas" target="_blank">@verracodeguacas</a> Python notebooks from 2023: 1: <a href="https://www.kaggle.com/code/verracodeguacas/xgboost-ken-pom-data-sagarin-ranks" target="_blank">men</a>, 2: <a href="https://www.kaggle.com/code/verracodeguacas/xgboost-ncaa-women-s-side" target="_blank">women</a>, 3: <a href="https://www.kaggle.com/code/verracodeguacas/easy-mixer-men-women" target="_blank">mixer</a></p> <h5>Thanks Kaggle! Good luck to everyone in the new season!</h5>
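<p>The three-season smoothing described in the Features section can be sketched as follows; the 0.6/0.3/0.1 weights and the data layout are assumptions, not the exact values used:</p> <pre><code>import pandas as pd

def three_season_feature(stats, col, weights=(0.6, 0.3, 0.1)):
    # stats: one row per (Season, TeamID) holding the season-average stat in `col`.
    wide = stats.pivot_table(index="TeamID", columns="Season", values=col)
    blended = pd.DataFrame(index=wide.index)
    for season in wide.columns:
        total, wsum = 0.0, 0.0
        for lag, w in enumerate(weights):        # current season gets the largest weight
            if (season - lag) in wide.columns:
                total = total + w * wide[season - lag]
                wsum += w
        blended[season] = total / wsum           # weighted average over up to 3 seasons
    return blended
</code></pre>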
March Machine Learning Mania 2023
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Details:<br> Private Leaderboard Score: 0.17557<br> Private Leaderboard Place: 4th<br> Name: Nicholas Hilton<br> Location: Brooklyn, NY<br> Email: <a href="mailto:[email protected]">[email protected]</a></p> <h2>My Background</h2> <p>I graduated from the University of Cambridge with a maths degree in 2015. From there I worked at Manchester City Football Club before teaching myself how to code and getting a job at Tribe Dynamics, a start up in San Francisco. There I learnt how to code professionally and data science skills. After 5 years in SF I left Tribe Dynamics to work as an Machine Learning Engineer for Gro Intelligence in New York where I am today.</p> <p>I have been doing this competition since 2018 and have enjoyed it every year as I both enjoy the actual competition on TV and it gives me a chance to do some data science outside of my job. I used roughly the same method every year and it has performed a bit better every year as I make improvements. In total I spent about 3 weeks total on the competition over the last 5 years, this year making a few small tweaks before running my training process with the new data.</p> <h2>Model Summary</h2> <p>My model implements an ELO rating system and then uses those ratings to produce predictions for each matchup, adjusting for a few other features (rebounding, field goal percentage (overall and 3 point), seed). The rating difference and adjustments are then passed into a link function which converts the adjusted rating difference to a probability for the matchup.</p> <p>The training process involved tuning the parameters of the model, mainly the feature adjustments and the rate at which ELO ratings are updated after each match. A single training run would simulate all matches from 2010, keeping track of ratings over time, using the tournament score for each season as the metric to evaluate that set of params on. 
During the regular season there was also an adjustment for home or away teams but in the tournament all matches were treated as neutral.</p> <p>I think the training process could be optimised in the future as I ran a grid search over a sensible parameter space and let my macbook run all day chugging through the param space.</p> <h3>Final Params</h3> <p>Final ELO ratings are spread over a ~400 point range for tournament teams so weights below are related to that rating spread.</p> <p><em>Mens:</em><br> 'k': 150 # how fast the ELO ratings update - at the high end of the param range tested<br> 'seed': -40 # the seed difference to adjust - Medium End<br> 'link': 'N' # The response function to turn a rating diff into a probability - Normal distribution<br> 'fgp': 1200.0 # Field Goal Percentage difference - High end<br> 'fgp3': -10.0 # Field Goal 3pt Percentage difference - Low end<br> 'reb': 15.0 # Rebounding Average difference - Medium End</p> <p><em>Womens:</em><br> 'k': 80 # Medium End<br> 'seed': -40 # Medium End<br> 'link': 'L' # Logistic Function<br> 'fgp': 1200.0 # High end<br> 'fgp3': 0.0 # Low end<br> 'reb': 15.0 # Medium End</p> <h2>References</h2> <p>Model Repo can be found <a href="https://github.com/NickHilton/March-Madness" target="_blank">here</a><br> Write up of details on the Elo system (for soccer but the initial work I based this model off) can be found <a href="https://drive.google.com/file/d/1mCGCTWuxvyWAXaOhiqt0ncUExs4LsOPf/view" target="_blank">here</a><br> The idea is similar to 538’s methodology <a href="https://fivethirtyeight.com/methodology/how-our-march-madness-predictions-work-2/" target="_blank">here</a></p> <h2>Interesting Insights</h2> <p>I think a key improvement I made to the model last year (when I placed in the top 50) and which probably helped me this year, was simulating the tournament matches and updating the ratings throughout the tournament. I.e. If a matchup was a R2 matchup between W1 and W8 then W1 must have beaten W16 and W8 must have beaten W9 and so you can update the ratings before predicting each matchup in the tournament. I found a rating system which had a higher importance on recent form, performed better this year, and probably helped with the number of upsets that happened (Iowa for example in the women’s tournament I ended up rating highly at the end)</p> <p>Another thing which helped me a bit was picking a team to do well and adjusting their ratings as a sort of gamble. I don’t think its as important now there are 126 games, but I picked Princeton to do well in the mens tournament and updated their ratings to do so. I figured that if you are competing in a pool of &gt;1k entrants, who are all likely to have good models, having a ~5% chance of a big boost to your score means that whilst your mean score might go down, your likelihood of winning goes up (go big or go home). That being said, I only should have picked them to win 2 games to get the boost I needed as when they lost their third game most of the advantage I had was wiped out. In hindsight, rerunning my model without this manual choice gives a score which would still have put me in 4th place.</p> <h2>Model Simplicity</h2> <p>My model is actually relatively simple from a Machine Learning point of view. It doesn’t use any advanced ML libraries, instead it is a tried and tested algorithm for rating sports teams/players (mainly Chess) and I was happy to have found a way to apply this to the tournament. 
I think the time consuming part came from building out the actual code to implement the ELO model with the data structured as it is as well as running the grid search over the param space.</p> <h2>Final Thoughts</h2> <p>I thoroughly enjoyed this competition, I have done in prior years when I wasn’t near the top as it takes one of the highlights of the sports year and allows me to nerd out on it. I will keep doing this competition and look forward to thinking of improvements to my model for next year. </p> <p>I also appreciate that with the scores so tight at the top of the leaderboard, and games coming down to final shots regularly, a few baskets made or missed could have sent me tumbling down the leaderboard. The beauty of competitions like these based on sports is the drama that unfolds as you watch your model perform in real time, and it really was a fun couple of weeks, and a good time to get a bit lucky.</p> <p>Nice - It seems to be a common rating system amongst lots of sports</p> <p>I rolled over the ratings YoY. Not perfect because of the high turnover of talent, particularly at the top end, but that worked better than starting with fresh ratings every year</p>
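<p>The adjusted-rating link step described above can be sketched like this; the 175-point scale and the example team stats are made up, while the seed/FGP/rebounding weights follow the men's parameters listed in the write-up:</p> <pre><code>import math
from scipy.stats import norm

# Adjusted rating difference converted to a win probability via a link function.
def adjusted_diff(elo_a, elo_b, feat_a, feat_b, w):
    diff = elo_a - elo_b
    diff += w["seed"] * (feat_a["seed"] - feat_b["seed"])
    diff += w["fgp"] * (feat_a["fgp"] - feat_b["fgp"])
    diff += w["reb"] * (feat_a["reb"] - feat_b["reb"])
    return diff

def win_prob(diff, link="N", scale=175.0):           # scale is an assumed constant
    if link == "N":                                  # normal link (men's setting)
        return norm.cdf(diff / scale)
    return 1.0 / (1.0 + math.exp(-diff / scale))     # logistic link (women's setting)

w = {"seed": -40, "fgp": 1200.0, "reb": 15.0}        # weights from the write-up
print(win_prob(adjusted_diff(1850, 1790,
                             {"seed": 2, "fgp": 0.48, "reb": 36.1},
                             {"seed": 7, "fgp": 0.45, "reb": 33.4}, w)))
</code></pre>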
March Machine Learning Mania 2023
5th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks haha! I didn't override any predictions- I didn't want any personal biases to impact my submission. Ironically, I needed SDSU to win the national championship for a chance at 3rd/4th which I found funny.</p> <p><a href="https://www.kaggle.com/code/mbund1237/marchmadness/notebook" target="_blank">Notebook link</a></p> <p><strong>Private leaderboard place:</strong> 5th<br> <strong>Private leaderboard score:</strong> 0.17619</p> <h3>Intro</h3> <p>Before getting into it, I just want to thank Kaggle for getting all the different datasets together and for hosting this competition. I've learned a lot and this was a great way to challenge myself. I also want to thank anyone who has shared a notebook or participated in any discussion with the goal of helping others! However, it's sad to see that ~60% of the top 100 was taken up by a fork of one notebook (without any changes). I was surprised at the number (and profile rank) of people who put more value on winning an arbitrary medal vs. improving their skills and challenging themselves. I hope that future competitions will put more emphasis on original submissions.</p> <h3>Background</h3> <p>I graduated in August 2022 with a BS in Business Administration from San Diego State University. I don't have any professional experience (yet) but hope to land a data-driven role. I am self-taught and took up machine learning at the beginning of 2023. However, I do have experience playing sports, sports analytics, and building betting models for basketball primarily (personal projects). I am definitely not the best modeler in the competition, but I do feel as if my knowledge in this subject specifically is what gave me a bit of an edge.</p> <h3>Approach Introduction</h3> <p>One of my favorite movies of all time is Moneyball. In Moneyball, the Oakland Athletics took a statistical approach to building a baseball team. One of the main metrics they judged hitters by was OBP (on-base percentage). They strategized that winning in baseball can primarily be broken down to getting batters on base. I have always wanted to apply a similar strategy to basketball, but it's tough as it is a very different game. </p> <h3>My Approach</h3> <p>My approach to recreate the Moneyball strategy was to only use features that normalized any difference in pace of play, playstyle, bias, noise, etc. In order to do this, **I focused on how well teams played on a possession-by-possession basis only with an emphasis on their efficiency. Teams who play more efficiently per possession are more likely to beat their opponent, irrespective of strength/seed. ** I am somewhat confident that this approach would do fairly well every year and could possibly be expanded to the NBA as well.</p> <h3>Features</h3> <p>During the testing phase, most of my time spent was on feature selection. As mentioned before, having previous knowledge and experience using some advanced basketball statistics made the feature selection much easier. I also decided to drop "NumOT" and "WLoc" from all of the datasets that I used. 
I view "NumOT" as noise due to the fact that it doesn't give any value in determining the strength of either team. "WLoc" is irrelevant in my opinion as I see the NCAA Tournament as a neutral location so there is no home-court advantage.</p> <p>Here is a breakdown of each of my features, why I used them, and their importance (will average between the two teams… i.e. AWinRatio and BWinRatio will just be referred to as WinRatio and I will use their average importance):</p> <table> <thead> <tr> <th>Feature</th> <th>Formula</th> <th>Explanation</th> <th>Importance</th> </tr> </thead> <tbody> <tr> <td>Win Ratio (Win %)</td> <td>WR = W / G</td> <td>This one is fairly straightforward. A team with a higher win percentage is more likely the stronger team. Teams with a high win percentage and a low seed may be stronger than anticipated.</td> <td>0.327</td> </tr> <tr> <td>Seed</td> <td>-</td> <td>This one is also fairly straightforward. A team with a high seed is more likely to be the stronger team than a low-seeded team.</td> <td>0.052</td> </tr> <tr> <td>PPP (points per possession)</td> <td>PPP = PTS / Poss</td> <td>PPP metric represents the efficiency at which a team is able to generate points on a possession-by-possession basis. The higher the PPP, the better a team is at generating efficient points each time they have possession. A low PPP may be indicative of bad shot selection (i.e. long 2's), poor efficiency, or bad offense.</td> <td>0.014</td> </tr> <tr> <td>Team</td> <td>-</td> <td>Some teams have performed better than others (and vice-versa). This may be due to better recruiting, good coaching, conference differences, or even luck.</td> <td>0.028</td> </tr> <tr> <td>EFG% (Effective field goal %)</td> <td>EFG = (FG + 0.5 * 3P) / FGA</td> <td>EFG is a shooting efficiency metric that accounts for the fact that a 3 pointer is worth 1 more point than a 2 pointer (50% more) when looking at FG% (field goal percentage).</td> <td>0.008</td> </tr> <tr> <td>TOV%</td> <td>TOV% = (TOV / (FGA + 0.475 * FTA + TOV))</td> <td>TOV% (Turnover percentage) is an estimate for the number of times a team turns over the ball per 100 possessions. This can be used to measure a team’s ability to take care of the ball on offense as teams who turn the ball over often have lower efficiency and allow the defense to get back possession of the ball. Note: Some calculations use 0.44 as the weight instead of 0.475.</td> <td>0.008</td> </tr> <tr> <td>OREB%</td> <td>OREB% = OREB / (OREB / OppDREB)</td> <td>OREB% (Offensive rebound percentage) is an estimate of how often a team gets an offensive rebound. This measurement is useful as a team with a high OREB% is more likely to create second chance opportunities after a missed shot.</td> <td>0.008</td> </tr> <tr> <td>Set</td> <td>1 = Tournament, 0 = Regular Season</td> <td>This feature was created by me in order to allow the model to possibly assign a different weight to regular season games vs tournament games. Winning a tournament game can be seen as more important than winning a regular season game.</td> <td>0.008</td> </tr> <tr> <td>TS%</td> <td>TS% = PTS / (2 * FGA + 0.475 * FTA)</td> <td>Another advanced shooting efficiency metric that takes into account free throws when calculating field goal percentage (FG%). Traditional FG% only utilizes total field goals (made and attempted) while TS% takes into account the difference between a free throw, 2 pointer, and 3 pointer when calculating. 
Note: Some calculations use 0.44 as the weight instead of 0.475.</td> <td>0.008</td> </tr> <tr> <td>FTR</td> <td>FTR = FTA / FGA</td> <td>FTR (Free throw rate) represents how often a team is able to draw shooting fouls relative to how many shots they take. Teams with a high FTR are able to get to the free throw line often when attempting shots. This metric can be useful as a team with a high FTR is not only able to generate additional points but can also get the other team into foul trouble.</td> <td>0.007</td> </tr> <tr> <td>DREB%</td> <td>DREB% = DREB / (DREB + OppOREB)</td> <td>DREB% (Defensive rebound percentage) measures a team’s ability to secure a defensive rebound after a missed shot by the offense. Teams with a high DREB% minimize the opponent’s chances to get an offensive rebound after a missed shot and give their team possession of the ball.</td> <td>0.007</td> </tr> <tr> <td>Season</td> <td>-</td> <td>I wasn’t sure if I should leave this feature in but it seemed to perform well. My rationale was that teams that have performed well the last few seasons may be more likely to perform well present day. It can also be used to quantify playstyle differences throughout the years (i.e. 3 pointers are taken at a much higher rate than in the early 2000’s).</td> <td>0.007</td> </tr> <tr> <td>4F</td> <td>-</td> <td>Please see references for an explanation. The 4 factors concept was created by Dean Oliver in his book, “Basketball On Paper”.</td> <td>0.007</td> </tr> <tr> <td>Gender</td> <td>1 = Men, 0 = Women</td> <td>This feature is used to try and quantify the playstyle differences between men and women. Due to only training one model (for both men and women), I figured this could be a possible solution. Next year I hope to train two separate models. It didn’t carry any weight in the final feature importance list.</td> <td>0.000</td> </tr> <tr> <td>ORTG</td> <td>ORTG = 100 * (PTS / Poss)</td> <td>ORTG (Offensive rating) is used to measure the efficiency of an offense. Its value represents the number of points a team scores per 100 possessions. Looking back, PPP (points per possession) does almost the same thing- just doesn’t adjust it per 100 possessions. While testing the model in the March Madness Warmup, this held more importance than it did here.</td> <td>0.000</td> </tr> <tr> <td>DRTG</td> <td>DRTG = 100 * (OppPTS / OppPoss)</td> <td>DRTG (Defensive rating) is used to measure the number of points a team’s defense allows per 100 possessions. Similar to ORTG, PPP is highly correlated with this.</td> <td>0.000</td> </tr> <tr> <td>Opponent Stats</td> <td>-</td> <td>In addition to calculating all of these metrics for a team, I did it for their opponent’s averages as well. For example, if a team has a very low “OppEFG” for the season, it means their opponents do not shoot efficiently against them. This is highly indicative of a strong defense. Another example is OppFTR. If a team has a very low “OppFTR”, it means that the team does not foul shooters often and doesn’t allow the opposing team to shoot free throws at a high rate.
In general, you want your Opponents’ metrics lower than yours (other than TOV%).</td> <td>See Full Importance List Below</td> </tr> </tbody> </table> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9608345%2F957d47dd74e565808e51f14f21cc4c51%2Fimportances.png?generation=1681346881119087&amp;alt=media" alt="Full importance dataframe"></p> <h3>Datasets &amp; Modeling</h3> <p><strong>Datasets:</strong> I only used a total of 6 datasets and did not incorporate any outside data (i.e. 538, Massey, KenPom). For both men's and women's I used TourneyDetailedResults, NCAATourneySeeds, and RegularSeasonDetailedResults. I also used the sample submission when creating my entry. </p> <p><strong>Scaling:</strong> I tested RobustScaler, MinMaxScaler, and StandardScaler. I got the best CV scores using MinMaxScaler so that is what I used.</p> <p><strong>Cross-validation:</strong> I used KFold with shuffling and tested 5, 8, and 10 splits with 10 performing the best. With this said, this is one area where I need to improve, and hope to cross-validate more effectively.</p> <p><strong>Modeling:</strong> I decided to take a classification approach- 1 being a win and 0 being a loss. It seems as if many people took a regression approach to predict the scoring margin. However, I feel that this isn't the right way to approach the problem as we are not being scored on the accuracy of the margin. There is no difference if a team wins by 2 or 15 in this competition- only the fact that they won (or lost). Many people tried to extrapolate a win probability based off of this margin but I don't think that approach would be as accurate. </p> <p>I tested 5 different classification models, LogisticRegression, as well as a voting ensemble. The classification models included XGBoost, LightGBM, RandomForest, KNearestNeighbors, and StochasticGradientDescent. While testing, the voting ensemble was taking too long to train so I decided to scrap that approach. In the end, XGBoost performed the best so that is what I used for my submission.</p> <h3>Post Processing</h3> <p>Unlike many others, I didn't manually override any of the predictions. I considered this, but felt it would be incorporating my own bias into the modeling (which I wanted to avoid). With this said, any manual overrides seem fairly safe for the women's tournament but much riskier for the men's. </p> <h3>Conclusion</h3> <p>I'm very happy with my results and am curious to see how this approach would score on previous tournaments. I fully intend to work on a submission for next year's tournament!</p> <h3>Wishlist and suggestions</h3> <ul> <li>Individual player data for each season</li> <li>Score the competition using the scoring margin rather than the probability of winning. There is a huge difference between a team winning by 20 vs 2, especially if you consider the seeds. A 16-seed blowing out a 1-seed by 20 points would be incredibly unlikely (looking at you Virginia) and could be more indicative of a team's strength.</li> <li>No direct forks of public notebooks (or cluster them as a team like I saw someone suggest)</li> <li>Sub-competitions? I feel as if this could be interesting. I saw a few people mention that this competition is getting a bit repetitive or boring. Having different sub-competitions for March Madness could be really fun. I.e. separate competitions or scoring for only the first round, whoever can predict the closest to the sum of all the scores, etc. 
Some people may enjoy the variety and additional challenge(s) it brings.</li> </ul> <h3>References</h3> <p>About. THE POWER RANK. (n.d.). Retrieved from <a href="https://thepowerrank.com/cbb-analytics/" target="_blank">https://thepowerrank.com/cbb-analytics/</a><br> All 68 Ncaa Tournament Teams in 12 Minutes. (2023). Youtube. Retrieved from <a href="https://www.youtube.com/watch?v=1LUgGiw-cHw&amp;" target="_blank">https://www.youtube.com/watch?v=1LUgGiw-cHw&amp;</a>.<br> Basketball Reference. (n.d.). Four factors. Four Factors. Retrieved from <a href="https://www.basketball-reference.com/about/factors.html" target="_blank">https://www.basketball-reference.com/about/factors.html</a><br> Kaggle. (2019). Ncaa March Madness Workshop | led by Darius Darius Barušauskas | Kaggle Days Paris. YouTube. Retrieved from <a href="https://www.youtube.com/watch?v=KmhGNc7gcCM" target="_blank">https://www.youtube.com/watch?v=KmhGNc7gcCM</a>.<br> Kotzias, K. (2018, March 15). The four factors of basketball as a measure of success. Statathlon. Retrieved from <a href="https://statathlon.com/four-factors-basketball-success/" target="_blank">https://statathlon.com/four-factors-basketball-success/</a><br> squared2020. (2017, September 6). Introduction to oliver's four factors. Squared Statistics: Understanding Basketball Analytics. Retrieved from <a href="https://squared2020.com/2017/09/05/introduction-to-olivers-four-factors/" target="_blank">https://squared2020.com/2017/09/05/introduction-to-olivers-four-factors/</a> </p> <p>Glossary links:<br> <a href="https://www.basketball-reference.com/about/glossary.html#tsa" target="_blank">Basketball Reference</a><br> <a href="https://www.nbastuffer.com/analytics101/" target="_blank">NBAStuffer</a></p> <p>Thanks again and congratulations to all the winners! Feel free to ask questions below, I will answer them all to the best of my ability. </p>
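<p>To make the efficiency features above concrete, here is a minimal pandas sketch of how four-factor style statistics could be derived from box-score columns. The column names (<code>FGA</code>, <code>OR</code>, <code>TO</code>, <code>OppDR</code>, etc.) and the possession estimate <code>FGA - OR + TO + 0.475 * FTA</code> are assumed stand-ins modelled on the competition's detailed results files, not the author's actual notebook code.</p> <pre><code>import pandas as pd

def add_efficiency_features(df):
    """df: one row per team-game; opponent stats carry an 'Opp' prefix.
    All column names are illustrative placeholders, not the author's."""
    out = df.copy()
    # Common possession estimate using the 0.475 free-throw weight from the write-up
    out["Poss"] = out["FGA"] - out["OR"] + out["TO"] + 0.475 * out["FTA"]
    out["PPP"] = out["Score"] / out["Poss"]                                  # points per possession
    out["EFG"] = (out["FGM"] + 0.5 * out["FGM3"]) / out["FGA"]               # effective field goal %
    out["TOVpct"] = out["TO"] / (out["FGA"] + 0.475 * out["FTA"] + out["TO"])
    out["OREBpct"] = out["OR"] / (out["OR"] + out["OppDR"])                  # offensive rebound %
    out["DREBpct"] = out["DR"] / (out["DR"] + out["OppOR"])                  # defensive rebound %
    out["FTR"] = out["FTA"] / out["FGA"]                                     # free throw rate
    return out
</code></pre>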
March Machine Learning Mania 2023
6th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F485694%2F7bb4578e8da82d07c168e93a18f4da67%2FPicture1.png?generation=1681071447070772&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F485694%2Fd09996b8accf6cbd6b875d4d33f94af5%2FPicture2.png?generation=1681071456828606&amp;alt=media" alt=""><br> <strong>Background</strong> </p> <p>I graduated from Heriot-Watt University in Edinburgh, Scotland in 2006 with a degree in Actuarial Mathematics and Statistics. </p> <p>Since then, I have been working for multiple banks in the United Kingdom and Australia predominately in credit risk modelling. This has involved creating mathematical models to help the organisation make the correct decisions. In addition I have also looked at regulatory models and forecasting of provisions. I am currently a Director, Analytic Consulting in an analytics company based in Madrid.</p> <p>This competition has become an annual hobby for me. I first discovered it in 2017 which is when I found out what March Madness was. In that competition I came 5th but I wasn’t happy with the approach and have changed it since then.</p> <p>This year I spent a bit more time than usual because I moved my processes from running on my local machine to running on Kaggle. As part of that, I have reviewed and updated all my old codes to make sure they were doing what they should have and updated the process. This includes updating to use R code rather than using the SQL library which is how I produced some of some steps previously. I have included my code here (<a href="https://www.kaggle.com/code/djscott1909/march-madness-2023-ds" target="_blank">https://www.kaggle.com/code/djscott1909/march-madness-2023-ds</a>) which you can review but please be kind. I have taught myself R during these competitions so I apologise for any inefficient coding. </p> <p><strong>Summary</strong></p> <p>My code breaks down into multiple sections. The first 6 sections of my code are included to use the data provided by the competition and looking at different ways to generate variables. This code includes making a range of my own ratings for teams. They include:</p> <ul> <li><strong>A Simple rating system</strong> – This creates a rating for all teams based on their season results and using the margin of victory in each game to generate a rating. </li> <li><strong>An ELO rating</strong> – The same concept except it uses the ELO process (<a href="https://en.wikipedia.org/wiki/Elo_rating_system" target="_blank">https://en.wikipedia.org/wiki/Elo_rating_system</a>) to generate the teams rating.</li> <li><strong>Summary statistics</strong> – Using the information to generate some statistics which could be beneficial from the games. This includes Win percentage, Points for and against, Strength of Schedule, quadrant 1 wins, etc.</li> </ul> <p>The last two sections are focused on generating a unique model for the Men’s and Women’s competition. 
I use all the information available and don’t introduce any of the statistics which are only available in later years, i.e. I don’t use the detailed results dataset, location information (only if a team was home, away or neutral), 538, odds or use the 0/1 method to optimise. I have used both odds and 0/1 in previous years but with the changes this year I didn’t think it was worth it. I also haven’t used the override approach, i.e., setting 1 Seed to 100% chance of winning. In some years I am sure this will be beneficial but I want to simply see how the model performs. </p> <p>The model itself is using a stepwise regression. I start the model with no variables and the step process looks to add in the most predictive variable and repeats the process. I prefer to use a step process which also reviews the variables in the model and removes them if they no longer add value to the model. This helps to make sure all variables considered are truly adding value. </p> <p>My final models looked as follows:</p> <p><strong>Men</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F485694%2F762288b66a2ff686a56f076c3af621e7%2FPicture3.png?generation=1681071956155461&amp;alt=media" alt=""></p> <p><strong>Women</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F485694%2Fa64887b1057974e7789a7c32aef1bd76%2FPicture4.png?generation=1681071974834457&amp;alt=media" alt=""></p> <p><strong>Interesting findings</strong></p> <p>I decided to test some variable transformations in the model development. This included creating a Weight of Evidence binning (WoE) for the variables, removing outliers and normalising the variable. </p> <p>During the Warmup competition I was careful to consider only the development and leverage the Out of time to help inform which direction I should go next. During this time, it seemed to show that Normalising didn’t add much to the Women’s competition. I can only assume the distribution within some of the variables made this better but I didn’t expect to see a difference between the two. My plan for next year is to investigate this further. </p> <p><strong>Simple Features and Methods</strong></p> <p>This year, with the changes make, I decided to create 2 models and not leverage the 0/1 approach with the 2 submissions. I thought that the reward wasn’t as high this year so I went a different way. </p> <p>I had a much simpler model which didn’t contain the WoE fields. Unfortunately, it appears to be too simple (or maybe was just unlucky this year). This model would only have been good enough for 329th so I might need to investigate alternatives for my 2nd model for next year. </p> <p>Since my approach to the model development is using a stepwise regression and developing a single model the model ends up fairly simplistic in comparison to others (I think). While the models may look like a lot of variables it is really the same variable with different WoE binning so the underlying characteristics are fairly small.</p> <p><strong>Additional Note</strong></p> <p>I always enjoy these competitions but I wanted to give a mention to the work done by <a href="https://www.kaggle.com/zdbradshaw" target="_blank">@zdbradshaw</a> (<a href="https://www.kaggle.com/competitions/march-machine-learning-mania-2023/discussion/397987)" target="_blank">https://www.kaggle.com/competitions/march-machine-learning-mania-2023/discussion/397987)</a>. 
The work he did with the submissions which were made public made the final games much more enjoyable for me as I knew which results to cheer for. I really encourage people to share their solutions next year so a similar exercise can be done. It adds a lot to the experience for me. </p> <p>This year was a roller coaster of emotions (being 800+ after the 1st 4 games) but I got luckier as the competition went on and that is why I keep coming back to it. I hope it continues for many years to come. </p> <p>Yeah still a favourite of mine too but wasn't sure how widely it was used. </p>
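<p>The ELO rating mentioned above follows the standard update rule: the winner gains rating in proportion to how unexpected the win was. Below is a minimal, generic Python sketch of that process; the author's pipeline is written in R, and the K-factor and base rating here are illustrative defaults, not the values used in the solution.</p> <pre><code>from collections import defaultdict

def run_elo(games, k=32.0, base=1500.0):
    """games: iterable of (winner_id, loser_id) pairs in chronological order."""
    ratings = defaultdict(lambda: base)
    for winner, loser in games:
        # Expected probability that the eventual winner would win, given current ratings
        expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400.0))
        ratings[winner] += k * (1.0 - expected)
        ratings[loser] -= k * (1.0 - expected)
    return dict(ratings)

# Toy usage with placeholder team ids
season_ratings = run_elo([("TeamA", "TeamB"), ("TeamB", "TeamC"), ("TeamA", "TeamC")])
print(season_ratings)
</code></pre>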
March Machine Learning Mania 2023
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>2023 March Madness Mania Competition</h1> <p><a href="https://github.com/JackLich10/kaggle_2023" target="_blank">GITHUB Link</a></p> <h2><a href="https://www.kaggle.com/competitions/march-machine-learning-mania-2023/leaderboard" target="_blank">7th Place Submission</a></h2> <pre><code>Private Leaderboard Score: 0.17704 Private Leaderboard Place: 7th </code></pre> <h3>Background</h3> <p>I am a 2022 graduate of Duke University with a degree in Data Science. I currently work for a sports analytics company in Washington, D.C.. This is my second year entering this type of competition. I entered because I think it's super fun to forecast the results of such a crazy basketball tournament. Last year, I finished 100th in the men's only competition. I did not participate in the women's only competition last year. My men's model was written 2 years ago for fun while my women's model I created this year in ~4 days to enter this competition.</p> <h3>Methodology</h3> <p>All coding was done in R. The basic methodology behind both of my models is to:</p> <p>1) generate team ratings via ridge <a href="https://glmnet.stanford.edu/index.html" target="_blank">(<code>glmnet</code>)</a>/mixed effects <a href="https://github.com/lme4/lme4" target="_blank">(<code>lme4</code>)</a> modeling predicting point difference, offense/defense efficiency, offense/defense pace, etc. (i.e. what is team A's impact on offense efficiency, score difference, etc. after controlling for opponent, home court, etc.?)</p> <p>2) use team ratings from 1) to predict game level score difference (via <a href="https://xgboost.readthedocs.io/en/stable/" target="_blank">XGBoost</a>)</p> <p>The most important features to predict game level score difference are the various team ratings from 1). Each of these models are trained after each day of each season. As such, it takes multiple days for the entirety of this modeling framework to run for the first time. A simple GLM converts predicted score difference to predicted win probability.</p> <h3>Code</h3> <p>This <a href="https://github.com/JackLich10/kaggle_2023" target="_blank">repository</a> has my men's model and women's model in their respective folders, <code>mens/</code> and <code>womens/</code>. Within each of <code>mens/</code> and <code>womens/</code>, the <code>run_all.sh</code> shell script specifies the order in which the files need to be run for the entire modeling framework to work. The <code>kaggle_preds.R</code> script combines the predictions into the final submission file.</p> <p>The first time running everything, the scripts inside the subfolder <code>models/</code> within each <code>mens/</code> and <code>womens/</code> must be run to tune the hyper-parameters of the XGBoost models found within each workflow as well as to train the men's shot probability model (This tuning is done via the <a href="https://www.tidymodels.org/" target="_blank"><code>tidymodels</code></a> framework and using crossfold validation). 
You will also have to create <code>data/</code> folders within each of the <code>mens/</code> and <code>womens/</code> directories.</p> <p><em>NOTE</em>: My men's model uses outside public data via my own <a href="https://jacklich10.github.io/gamezoneR/index.html" target="_blank">gamezoneR</a> R package. My women's model only uses provided data via the competition website.</p> <h3>Manual Overrides</h3> <p>In one of my submissions, I manually set Oral Roberts' win probability over Duke as 80%… that did not work out. In the other, I manually set Michigan State's win probability over USC as 80%… that one worked out. These overrides were dumb because I should have put them at 99% and I could have been in a very bad spot if both of them failed. For the women's bracket, I overrode all 1 over 16 seeds at 99%. </p>
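<p>Step 2 above predicts a score difference and then converts it to a win probability with a simple GLM. A minimal sketch of that calibration step, using a one-feature logistic regression on synthetic stand-in data (the author's actual models are in R and are fit on real historical margins), could look like this:</p> <pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: predicted point margins from the rating/XGBoost stage and actual outcomes
pred_margin = rng.normal(0.0, 10.0, size=2000)
outcome = np.greater(pred_margin + rng.normal(0.0, 11.0, size=2000), 0).astype(int)

# A one-feature logistic regression plays the role of the "simple GLM" described above
calibrator = LogisticRegression().fit(pred_margin.reshape(-1, 1), outcome)

# Convert a new predicted margin (e.g. +6.5 points) into a win probability
win_prob = calibrator.predict_proba(np.array([[6.5]]))[:, 1]
print(win_prob)
</code></pre>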
March Machine Learning Mania 2023
8th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I would like to thank Raddar for his awesome public code, as well as Kaggle for hosting this great competition and providing an awesome platform. I have also learned a lot from other fellow Kagglers, so thank you all!</p> <p><strong>Approach</strong><br> This was my third year participating in the March Madness competition. Based on my prior experience, I decided to take a conservative approach through an ensemble of various models.</p> <p><strong>Data Preparation</strong><br> For feature selection, I used the features from Raddar’s public notebook. I created two sets of features: one with Team Quality features and the other without. I removed Team Quality features for some models.</p> <p><strong>Modeling</strong><br> I used five different models: LGBM, Logistic Regression, CatBoost, XGB, and SVM. However, I dropped SVM as it was too slow to train. I combined the outputs of the remaining four models equally. While I initially used cross-validation to measure each model’s performance and used a weighted average of the output of each model in the prior year’s competition, I found that this approach did not work well. Therefore, I decided to use equal weighting to avoid overfitting this year.</p> <p><strong>Post-Processing</strong><br> In the past, overriding all of seeds 1-4 and 13-16 did not work well for me. So, this year I only overrode seeds 1-2 and 14-15. Additionally, instead of giving 99% probability for seeds 1 and 2, I took a conservative approach by giving 95% and 90% probability, respectively.</p> <p>Overall, I believe that this approach with the tremendous blessings from God helped me win my first prize and gold medal in Kaggle after 5 years!</p> <p>Code link: <a href="https://www.kaggle.com/code/pwh70411/8th-place-solution" target="_blank">https://www.kaggle.com/code/pwh70411/8th-place-solution</a></p>
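<p>A minimal sketch of the equal-weight blend and the conservative seed overrides described above might look like the following. The column names and the restriction to first-round 1-vs-16 and 2-vs-15 matchups are assumptions for illustration, not taken from the linked notebook.</p> <pre><code>import numpy as np
import pandas as pd

def blend_and_override(preds):
    """preds: one row per matchup with columns 'lgbm', 'logreg', 'catboost', 'xgb'
    holding each model's probability that the better seed ('SeedA') beats 'SeedB'.
    All column names are illustrative placeholders."""
    blended = preds[["lgbm", "logreg", "catboost", "xgb"]].mean(axis=1)   # equal weights

    # Conservative first-round overrides from the write-up: 95% for 1-seeds, 90% for 2-seeds
    mask_1v16 = np.logical_and(preds["SeedA"] == 1, preds["SeedB"] == 16)
    mask_2v15 = np.logical_and(preds["SeedA"] == 2, preds["SeedB"] == 15)
    return blended.mask(mask_1v16, 0.95).mask(mask_2v15, 0.90)
</code></pre>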
March Machine Learning Mania 2023
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thx Chris!</p> <p>Thanks! External ratings did make my Women's model a lot worse, but I guess I was able to make the Men's side of things work.</p> <p>So it's the time of the year again … As someone who doesn't usually follow basketball, this is probably the time of the year where I become a basketball "fan". Although I must say that even as a casual observer, I was deeply engaged and really enjoyed watching some of the matches, especially towards the end. </p> <h1>Models</h1> <h2>Women's Bracket</h2> <p>I didn't do too well for the women's side of the competition with a Brier Score of about 0.160, mainly because my model was pretty simple, but I felt like the predictions were not decisive enough even though I only fed it simple features.</p> <p>I used Theo's Logistic Regression as a baseline and added some of my own features, namely the Aggregates (MIN / MAX / Mean / Median / etc …) of important features like Point Difference / Total Points / etc … then did some tuning. However, I believe Theo's original notebook did way better than my women's prediction.</p> <p>Didn't spend too much time on the women's model because from prior iterations of the competition, I was rather certain that there wouldn't be too many upsets for the women's side and I was just focused on building a model that used the most (logically) "straightforward" features and did not dive into matchups at all.</p> <h2>Men's Bracket</h2> <p>It was likely my men's bracket that pushed me up the LB. I had an overall Brier Score of about 0.195 and given how many upsets there were this year, I am pretty happy with the score. Similarly, from past iterations of March Madness, I think it is easy to come to a conclusion that the Men's side of the bracket will likely have more variance. I decided to build an XGB model with the following features:</p> <ul> <li>Rankings based on External Rating Systems (Sagarin, Pomeroy, Moore, 538 ratings, etc …) just to name a few. <ul> <li>Ranking changes throughout the season</li></ul></li> <li>Win Rates &amp; Point Differentials (Away / Home) </li> <li>Team Box Scores (Aggregates)</li> </ul> <p>Also, as I have yet to mention, there were lots of discussions on how Brier score could encourage more 1/0 flipping and override gambling. As a risk-averse person, this is not something I would do, but it got me thinking about the prediction distribution. </p> <p>I spent a lot of time here, trying not to repeat my mistakes in last year's competition where my predictions were too conservative.
Here are some of the more interesting things I tried:</p> <ul> <li>Different Forms of Scaling as Post processing (Didn't work)</li> <li>Added a <code>Risk Appetite</code> feature based on a variety of features (i.e Head-2-Head Matchup results, Seed differences, etc …) to manage the prediction distribution</li> <li>Ensembling with a Simple Logistic Regression Model</li> </ul> <p>Surprisingly, the last one worked the best (by far) and I was able to get a set of predictions I was happy with that were not too conservative and of course not too extreme.</p> <p><strong>CV Stuff</strong></p> <p>While I think CV is important for all ML problems, I didn't really look at it too much when I was trying to play around with the prediction distributions and added / removed features as long as there wasn't a drop in CV across ALL the seasons used as validation. My final ensemble actually had a lower Overall CV than just the XGB model itself.</p>
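<p>As a toy illustration of the ensembling idea that "worked the best (by far)", here is a sketch of blending XGBoost probabilities with a simple logistic-regression model and inspecting the Brier score together with how decisive the blended predictions are. The data is synthetic and the 50/50 blend weight is an assumption, not the author's setting.</p> <pre><code>import numpy as np
from sklearn.metrics import brier_score_loss

def blend(xgb_probs, logreg_probs, weight=0.5):
    """Simple convex blend of the two models' win probabilities."""
    return weight * np.asarray(xgb_probs) + (1.0 - weight) * np.asarray(logreg_probs)

# Synthetic stand-ins for the two models' predictions on 500 games
rng = np.random.default_rng(7)
y = rng.integers(0, 2, size=500)
xgb_probs = np.clip(y * 0.7 + rng.normal(0.15, 0.2, 500), 0.01, 0.99)
logreg_probs = np.clip(y * 0.5 + rng.normal(0.25, 0.15, 500), 0.01, 0.99)

blended = blend(xgb_probs, logreg_probs)
# Brier score plus a quick look at how spread out (decisive) the predictions are
print(brier_score_loss(y, blended), np.quantile(blended, [0.05, 0.5, 0.95]))
</code></pre>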
Mayo Clinic - STRIP AI
13th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi there, this is a pretty big shock. I put together a solid initial baseline early on in this competition and got busy with other things. I open-sourced this approach <a href="https://www.kaggle.com/code/dschettler8845/13th-tf-mcsai-no-tiling-model-inference/notebook?scriptVersionId=100779671" target="_blank"><strong>inference here</strong></a> and <a href="https://www.kaggle.com/code/dschettler8845/mcsai-no-tiling-model-tpu-tf" target="_blank"><strong>training here</strong></a>.</p> <p>This notebook was created almost 3 months ago (within the first few weeks … maybe the first week … of the competition). </p> <p><br></p> <p><strong>The basics are as follows:</strong></p> <ul> <li>Pretrained EfficientNetB6 fine-tuned w/ TPU on WSI</li> <li>Model Head --&gt; Dropout @ 0.5 &gt; 2 Node CC w/ Class Weights</li> <li>Learning Rate Ramps Up and Decays (12 Epochs)</li> <li>Whole Slides Used (no tiling)</li> <li>Images are resized to (512,512,3)</li> <li>Use pyvips for WSI processing</li> <li>Train Augmentation</li> </ul> <pre><code>def augment_batch(img_batch): img_batch = tf.image.random_brightness(img_batch, 0.2) img_batch = tf.image.random_contrast(img_batch, 0.5, 2.0) img_batch = tf.image.random_saturation(img_batch, 0.75, 1.25) img_batch = tf.image.random_hue(img_batch, 0.1) return img_batch </code></pre> <p>If anyone has any questions please let me know! <br> Also, thanks to Kaggle for constantly hosting interesting Biology competitions.</p> <p><br></p> <p>ps. This levelled me up to a Competitions Master. Woo!</p> <p>You’re welcome! Congrats on 7th!</p>
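<p>As a concrete illustration of the "learning rate ramps up and decays" point above, here is a minimal Keras callback sketch. The peak and minimum learning rates, the three warm-up epochs, and the class-weight values in the commented <code>fit</code> call are assumptions, not the notebook's exact settings.</p> <pre><code>import tensorflow as tf

EPOCHS = 12
WARMUP_EPOCHS = 3
PEAK_LR = 1e-4
MIN_LR = 1e-6

def lr_schedule(epoch, lr):
    # Linear ramp-up over the first few epochs, then exponential decay toward MIN_LR
    ramp = (epoch + 1) / WARMUP_EPOCHS
    decay = 0.5 ** max(0, epoch + 1 - WARMUP_EPOCHS)
    return float(max(PEAK_LR * min(1.0, ramp) * decay, MIN_LR))

lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_schedule, verbose=1)
# model.fit(train_ds, epochs=EPOCHS, class_weight={0: 1.0, 1: 2.0}, callbacks=[lr_callback])
</code></pre>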
Mayo Clinic - STRIP AI
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I did check non-weighted logloss and accuracy, but still only focused on the competition metric. For practical applicability, my suggestion is to have different weights according to real-world data distribution. AUC curves might be also a more intuitive way to compare the trade-off between false positives and negatives for different models.</p> <p>the challenge with the data is that I need to do all the preprocessing for the train da dataset on the Kaggle before moving into my rented server…</p> <p>yes, you are right, it is end to end. </p> <p>I trained in slide level as well by using the attention module across all the tiles for the classification head.</p> <p><a href="https://www.kaggle.com/naotous" target="_blank">@naotous</a> <br> I used 384 as the image size since it is the image size required by the transformer-based backbone (SWIN large 384)<br> I also tried BEIT 512, which is another transformer-based backbone that takes 512 as input image size, however, I did not get better results with that.</p> <ul> <li>The BEIT model is larger and easier to get overfitting might be the reason for not giving me improvement.</li> <li>I did not try the hybrid SWIN transformer by adding extra convolutions between images and the SWIN backbone to support larger images and to verify your question.</li> </ul> <p><a href="https://www.kaggle.com/abebe9849" target="_blank">@abebe9849</a> Yes, I tried 10\20\40 epochs, and 20 epochs of moco pretraining worked for me the best in reducing cv variation.</p> <p><a href="https://www.kaggle.com/abebe9849" target="_blank">@abebe9849</a> my hyperparameters: </p> <pre><code>lr = 0.0001 batch-size = 4 epochs = 20 </code></pre> <p>I modified the repo from <a href="https://github.com/CupidJay/MoCov3-pytorch" target="_blank">https://github.com/CupidJay/MoCov3-pytorch</a> to support <em>timm</em> created models (at least swin transformer)</p> <p><a href="https://github.com/EIDOSLAB/torchstain" target="_blank">https://github.com/EIDOSLAB/torchstain</a> <br> Assign a target stained image and convert other images to have similar stain color distribution.</p> <p>Thanks to Mayo Clinic and Kaggle for this competition. I enjoyed playing around with it. Also thanks to my teammate <a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a> for all the support. 
I will try to illustrate my solution here even though I decided to stop working on it a month ago…</p> <h3>Data</h3> <ul> <li>Tiling and pick the top 16 darkest tiles</li> </ul> <h3>Single model</h3> <ul> <li>backbone: <code>swin_large_patch4_window12_384</code> + customized head</li> <li>classification head: customized: replace average pooling with attention pooling</li> </ul> <h3>Loss\Metric</h3> <ul> <li><strong>implement loss\metric following competition metric</strong></li> </ul> <h3>CV Strategy</h3> <ul> <li>5-fold Stratified Grouped KFold<ul> <li>Stratified by class and grouped by <code>patientid</code></li></ul></li> </ul> <h3>What Works</h3> <ul> <li><strong>attention pooling: 5-fold cv average improves from 0.69-&gt;0.66</strong></li> <li><strong>moco-v3 pretraining: 5-fold cv deviation improves from 0.30-&gt;0.15</strong></li> <li>ensemble: 5-fold cv average improves from 0.662-&gt;0.658 (very small improvement)</li> </ul> <h3>What Doesn't Work</h3> <ul> <li>More tiles</li> <li>Different preprocessing<ul> <li>top 16 highest pixel deviation instead of darkness</li> <li>stain normalization</li></ul></li> </ul> <h3>Final Solution</h3> <ul> <li>Ensemble of <code>swin_large_patch4_window12_384</code> and <code>coat_lite_medium</code></li> <li>Some luck 🙏</li> </ul> <h3>Some lessons learned from other competitions and applied here:</h3> <ul> <li>Since public LB contains very few samples, we have to do a correct CV, which is what we could do and rely on.</li> <li>I tuned models not only to improve the average CV score but also to reduce the deviation of the 5-fold CV, so the model could perform more stable in an unseen test set.</li> <li>Implement the right loss and metric: I found a lot of public kernels using logloss as train loss and even evaluation directly instead of implementing competition metric and using it as a loss function for modeling. <ul> <li>Logloss as evaluation behaves very differently from the competition metric.</li> <li>Logloss as training loss cause worse competition metric in my validation.</li></ul></li> <li>Try our best and hope for the best. <ul> <li>We try our best to do CV correctly, to improve CV score while reducing uncertainty.</li> <li>We hope for the best, since the dataset is not large enough to tell the difference between the last few digits, there will always be shakeup as expected. Keep an optimistic mind and move forward (Less painful for a bad shakeup)</li></ul></li> </ul> <p>My best CV with the highest mean and lowest deviations also achieve the best private scores.<br> <img src="https://imgur.com/gallery/2iIA7wO" alt="final selection and best private lb"></p> <p>Submission Kernel:<br> <a href="https://www.kaggle.com/code/khyeh0719/mayo-submission" target="_blank">https://www.kaggle.com/code/khyeh0719/mayo-submission</a> </p> <p>You got good results in the end still 😁</p>
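<p>For readers curious about the attention pooling that replaced average pooling in the classification head, here is a minimal PyTorch sketch of attention-based MIL pooling over tile embeddings. The hidden size and the 1536-dimensional feature width are assumptions (1536 matches the SWIN-large embedding size), and this is a generic formulation rather than the winning code.</p> <pre><code>import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Attention-MIL pooling over tile features of shape (batch, n_tiles, feat_dim)."""
    def __init__(self, feat_dim, hidden_dim=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, tile_features):
        weights = torch.softmax(self.attention(tile_features), dim=1)  # (batch, n_tiles, 1)
        pooled = (weights * tile_features).sum(dim=1)                  # weighted sum over tiles
        return pooled, weights

# Usage: pool 16 tile embeddings per slide, then feed `pooled` to the classification head
pooled, attn = AttentionPooling(feat_dim=1536)(torch.randn(2, 16, 1536))
print(pooled.shape, attn.shape)
</code></pre>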
Mayo Clinic - STRIP AI
25th Solution - Custom MIL with timm backbones
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You are right and I think that's the desired way when fine-tuning from ImageNet weights.</p> <p>When I first started this competition 2 weeks ago, I got RSNA-MICCAI Brain Tumor Radiogenomic Classification flashbacks. These two competitions are quite similar in terms of low signal but this one was slightly better and less random. I had some decent experiences in that competition and I applied those things here.</p> <p>Inference Notebook: <a href="https://www.kaggle.com/code/gunesevitan/mayo-clinic-strip-ai-inference" target="_blank">https://www.kaggle.com/code/gunesevitan/mayo-clinic-strip-ai-inference</a><br> GitHub Repository: <a href="https://github.com/gunesevitan/mayo-clinic-strip-ai" target="_blank">https://github.com/gunesevitan/mayo-clinic-strip-ai</a></p> <h2>Dataset</h2> <p>I created a pre-computed dataset of instances for training more efficiently. Pre-computed dataset consist of 16 instances of size 1024x1024 with 3 channels (RGB) per image. Dataset creation process can be simplified to</p> <ul> <li>Compress image with JPEG compression (100% JPEG quality)</li> <li>Resize longest edge to 20.000 pixels</li> <li>Extract non-overlapping instances and pad them with white background</li> <li>Sort instances by their sums in descending order and take top 16 instances</li> </ul> <h2>Models</h2> <p>I used Multiple Instance Learning (MIL) models for processing instances. My MIL model is slightly different than the one shared in this competition. I think the model in <a href="https://www.kaggle.com/code/analokamus/a-sample-of-multi-instance-learning-model" target="_blank">this</a> notebook concatenates feature maps on height dimension which felt weird to me, so I tried different pooling and concatenation methods. 
</p> <pre><code>class ConvolutionalMultiInstanceLearningModel(nn.Module):

    def __init__(self, n_instances, model_name, pretrained, freeze_parameters, aggregation, head_class, head_args):

        super(ConvolutionalMultiInstanceLearningModel, self).__init__()

        self.backbone = timm.create_model(
            model_name=model_name,
            pretrained=pretrained,
            num_classes=head_args['n_classes']
        )

        if freeze_parameters is not None:
            # Freeze all parameters in backbone
            if freeze_parameters == 'all':
                for parameter in self.backbone.parameters():
                    parameter.requires_grad = False
            else:
                # Freeze specified parameters in backbone
                for group in freeze_parameters:
                    if isinstance(self.backbone, timm.models.DenseNet):
                        for parameter in self.backbone.features[group].parameters():
                            parameter.requires_grad = False
                    elif isinstance(self.backbone, timm.models.EfficientNet):
                        for parameter in self.backbone.blocks[group].parameters():
                            parameter.requires_grad = False

        self.aggregation = aggregation
        n_classifier_features = self.backbone.get_classifier().in_features
        input_features = (n_classifier_features * n_instances) if self.aggregation == 'concat' else n_classifier_features
        self.classification_head = eval(head_class)(input_features=input_features, **head_args)

    def forward(self, x):

        # Stack instances on batch dimension before passing input to feature extractor
        input_batch_size, input_instance, input_channel, input_height, input_width = x.shape
        x = x.view(input_batch_size * input_instance, input_channel, input_height, input_width)
        x = self.backbone.forward_features(x)
        feature_batch_size, feature_channel, feature_height, feature_width = x.shape

        if self.aggregation == 'avg':
            # Average feature maps of multiple instances
            x = x.contiguous().view(input_batch_size, input_instance, feature_channel, feature_height, feature_width)
            x = torch.mean(x, dim=1)
        elif self.aggregation == 'max':
            # Max feature maps of multiple instances
            x = x.contiguous().view(input_batch_size, input_instance, feature_channel, feature_height, feature_width)
            x = torch.max(x, dim=1)[0]
        elif self.aggregation == 'logsumexp':
            # LogSumExp feature maps of multiple instances
            x = x.contiguous().view(input_batch_size, input_instance, feature_channel, feature_height, feature_width)
            x = torch.logsumexp(x, dim=1)
        elif self.aggregation == 'concat':
            # Stack feature maps on channel dimension
            x = x.contiguous().view(input_batch_size, input_instance * feature_channel, feature_height, feature_width)

        output = self.classification_head(x)

        return output
</code></pre> <p>The aggregation parameter tunes the MIL pooling or aggregating part of the model.</p> <ul> <li><code>avg</code>: average all feature maps on instance dimension</li> <li><code>max</code>: max all feature maps on instance dimension (this was mentioned in many papers)</li> <li><code>logsumexp</code>: logsumexp all feature maps on instance dimension (I saw this in one paper so I added it as well)</li> <li><code>concat</code>: concatenate all feature maps on channel dimension</li> </ul> <p><code>concat</code> worked best for me even though the feature map count is multiplied by the number of instances. I was working on an attention mechanism for instance feature maps but I didn't have enough time and I eventually ditched the idea.</p> <h2>Validation</h2> <p>5-fold cross-validation is used as the validation scheme. Splits are stratified on <code>binary_encoded_label</code>, <code>patient_id</code> doesn't overlap between folds, and the dataset is shuffled before splitting. The best random seed is selected by minimizing the standard deviation of the <code>binary_encoded_label</code> mean between folds, which ensures target means of folds are close to each other.</p> <h2>Training</h2> <p>Loss Function: Negative log likelihood loss is used as the loss function.
Binary labels are converted to multi-class labels by simply doing</p> <pre><code>inputs = torch.sigmoid(inputs) inputs = torch.cat(((1 - inputs), inputs), dim=-1) </code></pre> <p>Optimizer: AdamW with 1e-5 initial learning rate and learning rate is cycled to 1e-6 back-and-forth after every 300 steps.</p> <p>Batch Size: Training: 4 - Validation: 8</p> <p>Early Stopping: Early Stopping is triggered after 5 epochs with no validation loss improvement</p> <h2>Training Augmentations</h2> <p>Training augmentations are applied to all instances randomly.</p> <ul> <li>Resize 256x256 or 224x224</li> <li>Luminosity standardization by 10% chance</li> <li>Horizontal flip by 50% chance</li> <li>Vertical flip by 50% chance</li> <li>Random 90-degree rotation by 25% chance</li> <li>Random hue-saturation-value by 25% chance</li> <li>Coarse dropout or pixel dropout by 10% chance</li> <li>Normalize by instance dataset statistics</li> </ul> <h2>Ensemble</h2> <p>8 models are used in the ensemble. Those models are selected based on their ROC AUC scores. Models with OOF ROC AUC score greater than 0.65 are qualified for the ensemble. My best single model was DenseNetBlur121d which is quite interesting and worth more exploring.</p> <ul> <li>DenseNet121 (ImageNet pre-trained weights) - 256x256 16 instances</li> <li>DenseNetBlur121d (ImageNet pre-trained weights) - 256x256 16 instances</li> <li>DenseNet169 (ImageNet pre-trained weights) - 256x256 16 instances</li> <li>EfficientNetB2 (ImageNet pre-trained weights) - 256x256 16 instances</li> <li>EfficientNetV2 Tiny (ImageNet pre-trained weights) - 256x256 16 instances</li> <li>CoaT Lite Mini (ImageNet pre-trained weights) - 224x224 16 instances</li> <li>PoolFormer 24 (ImageNet pre-trained weights) - 224x224 16 instances</li> <li>Swin Transformer Tiny Patch 4 Window 7 (ImageNet pre-trained weights) - 224x224 16 instances</li> </ul> <p>Blending is utilized on selected models. Blend weights are found by trial and error.</p> <pre><code>blend_weights = { 'mil_densenet121_16_256': 0.10, 'mil_densenet169_16_256': 0.10, 'mil_densenetblur121d_16_256': 0.25, 'mil_efficientnetb2_16_256': 0.125, 'mil_efficientnetv2rwt_16_256': 0.125, 'mil_coatlitemini_16_224': 0.10, 'mil_poolformer24_16_224': 0.10, 'mil_swintinypatch4window7_16_224': 0.10 } </code></pre> <p>Linear stacking is also utilized on selected models by fitting a linear regression model on training predictions and targets.</p> <h2>Post-processing</h2> <p>Predictions are averaged among patient_id groups and 0.21 is added to aggregated predictions for making the prediction 0.5 centered. 
That constant is selected by maximizing the OOF score.</p> <h2>Results</h2> <p>Quantitative results are provided as cross-validation, public and private leaderboard scores.<br> Best single model and ensemble cross-validation scores are highlighted.</p> <table> <thead> <tr> <th></th> <th>CV ROC AUC</th> <th>CV Weighted Log Loss</th> <th>Public Leaderboard</th> <th>Private Leaderboard</th> </tr> </thead> <tbody> <tr> <td>DenseNet121</td> <td>0.6501</td> <td></td> <td></td> <td></td> </tr> <tr> <td>DenseNetBlur121d</td> <td><strong>0.6628</strong></td> <td></td> <td></td> <td></td> </tr> <tr> <td>DenseNet169</td> <td>0.6551</td> <td></td> <td></td> <td></td> </tr> <tr> <td>EfficientNetB2</td> <td>0.6501</td> <td></td> <td></td> <td></td> </tr> <tr> <td>EfficientNetV2 Tiny</td> <td>0.6500</td> <td></td> <td></td> <td></td> </tr> <tr> <td>CoaT Lite Mini</td> <td>0.6503</td> <td></td> <td></td> <td></td> </tr> <tr> <td>PoolFormer 24</td> <td>0.6536</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Swin Transformer Tiny Patch 4 Window 7</td> <td>0.6497</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Blend + patient_id Aggregation + Adjustment</td> <td>0.6822</td> <td>0.6471</td> <td>0.6</td> <td><strong>0.67897</strong></td> </tr> <tr> <td>Stack + patient_id Aggregation + Adjustment</td> <td><strong>0.6842</strong></td> <td><strong>0.6387</strong></td> <td><strong>0.6</strong></td> <td>0.68666</td> </tr> </tbody> </table> <p>I started overfitting when I added more and more models to ensemble. My stack was also overfitting a bit but my best submission is my blend.</p> <h2>What didn't work</h2> <ul> <li>Decaying learning rate</li> <li>Freezing backbone parameters</li> <li>Average and max MIL pooling</li> <li>Lots of other timm models</li> <li>Binary cross-entropy loss, focal loss, weighted log loss</li> <li>Label smoothing</li> <li>Different number of instances with larger sizes</li> </ul>
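<p>The post-processing step above (patient-level averaging plus a constant shift toward 0.5) can be sketched in a few lines of pandas. The column names and the <code>oof_metric</code> placeholder in the comment are illustrative, not the author's exact code.</p> <pre><code>import pandas as pd

def postprocess(df, shift=0.21):
    """df: one row per image with columns 'patient_id' and 'prediction' (probability of CE).
    Average within each patient, then shift the result toward the 0.5 centre."""
    patient_mean = df.groupby("patient_id")["prediction"].transform("mean")
    return (patient_mean + shift).clip(0.0, 1.0)

# The additive constant itself can be picked by scanning candidates against the OOF score, e.g.
# best_shift = min(candidate_shifts, key=lambda s: oof_metric(y_true, postprocess(oof_df, s)))
</code></pre>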
Mayo Clinic - STRIP AI
2nd place solution - EfficientNetB0+augmentations+smart tiling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>oh, sorry, I missed your point. Yes, you're right, I just got an average of all probabilities.</p> <p>thank you, <a href="https://www.kaggle.com/forcewithme" target="_blank">@forcewithme</a> <br> Yes, I had 6-folds CV and used 6-models ensemble for the final prediction. <br> My logic about it: each model was validated on its own set of centers, so they are different somehow and combining them should help to cover all cases and probably even increase the quality.<br> Training process was unstable so I was not able to train one final model on full data without validation.</p> <p>Thanks to Mayo Clinic for preparing this competition and to Kaggle for hosting it. And I'm happy that it was hosted as a code competition, because from my opinion it helps a lot to vanish the gap between kaggle competitions and real life business problems. Also I would like to congratulate guys from the first place because their solution seems much more complex and interesting than mine. <br> At the same time sorry to see so low scores on the top comparing with random solution. I'm afraid that this competition might not help guys from Mayo to solve this very important problem and this is really sad.<br> Looking at the private leaderboard it seems like it was all about proper validation and total ignoring public scores (and a bit of luck of course). So let me quickly describe my solution.</p> <p><strong>Data preparation</strong><br> For training I used all train images except 5 too-blurry. And also all other image with label 'Other'. I mixed them with LAA images in order to increase the amount of samples for that class, because in general I just wanted to distinguish CE from all other types.<br> Process step by step:</p> <ol> <li>Resize images by factor 24. I also tried 8, 16, 32, 48, but 24 showed the best final metric;</li> <li>Use line-to-line pixelwise difference to calc for each 28x28 block does it have blood or background;</li> <li>For each 224x224 block with stride 28 check does it have blood or background;</li> <li>Deduplicate tiles: remove all blood tiles that intersect by more than half;</li> <li>Get random 20 if we have more tiles than that;</li> <li>No color normalization. I've tried several approaches, but all of them worked worse than strong color jitter augmentation.</li> </ol> <p><strong>Validation process</strong><br> First of all I quickly trained a small model on the same data to classify tiles by center_id. This models easily showed above-random performance (sorry, I lost exact metrics) and it was a signal for me that model potentially might overfits on similar clinics. Also <a href="https://www.kaggle.com/competitions/mayo-clinic-strip-ai/discussion/347095" target="_blank">here</a> competition host mentioned that data was collected from 18 institutes, but in train data we have only 11. So either test has 7 other clinics or it was about data from other folder somehow. Anyway I decided to use cross-validation based on clinics. In order to handle "small-size" clinics (small test size might leads to unstable metric and potential overfit) I joined them into groups. 
So each group has at least 90 examples.<br> <code>(11,), (4,), (7,), (1, 5,), (10, 3), (6, 2, 8, 9,)</code></p> <p><strong>Model training</strong><br> During my experiments I had a lot of issues with overfitting and stability. So I tried to choose as simple approach as possible.<br> I used efficientNetB0 pretrained on imageNet as a backbone and just one FC layer as a head. Also backbone was frozen during the whole training. Model predicts only one probability "is it CE type". So basically I trained only 513 parameters. <br> In order to stratify training dataset I used upsampling and also I found it useful to upsample it 4 more times (probably, it only helps with bigger batch size). Also I multiplied test dataset 20 times: if some image has 20 tiles then all of them will be used in test once, if less then some tiles from that image could be presented more than once.<br> Other bullet points:</p> <ul> <li>Ensemble of 6 models as a final model (one model from each CV fold);</li> <li>Bigger batch helps: 64 was my final choice;</li> <li>Learning rate is important;</li> <li>Learning process was still a bit unstable so I trained 3 models for each CV fold and got the best one;</li> <li>BCELoss as a loss function;</li> <li>Competition metric with (0.5, 0.5) weights as a main and the only metric;</li> <li>If image has no tiles (for example, too small or too-blurry) then replace their final score with 0.5.</li> </ul> <p><strong>Data augmentations</strong><br> For my approach it was the most important part so I decided to write about it in a separate block.<br> I used 3 types of augmentations during the training:</p> <ol> <li>Random flips - obvious;</li> <li>Random adjust sharpness - by some reason helped a lot, but blur doesn't at the same time;</li> <li>Strong color jitter (brightness=0.2, saturation=0.5, hue=0.5) - the most important one, because helped to fix color difference problem.<br> And also used 2 and 3 during the test. I used seeds everywhere in order to make the process reproducible. </li> </ol> <p><strong>Results</strong><br> Competition target metric on my CV: 0.6373838583629989<br> Public LB metric: 0.70822<br> Private LB metric: 0.66421</p> <p><strong>What didn't work for me</strong></p> <ul> <li>MIL;</li> <li>More complex models;</li> <li>Color normalization;</li> <li>Final predictions clipping;</li> <li>Grayscale images;</li> <li>Gram matrices from VGG19 (see style transfer model).</li> </ul> <p>Code is available on my GitHub now: <a href="https://github.com/IlyaLos/mayo-clinic-strip-ai" target="_blank">https://github.com/IlyaLos/mayo-clinic-strip-ai</a><br> Notebooks (just run one by one): <br> <a href="https://www.kaggle.com/code/ilyalos/2nd-place-solution-tiles-generation" target="_blank">https://www.kaggle.com/code/ilyalos/2nd-place-solution-tiles-generation</a><br> <a href="https://www.kaggle.com/code/ilyalos/2nd-place-solution-train" target="_blank">https://www.kaggle.com/code/ilyalos/2nd-place-solution-train</a><br> <a href="https://www.kaggle.com/code/ilyalos/2nd-place-solution-inference" target="_blank">https://www.kaggle.com/code/ilyalos/2nd-place-solution-inference</a></p> <p>Thanks, <a href="https://www.kaggle.com/icemantd" target="_blank">@icemantd</a> and congrats with your 7th place too :)<br> yes, without freezing (and even with unfreezing only few last layers) learning process was really unstable in my case. 
I even tried to unfreeze backbone after some warmup for head only, but it showed worse performance as well.</p> <p>In general these features might provide a lot of useful information for various number of different tasks. But you're right and blood clot images don't look like the best area anyway. Of course, with some biological pretrained model it suppose to work much better, but I was not able to find any such model. I tried several options, but they all showed even worse results than imageNet. And at the same time I was not able to train or even fine-tune the backbone myself without dramatic overfitting, because we don't have enough signal.</p> <p>In general if you have a model that predicts some probability then you might want to clip that probability in some ranges. For example, if probability is from range [0.4, 0.6] you might want to clip it to 0.5, because your models isn't sure about such samples and it might be better to go with the middle probability for them. And the same for higher/lower probabilities: [0.8, 1.0] -&gt; 1.0 and [0.0, 0.2] -&gt; 0.0 (all ranges here are just examples).<br> In that competition it might works because of really small differences in scores from random solution (with all 0.5). So for some teams clipping uncertain predictions to 0.5 gave boost in score. </p> <p>Thank you <a href="https://www.kaggle.com/mpwolke" target="_blank">@mpwolke</a> !<br> Hope you found some of my ideas useful.</p>
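<p>A minimal PyTorch sketch of the two key ingredients described above, the strong color-jitter augmentation and a frozen ImageNet backbone with a single trainable linear head, is shown below. torchvision's EfficientNet-B0 exposes 1280 features, so the exact trainable parameter count differs from the author's setup; the sharpness probability and weight names are assumptions.</p> <pre><code>import torch.nn as nn
from torchvision import models, transforms

# Augmentations roughly matching the write-up: flips, sharpness adjustment, strong color jitter
train_tfms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomAdjustSharpness(sharpness_factor=2.0, p=0.5),
    transforms.ColorJitter(brightness=0.2, saturation=0.5, hue=0.5),
    transforms.ToTensor(),
])

# Frozen ImageNet backbone with one trainable linear head producing a single "is CE" logit per tile
backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")
backbone.classifier = nn.Identity()          # expose the 1280-dim pooled features
for p in backbone.parameters():
    p.requires_grad = False                  # only the head below is trained

head = nn.Linear(1280, 1)
model = nn.Sequential(backbone, head)
</code></pre>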
Mayo Clinic - STRIP AI
3rd place solution - ResNet pretraining
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi <a href="https://www.kaggle.com/forcewithme" target="_blank">@forcewithme</a> , thanks for your comment!<br> I did not use MIL head. My model predicts the labels for 16 tiled images independently and takes average of them.</p> <p>Thanks to Mayo Clinic and Kaggle for this competition. I enjoyed this competition a lot. Here I would like to describe my solution in short.</p> <h1>Pre-training</h1> <p>I think this is the most impactful process in my solution. Because the model weights trained by ImageNet would not be suitable for this task, I used "other" data to pretrain networks.</p> <p>I separated "other" images into tiles, used the labels of "Unknown" or "Other", and pretrained ResNet152 models with ImageNet weights for 40 epochs. Then I trained the model to predict CE/LAA labels by using training dataset starting with pretrained weights.</p> <p>In fact, the single pre-trained ResNet model got 0.65726 in Private Score, which is the best score among my submission.<br> (I did not select the best model as final submission. I selected ensembled models with other no-pretrained methods instead, as described below.)<br> This is why I think pre-training was the most important process in my solution.</p> <h1>Data</h1> <ul> <li>tiled into 512x512x3 channels and selected 16 instances for each image</li> <li>saved as tiff format (LZW compression)</li> </ul> <h1>Loss</h1> <p>Because the labels of the training data are imbalanced, I used binary cross entropy with balanced class weights.</p> <h1>Models</h1> <p>ensemble of the following 3 methods</p> <ul> <li>Resnet152 with pre-training</li> <li>EfficientNetB0 without pre-training</li> <li>Xception without pre-training</li> </ul> <p>Thanks for reading.</p> <p>Thank you and congrats on your result!</p> <p>For pre-training, I just used the label as is ('Unknown':0, 'Other':1).<br> I know the labels are different from the competition task, but similar to some extent.<br> Thus I expected that the network would learn to retrieve some useful information from the competition images.</p> <p>Thanks a lot for your comment!</p> <p>Thanks for your comment!</p> <p>I have not submitted resnet50 models for this competition, but I got 0.66579 for single ResNet152 without pre-training and 0.67792 for xception without pre-training.<br> I think your resnet50 model would have scored around 0.666~0.680 or so if the model was trained appropriately.<br> Thus, I can imagine that there is something to improve in pre-processing or training process.</p> <p>For example, did you use balanced cross entropy to train the model?<br> While the training dataset is imbalanced (CE:LAA = 73:27), the test dataset is maybe balanced (CE:LAA = roughly 1:1).<br> If you use simple cross entropy and do not care about the imbalance, the model output will be biased to around 0.73 of CE probability.<br> Of course it is OK when the test dataset is similarly imbalanced, but in this competition it is not (maybe).</p>
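<p>The balanced binary cross entropy mentioned above can be expressed in PyTorch with a <code>pos_weight</code> that up-weights the rarer class. This is a minimal sketch assuming the roughly 73:27 CE:LAA split quoted in the reply and an arbitrary label encoding; the author's exact weighting scheme may differ.</p> <pre><code>import torch
import torch.nn as nn

# Up-weight the rarer LAA class so the loss is not dominated by the majority CE label
n_ce, n_laa = 0.73, 0.27
pos_weight = torch.tensor([n_ce / n_laa])      # applied to the positive (LAA) class
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 1)                     # model outputs for 8 tiles (toy values)
targets = torch.randint(0, 2, (8, 1)).float()  # 1 = LAA, 0 = CE (illustrative encoding)
loss = criterion(logits, targets)
print(loss)
</code></pre>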
Mayo Clinic - STRIP AI
4th Place Solution - Split images into tiles and do 3D CNN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for the reply!<br> <a href="https://www.kaggle.com/icemantd" target="_blank">@icemantd</a> </p> <p>Regarding "regions that are not relevant to CE or LAA", you asked whether this refers to the background in the original WSI. The answer is: those white background regions plus regions that are not relevant to CE or LAA, as determined by the 3D CNN during backpropagation.<br> The 3D CNN is spatiotemporal: it can see much wider regions consisting of tiled patches, so I assume the 3D CNN network can differentiate which patches are important and which are not.<br> If we use a 2D CNN, we need to pick out what seems relevant to CE or LAA, since we don't simply resize our image. Many top players here picked regions that seem relevant to CE or LAA, either manually or with some criterion such as choosing the regions with the most tissue pixels, and some used a MIL strategy. There are tons of papers dealing with WSI, but I just wanted to avoid preprocessing and try to consider "whole regions" by teaching the 3D CNN.</p> <p>May I ask what method can check the explainability of the model? I just checked for consistent logloss and AUC scores throughout the competition. If you suggest how the model's explainability can be checked, then I will try that.</p> <p>Good question. With 3D convolution, I could consider whole regions of the data during training, that's all. If I used a 2D CNN, I would need to remove a lot of regions that are not relevant to CE or LAA.</p> <p><a href="https://www.kaggle.com/trunghjieu" target="_blank">@trunghjieu</a> Could you please check your link? I can't see the notebook now.</p> <p>Maybe it's too late to write a solution now, but I want to share my ideas, because it's a good thing to share good ideas on Kaggle. I would also like to thank the organizers for hosting a great competition.</p> <h1>Summary</h1> <p>Important features of the competition data are that the images given to us are very large, approximately 40000x40000 on average, with a lot of white background.<br> The solution I came up with was to remove the background area of the images, split the images into tiles, and train a 3D CNN network.<br> My method showed a very stable local CV without any post-processing.</p> <h1>Details</h1> <p><img src="https://i.imgur.com/YstxEd7.png" alt=""><br> <img src="https://i.imgur.com/uZRyt1U.png" alt=""><br> <img src="https://i.imgur.com/fVQPrEE.png" alt=""><br> <img src="https://i.imgur.com/azahHyH.jpg" alt=""></p> <p>I will try. Thanks!</p>
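<p>A minimal sketch of the tile-stack-into-3D-CNN idea described above (the r3d_18 backbone, tile size and tile count are assumptions; the write-up does not specify them):</p> <pre><code>import torch
from torchvision.models.video import r3d_18

model = r3d_18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)   # single CE-vs-LAA logit

# After background removal, tile each WSI and stack the tiles along the depth axis:
# input shape is (batch, channels, depth=num_tiles, height, width).
tiles = torch.randn(2, 3, 64, 128, 128)
probs = torch.sigmoid(model(tiles))                    # shape (2, 1)
</code></pre>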
Mayo Clinic - STRIP AI
52nd place - My approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and Mayo Clinic for this competition. Congratulations to all winners!</p> <p>I expected a shakeup given that 93% of the test set was hidden, but not a massive shakeup like the one we saw. I jumped 500 places to rank 52, winning my first medal (a bronze) in a competition.</p> <p>My approach relied a lot on augmentations. Further details are below:</p> <p><strong>Augmentations</strong>:<br> Train set: </p> <ul> <li>512 resized RGB image</li> <li>RandomHorizontalFlip</li> <li>RandomRotation</li> <li>FiveCrop</li> <li>Normalization using ImageNet stats</li> </ul> <p>Val/Test sets:</p> <ul> <li>512 resized RGB image</li> <li>Normalization using ImageNet stats</li> </ul> <p><strong>Training</strong>:<br> PyTorch EfficientNetB5 model<br> CrossEntropyLoss<br> Adam optimizer + CosineAnnealingLR<br> 5-fold stratified split<br> 10 epochs per fold; for every fold, save the model with the lowest loss</p> <p><strong>Inference</strong>:<br> For each image, average the predictions from the 5 models obtained during the training phase.</p> <p>I meant to say ImageNet stats: ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]).</p> <p>I used these stats to normalize images and see how the results are. I have updated my original post.</p> <p>There are opinions to the contrary, like the discussion <a href="https://discuss.pytorch.org/t/discussion-why-normalise-according-to-imagenet-mean-and-std-dev-for-transfer-learning/115670/5" target="_blank">here</a>.</p>
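<p>A minimal sketch of the augmentation pipeline listed above in torchvision (the rotation angle and the FiveCrop size are assumptions, since the post does not give them):</p> <pre><code>import torch
from torchvision import transforms

IMAGENET_MEAN, IMAGENET_STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

train_tfms = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.FiveCrop(448),                    # returns a tuple of 5 crops
    transforms.Lambda(lambda crops: torch.stack([
        transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD)(transforms.ToTensor()(c))
        for c in crops
    ])),                                         # shape (5, 3, 448, 448) per image
])

val_tfms = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
</code></pre>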
Mayo Clinic - STRIP AI
5th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Densenet scores about 0.01/0.02 lower on AUC so it's not really good on my pipeline </p> <p>Other interesting models (but not good enough to help my ensemble) were convnext-pico, convnext-atto, efficientnet-b1 &amp; effnet-v2-tiny.</p> <p>I worked on my local machine :)</p> <p>My preprocessing code is available <a href="https://www.kaggle.com/code/theoviel/inference-mayo" target="_blank">here</a>, I basically run it once and then work with the 1024 images.</p> <p>Thanks everyone for the competition, congratz to the winners &amp; kudos to the learners. </p> <p>Although this one is a controversial one because most people couldn't even beat the sample submission, I still believe it was an alright competition. The key is to understand that there is not a lot of signal, and that the competition metric is not friendly with overconfident models, more on that later.</p> <p>Because of the nature of the competition (not a lot of signal, and basically no public LB), I decided not too invest too much time on it, so I am very happy with the result !</p> <h3>Data</h3> <ul> <li>Color normalization : <ul> <li>Detect the background color</li> <li>Normalize all the image by <code>𝑟 = 𝑏𝑎𝑐𝑘𝑔𝑟𝑜𝑢𝑛𝑑_𝑐𝑜𝑙𝑜𝑟/(255, 255, 255)</code></li></ul></li> <li>Simple Approach : <ul> <li>Remove all white chunks from the image </li> <li>Resize to 1024x1024</li></ul></li> <li>Advanced Approach (cf figure below): <ul> <li>Detect duplicated areas in the image</li> <li>Keep only one of them</li> <li>Resize to have the longest edge to 1024, preserving aspect ratio</li></ul></li> </ul> <p><a href="https://ibb.co/fpnzYN1"><img src="https://i.ibb.co/HzxfDPF/mayo-data-pipe.png" alt="mayo-data-pipe"></a></p> <p>You can see some examples <a href="https://www.kaggle.com/code/theoviel/inference-mayo" target="_blank">here</a>.</p> <h3>Models</h3> <ul> <li>3 Small Models - <strong>CV AUC 0.684</strong><ol> <li>Resnet10t – AUC 0.661</li> <li>EfficientNet-b0 – AUC 0.671</li> <li>EfficientNet-b0 using simple approach images - AUC 0.662</li></ol></li> <li>Training<ul> <li>Image size : 1024x1024</li> <li>Ranger + lr=5e-4 (a, b) - Adam + lr = 1e-4 (c)</li> <li>Class-balanced BCE to mimic the metric + Label smoothing</li> <li>10 epochs, bs=16</li> <li>Flip &amp; color augmentations</li></ul></li> <li>Inference<ul> <li>4 flips TTA</li> <li>Simple average</li> <li>Scaling !</li></ul></li> </ul> <h3>Adapting to the Competition Metric</h3> <p><a href="https://ibb.co/C1BYWbF"><img src="https://i.ibb.co/LkScvxf/logloss.png" alt="logloss"></a></p> <ul> <li>The logloss heavily penalizes confident mistakes, comparatively to the reward for confident guesses<ul> <li>Since our models have low AUCs, we want to avoid the “Heavy Penalty” zone</li></ul></li> <li>My ensemble already outputs most of its scores in the “safe” range possibly because my models were designed to underfit (Label smoothing, small model size, short training with low LRs)<ul> <li><strong>Ensemble CV : 0.655</strong></li></ul></li> <li>You can force your model to be safer ! 
(cf <a href="https://www.kaggle.com/competitions/mayo-clinic-strip-ai/discussion/357877" target="_blank">this post</a>)<ul> <li>Linearly Rescale to the 0.15 / 0.85 range</li> <li>Clip to 0.25 / 0.75</li> <li><strong>Final CV : 0.640</strong> - <strong>Public LB 0.733</strong> - <strong>Private LB 0.666</strong></li></ul></li> </ul> <p>Thanks for reading :)</p> <p>Thanks, and congratulations again on the win ! </p> <p>Thanks ! <br> I tried effnets up to b5, resnet 18, 24 and 50, but the smallest ones worked better</p>
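<p>A minimal sketch of the two "be safer" transforms listed above (whether they were chained or used separately is not stated in the write-up):</p> <pre><code>import numpy as np

def rescale(p, lo=0.15, hi=0.85):
    # Map raw [0, 1] predictions linearly into [lo, hi] to avoid overconfident outputs.
    return lo + np.asarray(p, dtype=float) * (hi - lo)

def clip(p, lo=0.25, hi=0.75):
    # Hard-limit predictions to the [lo, hi] band.
    return np.clip(p, lo, hi)

preds = np.array([0.02, 0.4, 0.97])
print(clip(rescale(preds)))   # [0.25, 0.43, 0.75]
</code></pre>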
Mayo Clinic - STRIP AI
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, this competition is my first challenge on Kaggle. I really enjoyed this competition and thought hard for about three months. I don't know why I was able to obtain a gold medal, but I'm sure it's thanks to everyone who shared notebooks and suggestions.<br> Here I describe my solution in brief.</p> <p>Submission kernel:<br> <a href="https://www.kaggle.com/code/kodaihatayama/mayo02-submission" target="_blank">https://www.kaggle.com/code/kodaihatayama/mayo02-submission</a></p> <h3>Preprocessing</h3> <p>I created tiles with the following steps. <strong>I didn't resize the images</strong> at any preprocessing step.</p> <ul> <li>Cut 6 rectangles of width 512 at equal intervals from every image<br> (The purpose of this step is to create tiles quickly and avoid memory crashes without resizing.)</li> <li>Make tiles of size 512x512 from the above rectangles and pick the 8 darkest tiles per image</li> <li>Split each tile into instances of size 32x32 for training</li> </ul> <p>For using pyvips, I referred to the following kernel. Thank you, <a href="https://www.kaggle.com/analokamus" target="_blank">@analokamus</a>!<br> <a href="https://www.kaggle.com/code/analokamus/how-to-use-pyvips-offline" target="_blank">https://www.kaggle.com/code/analokamus/how-to-use-pyvips-offline</a></p> <h3>Validation and Undersampling</h3> <p>I used the hold-out method with undersampling.</p> <ul> <li>Split patient ids into train and validation (80:20), stratified by class (CE, LAA)</li> <li>Undersample the train dataset only, to CE:LAA = 1:1</li> </ul> <h3>Model</h3> <p>I created an ensemble of the following 2 methods.</p> <ul> <li>CNN</li> <li>LightGBM</li> </ul> <p>The private LB difference between the CNN-only model and the ensemble was small (approximately 0.676 → 0.668).<br> As a side note, I created a model per institution (center id). (This approach may not be effective on the test dataset, because the test institutions may differ from those in train.)</p> <p>For the CNN, I referred to the following kernel. Thank you, <a href="https://www.kaggle.com/vbookshelf" target="_blank">@vbookshelf</a>!<br> <a href="https://www.kaggle.com/code/vbookshelf/cnn-how-to-use-160-000-images-without-crashing" target="_blank">https://www.kaggle.com/code/vbookshelf/cnn-how-to-use-160-000-images-without-crashing</a></p> <h3>What didn't work for me</h3> <ul> <li>grayscale</li> <li>histogram equalization</li> </ul> <p>Thanks for reading.</p>
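<p>A minimal sketch of the hold-out split and undersampling described above (column names follow the competition's train.csv but should be treated as assumptions):</p> <pre><code>import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv")                           # one row per image: patient_id, label, ...
patients = df.groupby("patient_id")["label"].first().reset_index()

train_ids, val_ids = train_test_split(
    patients["patient_id"], test_size=0.2,
    stratify=patients["label"], random_state=0)

train_df = df[df["patient_id"].isin(train_ids)]
val_df = df[df["patient_id"].isin(val_ids)]

# Undersample the majority class on the training split only, to CE:LAA = 1:1.
laa = train_df[train_df["label"] == "LAA"]
ce = train_df[train_df["label"] == "CE"].sample(n=len(laa), random_state=0)
train_df = pd.concat([ce, laa]).sample(frac=1, random_state=0)
</code></pre>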
Mayo Clinic - STRIP AI
7th place solution -> 2-step MIL based strategy
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mayo Clinic - STRIP AI <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/cid007" target="_blank">@cid007</a> thanks so much for your encouraging comments :)</p> <p><a href="https://www.kaggle.com/josephmarturano" target="_blank">@josephmarturano</a> thanks a lot for your comments. </p> <p>To clarify, I always had all tiles and its data in a csv at image level, I just modified another csv that read certain selected tiles. So the plots you see use data for images that originally had up to 600 tiles, but actually in the two cases I select much lower number for training. This was a way to also represent image size data, such that I know that originally high tile number images and low tile number images were showing clear difference when I correct this imbalance. Hope that clears up that point - after two selection process and reduction of tiles for larger images, max tiles per image we about 60-70 in the training set for LAA class (edit: edited the main text, thanks for pointing out the confusion).</p> <p>I have to update the notebook shared, it was posted before I made all these final changes. But here is the function for hue augmentation:</p> <pre><code>def hue_augument(img): hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) f = np.random.uniform(low=0.85, high=1.15, size=1)[0] hsv[:,:,0] = hsv[:,:,0]*f img_aug = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR) return img_aug </code></pre> <p>You're correct about unseen data strains, and with that issue in mind I found the mentioned article and so I implemented this. I did not have time to verify if without this the prediction methodology would suffer significantly. Perhaps something to try further.</p> <p>It was a nice starting experience for me in this competition and big scoop of beginner's luck to go with. Nevertheless, learnt a lot and final methodology with all learning implemented worked best, which is satisfying. Thanks to all who shared hints, tips and suggestions during this competitions. Here is a summary of the methodology used:</p> <p><br></p> <h2>Preprocessing - Tile Creation and Selection</h2> <p>Tried a lot of things at the start but finally settled with tiles created from image downsized by a factor of 6 to balance signal loss and manageable number of tiles. Created and selected tiles by:</p> <ul> <li>Creating (224,224,3) shape tiles and applying simple threshold on image array unique values and their counts - for removing mostly monochrome and background tiles. </li> <li>Reduced tile number further by running tiles through EfficientNetB0 (pretrained, ImageNet weights), scoring them and choosing top 50-60% tiles for each WSI (created another problem in tile number per image, addressed towards the end).</li> <li>Added augmented data for LAA class by creating extra tiles simply by padding image and shifting tile location in image by tile_size/2 in both directions. </li> </ul> <p><br></p> <h2>MIL - Pseudo Labelled Tiles based Feature Extractor Training, Feature Aggregation and Classifier</h2> <p>Followed the tile feature embedding and then aggregation at slide level for further classification method. There are two main steps:</p> <h3>1. 
Feature Extractor Training</h3> <p><a href="https://www.kaggle.com/code/icemantd/feature-cluster-tile-psuedo-label-train-mayo" target="_blank">This notebook</a> has detail implementation of this part. The main points are:</p> <ul> <li>Trained EfficientnetB0 starting with ImageNet weights, first epoch used all tiles with slide level labels.</li> <li>Added random hue augmentation to tiles (details towards the end).</li> <li>Defined a feature extractor which is the whole network upto one layer (shape (1,1280)) before the last FC classification layer.</li> <li>Used tile features from extractor to apply cluster (kmeans) distance based pseudo labels to tiles for next iteration.</li> </ul> <p>The logic used here is to attempt to provide the feature extractor the ability to not only give high output for the two classes but also a low output for out of class or non-critical tiles. Hence I used 'Other' label tiles with target [0.0] and LAA and CE with [1,0] and [0,1] respectively, with sigmoid activation and BCE loss. Every training epoch, features are created, clustering performed, and tiles scored on minimum distance from the negative class - i.e. top 20% highest min distance from LAA+Other feature clusters for CE tiles are labelled [0,1] and bottom 10% lowest min distance CE tiles are labelled [0,0] - and vice-versa for LAA tile pseudo labelling. The hope was this discriminates between the two classes, as well as between critical and non-critical tiles. The main clustering idea is inspired by <a href="https://arxiv.org/pdf/2206.08861.pdf" target="_blank">this</a> paper.</p> <ul> <li>Used random sampling of CE class to balance classes and used a simple metric for training evaluation at slide level - maxpooling AUC scores, i.e. taking AUC scores for maximum predictions of both classes by any tiles in the slide. 4-fold CV AUC scores of 0.6-0.65.</li> </ul> <h3>2. Feature Aggregation, Classification and Inference</h3> <p>Obtained tile level features from feature extractor and adapted <a href="https://arxiv.org/pdf/2011.08939.pdf" target="_blank">this</a> paper's methodology for feature aggregation at slide level, based on attention to tile with max output logit. </p> <ul> <li>Built a simple random forest classifier using aggregated features at slide level and max predictions at slide level (any tile) for both classes as extra features.</li> <li>Used only top 25 dark tiles from any image for training classifier and for inference.</li> <li>Inference: Take top 25 dark tiles from image upto 2.2 Gb in size, create features from extractor and max logits from classifier head, aggregate and predict classifier probabilities (<a href="https://www.kaggle.com/code/icemantd/new-predict-mil-logit-features-mayo-strip-ai/notebook" target="_blank">notebook</a>)</li> </ul> <p><br></p> <h2>Some important observations</h2> <ul> <li>Found out that tile selection was very important, especially if you are using smaller tiles. Observed that at one point, larger images were giving better log_loss scores but there are more smaller images to infer (at least in training dataset). So adjused the top% of tiles selected from larger images, especially because LAA class had large images and some images dominated the tile numbers and resulted in overfitting. This definitely helped in controlling penalty on bad log_loss over large number of images. 
Example of the log loss spread over the training data vs. the number of tiles per slide image (the number of tiles per image is the original number created, which reflects image size, not the number of tiles selected for training):</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8536316%2Fbe12d4336f614efd1bd81a02d54edb67%2Flog_loss_tilenum_imbalance.png?generation=1665076862015109&amp;alt=media" alt=""></p> <ul> <li>Observed from <a href="https://arxiv.org/pdf/1902.06543.pdf" target="_blank">this</a> paper that adjusting the hue of the image, even a little bit, results in a better chance of correctly predicting unseen stains. So I implemented a random hue augmentation based on stretching/compressing the hue channel of each tile randomly by a factor between (0.85, 1.15). For example:</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8536316%2F155fc81cbdc429a4c96bba1cc9c5c0cf%2Fhue_aug.png?generation=1665077255141305&amp;alt=media" alt=""></p> <ul> <li>Due to the 2-step complexity, I could not come up with a robust end-to-end CV pipeline. I only spent time on feature-extractor CV and evaluation using various data splits (60-40, 90-10), but ran into various issues with validation metric robustness. This part was the weakest for me and that's why I consider myself lucky to still land on something that kind of worked.</li> </ul> <p>Hopefully some of you can point out where things could have been better or what was unnecessary or even incorrectly implemented. Thanks.</p> <p><a href="https://www.kaggle.com/willcramptonn" target="_blank">@willcramptonn</a> thanks</p> <p><a href="https://www.kaggle.com/deepkim" target="_blank">@deepkim</a> thanks :)</p>
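<p>For reference, here is a minimal sketch of the cluster-distance pseudo-labelling used in the feature-extractor training above (the cluster count and exact scoring rule are assumptions; see the linked paper for the original formulation):</p> <pre><code>import numpy as np
from sklearn.cluster import KMeans

def pseudo_label_ce_tiles(ce_feats, laa_feats, other_feats, n_clusters=8):
    # Fit clusters on the "negative" pool for the CE class (LAA + Other tile features).
    negatives = np.vstack([laa_feats, other_feats])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(negatives)

    # Score each CE tile by its minimum distance to any negative cluster centre.
    min_dist = km.transform(ce_feats).min(axis=1)
    hi, lo = np.quantile(min_dist, 0.80), np.quantile(min_dist, 0.10)

    labels = np.full(len(ce_feats), -1)      # -1 = keep the slide-level label
    labels[min_dist &gt;= hi] = 1               # top 20%: confident CE pseudo-label
    labels[min_dist &lt;= lo] = 0               # bottom 10%: negative pseudo-label
    return labels
</code></pre>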
Mechanisms of Action (MoA) Prediction
1st Place Winning Solution - Hungry for Gold
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/kokitanisaka" target="_blank">@kokitanisaka</a> ! Regarding the reason of using only 1 seed, the major problem is regarding the 2-hours inference time limit in the submission, or we would like to use multiple seeds for sure!</p> <p>Each DeepInsight CNN took about 8 minutes to generate the public test set, and around 20 minutes for the private test set. Therefore, we need to spare our inference time for other powerful models! </p> <p>Although in our Best CV blend (5th place on private LB), we do have trained two DeepInsight ResNeSt models with different seeds!</p> <h1>Overview</h1> <p>Thanks to the Kaggle team and <a href="https://www.kaggle.com/srandazzo21" target="_blank">@srandazzo21</a> <a href="https://www.kaggle.com/mrbhbsof" target="_blank">@mrbhbsof</a> from <em>Laboratory for Innovation Science at Harvard</em> who hosted this challenging and interesting MoA competition!</p> <p>Representing the <strong>Hungry for gold🥇🥇</strong> team (with <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> <a href="https://www.kaggle.com/kibuna" target="_blank">@kibuna</a> <a href="https://www.kaggle.com/poteman" target="_blank">@poteman</a>), in this post I’m going to explain our winning solution in detail.</p> <p>Our winning blend consists of 7 single models:</p> <ul> <li>3-stage NN stacking by non-scored and scored meta-features</li> <li>2-stage NN+TabNet stacking by non-scored meta-features</li> <li>SimpleNN with old CV</li> <li>SimpleNN with new CV</li> <li>2-heads ResNet</li> <li>DeepInsight EfficientNet B3 NS</li> <li>DeepInsight ResNeSt</li> </ul> <p>The following overview diagram depicts our winning blend models.</p> <p><strong>Figure 1. Winning Weighted-Average Blend.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F0a0b63df266de28b20d55046de9776da%2Ffigure_1.png?generation=1607159993332157&amp;alt=media" alt=""></p> <p>Our two final submissions are based on weighted average. The winning blend is the best LB (Table 1), and the best CV blend (Table 3) can also achieve <strong>5th place</strong> in the private LB. It shows that both final submissions are performant and robust.</p> <p><strong>Table 1. Winning Blend with 7 Models (Private LB: 0.01599).</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fd4ac6db6183219b14317f1b074ce9752%2Ftable_1.png?generation=1608037787146399&amp;alt=media"></p> <p>For our winning blend, the selection of blend weights were based on two factors: the LB scores and correlations between the single models. For better understanding, the mean correlation of each model’s predictions without control groups was calculated with respect to other single models (shown in Table 2). The selection was manually done to maximize the LB score. Models with less mean correlations were given higher weights. Also, some of the weights were decided by the previous blend submission scores. More details can be found in the Model Diversity section.</p> <p><strong>Table 2. 
Mean Correlation Between the Submissions of Best Single Models.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F66d2f23777fc03afbffc785902790cbd%2Ftable_2.png?generation=1607160532323286&amp;alt=media"></p> <p>The selection of single models in our Best CV blend is fully based on the OOF (Out-of-folds) predictions. We used the <a href="https://optuna.readthedocs.io/en/stable/reference/generated/optuna.samplers.TPESampler.html#optuna.samplers.TPESampler" target="_blank">TPE</a> (Tree-structured Parzen Estimator) sampler in Optuna and <a href="https://docs.scipy.org/doc/scipy/reference/optimize.minimize-slsqp.html" target="_blank">SLSQP</a> (Sequential Least Squares Programming) method in Scipy for searching CV-optimized weights. The resulting weights from Optuna (under 3000 or 5000 trials) and SLSQP are nearly the same.</p> <p>The log loss of the best CV is <strong>0.15107 (private LB: 0.01601)</strong> from 5 models. Interestingly, the search results eliminated 2-stage NN+TabNet and Simple NN models, which gave some contribution to the winning blend.</p> <p><strong>Table 3. Best CV Blend with 5 Models (CV: 0.015107, Private LB: 0.01601).</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F79f1ac6d7492693aedd1dd287998a2b4%2Ftable_3.png?generation=1607160550745973&amp;alt=media"></p> <p>The addition of <a href="https://www.kaggle.com/c/lish-moa/discussion/195378" target="_blank">DeepInsight CNNs</a> played a significant role in our final blends because of their high diversity to other shallow NN models. By curiosity, we compared the log loss scores of our winning blend with/without those CNN models (shown in Table 4).</p> <p>We got almost <strong>0.00008 improvement</strong> in the private LB score while including DeepInsight CNN models!</p> <p><strong>Table 4. Winning Blends with/without DeepInsight CNNs.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F14643afb45705f0ca73561f28acd6a4b%2Ftable_4.png?generation=1607160593397588&amp;alt=media"></p> <h1>Cross-Validation Strategies</h1> <p>Most of our models are based on the <a href="https://github.com/trent-b/iterative-stratification" target="_blank">MultilabelStratifiedKFold</a> (old CV), with the exception of a Simple NN model that used the <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">new double stratified CV</a> shared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> using drug_id information. We used different seeds for the CV splits in our models. The choice of CV strategy is based on the goodness of alignment to the CV log loss and LB log loss. The <em>K</em> of CV is either 5 or 10. We also trained the models using multiple seeds to reduce the variance.</p> <p>Notably, we noticed that it is difficult to combine the OOF files from both old and new CVs for searching optimized CV weights as the log loss scores of new CV models are much higher. 
We ended up selecting only old CV models for our Best CV blend, which scored <strong>0.15107</strong> on old CV and <strong>0.01601</strong> on private LB.</p> <h1>Model Details</h1> <p>In our teamwork, Nischay and Kibuna contributed on feature engineering and multi-stage stacking of 3-stage NN, 2-stage NN+TabNet and Simple NN models while I was focusing on DeepInsight CNNs and the replication of a 2-heads ResNet model.</p> <p>To fight with the risk of overfitting under this small and highly imbalanced multi-label dataset, we imposed label smoothing and weight decay as the regularization methods in the model training process. Label smoothing worked very well and prevented our models from being too confident about its predictions, which greatly reduced the chance of overfitting.</p> <p>To make our models more visually understandable, we have provided a model architecture diagram for each of our single models to explain the inner training process and its NN topology. In this section we are going to introduce each model in detail.</p> <h2>3-Stage NN</h2> <p>This is our best single model (CV: <strong>0.01561</strong>, Public LB: <strong>0.01823</strong>, Private LB: <strong>0.01618</strong>) and a great addition to our final blend on both CV and LB scores. Nischy and Kibuna did a great job on producing good results based on the idea of multi-stage stacking with model predictions as meta-features and different engineering features. Control group rows are removed from the training set.</p> <p><strong>Figure 2. 3-stage NN Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Ffb1fcba2f3331b3d3ac90cda457c23bd%2Ffigure_2.png?generation=1607164173264589&amp;alt=media" alt=""></p> <p><strong>Figure 3. 3-FC Base NN Block Topology.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fe46b018f07394a19a29398de96eea718%2Ffigure_3_7.png?generation=1607164188073162&amp;alt=media" alt=""></p> <p>Figure 3 depicts the detailed topology of the model’s based NN block, which is similar to the public notebooks. Some changes in the training setup were made to boost up the performance. In the multiple stages training, the same architecture was used apart from tuned dropout values.</p> <p><strong><em>STAGE 1</em></strong></p> <p>Firstly, <a href="https://umap-learn.readthedocs.io/en/latest/" target="_blank">UMAP</a> and <a href="https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FactorAnalysis.html" target="_blank">Factor Analysis</a> were used to generate extra features from the genes and cells features. The <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html" target="_blank">Quantile Transformer</a> was applied to all features except for one-hot encoding ones. In this stage the NN model with a size of 2048 hidden neurons is trained for 15 epochs for non-scored targets excluding those having zero values (332 targets included). The non-scored predictions were reused as meta-features for the next stage.</p> <p><strong><em>STAGE 2</em></strong></p> <p>In the 2nd stage, we applied Quantile Transformer again on the non-scored meta-features to combine with the original features in the previous stage, and trained another NN for 25 epochs with a size of 2048 hidden neurons for scored targets. 
Similarly, the scored predictions were reused as meta-features for the next stage.</p> <p><strong><em>STAGE 3</em></strong></p> <p>In the last stage, we also applied Quantile Transformer on the scored meta-features, and retrained a NN for 25 epochs with a size of 1024 hidden neurons based on only those meta-features, while targets were clipped from 0.0005 and 0.9995 in the training data after label smoothing.</p> <p>In each stage a similar setup was used: a Adam optimizer with a learning rate of 5e-3, a batch size of 256, a weight decay of 1e-5, and a OneCycleLR scheduler with a maximum learning rate of 1e-2.</p> <p>The final model produced a CV log loss of <strong>0.01561</strong>, which is a bit higher than the 2nd stage model due to clipping to model predictions, but it scored really well on the LB.</p> <h2>2-Stage NN+TabNet</h2> <p>Similar to the 3-stage NN, this model also used non-scored and scored targets in multi-stage training. TabNet is only used for the 2nd stage instead of the first one, because it was not producing good results on non-scored targets. Control group rows are removed from the training set.</p> <p>Figure 4 and 5 show the model architecture and the detail of Base NN used in the first stage.</p> <p><strong>Figure 4. 2-stage NN+TabNet Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F7cec080edde3393d3c86a779d227b211%2Ffigure_4.png?generation=1607164563571085&amp;alt=media" alt=""></p> <p><strong>Figure 5. 2-FC Base NN Block Topology.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F6f444e4b91653b33fc1274e8adf11587%2Ffigure_5.png?generation=1607164583842981&amp;alt=media"></p> <p><strong><em>Stage 1</em></strong></p> <p>For the feature engineering part, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html?highlight=pca#sklearn.decomposition.PCA" target="_blank">PCA</a> and <a href="https://umap-learn.readthedocs.io/en/latest/" target="_blank">UMAP</a> were used along with <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html" target="_blank">QuantileTransformer</a> on the genes and cells features, with a variance threshold of 0.5 to keep only important features. A NN model with a size of 2048 hidden neurons is trained for 15 epochs for non-scored targets. Those predictions are then used as meta-features for the next stage.</p> <p>The following training setup was used: a Adam optimizer with a learning rate of 5e-4, a batch size of 256, a weight decay of 1e-5, and a OneCycleLR scheduler with a maximum learning rate of 1e-2.</p> <p><strong><em>Stage 2</em></strong></p> <p>In the 2nd stage, we applied Quantile Transformer again on the non-scored meta-features to combine with the original features in the previous stage, and trained a TabNet with label smoothing (0.0001) for scored targets. 
Factor Analysis, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html" target="_blank">K-Means Clustering</a> along with some statistics features like mean, skew, kurtosis, and std of genes and cells features were also used.</p> <p><a href="https://github.com/dreamquark-ai/tabnet" target="_blank">PyTorch TabNet</a> regressor training setup: a width of 32 for the decision prediction layer, and a width of 32 for the attention embedding for each mask, 1 step in the architecture, a gamma value of 0.9, Adam optimizer with a learning rate of 2e-2 and a weight decay of 1e-5, a sparsity loss coefficient of 0, and entmax as the masking function. It was trained with a batch size of 1024 and a virtual batch size of 128 for 200 epochs before early-stopped by a patience of 50 epochs. The final TabNet model produced a CV log loss of <strong>0.01615</strong>.</p> <h2>Simple NN</h2> <p>The model topology of simple NN is the same as the 3-FC Base NN used in the 3-stage NN model. It is trained with only a single stage on scored targets. Figure 6 and 7 show the model architecture and the detail of Base NN. Control group rows are removed from the training set.</p> <p><strong>Figure 6. Simple NN Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F8276e3b36e68f6ad4557b22ea5749c46%2Ffigure_6.png?generation=1607175455595627&amp;alt=media" alt=""></p> <p><strong>Figure 7. 3-FC Base NN Block Topology.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fe46b018f07394a19a29398de96eea718%2Ffigure_3_7.png?generation=1607164188073162&amp;alt=media" alt=""></p> <p>For the feature engineering part, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FactorAnalysis.html" target="_blank">Factor Analysis</a> is used along with <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html" target="_blank">QuantileTransformer</a> on the genes and cells features except for one-hot encoding ones. Some statistics features like mean, skew, kurtosis, and std of genes and cells features were also used. A NN model with a size of 2048 hidden neurons is trained for scored targets.</p> <p>The following training setup was used: a Adam optimizer with a learning rate of 5e-4, a batch size of 256, a weight decay of 1e-5, an early stopping patience of 10 epochs, and a OneCycleLR scheduler with a maximum learning rate of 1e-2.</p> <p>We trained two Simple NN models in total, one with the old CV including extra features from K-Means Clustering, and the other one with the new CV to add more diversity.</p> <h2>2-heads ResNet</h2> <p>This model replicated the public notebook "<a href="https://www.kaggle.com/demetrypascal/fork-of-2heads-looper-super-puper-plate" target="_blank">Fork of 2heads looper super puper plate</a>" shared by <a href="https://www.kaggle.com/demetrypascal" target="_blank">@demetrypascal</a> with some minor updates for the freezing/unfreezing training loops. The original notebook was written in R (public LB: <strong>0.01833</strong>, private LB: <strong>0.01624</strong>), and we made a Tensorflow/Keras Python version for it (public LB: <strong>0.01836</strong>, private LB: <strong>0.01624</strong>). Control group rows are removed from the training set.</p> <p>Figure 8 and 9 show the model architecture and the detail of ResNet-like Residual NN.</p> <p><strong>Figure 8. 
2-heads ResNet Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fcfe379bc20d3e1c10f711d3a75c46ed5%2Ffigure_8.png?generation=1607165233701360&amp;alt=media" alt=""></p> <p>The architecture is based on ResNet-like Residual NN with two inputs, one is from mean and PCA components of genes and cells features, and the other one includes the <a href="https://www.kaggle.com/demetrypascal/t-test-pca-rfe-logistic-regression#Select-only-important" target="_blank">447 selected important features</a> by <a href="https://en.wikipedia.org/wiki/Student%27s_t-test" target="_blank">Student's t-test</a> based on the 25% quantile of the p-values for the test of sample means between training rows with and without MoA. All features are normalized by Standard Scaler.</p> <p><strong>Figure 9. ResNet-like Residual NN Topology.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fbb20162671344e825fe8369b6810509b%2Ffigure_9.png?generation=1607165255497473&amp;alt=media" alt=""></p> <p>Using a ResNet-like topology brings the benefit of skip connections in the FC layers, which allows the flow of information to be passed from one layer to next layers. In this way, the model gains better capability of learning with more hidden neurons and layers without the early degradation problem.</p> <p>The training process can be divided into two phases.</p> <p>Firstly, the model is trained for non-scored targets by Adam optimizer for 50 epochs minimizing the binary cross entropy loss, with a learning rate of 1e-3, a batch size of 128, early stopping with a patience of 10 epochs and a minimum delta of 1e-5. ReduceLROnPlateau learning rate scheduler is used with a factor of 0.5 and a patience of 4 epochs. The alpha value of label smoothing is set to 0.0005.</p> <p>In the second phase, the learnt model weights from non-scored targets are being reused and transferred learning to scored targets using loops of freezing and unfreezing process. The weights except for the last layers (the FC2 module in Figure 9) are freezed for training several epochs with minimum delta of early stopping set to 1e-6, and a smaller learning rate of 1e-3/3. And then the whole model weights are unfreezed and we follow a similar loop to train the same model, with an even smaller learning rate of 1e-3/5, until it early-stopped.</p> <p>There is also another V2 version of model that included one-hot encoding features and used a smaller batch size of 64. 
It got a worse public LB but scored better on CV and private LB, which contributed as part of our best CV blend.</p> <h2>DeepInsight CNNs</h2> <p>Compared with most of the shallow NN models shared in public notebooks and our other single models that added lots of engineering features, our DeepInsight CNNs only used raw features from genes and cells!</p> <p>The idea was to fully utilize the power of feature extraction and transfer learning from pretrained convolutional neural networks and generate highly unique and diverse models in comparison with other NN models.</p> <p>As introduced in <a href="https://www.kaggle.com/markpeng/deepinsight-transform" target="_blank">my post about DeepInsight</a>, instead of doing feature extraction and selection for collected samples (<em>N</em> samples x <em>d</em> features), we are <strong>arranging similar or correlated features into the neighboring regions of a 2-dimensional feature map (<em>d</em> features x <em>N</em> samples)</strong> to ease the learning of their complex relationships and interactions. </p> <p>Since we are transforming features instead of samples, the shape of the learnt feature map is static for samples. However, the distribution of features values (heatmap colors) in the feature map per class looks different. As we know, CNNs can learn hierarchical patterns and exploit spatially local correlation by enforcing a sparse local connectivity pattern between neurons of adjacent layers. Even if the feature map is fixed, the variance of feature values and the interactions of neighboring weak features are still the learnable source for kernel weights.</p> <p>In this competition, we applied DeepInsight with two popular pretrained CNN models: <a href="https://arxiv.org/abs/1905.11946" target="_blank">EfficientNet</a> and <a href="https://arxiv.org/abs/2004.08955" target="_blank">ResNeSt</a> (Split-Attention Networks). And it turns out that they all worked very well on MoA data!</p> <p>Figure 10 and 11 show the detail of DeepInsight CNNs.</p> <p><strong>Figure 10. DeepInsight EfficientNet B3 NS Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fa64ec9d3048c8abee264adb81f3f9848%2Ffigure_10.png?generation=1607165765201040&amp;alt=media" alt=""></p> <p><strong>Figure 11. DeepInsight ResNeSt Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F8e3b1c9ddbd1818405f29b76585e9025%2Ffigure_11.png?generation=1607165780342951&amp;alt=media" alt=""></p> <p><strong><em>Training Process</em></strong></p> <p>The overall training process is the same for both CNN models. First of all, raw genes, cells and one-hot encoding features are normalized by the <a href="https://static-content.springer.com/esm/art%3A10.1038%2Fs41598-019-47765-6/MediaObjects/41598_2019_47765_MOESM1_ESM.pdf" target="_blank">Norm-2 Normalization (Log Scaler)</a> method mentioned in the original <a href="https://www.nature.com/articles/s41598-019-47765-6" target="_blank">DeepInsight paper</a>, which works better for CNNs than Standard Scaler, Robust Scaler and Min/Max Scaler in our experiments.</p> <p>Secondly, t-SNE is applied to transform raw features into a non-linear 2D embeddings feature space. Then the convex hull algorithm is used to find the smallest rectangle containing all features and a rotation is performed to align the frame into a horizontal or vertical form. 
After this step, we have created a <strong>DeepInsight Feature Map</strong> that contains the extracted neighboring relationships between the raw features from genes and cells. </p> <p>Next, to deal with the chance of overfitting in this small and highly imbalanced dataset, we borrowed the idea of <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629" target="_blank">Swap Noise</a> for data augmentation, where each training sample has a 10% chance of swapping 15% of the features with the other random sample in the training set. Our experimental results showed that this trick greatly reduced the chance of overfitting and boosted the prediction performance of CNNs.</p> <p>Fourthly, the augmented raw feature values are mapped into the pixel coordinate locations of the feature map image. Note that the resolution of feature map image affects the ratio of feature overlaps (the features mapped to the same location are averaged), which is a trade-off between the level of lossy compression and computing resource requirements (e.g., host/GPU memory, storage).</p> <p>In the original paper overlapped feature values are aggregated by the mean, and it provided only a single gray-scale channel duplicated into 3-channels for pretrained CNNs. Therefore, in the final step I thought of an idea of extracting also the minimum and maximum values as the other two channels to keep more statistical information about the overlapped values. This trick has been proven to boost the prediction performance on both CV (<strong>-0.00013</strong>) and public LB (<strong>-0.00011</strong>) in the experiment results significantly.</p> <p>We chose to use the same value for resolution and image size, which is 300x300 for EfficientNet B3 NoisyStudent and 224x224 for ablated ResNeSt. The last FC layer of CNNs are replaced with a new FC with a hidden size of 512 and the <a href="https://pytorch.org/docs/stable/generated/torch.nn.ELU.html" target="_blank">ELU</a> activation.</p> <p>For ResNeSt, we chose the ablated version (<strong><em>resnest50_fast_2s2x40d</em></strong>), which provides nearly the best prediction performance of ResNeSt variants with relatively smaller model size and faster inference speed. We also trained another V2 model with a different seed and a dropout rate of 0.2 to the final FC layer.</p> <p>Both CNNs are using the same training process implemented by the nice <a href="https://pytorch-lightning.readthedocs.io/en/latest/" target="_blank">PyTorch Lightning</a> framework. We trained the CNNs under 16-bit precision for about 35 epochs in 10-folds by the <a href="https://github.com/jettify/pytorch-optimizer#radam" target="_blank">RAdam</a> optimizer, with a learning rate of 0.000352 obtained from <a href="https://pytorch-lightning.readthedocs.io/en/latest/lr_finder.html" target="_blank">Learning Rate Finder</a>, a batch size of 48 for EfficientNet B3 NS and 128 for ablated ResNet, a patience of 12 epochs for early stopping, a Cosine Annealing LR scheduler with T_max set to 5 epochs, and a label smoothing of 0.001 for binary cross entropy loss.</p> <p>Each of them took about 12-25 hours to train for 10-folds. Therefore, they were all trained on my local machine (with two 2080-Ti GPUs) instead of a Kaggle GPU notebook, which has a run time limit of 9 hours.</p> <p>In contrast to other models that removed control group rows from the MoA training set before training, I was having a different thought about it. 
From a <a href="https://www.kaggle.com/c/lish-moa/discussion/180304" target="_blank">discussion thread</a> in the competition forum, the host was mentioning that</p> <blockquote> <p>… However, in real applications, the null controls are one way to assess the efficacy of a drug and can be useful in modeling the other MoAs.</p> </blockquote> <p>This hint lets me think that control rows might provide some useful information for the CNNs as they are modeling the relationships between gene expression and cell viability features. Therefore in our CNNs they are all trained with control group rows. Removing those rows produced worse scores on both CV and LB in the experiments.</p> <p>Things that I had experimented but did not work:</p> <ul> <li>EfficientNet B0, B5, B7 NS (smaller models are weaker and bigger models overfitted more quickly in this small dataset)</li> <li>PCA or Kernel-PCA with Norm-2 Normalization, Quantile Transformer or Standard Scaler</li> <li>Higher dropouts in the last FC layers</li> <li>ImageNet normalized RGB channels</li> <li>Cosine annealing with warm restarts</li> <li>Other activation functions like ReLU, SELU, GELU, etc. in the last FC layers</li> </ul> <h1>Model Diversity</h1> <p>To understand more about the diversity of our models, here are the mean correlation heatmaps of our best single models. The correlation coefficients are calculated by Pearson’s method for each pair of models. We took the mean of target-wise correlation coefficients as the final value of each model.</p> <p>For our best LB model submissions, even though they all scored very well on LB, their correlations are impressively low! Especially for DeepInsight CNNs, the correlations are only around <strong>0.63-0.73</strong> with other NN models. It showed a strong proof of high diversity and a great source of ensemble boost.</p> <p><strong>Figure 12. Mean Correlation Heatmap of Best LB Models' Submission Predictions.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Ff21625b01ba2d040d4624a44c98511d1%2Ffigure_12_best_LB_correlation.png?generation=1607166479684837&amp;alt=media" alt=""></p> <p>The OOF (out-of-folds) predictions of our Best CV models also showed a similar phenomenon, and model correlations are even lower, with a coefficient of only around <strong>0.52-0.70</strong>!</p> <p><strong>Figure 13. Mean Correlation Heatmap of Best CV Models' OOF Predictions.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fdbde5454ab2fa18bafc5895ff591ea79%2Ffigure_13_best_cv_correlation.png?generation=1607166496491195&amp;alt=media"></p> <p>In my previous experience, diverse models with correlations around <strong>0.85-0.95</strong> are usually very good sources for the ensembles. In this competition our best single models had proven their strong prediction performance and at the same time with much lower correlations than usual!</p> <h1>Inference Efficiency</h1> <p>To ensure that we can safely squeeze all our best models in the final blend submissions in 2-hours execution limit, we optimized the inference scripts by loading all scalers and fitted transformers from pickle files. 
Also, since the computing resource of Kaggle GPU notebook is rather limited (2-cores CPU, 13GB Host RAM and 16GB GPU RAM), we did several trials to find the sweet points for each model to maximize the inference throughput by the number of workers and batch size.</p> <p>Regarding the batch size, our best setup is 2048 for NN-based models and 512 for CNN-based models (since CNN models are bigger and memory hungry). Setting bigger numbers won’t help with the inference speed due to the saturation of CPU, GPU and I/O resources.</p> <p>The total run time of final submissions are 1923 seconds (Best LB) and 1829 seconds (Best CV) on the public test set respectively, which all finished in 2 hours on the private test set.</p> <h1>Source Code</h1> <p>All of our training notebooks, blend notebooks and inference scripts are open-sourced in the following Github repository:<br> <a href="https://github.com/guitarmind/kaggle_moa_winner_hungry_for_gold" target="_blank">https://github.com/guitarmind/kaggle_moa_winner_hungry_for_gold</a></p> <p>Note that the structure of this repository will be updated in the next few days to follow the <em>Winning Model Documentation Guidelines</em> by the Kaggle team.</p> <p>The source code of our final submission models for training and inference can be found under the <code>final</code> folder.</p> <h1>Conclusion</h1> <p>Without any prior knowledge about MoA or the medical domain, we have done our best to apply the ML/DL techniques that we know to this small and highly imbalanced dataset. We are pretty amazed by our final standing, and it turns out that general AI methods could work in almost any domain.</p> <p>The combination of shallow NN models in different architectures, in-depth feature engineering, multi-stage stacking, label smoothing, transfer learning, and the addition of diverse DeepInsight CNNs are the key winning factors of our team’s final submissions.</p> <p>Thank you for reading through this long post, we hope that you gain something interesting and useful from it!</p> <p>Hi Everyone,</p> <p>We have also made our best submission notebook public!</p> <p><strong>Original Version:</strong><br> <a href="https://www.kaggle.com/nischaydnk/fork-of-blending-with-6-models-5old-1new/" target="_blank">https://www.kaggle.com/nischaydnk/fork-of-blending-with-6-models-5old-1new/</a><br> <strong>Cleaned Version:</strong><br> <a href="https://www.kaggle.com/markpeng/final-best-lb-cleaned/" target="_blank">https://www.kaggle.com/markpeng/final-best-lb-cleaned/</a></p> <p>Cheers!</p> <p><a href="https://www.kaggle.com/yassinealouini" target="_blank">@yassinealouini</a> Thanks! I'm glad that you like it!<br> Maybe it is a bit hard to believe, but the diagrams were just drawn by a mouse using <a href="https://docs.google.com/presentation/u/0/" target="_blank">Google Slides</a>. I didn't use any tools. It took some time and patience though!</p> <p>Thanks, I think Swap Noise mainly plays the role of regularization and making the model more robust. I won't say that CNNs are better predictors for some targets than NN/TabNets.</p> <p><a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> Thanks!</p>
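<p>For readers curious about the Swap Noise augmentation used for the DeepInsight CNNs, here is a minimal sketch (the exact implementation in the winning code may differ; see the linked repository):</p> <pre><code>import numpy as np

def swap_noise(batch, train_matrix, row_prob=0.10, feat_prob=0.15, seed=None):
    # Each sample has a 10% chance of swapping ~15% of its features
    # with a randomly chosen row from the training matrix.
    rng = np.random.default_rng(seed)
    out = batch.copy()
    for i in range(out.shape[0]):
        if rng.random() &lt; row_prob:
            donor = train_matrix[rng.integers(train_matrix.shape[0])]
            mask = rng.random(out.shape[1]) &lt; feat_prob
            out[i, mask] = donor[mask]
    return out
</code></pre>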
Mechanisms of Action (MoA) Prediction
20th place (If we had picked it)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to thank my teammates <a href="https://www.kaggle.com/shalules" target="_blank">@shalules</a>, <a href="https://www.kaggle.com/phoenix9032" target="_blank">@phoenix9032</a>, <a href="https://www.kaggle.com/rohitsingh9990" target="_blank">@rohitsingh9990</a> and everyone who shared their notebooks, specifically <a href="https://www.kaggle.com/demetrypascal" target="_blank">@demetrypascal</a> <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a>, and contributed to the discussions. I learned a lot this competition. </p> <p>This solution didn't have the best public score, so unfortunately, we didn't pick it. It was created over a month ago. Maybe not having the drug ids helped us. It's very straightforward and was really just intended to be a starting point for other blends. </p> <p>It's based on these 3 kernels </p> <ol> <li><a href="https://www.kaggle.com/demetrypascal/2heads-deep-resnets-pipeline-smoothing" target="_blank">https://www.kaggle.com/demetrypascal/2heads-deep-resnets-pipeline-smoothing</a> (via <a href="https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model" target="_blank">https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model</a>) <a href="https://www.kaggle.com/demetrypascal" target="_blank">@demetrypascal</a> </li> <li><a href="https://www.kaggle.com/gogo827jz/rapids-svm-on-gpu-6000-models-in-1-hour" target="_blank">https://www.kaggle.com/gogo827jz/rapids-svm-on-gpu-6000-models-in-1-hour</a> <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> </li> <li><a href="https://www.kaggle.com/optimo/tabnetregressor-2-0-train-infer" target="_blank">https://www.kaggle.com/optimo/tabnetregressor-2-0-train-infer</a> <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a></li> </ol> <p>I changed the SVC to ridge regression and used the ridge regression output as input to tabnet and averaged it with the 2 headed resnet. <br> <a href="https://www.kaggle.com/ericfreeman/moa-multi-input-resnet-model" target="_blank">https://www.kaggle.com/ericfreeman/moa-multi-input-resnet-model</a></p>
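<p>A minimal sketch of the ridge-as-meta-feature idea described above (shapes and the alpha value are assumptions; in practice the train-side meta-features would normally come from out-of-fold predictions to avoid leakage):</p> <pre><code>import numpy as np
from sklearn.linear_model import Ridge

X_train = np.random.rand(1000, 875)                          # gene + cell features
X_test = np.random.rand(200, 875)
y_train = np.random.binomial(1, 0.01, size=(1000, 206)).astype(float)  # 206 scored targets

ridge = Ridge(alpha=80.0)                                    # multi-output regression
ridge.fit(X_train, y_train)
meta_train, meta_test = ridge.predict(X_train), ridge.predict(X_test)

# Concatenate the ridge outputs to the original features before training TabNet,
# then average the TabNet predictions with the 2-headed ResNet predictions.
X_train_tabnet = np.hstack([X_train, meta_train])
X_test_tabnet = np.hstack([X_test, meta_test])
</code></pre>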
Mechanisms of Action (MoA) Prediction
26th Place Solution / Source Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to express my gratitude to the Laboratory for Innovation Science at Harvard, the Kaggle Team, and all of the participants who inspired me a lot.</p> <h1>Source</h1> <ul> <li><a href="https://github.com/kazumitsusakurai/lish-moa" target="_blank">Source code</a></li> <li><a href="https://www.kaggle.com/kazumitsusakurai/submission-for-moa" target="_blank">Final submission</a></li> </ul> <h1>Score</h1> <p>The final submission scores</p> <ul> <li>Private: 0.01608</li> <li>Public: 0.01820</li> <li>CV: 0.01550 (without ctl_vehicle)</li> </ul> <h1>Summary</h1> <h4>Denoising Autoencoder</h4> <p>I used a normal denoising autoencoder instead of the swapping method, and I concatenated its features with the original features. It improved the public test score from 0.01842 to 0.01834, but CV was not improved.</p> <h4>Weighted Loss</h4> <p>I used a weighted loss that upweights targets that occur only a few times in the train data (I used 40 as the threshold).<br> It significantly improved the CV score (about -0.0003) but the public test score wasn't improved. In order to avoid overfitting to the train data I blended both weighted-loss models and non-weighted-loss models.</p> <h4>Blending</h4> <p>I adopted a simple weighted average of</p> <ul> <li>2 Hidden Layer NN (7 seeds, 7 OOF, each model trained with weighted loss and non-weighted loss)</li> <li>3 Hidden Layer NN (same as the above)</li> <li>4 Hidden Layer NN (same as the above)</li> <li>TabNet (6 seeds)</li> <li>DeepInsight Model (2 seeds, different settings)</li> </ul> <h4>Other Stuff That Worked</h4> <ul> <li>Rank gauss</li> <li>Adding statistical features (sum, mean, std, kurt, skew, median, etc.)</li> <li>PCA (only applied to TabNet)</li> <li>Smoothing loss</li> </ul> <p><a href="https://www.kaggle.com/geekysaint" target="_blank">@geekysaint</a> Thank you so much!</p> <p><a href="https://www.kaggle.com/nyleve" target="_blank">@nyleve</a> Thanks for your comment and for asking a good question!</p> <p>I chose those NN models so that the models have different numbers of layers from each other and also have optimal settings. To decide which model settings and averaging weights are optimal, I just watched both the CV and test scores. Some models' settings were good at only one of them.</p> <p>The best single model on the public LB is the 2048 x 1024 x 512 model with weighted loss. Its score was 0.01827 and it also worked well on CV. That's why I could trust it the most and put the highest weight on it.</p> <p><a href="https://www.kaggle.com/asabry79400" target="_blank">@asabry79400</a> Thank you very much!</p> <p><a href="https://www.kaggle.com/zeedatasc" target="_blank">@zeedatasc</a> Thank you!</p> <p><a href="https://www.kaggle.com/wonjunpark" target="_blank">@wonjunpark</a> <br> Thank you very much! I hope you find something that helps you in my post.</p>
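<p>A minimal sketch of the weighted loss described above (assumed PyTorch; the 40-occurrence threshold comes from the post, the weight value itself is an assumption):</p> <pre><code>import torch
import torch.nn as nn

def target_weights(y_train, threshold=40, rare_weight=2.0):
    pos_counts = y_train.sum(dim=0)                    # positives per target column
    w = torch.ones_like(pos_counts)
    w[pos_counts &lt; threshold] = rare_weight            # upweight rarely occurring targets
    return w

y_train = torch.bernoulli(torch.full((21948, 206), 0.01))   # multi-label scored targets
weights = target_weights(y_train)

bce = nn.BCEWithLogitsLoss(reduction="none")
logits = torch.randn(256, 206)
targets = torch.bernoulli(torch.full((256, 206), 0.01))
loss = (bce(logits, targets) * weights).mean()         # per-target weights broadcast over the batch
</code></pre>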
Mechanisms of Action (MoA) Prediction
27th place - Trusting CV always works.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>Yes, of course. Models performed well on the public LB without SMOTE features, but the local CV score improved by including models trained on these features; that is the only reason behind including them in my final submission.</p> <p>Thank you!!</p> <p>Congratulations and thanks <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> for your great contributions, I have learned a lot from you!</p> <p>Yes, removing it did improve my public LB (0.01819 from 0.01820) but not the private LB (0.01609 from 0.01608), and removing it also affected my CV score, i.e. 0.01475 to 0.01485 (0.0001). So I thought of giving it a try and fortunately it worked.</p> <p>Thank you!!</p> <p>Thank you and congratulations to you too!</p> <p>I tried utilizing ML-SMOTE with different parameters such as the number of nearest neighbors and the number of samples, but it didn't improve my LB; instead it increased the public LB from 0.01819 to 0.01820.<br> But it did help me in improving my CV score. As the topic title says, I trusted my CV and it worked.</p> <p>Thank you!</p> <h4>Features Used</h4> <ol> <li><br><ul> <li>Important features selected from xgboost and RFECV without any preprocessing.</li></ul></li> <li><br><ul> <li>Variance Threshold on original features. (before scaling and pca)</li> <li>Quantile transformation</li> <li>PCA with 80 gene and 10 cell features</li></ul></li> <li><br><ul> <li>Quantile Transformation</li> <li>PCA with approx 600 gene and approx 60 cell</li> <li>Variance Threshold</li></ul></li> <li><br><ul> <li>Quantile Transformation</li> <li>PCA with approx 600 gene and approx 60 cell</li> <li>Variance Threshold</li> <li>Dropped highly correlated features after all the feature engineering</li></ul></li> <li><br><ul> <li>4000 ML-SMOTE features</li> <li>Quantile Transformation</li> <li>PCA with approx 500 gene and approx 50 cell</li> <li>Variance Threshold</li></ul></li> </ol> <h4>Little Data Augmentation</h4> <ul> <li>Used albumentations to add random dropout in the features with a 0.5 dropout rate and trained<br> models separately for this data.</li> </ul> <h4>Models Used (10 fold MultilabelStratifiedKFold with No seeds)</h4> <ol> <li><p>Two three-layer neural networks:<br> a) (BatchNorm -&gt; Dropout -&gt; WeightNorm(Dense) -&gt; ReLU) * 3<br> b) (BatchNorm -&gt; Dropout -&gt; WeightNorm(Dense) -&gt; LeakyReLU) * 3</p></li> <li><p>Two headed neural network with skip connections trained with non-scored for transfer learning:<br> Head1 --&gt; out1 = (BatchNorm -&gt; Dropout -&gt; Dense -&gt; LeakyReLU) * 4<br> Head2 --&gt; out2 = out1 + (BatchNorm -&gt; Dropout -&gt; Dense -&gt; LeakyReLU) * 4<br> avg --&gt; Average([out1, out2])<br> final head --&gt; avg + (BatchNorm -&gt; Dropout -&gt; Reshape -&gt; LSTM) * 1<br> (BatchNorm -&gt; Dense -&gt; LeakyReLU) * 3</p></li> <li><p>Two headed neural network with skip connections trained with non-scored for transfer learning:<br> Head1 --&gt; out1 = (BatchNorm -&gt; Dropout -&gt; Dense -&gt; ReLU) * 4<br> Head2 --&gt; out2 = out1 + (BatchNorm -&gt; Dropout -&gt; Dense -&gt; ReLU) * 4<br> avg --&gt; Average([out1, out2])<br> final head --&gt; avg + (BatchNorm
-&gt; Dropout -&gt; Dense) * 1<br> (BatchNorm -&gt; Dense -&gt; ReLU) * 3</p></li> <li><p>TabNet.</p></li> </ol> <p><strong>Schedulers:</strong> ReduceLROnPlateau, OneCycleLR<br> <strong>Optimizers:</strong> Adam, AdamW<br> <strong>Activations:</strong> ReLU, LeakyReLU</p> <p>(All the models are trained with different above mentioned features and parameters making a total of 16 models.). Submitting average of all these models achieved :<br> <strong>CV:0.01501 <br> Public: 0.01824 <br> Private: 0.01611</strong></p> <h4>Stacking</h4> <p>Stacked predictions from ensemble of all the models by taking simple mean of all the predictions.<br> Training models with original+stacked features achieved : <br> <strong>CV:0.01495 <br> Public: 0.01821 <br> Private: 0.01610</strong></p> <h4>Ensemble</h4> <p>Simple average of all models trained with different features (16 models) + models trained with<br> stacked features (10 models) making an ensemble of total 26 models achieved <br> <strong>CV:0.01475 <br> Public: 0.01819 <br> Private: 0.01608</strong></p> <h4>Best submission (0.01604)</h4> <p>Submission with little blending of weights<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2169626%2F16d013510cef396fca059df98ea9c3b0%2Fbest.png?generation=1606808910913035&amp;alt=media" alt="best"></p> <p><strong>Thanks to kaggle and all the contributors for sharing their amazing work with notebooks and discussions. I learned so many new things from feature engineering techniques to building models for tabular data and so many more things. Really a great competition so far.</strong></p>
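<p>To make the shared preprocessing steps (quantile transformation, PCA, variance threshold) concrete, here is a minimal sketch with dummy data; the component counts follow the approximate numbers above, while the variance threshold and the quantile settings are placeholders.</p> <pre><code>import numpy as np
from sklearn.preprocessing import QuantileTransformer
from sklearn.decomposition import PCA
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(0)
X_genes = rng.normal(size=(5000, 772))   # stand-ins for the g-* columns
X_cells = rng.normal(size=(5000, 100))   # stand-ins for the c-* columns

def rank_gauss(x):
    qt = QuantileTransformer(n_quantiles=100, output_distribution="normal",
                             random_state=0)
    return qt.fit_transform(x)

g, c = rank_gauss(X_genes), rank_gauss(X_cells)

# PCA components computed on the transformed gene and cell blocks
pca_feats = np.hstack([
    PCA(n_components=600, random_state=0).fit_transform(g),
    PCA(n_components=60, random_state=0).fit_transform(c),
])

# concatenate originals and PCA features, then drop low-variance columns
X_all = np.hstack([g, c, pca_feats])
X_sel = VarianceThreshold(threshold=0.8).fit_transform(X_all)
</code></pre>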
Mechanisms of Action (MoA) Prediction
2nd Place Solution - with 1D-CNN (Private LB: 0.01601)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Although I am not sure, the performance of 2D-CNN is slightly worse than that of 1D-CNN in practice, while the performance of 3D-CNN is much worse. I guess that in this data, all features have to be sorted through the first layer of the model, so the more complex the assumptions about the relationship between features, the more difficult the model is to optimize. 1D-CNN may find the correct feature order more easily because of its simpler assumption about the relationship between features.<br> Maybe 2D-architecture models have the potential to perform better, because this architecture is also consistent with the form of gene interaction in prior knowledge. But perhaps better parameters or a different architecture are needed, as it can be more difficult to choose the right order from more combinations.</p> <p>You're welcome~😁</p> <p>Thanks! <br> First of all, the correct feature sorting by the first layer is the key to ensuring effective feature extraction through the CNN. In practice, the use of CELU in the first layer brought a synchronous improvement of local CV and LB. I tried to adjust the activation functions for other layers but didn't see significant improvement.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2Fb1aa9c1529a6d41171f31007f2baab49%2Fcelu.jpg?generation=1610348316702206&amp;alt=media" alt=""><br> In addition, from the perspective of the formula, CELU is continuous for all values of the parameter alpha and is easier to tune compared with ELU, and here we use a small alpha to make CELU close to ReLU, which can be considered as fine-tuning without making too many changes to the existing model.</p> <p>Thanks, and yes, it is row-wise. The normalization is carried out for each sample, assuming that the distribution/total intensity of all features in each sample is similar.</p> <h2>1. Overview</h2> <p>First of all, many thanks to the Kaggle team and the Laboratory for Innovation Science at Harvard for providing a priceless dataset and hosting this great challenge. This is my first Kaggle competition, and I have to say I learned a lot from the valuable community. </p> <p>This approach consists of 3 single models:</p> <ul> <li>1D-CNN</li> <li>TabNet</li> <li>DNN</li> </ul> <p>and the final submission is generated by a weighted average of the single models' outputs.</p> <p>The table below lists the performance of the single models and the final blending in this scheme. The most important part of this scheme may be the 1D-CNN at 0.01601 (private lb). The final blending only improves by 0.00002 on this basis (private lb 0.01599).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2F81016106a1046153d07cdfbdc8c43173%2Fperformance.jpg?generation=1607494441438421&amp;alt=media" alt=""></p> <p>The flow chart below outlines the technical details, which I will introduce in the following sections. 
</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2Fbb5d2945d14f9b148f86f462e4fb7d97%2Fflowchart.jpg?generation=1607494473918650&amp;alt=media" alt=""></p> <h2>2. Local CV</h2> <p>One of the most important objectives of this challenge is to test the performance of the model for drugs that have never appeared in the public dataset. <br> For this, I used the CV stratification code posted by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> (<a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">https://www.kaggle.com/c/lish-moa/discussion/195195</a>).</p> <p>Generally in genomics and biomedical data, sample type, instrument error, reagent quality and other systematic deviations sometimes affect the distribution significantly. This is inevitable in the real world, so proper assessment is very important. Here's an example: when a batch effect exists, "leave batch out" can reflect the generalization ability of the model more objectively than a random kfold-split. <br> In this competition, although not very sure, this "leave drug out" strategy seems to reduce the risk of overfitting to a specific group of drugs ( public lb 4th; private lb 2nd ).</p> <h2>3. Pre-processing</h2> <h4>Sample Normalization</h4> <p>There are some assumptions (may be wrong):</p> <ul> <li>The raw biological data are often not comparable due to the inevitable existence of systematic errors (e.g., data cannot be produced on the same machine, experiments cannot be performed by the same person)</li> <li>Even with totally different treatments, the number of significantly different features won't be too large. </li> <li>It can be simply assumed that the distributions of most samples are similar.</li> </ul> <p>Based on these assumptions, samples were normalized by the distribution of gene expression and cell viability separately. </p> <p>In short, for the gene data in each sample, the mean value of the 25% and 75% quantiles is subtracted.<br> Similarly, for the cell data, the mean of the 25% and 72% quantiles is subtracted, and the result is then divided by 4 + the 75% quantile of the new distribution. The slightly different steps for genes and cells are determined by the normality of the distribution after treatment.</p> <p>Generally, sample normalization should be implemented before feature conversion.</p> <h4>Feature Transformation</h4> <p>After sample normalization, a quantile transformation is applied to the numerical features.</p> <h2>4. Feature Engineering</h2> <p>There is nothing special in feature engineering, not even variance filtering in the 1D-CNN model. Different feature processing methods in the other two models are only used to increase the diversity. </p> <ul> <li><p>PCA features are used for all models with different n_components (50 genes + 15 cells for 1D-CNN, 600 genes + 50 cells for TabNet and DNN).</p></li> <li><p>Statistical features (such as sum, mean) and combination features are used in TabNet.</p></li> <li><p>A variance filter is used in TabNet and DNN.</p></li> <li><p>In addition, the dummy variables (cp_time, cp_dose) are removed in all models.</p></li> </ul> <h2>5. Loss</h2> <p>To deal with the imbalance of targets and reduce overfitting to specific ones, BCE loss with label smoothing and pos_weight was used in all single models.</p> <p>As mentioned by others, label smoothing performed well by reducing overconfident probabilities.</p> <p>In addition, the target weights are slightly adjusted by the reciprocal of class frequency. 
</p> <p>Specifically, for each target i, the weight is set to log ( Fmin + 100 ) / log ( Fi + 100 ), where Fi is the number of positive samples in target i, and Fmin denotes the minimum positive count over all targets. The constant term 100 is added to prevent targets with low frequency (e.g., with 1 positive sample) from affecting the model extremely, and the log conversion is there to make it smooth.</p> <h2>6. Modelling</h2> <h4>1) 1D-CNN</h4> <p>This single model achieves the best performance in this approach ( private score : 0.01601 ). Using such a structure in tabular data is based on the idea that:</p> <ul> <li>The CNN structure performs well in feature extraction, but it is rarely used in tabular data because the correct feature ordering is unknown.</li> <li>A simple idea is to reshape the data directly into a multi-channel image format, and the correct sorting can be learned by an FC layer through back-propagation.</li> </ul> <h6>Model Architecture</h6> <p>Based on these ideas, the model which extracts features through a 1D-CNN (it performs better than 2D and 3D in experiments) is implemented. The figure below shows the main structure.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2Ffdf0f6ba40f129524fecc26235fd38c5%2F1D-CNN.jpg?generation=1607494535889507&amp;alt=media" alt=""></p> <p>As shown above, the feature dimension is first increased through an FC layer. The role of this layer includes providing enough pixels for the image by increasing the dimension, and making the generated image meaningful by feature sorting.<br> Next, the data is directly reshaped into an image format (size 16*1, channel 256).<br> Like a basic CNN model, features are extracted in the next several 1D-Conv layers with a shortcut-like connection. <br> Finally, the extracted features are used to predict the targets through an FC layer after flattening.</p> <h6>Pre-training</h6> <p>Additionally, the non-scored targets are used alone for pre-training. In this step, only the 33 non-scored targets that are most correlated to the scored targets are trained for only 1 epoch. The purpose of this is only to make the model better initialised. Irrelevant targets and more training steps may lead to unexpected consequences.</p> <h4>2) TabNet</h4> <p>The TabNet used in this approach is slightly modified from a public kernel (<a href="https://www.kaggle.com/kushal1506/moa-tabnet-inference" target="_blank">https://www.kaggle.com/kushal1506/moa-tabnet-inference</a>). There is nothing special except the preprocessing method mentioned above. </p> <h4>3) DNN</h4> <p>Similarly, the DNN model is a modified version learned from a public notebook (<a href="https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids/comments" target="_blank">https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids/comments</a>), with pre-training on all targets (scored &amp; non-scored) and fine-tuning on the scored ones. </p> <h2>7. Blending</h2> <p>The 3 single models are blended as the final submission. Compared with the 1D-CNN, the final blending only improves by 0.00002 (private lb 0.01599), perhaps due to the lack of diversity or the insufficient number of single models.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2F992b5b32870f4571ad9cf9eb632c5443%2Fcorrelation%20between%20single%20models.jpg?generation=1607494569218486&amp;alt=media" alt=""></p> <h2>8. 
Source Code</h2> <p>You can find the training and inference code here: <a href="https://github.com/baosenguo/Kaggle-MoA-2nd-Place-Solution" target="_blank">https://github.com/baosenguo/Kaggle-MoA-2nd-Place-Solution</a></p> <p>Thank you for your comments!</p> <ol> <li>Yes, pos_weight is always slightly less than 1. This is based on the fact that log loss does not penalize the prediction imbalance, so the weighting of samples depends more on the evaluation function and performance. In practice, setting pos_weight &lt;= 1 seems to be more beneficial to the result.</li> <li>And exactly, even the weights of the two categories with the most positive samples and the least positive samples do not differ significantly. <br> This is because, on the one hand, I am worried about whether the frequency of categories in the public dataset is completely consistent with that in the private dataset, since we know that different drugs can have totally different effects, and on the other hand, I don't want the predictions for some categories to be affected seriously. If I have time to train an auxiliary model for some categories, I will consider adjusting the weights substantially.</li> </ol> <p>Thanks, and glad to see this improvement!</p> <p>And I have to say that I have learned more from you and the whole community. Thanks!</p> <p>Thanks! And it's nice to see different CNN-based models work in tabular data! </p> <p><a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> Thank you for your further explanation. I can always learn a lot from your code! </p> <p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> Thanks a lot! Yes, using TSNE in a DeepInsight-like solution is also a brilliant idea. <br> For me, finding effective ways to use CNNs in tabular data has always been what I wanted to do. In this competition, I'm excited to see that all these approaches have worked.</p> <p>Thank you so much for your comments and questions! </p> <ul> <li><p>1.<br> Yes, for different types of data, separate processing can preserve their own characteristics.</p></li> <li><p>2.</p> <ul> <li>2.1 <br> I tried pre-processing similar to Z-score, and finally chose the quantile-based method for the reasons that the mean and variance based on all values may be affected by outliers, and the use of quantiles may be more conservative when the data is known to be truncated.</li> <li>2.2 <br> As optional methods in data processing, column-wise and row-wise pre-processing aim at different problems. <br> Let's make an assumption: if the input of a sample is only half of the normal value due to an accident in the experimental operation, then all the features of this sample may be half the size of other samples. This kind of error may be unavoidable and cannot be corrected by row-wise transformation (since almost all features in this sample are abnormal).<br> In this data, although the reason is unknown, there are similar phenomena, as shown in the figure below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2F440eec4ba21843c24340410eea1dd94d%2Fnormalization.jpg?generation=1608109420742160&amp;alt=media" alt=""></li></ul></li> <li><p>3. <br> Slightly different from genes, the cells in each sample are processed in two steps. After subtracting the quantile, the value obtained is further divided by 4 + the new quantile. Considering the heterogeneity of cells, only subtracting quantiles may not be enough. 
In addition, model performance is another reference to decide which way to adopt.</p></li> </ul>
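<p>As a teaching aid, here is a minimal PyTorch sketch of the 1D-CNN idea described above: an FC layer first expands (and implicitly sorts) the features, the result is reshaped into a 256-channel signal of length 16, 1D convolutions with a shortcut-like connection extract features, and an FC head predicts the targets. The kernel sizes, the number of conv layers and the CELU alpha below are assumptions; the real architecture is in the linked repository.</p> <pre><code>import torch
import torch.nn as nn

class OneDCNN(nn.Module):
    def __init__(self, n_features=875, n_targets=206, channels=256, length=16):
        super().__init__()
        # FC layer: provides enough "pixels" and lets the network learn a feature ordering
        self.expand = nn.Sequential(
            nn.BatchNorm1d(n_features),
            nn.Linear(n_features, channels * length),
            nn.CELU(0.06),   # CELU with a small alpha in the first layer, per the write-up
        )
        self.channels, self.length = channels, length
        # 1D convolutions over the reshaped (channels, length) signal
        self.conv = nn.Sequential(
            nn.BatchNorm1d(channels),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.BatchNorm1d(channels),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(channels * length, n_targets))

    def forward(self, x):
        x = self.expand(x).view(-1, self.channels, self.length)
        x = self.conv(x) + x          # shortcut-like connection
        return self.head(x)           # logits for a BCE-with-logits loss

model = OneDCNN()
logits = model(torch.randn(4, 875))   # shape (4, 206)
</code></pre>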
Mechanisms of Action (MoA) Prediction
30th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all I want to thank the organizers for the interesting data and competition. <br> I'm not sure that choosing logloss as the primary metric is good for pursuing practical goals, but it's convenient for modeling multilabel tasks for competitors ^^<br> Also, huge thanks to users <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a>, <a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a> and <a href="https://www.kaggle.com/namanj27" target="_blank">@namanj27</a>. I really appreciate your discussions, and NJ7's pytorch starter is a gem.<br> Personally, it's my best result in Kaggle competitions, so I'm pretty happy about it.</p> <p>The solution is relatively simple and based on average blending of 4 models: <br> 1) Shallow MLP <br> <a href="https://www.kaggle.com/alturutin/moa-mlp" target="_blank">https://www.kaggle.com/alturutin/moa-mlp</a><br> 2) "Resnet"-like multihead NN with cluster embeddings<br> <a href="https://www.kaggle.com/alturutin/moa-resnet" target="_blank">https://www.kaggle.com/alturutin/moa-resnet</a><br> 3) TabNet with hyperparameters almost similar to public notebooks<br> <a href="https://www.kaggle.com/alturutin/moa-tabnet" target="_blank">https://www.kaggle.com/alturutin/moa-tabnet</a><br> 4) Shallow MLP with GELU activation, pretrained on nonscored data and finetuned to maximize CV<br> <a href="https://www.kaggle.com/alturutin/moa-mlp-gelu" target="_blank">https://www.kaggle.com/alturutin/moa-mlp-gelu</a><br> *) Ensemble notebook:<br> <a href="https://www.kaggle.com/alturutin/moa-inference" target="_blank">https://www.kaggle.com/alturutin/moa-inference</a><br> *) Viz, eda notebook:<br> <a href="https://www.kaggle.com/alturutin/moa-t-test-outliers-segmentation-interactions" target="_blank">https://www.kaggle.com/alturutin/moa-t-test-outliers-segmentation-interactions</a><br> Most of the code is encapsulated in dataset scripts.</p> <p>OneCycleLR scheduler for training the MLP and Resnet, and ReduceLROnPlateau for TabNet.<br> Trained 2 types of models: one with CV based on the train data (because the organizers wrote that private is randomized) and, just in case, one with drug_id-based CV.</p> <p>Preprocessing similar to public notebooks: quantile transform -&gt; feature selection, pca, feature stats, clusters.<br> I suppose the main difference from other works may be applying RFE to MLP permutation importance scores. It gave a big boost on CV.<br> Also, I've used prediction bias as a metric for model fine-tuning: for example, if the train prior mean = 0.0037 I've tried to get the prediction mean &gt; 0.0037 to avoid overfitting, because models tended to overfit to 0. <br> It's possible to address this problem using regularization, for example label smoothing, but the estimates are not very precise and a little lucky-ish I think :)<br> Pseudo labeling worked too and provided a small boost, with lr=1e-5 and the SGD optimizer.<br> Postprocessing is just clipping to the interval [1e-5; 1 - 1e-5]<br> PyTorch was preferred over TensorFlow because it simply gave better results for similar hyperparameters.</p> <p>Finally, sorry for my English (it's not my native language); I've learned a lot in this competition. 
</p> <p>List of trained features below.</p> <p>===========================================================<br> worked:</p> <ul> <li>10 folds oof mkf (multilabel stratified)</li> <li>bagging over seeds &amp; splits</li> <li>label smoothing</li> <li>predictions clipping</li> <li>pca</li> <li>pseudo labels</li> <li>quantile transform preprocessing</li> <li>resnet with categorical embeddings</li> <li>tabnet</li> <li>average blending</li> <li>stats feature engineering</li> <li>RFE feature selection</li> <li>stacking top labels</li> <li>nonscored pretraining</li> <li>clusters from kmeans, GMM and outlier detection methods</li> </ul> <p>===========================================================<br> not worked:</p> <ul> <li>fit unsupervised on full data (train + test) -&gt; overfitting</li> <li>gbm (too long to train)</li> <li>kmeans dists</li> <li>dbscan-&gt;svm clustering</li> <li>self-norm nn</li> <li>ttest feature selection</li> <li>using drug group cv</li> <li>MLSMOTE augmentation</li> <li>noise augmentation</li> <li>feature interactions like sum &amp; multiplications</li> </ul> <p>===========================================================<br> had no time to check:</p> <ul> <li>online pseudo labeling on private dataset</li> <li>TVAE; GAN augmentation</li> <li>transformer/attention model</li> <li>convert features to image and fit CNN</li> <li>DAE embeddings</li> <li>contrastive learning (with triplet loss for example) embeddings </li> </ul>
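<p>A small illustration of three of the items that worked above: label smoothing, clipping the predictions, and using the prediction bias as a sanity metric. The smoothing epsilon is a placeholder; the 0.0037 prior and the clipping interval are the values quoted in the write-up.</p> <pre><code>import numpy as np

def smooth_targets(y, eps=1e-3):
    # label smoothing for multilabel BCE: pull hard 0/1 targets towards 0.5
    return y * (1.0 - eps) + 0.5 * eps

def clip_predictions(p, lo=1e-5, hi=1 - 1e-5):
    # postprocessing: keep predictions inside [1e-5, 1 - 1e-5]
    return np.clip(p, lo, hi)

def prediction_bias(p, train_prior=0.0037):
    # mean prediction minus the train prior; strongly negative values
    # suggest the model is overfitting towards 0
    return p.mean() - train_prior

preds = np.full((3982, 206), 0.002)
print(clip_predictions(preds).mean(), prediction_bias(preds))
</code></pre>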
Mechanisms of Action (MoA) Prediction
35th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2350982%2Fbc8d816ce80c4b07e2cab451d1c30085%2Fmoa.png?generation=1606888296177049&amp;alt=media" alt="DSG's 35th place solution"></p> <h2>Notebooks</h2> <ul> <li>Kon's ResNet: <a href="https://www.kaggle.com/yxohrxn/resnetclassifier-fit" target="_blank">https://www.kaggle.com/yxohrxn/resnetclassifier-fit</a></li> <li>Blending: <a href="https://www.kaggle.com/yxohrxn/votingclassifier-fit-without-lightgm-and-svm" target="_blank">https://www.kaggle.com/yxohrxn/votingclassifier-fit-without-lightgm-and-svm</a></li> <li>Inference: <a href="https://www.kaggle.com/yxohrxn/votingclassifier-predict-without-postprocessing" target="_blank">https://www.kaggle.com/yxohrxn/votingclassifier-predict-without-postprocessing</a></li> </ul> <h2>Our approach</h2> <h3>1. Data augmentation</h3> <p>We tried Gaussian noise, Cutout, Mixup, and CutMix, and CutMix improved the CV score the most. In particular, it worked well when we increased the alpha, which is a hyperparameter. Removing ctl_vehicle made CutMix less effective.</p> <h3>2. Weighted seed-fold averaging</h3> <p>Since <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">Deotte's CV scheme</a> takes into account drug_id, a situation often arises where there are no positive examples in the training data, especially for minor classes. The predictions of the model created in this case are completely meaningless. Therefore, we counted the positive examples of each class in the training data for each fold and weighted the predictions using their number as weights.</p> <h3>3. Class-wise blending</h3> <p>We improved <a href="https://www.kaggle.com/gogo827jz/optimise-blending-weights-with-bonus-0" target="_blank">Zhang's notebook</a> to compute weights to minimize the per-class log loss.</p> <pre><code>weights.shape = (n_classes, n_models) </code></pre> <p>To find the best model combination, we evaluated the performance with CV. In addition, early stopping and label smoothing were performed to avoid overfitting. We created nearly 20 models in total, and the above combination was the best.</p>
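<p>A rough sketch of the class-wise blending idea (finding one weight vector per class that minimizes that class's log loss) is given below. The simplex parametrization and the choice of Nelder-Mead are assumptions here, not necessarily what the improved notebook does.</p> <pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import log_loss

def classwise_blend_weights(oof_preds, y_true):
    # oof_preds: (n_models, n_samples, n_classes) out-of-fold predictions
    # y_true:    (n_samples, n_classes) binary targets
    # returns weights of shape (n_classes, n_models)
    n_models, _, n_classes = oof_preds.shape
    weights = np.zeros((n_classes, n_models))
    for c in range(n_classes):
        p_c, y_c = oof_preds[:, :, c], y_true[:, c]

        def loss(w):
            w = np.abs(w) / np.abs(w).sum()            # positive weights summing to 1
            blend = np.clip(w @ p_c, 1e-15, 1 - 1e-15)
            return log_loss(y_c, blend, labels=[0, 1])

        res = minimize(loss, np.full(n_models, 1.0 / n_models), method="Nelder-Mead")
        weights[c] = np.abs(res.x) / np.abs(res.x).sum()
    return weights
</code></pre>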
Mechanisms of Action (MoA) Prediction
3rd Place Public - We Should Have Trusted CV - 118th Private
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We just converted all submission files into <code>sub.iloc[:,1:].values.flatten()</code>. So this includes the control rows which inflate the correlation score (since every model predicts these as zeros). Without the control rows the correltation would be lower. None-the-less, we used the color (in the plot above) more than the value which would remain the same (once rescaled).</p> <p>Thank you. Yes, if the private dataset had more surprises, then the PP could have bumped us into a win!</p> <p>Thanks James</p> <p>Thank you </p> <p>Hi, thanks for your interest. I posted our DAE model <a href="https://www.kaggle.com/c/lish-moa/discussion/200680" target="_blank">here</a>. I'll let Phi <a href="https://www.kaggle.com/ajinomoto132" target="_blank">@ajinomoto132</a> post his EffNet model.</p> <p>Thanks <a href="https://www.kaggle.com/Ultron" target="_blank">@Ultron</a> I just made another 206 submissions today to explore the test distribution. I present my findings <a href="https://www.kaggle.com/c/lish-moa/discussion/200832" target="_blank">here</a></p> <p>Thanks CPMP. Congrats on you and your team's great finish. You are very good at surviving shakeups. Well done!</p> <p>If the test dataset is a random sample from the same distribution as train, then it should be similar within a certain confidence interval. If you analyze public test, you will find that there is only a 1% chance that it comes from the same distribution</p> <p>To compare distribution, you need to check 3 things</p> <ul> <li>Are rows similar enough?</li> <li>Are columns similar enough?</li> <li>Are target distributions similar enough?</li> </ul> <p>The public test dataset failed the last test. I incorrectly assumed that the private test would fail too. However i checked the private test this morning. It is the same as train. I explain more <a href="https://www.kaggle.com/c/lish-moa/discussion/200832" target="_blank">here</a></p> <p>It's simple. Just multiply the predictions for rare targets by 4 and multiply the predictions for frequent targets by 0.75. The idea is that if test is random, then the rare targets will most likely not be rare anymore and the frequent targets will most likely not be frequent. (Because the best guess for any target would be the average of all targets).</p> <p>Thanks Theo congrats on your 53rd finish. That's great.</p> <p>haha thanks yukiya</p> <p>If the test dataset is a random sample from the same distribution as train, then it should be similar within a certain confidence interval. If you analyze public test, you will find that there is only a 1% chance that it comes from the same distribution</p> <p>To compare distribution, you need to check 3 things</p> <ul> <li>Are rows similar enough?</li> <li>Are columns similar enough?</li> <li>Are target distributions similar enough?</li> </ul> <p>The public test dataset failed the last test. I incorrectly assumed that the private test would fail too. However i checked the private test this morning. It is the same as train. 
I explain more <a href="https://www.kaggle.com/c/lish-moa/discussion/200832" target="_blank">here</a></p> <p>UPDATE: We posted our TabNet code that scores LB 1830 <a href="https://www.kaggle.com/intwzt/inference-tabnet-1830" target="_blank">here</a>. Our DAE that scores LB 1830 <a href="https://www.kaggle.com/c/lish-moa/discussion/200680" target="_blank">here</a>. Our EffNet code that scores LB 1840 with 4 seeds <a href="https://www.kaggle.com/ajinomoto132/efficientnet-swish-augmentation-training" target="_blank">here</a>. And our MX10 that scores LB 1842 <a href="https://www.kaggle.com/intwzt/updated-inference-final-mx10-transfer" target="_blank">here</a></p> <h1>Please Congratulate my Teammates - Thank You</h1> <p>In every competition, i'm the one saying "trust your CV" and often I post a reliable validation scheme such as my double stratified <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">here</a> for MOA comp.</p> <p>In this competition, I analyzed the public test for 2 weeks and was convinced that the test dataset was significantly different than the training dataset. Thus I encouraged my teammates to "<strong>not</strong> trust our CV" and instead, I post processed all of our teams' submissions. <strong>Big Mistake!</strong></p> <p>Please help me congratulate my teammates. </p> <ul> <li>Intwzt <a href="https://www.kaggle.com/intwzt" target="_blank">@intwzt</a></li> <li>Mizanur <a href="https://www.kaggle.com/mizanurr" target="_blank">@mizanurr</a> </li> <li>Phi <a href="https://www.kaggle.com/ajinomoto132" target="_blank">@ajinomoto132</a> </li> <li>Reza <a href="https://www.kaggle.com/vaghefi" target="_blank">@vaghefi</a></li> </ul> <p>They built 12 incredible models of 12 different types which beat all the public notebooks of that type. They worked day and night and a simple average (without weights) of their models would have achieved 11th place gold. I applied post process to all our submissions and decreased our position to 118th. </p> <p>I apologize for my post process. Below I describe their models and I describe my post process. Our final Kaggle notebook with (detrimental) post process can be seen <a href="https://www.kaggle.com/cdeotte/3rd-place-public-lb-1805/" target="_blank">here</a> </p> <h1>TabNet - LB 1830 (private 1622)</h1> <p><a href="https://www.kaggle.com/josemori" target="_blank">@josemori</a> <a href="https://www.kaggle.com/hiramcho" target="_blank">@hiramcho</a><br> <a href="https://www.kaggle.com/josemori/moa-tabnet?scriptVersionId=44549085" target="_blank">https://www.kaggle.com/josemori/moa-tabnet?scriptVersionId=44549085</a>. <br> <a href="https://www.kaggle.com/hiramcho/moa-tabnet-with-pca-rank-gauss?scriptVersionId=45407015" target="_blank">https://www.kaggle.com/hiramcho/moa-tabnet-with-pca-rank-gauss?scriptVersionId=45407015</a><br> These public notebooks achieve LB 1848. We improved these notebooks to achieve LB 1830 and excellent CV. Our TabNet code is <a href="https://www.kaggle.com/intwzt/inference-tabnet-1830" target="_blank">here</a></p> <h1>DAE (denoising autoencoder) - LB 1830 (private 1628)</h1> <p><a href="https://www.kaggle.com/felipebihaiek" target="_blank">@felipebihaiek</a><br> <a href="https://www.kaggle.com/felipebihaiek/prediction-with-swap-auto-encoder-features-0-01865?scriptVersionId=46316906" target="_blank">https://www.kaggle.com/felipebihaiek/prediction-with-swap-auto-encoder-features-0-01865?scriptVersionId=46316906</a><br> This public notebook achieves LB 1865. 
We improved it to LB 1830 and CV 1560 (double stratified with control). Our DAE code is <a href="https://www.kaggle.com/c/lish-moa/discussion/200680" target="_blank">here</a></p> <h1>EffNet (deepinsight images) - LB 1840 (private 1618)</h1> <p><a href="https://www.kaggle.com/markpeng" target="_blank">@markpeng</a><br> <a href="https://www.kaggle.com/markpeng/deepinsight-efficientnet-b3-noisystudent" target="_blank">https://www.kaggle.com/markpeng/deepinsight-efficientnet-b3-noisystudent</a><br> This public notebook achieves LB 1861. We improved it to achieve 1840 and CV 1495 (single stratified with control). Our EffNet code is <a href="https://www.kaggle.com/ajinomoto132/efficientnet-swish-augmentation-training" target="_blank">here</a></p> <h3>NVIDIA <a href="https://rapids.ai/start.html" target="_blank">RAPIDS</a> UMAP</h3> <p>For diversity some EffNet DeepInsight models use images made from <a href="https://rapids.ai/start.html" target="_blank">RAPIDS</a> UMAP instead of TSNE using <code>import cuml</code> and <code>cuml.UMAP()</code>. These models score the best private LB score of all single models!</p> <h1>MX10 (transfer learning) - LB 1842 (private 1628)</h1> <p><a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a><br> <a href="https://www.kaggle.com/tolgadincer/moa-tensorfow-mx10" target="_blank">https://www.kaggle.com/tolgadincer/moa-tensorfow-mx10</a><br> This public notebook achieves 1862. We improved it to achieve 1842 and excellent CV. Our MX10 code is <a href="https://www.kaggle.com/intwzt/updated-inference-final-mx10-transfer" target="_blank">here</a></p> <h1>PyTorch NN - LB 1833 (private 1619)</h1> <p><a href="https://www.kaggle.com/thehemen" target="_blank">@thehemen</a> <a href="https://www.kaggle.com/kushal1506" target="_blank">@kushal1506</a> <a href="https://www.kaggle.com/riadalmadani" target="_blank">@riadalmadani</a><br> <a href="https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids?scriptVersionId=46944058" target="_blank">https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids?scriptVersionId=46944058</a><br> This public notebook achieves LB 1835. We improved it to achieve LB 1833 and CV 1560 (double stratified with control)</p> <h1>Diversity</h1> <p>Here is a correlation plot of our 12 models showing their diversity. Each model has about 30 folds (for 360 models total). We upload trained fold models, and our Kaggle notebook infers all 360 models in 50 minutes commit and 100 minutes submit! Our submission notebook is <a href="https://www.kaggle.com/cdeotte/3rd-place-public-lb-1805/" target="_blank">here</a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F1dfeee89deeecc810f995b8e8e8f3214%2Fcorr.png?generation=1606795896928096&amp;alt=media" alt=""></p> <h1>Our Best CV scores Private LB 1603 and 11th Place Gold!</h1> <p>If we use our best CV and do not post process our final submission, we achieve private LB 1603 and public LB 1816. (Post process improves public LB by 0.00011 and degrades private LB by 0.00008)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fba6a39001ebfd95ca087a7aeaec028ad%2FLB.png?generation=1606792461633187&amp;alt=media" alt=""></p> <h1>Post Process Explained</h1> <p>If you plot the predictions and log loss of each target separately, you will discover that models cannot recognize certain targets. 
For these difficult targets models simply predict the mean value from the train dataset for all test rows. See the two examples below. The x axis is row index. The y axis is prediction probability.</p> <h3>Model Does Not Recognize Adrenergic_Receptor_Antagonist</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fc11158914af3b8c4a1a91a3745930728%2Fno.png?generation=1606791968334845&amp;alt=media" alt=""></p> <h3>Model Does Recognize CDK_Inhibitor</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F7df2dc2d1ba64fa4b2137d26e950a94e%2Fyes.png?generation=1606792014668054&amp;alt=media" alt=""></p> <h1>What is Test Mean?</h1> <p>If the test dataset has a different distribution than train, then a target that is rare in train may occur often in test. And a target that occurs often in train may occur rare in test. If we have no idea about the test dataset mean and our model does not recognize a target, it is better to predict a mean closer to the average of all target means. Therefore we post processed our predictions by multiplying the unrecognized "often" target predictions by 0.75 and we multiply the unrecognized "rare" target predictions by 4. (actual multipliers vary based on train mean and statistical expectation and can be seen in the notebook <a href="https://www.kaggle.com/cdeotte/3rd-place-public-lb-1805/" target="_blank">here</a>).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fb7014baf8fa77703d2a6955f33be2dd9%2Fpot.png?generation=1606792283964252&amp;alt=media" alt=""></p> <h1>Post Process Improves Public LB 0.00011</h1> <p>The above post process improves public LB by 0.00011. It is not tuned to public test dataset specifically. If our models are predicting any dataset that has a distribution different than train dataset, then our post process will improve log loss. Unfortunately, private dataset is similar to train and our post process hurts our private LB by 0.00008 (The distribution of test dataset is explained <a href="https://www.kaggle.com/c/lish-moa/discussion/200832" target="_blank">here</a>)</p> <h1>Thank you Kaggle</h1> <p>Thank you everyone for posting so many interesting notebooks and discussions. I learned so many new ways from the community and my teammates to build models for tabular data such as TabNet, DeepInsight EffNet, Transfer Learning, Residual MLPs. I learned new feature engineering techniques using Autoencoding, PCA, ICA, Factor Analysis, FeatureAgglomeration, KernelPCA, Quantile Transformer, and many others! This has been a great competition! </p> <p>We posted our final submission notebook <a href="https://www.kaggle.com/cdeotte/3rd-place-public-lb-1805/" target="_blank">here</a>. Our TabNet code that scores LB 1830 <a href="https://www.kaggle.com/intwzt/inference-tabnet-1830" target="_blank">here</a>. Our DAE that scores LB 1830 <a href="https://www.kaggle.com/c/lish-moa/discussion/200680" target="_blank">here</a>. Our EffNet code that scores LB 1840 with 4 seeds <a href="https://www.kaggle.com/ajinomoto132/efficientnet-swish-augmentation-training" target="_blank">here</a>. And our MX10 that scores LB 1842 <a href="https://www.kaggle.com/intwzt/updated-inference-final-mx10-transfer" target="_blank">here</a></p> <p>Thanks <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> . Congrats on your great finish! 
95th out of 4400 teams!</p> <p>EffNet is our best private LB single model created by mastermind Phi <a href="https://www.kaggle.com/ajinomoto132" target="_blank">@ajinomoto132</a> . I'll let him post the code. He used EfficientNetB3 with resolution 300 image size 300 TSNE, swap noise <code>swap_prob=0.15, swap_portion=0.2</code>, and cutout data augmentation <code>A.Cutout(num_holes=50, max_h_size=40, max_w_size=40, fill_value=0, p=0.3)</code> , swish activation, 3 seeds, 10 folds single stratified CV, learning rate <code>1e-3</code> batch size 64, loss with smoothing 0.001, 30 epochs.</p>
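<p>For illustration, here is a sketch of the post-process described above. The heuristic used to decide that a target is "unrecognized" (a nearly flat prediction column) and the cut-offs are placeholders of my own; the actual multipliers vary with the train mean and can be seen in the linked notebook.</p> <pre><code>import numpy as np

def shift_unrecognized_targets(preds, train_target_means,
                               rare_cut=0.001, freq_cut=0.01,
                               rare_mult=4.0, freq_mult=0.75):
    preds = preds.copy()
    for t, m in enumerate(train_target_means):
        col = preds[:, t]
        # "unrecognized" heuristic: the prediction column is almost constant
        flat = col.std() &lt; 0.1 * max(col.mean(), 1e-6)
        if not flat:
            continue
        if m &lt; rare_cut:          # rare in train -&gt; push predictions up
            preds[:, t] = np.clip(col * rare_mult, 0.0, 1.0)
        elif m &gt; freq_cut:        # frequent in train -&gt; push predictions down
            preds[:, t] = np.clip(col * freq_mult, 0.0, 1.0)
    return preds

preds = np.random.rand(3982, 206) * 0.01
train_means = np.random.rand(206) * 0.01
new_preds = shift_unrecognized_targets(preds, train_means)
</code></pre>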
Mechanisms of Action (MoA) Prediction
3rd source code (To fulfill requirement of prize)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>solution: <a href="https://www.kaggle.com/c/lish-moa/discussion/200540" target="_blank">https://www.kaggle.com/c/lish-moa/discussion/200540</a><br> code: <a href="https://github.com/jxzly/3rd-kaggle-moa-2020-solution" target="_blank">https://github.com/jxzly/3rd-kaggle-moa-2020-solution</a></p>
Mechanisms of Action (MoA) Prediction
40th place solution | MLPs, ResNet, TabNet & Clustering
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I just used<br> from iterstrat.ml_stratifiers import MultilabelStratifiedKFold</p> <p>I think that clustering and classification are two ways to take advantage of the fact that the training/test set is not split by drugs, and that as a result there are train and test points sharing the same drug id and thus labels. They could be combined indeed! </p> <p>Thanks <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> :) </p> <p>Congratulations to the winners!</p> <p>Here is my approach:</p> <h1>CV:</h1> <p>The most important part of any Kaggle competition, aka finding a good cross-validation framework, was a nightmare in this competition. I started with 10-fold CV, but it was not generalizing at all to the public LB and I switched to MultilabelStratified 10-fold. Generalization to the LB was weak, so I decided to <strong>only keep changes that improved both my CV and the LB</strong>. I think that this was key in this competition. I experimented with <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">Chris Deotte's CV strategy </a>but found that it was not generalizing better than my old CV, so I stuck to MultilabelStratified 10-fold. </p> <h1>Models:</h1> <p>Here are the models used in my final ensemble:</p> <ul> <li>MLPs (4 models):<br> I took inspiration from <a href="https://www.kaggle.com/riadalmadani/pytorch-cv-0-0145-lb-0-01839/output?scriptVersionId=45510876&amp;select=submission.csv" target="_blank">this notebook</a>.<br> In my final ensemble, I used 4 MLPs: 2 different architectures (varying numbers of layers), one with the new CV framework (from Chris Deotte), and one predicting non-scored labels as well.<br> CV (for the best MLP): 0.01571 // LB: 0.01834</li> <li>ResNet (1 model):<br> I started by re-implementing <a href="https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model" target="_blank">this notebook</a> locally, which I converted to PyTorch.<br> CV: 0.01601 // LB: 0.01854</li> <li>TabNet (1 model)<br> TabNet was doing amazingly well in this competition. I was not familiar with the model and it was a pleasant discovery. I started with the implementation described <a href="https://www.kaggle.com/hiramcho/moa-tabnet-with-pca-rank-gauss?scriptVersionId=45407015" target="_blank">here</a>, and brought it down to 0.01827 LB by just changing parameters. Setting n_shared = 1 and n_independent = 1 was key. Also switching to the OneCycleLR scheduler, and reducing the weight decay. <br> CV: 0.01569 // LB: 0.01827</li> </ul> <h1>Training:</h1> <p>A few elements were essential. I used these hyper-parameters across all models: </p> <ul> <li>Averaging 5 models with different random seeds per fold</li> <li>weight decay 8e-6</li> <li>label smoothing 1e-4</li> <li>OneCycleLR scheduler with 25 epochs, and max_lr 5e-3 and div_factor 1e3</li> </ul> <h1>Ensembling:</h1> <p>Nothing fancy, just a weighted average. 
<br> Best weights: <br> 0.5 * MLPs + 0.1 * ResNet + 0.4 * TabNet<br> CV (excluding the MLP trained with the new CV framework): 0.01556 // LB: 0.01821</p> <h1>Clustering:</h1> <p>We know that some drugs appear in both the training set and the test set. Besides, once drug_id was given, we discovered that 8 drugs were dominating in terms of frequency. So I thought that we could do some clustering, and find test drugs close to clusters formed by these 8 drugs. I first trained a t-sne model to reduce the genes+cells feature space to dimension 2, for these 8 drugs + the public test set. Then, when plotting, I noticed that 5 out of these 8 drugs had clear, separate clusters. So for test data points close (in dimension 2, and with L2-distance) to the cluster centers of these 5 drugs, I blended my ensemble's current predictions with the labels from these 5 drugs. Amazingly, that helped my public LB from 0.01821 to 0.01819, which converted to 0.01611 and 0.01609 on the private LB respectively. I did not even check how much this helped CV. To keep safe I only used this clustering with one of my two final submissions. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F965613%2Fb507a12e3054ef12d0f070eaa2df82d4%2Fknn.png?generation=1606784164930624&amp;alt=media" alt=""></p> <p>In red is the public test set, the 8 other colours are the 8 drugs with highest frequency. We can see the 5 forming clear clusters: orange (bottom left), purple (left), green (top left), light blue (top left), and blue (top). When looking at this plot, it is very natural to want to label the red points falling into these clusters with the same label as these drugs :) </p> <p>The drug_ids of the 5 clusters are: 292ab2c28, 87d714366, 8b87a7a83, d08af5d4b and d1b47f29d.</p> <h1>What did not work:</h1> <ul> <li>xgboost</li> <li>Clustering on the full training set</li> <li>Clustering in the original feature space</li> <li>2nd-stage models trained on model predictions</li> <li>Deep stacking </li> <li>DAE features</li> <li>Removing targets with too few positives</li> <li>Other optimizers than Adam </li> <li>T-SNE features for neural nets </li> </ul>
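<p>Here is a minimal sketch of the clustering post-process, with dummy arrays in place of the real data. The embedding, radius, blend weight and the restriction to the clearly separated clusters were all tuned by eye in the actual solution, so treat every number below as a placeholder.</p> <pre><code>import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_top8 = rng.normal(size=(300, 50))           # rows of the 8 most frequent train drugs
drug_of_row = rng.integers(0, 8, 300)         # which of the 8 drugs each row belongs to
X_test = rng.normal(size=(500, 50))           # public test rows
preds = rng.random(size=(500, 206)) * 0.01    # current ensemble predictions
drug_labels = rng.integers(0, 2, (8, 206)).astype(float)  # hard labels of the 8 drugs

# 2D embedding of the frequent-drug rows together with the test rows
emb = TSNE(n_components=2, random_state=0).fit_transform(np.vstack([X_top8, X_test]))
emb_drug, emb_test = emb[:len(X_top8)], emb[len(X_top8):]

# blend predictions of test points that fall close to a drug cluster center
radius, alpha = 3.0, 0.5
for d in range(8):
    center = emb_drug[drug_of_row == d].mean(axis=0)
    close = np.linalg.norm(emb_test - center, axis=1) &lt; radius
    preds[close] = alpha * preds[close] + (1 - alpha) * drug_labels[d]
</code></pre>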
Mechanisms of Action (MoA) Prediction
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your valuable comment!<br> I’ll try it with a late sub!</p> <p>First of all, we (<a href="https://www.kaggle.com/kannahashimoto" target="_blank">@Kanna Hashimoto</a> , <a href="https://www.kaggle.com/eemonn" target="_blank">@e-mon</a>, <a href="https://www.kaggle.com/kento1993" target="_blank">@Konbuiyon</a>, <a href="https://www.kaggle.com/toseihatori" target="_blank">@hatry</a>, <a href="https://www.kaggle.com/hikomimo" target="_blank">@hikomimo</a>) would like to thank Kaggle and the hosts for hosting the competition.<br> I'm still excited about this result!</p> <h2>Solution Overview</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3948967%2Fe3bcf85de2251538780d7a3631157f4a%2F2020-12-01%2023.41.17.png?generation=1606833703847939&amp;alt=media" alt=""></p> <p>Our approach is very simple. Trust CV &amp; Ensemble.</p> <h2>All of the source code is here</h2> <ul> <li><p><a href="https://github.com/e-mon/lish-moa" target="_blank">github</a></p></li> <li><p><a href="https://www.kaggle.com/kento1993/nn-svm-tabnet-xgb-with-pca-cnn-stacking-without-pp" target="_blank">kaggle notebook</a></p></li> </ul> <h2>Validation Scheme</h2> <p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's CV methods (Drug and MultiLabel Stratification) w/o cp_type = 'ctl_vehicle'.<br> This method could almost completely correlate CV with LB.</p> <p>(Thx <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> !!)</p> <p><a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">Drug and MultiLabel Stratification Code</a></p> <h2>Feature Engineering</h2> <ul> <li><p>statistical features</p> <ul> <li>sum, mean, std, kurt, skew, median</li></ul></li> <li><p>PCA</p> <ul> <li>applied only to CELL features</li></ul></li> <li><p>combination features</p> <ul> <li>difference between the two features</li> <li>Because of combinatorial explosion (872C2=379_756), we used only features with variance above the threshold.</li></ul> <pre><code>for c in itertools.combinations(features_g + features_c, 2):
    col_name = f"{c[0]}_{c[1]}_diff"
    d = train_features_df[c[0]] - train_features_df[c[1]]
    diff_val = np.var(d)
    if diff_val &gt; 15:
        train_features_df[col_name] = d
        var_list.append(diff_val)
</code></pre></li> <li><p>quantile transformer</p> <ul> <li>except for PCA features</li></ul></li> </ul> <h2>Modeling</h2> <h4>1st stage models</h4> <ul> <li>stage1-NN (7folds 5seeds)<ul> <li>stacking of 2 models <ul> <li>stage0-NN1 (7folds 5seeds)<ul> <li>labels : scored + nonscored targets (except for all 0 columns)</li></ul></li> <li>stage0-NN2 (7folds 5seeds)<ul> <li>labels : scored targets</li></ul></li></ul></li></ul></li> <li>stage1-tabnet (7folds 5seeds)<ul> <li>labels : scored targets</li></ul></li> <li>stage1-svm (4folds 1seeds)<ul> <li>labels : scored targets</li> <li>create 206 models</li></ul></li> <li>stage1-xgb (5folds 2seeds)<ul> <li>labels : scored targets</li> <li>create 206 models</li></ul></li> </ul> <table> <thead> <tr> <th></th> <th>model</th> <th>cv score</th> <th>blend weight</th> </tr> </thead> <tbody> 
<tr> <td>0</td> <td>stage0-NN2</td> <td>0.016779</td> <td>-</td> </tr> <tr> <td>1</td> <td>stage1-Tabnet</td> <td>0.016846</td> <td>0.228</td> </tr> <tr> <td>2</td> <td>stage1-SVM</td> <td>0.017554</td> <td>0.160</td> </tr> <tr> <td>3</td> <td>stage1-NN</td> <td>0.016736</td> <td>0.598</td> </tr> <tr> <td>4</td> <td>stage1-xgb</td> <td>0.017366</td> <td>0.0123</td> </tr> </tbody> </table> <ul> <li>correlation matrix for 1st stage models<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3948967%2F8922e5bf92fa0da863a640a579cc52d3%2Fimage%20(12).png?generation=1606833799313456&amp;alt=media" alt=""></li> </ul> <h4>2nd stage models</h4> <ul> <li><p>2D-CNN stacking (5folds 5seeds)</p> <ul> <li>model-wise CNN for 4 stage1 models </li> <li><a href="https://speakerdeck.com/phalanx/imet-7th-place-solution-and-my-approach-to-image-data-competition?slide=24" target="_blank">iMet 7th place solution</a></li></ul></li> <li><p>weight optimization</p> <ul> <li>Scipy minimize function ('Nelder-Mead')</li> <li><a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/162765" target="_blank">TReNDS 2nd place solution</a></li></ul></li> </ul> <table> <thead> <tr> <th></th> <th>model</th> <th>cv score</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>2D-CNN stacking</td> <td>0.01670</td> </tr> <tr> <td>1</td> <td>weight optimization</td> <td><strong>0.016518</strong></td> </tr> </tbody> </table> <h4>postprocessing</h4> <ul> <li>Drug ID Prediction<ul> <li>Multi-class prediction with LightGBM for the 8 most frequently appearing drug ids in the train dataset.</li> <li>replacing the label of a record whose prediction exceeds the threshold with the hard target of its drug_id</li> <li>target labels<ul> <li>9 classes: ['87d714366', '9f80f3f77', '8b87a7a83', '5628cb3ee', 'd08af5d4b', '292ab2c28', 'd50f18348', 'd1b47f29d', other_drug_ids]</li></ul></li> <li>metric<ul> <li>accuracy: 0.984965 with 5folds 15seed averaging</li></ul></li></ul></li> </ul> <h4>final submission</h4> <p>Our final two submissions are averages of two predictions (2D-CNN stacking + weight-optimization averaging with the 1st stage models), w/ and w/o postprocessing.</p> <ul> <li>avg weighted_blend + CNN (CV): 0.01647 (PublicLB: 0.01816/PrivateLB: <strong>0.01600</strong>)</li> <li>w/ postprocessing (CV): 0.01644 (PublicLB: 0.01811/PrivateLB: <strong>0.01618</strong>)<ul> <li><a href="https://www.kaggle.com/eemonn/nn-svm-tabnet-xgb-with-pca-cnn-stacking?scriptVersionId=48144329" target="_blank">notebook w/ postprocessing</a></li></ul></li> </ul> <p>Our training &amp; inference processes were done almost entirely on Kaggle kernels.</p> <p>Trial &amp; error!! We adopted whatever increased the CV score!!</p>
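<p>For the "weight optimization" step, a minimal sketch in the spirit of the TReNDS-style approach linked above is shown below: it searches for one weight per 1st-stage model that minimizes the overall log loss on OOF predictions with scipy's Nelder-Mead. The normalization trick and the dummy data are assumptions, not the team's exact code.</p> <pre><code>import numpy as np
from scipy.optimize import minimize

def optimize_blend_weights(oof_preds, y_true):
    # oof_preds: list of (n_samples, n_targets) OOF prediction arrays, one per model
    # y_true:    (n_samples, n_targets) binary targets
    P = np.stack(oof_preds)                              # (n_models, n_samples, n_targets)

    def mean_logloss(w):
        w = np.abs(w) / np.abs(w).sum()                  # positive weights summing to 1
        blend = np.clip(np.tensordot(w, P, axes=1), 1e-15, 1 - 1e-15)
        return -np.mean(y_true * np.log(blend) + (1 - y_true) * np.log(1 - blend))

    x0 = np.full(len(oof_preds), 1.0 / len(oof_preds))
    res = minimize(mean_logloss, x0, method="Nelder-Mead")
    return np.abs(res.x) / np.abs(res.x).sum()

oof_a = np.random.rand(100, 206) * 0.01
oof_b = np.random.rand(100, 206) * 0.01
y = np.random.binomial(1, 0.01, (100, 206)).astype(float)
weights = optimize_blend_weights([oof_a, oof_b], y)
</code></pre>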
Mechanisms of Action (MoA) Prediction
51st solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! Also congrats to you.</p> <p>Public 28/Private 51 0.01609</p> <p><strong>Feature Engineering</strong>:</p> <p>I used four groups of features.</p> <ol> <li>PCA for cell and gene features separately</li> <li>SVD for cell and gene features separately</li> <li>GaussRankScaler</li> <li>A subset of the raw features selected by feature importance.</li> </ol> <p><strong>Model</strong></p> <ol> <li>ResNet</li> <li>Highway for FC layers.</li> </ol> <p><strong>Optimizer</strong></p> <ol> <li>Adam</li> <li>Label Smoothing</li> <li>OneCycleLR</li> </ol> <p><strong>Ensemble</strong></p> <ol> <li>Two models</li> <li>Each model uses 3 seeds and 5 folds</li> <li>The final result is the average of these outputs</li> </ol> <p><strong>Methods that didn't work for me</strong></p> <ol> <li>TabNet (I didn't spend too much time on it)</li> <li>Convert each record to an image and use Conv</li> <li>Use the 1D Conv</li> <li>Pre-train on the non-scored targets.</li> <li>All kinds of activation functions other than ReLU.</li> <li>k-means</li> <li>Bin the features</li> <li>All kinds of lr schedules</li> <li>DeepFM and some other models.</li> <li>autoencoder</li> </ol> <p>Thanks to those who shared their notebooks; I borrowed some insights from you, and I'm sorry I can't find and paste the links here. And congrats to all.</p>
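<p>A short sketch of applying PCA and SVD to the cell and gene blocks separately (the component counts and the dummy frame below are placeholders):</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.decomposition import PCA, TruncatedSVD

# dummy frame standing in for train_features, with the g-* / c-* column naming
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 872)),
                  columns=[f"g-{i}" for i in range(772)] + [f"c-{i}" for i in range(100)])

g_cols = [c for c in df.columns if c.startswith("g-")]
c_cols = [c for c in df.columns if c.startswith("c-")]

# PCA and SVD fitted separately on the gene and cell blocks
extra_feats = np.hstack([
    PCA(n_components=50, random_state=0).fit_transform(df[g_cols]),
    PCA(n_components=15, random_state=0).fit_transform(df[c_cols]),
    TruncatedSVD(n_components=50, random_state=0).fit_transform(df[g_cols]),
    TruncatedSVD(n_components=15, random_state=0).fit_transform(df[c_cols]),
])
</code></pre>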
Mechanisms of Action (MoA) Prediction
53rd place in a nutshell
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you very much!</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3612572%2F7e56f0bed3d9bf6ffb7e6ec88fcb1b31%2Ffreestocks-nss2eRzQwgw-unsplash%20(1).jpg?generation=1606775950410689&amp;alt=media" alt=""><br> Photo by <a href="https://unsplash.com/@freestocks?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">freestocks</a> on <a href="https://unsplash.com/s/photos/pills?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p> <p>Hey Kagglers!</p> <p>It's been a real pleasure being part of this competition. It was a new experience that inspired us to try a lot of things (the majority of which didn't work, but we're always happy for the few that did).</p> <p>I'd like to thank my amazing teammates <a href="https://www.kaggle.com/amedprof" target="_blank">@amedprof</a>, <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a>, <a href="https://www.kaggle.com/ulrich07" target="_blank">@ulrich07</a> and <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>, without whom nothing would have been possible.</p> <p>Our models were trained on an HP Z8 with an Intel® Xeon(R) Gold 6234 CPU @ 3.30GHz × 32 and ran 8 times faster than on a Kaggle kernel, which enabled us to quickly try a lot of different things.</p> <h2>1. Data Processing &amp; feature engineering</h2> <p>We adopted almost the same pipeline as the public kernels, no real innovation here.</p> <ul> <li><p><a href="https://www.kaggle.com/louise2001/rapids-feature-importance-is-all-you-need" target="_blank">Feature Importance analysis</a></p></li> <li><p>Feature normalization with RankGauss (switching from this to standard normalization didn't seem to deteriorate our results).</p></li> <li><p>PCA features: public kernels fit the PCA on the test set; we got rid of this idea as it forces models to be trained during the final submission (which is slow) and makes them less robust.</p></li> <li><p>Feature selection: we stuck to the standard variance threshold feature selector (simple is the new black).</p></li> </ul> <h2>2. Models</h2> <p>We mostly built two ensembles:</p> <h3>Theo's ensemble</h3> <p>I used an MLP and an MLP with skip connections, with a training strategy similar to <a href="https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids" target="_blank">this public kernel</a>. </p> <ul> <li><p>I got rid of the checkpointing which artificially boosts CV</p></li> <li><p>Pretraining is only done for 10 epochs</p></li> <li><p>When unfreezing, I progressively unfreeze only half of the layers</p></li> <li><p>I used a linear scheduler with warmup</p></li> <li><p>A batch size of 256 worked better for me</p></li> </ul> <p>I also trained a TabNet, using the following parameters. 
Unsupervised pretraining didn't help.</p> <pre><code>n_d = 32
n_a = 128
n_steps = 1
gamma = 1.2
n_shared = 1
n_independent = 1
lambda_sparse = 0
mask_type = "entmax"
batch_size = 256
virtual_batch_size = 128
epochs = 200
lr = 2e-2
weight_decay = 1e-5
scheduler_params = dict(mode="min", patience=5, min_lr=1e-5, factor=0.1)
scheduler_fn = ReduceLROnPlateau
</code></pre> <p>My three models achieve CVs of around 0.0160 using <a href="https://www.kaggle.com/chrisdeotte" target="_blank">@chrisdeotte</a>'s folds. I used 7 seeds and 7 folds for each model, and their average gives <strong>CV 0.01553 - Public 0.01839 - Private 0.01618</strong>.</p> <p>I also produced another blend by simply retraining the previous models with pseudo-labelling. I used the soft labels directly and it improved my LB scores a bit (+0.00003 public, +0.00001 private).</p> <p>I consider my ensembles to be quite weak; the thing is, the public kernel was already heavily tweaked and was therefore quite hard to really improve.</p> <h3>Amed's ensemble</h3> <p>It was hard to beat the public kernels; I built several models that didn't perform better than the public baselines. Therefore, I decided to start from these public kernels (<a href="https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids" target="_blank">Public1</a>, <a href="https://www.kaggle.com/kushal1506/moa-pytorch-feature-engineering-0-01846" target="_blank">Public2</a>), and to improve them I tried changes to the training strategy, the feature engineering and the validation strategies used.<br> I used the following validation strategies:</p> <ul> <li><p>Strategy 1: 7 versions of Chris's folds<br> Assumption 1: frequent targets in train will also be frequent in test.</p></li> <li><p>Strategy 2: 3 versions of my own custom folds<br> Assumption 2: targets that are rare in train will be frequent in test and targets that are most frequent in train will be rare in test. Basically the contrary of assumption 1.</p></li> </ul> <p>My final ensemble is a blend of the two models trained on 10 folds with 2 seeds.</p> <h3>Post-Processing:</h3> <p>We trained binary classifiers for the most prevalent drugs in the train set, and used them to correct our model probabilities. It gave us a 0.0001 boost on public, but didn't really work on private. Fortunately we also used a safer approach that was enough to grab 53rd place. We're not sure if it really helped in the end.</p> <h3>Final Blend:</h3> <p>We used a simple average of:</p> <ul> <li><p>The three models in <a href="https://www.kaggle.com/vikazrajpurohit/3-model-training-and-inference" target="_blank">this kernel</a></p></li> <li><p>Theo's blend</p></li> <li><p>Theo's blend with PL</p></li> <li><p>Amed's blend</p></li> </ul> <p>It achieves a public LB of 0.01822, which ranks between 150th and 200th, but we knew it would do better on private.</p> <p>We could not even reproduce our 0.01816 solution which gave us a nice ranking on public, and even if we had, we most likely wouldn't have selected it as it was not really robust.</p> <h2>3. 
What worked, and what didn't</h2> <p><strong>What worked:</strong></p> <ul> <li><p>Weight decay</p></li> <li><p>Correctly scheduling the learning rate; the OneCycleLR one worked pretty well</p></li> <li><p>Transfer Learning for shallow models</p></li> <li><p>Blending: it always works with enough diversity</p></li> </ul> <p><strong>What didn't work:</strong></p> <ul> <li><p><a href="https://www.kaggle.com/amedprof/lgbm-custom-loss-one-lgb-for-206-labels" target="_blank">Custom-loss LGBM to learn all classes simultaneously</a>.</p></li> <li><p>Metric Learning to detect drugs in the test set that were in the train set</p></li> <li><p>Target Encoding</p></li> <li><p>Using a binary classifier to predict if 2 experiments are from the same drug or not</p></li> <li><p>Embedding features depending on drug targets</p></li> <li><p>TabNet pretraining</p></li> <li><p>Second level models / stacking</p></li> <li><p>And a lot more …</p></li> </ul> <h5>Thanks a lot for reading, and happy Kaggling!</h5>
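<p>A rough sketch of the data-processing pipeline described in section 1 above (RankGauss-style scaling fit on the train set only, followed by the standard variance-threshold selector). The column prefixes follow the MoA data; the exact parameters are assumptions rather than the team's settings.</p>
<pre><code>import numpy as np
from sklearn.preprocessing import QuantileTransformer
from sklearn.feature_selection import VarianceThreshold

def preprocess(train_df, test_df, threshold=0.8):
    feats = [c for c in train_df.columns if c.startswith("g-") or c.startswith("c-")]
    qt = QuantileTransformer(n_quantiles=100, output_distribution="normal", random_state=0)
    X_tr = qt.fit_transform(train_df[feats])     # fit on train only, no test-set leakage
    X_te = qt.transform(test_df[feats])
    vt = VarianceThreshold(threshold)
    X_tr = vt.fit_transform(X_tr)
    X_te = vt.transform(X_te)
    return X_tr, X_te

# usage: X_train, X_test = preprocess(train_features, test_features)
</code></pre>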
Mechanisms of Action (MoA) Prediction
56th solution and my share
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the Kaggle officials, thanks to the competition, and thanks to my team. I only started taking the competition seriously one month before it ended, and I was sick for much of that month. Fortunately, I persisted to the end and got a result that was enough to comfort myself.<br> My solution is a combination of one TabNet, three NNs, and one ResNet model.</p> <p><strong>1. ResNet</strong><br> Full reference: <a href="url" target="_blank">https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model?scriptVersionId=44981070</a>. This is a model I integrated on the last day. I haven't made any changes, but it brings me an improvement of 0.00002 on the private score. I saw that its public score is 0.01854, and its private score is 0.01636.</p> <p><strong>2. The first NN model</strong><br> Reference: <a href="url" target="_blank"> https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features?scriptVersionId=46973557</a>. This is also a model I integrated on the last day, and I did not make any changes. It improved neither my LB nor my private score, but it did not hurt either. I saw that its public score is 0.01836, and its private score is 0.01626.</p> <p><strong>3. The second NN model</strong><br> Also referenced from <a href="url" target="_blank">https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features?scriptVersionId=46973557</a>, but I removed the PCA, used Chris Deotte's method of splitting and adjusted the parameters. I saw that its public score is 0.01837, and its private score is 0.01626.</p> <p><strong>4. TabNet:</strong><br> Reference: <a href="url" target="_blank">https://www.kaggle.com/optimo/selfsupervisedtabnet</a>. I removed the self-supervised part because it did not bring me any improvement, and I also removed the PCA and other operations, as I did not trust them at the time. Using Chris Deotte's splitting method, it gets public 0.01839 and private 0.01620.</p> <p><strong>5. The third NN model</strong><br> Reference: <a href="url" target="_blank">https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids</a>. The pre-training method here benefited me a lot. I also removed the PCA part and, using Chris Deotte's splitting method, got 0.01834 public and 0.01625 private.</p> <p>My ensemble does not rely on CV, because the CV gap between the models is very large, so I used the LB to make adjustments. Fortunately, I was lucky.</p> <p>All of my code was learned from public kernels, and there are many good kernels and discussions I haven't listed. This is what makes me love Kaggle. You can always learn, and someone is willing to share and help you.</p> <p>Next, I want to share some of the experience I gained in the competition.<br> 1. After getting a baseline, the first modification is to separate training and inference, save OOF predictions during training, and create the fold files first. Also set up a reliable cross-validation.</p> <p>2. The output of each training process should be checked carefully. Using early stopping can save a lot of training resources and time. 
Observing how the training metrics change on the first fold, you can infer the quality of the schedule used.</p> <p>3. The schedule is a part that must be tuned. First find a suitable schedule for the baseline, and then adjust the other parameters.</p> <p>4. At the beginning, when testing parameters on the baseline, pay attention to the training time first. If the time is very short, you might as well run two seeds, so that on average you can identify good parameters more stably and avoid overfitting to a seed. Of course, in many competitions, it is impossible to test two seeds together.</p> <p>5. Note that for competitions that require kernel inference, you must run under the GPU, and must only run inference, not training.</p> <p>6. Note that kernels created at different times may have different installation environments, such as different versions of PyTorch.</p> <p>Some of my ideas may be wrong; please correct me. In any case, persistence is the most important quality for participating in Kaggle competitions.😊</p> <p>Thanks, I learned so much from you, and congratulations too</p>
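<p>Tip 1 above (separate training from inference and save OOF predictions per fold) can be sketched as below; the Ridge model and random data are only stand-ins so the snippet runs end to end.</p>
<pre><code>import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.random((200, 50))
y = rng.integers(0, 2, size=(200, 206)).astype(float)

oof = np.zeros_like(y)
for fold, (tr_idx, va_idx) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    model = Ridge()                               # replace with the real NN training routine
    model.fit(X[tr_idx], y[tr_idx])
    oof[va_idx] = model.predict(X[va_idx])
    np.save(f"oof_fold{fold}.npy", oof[va_idx])   # per-fold files for later blending/stacking
np.save("oof_all.npy", oof)
</code></pre>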
Mechanisms of Action (MoA) Prediction
5th place solution [Updated]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The Overview Page says:</p> <blockquote> <p>In addition, you will have access to MoA annotations for more than 5,000 drugs in this dataset. </p> </blockquote> <p>And we have only 3,289 drugs in the training set. So there should be at least 1,711 unseen drugs in the test set (+ some seen drugs considering the balance). In fact, MultilabelStratifiedKFold did not work so well and a much better correlation was obtained using Chris CV. This implies the existence of unseen drugs in the test set. I believe that most participants assumed that unseen drugs would be included in the test data.</p> <p>Thanks, <a href="https://www.kaggle.com/fatihozturk" target="_blank">@fatihozturk</a>. Nice to see you again on Kaggle! What I wanted to separate from the other drugs in the test set was not all seen drugs, but seen drugs whose value_counts are above a certain threshold. That is because, even for a seen drug, models cannot learn its nature without enough observations. First, I split the training data into popular drugs and minor drugs. After that, 5-fold CV is conducted on the popular drugs. In each CV loop, I examine how well the seen drugs of the validation fold and the minor drugs split off in the first step can be discriminated by cosine similarity on the embeddings.</p> <p>Thanks! Updated.</p> <p>Thanks. You can split so that both seen and unseen are included in the val set. The CV-LB correlation was not bad (r^2&gt;0.9). Not perfect, though.</p> <p>Thank you to my teammate Anna, all participants, the Kaggle Admins, and the Laboratory for Innovation Science at Harvard, who realized this exciting competition. I would like to briefly share our 5th place solution.</p> <h2>Problem settings</h2> <p>The most difficult problem in this competition seems to be that there are both seen and unseen drugs in the test set. This makes the construction of a reliable validation difficult.</p> <h2>Validation strategies and their properties</h2> <p>In this competition, there are mainly two types of validation strategies used. One is the traditional MultilabelStratifiedKFold, and the other is MultilabelStratifiedGroupKFold, which is a group-aware one (aka Chris's method). Looking at the predictions of models based on these schemes, MultilabelStratifiedKFold seems to give more confident predictions for seen drugs, whereas MultilabelStratifiedGroupKFold gives more balanced and robust predictions which are supposed to be suitable for unseen drugs. I thought that both were useful in different ways.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F570506%2F57b2c2c0ead74739285e3c32cca0cbd5%2FScreen%20Shot%200002-12-01%20at%209.22.30.png?generation=1606782173177185&amp;alt=media" alt=""></p> <h2>Our approach</h2> <p>The setting where both seen and unseen labels are observed in the test set is similar to that in <br> <a href="https://www.kaggle.com/c/bengaliai-cv19" target="_blank">Bengali.AI Handwritten Grapheme Classification</a>. So we decided to discriminate seen and unseen drugs by metric learning (L2 Softmax -&gt; cosine similarity) and then blended models trained on each validation scheme based on the similarity. 
This separation of seen/unseen seems to be possible to some extent, as shown in the figure below. This approach does not require something like pseudo-labelling &amp; retraining; simple forward processing is enough.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F570506%2F0543298668876b7bcae89f4f82ae2d3e%2FSeen_Unseen.png?generation=1606782212125156&amp;alt=media" alt=""></p> <h2>Features</h2> <p>Raw cell and gene features<br> PCA<br> rankGauss<br> PolynomialFeatures<br> Simple statistics like mean, min, max, skew, kurt, (X_train&gt;9).sum(axis=1)<br> etc.</p> <h2>Models</h2> <ul> <li>ResNet-like shallow NN</li> <li>TabNet<br> Especially for the models aimed at unseen drugs, we used simple networks.</li> </ul> <h2>Reconstruction of drug_id [Updated]</h2> <p>From the very beginning of the competition, I was validating my models based on a method nearly equivalent to Chris CV. For this purpose, I reconstructed drug_id by heuristic methods, including clustering and many manual adjustments. The official drug_id was published in the later stages of the competition, but many participants may have already overfitted. Once anyone has produced a good Public LB score, it's very hard to suspect that it's overfitting. The temptation and pleasure of running up the LB is so intense.</p> <p>Below are the CV results from both methods for the same model. My method worked almost as well as the Chris CV. I switched to Chris CV at the end of the competition because it performed a bit better. I think that Chris CV was good at handling drug_ids with lots of observations and was similar to the actual splits.</p> <table> <thead> <tr> <th>Method</th> <th>LocalCV</th> <th>Private</th> <th>Public</th> </tr> </thead> <tbody> <tr> <td>reconstructed drug_id + my CV</td> <td>0.017173034</td> <td>0.01613</td> <td>0.01826</td> </tr> <tr> <td>official drug_id + Chris CV</td> <td>0.016907018</td> <td>0.01612</td> <td>0.01825</td> </tr> </tbody> </table> <h2>Result</h2> <p>Public: 0.01812 (14th)<br> Private: 0.01602 (5th)</p> <p>Thanks. I am not specifically referring to the public/private split, but just the train/test split. Our solutions are not particularly affected by that split. Anyway, it would not be healthy to rely on the Public LB's feedback.</p> <p>Thanks, and congrats on your 58th place. Each type has an ensemble of about five models. Each model is seed/fold averaged. The whole inference code runs in about 40 min without a GPU.</p>
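<p>To make the blending idea above more concrete, here is a very rough sketch (toy data and assumed variable names, not the actual 5th place code) of scoring how "seen" a test row looks via cosine similarity to training embeddings, and blending the two model families with that score.</p>
<pre><code>import numpy as np

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def seen_score(test_emb, train_emb):
    # max cosine similarity of each test embedding to any training (seen drug) embedding
    sims = l2_normalize(test_emb) @ l2_normalize(train_emb).T
    return sims.max(axis=1)

rng = np.random.default_rng(0)
train_emb = rng.normal(size=(500, 64))
test_emb = rng.normal(size=(100, 64))
pred_seen = rng.random((100, 206))       # model trained with MultilabelStratifiedKFold
pred_unseen = rng.random((100, 206))     # model trained with the group-aware CV

w = seen_score(test_emb, train_emb).clip(0, 1)[:, None]
blended = w * pred_seen + (1 - w) * pred_unseen
</code></pre>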
Mechanisms of Action (MoA) Prediction
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! We also didn't test this too much during the competition. It seemed like a good idea, and when we added it in we got our best model at the time. After that we just ran with it without questioning how much it helped. But for what it's worth, I just ran a quick experiment with normal early stopping and with early stopping by target. These are the results:</p> <table> <thead> <tr> <th>Type</th> <th>CV</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Normal Early Stopping</td> <td>0.01568</td> <td>0.01851</td> <td>0.01638</td> </tr> <tr> <td>Early Stopping by Target</td> <td>0.01479</td> <td>0.01824</td> <td>0.01612</td> </tr> </tbody> </table> <p>It's probably a good idea to run a few more experiments, but it looks like early stopping by target helps a lot.</p> <p>Sure, I attached our MLP and ResNet code. The TabNet architecture is the same as the <a href="https://github.com/dreamquark-ai/tabnet/blob/develop/pytorch_tabnet/multitask.py" target="_blank">pytorch-tabnet library</a>.</p> <p>We didn't really test this too much while the competition was in progress, but here are the results from a few quick experiments. </p> <table> <thead> <tr> <th>Dataset</th> <th>CV</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Train</td> <td>0.01479</td> <td>0.01824</td> <td>0.01612</td> </tr> <tr> <td>Train + Public</td> <td>0.01483</td> <td>0.01825</td> <td>0.01614</td> </tr> <tr> <td>Train + Private</td> <td>0.01479</td> <td>0.01825</td> <td>0.01613</td> </tr> <tr> <td>Train + Public + Private</td> <td>0.01480</td> <td>0.01824</td> <td>0.01611</td> </tr> </tbody> </table> <p>It is a bit weird that not using the test set at all gives better results than using part of the test set. We should probably do more tests to be certain, but maybe it's better to think of this part of our write-up as a "here's what we did" section rather than a "here's what works" section.</p> <p>I see what you're saying, I've heard similar advice before too. In fact, just in this competition <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> advised us to make sure our folds were the same, but that was for stacking in particular. The reason, if I'm not mistaken, is that in stacking we want to reduce overfitting when training the second-stage model. Since we weren't stacking, we didn't need to worry about this type of overfitting. </p> <p>Thanks Theo! </p> <blockquote> <p>Also, do you know how you benefit from doing dimension reduction on the full test set?</p> </blockquote> <p>We compared inference-only submissions to submissions that retrained using dimensionality reduction on the full test set. For the most part, we saw better public LB scores when we used the full test set. As to why it helps, there is some good discussion in this <a href="https://www.kaggle.com/c/lish-moa/discussion/200605" target="_blank">Discussion Post</a>. TL;DR: Using the full test set is a form of leakage. </p> <p>Thank you to the LISH Competition Organizers, the Kaggle Team, and everyone who shared notebooks and contributed to the discussions. 
</p> <p>TL;DR: Our final submission was a simple weighted average of an MLP, TabNet, and ResNet model.</p> <h2>Dimensionality Reduction</h2> <p>We decided to do dimensionality reduction on the full test set. This meant we had to retrain the models when the private test set became available and so we couldn't do an inference-only submission. We used a mix of PCA, SVD, and ICA depending on the model.</p> <h2>Early Stopping for Each Target</h2> <p>All of our models used early stopping for each target. The idea was to keep track of the best epoch for each of the targets separately. For example, this plot shows the log loss on a few of the targets.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F331718%2Fd388ae8ca8d2a44d809290e0fd3b111d%2Ftarget_loss.png?generation=1606854356671598&amp;alt=media" alt="target loss"><br> If we had used a single epoch's weights for every target, we would have overfit some targets and underfit others.</p> <h2>Validation</h2> <p>We tried a lot of variants of CV, but in the end, we went with MultilabelStratifiedKFold and did not use <code>train_drug.csv</code>. Our CV-LB alignment was still shaky and there was clearly a point at which further CV improvement degraded LB. <a href="https://www.kaggle.com/cdeotte" target="_blank">@Chris Deotte's</a> smart <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">CV scheme</a> was promising, but it did not solve this entirely for us. Our LB scores with the smart CV scheme were a bit worse than with MultilabelStratifiedKFold, similar to what other teams noticed.</p> <h2>Final Blend</h2> <p>Our final submission was a simple weighted average of <code>0.5*mlp + 0.3*tabnet + 0.2*resnet</code>. We did not use the public LB to tune these weights, and that probably helped prevent some overfitting.</p> <table> <thead> <tr> <th>Model</th> <th>Public</th> <th>Private</th> <th>Run Time</th> </tr> </thead> <tbody> <tr> <td>Blend</td> <td>0.01817</td> <td>0.01603</td> <td>5883.6s</td> </tr> <tr> <td>MLP</td> <td>0.01825</td> <td>0.01613</td> <td>816.6s</td> </tr> <tr> <td>TabNet</td> <td>0.01828</td> <td>0.01614</td> <td>1619.9s</td> </tr> <tr> <td>ResNet</td> <td>0.01859</td> <td>0.01628</td> <td>2562.4s</td> </tr> </tbody> </table> <h3>MLP</h3> <p>Our MLP was very similar to those in public notebooks. We found that the PReLU activation worked better than ReLU or LeakyReLU.</p> <h3>TabNet</h3> <p>We used the code from <a href="https://www.kaggle.com/optimo" target="_blank">@Optimo's</a> excellent <a href="https://www.kaggle.com/optimo/tabnetmultitaskclassifier" target="_blank">TabNet Multitask Classifier</a> <a href="https://github.com/dreamquark-ai/tabnet/" target="_blank">library</a>. We used only the model code and not the helper functions from the library so that we could have more flexibility with the learning rate scheduler, loss function, etc.</p> <h3>ResNet</h3> <p>Our model was based off of <a href="https://www.kaggle.com/demetrypascal" target="_blank">@Demetry Pascal's</a> clever <a href="https://www.kaggle.com/demetrypascal/2heads-deep-resnets-pipeline-smoothing-transfer" target="_blank">2heads+deep resnets+pipeline+smoothing+transfer notebook</a> and ported to PyTorch.</p> <p>Thanks CPMP, I enjoyed working with you and the rest of the team too!</p> <p>Thanks Chris! I cherry picked the plot above, but I still think it makes a big difference. </p> <p>Thank you! </p> <p>That's basically what we did. 
The only difference is that we trained for a fixed number of epochs, saved weights whenever there was an improvement, and then, once training was done, we made the test predictions. This might help to speed things up because if the minimum best epoch is 5, then we don't waste time making test predictions for epochs &lt; 5.</p> <p>For CV we used repeated MultilabelStratifiedKFold, the same as in the public notebooks, nothing fancy. You're right, overfitting is more likely with early stopping for each target, but I'm guessing repeated CV helped to smooth things out.</p> <p>Hi Manikanth. Thanks for asking. </p> <ol> <li>Our folds and number of repeats varied slightly for each model. I'm not too sure what other teams used, so I can't really say whether we used fewer, but here's what we used.<ul> <li>MLP: 7 folds, 5 repeats</li> <li>TabNet: 7 folds, 5 repeats</li> <li>ResNet: 6 folds, 5 repeats</li></ul></li> <li>We used different seeds and hence different folds for each of the models. This probably helped us increase the diversity in our models, which in turn helped to create a stronger blend.</li> </ol>
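<p>For reference, a hedged sketch of the "early stopping for each target" bookkeeping described above (illustrative names and toy data, not the team's code): track the best epoch per target and assemble the final predictions column by column.</p>
<pre><code>import numpy as np

def best_per_target(val_losses, val_preds):
    """val_losses: (n_epochs, n_targets) per-target log loss,
    val_preds: (n_epochs, n_samples, n_targets) predictions saved each epoch."""
    best_epochs = np.argmin(val_losses, axis=0)            # best epoch index per target
    n_targets = val_losses.shape[1]
    cols = [val_preds[best_epochs[t], :, t] for t in range(n_targets)]
    return best_epochs, np.stack(cols, axis=1)

# toy example
rng = np.random.default_rng(0)
losses = rng.random((30, 206))
preds = rng.random((30, 100, 206))
best_epochs, best_preds = best_per_target(losses, preds)
</code></pre>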
Mechanisms of Action (MoA) Prediction
8th place solution | From multilabel to multiclass
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your comment <a href="https://www.kaggle.com/mrbhbs" target="_blank">@mrbhbs</a>! You mean because with the reframing the target matrix became sparser?<br> In a sense, treating the problem as multiclass makes it straightforward to address the imbalance of the dataset, as we can reweight the cross-entropy loss by giving more importance to the rarest classes. The approach described in section 4.1 <a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf" target="_blank">here</a> was very effective for me, with beta = 0.9.</p> <p>Thanks Evgeniya! I used a simple average, with the same weight for all models, to avoid overfitting.</p> <p>A big thank you to the Laboratory for Innovation Science at Harvard and to the Kaggle team for hosting this competition, as well as to all other participants who created such a friendly and yet challenging competitive environment! This was my first DS competition and I learned a lot from you.</p> <h3>From multilabel to multiclass</h3> <p>The main idea behind my solution was reframing the multilabel prediction problem as multiclass classification to leverage correlations between targets. Even though there were originally 206 binary targets, the training dataset contained only 328 distinct combinations of them. I gave each a class label from 0 to 327, using it as the target variable instead. The correspondence between class labels and original targets was stored in a 328x206 matrix.</p> <p>Then I set up NN models with a softmax activation in the last layer, yielding probability matrices of shape (batch_size x 328) and optimizing for categorical cross-entropy. To retrieve predictions for the 206 original targets, I multiplied the NN output by the 328x206 matrix.</p> <p>While in theory this "dual" approach is more limited, due to the fact that it is impossible to give perfect predictions for a new drug with MoA combinations not previously seen among the 328 classes, in practice these individual models still had decent performance by themselves (best public/private LB scores were 0.01849/0.01633 with NNs and 0.01830/0.01623 with TabNet).</p> <p>However, their biggest power came when blending with other models, because this reframing significantly lowered the prediction correlations, as it detects different signals from the training data.</p> <p>My final solution was a simple average (no weighting) of four models: two TabNets and two NNs (one multiclass and one multilabel for each architecture). 
Nothing remarkable about the feature engineering pipeline: I used QuantileTransformer, PCA and row-based statistics as it was done in several public notebooks.</p> <p>The details of my top selected submission can be seen below (0.01811/0.01603 LB), but I achieved similar private scores of 0.01603 and 0.01604 with other model combinations.</p> <table> <thead> <tr> <th><strong>Model</strong></th> <th><strong>Nb folds</strong></th> <th><strong>CV (with ctrl)</strong></th> <th><strong>Public LB</strong></th> <th><strong>Private LB</strong></th> </tr> </thead> <tbody> <tr> <td>Multiclass NN</td> <td>7</td> <td>0.01361</td> <td>0.01884</td> <td>0.01659</td> </tr> <tr> <td>Multiclass Tabnet</td> <td>7</td> <td>0.01484</td> <td>0.01830</td> <td>0.01623</td> </tr> <tr> <td>Multilabel NN</td> <td>7</td> <td>0.01453</td> <td>0.01837</td> <td>0.01626</td> </tr> <tr> <td>Multilabel Tabnet</td> <td>7</td> <td>0.01480</td> <td>0.01834</td> <td>0.01620</td> </tr> </tbody> </table> <p>For instance, one of my blends with four different models (Multiclass NN - 0.01633, Multiclass Tabnet - 0.01636, Multilabel Resnet - 0.01636, Multilabel Tabnet 0.01624) also achieves 0.01603 Private LB, but I did not select it because its public LB score was worse (0.01819).</p> <p>As CV scores (MultilabelStratifiedKFold) were very different between my models, it was difficult to come up with a final validation strategy, since optimizing blending weights would give 100% to Model 1. In the end, I went with my gut feeling that I should not be overfitting too much as long as things were kept simple, and my two final submissions combined distinct models to minimize risk.</p> <p>Yes! <em>M[i,j]</em> is equal to one if MoA <em>j</em> was present on drugs of class <em>i</em> in the training set.</p> <p>Thanks <a href="https://www.kaggle.com/smartzdp" target="_blank">@smartzdp</a>, congratulations to you too! I used multi-class cross-entropy with the NN and multi-label logloss with the Tabnet. Even though I got a better individual LB score with the Tabnet, I feel the true difference maker in the blend was the NN, my submission loss increased significantly when I removed it.<br> I experimented with adding the non-scored targets too (this gave 600+ classes, don't remember the exact number), it improved my CV but individual LB was worse, so I did not blend it. Did you use them in your final submission?</p> <p>Thanks Mohammed!</p> <p>Thanks Hamish, congratulations for your great result too!</p> <p>Thanks yukiya!</p> <p>Thanks! Indeed I was surprised to see how well blending worked 😄</p> <p>Thanks Kouki, congratulations on your very strong finish too!</p> <p>Thanks Rahul! I tried XGB and CatBoost too, with little success.</p> <p>Thank you very much <a href="https://www.kaggle.com/aeryss" target="_blank">@aeryss</a>!</p>
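<p>The label mapping at the heart of this reframing is small enough to sketch directly (toy targets below; the real training code of course differs):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
Y = (rng.random((1000, 206)) * 50).astype(int) == 0     # sparse toy 0/1 target matrix
Y = Y.astype(int)

# unique target combinations become classes; M maps a class index to its 206 targets
M, class_idx = np.unique(Y, axis=0, return_inverse=True)   # M: (n_classes, 206)
print(M.shape, class_idx.shape)      # the competition data gave 328 classes

# after training a softmax model on class_idx, recover the 206 target probabilities:
# probs_206 = softmax_probs @ M     with softmax_probs of shape (batch, n_classes)
</code></pre>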
Mechanisms of Action (MoA) Prediction
Layer Normalization - 36th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners.</p> <p>I just wanted to share one trick which gave a significant boost to my model and may not have been shared by anyone yet.</p> <p>It is adding LayerNorm over the gene and cell features in the model (it literally means row-wise normalization of the gene/cell values). The model looks like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F427399%2Ff612b7c9c69584f6c52cdfe8bbe7ac59%2FCapture.png?generation=1606951353239586&amp;alt=media" alt=""></p> <p>It gave a performance boost on the MLP:</p> <ul> <li>Private: 0.01625 -&gt; 0.01616 (-0.00009) </li> <li>Public: 0.01834 -&gt; 0.01824 (-0.00010) </li> </ul> <p>I also tried adding the row-wise normalized gene and cell features to my TabNet. It did not have the same boost as the LayerNorm on the MLP, but it still gave some:</p> <ul> <li>Public: 0.01848 -&gt; 0.01843 (-0.00005)</li> </ul> <p>I guess the model finds some relative signals across the columns by doing this.</p> <p>I published the notebook:<br> <a href="https://www.kaggle.com/tnmasui/moa-mlp-with-layernorm-inference" target="_blank">https://www.kaggle.com/tnmasui/moa-mlp-with-layernorm-inference</a></p>
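<p>A possible PyTorch reading of this trick (the layer sizes and MLP body are assumptions, not the published notebook): apply LayerNorm separately to the gene block and the cell block, i.e. row-wise normalization per feature group, before the usual MLP.</p>
<pre><code>import torch
import torch.nn as nn

class GeneCellLayerNormMLP(nn.Module):
    def __init__(self, n_genes=772, n_cells=100, n_targets=206, hidden=512):
        super().__init__()
        self.gene_norm = nn.LayerNorm(n_genes)   # normalizes across the gene columns of each row
        self.cell_norm = nn.LayerNorm(n_cells)   # normalizes across the cell columns of each row
        self.mlp = nn.Sequential(
            nn.Linear(n_genes + n_cells, hidden), nn.ReLU(),
            nn.Linear(hidden, n_targets),
        )

    def forward(self, genes, cells):
        x = torch.cat([self.gene_norm(genes), self.cell_norm(cells)], dim=1)
        return self.mlp(x)

model = GeneCellLayerNormMLP()
out = model(torch.randn(8, 772), torch.randn(8, 100))
</code></pre>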
Mechanisms of Action (MoA) Prediction
MoA solution - Tried to keep it simple
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to Kaggle and the organizers for organizing such a beautiful competition, and congratulations to all participants.</p> <p>There are so many sophisticated solutions provided which contain a lot of learning opportunities. I will try to give a short description.</p> <h3>Models</h3> <p>2 NN models, 1 TabNet. All of them seed averaged.</p> <h3>Feature Engineering</h3> <p>This part is always a nightmare for me. For TabNet, there was no feature engineering. For the NN, the following steps were applied (one after another, up to step 4):</p> <ol> <li>Remove the features whose max(correlation coefficients with all targets) is below a threshold (see the sketch at the end of this post)</li> <li>Distribute the correlated features (derived based on a threshold) across the different seeds</li> <li>Variance Threshold</li> <li>Quantile Transformer</li> <li>PCA (g: 0.4-0.6, c: 0.95-0.97) on raw data (before applying 1)</li> </ol> <p>All done on training data only.</p> <p>Finally, it's an ensemble of these three models.</p> <p>Thanks to the creators of all the public notebooks and discussion posts, which helped in many ways. I especially want to mention a few of them here.</p> <p>Notebooks<br> <a href="https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids" target="_blank">https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids</a><br> <a href="https://www.kaggle.com/hiramcho/moa-tabnet-with-pca-rank-gauss" target="_blank">https://www.kaggle.com/hiramcho/moa-tabnet-with-pca-rank-gauss</a><br> <a href="https://www.kaggle.com/markpeng/deepinsight-transforming-non-image-data-to-images" target="_blank">https://www.kaggle.com/markpeng/deepinsight-transforming-non-image-data-to-images</a> (Though I have not tried it, it is so innovative)</p> <p>Discussions<br> <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">https://www.kaggle.com/c/lish-moa/discussion/195195</a></p> <p>Thanks a lot <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, it is also a big reward for me coming from you.</p>
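<p>Step 1 of the NN feature engineering (dropping features whose maximum correlation with any target is below a threshold) could look roughly like the sketch below; the implementation and threshold are assumptions, not the author's code.</p>
<pre><code>import numpy as np
import pandas as pd

def filter_by_target_correlation(X, Y, threshold=0.05):
    keep = []
    for col in X.columns:
        # max absolute Pearson correlation of this feature against all targets
        corrs = Y.apply(lambda t: np.corrcoef(X[col], t)[0, 1])
        if np.nanmax(np.abs(corrs.values)) &gt;= threshold:
            keep.append(col)
    return keep

# toy usage
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 10)), columns=[f"g-{i}" for i in range(10)])
Y = pd.DataFrame(rng.integers(0, 2, size=(300, 5)).astype(float))
selected = filter_by_target_correlation(X, Y)
</code></pre>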
Mechanisms of Action (MoA) Prediction
Our Bronze Medal Solution (if we had picked it)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! and Thank you so much for your sharing, too!</p> <p>I was trying to sample the data in different ways in the MX10 notebook because the interactions between some targets could be missed with small K depending on the randomness. 1842 is an amazing improvement to my original submission! MX10 really stands out compared to the other models in the correlation plot.</p> <p>I was wrong. Actually it uses 2 hidden layers. </p> <h2>Our Best Solution</h2> <p>I would like to thank the Harvard and Kaggle teams, my teammate <a href="https://www.kaggle.com/hjq133133" target="_blank">@hjq133133</a>, and all other competitors who shared their knowledge and experience. This was the first competition I was ever committed to in Kaggle. I learned and produced a lot, and I truly enjoyed it!</p> <p>Our best solution was in the bronze zone but we did not pick it.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F337744%2F60c2d578227a25901f367e9eb6da2b2c%2FScreen%20Shot%202020-12-01%20at%208.06.58%20AM.png?generation=1606828064660638&amp;alt=media" alt=""></p> <p>This solution, shown in the figure below, was based on a weighted average of three models: <a href="https://www.kaggle.com/tolgadincer/nn-lb-0-1836-t19" target="_blank">a simple NN</a>, TabNet, and <a href="https://www.kaggle.com/tolgadincer/transfer-model-training-v2-t1" target="_blank">ResNet with transfer learning</a>. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F337744%2Fbb9e168cf01bba58b61c32b516647bab%2FFlowchart.png?generation=1606936444743021&amp;alt=media" alt=""></p> <h4>Feature Engineering</h4> <p>The feature engineering, after the data went through a Quantile Transformation, was slightly different for each model but included the following methods:</p> <ul> <li>PCA components</li> <li>Variance threshold</li> <li>Basic statistical quantities</li> <li>Kmeans clusters</li> </ul> <h4>Weight Optimization</h4> <p>At the beginning of the competition, I wrote a <a href="https://www.kaggle.com/tolgadincer/blending-multilabeled-models-with-scipy" target="_blank">code</a> to optimize blend's weights with scipy's optimizer but couldn't get reliable results. Then, I decided to use the Lagrange's multiplier strategy, which worked perfectly. This code is available <a href="https://www.kaggle.com/tolgadincer/ensemble-weight-optimization" target="_blank">here</a>. It's very easy to use and the weights need not be given by hand.</p> <h4>Some of the tricks that improved our experiments</h4> <ul> <li><strong>Bias initialization</strong>: The data was extremely imbalanced and the default bias initialization for the output layers was leading to a very cold start in my initial experiments, not good for an impatient person like me. So, I decided to initialize the output bias with the target mean as I discussed it in <a href="https://www.kaggle.com/c/lish-moa/discussion/185161" target="_blank">here</a>. 
This allowed me to see better CV values, and I believe many of the other competitors benefitted from it.</li> <li><strong>Label smoothing</strong>: A smoothing value of 1e-5 worked very well in all of our experiments. The most critical point was to clip only the training data and not to touch the validation data, as I noted <a href="https://www.kaggle.com/c/lish-moa/discussion/189857#1043953" target="_blank">here</a>.</li> <li><strong>Learning rate schedulers</strong>: I had experience only with the reducelr scheduler, but the public notebooks used different schedulers. I explored how various schedulers change the learning rate <a href="https://www.kaggle.com/tolgadincer/tf-keras-learning-rate-schedulers" target="_blank">here</a>, and this allowed me to fine-tune our models.</li> </ul>
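<p>The bias-initialization trick above can be sketched as follows; this is a PyTorch rendering under the common log-odds interpretation (the author worked in Keras and the exact variant may differ), so treat it as an assumption rather than the original code.</p>
<pre><code>import numpy as np
import torch
import torch.nn as nn

def output_bias_from_targets(y, eps=1e-6):
    p = np.clip(y.mean(axis=0), eps, 1 - eps)   # per-target positive rate
    return np.log(p / (1 - p))                  # inverse sigmoid of the target mean

# toy targets; in the competition y has shape (n_samples, 206) and is very imbalanced
rng = np.random.default_rng(0)
y = (rng.random((1000, 206)) * 100).astype(int) == 0
y = y.astype("float32")

out_layer = nn.Linear(512, 206)
with torch.no_grad():
    out_layer.bias.copy_(torch.tensor(output_bias_from_targets(y), dtype=torch.float32))
</code></pre>
<p>Starting the output bias at the base rates means the first epochs are not spent just learning how rare the targets are, which is what removes the "cold start".</p>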
Mechanisms of Action (MoA) Prediction
Post-Processing Based on Drug Classification (15th place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You are exactly right in your description of the process. <br> I have not applied specific thresholds - 99.9% is just an example I used for illustration. Instead, I used the classification confidence to clip the predictions of my main models. Even if the classification confidence is just 50% or 30% - it's still useful information that improves the overall score.</p> <p>Since you are here, I must mention that the whole approach was inspired by your now-famous post on drug stratification!</p> <p>I have added links to the classification notebook and an inference example - very curious to see if this helps high-performing solutions (if anyone can be bothered). </p> <p>In my case, it helped a lot (0.00009 on the private leaderboard), but I optimized my blend with this post-processing step already included, so it might not be so significant for other models/blends.</p> <p>It turns out, it's very easy to build a model that can detect one of the 8 most frequent drugs with more than 99% accuracy. And once you identify the drug you can predict all 206 labels.</p> <p>How exactly can this information be used? Initially, I experimented with applying various thresholds based on the classification model's confidence. But the approach that worked best was using the classification confidence as the clip value to post-process the results produced by other models (or blends). </p> <p>For example, if the classification model predicts drug A with 99.9% confidence:</p> <ul> <li>Clip model predictions to the minimum of 0.999 for targets corresponding to 1's for drug A</li> <li>Clip model predictions to the maximum of 0.001 for targets corresponding to 0's for drug A. <br> This last bit can be further improved by using the squared confidence, i.e. 1e-6 instead of 1e-3 in my example. I cannot fully explain why this squaring works - my intuition is that this is because of how imbalanced the targets are, so the priors are already very low. </li> </ul> <p>This post-processing approach gave a nice boost (~0.00005) to my models, both CV and LB.</p> <p>Classification notebook (fastai): <a href="url" target="_blank">https://www.kaggle.com/dmitryvyudin/moa-drug-classification-fastai/output</a></p> <p>CV and Inference - based on a popular public notebook: <a href="url" target="_blank">https://www.kaggle.com/dmitryvyudin/pytorch-transfer-learning-with-post-processing</a></p> <p>Before post-processing: CV - 0.01564, public LB - 0.01835<br> After post-processing: CV - 0.01559, public LB - 0.01831</p> <p>Thanks! It's just a simple 2-layer NN with label smoothing and class weights. Re-running the clean version of the notebook now - will add a link later.</p>
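<p>Put as code, the clipping rule reads roughly as below; variable names are illustrative, and the notebooks linked above remain the reference implementation.</p>
<pre><code>import numpy as np

def clip_with_drug_confidence(preds, drug_conf, drug_targets):
    """preds: (n, 206) blend predictions, drug_conf: (n,) classifier confidence
    for the detected known drug, drug_targets: (n, 206) that drug's 0/1 targets."""
    out = preds.copy()
    pos = drug_targets.astype(bool)
    conf = drug_conf[:, None]
    out = np.where(pos, np.maximum(out, conf), out)              # raise the drug's known 1's
    out = np.where(~pos, np.minimum(out, (1 - conf) ** 2), out)  # lower the 0's with squared confidence
    return out

rng = np.random.default_rng(0)
preds = rng.random((5, 206))
conf = np.array([0.999, 0.6, 0.3, 0.95, 0.5])
targets = rng.integers(0, 2, (5, 206))
adjusted = clip_with_drug_confidence(preds, conf, targets)
</code></pre>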
Mechanisms of Action (MoA) Prediction
Private 14th / Public 24th place solution!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for asking, that's a good point!<br> I didn't try blending without the stacking result. At least I don't recall. Hmm, yeah, you made a valid point. </p> <p>To make my result better, I tried stacking, but the CV wasn't good at all. So I just blended all of them and the CV looked good. <br> Looks like beginner's luck, hahaha 😅</p> <p>Right, some NNs could be trained locally. </p> <p>I think for the next competition, I should consider using a 'train outside' strategy for a better result. 2 hours is too short…</p> <p>I hope the solution doesn't disappoint you. </p> <p>Thank you for asking the question.<br> Actually, it didn't give me that much of a better outcome on its own. <br> The thing is, it introduced more diversity than just a simple NN when I blended. </p> <p>As I needed to finish all of it within 2 hours, I fired off a simple NN, hahaha <br> If I had more running time, like 3 hours, I might have used a simple NN as well in the blend. </p> <p>Thank you so much! <br> If my solution helps you, it's my pleasure. 😄</p> <p>Thank you! I hope the notebook will be useful for somebody in some way… </p> <p>I'm glad if you can find something useful in my solution!</p> <p>Thank you!!<br> Really? Good to know! Did you share your final submission? <br> I will take some time over the next few days to read all the solutions. </p> <p>Thank you!!</p> <p>Thank you!!</p> <p>Thank you for your kind comment!<br> Real life ML ninja, I like that. 😁</p> <p>So yeah, actually I thought about that, utilizing the drug data. It came to my mind just a day before the deadline, so I didn't try it.</p> <p>My thought was like this:<br> I thought it could be used in a similar way to the non-scored data. <br> To do this, I think we can group by drug_id and cp_type and take the mean value. Then we will have 872 features, right? Some rows have the same values since we just grouped and took the mean. In this way, I was thinking we can extract some kind of trait of the drugs.<br> Now we have 872 features for the training set.</p> <p>Then we can feed the 872 features as labels to some sort of NN, and then we will have 872 features for the test set. The same thing I did with the non-scored data. </p> <p>I don't know whether it works or not, but I'm just sharing what I was thinking in the last moment. </p> <p>Thank you so much for the kind words!</p> <p>The post-processing part is using the stacking result and all the other results for blending. </p> <p>So I think post-processing wasn't the right word for this. I'm sorry that I made a confusing comment. <br> I didn't know what to call applying stacking and blending at the same time. </p> <p>Thank you!!</p> <p>The post-processing part is using the stacking result and all the other results for blending. </p> <p>So the thing is, I considered stacking and blending as a kind of post-processing. If that was wrong, sorry about it. It gave me a huge jump on the LB. </p> <p>As I'd never seen stacking and blending applied together, I didn't know whether it was the right thing to do or not. </p> <p>Oh, and there's one more thing to add.<br> As I participated in this competition since the beginning, I was reading every single discussion. I'm not saying I understand everything. 
<br> So I feel a bit sentimental about the end of this competition. </p> <p>I'm reading all the other solutions. There are still so many things to learn!! </p> <p>First of all, I want to say thank you to the host, the Laboratory for Innovation Science at Harvard. <br> Also thank you to the Kaggle admins and all the participants! <br> Thanks to everybody here in this competition, I learned a lot. </p> <p>Personally, I want to express my biggest gratitude to <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> and <a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a>!<br> I learned a lot especially from both of you. Kaggle is really a great place to learn! </p> <p>This is the first competition I was fully committed to.<br> It must be one of the most memorable competitions of my entire life! <br> Even though the outcome is partly luck, I'm really excited to have a medal.<br> The reason I'm saying 'luck' is that I was not sure whether what I did was right for the private set. </p> <p>I will explain my solution briefly here. </p> <p>If you have time, please take a look at <a href="https://www.kaggle.com/kokitanisaka/moa-ensemble" target="_blank">my final submission notebook</a>.<br> I explained everything I did in the notebook. <br> As I did everything in one notebook, you can see everything I did. <br> I don't have a local environment, after all. </p> <h1>Overview</h1> <p>This is the flow. <br> The magic is in the blending. I blended all these 6 models, including the stacking model.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4974940%2F1f01596426bda0efd1c21c79f94a6d6d%2Fmoasolution.png?generation=1606792790231187&amp;alt=media" alt=""></p> <h1>Whole solution</h1> <p>Everything is in <a href="https://www.kaggle.com/kokitanisaka/moa-ensemble" target="_blank">the notebook</a>. <br> If you have time, please give me feedback! <br> I still don't know whether I was doing it right or not.</p> <p>Thank you again to all of you!!!<br> I'm looking forward to seeing you guys in other competitions, other platforms, or even in person! 😊</p> <p>Thank you so much! <br> And congratulations to you as well, <a href="https://www.kaggle.com/kushal1506" target="_blank">@kushal1506</a>!</p> <p>I hope you enjoy it! <br> Sorry if it was a bit messy. 😅</p> <p>Thank you <a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a>!<br> I really thank you for your participation in MoA! What you shared is really valuable. </p> <p>Hehe, I'm enthusiastic enough! 😁 Enjoyed it! </p>
Mechanisms of Action (MoA) Prediction
Private 55th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congrats to the winners. </p> <p>Thanks to <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> and the other Kagglers who shared wonderful ideas throughout this competition. </p> <p>Thanks to my wonderful teammates <a href="https://www.kaggle.com/ratan123" target="_blank">@ratan123</a> and <a href="https://www.kaggle.com/jagadish13" target="_blank">@jagadish13</a>. We started the competition pretty late. Our submission was an average ensemble of 4 models: </p> <ol> <li>TabNet</li> <li>PyTorch models based on drug ids </li> <li>Two-heads ResNet</li> </ol> <p>Each model had 49 different predictions, which were averaged out to get the final prediction.</p> <p>We have made our solution public:<br> <a href="https://www.kaggle.com/manojprabhaakr/inference-model-nn-2-models-1-tabnet-and-1-keras" target="_blank">https://www.kaggle.com/manojprabhaakr/inference-model-nn-2-models-1-tabnet-and-1-keras</a></p> <p>What didn't work (at least for us):</p> <ol> <li>ResNet-based feature extraction on top of the existing features. </li> <li>NODE/CatBoost model-based tuning</li> <li>AutoEncoder-based feature selection. </li> <li>EfficientNet-based training (DeepInsight transform). </li> <li>Clustering of features/adding statistical features. </li> <li>Postprocessing - clipping of values (with different thresholds)</li> </ol>
Mechanisms of Action (MoA) Prediction
Private 58th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all prize winners, and thank you to all participants!<br> I'll show my solution briefly.<br> I'm sorry for my spaghetti code …!</p> <p><a href="https://www.kaggle.com/kaerunantoka/moa-blending24/notebook?select=submission.csv" target="_blank">My solution</a> is now public.</p> <h3>Summary</h3> <p>・My final model is an average of 5 models (MLP1, MLP2, MLP3, ResNet, TabNet).<br> ・I decided to do this comp solo, so I tried to generate diversity.<br> ・So there are many minor changes.<br> ・They are mainly based on public notebooks, such as <a href="https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features" target="_blank">kushal1506's</a> and <a href="https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model" target="_blank">rahulsd91's</a>.<br> ・I tuned the weights with the public LB.</p> <h3>MLP1</h3> <p>・train + infer<br> ・preprocess … QuantileTransformer<br> ・FE … SVD, <a href="https://www.kaggle.com/kaerunantoka/pytorch-moa-dae-v5" target="_blank">DenoisingAutoEncoder</a>, VarianceThreshold<br> ・CV … MultilabelStratifiedKFold(n_splits=n_folds).split(X=train, y=target, groups=np.array(train.drug_id.values))<br> ・single seed, val logloss 0.014776957827222726</p> <h3>MLP2</h3> <p>・train + infer<br> ・preprocess … <a href="https://www.kaggle.com/c/lish-moa/discussion/195788" target="_blank">RankGauss</a><br> ・FE … SVD, DenoisingAutoEncoder, VarianceThreshold, training with non-scored targets<br> ・CV … Chris's CV strategy<br> ・5 seeds, val logloss 0.015637875846293534</p> <h3>MLP3</h3> <p>・train + infer<br> ・preprocess … QuantileTransformer<br> ・FE … PCA, cluster, stats, VarianceThreshold, pretrained with non-scored targets<br> ・CV … MultilabelStratifiedKFold(n_splits=7,random_state=seed)<br> ・2 seeds, val logloss 0.014639779385801738</p> <h3>ResNet</h3> <p>・train + infer<br> ・preprocess … QuantileTransformer<br> ・FE … PCA, mean, DenoisingAutoEncoder<br> ・CV … MultilabelStratifiedKFold(n_splits=n_folds)<br> ・3 seeds, val logloss 0.016809864</p> <h3>TabNet</h3> <p>・only infer<br> ・preprocess … RankGauss<br> ・FE … PCA, SVD, stats, cluster<br> ・6 seeds, Overall AUC: 0.7873209725163044, Average CV: 0.016399392395897112</p> <p>Thanks! You too!</p> <p>Thanks Kouki-san, congrats on your solo gold medal!</p> <p>Yes, I think so.<br> In particular, I used different numbers of seeds to differentiate my models from those of other participants. <br> That is because I thought many participants built models based on the same strong public notebooks in this competition.<br> Different seeds lead to (slightly) different results.<br> So, I tried to use different seeds from other participants.</p>
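<p>Since the DenoisingAutoEncoder shows up in several of the FE rows above, here is a loose PyTorch sketch of the idea (the architecture and noise scheme are assumptions, not the linked notebook): corrupt the inputs, train to reconstruct the clean values, and keep the bottleneck activations as extra features.</p>
<pre><code>import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_features=872, hidden=512, bottleneck=128):   # 872 = g- plus c- columns
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_features))

    def forward(self, x, noise_std=0.1):
        noisy = x + noise_std * torch.randn_like(x)   # swap noise is another common choice
        z = self.encoder(noisy)
        return self.decoder(z), z

model = DenoisingAE()
x = torch.randn(16, 872)
recon, dae_features = model(x)
loss = nn.functional.mse_loss(recon, x)               # reconstruct the clean input
</code></pre>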
Mechanisms of Action (MoA) Prediction
Public 46th / Private 34th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! No, we used the ResNet in the submission to get a robust result.</p> <p>Hello, Kagglers! <br> This is the Public 46th / Private 34th solution. </p> <p>Thanks to my wonderful teammates, <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> <a href="https://www.kaggle.com/takanobu0210" target="_blank">@takanobu0210</a> <a href="https://www.kaggle.com/masatakashiwagi" target="_blank">@masatakashiwagi</a> <a href="https://www.kaggle.com/taromasuda" target="_blank">@taromasuda</a>.<br> I enjoyed the discussions with my teammates every weekend!</p> <h3>Final Submission Notebook</h3> <p><a href="https://www.kaggle.com/ttahara/34th-stacking-5-models-by-mlp-1d-cnn-wo" target="_blank">https://www.kaggle.com/ttahara/34th-stacking-5-models-by-mlp-1d-cnn-wo</a></p> <h3>Our Pipeline</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2502130%2Fc5637c0994ec7a680b2f90c0060658f7%2F2020-12-02%2010.42.44.png?generation=1606873393103931&amp;alt=media" alt=""></p> <h3>What worked</h3> <ul> <li>Adding stat features</li> <li>Adding PCA features</li> <li>RankGauss</li> <li>Label Smoothing</li> <li>Transfer learning from non-scored targets for NN, ResNet</li> <li>Shallow models<ul> <li>Short epochs, training the NN right up to the limit before the loss becomes NaN</li> <li>n_steps=1, n_shared=1 for TabNet</li></ul></li> <li>Thresholding NN<br> <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a>'s customized NN, which preprocesses inputs by element-wise linear functions followed by tanh and feeds the preprocessed values into an ordinary MLP. He calls this model the <strong>Thr</strong>esholding NN. </li> <li>Ensembling. In particular, the ensemble of TabNet and NN was effective.</li> <li>2-stage stacking by MLP, 1D-CNN and weight optimization</li> </ul> <h3>What did NOT work</h3> <ul> <li>Self-supervised TabNet</li> <li>Transfer learning from non-scored targets for TabNet</li> <li>Feature selection by TabNet's feature importance</li> <li>Pseudo-labeling</li> <li>Class-balanced loss, focal loss</li> <li>PostPredict for large-logloss targets</li> <li>Predicting drug_id, and the label power set.<br> MoA is determined by drug_id, so we tried to predict drug_id directly (and the target converted to a label power set), but it did not work.</li> <li>Using non-scored targets as features<br> We predicted the non-scored MoAs and added the non-scored OOF and predictions to the train/test data, but it did not work.</li> </ul> <h3>Not enough time</h3> <ul> <li>Target encoding for binned g-/c- features</li> <li>XGBoost, CatBoost, CNN models as single models (Stage 1)</li> <li>GCN model as a stacking model (Stage 2)</li> <li>Netflix Blending</li> <li>PostPredict by LGBM<br> We noticed that there are columns that the NN can't predict but LGBM can (e.g. cyclooxygenase_inhibitor). Therefore, we came up with the idea of re-predicting only the columns that LGBM is good at. But there was not enough time.</li> </ul>
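<p>A guess at the "Thresholding NN" input block described above (element-wise linear functions followed by tanh, feeding an ordinary MLP); the real model surely differs in detail, so read this only as an interpretation.</p>
<pre><code>import torch
import torch.nn as nn

class ThresholdingInput(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        # one learnable scale and shift per input feature (element-wise linear function)
        self.scale = nn.Parameter(torch.ones(n_features))
        self.shift = nn.Parameter(torch.zeros(n_features))

    def forward(self, x):
        return torch.tanh(self.scale * x + self.shift)

model = nn.Sequential(ThresholdingInput(872),
                      nn.Linear(872, 512), nn.ReLU(),
                      nn.Linear(512, 206))
out = model(torch.randn(8, 872))
</code></pre>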
Mechanisms of Action (MoA) Prediction
Silver medal in very first competition!!! (A noob's first Kaggle Experience)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/gauravsahani" target="_blank">@gauravsahani</a> Thank U!</p> <p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> Thank U, sir … Appreciation from you means a lot!!! </p> <p><a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> Thank U !!! Ok, I'm gonna keep your suggestion in mind before publishing anything. Thanks and congrats to u too.</p> <p><a href="https://www.kaggle.com/mhmdsab" target="_blank">@mhmdsab</a> Thank U</p> <p><a href="https://www.kaggle.com/xiaowangiiiii" target="_blank">@xiaowangiiiii</a> Congrats to U too !!!</p> <p><a href="https://www.kaggle.com/riadalmadani" target="_blank">@riadalmadani</a> Thank U ! and Congrats to you too.</p> <p>I remember 3 months ago when I first heard of kaggle as the biggest data science competition platform. A total noob who just started his data science journey taking part in competitions? Seems quite stupid, doesn't it? Competition is not a noob's play. Yeah, true, competition is not a noob's play, but that's what makes kaggle unique, i.e. its so helpful and kind-hearted community working together with one motive: <strong>Learn and Share</strong>. <br> So I stepped into kaggle and there was a buzz about the MoA competition, and I was like I don't stand a chance at all among so many experienced experts, masters, and Grandmasters. Yeah, truly, a noob doesn't stand a chance before them, but then again there comes the motto of kaggle: <strong>Learn and Share</strong>. <br> 2 months ago this noob started the MoA journey in the hope of learning something new regardless of the result and, truly speaking, thanks to the kaggle community I learned a lot, a lot.</p> <p>So, today, after 2 months of hard work and 2 full days of tension regarding final submissions and The Shake Up, our team <code>paracetamol</code> finally won a silver medal.<br> Thanks to my teammates <a href="https://www.kaggle.com/namanj27" target="_blank">@namanj27</a> <a href="https://www.kaggle.com/xiaowangiiiii" target="_blank">@xiaowangiiiii</a> <a href="https://www.kaggle.com/nasirkhalid24" target="_blank">@nasirkhalid24</a> <a href="https://www.kaggle.com/ajaykumar7778" target="_blank">@ajaykumar7778</a> for their constant support and guidance.<br> <strong>I have also made my notebook public; please do visit and review this</strong> <a href="https://www.kaggle.com/kushal1506/moa-prediction-complete-walkthrough-eda-ensemble" target="_blank">Notebook</a>.</p> <p>Also congratulations to the winners. My thanks to <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> and <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> <a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a> for their contributions to this competition.</p> <p>Thank You,<br> Noob</p> <p><a href="https://www.kaggle.com/aeryss" target="_blank">@aeryss</a> Thank U very much!!!</p> <p><a href="https://www.kaggle.com/japandata509" target="_blank">@japandata509</a> Thank U very much!!! 
<strong>Learn and Share</strong></p> <p><a href="https://www.kaggle.com/firenh" target="_blank">@firenh</a> Congratulations to you too … Yeah it's crazy by the way what was your public LB rank?</p> <p><a href="https://www.kaggle.com/pcy105" target="_blank">@pcy105</a> Congratulations to U too!!!</p> <p><a href="https://www.kaggle.com/namanj27" target="_blank">@namanj27</a> Thank U sir!!!</p> <p><a href="https://www.kaggle.com/pengwang91" target="_blank">@pengwang91</a> Thank U but no worries everybody can understand that 😂 Just enjoy the results please 😁</p>
Mechanisms of Action (MoA) Prediction
Silver to gold? All you need is augmentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The idea is quite simple: <strong><em>upsample where positive samples &lt; 10</em></strong>. However, it has reliably boosted our models by 0.0003 - 0.0004: an improvement that is big enough to turn a silver medal into a gold one or bronze to silver. Simply add the following lines to most of the public notebooks:</p> <pre><code>skew_target_cols = train_targets_scored.drop('sig_id', axis=1).sum()[lambda x: x &lt; 10].index.values
print(skew_target_cols)
for col in skew_target_cols:
    tmp_df = train[train[col] == 1].copy()
    for _ in range(4):
        train = pd.concat((train, tmp_df))
print(train.shape)
train = train.reset_index(drop=True)
target = train[train_targets_scored.columns]
</code></pre>
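<p>A caveat worth keeping in mind with this trick: if the duplicated rows are created before the CV split, copies of the same sample can land in both the training and validation folds and make CV look optimistic. A minimal sketch of applying the same upsampling inside each training fold instead, with a plain KFold and placeholder names standing in for the actual setup:</p> <pre><code>import pandas as pd
from sklearn.model_selection import KFold  # stand-in for the multilabel-stratified split

def upsample_rare_targets(df, target_cols, min_pos=10, n_copies=4):
    # Duplicate rows of targets that have fewer than `min_pos` positive samples.
    counts = df[target_cols].sum()
    rare_cols = counts.index[counts.lt(min_pos)]
    extra = [df[df[c] == 1] for c in rare_cols for _ in range(n_copies)]
    return pd.concat([df] + extra).reset_index(drop=True)

target_cols = [c for c in train_targets_scored.columns if c != 'sig_id']
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for trn_idx, val_idx in kf.split(train):
    trn_fold = upsample_rare_targets(train.iloc[trn_idx], target_cols)
    val_fold = train.iloc[val_idx]  # validation folds keep their original rows
    # ... train on trn_fold, evaluate on val_fold
</code></pre>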
Mechanisms of Action (MoA) Prediction
Simple Preliminary 71st Place Solution Explained
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Brother Max, my question</p> <p>Thanks for your kernel. It contributed to my pipeline. Ty for saving all models, oofs and encoders. It's super easy to write an inference kernel on :D</p> <p>I just deleted the bonus. It was superstitious hahaha👀</p> <p>The implementation is very easy if you are using all unseen predictions as pseudo labels.<br> It works as follows:</p> <pre><code>1. Concat train and test features
2. Concat training labels and test predictions
3. Modify train and test together, e.g. add PCA or etc
4. Split folds on the original training set only
5. Add pseudo labels to each fold's training set
</code></pre> <p>I guess the rule of thumb is: if it works on your CV, it works.😄 Congratulations on your solid finish.</p> <p>Thanks to the organizers for this long-awaited tabular competition; it was both fun and challenging. To be honest, I am not good at tabular competitions at all. Thanks to all the public notebook contributors, I learned a lot from all of you guys; you guys are the true heroes of this competition. I also want to say sorry to my friend and ex-teammate <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> for missing his best submission; without your sharing, this competition wouldn't be as popular. I also want to say thank you to my teammates <a href="https://www.kaggle.com/wangyijia" target="_blank">@wangyijia</a> <a href="https://www.kaggle.com/nurmannaz" target="_blank">@nurmannaz</a> for the contributions. Additionally, congrats to my previous teammate <a href="https://www.kaggle.com/tiandaye" target="_blank">@tiandaye</a> for winning his first gold medal🏅️ and my teammate <a href="https://www.kaggle.com/nurmannaz" target="_blank">@nurmannaz</a> for getting his first kaggle medal🥈.</p> <h2>TL;DR</h2> <p>Our final solution is mainly based on public kernels in terms of features. I have no idea how to do feature engineering and I don't know how to validate new features. I tried a couple of methods; they all deteriorated the CV except PCA. With my model architectures and the abuse of pseudo labels, we were able to train some solid models. Along with the public kernel: <a href="https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features" target="_blank">https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features</a> by <a href="https://www.kaggle.com/kushal1506" target="_blank">@kushal1506</a>, we were able to achieve 0.01815 on the public leaderboard and 0.01610 on the private leaderboard.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2Ff19ece2ff6e5ad0f0895cf56f3e4b91f%2Fmoa_final_solution.png?generation=1606786588701522&amp;alt=media" alt=""></p> <h2>Timeline</h2> <p>I started this competition at the beginning and tried a few submissions. I figured it was quite difficult to use LightGBM/XGBoost and NNs were taking over. Then I took a break from this competition to participate in a CV competition. I rejoined the competition 13 days before the competition ended. 
There were already many great kernels and diverse methods shared in the notebooks section; I went through some of them and decided to tune some parameters and change the model architecture. I then teamed up with <a href="https://www.kaggle.com/nurmannaz" target="_blank">@nurmannaz</a>, who had a public leaderboard score of 1811. The 1811 submission itself was very important for the further development of my models using pseudo labels; however, since this is <a href="https://www.kaggle.com/nurmannaz" target="_blank">@nurmannaz</a>'s first kaggle competition, he wasn't aware that there's a private dataset, so the submission itself was invalid. I had to rebuild everything from scratch and it was a lot of work. I began to train models and write inferences.</p> <h2>Models</h2> <p>I came up with a method modified from <a href="https://arxiv.org/abs/1507.06228" target="_blank">https://arxiv.org/abs/1507.06228</a>. It can be a good replacement for a simple dense layer, implemented below:</p> <pre><code>import torch
import torch.nn as nn

class Dense_Alternative(nn.Module):
    def __init__(self, h1, h2, f):
        super().__init__()
        self.nonlinear = nn.Linear(h1, h2)  # nonlinear branch (activation f applied on top)
        self.linear = nn.Linear(h1, h2)     # plain linear branch
        self.switch = nn.Linear(h1, h2)     # gate that mixes the two branches
        self.f = f                          # activation function, e.g. nn.ReLU()

    def forward(self, x):
        switch = torch.sigmoid(self.switch(x))
        nonlinear = self.f(self.nonlinear(x))
        linear = self.linear(x)
        x = (1 - switch) * linear + switch * nonlinear
        return x
</code></pre> <p>It allows the neural network to decide the weight between linear and nonlinear units.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F9df6f34914348fa6cf02f5e5d408f1c3%2Fdense_alt.png?generation=1606788885682182&amp;alt=media" alt=""></p> <h2>Validation Strategies</h2> <p>I used both MultilabelStratifiedKFold and <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s folds based on drug ID. I hoped two different validation schemes would result in a more stable ensemble. I still don't know if it helped or not.</p> <h2>Pseudo Labels (PL)</h2> <p>PL is basically using the unseen data's predictions as training data. Note that we should not use any of the samples in the unseen data for validation. To my understanding, there are two ways of using pseudo labels: one is to use the entire predictions, which is what I did in this competition; the other is to use soft labels based on probability, e.g. selecting samples with a certain probability threshold. I think PL is a very important part of my solution as it improved my CV and LB significantly. Moreover, I noticed the quality of the pseudo labels really matters. A 1810 submission can improve my CV a lot more than a 1850 submission. There are some potential risks of using pseudo labels, namely making wrong predictions even more wrong, but I will not dive into this here. </p> <p>An example usage of pseudo labels: in this case I first pretrained on all targets and then added the pseudo labels when fine-tuning.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F288261cf066410c82f19681608e14dde%2Fpl_Explained.png?generation=1606791330584992&amp;alt=media" alt=""></p> <h2>Result</h2> <p>I tried to get more diverse models regardless of public LB and I took the mean of my selected models. It resulted in our best submission. I wish I had a bit more time. 
It was a rush to get all these done in two weeks.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F12367d07d3f18922e5dc6c3b0d27fd4d%2Fscore.png?generation=1606789822769643&amp;alt=media" alt=""></p> <h2>Fin</h2> <p>Thanks for reading this. Please comment below if you have any questions. This solution is a lot worse than the best solutions, but I hope it helped. I look forward to reading all your solutions.</p>
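<p>To make the pseudo-label recipe above concrete, here is a minimal sketch of the fold construction only, with placeholder arrays and a plain KFold standing in for the multilabel-stratified splits actually used; it illustrates the idea, not the competition code.</p> <pre><code>import numpy as np
from sklearn.model_selection import KFold

# Placeholder data: labeled train set plus unseen test set with blended soft predictions.
X_train = np.random.rand(1000, 875)
y_train = np.random.randint(0, 2, (1000, 206)).astype(float)
X_test = np.random.rand(300, 875)
y_test_pred = np.random.rand(300, 206)  # pseudo labels from a strong earlier submission

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for trn_idx, val_idx in kf.split(X_train):            # folds come from the original train only
    X_trn = np.vstack([X_train[trn_idx], X_test])     # pseudo-labeled rows join the training fold
    y_trn = np.vstack([y_train[trn_idx], y_test_pred])
    X_val, y_val = X_train[val_idx], y_train[val_idx]  # validation never contains pseudo labels
    # ... fit the fold model on (X_trn, y_trn) and evaluate on (X_val, y_val)
</code></pre>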