Corporación Favorita Grocery Sales Forecasting
my 10 day journey in this competition and solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@Linzhi Hmm, then it is quite strange why my models preserve CV_Public LB score consistency, while your models preserve CV_Private LB score consistency, given that we were using similar CV mechanisms...</p> <p>Hmm, on second thought, a possible explanation is that this might be due to the different types of feature engineering efforts (and/or modeling approaches; I didn't try any sequence models in this competition) between us (at least, this is the only thing I can imagine for now). Namely, some types of feature engineering might work for the validation period and the public LB period, while other types might work for the validation period and the private LB, due to some subtle difference between the public and private LB that we haven't identified yet.</p> <p>@Lingzhi: You mentioned that your CV and private LB are somewhat consistent, but not the public LB. Mine is the other way around, namely, CV and public LB are consistent, but not the private LB. May I know which validation periods/methods you were using?</p> <p>Thanks for the info. But why? There should not be any bias for item_nbr between the public LB and the private LB, so this factor cannot explain why my CV_Public LB relationship is consistent while the consistency breaks on the private LB. As I mentioned above, with my CV mechanism, adding item_nbr as well as the other category variables improved both my local CV score and my public LB score consistently by about 0.001+, while the private LB score stayed the same. This seems very strange to me, and I have no clue why. Could anyone else confirm whether they encountered a similar situation (i.e., improving both CV and public LB scores consistently, but not the private LB) when adding the category variables?</p> <p>@CPMP: Yeah, I just have a feeling that we might still be missing something important but subtle, and that if we identified it, it might be possible to achieve a consistent CV_Public LB_Private LB score relationship, rather than making the final private LB something of a lottery. My feeling comes from two observations: 1) my CV_Public LB relationship is reasonably consistent for my feature engineering ideas; 2) it seems to me that there is not much difference/bias between the public LB and the private LB in terms of the feature engineering efforts I tried. Thus, unless I missed something, I have the feeling that a consistent CV_Public LB_Private LB score relationship might be possible, and that there might just be some subtle issue, hard to figure out, that we haven't found yet. That's the main motivation for initiating this discussion about the CV_LB relationship here.</p> <p>@CPMP: Yeah, the thing that confuses me is that adding the category features from stores.csv and items.csv only broke my CV_Private LB relationship, while the CV_Public LB relationship was still preserved nicely. If it had also broken my CV_Public LB relationship, then I would say it was probably over-fitting to CV and generalizing worse. But since it improved both the CV and public LB scores on a similar scale yet did not improve the private LB score, I remain confused.</p>
<p>&gt; Improved private LB score by .002 (or less)</p> <p>@Nicolas: So it seems that adding those category variables did contribute to score improvements in all of your CV, public LB and private LB scores, which is different from my case. May I know which validation period you were using? Also a 16-day period starting at 2017/07/26? Or something else?</p> <p>Thanks for the feedback, CPMP.</p> <blockquote> <p>Let me reverse the question: why would item_nbr help at all? Why would the sales pattern for a given item be significantly different from another one?</p> </blockquote> <p>For me the underlying assumption is that the sales patterns of different items (there are about 10k different items) are possibly somewhat different from each other, though the extent of dissimilarity is hard to tell. At least, it seems fair to say that past sales data for the same item is probably a bit more helpful than past sales data for a different item. My intuition is that adding the item_nbr feature (as well as other category features like store_nbr, family, cluster, etc.) helps the lgb model more easily identify the most similar records and give them a bit more weight; without item_nbr, since there are about 10k items in total, such information might simply be buried in the large number of records and be harder for lgb to identify. The same holds for the other category variables like store_nbr, etc. Anyway, the key of feature engineering is just to make such information more easily captured by models like lgb. Theoretically speaking, all the information is already contained in the given raw data; our feature engineering efforts just convert that information into a form that is easier for machine learning models to capture, rather than creating new information.</p> <p>Also note that adding all those category variables (store_nbr, item_nbr, family, class, cluster, etc.) in total gave about a 0.001+ improvement in both my CV and public scores, which is not very large but should be nontrivial. It does not necessarily require "the sales pattern for a given item being 'significantly' different from another one" for item_nbr to be helpful to the extent of a 0.001+ score improvement. As long as there is some nontrivial dissimilarity among the sales patterns of different items (and since there are about 10k items in total, I do bet this is true), it is reasonable to expect adding item_nbr to be helpful (though it would also be acceptable if it were not helpful, considering the generalization point you mentioned); that is, item_nbr being helpful does not seem like a weird result to me.</p> <p>What confused me was why adding such category variables (item_nbr, store_nbr, family, cluster, etc.) helps both my CV and public LB, but not the private LB. The point you raised about pay day falling in the validation period is indeed a factor that I never considered in my experiments, and it might possibly explain the broken CV_LB relationship on the private LB, though I am not sure since I didn't run experiments on this. Also note that the top-1 team used exactly the same validation period (2017/07/26-2017/08/10) as me but did not suffer the same issue on the private LB, so I am not sure whether the issue is really associated with my validation period choice...</p> <p>&gt; From your shared methods, you are using 20170503-20170719 for training and 20170726-20170810 for validation.
When you add new features each time, you use the validation score to evaluate them, and if the new features improve (decrease) the validation score for the first 5 days, the last 11 days, and the full 16 days, then you add these features. Is my understanding right?</p> <p>&gt; What confuses me is how to debug the overfitting? Can you give some insights on it? Thanks.</p> <p>@Feng: Yes, when I add new features, I check the CV score, and if they improve the CV score on all of the first 5-day period, the last 11-day period and the full 16-day period, then I add the features. Of course, in addition to that, we need to make sure there is no bug in the code, and that the feature engineering does not use leaks from the true target (since such leaks won't be available at all for the LB period) to obtain the CV score improvement. If everything seems fine, then I make submissions to check the LB score and verify that the features indeed work on the LB period. If not, then I question: 1) whether there are any subtle bugs (including future label leaks, etc.) in my code that contributed to the CV score improvement; if yes, fix them; 2) whether there is some significant dissimilarity between my CV set and the test set; if yes, try my best to make the CV set as similar to the test set as possible.</p> <p>Finally, if your feature engineering improves both your CV score and LB score consistently, and you think your CV set is very similar to the test set, then you can have some confidence that such engineered features are indeed good/helpful features.</p> <p>For this competition, my feature engineering efforts improved my CV and public LB scores consistently, but were not consistent for the private LB. I bet there is some key (but subtle) difference between the public LB period and the private LB period that we haven't identified yet, which contributed to this phenomenon.</p> <p>Hope it helps.</p> <p>@feng: Let me clarify a little bit more. My framework/flowchart is similar to the one shared in the popular lgb starter script, such as this one <a href="https://www.kaggle.com/tunguz/lgbm-one-step-ahead-c8de0f">https://www.kaggle.com/tunguz/lgbm-one-step-ahead-c8de0f</a>. The training data (20170503-20170719) includes 12 weeks in total. Similar to the starter script, we construct the feature set x_temp, y_temp (lines 113-121 in the attached starter script code) on each Wednesday, where x_temp includes all historical features (e.g., past sales, past sales averages, past sales weekly averages, DayOfWeek averages, and the associated stats features (median, max, std, sum), etc.), while y_temp is the sales for the future 16-day period that we need to predict. Then we append/concatenate each week's {x_temp, y_temp} together as the final combined train set.</p> <p>Take the week starting on 20170719 (a Wednesday) for example. The y_temp is the sales of the future 16-day period (20170719-20170804), and the x_temp contains the past sales averages (and weekly averages, DayOfWeek sales/average sales, etc.) up to 20170718. When I generate the statistical features, these "target_encoding" type features use the most important past_sales features in x_temp (which only include sales information on or before 20170718) as the target, use the category variables (store_nbr, item_nbr, family, class, cluster, city, state, etc.) as the key, and generate the corresponding stats (mean, median, std, sum). This is what I call "target_encoded"-like statistical features. Obviously, since such generated stats features only use information in x_temp (which only includes sales information on or before 20170718), they cannot contain leaks from the future 16-day sales labels y_temp.</p>
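<p>To make this weekly construction concrete, here is a minimal sketch in the spirit of the referenced starter script (assuming <code>df_2017</code> is a wide dataframe of 2017 unit sales with one row per store_item pair and one column per date; the feature names, windows, and helper shown here are illustrative, not my exact code):</p> <pre><code>import numpy as np
import pandas as pd
from datetime import date, timedelta

def get_timespan(df, dt, minus, periods):
    # slice the `periods` daily columns starting `minus` days before dt
    return df[pd.date_range(dt - timedelta(days=minus), periods=periods)]

def prepare_dataset(df_2017, t2017, is_train=True):
    # x_temp: features built only from sales strictly before t2017
    X = pd.DataFrame({
        "mean_7_2017": get_timespan(df_2017, t2017, 7, 7).mean(axis=1).values,
        "mean_14_2017": get_timespan(df_2017, t2017, 14, 14).mean(axis=1).values,
        "mean_30_2017": get_timespan(df_2017, t2017, 30, 30).mean(axis=1).values,
    })
    if is_train:
        # y_temp: the next 16 days of sales starting on t2017 (the Wednesday)
        y = df_2017[pd.date_range(t2017, periods=16)].values
        return X, y
    return X

# one {x_temp, y_temp} block per Wednesday, concatenated into the final train set
X_l, y_l = [], []
for i in range(12):
    t2017 = date(2017, 5, 3) + timedelta(days=7 * i)   # 2017-05-03 .. 2017-07-19
    X_tmp, y_tmp = prepare_dataset(df_2017, t2017)
    X_l.append(X_tmp)
    y_l.append(y_tmp)
X_train, y_train = pd.concat(X_l, axis=0), np.concatenate(y_l, axis=0)
</code></pre>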
<p>Note that "target_encoding" type features for category variables are used quite extensively (and are in many cases very helpful) in many different Kaggle competitions, and you can find a lot of them by looking at previous competitions. Usually, in those past competitions, the "target" meant the true label that we need to predict, and we have to be very careful (and a bit tricky), using techniques like out-of-fold labels, leave-one-out encoding, etc., to reduce overfitting to CV when generating such "target_encoding" features. Fortunately, this is a time series competition, and thus we have past sales data, which is very predictive (past sales are the top features based on my lgb feature importance). So I just use such past sales as the "target" and generate the "target_encoding"-like features with the category variables as keys. In this way, leaks from future information (i.e., the true label, the sales of the future 16-day period) are avoided completely, so if the CV score improves with such features, the improvement cannot be due to a leak from the future label (y_temp) but only to these being genuinely helpful engineered features (i.e., they become normal feature engineering not associated with the true label y_temp), at least for the time period considered in CV. And if such features improve both CV and LB scores consistently, we can have some confidence that they indeed work for both the CV and LB periods.</p> <p>Hope it helps.</p> <p>As promised in the post <a href="https://www.kaggle.com/c/favorita-grocery-sales-forecasting/discussion/47326">https://www.kaggle.com/c/favorita-grocery-sales-forecasting/discussion/47326</a>, I am now sharing my 10-day journey in this competition and my approach^_^</p> <p>In addition to sharing my approach, I am also very interested in discussing why there was such a big shake-up on the private LB, and I would love to hear your thoughts and/or findings as well^_^</p> <p>(Warning: this is going to be a long post, so I hope you have some patience^_^)</p> <p>As I mentioned in that post, I entered this competition quite late (when there were only 10 days left). At that time I had just finished the last masters-only competition and taken a short break. Since I had some spare time, I planned to explore this competition a bit, simply for fun. In the end, I ranked 8th on the public LB (score 0.503), but unfortunately dropped a bit on the private LB (score 0.516). In general, though, I am quite happy and satisfied with the score I achieved in this competition, given that I entered it so late.</p> <hr> <p><strong>Before introducing the flowchart of my approach, I want to talk a bit about the CV_LB relationship as well as the final private LB shake-up.
I would also love to hear your thoughts and findings on this.</strong></p> <p>As I mentioned in the previous post <a href="https://www.kaggle.com/c/favorita-grocery-sales-forecasting/discussion/47326">https://www.kaggle.com/c/favorita-grocery-sales-forecasting/discussion/47326</a>, my local CV score and public LB score correlate reasonably well, especially when comparing the same type of models (e.g., comparing different feature engineering ideas for lgb models, or for nn models; for the comparison between nn and lgb models the CV-LB gap was a bit different, as nn showed a significantly better CV score than lgb on a similar feature set while performing slightly worse than lgb on the LB). This enabled me to make progress fairly smoothly on the public LB and climb to 8th place within the limited 10 days. The final public LB-private LB shake-up is much larger than I expected given my reasonably good CV-LB relationship, and I haven't fully figured out why. I did expect some medium amount of shake-up between the public and private LB rankings, but definitely not as large as it turned out to be. I understood that the promotion information is biased between the train and test set, and that feature engineering based on such promotion information can easily over-fit if not handled very carefully; I guess many competitors relied on promotion-based feature engineering ideas to improve their public LB scores, which might to some extent explain the big shake-up. But I cannot be certain about this, since I don't have full knowledge of how other competitors obtained their public LB scores.</p> <p>On my side, I did almost zero additional feature engineering based on the promotion variable (beyond what is already used in the shared lgb starter script), and my improvement of the public LB score from 0.513 (the lgb starter script score) to 0.505 for a single lgb relied on feature engineering that is not related to the promotion variable at all, with a somewhat consistent (though not perfect) local CV-public LB relationship. I am a bit confused as to why the CV-LB relationship for my models is much weaker on the private LB, even though I used almost no additional promotion-based features (compared with the lgb starter script) to achieve my CV and public LB improvements. Ironically, it turned out that my 1st entry in this competition, an lgb scoring 0.511 on the public LB and 0.517 on the private LB, is my best single model on the private LB. In comparison, my final best single lgb on the public LB (0.505 public) also gave only 0.517 on the private LB (and was slightly worse there than my 1st entry lgb). In other words, all my added feature engineering (none of it related to the promotion variable) improved both my local CV and public LB scores significantly and fairly consistently, yet turned out to add no extra value (and even had a small negative effect) on the private LB score compared with my 1st entry lgb.
Bias in the promotion variable between the train and test sets can to some extent explain the public/private LB shake-up for many competitors if they relied on promotion-based features to climb the public LB, but it still cannot explain many other scenarios, such as my case here, where almost no promotion-based engineered features were used to achieve the CV/public LB score improvements and the CV_Public LB pattern was fairly consistent.</p> <p>I would really like to hear the thoughts and experiences of other competitors to understand what actually caused this phenomenon. In addition to the promotion variable bias, is there anything else important we missed that contributed to the shake-up? Or is it just due to randomness/unstable time series trends, so that the shake-up is simply a matter of luck? If many of us are willing to share our thoughts and findings, maybe we can finally figure out the root cause of such a big shake-up and understand this problem much better, rather than simply jumping to the conclusion that the shake-up is due to a lack of comprehensive CV experiments or to bias in the promotion variable.</p> <hr> <p>Now let me start introducing my approach.</p> <p>First of all, credit goes to the people who shared the great starter scripts in the kernels, especially <a href="/tunguz">@tunguz</a> and <a href="/ceshine">@ceshine</a>. Your starter scripts (see links below) gave me a great starting point in this competition and enabled me to quickly work on the key feature engineering and modeling experiments for fast progress. Actually, these shared starter scripts were also a major motivation for me to enter this competition with only 10 days left; usually I only enter a competition when there is much more time remaining.</p> <p><a href="https://www.kaggle.com/tunguz/lgbm-one-step-ahead-lb-0-513">https://www.kaggle.com/tunguz/lgbm-one-step-ahead-lb-0-513</a> <a href="https://www.kaggle.com/ceshine/lgbm-starter">https://www.kaggle.com/ceshine/lgbm-starter</a></p> <hr> <p><strong>Sample Data Selection:</strong></p> <p>My final models used only 2017 data to extract features and construct samples. I tried to include more data, such as the 2016-Jul and 2016-Aug data (similar month/week periods to my validation period and the LB period) to capture the monthly dynamics, but it helped only marginally or not at all, depending on the model, while taking significantly longer to train. So I decided to drop that data from my final models.</p> <p><strong>train data:</strong> 20170503-20170719 for CV experiments, and 20170503-20170809 for LB experiments (namely, for LB submissions I retrained the models adding the most recent days of data, since they are closest to the LB period and might be most valuable).</p> <p><strong>validation data:</strong> a 16-day period, 20170726-20170810 (I also tried the validation period 20160817-20160901 for some of my nn models at the very end of the competition, but only as a quick trial since I had little time left, and it turned out to perform worse than 20170726-20170810). I computed the CV score for the first 5 days, the last 11 days, and the full 16 days (simulating the LB split). I observed fairly consistent CV-LB score improvement trends in all of my experiments, especially when comparing the same type of models (e.g., different lgbs, or different nns). For my feature engineering efforts, I even observed a larger CV score improvement on the last 11 days (similar to the private LB) than on the first 5 days (similar to the public LB), and thus at some points near the end of the competition I believed I would probably climb up a couple of spots on the private LB because of these CV-LB findings. Unfortunately, it didn't work out as nicely as I expected and I dropped a little bit. I still don't understand why.</p>
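<p>For reference, the per-period validation scoring can be sketched as follows (assuming, as in the starter scripts, that targets and predictions are already on the log1p scale; <code>y_val</code>, <code>y_val_pred</code> and <code>items</code> are illustrative names for (rows, 16) validation arrays and the per-row item metadata, and the 1.25 perishable weighting follows the competition metric):</p> <pre><code>import numpy as np

def nwrmsle(y_true, y_pred, weights):
    # y_true / y_pred are already log1p-transformed, as in the starter script
    err = (y_pred - y_true) ** 2
    return np.sqrt(np.sum(weights * err) / np.sum(weights))

# perishable items get weight 1.25, everything else weight 1
w = items["perishable"].values * 0.25 + 1
w16 = np.tile(w[:, None], (1, 16))          # one weight per (row, day) cell

# y_val, y_val_pred: (n_rows, 16) matrices for 2017-07-26 .. 2017-08-10
print("first 5 days :", nwrmsle(y_val[:, :5], y_val_pred[:, :5], w16[:, :5]))
print("last 11 days :", nwrmsle(y_val[:, 5:], y_val_pred[:, 5:], w16[:, 5:]))
print("full 16 days :", nwrmsle(y_val,        y_val_pred,        w16))
</code></pre>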
<p>Since the starting date of the LB period, 2017/08/16, is a Wednesday, it is essential to also set the starting date of the validation period to a Wednesday to preserve the DayOfWeek pattern.</p> <p><strong>One key trick to make my local CV and public LB scores more consistent is as follows</strong>:</p> <p><strong>Step I:</strong> I extract all the store_item combinations that appeared in the period from 2017-07-01 to 2017-08-15.</p> <p><strong>Step II:</strong> I keep only the training data records (and drop all other records) whose store_item combination appears in the combinations extracted in Step I. The underlying assumption is that, if an item was not sold at all (or had identically zero sales) in a store during the period 2017-07-01 to 2017-08-15, it is very likely that it won't be sold (or will have identically zero sales) in the LB period 2017-08-16 to 2017-08-31 either. Looking at the train data, it seems that new store_item combinations are added very slowly to the train set, and over a short period like 16 days the number of newly added store_item combinations (with non-zero sales) is almost negligible.</p> <p><strong>Step III:</strong> I also keep only the test data records whose store_item combination appears in the combinations extracted in Step I. I always predict zero target values for the other store_item combinations that do not belong to the combinations extracted in Step I.</p> <p>(This validation trick was also partially inspired by Kaggle admin Inversion's feedback on the unseen new items in this post, where he mentioned: "I'm jumping in here late, regarding the discussion of new products, in order to clarify a bit. As has been mentioned, there are a very small number of new items that were introduced in the test set. That does not explain the apparent high number of new items seen on the first day of the test set. The reason for this is that the training set does not include records for zero sales. The test set, though, includes all store / item combinations, whether or not that item was seen previously in a store." Thus I assumed that the "apparent high number of new items seen on the first day of the test set" are mostly items that have been seen previously in a store but just with zero sales, while the additional "small number of new items that were introduced in the test set" are the new store_item combinations that appear when comparing later dates of the test set with its first date. Since this amount is small, I assumed it was safe to simply always predict zero target values for all store_item combinations that do not belong to the combinations extracted in Step I. This trick indeed made my CV-public LB relationship reasonably consistent, and I thought I had got it. But I don't know why the same consistency was not preserved on the private LB...)</p>
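<p>A minimal pandas sketch of this 3-step filtering is shown below (the dataframe and column names are illustrative, not my exact code):</p> <pre><code>import pandas as pd

# Step I: store_item combinations that appear between 2017-07-01 and 2017-08-15
recent = df_train[df_train["date"].between("2017-07-01", "2017-08-15")]
keep_pairs = recent[["store_nbr", "item_nbr"]].drop_duplicates()

# Step II: keep only training rows whose (store_nbr, item_nbr) pair appears in Step I
df_train_kept = df_train.merge(keep_pairs, on=["store_nbr", "item_nbr"], how="inner")

# Step III: the same filter on the test rows; every other test row gets a zero prediction
df_test_kept = df_test.merge(keep_pairs, on=["store_nbr", "item_nbr"], how="inner")
submission = df_test[["id"]].copy()
submission["unit_sales"] = 0.0          # default for store_item pairs unseen in Step I
# model predictions are then filled in only for the rows in df_test_kept
</code></pre>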
<p>After this 3-step trick, and using the 16-day validation period 20170726-20170810 (with train period 20170503-20170719), I observed reasonably consistent CV_LB relationships when testing my feature engineering ideas, especially when comparing the same type of models (different lgbs, or different nns), and it helped me a lot to make smooth progress on the public LB.</p> <hr> <p><strong>Feature Engineering:</strong></p> <ol> <li><p>Follow the lgb starter scripts and add more past sales (and DayOfWeek sales) average/weekly-average features. I also added (mean, median, std, sum, max, min) stats for many of these past sales average/weekly-average features. This basically gave my 1st entry submission of this competition, which scored 0.511 on the public LB and 0.517 on the private LB. It seems to me that this part did not break the CV-Public LB-Private LB relationship yet.</p></li> <li><p>Add a few more promotion features similar to those of the starter lgb script (similar to the "promo_14_2017", "promo_60_2017", "promo_140_2017" type features, but using more periods like 7, 21, 35, etc.). This improved both the CV and public LB scores by about 0.0015, but the private LB score worsened from 0.517 to 0.518. I guess bias in the promotion variable might to some extent explain this break in CV-LB score consistency, but it can't fully explain it, since the CV_public LB consistency is still preserved when adding these promotion-based features. These few features are the only additional promotion-based features used in my final public-LB 0.505 lgb. From here on, all the feature engineering efforts that improved my public LB score from 0.509 to 0.505 (and also improved my CV score on a similar scale) are not related to the promotion variable at all.</p></li> <li><p>Add the category features from stores.csv and items.csv and apply label encoding. This improved both the CV and public LB scores by about 0.001+, giving a 0.508 public LB score for a single lgb. The private LB score stayed the same at 0.518. I'm not sure why; such category features should not be likely to break the CV-LB relationship, since they are not biased. Is it possible that this is simply because the time series trend in the LB period is not very stable, and luck played a significant role? Or is there anything important we missed here?</p></li> <li><p>Add statistical features (or "target_encoding"-like features for category variables): I compute several statistics of selected targets for different keys over different time windows (this is somewhat similar to what was shared in the 1st place solution, the main difference being the choice of target and key, as follows; a minimal code sketch is given after the key/target/method description below).</p></li> </ol> <p>key: 'store_nbr', 'item_nbr', 'family', 'class', 'city', 'state', 'type', 'cluster' (i.e., all the used category variables)</p> <p>target: I output the lgb feature importance for the above lgb and select the most important past sales average/weekly-average features &amp; DayOfWeek features, as well as the associated stats features (std, sum, median, etc.). In particular, the complete list of my chosen "target" features is as follows: 'mean_7_2017_01', 'mean_14_2017_01', 'mean_21_2017_01', 'mean_35_2017_01', 'mean_30_2017_01', 'mean_60_2017_01', 'ahead7_6', 'ahead7_5', 'ahead7_4', 'ahead7_3', 'ahead7_2', 'ahead7_1', 'ahead0_6', 'ahead0_5', 'ahead0_4', 'ahead0_3', 'ahead0_2', 'ahead0_1', 'mean_4_dow0_2017', 'mean_8_dow0_2017', 'mean_16_dow0_2017', 'mean_4_dow1_2017', 'mean_8_dow1_2017', 'mean_16_dow1_2017', 'mean_4_dow2_2017', 'mean_8_dow2_2017', 'mean_16_dow2_2017', 'mean_4_dow3_2017', 'mean_8_dow4_2017', 'mean_16_dow3_2017', 'mean_4_dow4_2017', 'mean_8_dow4_2017', 'mean_16_dow4_2017', 'mean_4_dow5_2017', 'mean_8_dow5_2017', 'mean_16_dow5_2017', 'mean_4_dow6_2017', 'mean_8_dow6_2017', 'mean_16_dow6_2017', 'day_1_2017', 'day_2_2017', 'day_3_2017', 'day_4_2017', 'day_5_2017', 'day_6_2017', 'day_7_2017'</p> <p>method: mean, median, max, min, std</p> <p>Note that, in order to avoid any possible future target information leak, these statistical features were generated independently for each week of the training data, validation data, and test data.</p>
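<p>A minimal sketch of how such per-key statistics might be generated for one weekly block is shown below (the column names are illustrative; the actual targets are the features listed above):</p> <pre><code>import pandas as pd

def add_key_stats(X, keys, targets, stats=("mean", "median", "max", "min", "std")):
    # X: one weekly feature block that already contains the label-encoded category
    # columns (keys) and the selected past-sales features (targets). Called once per
    # week, so the derived statistics cannot leak information from that week's labels.
    for key in keys:
        grouped = X.groupby(key)[list(targets)].agg(list(stats))
        grouped.columns = ["%s_%s_%s" % (key, col, stat) for col, stat in grouped.columns]
        X = X.join(grouped, on=key)
    return X

# e.g. inside the weekly feature construction:
# X_tmp = add_key_stats(X_tmp,
#                       keys=["store_nbr", "item_nbr", "family", "class",
#                             "city", "state", "type", "cluster"],
#                       targets=["mean_7_2017_01", "mean_14_2017_01", "mean_4_dow0_2017"])
</code></pre>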
<p>So in total there are about 1400+ such statistical features, expanding my full feature set to about 2300+ features. I then run lgb, output the feature importance, and select only the most important 400 features for my later modeling (both lgb and nn). This feature selection reduces the training time significantly and also improved the CV and public LB scores by about 0.0006 by itself.</p> <p>These statistical features were very helpful for improving my CV and public LB scores. In particular, they improved my single lgb from 0.508 to 0.505 on the public LB, and a similar scale of improvement was observed in local CV. However, the private LB score only improved slightly, from 0.518 to 0.517.</p> <p>My nn models were basically trained on the same feature set as my best single lgb, except that I replaced the label-encoded category features with one-hot encoded (i.e., dummy) category features, since label encoding of category variables doesn't make much sense for models like nn. My best single nn scored 0.506 on the public LB &amp; 0.520 on the private LB, and an average of two nns with small differences gave me 0.505 on the public LB and 0.518 on the private LB. Note that I only started exploring nn models when there were only 2 days left (after I was almost done with the lgb model and feature engineering), so there was not much time to tune the nn; I just made it work somehow.</p> <p>The ensemble of my best lgb and nn gave my final score, 0.503 on the public LB and 0.516 on the private LB. The ensembling of nn and lgb does seem to preserve the CV_Public LB_Private LB relationship; namely, it improved all three scores by about 0.002.</p> <p><strong>To sum up, based on my experience, what seems to preserve the CV_Public_Private score relationships is</strong>:</p> <p>a) feature engineering effort 1 described above (adding more past sales weekly averages and DayOfWeek sales, as well as the associated mean, std, min, max, sum stats features; this preserves the CV-Public-Private relationship nicely);</p> <p>b) feature engineering effort 4 described above (the "target_encoding"-like statistical features for category variables; this preserves the CV_Pub_Private relationship reasonably well, but not perfectly: the CV/public LB scores improved by about 0.003, while the private LB score improved by about 0.001);</p> <p>c) the ensemble of nn and lgb.</p>
<p><strong>What seems not to preserve the CV_Public_Private score relationships:</strong></p> <p>a) feature engineering effort 2 described above (i.e., adding some more promotion-based features similar to the lgb starter script's "promo_14_2017", "promo_60_2017", "promo_140_2017" type features, but using more periods like 7, 21, 35, etc.). This can be somewhat understood as a consequence of the bias in the promotion variable.</p> <p>b) Adding the category features from stores.csv and items.csv with label encoding. The fact that this one didn't preserve the CV_Pub_Private relationship leaves me completely confused, since these features are not biased between the CV, public and private periods. And note that the CV_Pub score relationship is still nicely preserved when adding such category variables; I'm not sure why it didn't work on the private LB.</p> <p>c) Feature engineering effort 4 described above (the "target_encoding"-like statistical features for category variables): it improved both the CV and public LB scores by about 0.003+, but improved the private LB score by only about 0.001, which also confuses me a bit. These features are not related to the promotion variable at all, so why did they break the CV_Private LB relationship (given that the CV_Public LB relationship was still nicely preserved)?</p> <hr> <p><strong>Single Model:</strong></p> <p>public LB 0.505 / private LB 0.517, single lgb</p> <p>public LB 0.506 / private LB 0.520, single nn</p> <p>average of 2 nns with small differences gave public LB 0.505 / private LB 0.518.</p> <hr> <p><strong>Final Ensemble:</strong></p> <p>0.54-0.46 weighted average of lgb and nn, public LB 0.503 / private LB 0.516.</p> <hr> <p>That's all for my approach in this 10-day journey through this competition. Overall, I am satisfied with what I achieved given the limited time, and I learned a couple of useful things from other competitors and from my own experiments while addressing this time series challenge. Finally, I would really love to hear your thoughts and findings about what preserves the CV_LB relationships and what does not, along the lines of what I shared above. For now I still can't fully understand why the relationship is not preserved on the private LB, given that the CV_Pub LB relationship is fairly consistent for me. By combining our findings, maybe we can reach a deeper understanding of why there was such a big shake-up on the private LB, rather than simply jumping to the conclusion that it was due to a lack of comprehensive CV experiments or just luck/randomness. Maybe we missed something important but subtle that contributed to the big shake-up on the final private LB, and if we figure that out, maybe there is indeed a nice way to preserve the CV_Pub_Private score relationship very well if done appropriately.</p> <hr> <p>Finally, thanks to Kaggle and the sponsor for hosting such a great competition, and thanks to all the other competitors who made my journey in this competition fun! And congratulations to the winners!</p> <p>Best regards,</p> <p>Shize</p> <p>@feng: That competition was a long time ago for me, so I'm not sure I remember the details exactly. But roughly speaking, in the Rossmann competition we used a normal hold-out period as validation, together with the public LB.
I remember that the local CV score and public LB score were not very consistent for us (they were somewhat correlated, but not perfectly, especially when comparing different types of models such as xgb and ARIMA), but unfortunately we didn't pay much attention to adjusting/fixing the CV mechanism to make them more consistent. We used the local CV to find model improvements and, when improvements were identified, submitted to the LB. For a single model of the same type (e.g., xgb), local CV score improvements usually resulted in LB score improvements, but the scale of the improvement was not always the same (e.g., the CV score improved by 0.0002 but the LB score improved by 0.0001, or vice versa). The inconsistency in score improvement was much larger when comparing different types of models like xgb and ARIMA. For example, ARIMA worked very well (as a diverse component of our ensemble) on the first couple of days of the LB (i.e., the public LB) but performed much worse over the longer future period of the private LB. Unfortunately, when inconsistency occurred, we trusted the public LB a bit more than our CV, since we thought the public LB period was closer to the private LB period than our held-out CV period. Thus we decided to assign somewhat larger ensemble weights to the ARIMA models in our final submission, and we were bitten by that quite badly on the final private LB (dropping from #1 on the public LB to #7 on the private LB). In the past two years, I have relied more and more on a comprehensive CV mechanism whenever I want to play a competition seriously, and when the CV_LB scores are inconsistent, I pay more attention and spend much more effort on designing a more sophisticated CV mechanism to achieve consistency. That's all for now.</p> <p>Hope it helps.</p> <p>Hi zhixing,</p> <p>Regarding the generation of the statistical features (i.e., the 'target-encoding' type features for category variables), the key is to be careful to avoid utilizing (explicitly or implicitly) any future target information when generating these features.</p> <p>By generating the statistical features for each week independently, we can completely avoid such future target information leaks, since at that point the past sales data definitely do not include any future sales information, and the derived statistical features based on these past sales data therefore cannot contain leaked information from future targets.</p> <p>To be more specific, the generation of such statistical features should be put inside a function similar to "def prepare_dataset(t2017, is_train=True):" in the lgb starter script <a href="https://www.kaggle.com/tunguz/lgbm-one-step-ahead-c8de0f">https://www.kaggle.com/tunguz/lgbm-one-step-ahead-c8de0f</a>, i.e., the features should be generated inside the function that produces X_l and y_l for each week, rather than after the operation "X_train = pd.concat(X_l, axis=0)". If you generate the statistical features after "X_train = pd.concat(X_l, axis=0)", then the features for earlier weeks will implicitly contain future target leak information from later weeks. For example, the past day-1 sales for the week of 2017/07/19 can be future information for the previous week of 2017/07/12, so if such 'target-encoding' statistical features for the week of 2017/07/12 are generated after "X_train = pd.concat(X_l, axis=0)", they contain the future information leak (e.g., the past day-1 sales of the week of 2017/07/19, etc.).
This kind of future information leak is something we need to avoid in order to make such 'target-encoding' features work on the LB period, where a future information leak is not possible.</p> <p>Hope it helps.</p> <p>Best regards,</p> <p>Shize</p> <p>@feng:</p> <p>1) About holidays, one key observation is that the LB period includes almost no holidays (except one local holiday), so the benefit of modeling the holiday effect would probably be only marginal even if it were addressed appropriately, not to mention that appropriately addressing holiday effects to achieve that marginal improvement is very tricky and time-consuming. This is why most competitors (including me) did not spend much time dealing with holiday effects. If the LB period had included many more national holidays, it would have been a totally different story.</p> <p>2) About 'count features', I didn't see any clear explanation in the link you provided, so I am not sure which specific features you were talking about.</p> <p>3) The variance across time periods is fairly large, so it is possible (but not very likely) that the validation loss is smaller than the training loss, especially when the model is under-fitting the train set (for example, during the first few rounds of my lgb I also observed a smaller validation loss than training loss quite a few times, but by the time early stopping occurred the training loss was usually much smaller than the validation loss). So my suggestions would be: 1) double-check the code for any subtle mistakes/bugs; 2) make sure the model is sufficiently trained and not under-fitting the train set; 3) if everything seems fine, then maybe it is OK; as I said, it is possible (but not very likely) for the validation loss to be smaller than the training loss on this dataset due to the variance across time periods.</p> <p>Hope this helps.</p> <p>@feng:</p> <p>1) If the LB period includes more major holidays, a general guideline is to design the CV data set to be as similar as possible to the LB data set (e.g., a similar period but in 2016, or a period close to the LB period that also contains a similar number of major holidays). But obviously there is no simple, clear answer about the best CV setting in this case; it might require a significant amount of time and effort (more difficult than the case here) to find a CV setting with a consistent CV_LB score trend. You can only figure it out through experiments, following the key guideline that the CV set should be as similar to the LB set as possible.</p> <p>2) About the "count features" you mentioned from Nima's post, they are not the common "count features" we usually use, and in my opinion those features are somewhat specific to the Rossmann data set. (By "count features" we, or at least I, usually mean something like the frequency of some category variable; that's what confused me when you used the term.) Finally, that type of feature is probably much less helpful (though I didn't test it) for this competition, since there is not much "close-open" switching of stores in this data set (compared with the Rossmann data set), and there is no way to know whether a store is closed on a given date of the test period. Using such "count features" on the "promotion" variable might be somewhat helpful, though it would need to be handled very carefully due to the large asymmetry in the distribution of the "promotion" variable between the train and test data.</p> <p>Hope this helps.</p>
Crowdflower Search Results Relevance
1st PLACE - WINNER SOLUTION - Chenglong Chen
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Crowdflower Search Results Relevance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>1st PLACE - WINNER SOLUTION - Chenglong Chen</strong></p> <p>First, thanks to the Kaggle team and CrowdFlower for such a great competition.</p> <p>I have put the code to generate the winning solution (also the best single model, using XGBoost with a linear booster) on my GitHub: <a href="https://github.com/ChenglongChen/Kaggle_CrowdFlower">https://github.com/ChenglongChen/Kaggle_CrowdFlower</a></p> <p>For the approach, please find details in ./Doc/Kaggle_CrowdFlower_ChenglongChen.pdf</p> <p>You can also find it in the attachment.</p> <p>(I apologize that I am not very good at visualization and making figures, and also for my poor English.)</p> <p>Best,</p> <p>Chenglong</p> <p>Updated: Sorry about uploading the file twice, please download the second one.</p> <p>[quote=sirnodin;85477]</p> <p>Besides, one question: have you ever tried word embedding methods, such as Word2Vec or Sentence2Vec (which can be found, for example, in Gensim), or DNNs? I came to the conclusion that all the features above are no better than simple tf-idf, which is a bit of a surprise to me.</p> <p>[/quote]</p> <p>I didn't spend much time exploring word embedding methods, but I think they are worth a try. In fact, as I mentioned in Sec. 8 of my solution doc, the authors of [1] used the Word Mover&#8217;s Distance (WMD) metric together with word2vec embeddings to measure the distance between text documents. This metric is shown to outperform BOW and TF-IDF features.</p> <p>[1] Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. From word embeddings to document distances. In the 32nd International Conference on Machine Learning, 2015.</p>
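<p>As a small, hypothetical illustration of the WMD idea (not code from my solution): gensim exposes WMD directly on pre-trained word2vec vectors. The model path and the query/title strings below are placeholders, and computing WMD additionally requires the optional pyemd/POT dependency:</p> <pre><code>from gensim.models import KeyedVectors

# load pre-trained word2vec vectors (the file path is a placeholder)
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

query = "bridal shower decorations".split()
title = "accent pillow with heart design red black".split()

# Word Mover's Distance between the query and the product title (lower = more similar);
# such a distance could be used as an extra relevance feature alongside TF-IDF ones.
print(w2v.wmdistance(query, title))
</code></pre>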
Crowdflower Search Results Relevance
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Crowdflower Search Results Relevance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, it is in utility.py.</p> <p>Sorry, my bad -- forgot to push this file. =) Now it should be ok.</p> <p>Hi guys,</p> <p>First, thanks to the Kaggle team, kagglers and CrowdFlower for such a great competition.</p> <p>We have prepared our code and put it on GitHub:</p> <p><a href="https://github.com/geffy/kaggle-crowdflower">https://github.com/geffy/kaggle-crowdflower</a></p> <p>For the approach, please find details in './description.pdf'</p> <p>Best,</p> <p>Mikhail, Stanislav, Dmitry</p>
Crowdflower Search Results Relevance
Feature Engineering and Ensemble Architecture (7th place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Crowdflower Search Results Relevance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations again to all the winners!! I believe the top winners will share many valuable insights soon. :)</p> <p>In this post I would like to share some of our findings and experience in NLP, feature extraction and feature selection, and also my ensemble architecture. Feature engineering helped us a lot in improving the performance of our different models.</p> <p>Here I write only about my own parts; I think @Darragh (Pearson correlation analysis of submissions) and @CodingNeo will share their amazing parts later.</p> <p>Due to the limitations of the web editor, I cannot show my post in a well-formatted way here, so please use the following link to download and read the full post:</p> <p><a href="https://drive.google.com/file/d/0B-20iuF5gIcyQWtpOVlnTTREX3M/view?usp=sharing">https://drive.google.com/file/d/0B-20iuF5gIcyQWtpOVlnTTREX3M/view?usp=sharing</a></p> <hr> <p>R example code for RFE and Boruta feature selection can be downloaded from the attachment.</p> <p>This is the first time I have shared the details of my feature engineering and ensemble architecture on Kaggle. Please feel free to give any comments.</p> <p>Hope it can help other kagglers to some extent. :)</p> <p>Best regards,</p> <p>Mark Peng</p> <p>[quote=Mark Nagelberg;84144]</p> <p>Awesome! Very useful to see your features/models in such clear detail, and I didn't know about GloVe, RFE, or Boruta - all look like really useful tools. I also liked seeing your feature selection process, as mine could use some serious work (i.e. I didn't really use a formal feature selection process at all :S). I also didn't know about using multiple &quot;layers&quot; when ensembling.</p> <p>Thanks Mark!</p> <p>[/quote]</p> <p>Hi Mark,</p> <p>I am glad to know that it is helpful to you. :) Actually, I spent more than 12 hours over the past several weekends on this competition. Most of the work went into feature engineering! It took me two full days to complete this write-up, because I really wanted to share my ideas with you!</p> <p>I also learned a lot about ensemble learning from posts by top kagglers in previous competitions.</p> <p>Thanks,</p> <p>Mark</p> <p>[quote=Khanna;84422]</p> <p>Nice! Alas! I had done similar brainstorming, but kudos, you covered a vast array of intuitive cases. I couldn't get more than 0.56+ on the leaderboard with my similar heuristics. Great job. When I read other Kaggle scripts, I had almost lost faith in this kind of feature engineering.</p> <p>[/quote]</p> <p>Thanks Khanna! I have several years of experience in information extraction and search engine applications, so I knew those heuristics should work! GloVe vectors are also very helpful, in addition to the pattern-based features. You can download all my features from the document and see if you can find better ways to use them than I did. :)</p> <p>Thanks,</p> <p>Mark</p> <p>[quote=Jing Lu;85549]</p> <p>Hi Mark,</p> <p>Thanks for sharing! Your approach is very interesting, but also somewhat complex. Do you split the data for layer 0 and layer 1, or just use the same training data?
So, for each layer, you do feature selection with RFE. Why did you choose 200 features for layer 0 and 150 features for layer 1? I was thinking that feature selection between layers 0 and 1 may help eliminate some data leakage during training.</p> <p>Thanks, Jing</p> <p>[/quote]</p> <p>Thanks for the comment! Regarding the ensemble, I followed the stacked generalization process described in @Triskelion's <a href="http://mlwave.com/kaggle-ensembling-guide/">great post</a> about ensemble learning. Quoted here:</p> <pre><code>Let&#8217;s say you want to do 2-fold stacking:
Split the train set in 2 parts: train_a and train_b
Fit a first-stage model on train_a and create predictions for train_b
Fit the same model on train_b and create predictions for train_a
Finally fit the model on the entire train set and create predictions for the test set.
Now train a second-stage stacker model on the probabilities from the first-stage model(s).
A stacker model gets more information on the problem space by using the first-stage predictions as features, than if it was trained in isolation.
</code></pre> <p>I used the same 2-fold CV process to generate predictions as meta-features for layer 1. Feature selection was also done by 3-fold CV (repeated 3 times) using the <em>caret</em> package in R, so the 200 <strong>raw features</strong> for layer 0 and the 150 <strong>meta-features</strong> for layer 1 were chosen by 3-fold CV with RFE. Therefore, the chance of data leakage during training should be very small. A similar ensemble process is mentioned in the <a href="http://blog.kaggle.com/2015/06/09/otto-product-classification-winners-interview-2nd-place-alexander-guschin/">Otto competition winner's interview</a> as well.</p> <p>Thanks,</p> <p>Mark</p>
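<p>For readers who prefer code, here is a minimal sketch of that 2-fold stacking procedure, written in Python/scikit-learn rather than my original R/caret pipeline (assuming <code>X</code>, <code>y</code>, <code>X_test</code> are numpy arrays; the listed layer-0 models are just examples):</p> <pre><code>import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def two_fold_stack(model, X, y, X_test):
    # 2-fold stacked generalization as quoted above: out-of-fold predictions become
    # meta-features for the next layer; the model refit on all of the training data
    # produces the corresponding meta-feature for the test set.
    meta_train = np.zeros(len(X))
    for tr_idx, val_idx in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
        m = clone(model).fit(X[tr_idx], y[tr_idx])
        meta_train[val_idx] = m.predict(X[val_idx])
    meta_test = clone(model).fit(X, y).predict(X_test)
    return meta_train, meta_test

# layer-1 input = stacked meta-features from several layer-0 models, e.g.
# f1_tr, f1_te = two_fold_stack(SVR(), X, y, X_test)
# f2_tr, f2_te = two_fold_stack(GradientBoostingRegressor(), X, y, X_test)
# X_meta, X_meta_test = np.column_stack([f1_tr, f2_tr]), np.column_stack([f1_te, f2_te])
</code></pre>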
Crowdflower Search Results Relevance
Quartet (prize winner, 3rd) approach and code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Crowdflower Search Results Relevance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>hi folks,</p> <p>No intention to fool anyone :) but this thread is just to tell you that the kaggle team invited us to create a post for their blog where we will share our approach and links to github, where the code will be available.</p> <p>So please give us a few more days and stay tuned!</p>
DFL - Bundesliga Data Shootout
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: DFL - Bundesliga Data Shootout <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/atmguille" target="_blank">@atmguille</a>.</p> <blockquote> <p>What tool did you use to generate them?</p> </blockquote> <p>Sorry. PowerPoint is all you need.<br> I write my solutions on a presentation template for the winners' call so it can be reused. I always believe I'm going to win (:<br> <a href="https://www.kaggle.com/WinningModelDocumentationGuidelines" target="_blank">https://www.kaggle.com/WinningModelDocumentationGuidelines</a></p> <blockquote> <p>Did you check the performance of your architecture with 25 FPS?</p> </blockquote> <p>No, I didn't. I designed my model to predict events at 12.5 fps, and that couldn't be changed easily because my model is quite complicated. To cope with the low frame rate (80 ms/frame), I added a regression branch to the classifier to predict the time offset from the actual event occurrence time. But I think your point is correct: using the full 25 fps would lead to a more precise ball trajectory and event time. I should have looked into that more.</p> <blockquote> <p>On the other hand, I am not sure if I understand how you applied 1D convolutions to the ball path. Could you expand a little bit on this?</p> </blockquote> <p>The 1D conv is applied to the normalized ball path and speed in image coordinates.<br> The input to the 1D conv has the shape [batch, frames, channels (= x, y location and delta_x, delta_y as speed)].</p> <blockquote> <p>Did you ensemble the predictions of both classifiers or just take the output of the 2nd?</p> </blockquote> <p>I used only the output of the second. The score of the second is approx. 0.02 better than the first. Ensembling the 2nd with the 1st slightly deteriorated the score.</p> <blockquote> <p>Really liked the Dijkstra approach to smooth the ball trajectory!</p> </blockquote> <p>I'm glad to hear that! The Dijkstra approach is one of the ideas I am most confident about in this competition.</p> <p>1. Question about the cost function</p> <p>The cost is computed as a combination of the detector's confidence and a Gaussian probability of the jump distance, as shown below.</p> <pre><code>from math import sqrt, log

def cost_btw_nodes(x0, y0, x1, y1, conf1, inertia_x, inertia_y):
    # predict where the ball should be, assuming it keeps its previous motion (inertia)
    x0_pred = x0 + inertia_x
    y0_pred = y0 + inertia_y
    # distance between the predicted location and the candidate detection
    dist = sqrt((x1 - x0_pred)**2 + (y1 - y0_pred)**2)
    # low detector confidence or a large jump both increase the edge cost
    # (gaussian_prob: distance likelihood helper, defined elsewhere)
    cost = -log(conf1 * gaussian_prob(dist, sigma=40))
    return cost
</code></pre> <p>If the cost is too high, a new node can be added by paying some cost.</p> <pre><code>if cost &gt; threshold:   # no plausible transition: insert a new node instead
    add_new_node()
    total_cost += new_node_cost
</code></pre> <p>2. How many manually labeled frames?</p> <p>1,000 sets of 3 consecutive frames for the first training. From there I repeated training and prediction until it finally reached over 10,000 sets. Since most of the predictions were accurate, the actual amount of hand annotation was much smaller than 10,000 sets.<br> I scored over 0.80 with the first detector trained on the 1,000 sets. Adding the hard samples improved the score by about 0.02 in my case.</p> <p>Thanks to the organizers for this interesting challenge and congrats to everyone who enjoyed it! Since this competition can be approached in a variety of ways, I'm looking forward to seeing everyone's solutions.</p>
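<p>For illustration, here is a minimal sketch of the cost-minimization idea: choose one candidate detection per frame so that the summed transition cost along the path is minimal. On this chain-structured graph the problem reduces to simple dynamic programming (a simplification of my Dijkstra formulation, which additionally allows inserting new nodes); <code>gaussian_prob</code>, the candidate format and the parameter values are assumptions for the sketch, not my exact implementation:</p> <pre><code>from math import sqrt, log, exp, pi

def gaussian_prob(dist, sigma=40.0):
    # assumed helper: likelihood of a jump of length `dist` between consecutive frames
    return exp(-0.5 * (dist / sigma) ** 2) / (sqrt(2.0 * pi) * sigma)

def transition_cost(p0, p1, conf1, sigma=40.0):
    # low detector confidence and long jumps both make the transition expensive
    dist = sqrt((p1[0] - p0[0]) ** 2 + (p1[1] - p0[1]) ** 2)
    return -log(max(conf1 * gaussian_prob(dist, sigma), 1e-12))

def min_cost_path(candidates):
    # candidates: one list per frame, each entry an (x, y, confidence) detection
    best = [0.0] * len(candidates[0])   # accumulated cost per candidate of frame 0
    back = []                           # back-pointers, one list per transition
    for t in range(1, len(candidates)):
        cur_best, cur_back = [], []
        for (x1, y1, c1) in candidates[t]:
            costs = [best[j] + transition_cost((x0, y0), (x1, y1), c1)
                     for j, (x0, y0, _) in enumerate(candidates[t - 1])]
            j_min = min(range(len(costs)), key=costs.__getitem__)
            cur_best.append(costs[j_min])
            cur_back.append(j_min)
        best = cur_best
        back.append(cur_back)
    # backtrack from the cheapest node in the last frame
    idx = min(range(len(best)), key=best.__getitem__)
    path = [idx]
    for ptr in reversed(back):
        idx = ptr[idx]
        path.append(idx)
    return list(reversed(path))         # chosen candidate index for every frame
</code></pre>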
<h1>Overview of my solution</h1> <p>My pipeline consists of:</p> <ul> <li><strong>Optical Flow to get global (camera) motion.</strong></li> <li><strong>Ball Detector to find the ball on the field.</strong></li> <li><strong>Cost Minimization of the Ball Trajectory to refine the ball locations over multiple frames.</strong></li> <li><strong>2-stage Action Recognition to classify the events.</strong></li> <li><strong>Ensemble and post-processing.</strong></li> </ul> <p>Once I found that attention on, or cropping near, the ball is effective for improving the accuracy of the action classifier, I focused on refining the performance of the detector and the classifier.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2Fa7acfaad0313497aa834f8cc4f6a1987%2F00_solution_summary.png?generation=1665797501214798&amp;alt=media" alt="pipeline"></p> <hr> <h1>1. Optical Flow</h1> <p>It's difficult to detect the ball when:</p> <ul> <li>it is occluded behind players.</li> <li>its background is the stand.</li> <li>balls that are not in play are placed near the sideline.</li> </ul> <p>So my idea was to use multiple frames, via optical flow, to detect the ball in play, which is usually moving.<br> At first, I implemented the optical flow on high-resolution images since the ball is very small, but that was slow and not suitable for this competition. So I adopted low-resolution optical flow to predict only the global (camera) motion. The flow itself cannot tell which object is the ball in play, but the difference between the warped image and the original image is very informative.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F0d77a90b9a5503f2635e8f3acf28aa07%2F01_motivation_opticalflow.png?generation=1665797595915623&amp;alt=media" alt="opticalflow_motivation"><br> I used RAFT to predict the optical flow. It was trained on the provided videos in a self-supervised manner.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F5251a987e82a7fe227028e05d037cbe9%2F02_implement_opticalflow.png?generation=1665797635231529&amp;alt=media" alt="opticalflow_implementation"></p> <hr> <h1>2. Ball Detection</h1> <p>To build the ball detector, I repeated training and annotation several times.<br> (Annotation -&gt; Train Detector -&gt; Predict -&gt; Annotation -&gt; Train Detector -&gt; Predict -&gt; …)<br> I selected CenterNet because I do not like annotation. Annotating a point is much easier than a rectangle.</p> <p>Since the target (the ball) is very small, the inputs to the model are high-resolution images.<br> To achieve both inference speed and accuracy, I used the initial part of the encoder.
<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F940076494a1baa149a903c2f85a0a705%2F03_detection.png?generation=1665797710229958&amp;alt=media" alt="balldetector"></p> <p>Inputs to the detector consist of 9 channels:</p> <ul> <li>normal RGB image of the current frame</li> <li>difference between the current frame and previous frame reconstructed by the optical flow.</li> <li>difference between the current frame and next frame reconstructed by the optical flow.</li> </ul> <p>The latter two drastically improved the accuracy of detector especially in the crowded scene.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F2a3a23bd77aa4ffd9b5de7d5d5043601%2F04_detection_example.png?generation=1665797754239777&amp;alt=media" alt="detection_example"></p> <hr> <h1>3. Cost Minimization of Ball Trajectory</h1> <p>Ball location can be estimated more accurately by using multi-frame information. I applied cost minimization on selecting the ball path. <br> If we know the trajectory of the ball, we can estimate where it will go next. In the following case, people can select the next point correctly even if the ball is hard to seen.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2Fc78dbfa829f5fbfebef0b1eaff79dea5%2F05_motivation_costmin.png?generation=1665797922011099&amp;alt=media" alt="ball_path"></p> <p>As I want to consider both detector's score and distance based probability, I defined <br> Cost = Function (Confidence of Detector, Distance between Nodes)<br> and minimized the total cost of the path by Dijkstra Algorithm .</p> <p>It can create a more natural and accurate trajectory than just connecting points with the highest scores.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F1b6f0c5060a0c92c8cc22b9bd544a2c0%2F06_costminpath.png?generation=1665798032110142&amp;alt=media" alt="ball_path_image"><br> (If this competition was for the ball detection, I would get first place!)</p> <p>Videos can be seen here. Left one is the max confidence path and right one is the min cost path.<br> <a href="https://twitter.com/i/status/1582747059165495297" target="_blank">https://twitter.com/i/status/1582747059165495297</a></p> <hr> <h1>4. Event Classifier (Action Recognition)</h1> <p>Now we know where the ball is. Next step is to predict event. I used ball trajectory and images cropped near ball over 1 sec to predict action(4 class) and regression of residuals.<br> Since the labeled intervals are very short, it's difficult to make the long sequential training data. I extracted only 1 sec frames randomly from labeled intervals as an input to the model.</p> <p>The model consists of :<br> 1) 2D CNN to extract features from cropped images<br> 2) 1D CNN to extract features from ball path<br> 3) 1D CNN to predict event from upper two sets of features <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2Fdb149dcd08edcd2d364c9d251cda80db%2F07_eventdet_1ststage.png?generation=1665798262050233&amp;alt=media" alt="event_detect_1st"></p> <hr> <h1>5. 
Event Classifier [2nd Stage]</h1> <p>The first classifier cannot predict the event using long-term information.<br> At the second stage, long sequential inputs (4 sec) can be adopted because the inputs of the model are only feature vectors, which are much more memory-efficient than the images.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2Fe6ade8933d3f927f5365f78873d5e71e%2F08_eventdet_2ndstage.png?generation=1665798290526211&amp;alt=media" alt="second_stage"></p> <hr> <h1>6. PostProcessing</h1> <p>Simply a combination of Gaussian weighting and peak detection.</p> <ul> <li>Peak detection by max pooling</li> <li>Gaussian filter on the “challenge” score because the peak of “challenge” is ambiguous compared to the other two classes.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2938236%2F8dc5676759cd43223099db8b52ff56ad%2F09_postporc.png?generation=1665798310710848&amp;alt=media" alt="post_process"></li> </ul> <hr> <h1>Final Thought</h1> <p>Since the number of event samples is not so large and most of the area in an image is useless, I thought the model would be hard to train and would overfit easily. So I soon decided to use the ball trajectory.<br> That's why I was really amazed that many kagglers succeeded with the simple end-to-end approach. I've learned a lot and still need to learn a lot from this competition. Thanks again to the kaggle community!</p>
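<p>To make the cost minimization in section 3 concrete, here is a minimal sketch of selecting the minimum-cost ball path over per-frame candidates. Because the graph is layered frame by frame, Dijkstra reduces to a simple forward dynamic program in this simplified form; the gaussian_prob helper, the sigma value, and the omission of the inertia term and the new-node cost are assumptions for illustration, not the exact implementation.</p> <pre><code>from math import exp, log, pi, sqrt

def gaussian_prob(dist, sigma=40.0):
    # assumed helper: Gaussian density of the inter-frame displacement
    return exp(-dist ** 2 / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma)

def min_cost_path(detections):
    """detections[t] = list of (x, y, conf) ball candidates in frame t."""
    # best[t][j]: cheapest cost of any path ending at candidate j of frame t
    best = [[-log(c + 1e-12) for _, _, c in detections[0]]]
    prev = [[None] * len(detections[0])]
    for t in range(1, len(detections)):
        best_t, prev_t = [], []
        for x1, y1, conf1 in detections[t]:
            costs = []
            for j, (x0, y0, _) in enumerate(detections[t - 1]):
                dist = sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2)
                costs.append(best[t - 1][j] - log(conf1 * gaussian_prob(dist) + 1e-12))
            k = min(range(len(costs)), key=costs.__getitem__)
            best_t.append(costs[k])
            prev_t.append(k)
        best.append(best_t)
        prev.append(prev_t)
    # backtrack the cheapest complete path
    j = min(range(len(best[-1])), key=best[-1].__getitem__)
    path = [j]
    for t in range(len(detections) - 1, 0, -1):
        j = prev[t][j]
        path.append(j)
    return path[::-1]  # index of the chosen candidate in each frame
</code></pre> <p>In the full solution the displacement is first corrected by the inertia of the previous motion, and a new node can be inserted by paying a fixed cost when no candidate is plausible.</p>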
DFL - Bundesliga Data Shootout
7th place brief solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: DFL - Bundesliga Data Shootout <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulation to all of the winners!</p> <p>Most of the top solutions are based on 2.5D/3D classification of events from full image. For some reason I didn't believe this kind-of-simple pipeline would benefit and score that high, therefore I decided to stick to multiple stage approach.</p> <p>To be short:</p> <ol> <li>Detect ball (and throwiner)</li> <li>Track ball</li> <li>Classify area around the ball</li> <li>No ensemble (solo detector + solo classifier). I used all the frames from videos in original resolution (FullHD)</li> </ol> <p>To be more specific:</p> <ol> <li>I used images from clips and manually annotated them (around 6k images). Firstly it was only ball detection, but I was successful only with classifying 'play' events, while throw-ins and challenges struggled. Then I added 2nd class of throwiner to my detector. It helped me to boost throwins classification further. I also used synthetic augmentation for throwiner class (removed actual throwiner, applied image inpainting to fill empty background, and added throwiner from another image instead).</li> <li>I used Kalman-IoU tracker to keep the ball position visible even if it disappears. Unfortunately I couldn't tune params too well, and the ball position was unclear during aerial trajectories with high speed and acceleration of the ball.</li> <li>I used 3D classifier with 4 classes (including background). I stacked center frame +- 12 frames, and classified overall 25 frames. </li> </ol> <p>What helped me to improve the score:</p> <ol> <li>Changing from 2.5D to 3D. Many tested 2.5D nets were worse than simple x3d-m.</li> <li>Introducing throwiner class for detection. I had 0.45-0.5 accuracy for throwin events when I used only 3d classification. Adding throwiner class for detector gave me huge boost for throwin events, resulting in 0.8+ accuracy for throwin events. I merged 3d classification results and throwin class from detector.</li> <li>Synthetic data generation for detector resulted in better stability for throwiner and ball near the line detection.</li> </ol> <p>I was also struggling with time limits, since I was always close to 9 hours. Using FileVideoStream from imutils helped me to save some time, since it worked faster than cv2 video reader.</p> <p>Unfortunately I couldn't improve challenges events. The score was always about 0.5-0.6 for it, while play and throwins scored 0.8-0.85 always. The main reason I believe is that I was losing ball due to its fast displacement during aerial and other challenges. My tracker couldn't solve that issue. </p>
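<p>For readers unfamiliar with the tracking step above, here is a minimal sketch of the constant-velocity Kalman filtering that a Kalman-IoU tracker relies on to keep the ball position alive when the detector misses a frame. The noise settings and the omission of the IoU association step are assumptions for illustration, not the parameters used in this solution.</p> <pre><code>import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state transition: position += velocity * dt
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we only measure the (x, y) ball position
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1.0            # process noise (assumed)
R = np.eye(2) * 10.0           # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# usage: predict every frame; update only when the detector fires, otherwise
# keep the predicted position as the location of the (temporarily) unseen ball
x, P = np.array([960.0, 540.0, 0.0, 0.0]), np.eye(4) * 100.0
for det in [np.array([965.0, 538.0]), None, np.array([975.0, 534.0])]:
    x, P = predict(x, P)
    if det is not None:
        x, P = update(x, P, det)
</code></pre>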
DFL - Bundesliga Data Shootout
A simple approach based on baseline notebooks (6th place solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: DFL - Bundesliga Data Shootout <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for a very interesting competition, my teammates ( <a href="https://www.kaggle.com/yasushiyui" target="_blank">@yasushiyui</a> and <a href="https://www.kaggle.com/kotaiizuka" target="_blank">@kotaiizuka</a> ) for effort, and all the kagglers who worked hard together. And special thanks to <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>, the author of great baseline notebooks.</p> <p>There are already many great top solutions out there, but we would like to share ours as an example of how just a few tweaks to the baseline can increase your score.<br>Please feel free to comment if you notice anything or would like to know more details.</p> <h2>Summary</h2> <ul> <li>Basically, we used the baseline notebook processing pipeline.</li> <li>Similar to the Public 1st place solution, three neighboring frames were converted to grayscale and combined in the channel direction for using temporal information. We believe that this idea was probably crucial for our solution to enter the public gold medal zone.</li> <li>We applied the techniques usually used in competitions (high resolution images, modification of loss function, custom data augmentation, ensemble, TTA, etc.).</li> </ul> <h2>Using three neighboring frames</h2> <p>The figure below shows the flow of this technique. To avail temporal information and to use the same processing pipeline as for RGB image input, three adjacent images were converted to grayscale and combined in the channel direction. Most experiments used time_stride=1. 
In one of the final submissions, we ensembled the models trained with time_stride=1, 2, and 3, respectively, to take advantage of the longer temporal information.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1728889%2Fa1d3ffa2d8738db68a0f533ee5f1a39b%2FUsing_three_neighboring_frames.png?generation=1666108969082746&amp;alt=media" alt=""></p> <h2>Parameter settings</h2> <ul> <li>Training<ul> <li>dataset<ul> <li>fold0: 3c993bd2_0, 3c993bd2_1</li> <li>fold1: 1606b0e6_0, 1606b0e6_1</li> <li>fold2: 35bd9041_0, 35bd9041_1</li></ul></li> <li>input_size: (3, 720, 1280)</li> <li>time_stride: 1, 2, 3</li> <li>amp: True</li> <li>20 epochs (cooldown_epochs=0)</li> <li>evaluation metrics: mAP</li> <li>model<ul> <li>tf_efficientnet_b0_ap</li> <li>tf_efficientnet_b1_ap</li></ul></li> <li>loss: 4-class binary cross entropy</li> <li>dropout: 0.2</li> <li>data augmentation<ul> <li>RandomAffine, RandomPerspective, RandomHorizontalFlip, ColorJitter(brightness, contrast), Mixup</li></ul></li></ul></li> <li>Averaging checkpoints<ul> <li>default settings for avg_checkpoints.py in timm</li></ul></li> <li>Inference<ul> <li>interpolation method for resize: cv2.INTER_AREA</li> <li>crop_pct: 0.875 (default setting for tf_efficientnet_b0_ap)</li> <li>3-models ensemble with horizontal flip tta</li> <li>inference once every 2 frames for speedup (predictions for frames not inferred are generated by linear interpolation)</li> <li>custom data loader to prefetch images behind the GPU inference process</li> <li>parameter tunings for window_size and ignore_width in <code>make_sub</code> function</li></ul></li> </ul> <h2>Scores for each experiment</h2> <p>The table below shows the settings and scores of baseline notebooks and our main experiments.<br> Our public LB best (0.784) model was not selected for the final submission because it almost exceeded the time limit.</p> <table> <thead> <tr> <th>ID</th> <th>description</th> <th>model</th> <th>Val AP</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>tito's baseline</td> <td>b5_ap</td> <td>0.215</td> <td>0.249</td> </tr> <tr> <td>2</td> <td>+ TTA</td> <td>b5_ap</td> <td>N/A</td> <td>0.255</td> </tr> <tr> <td>3</td> <td>+ Fix label error</td> <td>b5_ap</td> <td>0.242</td> <td>0.283</td> </tr> <tr> <td>4</td> <td>+ Use three neighboring frames <br>with time_stride=1</td> <td>b5_ap</td> <td>0.312</td> <td>0.425</td> </tr> <tr> <td>5</td> <td>+ 960x540 input and inference <br>once every 2 frames <br>(without tta)</td> <td>b5_ap</td> <td>0.554</td> <td>0.563</td> </tr> <tr> <td>6</td> <td>+ 4-class bce loss</td> <td>b5_ap</td> <td>0.609</td> <td>0.618</td> </tr> <tr> <td>7</td> <td>+ Custom data loader</td> <td>b5_ap</td> <td>0.625</td> <td>0.634</td> </tr> <tr> <td>8</td> <td>+ Custom data augmentation</td> <td>b5_ap</td> <td>0.700</td> <td>0.703</td> </tr> <tr> <td>9</td> <td>+ Replace b5 model <br>with b0 model and <br>predict with 4-fold ensemble</td> <td>b0_ap<br>(4-fold)</td> <td>0.669</td> <td>0.720</td> </tr> <tr> <td>10</td> <td>+ Generate predictions <br>for frames not inferred <br>using linear interpolation</td> <td>b0_ap<br>(4-fold)</td> <td>0.686</td> <td>0.740</td> </tr> <tr> <td>11</td> <td>+ 1280x720 input and <br>3-fold ensemble prediction</td> <td>b0_ap<br>(3-fold)</td> <td>0.722</td> <td>0.772</td> </tr> <tr> <td>12</td> <td>+ Mixup augmentation</td> <td>b0_ap<br>(3-fold)</td> <td>0.747</td> <td>0.761</td> </tr> <tr> <td>13</td> <td>+ Replace one b0 model <br>with b1 model</td> <td>b0_ap 
(2-fold)<br>b1_ap (1-fold)</td> <td>0.751</td> <td>0.768</td> </tr> <tr> <td>14</td> <td>+ Replace three models <br>with 3 full-fit models <br>(each model uses <br>three neighboring frames <br>with time_stride=1, 2, and 3, <br>respectively)</td> <td>b0_ap (full-fit, <br>time_stride=1, 2, 3)</td> <td>0.753</td> <td>0.782</td> </tr> </tbody> </table> <h2>Reference</h2> <ul> <li>Baseline notebooks<ul> <li><a href="https://www.kaggle.com/code/its7171/dfl-benchmark-training" target="_blank">DFL benchmark training | Kaggle</a></li> <li><a href="https://www.kaggle.com/code/its7171/dfl-benchmark-inference" target="_blank">DFL benchmark inference | Kaggle</a></li></ul></li> <li>Fix label erros<ul> <li><a href="https://www.kaggle.com/code/kmizunoster/dfl-benchmark-training-fix-label-error" target="_blank">DFL benchmark training + fix label error | Kaggle</a></li> <li><a href="https://www.kaggle.com/code/kmizunoster/dfl-benchmark-inference-tta-fix-label-error" target="_blank">DFL benchmark inference + tta + fix label error | Kaggle</a></li></ul></li> </ul> <p><a href="https://www.kaggle.com/atmguille" target="_blank">@atmguille</a> Thank you for your comments.</p> <blockquote> <p>Why did you try changing from CrossEntropy to BCE? I understand that BCE could be useful for multi-label problems, but here a frame should only be classified as one class (so a softmax has more sense than individual sigmoids). Why do you think that it worked better?</p> </blockquote> <p>We tried BCE because we thought BCE would match the competition's evaluation metric better than CE.<br> The metric first calculates the APs for each class, and then averages them to get the averaged AP, so the order of scores within each class is very important.<br> CE requires softmax to calcuate scores for each class, which is dependent on the logits of the other classes and may affect the order of scores within each class. On the other hand, BCE only uses logits for its own class, so there is no need to worry about this.</p> <blockquote> <p>What do you mean by “custom dataloader” and how could the DataLoader change the score?</p> </blockquote> <p>The data loader used in the baseline notebook was designed to export all the frames to jpeg files for inference.<br> In order to reduce processing time, we created a custom data loader that can infer each frame captured by cv2.VideoCapture without going through the file.<br> Although the implementation was not aimed at improving the score, the score increased slightly when the same pre-processing was performed as before.<br> Perhaps the conventional method via jpeg files degraded the image quality and had a negative impact on the score.</p>
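<p>For clarity, here is a minimal sketch of the "three neighboring grayscale frames stacked as channels" input described above; the function name, the boundary clamping, and reading frames into a list are assumptions for illustration, not the team's exact data loader.</p> <pre><code>import cv2
import numpy as np

def stack_gray_frames(frames, idx, time_stride=1):
    """Build a 3-channel input from the grayscale frames at idx-s, idx, idx+s.

    frames: list of BGR images (e.g. read with cv2.VideoCapture); indices are
    clamped at the clip boundaries.
    """
    channels = []
    for offset in (-time_stride, 0, time_stride):
        j = min(max(idx + offset, 0), len(frames) - 1)
        channels.append(cv2.cvtColor(frames[j], cv2.COLOR_BGR2GRAY))
    # HxWx3 array that can be fed to the model exactly like an RGB image
    return np.stack(channels, axis=-1)
</code></pre>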
DFL - Bundesliga Data Shootout
Our (frustrated) approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: DFL - Bundesliga Data Shootout <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Not sure if I understand your question. Could you rephrase it <a href="https://www.kaggle.com/saikumartamminana" target="_blank">@saikumartamminana</a>?</p> <p>Thank you for your suggestion. I am not very familiar with the Kalman filter. How would you use it here?</p> <p>By decreasing the batch size and frame resolution, we had no problems with memory.</p> <p>Although it is the norm to share successful solutions, our team thought that our (frustrated) approach could be interesting and thus should be shared.</p> <p>From the beginning, we thought there was a lot of useless noise in the videos that might prevent the model from learning the right features. Based on this, we put a great effort into the preprocessing side, trying to extract only what we considered essential information. The result is the following:</p> <table> <thead> <tr> <th>original</th> <th>preprocessed</th> </tr> </thead> <tbody> <tr> <td><a href="https://www.youtube.com/watch?v=GlUgM2TUPCw" target="_blank"><img src="https://i.ibb.co/3kC6nsc/embed.png" alt="original"></a></td> <td><a href="https://www.youtube.com/watch?v=VoAcptNeF1k" target="_blank"><img src="https://i.ibb.co/kgpLVp2/embed-prerpocessed.png" alt="preprocessed"></a></td> </tr> </tbody> </table> <p>In summary, as can be seen in the video, we did the following:</p> <ul> <li>Detect the lines of the football field, via a Top-hat filter.</li> <li>Detect the players and the ball, via a fine-tuned YOLOv7 first and a custom-trained RetinaNet after doubts about the GPL license arose.</li> <li>Detect each player's team using K-means on the color of the players' jerseys (see the sketch at the end of this thread).</li> <li>Detect the skeletons of the five players closest to the ball, via mmpose.</li> <li>We rendered all this information together over a black background, hoping to aid the model in the learning process. We even cropped each frame centered on the ball to further reduce the noise (the result is available in <a href="https://www.youtube.com/watch?v=IZjJMa4u-Ss" target="_blank">this video</a>).</li> </ul> <p>It was a great challenge to put all these pieces together. Moreover, it was even harder to make the pipeline efficient enough to comply with the time constraints. Nonetheless, after a great effort by the team to optimize the source code of the libraries we used (especially mmpose), we were able to make the preprocessing pipeline run in half the video duration. This took a great amount of time, but we were very happy with the result. At least until we started training and validating the model… 😅</p> <p>The odyssey began when, no matter which architecture we tried, we could not get reasonable results. General action recognition models (SwinTransformer3D, SlowFast, …), soccer-specific models (NetVLAD++, …), custom architectures based on temporal information, … We tried them all, but none of them worked. Even with the original videos (without preprocessing), we could not get a good result. Even after augmenting the training data with the annotated videos of SoccerNet-v2, the boost was not enough.</p> <p>We have checked and rechecked for bugs, but we could not find any issues. 
We are very curious to see what top teams tried and check if we missed something critical.</p> <p>Although the outcome was not the one we expected (we are hungry for gold to become Kaggle Masters), it was a great learning experience for the team. Thanks <a href="https://www.kaggle.com/alejandrobravoserna" target="_blank">@alejandrobravoserna</a> <a href="https://www.kaggle.com/miguelgonzalez2" target="_blank">@miguelgonzalez2</a> <a href="https://www.kaggle.com/danielguzmanolivares" target="_blank">@danielguzmanolivares</a> <a href="https://www.kaggle.com/alvarozaera" target="_blank">@alvarozaera</a> for the effort! 🥇will come …</p> <p>If you have any doubts about our approach, do not hesitate to ask!</p> <p>Thank you for your supportive comment <a href="https://www.kaggle.com/zhuyx7" target="_blank">@zhuyx7</a>! It is interesting that preprocessing didn't work either for your team. We bet too much on it. I am very curious to see if preprocessing helped any of the top teams.</p> <p>Thanks for your comment <a href="https://www.kaggle.com/delai50" target="_blank">@delai50</a>! Are you going to Barcelona for the Kaggle Days by any chance? It would be great to meet you there!</p> <p>It's always a pleasure to work together and learn from you. More to come!</p> <p>We also had 2.5D approach in mind, given the good results that we achieved in <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation" target="_blank">UW-Madison</a> with it. However, I thought that more temporal context was required here and didn't give it a try. In fact, given your results, it seems like the opposite: we were adding too much. Once again: always try the simplest baseline first 😅</p> <p>Thanks for providing info about your approach. Good luck with the final evaluation in December!</p> <p><a href="https://www.kaggle.com/delai50" target="_blank">@delai50</a> I am going as a volunteer! If you happen to be there, let me know!</p> <p>We discarded optical flow based on its excessive time requirements. Nonetheless, it seems that <a href="https://www.kaggle.com/kmat2019" target="_blank">@kmat2019</a> used it as an input to detect the ball (<a href="https://twitter.com/Kmat67916008/status/1580767429558693888" target="_blank">https://twitter.com/Kmat67916008/status/1580767429558693888</a>).</p> <p>In terms of identifying opponents, you are right that it may not be useful for the pass or the throwins, but we thought that it could play an important role in the challenges.</p> <p>We trained the model with short clips, trying to classify the action and doing regression on the frame where the action happens. This and other similar approaches did not work for us. It seems that top teams input far less information than us (we used a window of 1.6 seconds), and maybe the key was there. Thank you for the comment <a href="https://www.kaggle.com/uzodike" target="_blank">@uzodike</a>!</p> <p>Interesting approach <a href="https://www.kaggle.com/alejopaullier" target="_blank">@alejopaullier</a>, maybe a bit extreme as you said. Did you try coloring the ball with a different color?</p>
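<p>As an illustration of the team-assignment step in the preprocessing list above, here is a minimal sketch of clustering players into two teams with K-means on jersey colors. The torso crop and the mean-color descriptor are assumptions for illustration and not necessarily what we implemented.</p> <pre><code>import numpy as np
from sklearn.cluster import KMeans

def assign_teams(player_crops):
    """player_crops: list of HxWx3 RGB crops from the player detector."""
    colors = []
    for crop in player_crops:
        torso = crop[: crop.shape[0] // 2]            # rough jersey region
        colors.append(torso.reshape(-1, 3).mean(axis=0))
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(colors))
    return labels  # 0 or 1 per player
</code></pre>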
DFL - Bundesliga Data Shootout
Public 4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: DFL - Bundesliga Data Shootout <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to organizers for their effort in this amazing competition and congrats to all the winners. Although we need to wait another two months for the final results, let me share my currently 4th place solution. </p> <h1>Overview</h1> <p>The basic pipeline of my solution and the summary of the models are shown below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1680435%2F34acdf1d2365d73423f2ec07bc214f54%2F2022-10-17%2010.14.55.png?generation=1665969310891529&amp;alt=media" alt=""></p> <table> <thead> <tr> <th>Block</th> <th>Model</th> <th>Backbone</th> <th>Input</th> <th>Output</th> </tr> </thead> <tbody> <tr> <td>Frame Extraction</td> <td>NA</td> <td>NA</td> <td>1920x1080</td> <td>960x540 (odd frame only)</td> </tr> <tr> <td>Candidate Frame Selection</td> <td>Image Classification</td> <td>efficientnetv2_l</td> <td>512x512</td> <td>4-class probability</td> </tr> <tr> <td>Event Area Detection</td> <td>UNet</td> <td>efficientnet_b5</td> <td>512x512</td> <td>Heatmap</td> </tr> <tr> <td>Event Classification</td> <td><a href="https://openaccess.thecvf.com/content_ICCV_2019/html/Tran_Video_Classification_With_Channel-Separated_Convolutional_Networks_ICCV_2019_paper.html" target="_blank">CSN</a></td> <td>ResNet152</td> <td>224x224 ROI</td> <td>4-class probability</td> </tr> </tbody> </table> <h1>Frame Extraction</h1> <p>I used FFmpeg to extract and save frames in a video. Note, to reduce the time for disk I/O, I decided to discard even frames and reduce the size of frame to 960x540 (i.e., half the original size).</p> <h1>Candidate Frame Selection</h1> <p>The purpose of this step is to find the candidate frames in which events are likely to occur. I simply classified each of the extracted frames to four classes, background (not event), play, challenge, and throwin. The backbone is <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a>'s efficientnetv2_l, and an input frame is resized to 512x512. </p> <p>This process doesn't use temporal information at all, so the accuracy itself is not high, but what I wanted to do here is to find the event frames with a high recall. So I ignored the absolute values of classification scores, and focused on the relative peaks. I used <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html" target="_blank">scipy.signal.find_peaks</a>, and picked the frames corresponding to the peaks of the classification scores as candidate frames.</p> <h1>Event Area Detection</h1> <p>Through a set of experiments, I found that it's difficult to reach a competitive score if I naively used entire frames. Since <a href="https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout/discussion/342948" target="_blank">hand-labeling is allowed in this competition</a>, I decided to train an event area detection model to extract a ROI from frame. I annotated xy coordinate of (roughly) center of an event area (basically the coordinate of player who is most closely associated with the event) for all the event frames in the training set. 
This took several weeks :-(</p> <p>The event area detector is a simple UNet, and it outputs a heatmap of the event area. The backbone is <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a>'s efficientnet_b5, and an input frame is resized to 512x512. Some example results are shown in the figure below. In the figure, the left is the input, the middle is the output heatmap overlaid on the input, and the right is a zoom-in image centered at the peak of the heatmap.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1680435%2F6151cf9fe407bb5f9332370ef6a03003%2F2022-10-17%2015.37.39.png?generation=1665988698309792&amp;alt=media" alt=""><br> Detecting the ball is one of the promising approaches in this competition, but as discussed <a href="https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout/discussion/343652" target="_blank">here</a>, in many cases the ball is not visible (as you can see from the examples above), or is quite difficult to detect for various reasons such as background color, lack of resolution and motion blur. So I decided to detect an event area instead of the ball.</p> <h1>Event Classification</h1> <p>The frames from the <strong>Candidate Frame Selection</strong> contain a lot of wrong (non-event) frames, and the scores are also not reliable. To refine the results, this process applies a more sophisticated action recognition model to the candidate frames. </p> <p>After trying several action recognition models, I employed <a href="https://openaccess.thecvf.com/content_ICCV_2019/html/Tran_Video_Classification_With_Channel-Separated_Convolutional_Networks_ICCV_2019_paper.html" target="_blank">3D Channel-Separated Networks (CSN)</a> since it showed good performance for both accuracy and speed. The training set for this model was created based on the outputs of the two previous steps, <strong>Candidate Frame Selection</strong> and <strong>Event Area Detection</strong>. First, I applied the candidate frame selection to all the frames in the training set, and set the labels to background for the wrongly selected non-event frames. Of course, the correctly selected frames had their own events' labels. Then, I used the event area detection model to extract a ROI from each frame. The center of the ROI is the peak of the output heatmap, and its size is 224x224. Since the number of background frames is much larger (x~10) than that of event frames, I randomly selected around 10% of the background frames every 5 epochs during training. </p> <p>The backbone of CSN is ResNet152, and the number of input frames is 16 (the actual time span is 32 frames because even frames are discarded). At both training and inference, a ROI is extracted from the candidate frame, the same regions are also extracted from +/- 8 frames, and all are fed to the CSN.</p> <p>The final score is just the probability output from CSN, or an ensemble of the CSN output and the score from <strong>Candidate Frame Selection</strong>. The latter is slightly better than the former (+0.01). </p> <h1>Computation Time</h1> <p>Since my pipeline consists of several blocks, my biggest challenge was to keep the computation time for each block as low as possible. Using only odd frames is one of the biggest compromises. 
The per-video computation time for each block in the kaggle kernel is something like below.</p> <table> <thead> <tr> <th>Block</th> <th>Computation Time</th> </tr> </thead> <tbody> <tr> <td>Frame Extraction</td> <td>15 min</td> </tr> <tr> <td>Candidate Frame Selection</td> <td>20 min</td> </tr> <tr> <td>Event Area Detection</td> <td>15 min</td> </tr> <tr> <td>Event Classification</td> <td>30 min</td> </tr> <tr> <td>Total</td> <td>80 min</td> </tr> </tbody> </table> <p>I should have thought about simpler end-to-end method, but I couldn't. Now I'm really impressed by other kagglers' very sophisticated methods, and learning a lot. Thank you, everyone!</p> <p>Yes, your guess is completely correct.</p> <p>As you said, the scores from <strong>Candidate Frame Selection</strong> are not reliable, so basically I used them only to discard background frames. In <strong>Event Classification</strong>, I applied more accurate action recognition model (CSN) to the remaining frames. At first, I simply used the probability output from CSN as final score, but eventually I found that the weighted sum of scores from <strong>Candidate Frame Selection</strong> and scores from <strong>Event Classification</strong> showed slightly better performance. Of course, the weight of <strong>Event Classification</strong> is much higher than <strong>Candidate Frame Selection</strong> (0.8~0.9). </p>
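<p>To make the candidate selection and ROI extraction concrete, here is a minimal sketch using scipy.signal.find_peaks on the per-frame class scores and a crop around the heatmap peak. The peak-picking parameters and the clipping logic are illustrative guesses, not the values used in this solution.</p> <pre><code>import numpy as np
from scipy.signal import find_peaks

def candidate_frames(scores, distance=10, height=0.05):
    """scores: 1D array of classification scores for one event class over a video."""
    peaks, _ = find_peaks(scores, distance=distance, height=height)
    return peaks  # relative peaks matter more than absolute score values

def crop_roi(frame, heatmap, size=224):
    """Crop a size x size ROI around the peak of the event-area heatmap."""
    py, px = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    cy = int(py * frame.shape[0] / heatmap.shape[0])   # rescale peak to frame coords
    cx = int(px * frame.shape[1] / heatmap.shape[1])
    half = size // 2
    y0 = int(np.clip(cy - half, 0, frame.shape[0] - size))
    x0 = int(np.clip(cx - half, 0, frame.shape[1] - size))
    return frame[y0:y0 + size, x0:x0 + size]
</code></pre>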
DFL - Bundesliga Data Shootout
Public 5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: DFL - Bundesliga Data Shootout <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The x-axis indicates the number of epochs.<br> Each epoch has 13540 steps and mini-batch size is 16.</p> <p>Yes.<br> To simplify problem, additional experiments are limited to only two frames.</p> <p>I modified the model in Scores ② above and added experimental results on inter-frame differences.</p> <p>I compared two models.<br> The first model takes as input an absolute value image of the difference between current and previous.<br> The second model takes as input a 2-channel image that concatenates the current and previous images.<br> They are denoted as <strong>with_absdiff</strong> and <strong>without_absdiff</strong>, respectively.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1630583%2F4e8f391116cea04ec030ba6f6a4aade9%2Fextra_exp.png?generation=1666051814016820&amp;alt=media" alt=""></p> <p>The metrics for each epoch are shown in the figure below.<br> For most metrics, <strong>with_absdiff</strong> outperforms <strong>without_absdiff</strong>.<br> Generally, CNNs are said to automatically extract features from images and videos, but that is the case when there is a large amount of data.<br> When data is limited, as in this case, it loses out to simple feature extraction, such as absolute value of differences.</p> <p>What is interesting is the relationship between <em>train_loss</em> and <em>valid_loss</em>.<br> Since epoch 3, <strong>without_absdiff</strong> has been smaller than <strong>with_absdiff</strong> on <em>train_loss</em>.<br> On the other hand, with regard to <em>valid_loss</em>, <strong>with_absdiff</strong> is always smaller than <strong>without_absdiff</strong>.<br> This is because <strong>without_absdiff</strong> extracts poor features for the two channels of input (current and previous frames).</p> <p>Thank you for organizing very exciting competition.<br> And also the following codes helped me a lot, thank you very much! <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a><br> <a href="https://www.kaggle.com/code/its7171/dfl-benchmark-training" target="_blank">https://www.kaggle.com/code/its7171/dfl-benchmark-training</a> <br> <a href="https://www.kaggle.com/code/its7171/dfl-benchmark-inference" target="_blank">https://www.kaggle.com/code/its7171/dfl-benchmark-inference</a></p> <h1>Overview</h1> <ul> <li>Image grayscaling, resizing (1024x576), center crop (960x512)</li> <li>Difference from previous frame and apply absolute<ul> <li>Due to the small amount of training data, I thought it would be useful to explicitly extract video features.</li></ul></li> <li>3 class classification (binary cross entropy) by efficientnet_b1 with input of 11 channels in total (±5 frames before and after)</li> <li>5-models ensemble</li> </ul> <h1>Augumentation</h1> <ul> <li>Fliplr and random crop from 1024x576 to 960x512</li> <li>Affine transformation with parameters varying linearly with time<ul> <li>I wanted to synthesize camera zoom, pan, etc. 
and use them for model training.</li> <li>Assuming that the camera parameters vary linearly within a small range, I created the following function</li></ul></li> </ul> <pre><code>import random
import torch
import torchvision

def hoge(src_imgs):  # len(src_imgs) == 11+1 frames
    # sampling parameters
    angle = random.uniform(-0.1, 0.1)
    translate_x = random.uniform(-1, 1)
    translate_y = random.uniform(-1, 1)
    scale = random.uniform(-0.0005, 0.0005)
    dst_imgs = torch.zeros_like(src_imgs)
    for i, t in enumerate(range(-6, 6)):  # t is the signed frame offset
        dst_imgs[i] = torchvision.transforms.functional.affine(
            src_imgs[i], angle=float(t * angle),
            translate=(t * translate_y, t * translate_x), scale=1.0 + t * scale,
            shear=0.0,  # shear is required by torchvision's affine
            interpolation=torchvision.transforms.InterpolationMode.BILINEAR)
    return dst_imgs
</code></pre> <p>The left column in the figure below shows 11 frames of video before augmentation, <br> and the center and right columns are examples with the augmentation applied.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1630583%2F33870c3bcd56543625c2009f2e27af43%2Faugument100000.gif?generation=1665905790619453&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1630583%2Fb292f8832e026f08f0eb4d655baef581%2Faugument175000.gif?generation=1665906158387381&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1630583%2F3b9a6d71b4729efab357078adb0471ff%2Faugument075000.gif?generation=1665906759837057&amp;alt=media" alt=""></p> <h1>Label design</h1> <p>For the labels, I used a Gaussian distribution (sigma=20 frames) as shown below. <br> These soft label designs are the same for the three events.<br> Binary cross-entropy was used for the loss function.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1630583%2F38f26f2c7f32dff2713f658fdfdd189a%2Flabel_design.png?generation=1665906218833070&amp;alt=media" alt=""></p> <h1>Ensemble</h1> <p>Ensemble of 5 efficientnet_b1 models with different CVs.</p> <h2>Additional annotations</h2> <p>It was the last four days and I was frazzled.<br> As a last-ditch effort, I tried to annotate all the unannotated time.<br> After annotating 5 videos (1606b0e6_0, 407c5a9e_1, 4ffd5986_0, 9a97dae4_1, ecf251d4_0) I ran out of steam and started training the model.</p> <h1>Speeding up inference</h1> <p>I spent quite a bit of time trying to parallelize cv2.VideoCapture with a DataLoader, but it did not work.<br> I also tried decoders such as decord, dali, and pyav, however these did not work either.<br> Finally, I used cv2.VideoCapture for video decoding,<br> and grayscaling and resizing of the image were applied on the CPU (np.array), and then the difference, absolute value, and input-batch creation (rolling &amp; copy) were done on the GPU (torch.Tensor).</p> <p>This speed-up allowed us to infer with 6 models (efficientnet_b1 with an 11x960x512 tensor) within the time limit.<br> To leave some margin, we finally submitted an ensemble of five models.</p> <h1>Scores</h1> <table> <thead> <tr> <th>submit</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>①: my baseline ( 512x288 )</td> <td>0.625</td> </tr> <tr> <td>②: ① + larger image ( 1024x576 )</td> <td>0.717</td> </tr> <tr> <td>③: ② + affine augmentation</td> <td>0.748</td> </tr> <tr> <td>④: ③ + 5-model ensemble</td> <td>0.768</td> </tr> <tr> <td>⑤: ④ + extra annotation</td> <td>0.792</td> </tr> </tbody> </table>
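<p>For readers, here is a minimal sketch of the input construction described above: 12 grayscale frames around the current one turned into an 11-channel stack of absolute frame differences on the GPU. Tensor shapes and names are assumptions for illustration.</p> <pre><code>import torch

def make_input(gray_frames):
    """gray_frames: float tensor of shape (12, H, W), already grayscaled,
    resized and moved to the GPU. Returns a (1, 11, H, W) batch."""
    diffs = (gray_frames[1:] - gray_frames[:-1]).abs()   # 11 absolute differences
    return diffs.unsqueeze(0)
</code></pre>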
DFL - Bundesliga Data Shootout
Team Hydrogen: 1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: DFL - Bundesliga Data Shootout <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>So actually it is <code>t=5</code> (3d) and <code>c=3</code> (2.5d) making it overall <code>15</code>.</p> <p>There are quite a few different ways of doing it and what works for a specific problem is experimental. In general you can reshape from 2D to 3D and back quite easily, so if you start with <code>bs x frames x channels x H x W</code> you can reshape to <code>bs*frames x channels x H x W</code> for the 2D part and then reshape back to 5 dimensions for the 3D part.</p> <p>Thanks a lot to Kaggle and the hosts for this very, very interesting competition. A 2-stage competition is always a thrilling experience, but then seeing your models generalizing so well to completely unseen data is really exciting.</p> <p>Our solution is based on a single-stage model combining 2.5D and 3D data to properly extract temporal information from the video data. This solution ranks first in both public and private leaderboards. Next, we want to give a high-level overview of this solution.</p> <h3>Validation setup</h3> <p>In this competition, we used a fixed train/validation split to tune our models: the validation videos used for this included 4ffd5986_0, 407c5a9e_1, 9a97dae4_1, and ecf251d4_0. Our best local score was 0.857, and we observed strong correlation between local validation and public leaderboard (LB). For our final submission, we retrained our best models on the full data that was available to us, this is a general procedure we always try to take in the end.</p> <h3>Model setup</h3> <p>As mentioned, we solely rely on a single-stage model architecture. The input for our models was a 1024x1024 grayscale image, with the channels stacked in the time dimension over three neighboring frames. We chose to use grayscale images rather than full color due to the improved generalization and runtime capabilities it offered - and it also seems to generalize extremely well to the unseen private data.</p> <p>The single-stage architecture combined 2.5D and 3D techniques. The backbone of the model was either efficientnetv2_b0 or efficientnetv2_b1, as we found that larger backbones tended to overfit quickly on the small dataset. 3D convolution layers were only used in the last block and final convolution before pooling, which allowed us to stay in 2.5D for most of the backbone and achieve fast inference times with caching.</p> <p>In a bit more detail, if our current frame is <code>t=15</code>, we first stack three neighboring frames <code>{14,15,16}</code> as the channels of the input, and separately pass five different time steps (2 in each direction), each again stacked with three channels. If our time step would be 3, this would mean we pass <code>{8,9,10},{11,12,13},{14,15,16},{17,18,19},{20,21,22}</code> through the backbone, and the final 3D layer aggregates these 5 individual time steps with additional pooling.</p> <p>We also experimented with different stacks over the time domain and found that 15 frames (combined over 2.5D and 3D) worked best for our final submission.</p> <p>We applied the usual set of augmentations and used Mixup as well. The loss function was binary cross-entropy with three target columns. 
One area where we had room for experimentation and improvement was in how to set the hard or soft labels for training. In the end, our best solution used hard labels in a small window around the actual event.</p> <p>Most of our time during this competition was spent training on only the labeled regions of the competition data. Pre-training on some Soccernet labels improved the score by around 0.02, but we had no success with pseudo label pre-training on unlabeled regions and clips.</p> <h3>Inference</h3> <p>Increasing the inference resolution by 128 also slightly improved our cross-validation (CV) and LB scores. We spent a significant amount of time optimizing the inference kernel to run as efficiently as possible, utilizing both CPU cores with threading. We predicted only every second frame for a single model, and our final solution blended the results of two models, alternating between them frame by frame. We then applied post-processing to reduce the number of false positives.</p> <p>A single model had a runtime of 2.5 hours in the kernel, and processing a single video took 25 minutes. Our final solution with two models had a runtime of around 5 hours. There was potential for using even more models, but we opted to keep things as simple as possible.</p> <p>Big shoutout to my teammates <a href="https://www.kaggle.com/ybabakhin" target="_blank">@ybabakhin</a> and <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>!</p> <p>We also experimented quite a bit with TSM/GSM &amp; GRU etc., but in the end a more simple setup was more effective. The main benefit was that training time was much faster, so we could experiment more, specifically given that we started quite late into this competition and had to play catchup. I think there is a lot of overfit potential in this competition - insample scores of 0.99 are very quick to reach, and some easy gains can be made by playing with label and regularization setups, regardless of your whole architecture and approach.</p> <p>We are basically just using the PP from that kernel you linked with different optimized parameters.</p> <p>I would say 3 adjacent grayscale improved ~20pts, but we switched around 2 weeks before deadline to it and did not manage to try RGB again towards the end (we wanted to try stacking 3 RGB images) so it is hard to say.</p> <p>We trained most of the models on 3xA6000 - training on 512x512 is fairly fast and already gives very good results, it was around 2hours for 5 epochs. We did most tuning on 512 and then just scaled up. With larger resolution the runtime increases obviously.</p> <p>We also optimized it quite a bit, it is fun :)</p> <p>I don't need to recognize them specifically, you only really need another BG class if you use softmax.</p> <p>Oh my god haha, even B0 was too large I feel like. But it depends on the architecture.</p> <p>The stacking means that you concat e.g. frame 14,15,16 to 3 channels when looking at frame 15.</p> <p>We add lots of BG images, but dont add a separate 4th class, with BCE you can have all label columns zero to indicate BG.</p> <ul> <li>Yes ImageNet pretrained, and no freezing.</li> <li>All through the same backbone</li> <li>Pytorch all the time</li> <li>Mixup, apart from that nothing specific, random flip, bit of resizing</li> </ul> <p>I wonder why :)</p> <p>Added a few more details to the solution! 
If there are any more questions, please shoot (but don't score!).</p> <p>We are obviously very happy with our result and really impressed by the great generalization capabilities of our solution and the gap on the leaderboard.</p>
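<p>To illustrate the reshape pattern described above (push every 3-frame grayscale stack through a shared 2D backbone, then aggregate the time steps with a 3D convolution), here is a minimal sketch. The backbone name, head size, and pooling choice are assumptions for illustration, not the exact architecture.</p> <pre><code>import timm
import torch
import torch.nn as nn

class Stacked2p5D3D(nn.Module):
    def __init__(self, n_steps=5, n_classes=3):
        super().__init__()
        # 2D backbone shared by all time steps; returns unpooled feature maps
        self.backbone = timm.create_model("tf_efficientnetv2_b0", pretrained=False,
                                          in_chans=3, num_classes=0, global_pool="")
        self.agg = nn.Conv3d(self.backbone.num_features, 256, kernel_size=(n_steps, 1, 1))
        self.head = nn.Linear(256, n_classes)   # one logit per event class (BCE)

    def forward(self, x):                        # x: (bs, steps, 3, H, W)
        bs, t, c, h, w = x.shape
        feats = self.backbone(x.reshape(bs * t, c, h, w))       # (bs*t, F, h', w')
        f, fh, fw = feats.shape[1:]
        feats = feats.reshape(bs, t, f, fh, fw).permute(0, 2, 1, 3, 4)
        feats = self.agg(feats)                                  # (bs, 256, 1, h', w')
        return self.head(feats.amax(dim=(2, 3, 4)))              # global max pool

# dummy usage: 2 clips, 5 time steps, each a stack of 3 grayscale frames
logits = Stacked2p5D3D()(torch.randn(2, 5, 3, 128, 128))
</code></pre>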
DFL - Bundesliga Data Shootout
public 3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: DFL - Bundesliga Data Shootout <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I experienced similar things like training diverges or never learn anything especially when I haven't pretrained the weights. I used focal loss as it's designed for imblanced problem so it might helped, but other winner's still use BCE so it may not be necessary.</p> <p>I stacked 32, 64 or 128 frames depending on the model configuration .</p> <p>Congrats to all the winners, especially for Team Hydrogen( <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>, <a href="https://www.kaggle.com/ybabakhin" target="_blank">@ybabakhin</a>, <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>) and <a href="https://www.kaggle.com/kmat2019" target="_blank">@kmat2019</a>, your scores are really impressive!</p> <p>I’ll write up a summary of my solution.</p> <h2>Baseline</h2> <p>It ends up being a very simple end2end 2.5d model.<br> Backbone is EfficientNet with TSM[1], and head is simple 1D UNet.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1554318%2F0059bfdb6ffbaee44c3c3ac0a20e8030%2FUntitled%20Diagram.drawio.png?generation=1665851779731895&amp;alt=media" alt=""><br> But there are many tricks in both training and inference.</p> <hr> <h2>Training</h2> <h3>1. Pretrain with ball detection</h3> <p>Even though I used ImageNet pretrained model for efficientnet, inserting TSM module changes model structure drastically, so it was hard to control training. (ex. easy to overfit or diverge.) And intuitively it seems important to teach a model to focus around the ball, so I used ball annotation in the SoccerNet tracking dataset[2] to pretrain the model. </p> <p>Pretrain set up is following:</p> <ul> <li>Efnet + TSM (+ shallow conv2d head) </li> <li>Output is the same resolution as the final feature map of efnet, 1/32 scale heatmap. </li> <li>Label is 2d gaussian heat map like CenterNet, but didn’t add regression branch for simplicity </li> </ul> <p>At first I was not sure whether we can use SoccerNet or not, so I decided to annotate balls for the video clips by myself, it ended up being more than 20,000 frames. But after confirming we can use SoccerNet, I switched to it. Pretrain performance is better than mine, but the final score is not much different.</p> <p>Prediction is like this. I tried to use it as additional input to the next stage, but no successful result.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1554318%2F9c95367ddb5ea5437ec652ff2a9612d0%2Fepoch7_1606b0e6_0_5724_5786_ball.gif?generation=1665852280112233&amp;alt=media" alt=""></p> <h3>2.Label design</h3> <p>1d gaussian heat map +- 5 frames around the event. Here is an example for challenge at frame 13. (num_frames=16)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1554318%2Fb89a208458dc230637b4beaf60000c53%2FUntitled%20Diagram-Page-3.drawio.png?generation=1665852495588359&amp;alt=media" alt=""><br> Also create a binary mask to represent whether the frame is in eval interval or not.</p> <h3>3.Sampling</h3> <p>Half pos and half neg. 
Pos means at least 1 event in the label and neg means randomly sampled from eval interval. </p> <h3>4. Augmentation</h3> <p>Basic ones like RandomHorizontalFlip, RandomRotation, ColorJitter.<br> Also manifold mixup[3] works well. I applied mixup on image-wise features.</p> <h3>5. Model architecture</h3> <ul> <li>EfficientNet + TSM at the end of each blocks</li> <li>Simple 1d UNet (some variants have LSTM and GRU at the end)</li> <li>LayerNorm instead of BatchNorm in the UNet (BN didn’t converge maybe due to small batch size)</li> </ul> <h3>6. Loss</h3> <p>Base loss is focal loss[6]. <br> The tasks for play, challenge and throwin are actually independent. But challenge and throwin are difficult to learn as there are much fewer samples than play. So I adjusted the alpha factor in focal loss based on the number of samples in each class. It’s inspired by class balanced loss[7], but the formulation is modified. This stabilizes training, especially before using the ball pretrained model.</p> <pre><code>def _calc_weight(self, samples_per_class, beta=0.9999): effective_num = 1.0 - beta**samples_per_class weights = (1.0 - beta) / effective_num weights = weights / torch.sum(weights) weights = weights.unsqueeze(0).float() return weights.cuda() </code></pre> <h3>7. Fine tune with long sequence</h3> <p>After training with a relatively short clip like 32 frames, I fine tune the model with a long sequence like 128 frames, but with freezing backbone so that it fits in gpu memory.</p> <h3>8. Others</h3> <ul> <li>LR warmup was important somehow</li> <li>EMA[4]</li> <li>Gradient checkpointing[5] helps to fit large model in memory</li> <li>Mixed precision + cast image to float16 before feeding to the model</li> </ul> <hr> <h2>Inference</h2> <h3>1.Video loading</h3> <p>cv2.VideoCapture + multi worker pytorch dataloader doesn't work well. Video decoding can be easily bottleneck, so I brought imutils[8] library’s threading implementation and modified it a bit. <br> Also, I used multi image sizes [(540, 960), (480, 800), (360, 640)]. It’s faster to create small images by resizing (540, 960) image after transferring it to the GPU.</p> <h3>2.Overlapping predictions</h3> <p>My model predicts 64 frames at the same time. But I realized that predictions around edges are relatively inaccurate, so I decided to drop them and take only center 48 frames out of 64 frames. Below diagram shows how it works. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1554318%2F11ee7e5b0bc00a2d178a0ed1f50af8a1%2FUntitled%20Diagram-Page-2.drawio.png?generation=1665852445476883&amp;alt=media" alt=""></p> <h3>3.Postprocess</h3> <p>Simply apply nms. Thresholds are [12, 6, 6] for [challenge, play, throwin] respectively.</p> <h3>4.Ensemble</h3> <p>To maximize the effect of the ensemble, I prepared various models with different folds.</p> <ul> <li>Image size is one of [(540, 960), (480, 800), (360, 640)] </li> <li>Backbone is one of [b2, b3, b4]</li> <li>Sequence length is one of [32, 64, 128] </li> </ul> <p>Single best model scores 0.768. 
<br> ((360,640), efnet-b2, seq_len=128, all data) <br> Ensembling 6 models with various combinations reaches 0.827.</p> <h2>What didn’t work</h2> <p>Too many to list.</p> <ul> <li>Crop around ball area</li> <li>Crop out no player area</li> <li>Feed ball heatmap as additional input</li> <li>Transformer head like ActionFormer or VideoMAE</li> <li>Larger backbone</li> <li>Pseudo label</li> <li>Many others…</li> </ul> <h2>Reference</h2> <p>[1] <a href="https://github.com/MIT-HAN-LAB/temporal-shift-module" target="_blank">https://github.com/MIT-HAN-LAB/temporal-shift-module</a><br> [2] <a href="https://github.com/SoccerNet/sn-tracking" target="_blank">https://github.com/SoccerNet/sn-tracking</a><br> [3] <a href="https://arxiv.org/abs/1806.05236" target="_blank">https://arxiv.org/abs/1806.05236</a><br> [4] <a href="https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/utils/model_ema.py" target="_blank">https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/utils/model_ema.py</a><br> [5] <a href="https://github.com/cybertronai/gradient-checkpointing" target="_blank">https://github.com/cybertronai/gradient-checkpointing</a><br> [6] <a href="https://arxiv.org/abs/1708.02002" target="_blank">https://arxiv.org/abs/1708.02002</a><br> [7] <a href="https://arxiv.org/abs/1901.05555" target="_blank">https://arxiv.org/abs/1901.05555</a><br> [8] <a href="https://github.com/PyImageSearch/imutils" target="_blank">https://github.com/PyImageSearch/imutils</a> </p> <p>The code may not be easy to read, but I created the labels like this:</p> <pre><code>import numpy as np

def gaussian_kernel(length, sigma=3):
    x = np.ogrid[-length:length+1]
    h = np.exp(-(x**2) / (2 * sigma * sigma))
    h[h &lt; np.finfo(h.dtype).eps * h.max()] = 0
    return h

offset = 5
sigma = 3
# labels has 1 in the exact event frame and 0 otherwise. shape is (frame_len, 3).
for c, event in enumerate(EVENT_CLASSES):
    labels[:, c] = np.convolve(labels[:, c], gaussian_kernel(offset, sigma), mode='same')
</code></pre> <p>Like this:</p> <pre><code>import timm

# TemporalShift comes from the TSM repo [1]
model_name = 'tf_efficientnet_b2_ns'
num_frames = 32
n_div = 8
model = timm.create_model(model_name)
for i, block in enumerate(model.blocks):
    model.blocks[i] = TemporalShift(block, n_segment=num_frames, n_div=n_div)
</code></pre> <p><a href="https://www.kaggle.com/kyoshioka47" target="_blank">@kyoshioka47</a> <br> No, I inserted TSM after every block.</p>
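<p>As a small illustration of the overlapping-prediction trick above (predict 64 frames at a time but keep only the center 48), here is a sketch; the predict_fn interface and the edge handling are assumptions for illustration.</p> <pre><code>import numpy as np

def sliding_predict(predict_fn, n_frames, window=64, keep=48):
    """predict_fn(start, end) is assumed to return (end - start, 3) per-frame scores."""
    edge = (window - keep) // 2                  # 8 frames dropped on each side
    out = np.zeros((n_frames, 3), dtype=np.float32)
    start = 0
    while start &lt; n_frames:
        end = min(start + window, n_frames)
        pred = predict_fn(start, end)
        lo = 0 if start == 0 else edge           # keep edges only at clip boundaries
        hi = pred.shape[0] if end == n_frames else pred.shape[0] - edge
        out[start + lo:start + hi] = pred[lo:hi]
        start += keep
    return out
</code></pre>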
Data Mining Hackathon on (20 mb) Best Buy mobile web site - ACM SF Bay Area Chapter
How did you do it?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Data Mining Hackathon on (20 mb) Best Buy mobile web site - ACM SF Bay Area Chapter <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yasser, could you elaborate on the string similarity that you used?</p> <p>What is TDM?</p> <p>First of all, congratulations to the winners!</p> <p>Now a question for all participants: How did you do it? Which methods and tools did you use? What worked? What did not work?</p> <p>I started with Foxtrot's Python-based solution, which basically does some normalization and suggests the most popular items for the given query, with the overall most popular items as fallback. Thank you for that contribution -- I think I would not have even entered the competition without reading this blog post ...</p> <p>I then added some functionality for cross-validation to the script, and started playing around.</p> <p>My last submission used weighted kNN based on the Jaro-Winkler &quot;distance&quot;, with some smoothing applied to the frequency statistics beforehand, and some category-specific changes to the query normalization function. The public leaderboard MAP@5 was 0.75717. Jaro-Winkler worked better for me than Jaro or normalized Levenshtein.</p> <p>Before that, I tried the most popular items for the given query, extended by the queries up to a certain Levenshtein distance threshold, which also gave decent results. Within this method, I tried out query normalization based on Soundex, Double Metaphone, and NYSIIS, but that was worse than the basic normalization.</p> <p>Coding was done in Python, using the Levenshtein, jellyfish, and fuzzy libraries.</p> <p>Details and source code will follow.</p> <p>How about you?</p>
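<p>For anyone who wants to try the idea, here is a minimal sketch of weighted kNN over query strings with Jaro-Winkler similarity via the jellyfish library (the function is called jaro_winkler_similarity in current releases). The value of k, the weighting, and the index layout are illustrative assumptions rather than the exact script described above.</p> <pre><code>from collections import Counter, defaultdict

import jellyfish

def build_index(train_pairs):
    """train_pairs: iterable of (normalized_query, sku) click events."""
    index = defaultdict(Counter)
    for query, sku in train_pairs:
        index[query][sku] += 1
    return index

def recommend(query, index, k=20, n_items=5):
    """Score items by similarity-weighted popularity of the k nearest queries."""
    neighbors = sorted(((jellyfish.jaro_winkler_similarity(query, q), q) for q in index),
                       reverse=True)[:k]
    scores = Counter()
    for sim, q in neighbors:
        for sku, count in index[q].items():
            scores[sku] += sim * count
    return [sku for sku, _ in scores.most_common(n_items)]
</code></pre>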
Data Mining Hackathon on BIG DATA (7GB) Best Buy mobile web site
How did you do it?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Data Mining Hackathon on BIG DATA (7GB) Best Buy mobile web site <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I ran kNN on the query strings.</p> <p><a href="http://www.kaggle.com/c/acm-sf-chapter-hackathon-small/forums/t/2777/how-did-you-do-it/14966">Same question as in the small dataset forum: How did you do it?</a></p> <p>I basically used the same approach as for the small dataset, except:</p> <ul> <li>there was no category-specific normalization </li><li>kNN computed from similar queries within the same category </li><li>if there were fewer than 5 items from the kNN, extend the list with most popular items by category, most popular (global) items by query, and globally most popular items </li></ul> <p>My last submission was 0.57142.</p> <p>Instead of cross-validation, I used a 10% split for validation, and sometimes a smaller subset for development. Overfitting was no problem, and the differences between validation results were good predictions for performance on the leaderboard.</p> <p>The Python script for the final submission took 21.5 minutes to run on my laptop, so no real need for a cluster or cloud computing to tackle this problem ... well of course experiments would still run faster if you had several machines at your fingertips ...</p>
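<p>The fallback chain described in the bullet list above can be sketched roughly as follows; the container names are placeholders rather than the script's actual variables:</p> <pre><code>def fill_to_five(knn_items, popular_by_category, popular_by_query, popular_global, category, query):
    preds = list(knn_items)
    fallbacks = (popular_by_category.get(category, []),
                 popular_by_query.get(query, []),
                 popular_global)
    for fallback in fallbacks:
        for sku in fallback:
            if len(preds) == 5:
                return preds
            if sku not in preds:
                preds.append(sku)
    return preds[:5]
</code></pre>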
Data Mining Hackathon on BIG DATA (7GB) Best Buy mobile web site
Solution to this competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Data Mining Hackathon on BIG DATA (7GB) Best Buy mobile web site <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><span>Congrats to everyone in this competition. I notice that the rule&nbsp;<span>states that:</span></span></p> <h3>WINNING SOLUTIONS MUST BE POSTED TO THE FORUMS</h3> <p>And some teams have presented their solutions to the small version of this competition in the forums. Here I present our solution to the big version of this competition in this topic.</p> <p>We use naive bayes as our algorithm in this competition. The features we used are:</p> <ol> <li>query </li><li>time </li><li>user </li></ol> <p><strong>A.</strong> We want to know the probability 𝑝(𝑖│𝑐) that a user clicks sku 𝑖 in context 𝑐. We use naive bayes to predict this probability, then select the 5 items with the highest predicted probability as the prediction in context 𝑐. Here context 𝑐 is the query 𝑞.</p> <p>The first context we used is the query. By naive bayes, we have:</p> <div style="padding-left:30px">𝑝(𝑖│𝑐)∝𝑝(𝑖)×∏𝑝(𝑤_𝑘 |𝑖)&nbsp;</div> <div>Here 𝑝(𝑖) is the prior, and we use the frequency with which item 𝑖 appears in its category as the prior.</div> <div>∏𝑝(𝑤_𝑘 |𝑖) is the likelihood, and 𝑝(𝑤_𝑘 |𝑖) is the probability that word 𝑤_𝑘 appears when we see item 𝑖.</div> <div><strong>B.</strong> We use time information in our model. We divided the data into 12-day time periods based on click_time, then used a smoothed frequency of item 𝑖 appearing in its category in its time period as the prior in A.</div> <div><strong>C.</strong> We also use a bigram model to improve naive bayes. The ∏𝑝(𝑤_𝑘 |𝑖) likelihood in A uses word conditional probabilities. We generate a bigram model and use a naive bayes model to fit it too. We generate bigram data for each query 𝑞 as follows:</div> <div> <ol> <li>suppose the user queries “xbox call of duty” </li><li>reorder to “call duty xbox&quot; in alphabetic order (with &quot;of&quot; eliminated as a stopword) </li><li>bigram: [”call duty”, ”call xbox&quot;, “duty xbox“] </li><li>use naive bayes to fit it, the same as above. </li></ol> </div> <div>Then we use a linear combination to ensemble the unigram prediction and the bigram prediction.</div> <div><strong>D.</strong>&nbsp;We did some data processing. The first is query cleaning. In the big version of this competition, we did 1) lemmatization; 2) splitting of combined words and numbers such as &quot;xbox360&quot; into &quot;xbox 360&quot;. We also did some word correction in the small version of this competition.</div> <div><strong>E. </strong>We rank the items that the user has already clicked for query 𝑞 lower. Suppose we have the prediction a,b,c,d,e for query 𝑞 and user <em>i</em> has clicked items a,c. We rerank the prediction to b,d,e,a,c for user <em>i</em>.</div> <div><strong>F.</strong> We use Python to implement all algorithms. The only 3rd-party lib that needs to be installed is nltk; we use it for lemmatization in data processing. The rest is pure Python. The version of Python is 2.7, as there is one function not supported in versions 2.6 and below. For more information, check readme.txt in the source code. Please download the second attached file,
because I could not find any way to delete the first attached file in the system.<br> &nbsp;</div>
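<p>To make step C concrete, the bigram construction for a query can be sketched as below; the stopword set is an assumption, not the team's exact list:</p> <pre><code>from itertools import combinations

STOPWORDS = {"of", "the", "and"}

def make_bigrams(query):
    # keep non-stopword tokens, sort them alphabetically, then pair them up
    tokens = sorted(t for t in query.lower().split() if t not in STOPWORDS)
    return [" ".join(pair) for pair in combinations(tokens, 2)]

print(make_bigrams("xbox call of duty"))  # ['call duty', 'call xbox', 'duty xbox']
</code></pre>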
Data Science Bowl 2017
7th Place short overview.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Data Science Bowl 2017 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! Including the probability maps lowered the error on our local holdout set from around 0.46 to 0.44 and significantly reduced training time. The networks trained with probability map inputs needed half of the training epochs to converge. This was especially helpful for the 3D networks, which train for about 10 hours. We have two main hypotheses for why the maps improve performance:</p> <p>1) Since we trained the segmentation network on a larger patch of the lung slice, the probability map can contain more context information than the actual crop, which the classifier has as input. Here is an example for illustration: The segmentation network can recognize a certain nodule because of its characteristic distance to the lung wall and draws a higher/larger/... probability map. When cropping the nodule with a smaller crop size, the wall vanishes as well as the crucial hypothetical distance feature. This would lead to a worse prediction. The same logic applies, for example, to pairs of nodules, where the existence of a second nodule raises the probability for the first one. By including the probability map of the segmentation network, the "out of crop range" features can still be considered by the classification network. </p> <p>2) The segmentation map can be used as an "attention" mechanism for the second classification network. By looking at the segmentation map the classification network has a trivial indicator of which pixels in the input are the most important ones - namely the ones where the probability map channel has non-zero values. Since we have a very limited amount of training data this can counteract overfitting, since it favors the generalizing solution (looking at the nodules) for classifying cancer, instead of remembering e.g. characteristic pixels in the upper left corner of the input and simply remembering that this patient has cancer (which of course would not translate to unknown data).</p> <p>Of course these reasons are just very rough first ideas, but they may give an intuition about why the maps actually can help. Your argument about the very extreme values either at 0 or 1 (if true on all maps, I haven't checked that yet) makes my first point less likely, but in principle even the difference between 0.95 and 0.99 in the map could make a difference. One could actually test that easily by setting everything above 0.9 to 1.0 and measuring the performance again.</p> <p>Thanks! The machine we ran the final pipeline on was an Ubuntu 14.04 system with 32GB of RAM, an i7-4930K and a single GTX 1080. The entire preprocessing + network training + prediction takes around 72 hours on this machine.</p> <p>Thanks! 1) Yes, we used a lung segmentation to reduce the computation load. We did it with a segmentation network, for which we annotated around 40 lungs manually. We then cropped each lung to the smallest rectangle which contains both lung wings. Using a network was actually the fastest way for us to get robust results. 2) We used all slices that section the nodule annotation centroid for training, not just the center slice. From these sectioning slices, we cropped a 128x128 pixel area randomly.
We resampled all scans to 1x1x1 mm resolution for the segmentation.</p> <p>Hello guys, here we provide a first, quick overview of the 7th place solution. We will follow that up with a more detailed write-up; this just serves as a short methodological overview. Congrats to the top performing teams!</p> <p><strong>DL package</strong>: We implemented all networks directly in tensorflow on ubuntu 14.04 machines.</p> <p><strong>Approach</strong>: We solely relied on the nodule annotations of LUNA/LIDC and the dsb cancer labels for building our framework. The general approach can be divided into 3 main steps, namely:</p> <p><strong>1) Segmenting nodules:</strong> We first trained a segmentation network (standard UNET architecture) with the LUNA/LIDC information to segment lung nodules. The network was trained on random 128x128 crops of lung slices which were sampled randomly along all 3 spatial axes (not only axial) with the corresponding nodule annotation mask (we used elliptical annotations). We just sampled slices which section a nodule annotation. We treated every sampled slice as an independent data record. At prediction time we apply this network to all slices of a CT Scan <strong>along all 3 spatial axes</strong>. This leaves us with three 3D probability map tensors, which we simply average. This results in a single 3D probability map tensor for each lung CT scan.</p> <p><strong>2) Find nodule candidates:</strong> We use an iterative clustering algorithm to find clusters in the 3D probability clouds from step 1). The main idea here was to threshold the 3D probability maps from the segmentation network with a low value (0.2) and run a clustering. This will result in probability cloud clusters of arbitrary size. If the size of a cluster exceeds the plausible nodule size, we further increase the threshold in this cloud and again run the clustering to detect single or multiple subclusters. We repeat this process until every probability cloud cluster is "condensed" into a single cluster of plausible nodule size. The center of these final clusters is used to extract a 64x64x64 region from this location in the lung. This will result in &lt;100 candidates for each lung. We sort the candidate tensors by the sum of the probability contained in the 64x64x64 crop and <strong>only use the first 20 candidates</strong> for the final step.</p> <p>2.1) (optional / we wish we had) At this step it would be straightforward to apply a malignancy classification network to each of the extracted candidates and obtain a more meaningful candidate set for the final classifier. Unfortunately, we did not notice the malignancy information contained in the LIDC dataset. There is a good chance for improvement here.</p> <p><strong>3) Predict cancer on patient level</strong>: From step 2) <strong>we obtain 20 64x64x64 candidates for each patient</strong> in the dsb dataset. We use those candidates to train a 3D resnet for cancer classification. We use not only the pixel information from the lung CT scan but also the probability map information from the segmentation network as a second input channel. The intuition behind including the probability map from the segmentation network is that the segmentation is trained on larger patches (128x128) and has, therefore, more context information than the stage 2 classifier.
By including the probability map the stage 2 classifier is able to reuse the extracted context information of the segmentation network, which is contained in the predicted segmentation maps.</p> <p>We <strong>feed the 20 candidates simultaneously</strong> into the network. This means a single input record has the dimension (20,64,64,64,2), where the 1st dimension stands for the 20 candidates, the 2nd-4th are the spatial dimensions, and the 5th dimension encodes the two channels (lung CT pixel information, 3D probability output from the segmentation net in stage1). We assign the dsb patient label to this 5D tensor, resulting in the following <strong>datarecord-label-pair: (20,64,64,64,2) &lt;-&gt; cancer label</strong>.</p> <p>The network architecture is a 3D convolution network designed in a way that it reuses its weights in the candidate dimension. We did that by simply reshaping the candidate dimension into the batch dimension while training/predicting: (Batchsize,20,64,64,64,2) -&gt; (Batchsize x 20,64,64,64,2). We then apply a resnet 3D network, which convolves the 3 spatial dimensions and reduces the input tensor with shape (Batchsize x 20,64,64,64,2) to an output tensor with dimension (Batchsize x 20, 1). On this tensor, we apply a sigmoid function to normalize the values. We reshape this tensor back to (Batchsize, 20) and <strong>apply a reduction function</strong> to arrive at a single probability for each patient. We experimented with a few different reduction functions, for example, Noisy-AND/Noisy-OR, but in practice, a simple maximum function performed the best. The max(tensor, axis=1) function reduces the (Batchsize, 20)-tensor to a (Batchsize, 1)-tensor. The intuition behind reducing the 20 candidate probabilities with the maximum function to a single patient probability is that the most "cancerous" nodule also determines the cancer likelihood of the whole patient. This final output scalar will be the prediction for a patient. This setup is directly end-to-end trainable from X candidates per patient to a single prediction per patient.</p> <p><strong>Additional notes on the multicandidate classifier</strong>: It is important to use a reduction function which is <strong>invariant to permutation of the candidates</strong> (Max, Average, Noisy-AND, ...) since their ordering is expected to be imperfect due to the fact that the segmentation network which produces the candidates is trained to predict <strong>all</strong> nodules, not only malignant ones. So when e.g. using an MLP to reduce the (Batchsize, 20) tensor to a single (Batchsize, 1) output, the classifier can actually fail when permuting the 20 candidates, since an individual weight is used for every candidate. A simple example with 5 candidates, where only one correlates with the actual patient label: A (0, cancer, 0, 0, 0) input should produce the same prediction as a (cancer, 0, 0, 0, 0) input. When having a relatively small dataset where it's necessary to combine multiple equivalent candidates, one can greatly reduce overfitting effects by making the classifier invariant to those candidate permutations.</p> <p><strong>General Model Optimization:</strong> We spent nearly no time optimizing the network architectures or optimizer parameters, since we simply ran out of time and the runs are lengthy.</p> <p>We used the sklearn implementation of DBSCAN: <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html">http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html</a></p>
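<p>To make the candidate-folding and max-reduction step concrete, here is a minimal sketch. Note that the team implemented everything directly in TensorFlow; this PyTorch-style snippet (with channels-first ordering) is only meant to illustrate the reshaping trick and the permutation-invariant reduction:</p> <pre><code>import torch
import torch.nn as nn

class MultiCandidateClassifier(nn.Module):
    def __init__(self, backbone_3d):
        super().__init__()
        # backbone_3d: any 3D CNN mapping (N, 2, 64, 64, 64) to (N, 1) logits
        self.backbone = backbone_3d

    def forward(self, x):
        # x: (batch, 20, 2, 64, 64, 64) - 20 candidates, 2 channels (CT, probability map)
        b, c, ch, d, h, w = x.shape
        x = x.reshape(b * c, ch, d, h, w)           # fold candidates into the batch dimension
        logits = self.backbone(x).reshape(b, c)     # one score per candidate
        probs = torch.sigmoid(logits)
        return probs.max(dim=1).values              # permutation-invariant patient probability
</code></pre>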
Data Science Bowl 2017
9th place solution - Deep Breath
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Data Science Bowl 2017 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You can read a post about our approach here: <a href="https://eliasvansteenkiste.github.io/machine%20learning/lung-cancer-pred/">https://eliasvansteenkiste.github.io/machine%20learning/lung-cancer-pred/</a></p> <p>It is an initial version, we will update it with more details later.</p>
Data Science Bowl 2017
Document and code of the grt123 team
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Data Science Bowl 2017 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://github.com/lfz/DSB2017/blob/master/solution-grt123-team.pdf">Document</a></p> <p><a href="https://github.com/lfz/DSB2017">Code</a></p>
Data Science Bowl 2017
Very quick 1st summary of julian's part of 2nd place solution.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Data Science Bowl 2017 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I trained on a backup computer with a GTX 980. <br> Since I was on 32x32x32 you did not need a heavy card.<br> I think having SSD harddrives was more important for high throughput.</p> <p>The viewer is a hasty job and depends on specific folder structures. It would be a pain to run.</p> <p>Not much, but I'd like to think that it just made the difference. There are about 10 cases in the trainset with a lot of strange tissue and no or small nodules. Of those 10 cases about 5 of them are cancer. So with strange tissue the chance of being cancer is higher. Instead of a hard negative, the classifier will raise the chance a bit. So instead of a big logloss penalty you get a milder penalty.</p> <p>Here is an example. <img src="https://retinopaty.blob.core.windows.net/ndsb3/tissue.png" alt="enter image description here" title=""></p> <p>Thanks,<br> Yes, we'll publish the code.<br> Everybody that wants to win a prize has to, if I understood correctly.<br> Just have to clean up a bit. :)</p> <p>Hello, thanks!<br> For the stage 1 public LB: logloss 0.43, AUC was 0.85.<br> For stage 2 (0.40) we do not have the labels..<br></p> <p><img src="https://retinopaty.blob.core.windows.net/ndsb3/roc_public_lb2.png" alt="ROC" title=""></p> <p>Hello, here is a very quick first summary of my part of our 2nd place solution. Of course a real writeup and code will follow.</p> <p><strong>Tools.</strong><br> I used Keras + Tensorflow on windows 64. It had just come out and I wanted to give it a go. For visualisation + debugging I used a specially built viewer. I used that one for checking labels, predictions etc.</p> <p><strong>General approach.</strong><br> The main insight for me (and as a coincidence Daniel) was to use <strong>the malignancy information from the LIDC database</strong> as labeled by the doctors and combine it with the LUNA16 negative labels. I trained up a 32x32x32 3d convnet to detect nodules AND predict malignancy in one go. I did <strong>not use segmented lungs</strong> but rather trained on the full images to not miss any nodules located near the edges.</p> <p>Furthermore it was apparently extremely important that you <strong>not let yourself get too distracted by the leaderboard</strong>. As Daniel said, "In CV we trust".</p> <p><strong>Preprocessing</strong><br> Scale everything to 1x1x1 mm. Make sure that all patients have the same orientation in both LUNA and NDSB. Connect the LIDC data to the LUNA data. I expressed all coordinates as 0.0 to 1.0 so that it was scale independent. I used my viewer to check that all positive and negative labels were correct. </p> <p><strong>Labels</strong><br> I used ALL positive LIDC doctor assessments as positive malignancy label info. Even when only one doctor saw something. The thought was that the malignancy info was still very valuable. Then I used the negative entries of candidatesv2.csv from LUNA16 as negative labels. Note that some overlapped with nodules that were tagged by only one doctor. To train on the full images I sampled extra negative labels on non-lung tissue. I found that tissue by taking the segmenter from the forum and sampling around the mask edges.
Then I trained a first network and predicted on the LUNA set. All false positives I added as extra negative examples. All in all ca. 400,000 candidates, 100,000 non-lung tissue, 10,000 false positives, 2,000 LIDC positives.</p> <p>For my 2nd model I predicted on the NDSB data and then played "doctor julian" and manually annotated positive nodules and false positives and added those to the trainset.</p> <p>Another important observation was that in LUNA16 nodules &gt; 3cm (masses) were NOT labeled. So I excluded negative labels that overlapped these masses since it might confuse the network.</p> <p><strong>3D convnet nodule, malignancy detector.</strong><br> I trained a 32x32x32 3D convnet on extracted, labeled crops. The architecture was a butchered version of the C3D architecture, which is basically VGG like. I spent very (too?) little time optimizing the network architecture.</p> <p>Somehow I made it do multitask learning. So I trained on Nodule Y/N and Malignancy in one go. This worked best for me but Daniel did it in 2 stages.</p> <p>The labels were very unbalanced so I upsampled the positive cases to 1:20. Heavy (lossless) augmentation somehow worked. I'm still a bit surprised how few positive cases were needed.</p> <p>Using the trained network I predicted on the NDSB data using a 12 mm stepsize which gave a lattice with nodule malignancies. For every patient I only used the max malignancy and the z-location. I did this at 3 zoom levels to also be able to detect bigger nodules.</p> <p><strong>Strange tissue detector.</strong><br> A small part of the solution was a "strange tissue" detector. I basically used some examples of strange tissue from LUNA, NDSB, and a few examples from the extra data thread. I labeled them with the same program I used for the 2nd NDSB model. Then I trained a small U-Net. This net gave per patient the amount of strange tissue. I also tried the same trick for overall holistic info and emphysema but they did not have any significant impact.</p> <p><strong>Final classifier.</strong><br> I used Xgboost on the max malignancy at 3 zoom levels, the z-location and the amount of strange tissue. I used a min_child_weight of 60 to prevent overfitting. Local CV was around 0.39 but the LB had a will of its own and I had a very hard time finding improvements that improved both local CV and LB.</p> <p><strong>Daniel's models.</strong><br> I joined with Daniel Hammack and we both were convinced we had an original idea. Turned out that we both used malignancy info from LIDC. However, his network architecture was 64x64x64, resnet like and used 2 stages. Also he used extra nodule info from LIDC. So in the end our models were still quite complementary. More info on his stuff will be in his writeup. My strategy was to average the models into one final solution. </p> <p><strong>My thanks to Daniel.</strong><br> He really kept his cool when the LB was in complete chaos with all the overfitting taking place. You are extremely bright and much more sophisticated than me in your ML approach :)</p> <p><a href="https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI#31cc3a04bc704e7f88d81c8df1a9e662">This link</a><br> Then detailed description.<br> Then "XML File Documentation"<br></p> <p>@sh1ng Loss for nodule yes/no: binary cross entropy.<br> Loss for malignancy: squared error.<br> Loss for tissue segmenter: Dice.<br> Predictions were not per 1 mm but I used a stride of 12 mm. So a 240x240x240 patient scan gave 20x20x20 predictions.
Yes, one for malignancy and one for nodule yes/no.</p> <p>I will post some screenshots..</p> <p>@FangzouLiao Yes.. 32x32x32 was a bit small but combined with different zoom levels it was quite adequate. The 0.429 LB was a result of blending our models. Best single model was 0.441 LB, 0.385 CV. Somehow blending Daniel's stuff with mine did well on the LB. However.. I really struggled with the LB. Better CV usually gave worse LB and vice versa. I looked for directional improvements for both at the same time. Daniel's mantra was "In CV we trust" :)</p> <p>I was not clear about that.<br> I trained the net on 1x1x1 mm per pixel.<br> Since the receptive field size was 32x32x32 it sometimes missed larger nodules.<br> So during prediction I predicted on 3 scales:<br></p> <ul> <li>1x1x1 mm per pixel.</li> <li>1.5x1.5x1.5 mm per pixel.</li> <li>2x2x2 mm per pixel.</li> </ul> <p><br> I used the same network every time.<br> It was a cheeky fix but somehow it worked. </p>
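<p>As a rough illustration of the final classifier described above, the XGBoost step could look like the sketch below. Only min_child_weight=60 is taken from the write-up; the feature/column names, the dataframes and the remaining hyperparameters are placeholders:</p> <pre><code>import xgboost as xgb

features = ["max_malignancy_zoom_1mm", "max_malignancy_zoom_15mm", "max_malignancy_zoom_2mm",
            "z_location", "strange_tissue_amount"]

clf = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=3,
    learning_rate=0.05,
    min_child_weight=60,            # heavy regularization to prevent overfitting
    objective="binary:logistic",
)
clf.fit(train_df[features], train_df["cancer"])     # train_df/test_df are hypothetical frames
cancer_probs = clf.predict_proba(test_df[features])[:, 1]
</code></pre>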
Data Science Bowl 2017
docker from 0.465 team
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Data Science Bowl 2017 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners and everyone! </p> <p>Here's a solution from the 0.465 team, packaged in nviddia-docker image which can be directly used to make predictions.</p> <p><a href="https://github.com/aaalgo/plumo">https://github.com/aaalgo/plumo</a></p> <p>Sample output package of one typical case from the SPIE dataset (100MB), for interactive viewing with a browser.</p> <p><a href="http://aaalgo.com/plumo/plumo-example.zip">http://aaalgo.com/plumo/plumo-example.zip</a></p> <p><img src="http://www.aaalgo.com/plumo/plumo1.png" alt="enter image description here" title=""> <img src="http://www.aaalgo.com/plumo/plumo2.png" alt="enter image description here" title=""> <img src="http://www.aaalgo.com/plumo/plumo3.png" alt="enter image description here" title=""></p> <p>@Johnny "ft model" is for feature extraction. The TCIA-LIDC dataset has expert annotation of each nodule. In addition to the nodule contour, the annotation includes a list of features for each nodule, e.g. subtlety, sphericity, margin, etc. The "ft model" is trained with such annotation and can be used to extract a feature vector from each nodule detected.</p>
DecMeg2014 - Decoding the Human Brain
Model Upload and Required Documentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: DecMeg2014 - Decoding the Human Brain <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello all,</p> <p>Thanks for a great competition! &nbsp;Attached is documentation and associated code of the model used to produce the 3rd-place submission. &nbsp;My code is released under the GNU GPLv3, as provided in the zip folder. &nbsp; &nbsp;For any 3rd-party code, i.e. SVM-Light, please refer to the licenses included in&nbsp;the corresponding&nbsp;folders. &nbsp;</p>
Deepfake Detection Challenge
12th place in a nutshell
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Models whose size is in bytes are meta-models; they take a bunch of pretrained effnet outputs as input and generalize them. Look at the second cell of train_stacked_model.ipynb, its structure is there.</p> <p>Thanks!</p> <p>UPD: uploaded all pretrained models to <a href="https://github.com/1M50RRY/dfdc-kaggle-solution/tree/master/inference/pretrained">github</a></p> <p><em>Disclaimer: It is my first experience in a CV competition, so I apologize if my code or approach offends anyone.</em> </p> <h2>Data preparation</h2> <ul> <li>1st dataset: extracted a single random frame from each video</li> <li>2nd dataset: extracted 10 random frames from each video (used for stacking)</li> <li><strong>Blazeface</strong> for face extraction, from the <a href="/humananalog">@humananalog</a> kernel</li> <li>output images resized to 256x256</li> <li>main augmentations are jpeg compression and downscale with a chance of 0.5</li> <li>secondary augmentations (not all were used at the same time): blur, gaussian noise, random brightness, horizontal flip</li> <li>classes were balanced by oversampling for single model training and by undersampling for stacked model training</li> </ul> <h2>Models</h2> <ul> <li><strong>EfficientNet B0-B6</strong> (imagenet)</li> <li><strong>EfficientNet B1-B6</strong> (noisy student)</li> </ul> <p>EfficientNet gave me a huge boost on the LB, but it was more important to choose the augmentations and hyperparameters right for this kind of model.</p> <h2>Training and Validation</h2> <p>I took data chunks #40-49 as validation data and it correlated with the LB pretty well (about 10% difference); input size is 256x256 for every model.</p> <ul> <li>single model training: input size depends on the model (from 224 to 260), adam 1e-4, StepLR with step 2 and gamma 0.1</li> <li>stacked model training: input size 256x256, adam 1e-3</li> </ul> <p>Every model took about 3-5 epochs to train before it started overfitting.</p> <h2>Stacking &amp; Ensembling</h2> <p>I trained a bunch of decent single models with public LB 0.38-0.4 on the first dataset and then combined different models into a stacking ensemble using the second dataset. </p> <p>Then I took the best 5 stacked models from this experiment and their validation scores and made a frame-wise weighted ensemble.</p> <h2>What didn't work for me</h2> <ul> <li><strong>MesoNet, RNN</strong> - underfit on train data</li> <li><strong>Strong augmentations</strong> (e.g.
cutout, huesaturation, 90 rotate)</li> <li><strong>Stacking of stacked models</strong> - seemed like it overfit, but not too much</li> <li><strong>FaceForensics++</strong> - the most I managed to get from this thing is 0.68 public LB</li> </ul> <h2>Links</h2> <ul> <li><strong>Refactored inference</strong> - <a href="https://www.kaggle.com/ims0rry/dfdc-inference-15-lb-private">https://www.kaggle.com/ims0rry/dfdc-inference-15-lb-private</a></li> <li><strong>Original inference</strong> - <a href="https://www.kaggle.com/ims0rry/inference-demo">https://www.kaggle.com/ims0rry/inference-demo</a></li> <li><strong>The whole project on github</strong> - <a href="https://github.com/1M50RRY/dfdc-kaggle-solution">https://github.com/1M50RRY/dfdc-kaggle-solution</a></li> </ul> <p>NS weights are ported to PyTorch in the <strong>timm</strong> library, which I used for some of the models</p>
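<p>The frame-wise weighted ensemble mentioned above could be implemented roughly like this; weighting by validation score is an interpretation, and the variable names are placeholders:</p> <pre><code>import numpy as np

def ensemble_video_prediction(frame_preds_per_model, val_losses):
    # frame_preds_per_model: list of arrays, each (n_frames,) with per-frame fake probabilities
    # val_losses: validation log loss of each stacked model (lower is better)
    weights = 1.0 / np.asarray(val_losses)
    weights /= weights.sum()
    stacked = np.stack(frame_preds_per_model)            # (n_models, n_frames)
    per_frame = (weights[:, None] * stacked).sum(axis=0)  # weighted average per frame
    return float(per_frame.mean())                        # video-level probability
</code></pre>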
Deepfake Detection Challenge
1st Place Removed Solution - All Faces Are Real Team
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="/cpmpml">@cpmpml</a>. We really appreciate all the support from the community. </p> <p>Unfortunately we can't experiment on the LB to prove any point. </p> <p>0.02 is not too much, but Flickrface has faces different from the ones that appear in the competition dataset, so it may help to generalize to new faces. Also, as I said, there are 3 more models in the blend of the best solution, so the difference could be coming from that model diversity.</p> <p>Thanks, and I really appreciate all the support here. Also I would like to hear all the answers.</p> <p>I can't agree more 😂 </p> <p>We extracted only 16 deepfake videos from Youtube with a Creative Commons license and I'm sure it made no difference in our scores, because the videos from Youtube are made using a different deepfake algorithm and also the competition dataset has 100k+ videos. </p> <p>Unfortunately late submissions are not enabled to test some hypotheses, but it's more probable that the 0.02 difference comes from better generalization in our models due to the Flickrface dataset or from the fact that we blended 3 additional models into our best solution.</p> <p>Yes, it's not my first time... and my teammate won the USD 1M Zillow competition. Maybe I'm cursed... people won't team up with me anymore 😭 </p> <p><strong>Kaggle, Facebook Host Team and Fellow Competitors:</strong></p> <p>First of all, we want to put on record our gratitude to Kaggle and the Facebook host team for putting the effort into creating the dataset and hosting this competition, and we give our congratulations to all eventual prize winners.</p> <p>We'd like to use this statement to further explain the circumstances which led to our winning solution being voided, and our position on the LB being moved in accordance with our second solution. </p> <p>In anticipation of a shake-up of the competition on the private LB, we prepared our two solutions, which finished with private LB scores 0.42320 and 0.44531 respectively. For the 0.44531 solution, which scored better on the public LB, we used competition data only and an unweighted mean of 12 models: this is the solution that enabled us to retain our 7th position on the LB. For our original winning solution (0.42320) we mixed 6 models trained using competition data with 9 models trained with some additional external data (our more adventurous submission).</p> <p>For our original winning model, we used the following additional data:</p> <ul> <li><p><strong>The flickrface dataset</strong>: we used a <a href="https://www.kaggle.com/xhlulu/flickrfaceshq-dataset-nvidia-resized-256px">resized version</a> of this dataset.
A few of these images had licenses which didn't allow commercial use, so in line with clarifications from Kaggle in the external data thread, we used the license information available from the original github to select and train <strong>only</strong> on images with license types that are acceptable for this competition (<a href="https://creativecommons.org/licenses/by/2.0/">CC-BY</a>, <a href="https://creativecommons.org/publicdomain/mark/1.0/">Public Domain Mark 1.0</a>, <a href="https://creativecommons.org/publicdomain/zero/1.0/">Public Domain CC0 1.0</a>, or <a href="http://www.usa.gov/copyright.shtml">U.S. Government Works</a>)</p></li> <li><p><strong>Youtube videos images</strong>: we manually created a face image dataset from a handful of youtube videos with <a href="https://support.google.com/youtube/answer/2797468?hl=en-GB">CC-BY</a> license, which explicitly <a href="https://creativecommons.org/licenses/by/3.0/">allows for commercial use</a>.</p></li> </ul> <p>We chose these data sources with the belief that they met the rules on external data, specifically that external data must be <em>"available to use by all participants of the competition for purposes of the competition at no cost to the other participants"</em>, and the additional statements in the external data thread that they must be available for commercial use and not restricted to academics etc.</p> <p>However, in our discussions with Facebook and Kaggle, we were told that despite fulfilling this we were contravening the rules on Winning Submission Documentation:</p> <blockquote> <p>WINNING SUBMISSION DOCUMENTATION (Section 4 of the Competition-Specific Rules) In addition to compliance with the Kaggle Documentation Guidelines at <a href="https://www.kaggle.com/WinningModelDocumentationGuidelines">https://www.kaggle.com/WinningModelDocumentationGuidelines</a>, the winning submission documentation must conform with the following guidelines:</p> <p>A. If any part of the submission documentation depicts, identifies, or includes any person that is not an individual participant or Team member, you must have all permissions and rights from the individual depicted, identified, or included and you agree to provide Competition Sponsor and PAI with written confirmation of those permissions and rights upon request.</p> <p>B. Submission documentation must not infringe, misappropriate, or violate any rights of any third party including, without limitation, copyright (including moral rights), trademark, trade secret, patent or rights of privacy or publicity.</p> </blockquote> <p><strong>Specifically, we were asked to provide "additional permissions or licenses from individuals appearing in [our] external dataset"</strong>. Unfortunately, since the data was from public datasets, we didn't have specific written permission from each individual appearing in them, nor did we have any way of identifying these individuals. We didn't realise while competing that external data in this competition falls under 'documentation' as well as the external data rules, so we did not secure these permissions from individuals depicted above and beyond the licensing requirements. </p> <p>We suspect that most competitors also did not realise these additional restrictions existed - we are unable to find any data posted in the External Data Thread which meets this threshold with a brief scan. 
During the competition, the rules on external data were repeatedly clarified, so this leaves us wondering why Kaggle never took the opportunity to clarify that external data must additionally follow the more restrictive rules for winning submission documentation.</p> <p>An additional concern brought to us was that <strong>Facebook felt some of our external data "clearly appears to infringe third party rights" despite being labelled as CC-BY</strong> (it's not clear what data they were referring to specifically). Even if this were the case, it seems unreasonable to us that a Kaggle team should have to trace and verify that someone who publishes a dataset themselves has the rights to do so, and that we should have to engage rights clearance services in order to make a competition submission - it was suggested to us that we could have run our external data past our lawyer before making our submissions.</p> <p><strong>While we feel that these extra rules could have been made clear during the competition, and we hope that Kaggle will begin to clarify these rules in future competitions, we understand that there is little we can do in this instance.</strong> We have had a constructive call with both Kaggle and Facebook which we thank them for. After this call, it was agreed that, because we did not knowingly seek to undermine any rules, our submission that did not use any external data should be allowed to remain and only the winning submission would be disqualified.</p> <p>That being said, we are very disappointed by this outcome after spending so many months on the competition. <strong>Successful Kaggle competitions rely on a trust between competitors and Kaggle that the rules will be fairly explained and applied, and this trust has been damaged.</strong> We welcome any thoughts from the community on this matter.</p> <p>Giba, Mikel, Yifan, Gary and Qishen <br> All Faces are Real</p> <p><a href="/humananalog">@humananalog</a> what about using non-licensed images for pretrained models and face detectors? Do they infringe someone else's rights?</p> <ul> <li>images from Youtube = BAD</li> <li>images from pretrained model scraped from internet = GOOD</li> </ul> <p>That is the problem: either everything is bad or everything is good, there is no difference there.</p> <p>wondering where they grabbed the "real" deepfake videos to score the private test 🤔 </p> <p>People ask me why I got disqualified and I still don't know why. Every time we asked the hosts we got a different answer. </p> <p>Please, tell me: what is the "real" reason for disqualifying us?</p> <p>After reading your explanation it still seems completely unfair to disqualify our top solution.</p> <p>Taking into account what you said about "mis-licensed" videos, I must ask you about "mis-licensed" pre-trained models, especially the ones trained on images of people scraped from the internet.</p> <p>One example: the MTCNN face detection algorithm was used by many top teams. This model was trained with faces scraped from the internet.
It uses two datasets: </p> <ul> <li><p><a href="http://www.cbsr.ia.ac.cn/english/CASIA-WebFace/CASIA-WebFace_Agreements.pdf">CASIA-WebFace</a>: The images in this dataset are crawled from the internet and are for non-commercial use only.</p></li> <li><p><a href="https://www.robots.ox.ac.uk/~vgg/data/vgg_face2/">VGGface2</a>: This dataset is scraped from the internet and released under a CC license, but it doesn't hold individual licenses for the people in the dataset.</p></li> </ul> <p>Are these datasets really allowed, and do they not infringe any license? Or does an open-source model become licensed after being trained on "mis-licensed" datasets?</p> <p>It would have been so easy if external data had not been allowed from the beginning...</p> <p>This is a misunderstanding from the journalist. Facebook didn't allow us to resubmit our best solution without the Youtube data. Certainly this would have scored much the same as our winning solution, 0.423. </p>
Deepfake Detection Challenge
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! </p> <p>I was really surprised that your submission that used external data with a CC-BY license led to its disqualification. At the beginning I wanted to collect CC-BY videos as well, as they seemed to be legit. I asked about that a lot of times. Because I did not receive a confirmation from the organizers I considered this data to be in a gray area (neither organizers nor host could provide definitive answers) and decided to be on the safe side. <br> Hopefully this situation will be a lesson for Kaggle to make the rules very clear for future competitions.</p> <ol> <li>32 frames</li> <li>only faces, used SSIM masks during my UNet experiments</li> <li>for B5 it was 30 epochs with 2500 steps each, batch size=80</li> </ol> <p>PolyLR - polynomial learning rate decay. It's usually used for semantic segmentation models like DeepLabV3. Cosine decay would be better I guess. </p> <p>Open-sourced my solution: <a href="https://github.com/selimsef/dfdc_deepfake_challenge">https://github.com/selimsef/dfdc_deepfake_challenge</a></p> <p>1 - GlobalAvgP is a default choice for classification. Actually I tried GlobalMaxP and GWAP - they were not better. 2 - no, did not freeze anything</p> <p>I did not know it existed, otherwise I would have reused it. The only difference is that I used different interpolations for scaling up/down. </p> <p>It is calculated between the real video and the fake video. Obviously this info is not available during inference. The only way is to predict this mask with a U-Net, but I did not get any improvement from segmentation. </p> <p>When performing image dropout augmentations I checked that they don't delete all the pixels that make a fake face really fake, i.e. I left some part of the artefacts on the image, otherwise this augmentation would hurt model performance. </p> <p>I used the timm library <a href="https://github.com/rwightman/pytorch-image-models">https://github.com/rwightman/pytorch-image-models</a> and fine-tuned noisy student weights. But ordinary imagenet weights gave the same results. I have not even tried audio fake detection as it seemed that the main purpose of the challenge is to detect faceswap related deepfakes. That was the right choice I guess. Though maybe other winning solutions use audio deepfakes, hard to say as they have not posted descriptions yet.</p> <ol> <li>used for U-Net experiments and partially for augmentations</li> <li>My intuition was that visual artefacts (strange eyes, mouth) are quite easy to spot. Therefore I needed to destroy them. In addition a lot of works use something like grid dropout (<a href="https://arxiv.org/abs/2001.04086">https://arxiv.org/abs/2001.04086</a>) etc. for better generalization. Here it applies as well, it just needs to be done carefully. </li> </ol> <p>Yes, I fine-tuned ImageNet pretrained weights. Though it would be interesting to compare results with training from scratch; the task is quite different, so maybe pretrained weights don't even matter here. </p> <p>I saved a checkpoint for every epoch, then selected the best ones by looking at the validation loss graphs (fake loss, real loss, loss)</p> <p>Yes, after the official results are announced.
</p> <p>First of all I read overviews of existing methods <a href="https://arxiv.org/abs/2001.06564">https://arxiv.org/abs/2001.06564</a> and <a href="https://arxiv.org/abs/2001.00179">https://arxiv.org/abs/2001.00179</a>. Then I investigated methods that seemed promising or came up with my own ideas.</p> <p><strong>Multitask-Learning</strong> (did not affect public leaderboard)</p> <ul> <li>segmentation with soft probabilities obtained from SSIM masks - it worked and highlighted artefacts correctly</li> <li>segmentation with thresholded SSIM diff masks</li> <li>segmentation with face convex hull masks</li> </ul> <p><strong>Using low level CNN features</strong> (did not affect public leaderboard) - <a href="https://arxiv.org/abs/1909.06122">https://arxiv.org/abs/1909.06122</a></p> <p><strong>Using contrastive loss and pairwise learning</strong></p> <p><strong>CNN + RNN on features - was overfitting badly</strong></p> <p><strong>Different pooling layers</strong>, GWAP worked well enough but again no improvements, paper <a href="https://arxiv.org/pdf/1809.08264.pdf">https://arxiv.org/pdf/1809.08264.pdf</a>, <a href="https://github.com/BloodAxe/pytorch-toolbelt/blob/c830e02eac3162f6066ccd9fd5b02c3c2cb19018/pytorch_toolbelt/modules/pooling.py#L45">https://github.com/BloodAxe/pytorch-toolbelt/blob/c830e02eac3162f6066ccd9fd5b02c3c2cb19018/pytorch_toolbelt/modules/pooling.py#L45</a></p> <p><strong>Using deep and shallow networks simultaneously</strong></p> <p><strong>Multicrop network</strong> - i.e. 5 crops from image: (topleft, topright, bottomleft, bottomright, center crop), for 256x256 got five 128x128 =&gt; concatenated CNN features + FC + classifier. </p> <p><strong>Self-supervision</strong></p> <ul> <li>real face warping + blending and changing the label to fake</li> <li>rotate image 0, 90, 180, 120 and predicting the angle as well</li> </ul> <p><strong>Data cleaning</strong> - used SSIM masks and removed failed fakes, or non-fakes from multiface videos - the loss on validation dropped dramatically, but it gave worse results on the public LB</p> <p>I regret that I have not tried NeXtVlad to achieve better video classification quality. </p> <p>btw <a href="/humananalog">@humananalog</a> I modified your kernel as a starter. Thanks for the nice work!</p> <p>I relied on local validation and the public leaderboard. Regarding the private set I don't know, as I have not used it. It could work though... </p> <pre><code>import cv2
import numpy as np
from skimage.measure import compare_ssim

score, score_mask = compare_ssim(real_img, fake_img, multichannel=True, full=True)
diff = ((1 - score_mask) * 255).astype(np.uint8)
diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F534152%2F05c52a369beab0a56db3f044c81a8073%2F34_0_diff.png?generation=1587728193334275&amp;alt=media" alt=""></p> <p>answered before <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/145721#819137">https://www.kaggle.com/c/deepfake-detection-challenge/discussion/145721#819137</a></p> <p>Actually no. If we consider that the private set has 50% synthetic data then the other top-5 competitors' models performed much better on organic videos than my models. One of the ways to achieve that (no brainer) is to add organic videos to the training set. </p> <p>I did not perform any augmentations on my local validation set. </p> <p>For SSIM masks and landmarks:</p> <ol> <li>used precomputed MTCNN landmarks to drop out eyes, nose, or mouth from images</li> <li>used precomputed SSIM masks to check that the half-image/grid dropout augmentation doesn't remove all fake data</li> <li>calculated convex hull landmarks with dlib on the fly to drop out half of the face</li> <li>used precomputed SSIM masks for UNet experiments</li> </ol> <p>For my training set I extracted 30 frames per video and used random sampling for each epoch.</p> <h1>Keep it simple</h1> <p>I used a frame-by-frame classification approach as many other competitors did. I tried a lot of other complex things but in the end it was better to just use a classifier.</p> <h3>Data preparation</h3> <ul> <li>extracted boxes and landmarks with MTCNN and saved them as json</li> <li>extracted crops in original size and saved them as png</li> <li>extracted SSIM masks with the difference between real and fake and saved them as png</li> </ul> <h3>Face-Detector</h3> <p>I used a simple MTCNN detector. The input size for the face detector was calculated for each video depending on the video resolution:</p> <ul> <li>2x resize for videos with the wider side less than 300 pixels</li> <li>no resize for videos with the wider side between 300 and 1000</li> <li>0.5x resize for videos with the wider side &gt; 1000 pixels</li> <li>0.33x resize for videos with the wider side &gt; 1900 pixels</li> </ul> <h3>Input size</h3> <p>As soon as I discovered that EfficientNets significantly outperform other encoders I used only them in my solution. As I started with B4 I decided to use the "native" size for that network (380x380). Due to memory constraints I did not increase the input size even for the B7 encoder.</p> <h3>Margin</h3> <p>When I generated crops for training I added 30% of the face crop size on each side and used only this setting during the competition.</p> <h3>Encoders</h3> <p>I tried multiple variants of EfficientNets at the beginning:</p> <ul> <li>solo B3 (300x300) - 0.29 public</li> <li>solo B4 (380x380) - 0.27 public</li> <li>solo B5 (380x380) - 0.25 public</li> <li>solo B6 (380x380) - 0.27 public (surprisingly it was worse than B5, and I did not try B7 until the last week of the competition)</li> <li>solo B7 (380x380) - 0.24 public </li> </ul> <p>In the end I used two submits:</p> <ul> <li>15xB5 (different seeds) with an averaging heuristic overfitted to the public LB, trained with standard augmentations - 10th place on private</li> <li>7xB7 (different seeds) with a more conservative averaging heuristic, trained with hardcore augmentations - 3rd place on private</li> </ul> <h3>Averaging predictions:</h3> <p>I used 32 frames for each video. For each model output, instead of simple averaging I used the following heuristic, which worked quite well on the public leaderboard (0.25 -&gt; 0.22 solo B5).</p> <pre><code>def confident_strategy(pred, t=0.87):
    pred = np.array(pred)
    size = len(pred)
    fakes = np.count_nonzero(pred &gt; t)
    # if enough frames are confidently fake, average only the confident fake predictions
    if fakes &gt; size // 3 and fakes &gt; 11:
        return np.mean(pred[pred &gt; t])
    # if most frames look clearly real, average only the confident real predictions
    elif np.count_nonzero(pred &lt; 0.2) &gt; 0.6 * size:
        return np.mean(pred[pred &lt; 0.2])
    else:
        return np.mean(pred)
</code></pre> <p>I.e. I used only confident predictions for averaging if they passed some thresholds. Though it worked well on the public leaderboard, to be safe for the second submit (3rd private) I used more conservative thresholds for real videos. </p> <h3>Validation strategy</h3> <p>I used folders 0-2 as holdout at the beginning.
But 400 videos from the public test had more correlation with the public LB and I switched to this approach.</p> <p>I tracked two log loss metrics (on averaged probabilities per video):</p> <ul> <li>logloss on real videos</li> <li>logloss on fake videos</li> </ul> <p>From the validation I can say the logloss on FAKE videos was much lower than on REAL, i.e. fakes were too easy to spot, which was not encouraging. I guess the public leaderboard also has this characteristic, as my validation had a strong correlation with the public LB. It is not the case on the private set, as it would contain new/different faceswap methods. </p> <h3>Augmentations</h3> <p>I used image compression, noise, blur, resize with different interpolations, color jittering, scaling and rotations:</p> <pre><code>def create_train_transforms(size=380):
    return Compose([
        ImageCompression(quality_lower=60, quality_upper=100, p=0.5),
        GaussNoise(p=0.1),
        GaussianBlur(blur_limit=3, p=0.05),
        HorizontalFlip(),
        IsotropicResize(max_side=size),
        PadIfNeeded(min_height=size, min_width=size, border_mode=cv2.BORDER_CONSTANT),
        OneOf([RandomBrightnessContrast(), FancyPCA(), HueSaturationValue()], p=0.7),
        ToGray(p=0.2),
        ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2, rotate_limit=10,
                         border_mode=cv2.BORDER_CONSTANT, p=0.5),
    ])
</code></pre> <h3>Generalization approach</h3> <p>I expected it wouldn't be enough for generalization and spent a few weeks working on domain-specific augmentations. </p> <p>I wanted to push models to learn the following properties:</p> <ul> <li>visual artifacts (models learn that easily even without any augmentations)</li> <li>different encoding of the face compared to the rest of the image. A big margin helps with that.</li> <li>face warping artifacts. A big margin helps with that as well. Related article <a href="https://arxiv.org/abs/1811.00656">https://arxiv.org/abs/1811.00656</a> to catch FWA.</li> <li>blending artifacts - here we need either to predict the blending mask (<a href="https://arxiv.org/abs/1912.13458">https://arxiv.org/abs/1912.13458</a>), which is not really possible since there is no ground truth mask and no big difference in pixels/SSIM on the edge due to blurring and other techniques used to reduce blending artifacts, or to come up with augmentations that <strong>destroy</strong> visual artifacts.</li> </ul> <p>To catch face blending artefacts:</p> <ol> <li>removed half of the face horizontally or vertically (used dlib face convex hulls)</li> <li>blacked out landmarks (eyes, nose or mouth), using MTCNN landmarks for that</li> <li>blacked out half of the image; to be safe I checked that it would not delete the highly confident difference from the masks generated with SSIM</li> </ol> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F534152%2Fcd745d9fb666f2316ac699165d629c4f%2Faugmentations.jpg?generation=1587717153025245&amp;alt=media" alt=""></p> <p>I don't really know if it helped on the private leaderboard though.</p> <h3>Training schedule</h3> <ul> <li>sampled fake crops based on the number of real crops, i.e. <code>fakes.sample(n=num_real, replace=False, random_state=seed)</code></li> <li>2500 iterations per epoch</li> <li>SGD, momentum=0.9, weight decay=1e-4</li> <li>PolyLR with 0.01 starting LR</li> <li>75k iterations</li> <li>used Apex with mixed precision</li> <li>trained on 4 GPUs with SyncBN and DDP. Batch size 16x4 for B5, 12x4 for B7.</li> <li>label smoothing - for me label smoothing with 0.01 eps was optimal on the public leaderboard. Also it allowed not to use clipping at all.
</li> </ul> <h3>Hardware</h3> <p>Unfortunately I did not ask for AWS credits from the organizers; I hoped that my home workstations would be enough - I was wrong! I have two devboxes: one with 2x Titan Vs, the other with 4x Titan Vs.</p> <p>Huge thanks to the hosting provider hostkey (<a href="https://www.hostkey.com/">https://www.hostkey.com/</a>) that gave me a grant <a href="http://landing.hostkey.com/grants?_ga=2.3307699.2051560741.1587714719-1038061670.1587714719">http://landing.hostkey.com/grants?_ga=2.3307699.2051560741.1587714719-1038061670.1587714719</a>! I got a devbox with 4x 2080Ti for two months! The additional 4 GPUs helped me a lot to iterate faster, as my models required all of them to train a single model. </p> <h3>Things that I tried but did not work well enough</h3> <ul> <li>Metric learning - was worse than a simple classifier</li> <li>UNet with SSIM difference prediction - got the same LB score.</li> <li>Self-supervision with blending real faces. Extracted the convex hull for a real face, warped/resized/compressed the face and then blended it back and changed the label to fake. Models learned that quickly but it did not give any boost on validation/public leaderboard.</li> <li>Self-supervision with rotations like here <a href="https://stanford-cs221.github.io/autumn2019-extra/posters/110.pdf">https://stanford-cs221.github.io/autumn2019-extra/posters/110.pdf</a> - did not improve my score</li> <li>Temperature scaling - risky, no big difference. Decided not to use it. </li> <li>Using steganalysis. Did not improve validation score.</li> <li>RNN - I guess it would work, but due to the CPU bottleneck it would be hard to extract multiple chunks in the kernel. My first model with RNN + Resnet34 scored around 0.4 on public and a simple classifier was better. </li> </ul> <h3>No External data</h3> <p>That was a huge concern for me because basically nothing was allowed. I decided to be safe and did not use any external data.</p> <h3>Github</h3> <p><a href="https://github.com/selimsef/dfdc_deepfake_challenge">https://github.com/selimsef/dfdc_deepfake_challenge</a></p> <p>Yes, because I applied different levels of image compression augmentation after that. </p> <p>I used undersampling as described in the post: <code>fakes.sample(n=num_real, replace=False, random_state=seed)</code>. Class weights did not work for me.</p> <p>No, actually I think that models learn to detect FWA by default provided that the margin is large enough and there is a lot of training data. Aggressive augmentations help with that as well.</p> <p>Yes, this ensemble only helps to calibrate probabilities. Using different resolutions would be better. I trained B7 in the last days of the challenge and did not have time to test different resolutions; that's why I decided to just ensemble multiple checkpoints trained on differently sampled data. </p>
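<p>For reference, the resolution-dependent detector scaling described in the Face-Detector section can be written as a small helper. This is only an illustrative sketch - the function name is mine and the repository may implement it differently:</p> <pre><code>def detector_resize_factor(width, height):
    # scale factor applied to frames before running MTCNN, based on the wider side
    wider_side = max(width, height)
    if wider_side &lt; 300:
        return 2.0   # upscale small videos
    if wider_side &lt;= 1000:
        return 1.0   # no resize
    if wider_side &lt;= 1900:
        return 0.5
    return 0.33      # heavy downscale for very large videos
</code></pre>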
Deepfake Detection Challenge
27th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Approach</h2> <p>We used a simple frame-by-frame classification approach.</p> <h2>Face-Detector</h2> <p>We used a simple MTCNN detector. Input size for the face detector was scaled down by 50% to significantly reduce inference time and memory usage without much loss in detection accuracy. We then used the bounding box to extract a square-shaped face crop, keeping the image center coordinates and the height the same but adjusting the width.</p> <h2>Final Ensemble</h2> <p>Our final ensemble consists of B5/B6/B7 single models. <a href="/ratthachat">@ratthachat</a> will give further details on model training in the comment section.</p> <h2>Models</h2> <p>We mostly used EfficientNets as we did not have much success with other models (one Xception was used in the worse-performing final ensemble). We played around with input sizes and stuck with 224x224 as most cropped faces in the training dataset were around that size.</p> <p>We started with B0 - B2 with some success, but our final solutions of B5-B7 outperformed them.</p> <h2>Memory Usage and Inference Time</h2> <p>Due to memory constraints we used a batch generator that extracts and predicts only on two frames from a video at once. This also enabled us to use parallel inference to significantly reduce run-time and predict on every 27th frame in the end.</p> <h2>Things that may have worked</h2> <p>We expected the organizers would introduce organic videos and different deepfake methods in the test set. That's why we constantly evaluated our models on organic videos using predictions and Grad-CAM, and used face crops without a big margin, which we thought might generalise better. We used a margin of 14 pixels from the MTCNN face detector. </p> <p>The final prediction was a simple average of all predictions per video. We had better results splitting predictions per actor, but due to time reasons, and as we expected the organizers to introduce videos with &gt; 3 actors and multiple deepfaked faces per video and did not want to run into these edge cases in the test set, we decided against the idea.</p> <p>Using more frames per video was superior to using TTA in our experiments. We clipped predictions to 0.01 and 0.99.</p> <h2>Acknowledgments</h2> <p>Thank you to <a href="/ratthachat">@ratthachat</a> <a href="/hmendonca">@hmendonca</a> <a href="/hmmoghaddam">@hmmoghaddam</a> for the incredible experience, and congratulations to <a href="/hmmoghaddam">@hmmoghaddam</a> on becoming a Kaggle Expert. </p> <p>Thank you to the organizers for hosting this competition.</p>
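<p>As a rough illustration of the square face crop described above (same center and height, adjusted width) combined with the 14-pixel margin, here is a hedged sketch; the helper name, the clamping to the frame borders, and how exactly the margin interacts with the square side are assumptions, not the team's code.</p> <code>
import numpy as np

def square_crop(frame: np.ndarray, box, margin: int = 14):
    """Cut a square-ish face patch: keep the box center and height, adjust the width."""
    x1, y1, x2, y2 = box                     # MTCNN box in pixel coordinates
    cx = (x1 + x2) / 2.0                     # keep the horizontal center
    side = (y2 - y1) + 2 * margin            # square side follows the box height plus margin
    half = side / 2.0
    top = int(max(y1 - margin, 0))
    bottom = int(min(y2 + margin, frame.shape[0]))
    left = int(max(cx - half, 0))
    right = int(min(cx + half, frame.shape[1]))
    return frame[top:bottom, left:right]
</code>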
Deepfake Detection Challenge
39th Private LB: small and simple
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used Adam as the optimizer and reduced LR on plateau (reduction factor = 0.1). In order to speed things up I just used 1 of the 5 folds for local validation (folders 41 - 49). This correlated reasonably well with the leaderboard. However, I was wary of both the LB and local validation because I thought they were giving too positive a picture... Hence I tried to underfit them.</p> <p>For image screening, this was automated. I extracted face crops and face-part locations using Blazeface. In some instances it couldn't find a face, or it found the face but couldn't identify both eyes, etc. I screened the videos based on this: for a video to be included in training, all of the frames had to have at least one face, as well as all face parts.</p> <p>Well done! The simple things that I think made a difference for me were ensembling on different folds (rather than different seeds), and image sizes.</p> <p>Congratulations to everyone, and I hope that those who had scores that errored out get that sorted… Very frustrating.</p> <p><strong>My approach in a nutshell - keep it simple and write off the hard deep fakes:</strong> </p> <p>I figured that a complex model would not get these right anyway, and would end up messing up the easy ones too because it would be prone to overfitting. I ended up 39th on the private LB (0.477); around 100th on the public LB (0.328).</p> <p><strong>Details:</strong></p> <p>My main worry was that models were not going to generalise well, primarily because of the training data we had available to us. My reading of the rules was that any external data was at best in a grey area, and more probably out of bounds, so I only used the data given to us.</p> <p>I built as simple and as small a model as possible, knowing that this wouldn’t be able to detect hard deep fakes, but figuring that it might detect the easier ones more robustly.</p> <p><strong>Dataset:</strong></p> <p>I used @humananalog's Blazeface solution to save off faces in 32 frames from each video to the hard drive (evenly spaced throughout the video). I screened the videos, only allowing training cases where all of the face parts (eyes, nose, mouth, etc.) were visible in all frames. This reduced the training set by c.25% and I think this was important in getting the models to converge properly. </p> <p><strong>Sampling:</strong></p> <p>I sampled 50-50 real/fake videos in each epoch (all real videos + 1 sampled corresponding fake, each epoch choosing a random fake). Once a video was sampled, I randomly sampled 1 frame from the 10 frames with the highest confidence in that video. Overall, I’m not sure how much the sampling scheme affected the score. It didn’t have a huge impact on my validation set or the public leaderboard, but maybe it counted for something in the private leaderboard.</p> <p><strong>Augmentation:</strong></p> <p>Random horizontal flips, jpeg compression, brightness contrast. 
Randomly selecting frames as outlined above also did some of this job.</p> <p><strong>Models:</strong></p> <p>I used EfficientNet-b0 as it was the smallest model that gave reasonable results.</p> <p><strong>Resolution:</strong></p> <p>I used three different input sizes (separate models for each): 180, 196, 224. I used smaller rather than larger resolutions in a bid to make things more robust (reasons outlined above).</p> <p><strong>Ensemble:</strong></p> <p>Each model was fit over 5-fold samples (folder split). The end result averaged these.</p>
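<p>A hedged sketch of what the folder-wise fold split and the final averaging could look like in code; assigning folds as contiguous blocks of ten DFDC training folders, and averaging all fold/resolution models equally, are assumptions made for illustration rather than the author's exact setup.</p> <code>
import numpy as np

def assign_fold(folder_id: int) -> int:
    """Map a DFDC training folder (0-49) to one of five contiguous folds."""
    return folder_id // 10          # folders 0-9 -> fold 0, ..., 40-49 -> fold 4

def ensemble_video_probability(per_model_probs) -> float:
    """Average per-video fake probabilities over all fold/resolution models."""
    return float(np.mean(per_model_probs))
</code>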
Deepfake Detection Challenge
3rd place solution and code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Summary</h1> <p>My solution consists of three EfficientNet-B7 models (I used the Noisy Student pre-trained weights). I did not use external data, except for pre-trained weights. One model runs on frame sequences (a 3D convolution has been added to each EfficientNet-B7 block). The other two models work frame-by-frame and differ in the size of the face crop and augmentations during training. To tackle the overfitting problem, I used a mixup technique on aligned real-fake pairs. In addition, I used the following augmentations: AutoAugment, Random Erasing, Random Crops, Random Flips, and various video compression parameters. Video compression augmentation was done on-the-fly. To do this, short cropped tracks (50 frames each) were saved in PNG format, and at each training iteration they were loaded and reencoded with random parameters using ffmpeg. Due to the mixup, model predictions were “uncertain”, so at the inference stage, model confidence was strengthened by a simple transformation. The final prediction was obtained by averaging the predictions of models with weights proportional to confidence. The total training and preprocessing time is approximately 5 days on DGX-1.</p> <h1>Model and code</h1> <p>You can use my <a href="https://github.com/NTech-Lab/deepfake-detection-challenge">code</a> to reproduce the result.</p> <h1>Key ingredients</h1> <h2>Mixup on aligned real-fake pairs</h2> <p>One of the main difficulties of this competition is severe overfitting. Initially, all models overfitted in 2-3 epochs (the validation loss started to increase). The idea, which helped a lot with the overfitting, is to train the model on a mix of real and fake faces: for each fake face, we take the corresponding real face from the original video (with the same box coordinates and the same frame number) and do a linear combination of them. In tensor terms it is <br> <code>input_tensor = (1.0 - target) * real_input_tensor + target * fake_input_tensor </code> where target is drawn from a Beta distribution with parameters alpha=beta=0.5. With these parameters, there is a very high probability of picking values close to 0 or 1 (pure real or pure fake face). You can see the examples below: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2Fb2c49a9ba741584da43e34306ea65505%2Fmixup_example.jpg?generation=1592040431827143&amp;alt=media" alt=""> Due to the fact that real and fake samples are aligned, the background remains almost unchanged on interpolated samples, which reduces overfitting and makes the model pay more attention to the face.</p> <h2>Video compression augmentation</h2> <p>In the paper [1] it was pointed out that augmentations close to degradations seen in real-life video distributions were applied to the test data. Specifically, these augmentations were (1) reduce the FPS of the video to 15; (2) reduce the resolution of the video to 1/4 of its original size; and (3) reduce the overall encoding quality. In order to make the model resistant to various parameters of video compression, I added augmentations with random parameters of video encoding to training. 
It would be infeasible to apply such augmentations to the original videos on-the-fly during training, so instead of the original videos, cropped (1.5x areas around the face) short (50 frames) clips were used. Each clip was saved as separate frames in png format. An example of a clip is given below: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2F621e97ea700ccd2e6930ca18835936f0%2Fclip_example.jpg?generation=1592040594526237&amp;alt=media" alt=""> For on-the-fly augmentation, ffmpeg-python was used. At each iteration, the following parameters were randomly sampled (see [2]): - FPS (15 to 30) - scale (0.25 to 1.0) - CRF (17 to 40) - random tuning option</p> <h2>Model architecture</h2> <p>As a result of the experiments, I found out that the EfficientNet models work better than others (I checked ResNet, ResNeXt, SE-ResNeXt). The best model was EfficientNet-B7 with Noisy Student pre-trained weights [3]. The size of the input image is 224x192 (most of the faces in the training dataset are smaller). The final ensemble consists of three models, two of which are frame-by-frame, and the third works on sequence.</p> <h3>Frame-by-frame models</h3> <p>Frame-by-frame models work quite well. They differ in the size of the area around the face and augmentations during training. Below are examples of input images for each of the models: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2F6b85a72e94d2d33b9f92393ff1ab702a%2Ffirst_and_second_model_inputs.jpg?generation=1592040647615387&amp;alt=media" alt=""></p> <h3>Sequence-based model</h3> <p>Probably, time dependencies can be useful for detecting fakes. Therefore, I added a 3d convolution to each block of the EfficientNet model. This model worked slightly better than similar frame-by-frame model. The length of the input sequence is 7 frames. The step between frames is 1/15 of a second. An example of an input sequence is given below: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2Fb92e6241ef295c6f719fd4987144c58f%2Fthird_model_input.jpg?generation=1592040677053674&amp;alt=media" alt=""></p> <h2>Image augmentations</h2> <p>To improve model generalization, I used the following augmentations: AutoAugment [4], Random Erasing, Random Crops, Random Horizontal Flips. Since I used mixup, it was important to augment real-fake pairs the same way (see example). For a sequence-based model, it was important to augment frames that belong to the same clip in the same way. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2F1a8e42e972ea9a755cd29ef49217ffb2%2Faugmented_mixup.jpg?generation=1592040717314961&amp;alt=media" alt=""></p> <h2>Inference post-processing</h2> <p>Due to mixup, the predictions of the models were uncertain, which was not optimal for the logloss. To increase confidence, I applied the following transformation: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F409174%2F747b96390d72602fec15438d627e92ed%2Fpred_transform.jpg?generation=1592040745015626&amp;alt=media" alt=""> Due to computational limitations, predictions are made on a subsample of frames. Half of the frames were horizontally flipped. The prediction for the video is obtained by averaging all the predictions with weights proportional to the confidence (the closer the prediction to 0.5, the lower its weight). 
Such averaging works like attention, because the model gives predictions close to 0.5 on poor quality frames (profile faces, blur, etc.). </p> <h2>References</h2> <p>[1] Brian Dolhansky, Russ Howes, Ben Pflaum, Nicole Baram, Cristian Canton Ferrer, “The Deepfake Detection Challenge (DFDC) Preview Dataset” <br> [2] <a href="https://trac.ffmpeg.org/wiki/Encode/H.264">https://trac.ffmpeg.org/wiki/Encode/H.264</a> <br> [3] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le, “Self-training with Noisy Student improves ImageNet classification” <br> [4] Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, Quoc V. Le, “AutoAugment: Learning Augmentation Policies from Data”</p>
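<p>For readers who want to see the aligned-pair mixup from the write-up above as code, here is a minimal PyTorch-style sketch; the blend formula follows the one quoted above, while the function name and per-pair Beta sampling are illustrative rather than the author's exact implementation.</p> <code>
import numpy as np
import torch

def aligned_mixup(real_input_tensor: torch.Tensor,
                  fake_input_tensor: torch.Tensor,
                  alpha: float = 0.5):
    """Blend an aligned real/fake crop pair; the blend weight is also the soft label."""
    target = float(np.random.beta(alpha, alpha))   # alpha=beta=0.5 favors values near 0 or 1
    input_tensor = (1.0 - target) * real_input_tensor + target * fake_input_tensor
    return input_tensor, target                    # target=0 -> pure real, target=1 -> pure fake
</code>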
Deepfake Detection Challenge
43rd place private LB solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is a brief description of the solution that led us to the 43rd place on the private LB! The core idea is described in our paper "Video Face Manipulation Detection Through Ensemble of CNNs" available on <a href="https://arxiv.org/pdf/2004.07676.pdf">arXiv</a>. To reproduce the paper results, please refer to our <a href="https://github.com/polimi-ispl/icpr2020dfdc">GitHub repository</a>. A Kaggle notebook for inference is also now available <a href="https://www.kaggle.com/nicobonne/43-rank-ispl-ensamble-10-model">here</a>.</p> <h1>Model</h1> <p>We started from EfficientNetB4 and tweaked it a bit by adding an attention mechanism in the middle of its chain of convolutional blocks. We call it EfficientNetB4Att.</p> <h1>Training</h1> <p>We resorted to two different training paradigms, end-to-end and Siamese training with triplet loss, both considering frames as samples.</p> <p>We then trained 5 instances of each model in a 5-fold strategy based on folders. For each fold, we selected 40 consecutive folders for training and the remaining 10 for validation. There was no overlap between the folds. By doing this we ended up with 10 models, 5 trained in an end-to-end fashion and 5 trained in a Siamese fashion.</p> <p>During training, we considered only frames with one face, keeping the best face found by Blazeface. As augmentation we used additive noise, change of saturation, brightness, downscale and jpeg compression. We trained using Adam for 20k iterations max (more details on this in the <a href="https://arxiv.org/pdf/2004.07676.pdf">paper</a>).</p> <h1>Inference</h1> <p>At inference time, we considered 72 frames per video and looked at all the faces found by Blazeface, keeping only those with a score above a certain threshold. In case a frame had more than one face above the threshold, but with discordant scores, we took the maximum score among them. The rationale is that if we have multiple faces and just one face is fake, we want to classify the frame as fake. We then averaged the scores of all the networks and the scores of all the frames of the video, and computed the sigmoid. You can look into the inference code in detail <a href="https://www.kaggle.com/nicobonne/43-rank-ispl-ensamble-10-model">here</a>.</p> <p>What a journey! Big big thanks to all my teammates from the <a href="http://ispl.deib.polimi.it/">Image and Sound Processing Lab (ISPL)</a> of Politecnico di Milano. @unklb197 @edoardodanielecannas @saramandelli @bestagini</p> <p>Thank you! We initialize the architecture with ImageNet weights and then we add the attention layer.</p> <p>That’s a good point, we don’t have any experimental evidence of what vanilla EfficientNet is considering. We’ll look into it, thank you.</p>
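<p>A hedged sketch of the inference aggregation described above (take the maximum score over faces in a frame, average over frames and over the ten networks, apply the sigmoid at the end); the nested-list input format and the function name are assumptions made for illustration.</p> <code>
import numpy as np

def video_score(logits_per_net):
    """
    logits_per_net: list over networks of lists over frames of per-face logits
    (faces are assumed to be already filtered by the detector threshold).
    """
    per_net_means = []
    for net_frames in logits_per_net:
        # If several faces pass the threshold, keep the most "fake-looking" one per frame.
        frame_logits = [max(face_logits) for face_logits in net_frames if face_logits]
        per_net_means.append(np.mean(frame_logits))
    # Average over networks, then squash the aggregated logit into a probability.
    return 1.0 / (1.0 + np.exp(-np.mean(per_net_means)))
</code>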
Deepfake Detection Challenge
5th (from 20th) place approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Ha, thanks, and done :)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3108186%2Fc48aafd2dcdc7ff90d51c894946eb8d3%2Fapproach.PNG?generation=1585749236643739&amp;alt=media" alt=""></p> <p><a href="https://james.dev/img/approach_large.PNG">Full size/res picture available here.</a></p> <h1>Intro</h1> <p>Our solution is largely based around 3D CNNs, which we felt would generalise well to 'different' methods of deepfakes. Instead of focusing on specific pixel patterns in frames, we hope they might identify temporal problems like the often tried-and-failed 2D CNN-&gt;RNN models, but with a much lower tendency to overfit. The 3D models have similar numbers of parameters to 2D models, yet their kernels also have a 'depth', so they are relatively modest in their ability to fit in the 2D plane.</p> <p>Only a month ago we had a solution using a few 3D CNNs, a 2D CNN-&gt;LSTM and a 2D CNN, and we were 3rd on the leader board. However, we found it hard to keep up over the last couple of weeks and have dropped to 22nd.</p> <p>Fingers crossed for the private leader board result, and good luck everyone!</p> <h1>Face extraction</h1> <p>The basis of the pipeline is the face extractor. This was written within the first week of the competition and since then it's undergone only minor tweaks. In summary, every nth frame (we settled on 10) is passed through MTCNN. The bounding box co-ordinates are then used to set a mask in a 3D array. Faces which are contiguous (overlapping) in 3D are assumed to be a single person's face moving through space and time. We then extract a bounding box which includes this entire face over time and create a video from this region of interest, including every frame (not just every 10th). </p> <p>One nice thing about this method, even ignoring the video aspect, is that it greatly reduced false positives, because the 'face' had to be present for a long period of the video to count as a face.</p> <p>Separately, we use the bounding boxes of MTCNN for a traditional 2D CNN inference.</p> <h1>3D CNNs</h1> <p>We trained 7 different 3D CNNs across 4 different architectures (I3D, 3D ResNet34, MC3 &amp; R2+1D) and 2 different resolutions (224 x 224 &amp; 112 x 112). The validation losses of the 3D CNNs ranged between 0.1374 and 0.1905. I initially struggled to fit models with spatial-level augmentation, finding it was so disruptive that the model struggled to tell real apart from fake. I had therefore been using only pixel-level augmentation such as brightness (and Hflip) and some random cropping. However, Ian developed what we hope is a very successful 3D cutmix approach, which allowed models with good CV scores to be trained and added to the ensemble.</p> <p>To put the benefits of ensembling with these models into context, when we last tried the I3D model (lowest validation loss) alone we ended up with a public LB score of 0.341. With ensembling we’ve improved to 0.253. 
</p> <p>Both the 3D CNNs and the 2D CNNs were trained with AdamW (or sometimes Ranger) with a 1-cycle learning rate scheduler.</p> <h1>2D CNNs</h1> <p>We ended up including a single 2D CNN in the ensemble. It was trained with quite aggressive augmentation. Ian settled on SE-ResNeXT50 for the final model, though we tried multiple others including Xceptions and EfficientNets.</p> <h1>Things that didn't work, or didn't work enough to be included.</h1> <ul> <li><p>Using a series of PNGs for videos instead of MP4s. I think this is interesting. We trained these models by generating videos using an identical face extraction pipeline and saving the faces as MP4s (~216,000 fake faces, ~42,000 real, 17Gb of video). Towards the end we wondered if saving the faces as MP4s led to a loss of fidelity, so we saved every frame of these videos as a PNG and tried retraining the networks that way. This took almost a week, produced over 500 Gb of data, and resulted in the 3D CNNs overfitting very early, even with reference to the validation data. It was a very time-consuming experiment that we still find surprising, given the PNG data is more like what the feature extractor is seeing at test time.</p></li> <li><p>Using fewer frames. This might seem obvious but it's a bit more nuanced. We only used faces for videos if they were contiguous over at least 30 frames, up to a maximum of 100. That 100 was set by time constraints. However, using fewer frames would also mean a face is able to move around less. We found in some videos someone would walk across the entire screen in a second. This meant that the face only took up a tiny proportion of the entire region of interest. We therefore tested reducing the maximum frame number to, say, 64 to ensure we still got a couple of seconds of video (we trained on chunks of 64 frames) but faces moved around less. However, this gave a worse LB score. We also tried more intelligent methods, such as using at least 30 frames, but then ‘stopping’ as soon as a face had moved &gt; 1 face diameter away from its start point. Again, this was worse (this was particularly surprising). It probably shows that deepfakes are more or less detectable through different stages of the video, and so more frames is always better.</p></li> <li><p>CNN -&gt; RNNs did work a <em>little</em>, but not enough to use. Before Ian joined and we became a team, my pipeline did include one of these, but with his better models we dropped it. This seems in contrast to a lot of people who seemed to have 0 success. I suspect this is because we trained these models with the MP4 files rather than image files (see the previous point).</p></li> <li><p>A segmentation network to identify altered pixels. Ian developed masks by 'diffing' the real and fake videos and trained a network to identify altered pixels. It had some success, but overfitting meant it had an LB score of ~0.5 and wasn't enough to provide benefit in ensembling.</p></li> <li><p>Skipping alternate frames as a form of test-time-augmentation to amplify deltas between frames certainly didn't help, and possibly hindered.</p></li> <li><p>Training a fused 3D/2D model which allowed the ensemble weighting to be 'learned' on validation videos did worse than a simple 50:50 pooling.</p></li> <li><p>Using more complicated methods of pooling predictions across faces than taking a mean. I previously thought the 'fakest' value should be used, for example, if we were using very high thresholds for detection.</p></li> <li><p>Some 3D models. 
For example, architectures such as 'slowfast' were very easy to train but overfitted profoundly, akin to a 2D CNN. Other networks such as MiCT we struggled to fit at all!</p></li> </ul> <p>Oh interesting! You are the only other person who I've seen use 3D CNNs. Did you use 3D in your final submission?</p> <p>We just used chunks 45-49 as validation. This is because a couple of these chunks seemed to contain errors in later frames, and because we only used the first n frames for validation, it made sense to use these for validation so every video could still be useful.</p> <p>It's interesting you only tried slowfast, as we found it was the only network (other than MiCT) which didn't work for this task!</p> <p>Thank you! Congrats to you too!</p>
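<p>As a rough sketch of the "contiguous in 3D" face grouping described in the face extraction section, the following greedy box-linking is one way it could be done; the overlap test, the greedy chaining, and the helper names are assumptions for illustration, not the authors' pipeline.</p> <code>
def boxes_overlap(a, b):
    """True if two (x1, y1, x2, y2) boxes intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def link_faces_into_tracks(detections):
    """
    detections: list (one entry per sampled frame) of lists of boxes.
    Greedily chains boxes that overlap the last box of an open track,
    approximating the "contiguous in 3D" grouping described above.
    """
    tracks = []
    for frame_boxes in detections:
        for box in frame_boxes:
            for track in tracks:
                if boxes_overlap(track[-1], box):
                    track.append(box)
                    break
            else:
                tracks.append([box])
    return tracks

def track_roi(track):
    """Union bounding box covering the whole track, used to cut one face video."""
    xs1, ys1, xs2, ys2 = zip(*track)
    return min(xs1), min(ys1), max(xs2), max(ys2)
</code>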
Deepfake Detection Challenge
67th Place Solution: Kaggle Kernals Only + Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have just updated this post with a brief write up! Thanks for the kind words everyone!</p> <p>This is a brief write-up, if you want to you can read my full ramblings here along with my code: <a href="https://github.com/GreatGameDota/Deepfake-Detection">https://github.com/GreatGameDota/Deepfake-Detection</a></p> <h1>TLDR: Ensemble for the win!</h1> <p>Once again an amazing Kaggle competition that I had a ton of pleasure participating in! I will like to thank Kaggle and the hosts for putting together this competition and working hard to make sure it went as smooth as it did!</p> <p>I will also like to shout out some great people who helped me a lot in the competition:</p> <p>Thanks to <a href="/phunghieu">@phunghieu</a> Hieu Phung for his amazing dataset that I used which can be found here: <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/128954#736780">https://www.kaggle.com/c/deepfake-detection-challenge/discussion/128954#736780</a></p> <p>Thanks to <a href="/unkownhihi">@unkownhihi</a> Shangqiu Li and team for all of the things they shared over the course of the competition. Most importantly the Mobilenet face extractor which can be found here: <a href="https://www.kaggle.com/unkownhihi/mobilenet-face-extractor-helper-code">https://www.kaggle.com/unkownhihi/mobilenet-face-extractor-helper-code</a></p> <p>Thanks to <a href="/humananalog">@humananalog</a> for his starter code notebook: <a href="https://www.kaggle.com/humananalog/binary-image-classifier-training-demo">https://www.kaggle.com/humananalog/binary-image-classifier-training-demo</a></p> <h2>Models</h2> <ul> <li>Xception</li> <li>EfficientNet B1</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3543139%2Fdf0bb9e27b1053f0638883ba4d0422a5%2Fmodel.png?generation=1587741613868657&amp;alt=media" alt=""></p> <h2>Dataset</h2> <p>Image resolution: 150x150 Mobilenet was used to extract faces for training and inference The dataset I used is public and can be found here: <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/134420">https://www.kaggle.com/c/deepfake-detection-challenge/discussion/134420</a></p> <h2>Augmentation</h2> <ul> <li>Horizontal flip</li> <li>Jpeg Compression</li> <li>Downscale</li> </ul> <h2>Training</h2> <ul> <li>Adam optimizer</li> <li>20 epochs</li> <li>Binary CrossEntropy Loss function</li> <li>ReduceLROnPlateau</li> <li>Save checkpoint based on validation LogLoss</li> <li>Folder-wise Validation split: 0-40 train, 41-49 validation</li> <li>Simple undersampling was used to balance the data</li> <li>1 frame was randomly picked from every video while training</li> </ul> <h2>Ensembling/Blending</h2> <p>I ensembled all the models using the geometric mean of their probabilities.</p> <h2>Final Submission</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3543139%2F63e59bfe9c96f5014bd4c42bab52849c%2Fscores.png?generation=1587741641643531&amp;alt=media" alt=""></p> <h2>What didn't work</h2> <ul> <li>LRCN (CNN + RNN)</li> <li>External data</li> <li>Post-processing/Clipping</li> </ul> <h2>Final 
Thoughts</h2> <p>This was another great Kaggle competition! I'm so happy I achieved such a great performance on a competition that a lot of people just skipped due to the amount of data. I'm also surprised I performed so well with only Kaggle kernels!</p> <p>I'm so excited to get my second ever competition medal and I'm even more excited because it is my first silver medal! I now graduate to Kaggle Competition Expert!</p> <h2>Some helpful Links</h2> <p>My baseline kernel: <a href="https://www.kaggle.com/greatgamedota/xception-classifier-w-ffhq-training-lb-537">https://www.kaggle.com/greatgamedota/xception-classifier-w-ffhq-training-lb-537</a> My submission kernel: <a href="https://www.kaggle.com/greatgamedota/69th-place-solution-mobilenet-inference/">https://www.kaggle.com/greatgamedota/69th-place-solution-mobilenet-inference/</a> Image dataset kernel: <a href="https://www.kaggle.com/greatgamedota/deepfake-detection-full-data">https://www.kaggle.com/greatgamedota/deepfake-detection-full-data</a></p> <p>As said in my write-up, each model varies in its architecture. In my finished code I cleaned up and deleted the commented-out batchnorm calls because my last experiments didn't use them. A version that still has the batch norms is my baseline notebook model.</p>
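<p>A minimal sketch of the geometric-mean ensembling mentioned above; the epsilon clipping is an added safeguard against zero probabilities, not necessarily part of the original kernel.</p> <code>
import numpy as np

def geometric_mean_ensemble(probabilities, eps=1e-7):
    """Combine per-model fake probabilities for one video via the geometric mean."""
    p = np.clip(np.asarray(probabilities, dtype=np.float64), eps, 1.0 - eps)
    return float(np.exp(np.mean(np.log(p))))
</code>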
Deepfake Detection Challenge
79st Place solution: A Hotchpotch of EfficientNets
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for the kind words!</p> <p>Thank you!</p> <p>👍 </p> <h3><strong>Introduction:</strong></h3> <p>Hi Kagglers! </p> <p>Congratulations to everyone who survived the leaderboard shakeup. Some of you probably expected that it would be hard to generalize to new videos and especially DeepFake videos “in the wild”. In essence, we tried to include as much model diversity in our ensemble as possible in order to survive a leaderboard shakeup.</p> <p>For us ( <a href="/koenbotermans">@koenbotermans</a> , <a href="/kevindelnoye">@kevindelnoye</a> and I) this competition was 3 months of blood, sweat and tears with ups and downs and a radical switch from Keras to Pytorch halfway in the competition. I hope the insights in this post will be helpful to other competitors.</p> <p>As many other competitors we also use a frame-by-frame classification approach for detected faces. We agonized over adding the audio data in our model or doing sequence modeling (e.g. LSTM cells), but ended up not experimenting with it. This gave us a lot of time to focus on creating datasets and training classification models.</p> <p>The final kernel that we used for our submission can be found here: <a href="https://www.kaggle.com/carlolepelaars/efficientnet2xb5-b6200-b6finetuned-b4-2xres-01clip">https://www.kaggle.com/carlolepelaars/efficientnet2xb5-b6200-b6finetuned-b4-2xres-01clip</a></p> <h3><strong>Data Preparation:</strong></h3> <p>Over the course of the competition we created several image datasets of face detections. The first baseline (ImagesFaces1) only contained the first face detection in a video using MTCNN as our face detector. Fine-tuning EfficientNetB5 on this dataset already gave us a pretty reasonable baseline (0.23083 local validation, 0.39680 on public leaderboard). Half-way we switched to RetinaFace as face detector as it had better performance and was faster. The first dataset after this switch (ImagesFaces3) contains 200K faces as training data. The final dataset (ImagesFaces6) featured 2M face detections from the training videos. Our final solution has both models that are trained on ImagesFaces3 and ImagesFaces6. It is arguable that 2M face detections is overkill, but luckily we had sufficient computational resources to experiment with it. In all datasets we took a 20% sample as validation data avoiding data leakage due to the limited number of actors. The datasets we created were balanced so we had 50% real and 50% fake faces.</p> <h3><strong>Face Detection:</strong></h3> <p>We started out with MTCNN as our face detector, but we thought it was too slow and that we could have better performance. Eventually we settled on RetinaFace as it had better performance and faster inference time.</p> <p>The MTCNN library we used is built on Tensorflow so naturally our classification models where based on Tensorflow/Keras. We did experiments using XCeptionNet and EfficientNetB0-B8. When we switched to RetinaFace we could only find a Pytorch implementation and as both Tensorflow and Pytorch allocate all GPU memory we felt forced to choose one framework for face detection and classification models. 
With 1.5 months left we therefore chose to go with RetinaFace and start experimenting with Pytorch models.</p> <h3><strong>Models and Ensembling strategy:</strong></h3> <p>Our final 81st place solution is a hotchpotch of EfficientNet Models. This haphazard combination was intentional to include diversity and to have a good chance at generalizing well to new data. The final models that were used:</p> <ol> <li>EfficientNetB6, 200x200 resolution, ImagesFaces6</li> <li>EfficientNetB5, 224x224 resolution, ImagesFaces3</li> <li>EfficientNetB5, 224x224 resolution, ImagesFaces6</li> <li>EfficientNetB6, 224x224 resolution, ImagesFaces6, finetuned for one epoch on our validation data.</li> <li>EfficientNetB4, 224x224 resolution, ImagesFaces6, data augmentation and label smoothing.</li> </ol> <p>All models are trained with a starting learning rate of 0.001 and using a “Reduce on Plateau” learning rate schedule with a patience of 2 and with a multiplier of 0.5. All models have a dropout of 0.4 before the final layer and group normalization so we don’t lose performance with small batch sizes (e.g. 20).</p> <p>We settled on a simple mean from all models for each video. We made 25 predictions per video on evenly spread frames. No weighting of models was done. The predictions were clipped between 0.01 and 0.99.</p> <p>In order to speed up the inference process we decreased the frame resolution by a factor of 2. Most original video frames were already high resolution so the face detector was still effective on a frame with a reduced resolution.</p> <h3><strong>Generalization:</strong></h3> <p>Throughout the competition we generally had a large correlation between our local validation scores and public leaderboard scores. A lower log loss on the local validation generally meant a better leaderboard score, but our final validation scores were extremely low (approx. 0.068 log loss). This means that we most likely had some data leakage in our training data, but we don’t know exactly where it was coming from.</p> <h3><strong>Data Augmentation:</strong></h3> <p>One thing I regret is that we underestimated the power of data augmentation for this competition. With only a month left we started using rotation (15 degrees) and flipping to augment the data, but looking at other public solutions we probably could have gotten a much higher score with more extensive data augmentation. </p> <h3><strong>What we would have liked to try but didn’t:</strong></h3> <ul> <li>LSTM cells</li> <li>Mixed precision (with NVIDIA DALI or apex).</li> <li>Include audio data and extract features using LSTM cells.</li> <li>Including ResNeXT101_wsl models. We trained a few models, but the weight files were too large to add it to the ensemble.</li> <li>Stochastic Weight Averaging (SWA).</li> <li>Fine-tuning “Noisy Student” weights (The Noisy Student paper was fairly new and not implemented in the libraries we used yet.)</li> <li>Training with Mixup.</li> <li>Taking the difference between frames and include it as a channel.</li> </ul> <h3><strong>What didn’t work:</strong></h3> <ul> <li>Taking the median of predictions for a video.</li> <li>Naïve postprocessing (Changing a prediction of 0.8 to 0.95, etc.)</li> <li>Clipping more than 0.01 (Sometimes there was a leaderboard improvement, but we decided it was too risky).</li> <li>Test Time Augmentation (TTA) with rotations. </li> </ul> <p>I hope you got some insights from this solution overview! 
Feel free to ask questions or give feedback on this approach!</p> <p>Thank you!</p> <p>Hi, we used the raw bounding box predictions without margin. This has probably been a weakness in our approach.</p>
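<p>A minimal PyTorch-style sketch of the training settings listed above (dropout of 0.4 before the final layer, starting LR of 0.001, "Reduce on Plateau" with patience 2 and multiplier 0.5); the 2048-dimensional feature size and the head-only optimizer are placeholders, not the team's code.</p> <code>
import torch
import torch.nn as nn

# Illustrative classifier head and schedule matching the settings described above.
head = nn.Sequential(nn.Dropout(p=0.4), nn.Linear(2048, 1))
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2
)

# After each validation pass:
# scheduler.step(val_logloss)
</code>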
Deepfake Detection Challenge
92th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>With the publication of the final results, we will share in this post the details of the machine learning pipeline that got us into the 92nd place (top 5% of all contestants).</p> <p>We want to share insight into what we found worked well and what did not for this challenge.</p> <h2>ML Pipeline</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2541996%2F559ac0932600a1a49a16c482b09fafc1%2Fpipeline_DFDC.png?generation=1592040705943795&amp;alt=media" alt=""> <em>Our ML pipeline, including randomly sampling frames from videos, cropping around the face, inference with a trained model, and averaging the predictions to make a prediction for the video.</em></p> <p>The first choice we had to make involved what to process: should we focus on the entire videos, or on individual frames sampled from the videos? This choice would affect the type of machine learning model we used, the computation time, and probably the final performance. After observing that many fake videos contained obvious facial manipulations and could be identified from still images, we decided early on to focus on detecting a deepfake from individual frames.</p> <p>We first created a balanced training set by sampling 10 frames from real videos and 2 frames per fake video. We then cropped these images to focus on the faces, used transfer learning to train a pretrained model on this face dataset, and finally for inference we used ensembling with several models and video frames.</p> <p>We reached the top 5% of participants thanks to a combination of choices we made throughout the ML pipeline, focusing on:</p> <ul> <li>face detection</li> <li>model selection</li> <li>generalization methods</li> <li>frame and model ensembling</li> </ul> <h2>Face Detection</h2> <p>Since the video manipulations focused on people’s faces, which were only a small part of the videos, it was essential to develop a good face detection system. Specifically, we sought a face detection system that would maximize the detection probability and minimize the false positive rate.</p> <p>We benchmarked and compared several different open source face detectors, and ended up testing two different approaches:</p> <ol> <li>Face detection with no false positives. We hypothesized that noise in the data created by frames wrongly identified as faces could adversely affect training. For this approach, we used a combination of two different face detectors: faced [1] and dlib [2]. By requiring that a face must be detected by both detectors, we reached 0.1% false positives, which came at a cost of a rather low detection rate of 60%.</li> <li>Balanced false positive rate and detection probability. For this approach, we used the BlazeFace [3] detector and immediately reached 97% detection rate and 7% false positives. However, we noticed that this detector performed poorly when the faces were far from the camera. 
By adding an additional step in which we created horizontal, vertical, and matrix sliding windows with overlapping cells, and then taking those windows to perform face detection, we reached a 99% detection rate with 8% false positives.</li> </ol> <p>We found that the second approach yielded better results on the Kaggle leaderboard, and then adopted it as our baseline for face detection. We parallelized this approach to make it faster and we were extracting frames from videos, detecting faces, and performing model inference at 25 frames per second in the Kaggle environment.</p> <h2>Choice of model</h2> <p>By choosing to focus only on frames as opposed to entire videos, we had reduced the challenge to an image classification problem and could rely on pre-existing methods. Specifically, we used transfer learning with Convolutional Neural Networks (CNNs) trained on the ImageNet classification task [4]. Although the ImageNet dataset and the Deepfake dataset are very different, all natural images share common features, and we expect that starting with a pretrained model allows it to focus on the specifics of the problem at hand.</p> <p>As we tested several models, we found a correlation between the model’s results on ImageNet and the validation loss on our dataset. As a result, we selected ResNext [5] and DenseNet [6] models.</p> <h2>Generalization methods</h2> <p>The biggest problem we had to solve in this DeepFake challenge was that of generalization, as the training data provided and the test data used for the leaderboard came from different distributions. Consequently, we found that there was often a significant gap between our validation score and the leaderboard score.</p> <p>We combined three different approaches to improve generalization:</p> <ul> <li>Regularization: we tried standard methods such as dropout, weight decay, and label smoothing. However, we found that these methods only yielded a small improvement.</li> <li>Adding more data: to increase the diversity of images in our training set, we added data from a different source. Specifically, we used the FaceForensics dataset [7], released in 2018, which is also focused on deepfake detection.</li> <li>Data augmentation: We synthetically created more data by randomly introducing slight variations to the existing images using translation, rotation, brightness variation, compression, and resolution reduction.</li> </ul> <p>We found that the data augmentation techniques led to the highest improvement in our score. On our final submissions, we only used data augmentation and regularization.</p> <h2>Ensembling</h2> <p>To make predictions on videos, we performed two types of ensembling:</p> <ul> <li>Frame ensembling: we randomly sampled frames from a video, performed face detection, made individual predictions for each face detected, and then used the average over all the predictions as our prediction for the video. We started out doing this averaging with 5 frames per video, but quickly realized the more frames we used, the better. 
In our final submission, we were taking the average over 100 frames per video.</li> <li>Model ensembling: since different models tend to learn slightly different patterns in the data, to achieve better generalization, we averaged the predictions made by several different ResNext and DenseNet models.</li> </ul> <p>The use of both types of ensembling significantly improved our score and made us shoot up about 100 places in the ranking.</p> <h2>What we did and did not try</h2> <p>There were a few other things we tried that ended up not making the cut in our final pipeline:</p> <ul> <li>We hypothesized that we may be missing out on important information captured in the temporal dimension of the video. We thus trained a Recurrent Neural Network (RNN) on top of the pre-trained Convolutional Neural Network (CNN) with sequences of frames as input. However, we found that our results were not very different from training just a CNN and averaging several predictions. We thus concluded that most information about the real or fake nature of the data could be found in individual frames.</li> <li>We hypothesized that simply averaging predictions made by a CNN over several frames of a video was suboptimal and that better results could be reached by training a separate network to aggregate the predictions. However, despite having a huge impact on our validation score, we found that this procedure led to overfitting to our train dataset and worse generalization to the test dataset.</li> </ul> <p>Some things we did not try but could have improved our score are:</p> <ul> <li>We knew from the start that around 5% of the videos were fake because of manipulations not of the image, but of the audio. Although we decided to focus on the visual fakes, extra gains could have been made, either by sticking with our approach and cleaning our train dataset to remove the fake audios or by building a more comprehensive model that could deal with both fake audios and fake visuals.</li> <li>Training a classifier with an auxiliary (but relevant) objective is known to often lead to improvement. For instance, we could have trained our model to not only classify the video frames into fake and real categories but also predict which pixels were most likely to have been altered. <ul><li>We expect that more data augmentations and fine tuning of our models and pipeline would have led to improved generalization. For example, we found after the competition had officially ended that a wider crop around the faces we detected would have led to a better score.</li></ul></li> </ul> <h2>Our results</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2541996%2F6c5a267127b0fe3e6ac6f75e452342ec%2Fscore_evolution.png?generation=1592041933312850&amp;alt=media" alt=""> <em>Evolution of our score (yellow), ranking in the leaderboard (blue) over time, and main technology driver behind score improvement (green banner)</em></p> <p>The plot above shows the evolution of our leaderboard score (in yellow, smaller is better) and of our ranking among competitors (blue, higher is better). Our score consistently improved over time, as we incrementally tested different approaches and made changes to our pipeline.</p> <h2>Acknowledgments</h2> <p>Thank you to our team members: <a href="/berlitos">@berlitos</a> <a href="/wrclements">@wrclements</a> <a href="/bastien7898">@bastien7898</a> <a href="/bennox">@bennox</a>. 
Thank you to the organizers for hosting this competition.</p> <h2>References</h2> <p>[1] <a href="https://github.com/iitzco/faced">iitzco/faced: 🚀 😏 Near Real Time CPU Face detection using deep learning</a> [2] <a href="http://dlib.net/">dlib C++ Library</a> [3] <a href="https://github.com/hollance/BlazeFace-PyTorch">hollance/BlazeFace-PyTorch: The BlazeFace face detector model implemented in PyTorch</a> [4] <a href="https://paperswithcode.com/sota/image-classification-on-imagenet">ImageNet Leaderboard | Papers with Code</a> [5] ResNext: <a href="https://arxiv.org/abs/1611.05431">https://arxiv.org/abs/1611.05431</a> [6] DenseNet: <a href="https://arxiv.org/abs/1608.06993">https://arxiv.org/abs/1608.06993</a> [7] <a href="https://github.com/ondyari/FaceForensics">ondyari/FaceForensics: Github of the FaceForensics dataset</a></p>
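<p>To illustrate the frame and model ensembling described above, here is a hedged sketch; <code>detect_and_crop_face</code> and <code>model.predict</code> are hypothetical placeholders for the team's face detector and trained CNNs, and the 0.5 fallback for face-less videos is an assumption.</p> <code>
import numpy as np

def predict_video(frames, models, n_samples=100, rng=None):
    """Sample frames, score each detected face with every model, average everything."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(frames), size=min(n_samples, len(frames)), replace=False)
    scores = []
    for i in idx:
        face = detect_and_crop_face(frames[i])     # hypothetical helper
        if face is None:
            continue
        scores.extend(model.predict(face) for model in models)  # placeholder model API
    return float(np.mean(scores)) if scores else 0.5
</code>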
Deepfake Detection Challenge
EDIT) Private 1089th Place Approach (BIG DROP from public 7th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We cropped faces with a 2x extra margin. But at train time, we cropped them again so that the extra margin was randomly chosen in [0.8, 1.2] and the random translation offsets for the two axes in [-0.2, 0.2]. Both are distributed as truncated normals. After some EDA, we thought that the detected crop box was too tight to contain the whole modified area in video frames. So after we changed our face detector from BlazeFace to MTCNN, we always used margins bigger than 1/2. We also wanted to test different margins but we didn't have enough time, so the analysis of the margin itself was not thorough (we changed multiple things at the same time when we experimented with our settings).</p> <p>EDIT) We shared our approach with concern about a shake-up before the private leaderboard was revealed. Unfortunately, we experienced a way bigger shake-up than we expected on the private LB. Since we have no objective evidence about our bad private score, we cannot specify which part was wrong in our submissions. So please take care while reading our approach.</p> <p>+) Congratulations to the winners and to those who overcame the shake-up. </p> <p>================ Our Sharing Before Private LB ===============</p> <h1>0. Intro</h1> <p>It was a great experience for us to participate in this competition, so we want to say thank you to the organizers of this challenge and to others who helped us a lot. </p> <p><a href="/pudae81">@pudae81</a> Your great introductory seminar about Kaggle and code competitions helped us a lot in joining this competition. We'll keep our fingers crossed for your journey to a new career. 🤞 </p> <p><a href="/humananalog">@humananalog</a> We couldn't have made it this far without the great work you shared. We really appreciate your sharing of awesome training and inference kernels. That's why we share our humble approach with the community, even if we're not sure whether it's overfitted to the public LB or not.</p> <h1>1. Overview of Our Approach</h1> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure1.png" alt="Method Overview"> <br> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure1.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure1.png</a></p> <p>The figure above shows our approach. We used MTCNN as the face detector and attention-based models as prediction models.</p> <h1>2. Face Detector : MTCNN</h1> <p>We used MTCNN to detect faces in video frames. We resized frames to 1280x720 before passing them to MTCNN. The parameters passed to MTCNN are - thresholds of [0.6, 0.7, 0.95] - min_face_size of (720/30)</p> <h1>3. Model Architecture</h1> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure2.png" alt="Model Architecture"> <br> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure2.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure2.png</a></p> <p>The model architecture we used is shown above. 
We first extract feature vectors from each face crop, then apply a self-attention-based model to the sequence of feature vectors to aggregate information. </p> <p>An ImageNet-pretrained EfficientNet-B4 was used as our face crop feature extractor. We froze the first 4 blocks due to our GPU memory constraints. </p> <p>We have two similar models to get a video prediction from the sequence of sampled face features.</p> <h3>3.2.1. Model 1</h3> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure3.png" alt="Model 1 figure"> <br> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure3.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure3.png</a></p> <p>Our first model uses self-attention between all sampled faces. The "self-attention module" in the figure contains KQV projection, self-attention, and a residual connection. A fully-connected layer then follows to get a prediction for each face crop. Finally, we take the max prediction value as the video prediction. </p> <p>The best score of this model on the LB was 0.24827 without any TTA.</p> <h3>3.2.2. Model 2</h3> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure4.png" alt="Model 2 figure"> <br> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure4.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure4.png</a></p> <p>As the figure shows, our second model first takes self-attention between faces in the same frame, then takes self-attention between all sampled faces. After that, the procedure is the same as for Model 1. </p> <p>The best score of this model on the LB was 0.25178 without any TTA.</p> <h1>4. Training Method</h1> <h2>4.1. Face Crops Preparation</h2> <ul> <li>sample 100 frames uniformly from each video.</li> <li>use MTCNN to detect faces in each frame.</li> <li>2x extra margin added to the face crop box (so the size is 9 times bigger in terms of area) for further augmentation at train time </li> <li>face aligned with five face landmarks.</li> </ul> <h2>4.2. Train/Valid Split</h2> <ul> <li><p>One of our teammates clustered all real videos so that no actor appears in videos from different clusters. After that, he split off some clusters as the validation set so that the validation set has videos of different resolutions. </p></li> <li><p>With this cluster-based real video split, the corresponding fake videos are assigned to the group of their original videos.</p></li> <li><p>Every train epoch, we used all real videos and sampled fake videos randomly to balance the number of real and fake samples.</p></li> </ul> <h2>4.3. Augmentation at train time</h2> <ul> <li>Sample 1 to 20 frames from the previously sampled 100 frames and load the corresponding face crops.</li> <li>Videowise (i.e. 
applied identically to all face crops) <ul><li>Quality augmentation based on the DFDC preview dataset paper <ul><li><a href="https://arxiv.org/abs/1910.08854">https://arxiv.org/abs/1910.08854</a></li></ul></li> <li>Various image transforms such as rotate, equalize, solarize, posterize etc.</li> <li>Random horizontal flip</li></ul></li> <li>Imagewise <ul><li>Random Crop <ul><li>Randomness occurs in the margin and the face-center translation</li></ul></li> <li>Resize Interpolation <ul><li>we use three different resize interpolations: bilinear, bicubic, nearest</li></ul></li></ul></li> </ul> <h2>4.4. Optimizer</h2> <p>We used the same optimizer settings for both models. - SGD with momentum 0.9 - Learning Rate Schedule - 5 epochs of linear warm-up from 0 to 0.001 - Cosine annealing from 0.001 to 0 through 35 epochs (total 40 epochs)</p> <h2>4.5. Further Regularization</h2> <ul> <li>weight decay: 0.0001 </li> <li>dropout layer right before the final fully connected layer: p=0.4</li> <li>label smoothing epsilon: 0.01</li> </ul> <h2>4.6. Validation losses of model 1 while training</h2> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model1.png" alt="Model 1 loss"> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model1.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model1.png</a></p> <h2>4.7. Validation losses of model 2 while training</h2> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model2.png" alt="Model 2 loss"> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model2.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model2.png</a></p> <h1>5. Inference Strategy</h1> <p>With our inference strategy below, we got 0.23086 on the LB.</p> <h2>5.1. Sample Faces</h2> <ul> <li>Sample 20 frames per video uniformly. </li> <li>Detect and crop at most 3 faces from each frame with the thresholds stated above.</li> </ul> <h2>5.2. Ensemble</h2> <p>We averaged the predictions from the two models.</p> <h2>5.3. Test Time Augmentation for each model</h2> <p>For each face detected, make 8 different crops with different margins and offsets - 2 of 8 : two different margins of 1.1 and 1.15 without offsets. - 2 of 8 : two different y-axis offsets of 0.15 and -0.15 with a margin of 1.15 - 4 of 8 : horizontal flips of the above 4 crops</p> <h1>6. Outro</h1> <p>Thank you for reading. All questions are welcome.</p> <p>We want our model to be able to aggregate information from similar faces so that it doesn't get confused by unmodified faces in the videos (or an image patch sampled by the face detector which is not a face). That's why we chose to use a self-attention mechanism and take the max prediction among them.</p> <p>The question about min_face_size will be answered later.</p> <p>To be honest, I'm not sure whether our models generalize well or not (I totally hope so). We've never tested our models on "organic videos with and without deepfakes".</p> <p><a href="/jaideepvalani">@jaideepvalani</a> I guess this paper would be helpful. 
<a href="https://arxiv.org/abs/1706.03762">https://arxiv.org/abs/1706.03762</a> our attention looks similar to scaled dot-product attention in this paper.</p> <p>As I wrote in the article, "self-attention module" in our figure includes - Query, Key, Value projection(from input) - Self-attention output with projected Q, K, V (dot-product attention) - Residual Connection (which means input + self-attention output is final output of this module)</p> <p>We didn't use decoder with attention mechanism in model 1 and 2.</p> <p>I think we should check these also. - Generalization of our work (private LB score) - Reproducibility (We couldn’t do the same experiment multiple times due to lacking time)</p> <p>One of my teammates generated those figures with MS PowerPoint.</p>
Deepfake Detection Challenge
Journey to 0.3. Public 64th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Nothing for preprocessing, just used the codes from authors. By 'it worked' I mean that it worked well for some deepfakes, but for majority it didn't. So I guess it can be used for some specific df method where the pixels change is significant on the face borders or eyes.</p> <p>Hi to all the competitors and organizers of DFDC, this was my first Kaggle competition and here I want to share the methods I've experimented with during the competition and share the details of how I've finally crossed the 0.3 border. </p> <p>Firstly I would like to write a list of methods that didn't work for me (or they worked but not as good as my final solution).</p> <h1>Classical methods</h1> <ul> <li><a href="https://arxiv.org/abs/1811.00661">Exposing Deep Fakes Using Inconsistent Head Poses</a></li> </ul> <p>Code: <a href="https://bitbucket.org/ericyang3721/headpose_forensic/src/master/">https://bitbucket.org/ericyang3721/headpose_forensic/src/master/</a> In few words this method uses difference between face and head poses. Firstly I've implemented it myself, then found the code from authors on bitbucket. Approach didn't work for the majority of cases. I guess it only works good for FaceSwap faces.</p> <p>Didn't test it against Public LB</p> <ul> <li><a href="https://arxiv.org/abs/1911.00686">Unmasking DeepFakes with simple Features</a></li> </ul> <p>Code: <a href="https://github.com/cc-hpc-itwm/DeepFakeDetection">https://github.com/cc-hpc-itwm/DeepFakeDetection</a> Based on analysis of high frequency components in fourier spectra. I've tested the method on a part of dfdc dataset, found out that it works, yet not as good to beat single CNN model.</p> <p>Didn't test it against Public LB</p> <h1>Deep Learning-based methods</h1> <ul> <li><a href="https://arxiv.org/abs/1811.00656">Exposing DeepFake Videos By Detecting Face Warping Artifacts</a></li> </ul> <p>Code: <a href="https://github.com/danmohaha/CVPRW2019_Face_Artifacts">https://github.com/danmohaha/CVPRW2019_Face_Artifacts</a> This was my first submission to the competition. I've used pretrained model provided by authors.</p> <p><strong>Public Score: 1.16260</strong></p> <ul> <li><a href="https://arxiv.org/abs/2001.07444">Detecting Face2Face Facial Reenactment in Videos</a></li> </ul> <p>I've liked this approach and it even improved my score at some point. I've used ResNet-18 models as stated in the paper and compared it to single ResNet-18 model that was trained on the same data. The improvement in public score was about 0.045. This is either because the approach really works and paying attention to different face regions really matters or because this is just a kind of ensembling (5 similar models, but trained on different image regions). Or maybe both, I don't know 😃. But this approach didn't work with EfficientNet models, which I've used later. 
</p> <p><strong>Public Score: 0.55683 (for single ResNet-18)</strong> <strong>Public Score: 0.51062 (for ResNets-18)</strong> <strong>Public Score: 0.38760 (for EFNets-4)</strong></p> <ul> <li><a href="https://arxiv.org/abs/1809.00888">MesoNet: a Compact Facial Video Forgery Detection Network</a></li> </ul> <p>Code: <a href="https://github.com/DariusAf/MesoNet">https://github.com/DariusAf/MesoNet</a> I only tested the pretrained models and didn't train it on the DFDC dataset.</p> <p><strong>Public Score: 1.54918</strong></p> <ul> <li><a href="https://arxiv.org/abs/1912.11035">CNN-generated images are surprisingly easy to spot... for now</a></li> </ul> <p>Code: <a href="https://github.com/peterwang512/CNNDetection">https://github.com/peterwang512/CNNDetection</a> I already shared my thoughts about this approach here: <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/134295#766094">https://www.kaggle.com/c/deepfake-detection-challenge/discussion/134295#766094</a></p> <p>Didn't test it against Public LB</p> <ul> <li><a href="https://engineering.purdue.edu/~dgueraco/content/deepfake.pdf">Deepfake Video Detection Using Recurrent Neural Networks</a></li> </ul> <p>This is the method that took the majority of my time; for some reason I thought it had to outperform a single CNN model. I spent several weeks experimenting with Conv-LSTM models. </p> <p>I first tried using a CNN model as a feature extractor (removing the last prediction layer) and training an LSTM model on those features.</p> <p>I also tried a CNN-LSTM model (similar to <a href="https://www.kaggle.com/unkownhihi/dfdc-lrcn-inference">https://www.kaggle.com/unkownhihi/dfdc-lrcn-inference</a>), and experimented with dropouts, number of parameters, training generators, undersampling, class imbalance, custom losses, augmentations, etc.</p> <p>I experimented with the number of frames (16 and 32) and with the gap between them (sequential frames from different parts of the video, every third frame, and 2-3 frames per second).</p> <p>Sadly, it didn't perform even close to a single CNN model, and I don't know why. Maybe I needed to experiment more, or I made some mistakes in the architecture or training process.</p> <p><strong>Public Score: 0.44083 (The best one across many experimental tries. Based on EFNet-4)</strong></p> <ul> <li><a href="https://arxiv.org/abs/1910.12467">Use of a Capsule Network to Detect Fake Images and Videos</a></li> </ul> <p>Code: <a href="https://github.com/nii-yamagishilab/Capsule-Forensics-v2">https://github.com/nii-yamagishilab/Capsule-Forensics-v2</a></p> <p>Trained it for a few epochs on a subset of the DFDC dataset (1:1 real/fake). I suspect it could perform much better with more experiments, but I didn't have time for it. This is one of the methods I regret not paying more attention to.</p> <p><strong>Public Score: 0.60641</strong></p> <ul> <li><a href="https://hal.inria.fr/hal-02140558/document">MARS: Motion-Augmented RGB Stream for Action Recognition</a></li> </ul> <p>Code: <a href="https://github.com/craston/MARS">https://github.com/craston/MARS</a></p> <p>This is the only method I didn't have time to fully experiment with. The problem is the preparation of optical flows for training. Even with CUDA-compiled OpenCV, it takes around 100 hours on my PC to extract x- and y-direction optical flows from 32 frames (the method requires sequences). I very much wanted to train and test this approach, but due to the time constraints I decided to stop. 
This is the second and last method I regret not testing.</p> <h1>My Solution</h1> <p><strong>Public score: 0.29477</strong></p> <p>The notebook is available here: <a href="https://www.kaggle.com/vostankovich/efnet6-ensemble-5-clip-1e-15">https://www.kaggle.com/vostankovich/efnet6-ensemble-5-clip-1e-15</a></p> <p>Now I will describe step by step how I improved a single CNN model and ended up with an ensemble. I'm somewhat disappointed that, after experimenting with other approaches, none of them came close to a single CNN model. But okay, let's continue.</p> <h3>Data preparation</h3> <p>Video reader: <a href="https://www.kaggle.com/humananalog/deepfakes-inference-demo">https://www.kaggle.com/humananalog/deepfakes-inference-demo</a> (Big thanks to <a href="/humananalog">@humananalog</a>) Face extractor: <a href="https://github.com/1adrianb/face-alignment">https://github.com/1adrianb/face-alignment</a></p> <p>I used this specific face extractor because I wanted to train my model on the face only, not the whole head. Since the library gives landmarks, I could crop the face region easily. After some time I realised that landmark extraction takes a lot of time (even though extracting 16-20 frames and processing them still fit within 8 seconds) and decided to keep only the detection part, removing everything related to landmarks. The detector used in the library is S3FD. I additionally used two more face detection models (MTCNN and BlazeFace) for better confidence. Using additional detectors helped me remove some false positives and recover some false negatives, especially for very dark images. Finally I settled on extracting 32 frames for each person from each video; the reason for that number was my many experiments with Conv-LSTM models. In the end I had around 4M .png images of 224x224 size, but I didn't use that many faces for training my final single CNN model. I randomly picked half of them (so I had 16 faces per person).</p> <h3>Training</h3> <p>Training EFNet-4 with Keras <code>class_weights</code> worked well for me, but I found a better and, more importantly, much faster way to train my model. The idea is to balance fake and real data in each epoch. Since the training data has roughly a 5:1 imbalance, each epoch we take all real examples but only a 0.2 fraction of the fake examples. With this approach, 5 epochs are enough for the model to see all examples, but I experimentally found that 6 epochs gave the best results. During the 6th epoch I picked the fake examples randomly. </p> <p>I was afraid that my model would overfit to real faces, but it seems it didn't; augmentations helped with this. A couple of words about augmentations: I used various albumentations functions, and during training I checked whether the image was too dark and, if so, applied brightness augmentation with a larger probability.</p> <p>If someone needs the generator, here is the code (I'm too lazy now to clean up my code and release it on GitHub). Note that here the <code>fake_images</code> list is 5 times larger than <code>real_images</code>. 
The number of images for each epoch was around 750K (350K real and 350K fake).</p> <pre><code>import random

import numpy as np
from PIL import Image


def generator(real_images, fake_images, preprocess_input_fn, batch_size=1,
              input_shape=(224, 224, 3), do_aug=False):
    i = 0
    k = 0
    # start with all real images plus the k-th chunk of fakes (same size as the reals)
    images = real_images + fake_images[k*len(real_images):(k+1)*len(real_images)]
    random.shuffle(images)
    while True:
        x_batch = np.zeros((batch_size, input_shape[0], input_shape[1], 3))
        y_batch = np.zeros((batch_size), dtype=np.single)
        for b in range(batch_size):
            if i == len(images):
                # epoch finished: keep all reals, move to the next chunk of fakes
                # (or sample fakes randomly once every chunk has been seen), reshuffle
                i = 0
                k += 1
                if k &gt; (len(fake_images)//len(real_images) - 1):
                    images = real_images + [str(i) for i in np.random.choice(fake_images, size=len(real_images), replace=False)]
                else:
                    images = real_images + fake_images[k*len(real_images):(k+1)*len(real_images)]
                random.shuffle(images)
            x = Image.open(images[i])
            x = np.array(x)
            if do_aug:
                x = augment(x)  # user-defined augmentation function
            y = 1. if '/fake/' in images[i] else 0.
            x_batch[b] = x
            y_batch[b] = y
            i += 1
        x_batch = preprocess_input_fn(x_batch)
        yield (x_batch, y_batch)
</code></pre> <p>After training several EFNet-4 models, I found that all of them performed best after 6 epochs. Then I started training models for the ensemble without validation. I used the RAdam optimizer with an initial learning rate of 0.001, changed it to 0.0005 for the 4th and 5th epochs, and used 0.0001 for the last (6th) epoch. The loss is binary cross-entropy.</p> <h3>Inference</h3> <p>In the last two or three days I decided to switch from my ensemble of 13 EFNet-4 models to an ensemble of 5 EFNet-6 models and trained them. The boost was around 0.005. The inference can be viewed in full in my released notebook here: <a href="https://www.kaggle.com/vostankovich/efnet6-ensemble-5-clip-1e-15">https://www.kaggle.com/vostankovich/efnet6-ensemble-5-clip-1e-15</a>.</p> <h3>Tricks that worked</h3> <ul> <li>Using <code>read_frames_at_indices()</code> along with <code>capture.get(cv2.CAP_PROP_FRAME_COUNT)</code> instead of <code>read_frames()</code>. I don't know why, but it works a little faster.</li> <li>Resizing the video to save time for the detector.</li> <li>Removing small detected objects. These are mainly false positives and don't contain faces. I used a threshold of <code>min(video.shape)//20</code>.</li> <li>Zooming out of the face. I didn't find a reason why, but it works better.</li> <li>Median averaging across frames. I was surprised, but it gave a boost of more than 0.01 compared to mean averaging.</li> </ul> <h3>Tricks that didn't work</h3> <ul> <li><strong>Clipping</strong>. It worked for single models, but for ensembles of 5+ models it always made things worse. For safety I still submitted two versions, one with a 0.01-0.99 clip and the second one without it (well, there was a clip, but a very small one of 1e-15).</li> <li><strong>TTA</strong>. I tried complex TTAs as well as a simple horizontal flip, but it turned out to be better to process more frames without TTA than to apply TTA to fewer frames.</li> <li><strong>Judge by one person</strong>. I tested the approach of predicting FAKE if any one of the persons in the video is fake: I sorted the multiple faces by x-coordinate, gave a prediction to each person, and predicted FAKE if any one of them was fake. It performs slightly worse than judging all persons equally.</li> </ul> <p><strong>Public score: 0.34 (for single EFNet-4 model)</strong> <strong>Public score: 0.30 (for ensemble of 13 EFNet-4 models)</strong> <strong>Public score: 0.295 (for ensemble of 5 EFNet-6 models)</strong></p> <p>Agree</p>
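<p>As a small illustration of the frame-aggregation tricks above (median instead of mean, plus the optional probability clip), here is a hedged sketch; the function and argument names and the tiny clip value are illustrative, not the author's exact code.</p> <pre><code>import numpy as np

def video_probability(frame_probs, clip_eps=1e-15):
    """Aggregate per-frame fake probabilities into one video-level probability.

    Median averaging across frames was reported above to beat the mean by more
    than 0.01 log loss; the tiny clip only guards against log(0) in the metric.
    """
    p = np.median(np.asarray(frame_probs, dtype=np.float64))
    return float(np.clip(p, clip_eps, 1.0 - clip_eps))

# usage sketch: probabilities from ~20 sampled frames of one video
# submission_prob = video_probability([0.10, 0.15, 0.92, 0.88, 0.11])
</code></pre>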
Deepfake Detection Challenge
Our solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we congratulate the winners of the Deepfake Detection competition and also would like to thank every competitor who worked hard to solve the deepfakes problem. We learned a lot throughout this amazing journey with you all, and we hope to see you again in the upcoming Deepfake challenges.</p> <p>Before intensely experimenting with deep neural network models, we decided to inspect the deepfake data with established computer vision techniques, to see if there's a simple cue which can reveal a deepfake, having these features in our mind for later feature-engineering purposes. Initially, we could observe indicators of deepfakes in frame inspections; such small artifacts were too hard to generalize for a novel solution. Although these experiments haven't resulted with great success, they provided a better understanding of deepfakes for us. Later on, we started experimenting with deep learnig based models, which then showed significant performance increase in detecting fake faces.</p> <h2>Face Extraction</h2> <p>For face extraction and processing purposes, we used FaceNet-Pytorch,</p> <ul> <li>Extracted landmarks and bounding boxes via MTCNN, then faces were cropped and saved in PNG format for high-quality image representation.</li> <li><p>Also, we used InceptionResNET features to differentiate individuals in the same frame from each other.</p> <h2>Data Preparation</h2></li> <li><p><strong>Margin</strong>: After square cropping faces with respect to bounding boxes, we’ve scaled each face with “1.2”, which is determined by experimentation.</p></li> <li><strong>Resize</strong>: Scaled faces resized to 224x224, with BICUBIC sampling.</li> <li><strong>Augmentations</strong>: We’ve experimented with “kornia” and “albumentations”.</li> <li>Used techniques such as Adaptive Histogram Equalization, Object Detection algorithms to increase detection robustness.</li> <li>Fine-tuned MTCNN for robust and rapid face detection.</li> <li>Used specific methods such as Compression, Edge-detection, Landmark Extraction, and Low-Quality to generate several datasets.</li> <li>Additionally extracted features such as Quality Metrics, Optical Flow, FFT, DoG, Glcm, Lbp</li> </ul> <h2>Augmentations and generalization</h2> <p>During our experiments, we have used different levels of augmentation techniques to increase robustness towards uncertainty in the wild. </p> <pre><code>def chain_aug(img): return A.Compose([A.HorizontalFlip(p=1), A.JpegCompression(quality_lower=70, quality_upper=100, p=1.), A.Blur(blur_limit=7, p=1.), A.IAASharpen(alpha=(0.2, 0.5), lightness=(0.5, 1.0), p=1.) 
], p=1)(image=img)['image'] </code></pre> <hr> <pre><code>def light_aug(img):
    return A.Compose([A.HorizontalFlip(p=1),
                      A.RandomBrightnessContrast(p=1),
                      A.RandomGamma(p=1),
                      A.CLAHE(p=1),
                      ], p=1)(image=img)['image']
</code></pre> <hr> <pre><code>def draw_ellipse(img):
    img = Image.fromarray(img)
    draw = ImageDraw.Draw(img)
    w, h = img.size
    cx, cy = w//2, h//2
    x0 = np.random.randint(0, cx)
    y0 = np.random.randint(0, cy)
    x1 = np.random.randint(cx, w)
    y1 = np.random.randint(cy, h)
    draw.ellipse((x0, y0, x1, y1), fill='black')
    return np.asarray(img)
</code></pre> <h2>Models</h2> <p>Early studies preferred backbones such as <strong>ResNet</strong>, <strong>Xception</strong>, and <strong>VGG</strong>, whereas we settled on <strong>EfficientNet</strong> as our backbone. The main idea with the backbone network was to train a simple binary classifier to discriminate between real and fake faces. Experiments with binary-classification models started from EfficientNet-b0 and scaled up to EfficientNet-b5. Here are a couple of things we tried:</p> <ul> <li>Low-res Effnet-b3 + Effnet-b5 (224x224): 0.30656</li> <li>Effnet-b4 (224x224): 0.38793</li> <li>Effnet-b2 (224x224): 0.43068</li> <li>Triplet loss models (224x224)</li> <li>An ensemble of XGB, LGB, CatBoost, KNN models</li> </ul> <p>For final submissions:</p> <ul> <li>An ensemble of EfficientNets: 0.31499 [<strong>FAILED</strong>] (although the scripts worked in Kaggle submissions as well as in tests, sadly, our best performing model failed during private leaderboard scoring.)</li> <li>Ensembles of EfficientNets + other models mentioned above: 0.33552</li> </ul> <h2>Training schedule</h2> <p>In our development environment, we used Nvidia-docker with a 4x 2080 Ti Lambda workstation.</p> <ul> <li>Balanced sampling of reals and fakes from the dataset.</li> <li>Batch size of 16 per GPU (4x), 625 iterations per epoch. </li> <li>12,500 iterations in total.</li> <li>Adam with weight_decay = 10e-6</li> <li>OneCycleLR, with lr set to min 3e-6 and max 3e-4</li> <li>FP16 training</li> </ul> <h2>Validation</h2> <p>At first, like many competitors, we measured our performance on the 400 public videos; after some time, we switched to folders 0-1-2. Later on, our team developed a validation split strategy based on similarity scores. </p> <p>During the competition, we were tracking:</p> <ul> <li><strong>Log loss</strong> score for each class</li> <li>Accuracy and confusion matrix scores</li> <li>Error and gradient map analysis with custom debug scripts</li> </ul> <h2>Things that didn’t work</h2> <ul> <li>Experimented with masks with different models such as U-Net.</li> <li>SVM and decision tree algorithms using EfficientNet bottleneck features as well as flattened face images, but we didn’t see any improvement in results.</li> <li>LightGBM with the Quality Metrics dataset.</li> <li>SVM with DFT, mentioned in Unmasking DeepFake with simple Features</li> <li>LightGBM with DoG + GLCM + LBP</li> <li>Temporal model with LSTM, for capturing temporal inconsistency</li> <li>Optical flow features; the DFDC dataset was too noisy for robust optical flow estimations.</li> <li>We experimented with an idea where we subtract frame n from n+1 </li> </ul> <p>Through the DFDC journey, we have had many challenges generalizing our models to deepfakes in the wild. Training with only the DFDC dataset, our models performed well on "face swap" auto-encoder based generation techniques, but failed to generalize to other variations of deepfakes and generation techniques. 
We are very excited for future contributions, and we are happy to announce a deployed version of our solution at <a href="https://deepware.ai">https://deepware.ai</a>. Once again, we would like to thank the amazing Kaggle community and especially the Facebook team and everyone who worked hard on the Deepfakes problem to provide novel solutions.</p>
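<p>The training schedule listed above (Adam, OneCycleLR between roughly 3e-6 and 3e-4, FP16, 625 iterations per epoch for 12,500 iterations) can be sketched as follows. This is only an illustration under stated assumptions, not the team's code: the tiny stand-in model and dummy data replace the EfficientNet classifier and balanced face-crop loader, the BCE loss and div_factor=100 are assumptions, and a CUDA GPU is assumed for mixed precision.</p> <pre><code>import torch
from torch import nn
from torch.cuda.amp import GradScaler, autocast
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-ins so the sketch runs; in the write-up these would be an
# EfficientNet binary classifier and a balanced real/fake face-crop loader.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1)).cuda()
data = TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, 2, (64,)))
loader = DataLoader(data, batch_size=16, shuffle=True)

total_steps = 12500                      # 625 iterations/epoch over 20 epochs
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, weight_decay=10e-6)
# div_factor=100 makes the starting lr 3e-4 / 100 = 3e-6 (one reading of the quoted minimum)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=3e-4,
                                                total_steps=total_steps,
                                                div_factor=100)
criterion = nn.BCEWithLogitsLoss()
scaler = GradScaler()                    # FP16 / mixed-precision training

step, done = 0, False
while not done:
    for images, labels in loader:
        images, labels = images.cuda(), labels.float().cuda()
        optimizer.zero_grad()
        with autocast():
            loss = criterion(model(images).squeeze(1), labels)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        scheduler.step()
        step += 1
        if step == total_steps:
            done = True
            break
</code></pre>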
Deepfake Detection Challenge
Public 2nd (Private 30th) Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's good to hear that thanks! Yeah you are right, but not random. I center cropped so that the final face width is x1.7.</p> <p>I remember about one week after</p> <p><a href="/limerobot">@limerobot</a> Thanks! You should have achieved great result if you started this comp a bit earlier 😂 </p> <p><a href="/jaideepvalani">@jaideepvalani</a> , no. Like I said in the post, I tried the dataloading function that guarantees fake and corresponding real to appear together in one batch, but it didn't help. I just did weighted loss with standard random dataloader.</p> <p>Averaging predictions of each frame didn't work when <strong>training</strong>. That is, I tried architecture that uses <code>n</code> frames and puts each of them into efficientnet. The output will be <code>(n_frames, n_channels, h, w)</code>. Then the model calculates statistical features such as mean, std over the <code>n_frames</code> axis. Then it concatenates them, and connects to dense layer to output single logit. But it didn't work out good. I did averaging when inferencing. I'll edit the post to avoid confusion.</p> <p>I'll update code after release of private leaderboard</p> <p>Long 4 months have passed. Applaud to participants who have been working hard and thanks to hosts who arranged this dataset and competition.</p> <p>In social aspect, automatic deepfake detection algorithm will be a must in near future. In personal aspect, this competition was held in a good timing for me. So dfdc took priority in my head for the last 4 months.</p> <p>Luckily, I somehow managed to attain 2nd place in public leaderboard. While waiting for the private leaderboard to be revealed, I'll share my solution. </p> <p>Note that all of the following are what <strong><em>I</em></strong> have done for this competition. (This post does not include what my teammates have done)</p> <p>For those who are curious, I'll first share the major methods that I came up with, which improved public lb score.</p> <h1>Ingredients for Public LB</h1> <h3>1. Augmentations</h3> <p>Extensive augmentations significantly improved public lb score and reduced cv-lb gap. I guess it makes model robust on varying data.</p> <h3>2. Face Margin</h3> <p>I set margin when cropping face, where <code>new_face_width=face_width*(1+margin)</code> (same with height). Margin of 0.5~0.7 worked significantly better than 0 and further reduced cv-lb gap. I guess model learns to detect some inconsistency between manipulated region and surrounding region.</p> <p>Tuning augmentations and face margin were the two main ingredients that boosted public lb significantly.</p> <h3>3. Model Capacity</h3> <p>Efficientnet-b4 did quite better than efficientnet-b0. With extensive augmentations, appropriate model size improved the score.</p> <h3>4. Multi-task Learning</h3> <p>I used Unet with classification branch for model architecture. I calculated pixel-wise difference between fake and corresponding original video, generated mask with pixels that have outstanding differences, and used it as segmentation part target. 
By using mask information, the model is able to know which part of the image is modified and I guess it helped boost performance.</p> <h3>5. Conservative Fix</h3> <p>Train data and test data have quite different distribution, as can be seen in cv-lb discrepancy. Multiplying constant (&lt;1) to the logit then taking sigmoid helped improve logloss in this situation.</p> <h3>6. Ensemble</h3> <p>Ensemble always helps.</p> <h3>7. N Frames when Inferencing</h3> <p>I used simple average of frames to get probability, so the more frames extracted from the video, the better, until it hits 9 hours restriction.</p> <p><br><br></p> <p>Now, I'll go into details of my journey. I'll separate it into 10 phases, each of where I concentrated on certain subject. (number inside parenthesis in the title is approximate public lb score at that phase)</p> <h1>Phase 1. Setting Pipeline (0.69)</h1> <p>This was my first encounter to video type vision task and deepfake detection. So at the very beginning of the competition, I started searching google for articles and papers, and also there were nice posts in kaggle discussion that introduced related papers.</p> <p>At first, I tried to go with network architectures that take care of temporal information, but I found out that they lack pretrained weights and are very heavy to train, so I decided to make baseline based on XceptionNet which was introduced in faceforensics paper( <a href="https://arxiv.org/abs/1901.08971" target="_blank">https://arxiv.org/abs/1901.08971</a> ).</p> <p>At my initial experiments, I tried to integrate audio part in the image model by using spectrogram as second input, but it didn't help that much and complicated the process, so I abandoned audio part. Anyway my teammate wanted audio information, so I set up the pipeline which uses ffmpeg with subprocess to extract audio stably in both train and public test videos. (I used PyAv first, but it wasn't stable with public test videos.)</p> <p>Reading video was the main bottleneck for the runtime, so I struggled with optimizing the process. At last, I selected method kindly provided at <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/122328" target="_blank">https://www.kaggle.com/c/deepfake-detection-challenge/discussion/122328</a> which was the fastest among those I tried. I extracted 20 frames evenly. Duration is 10 seconds so I extracted frame every 0.5s.</p> <p>Next step was to select fast but accurate face detector. I searched and found out that <code>retinaface</code> is the best available face detector evaluated on widerface dataset( <a href="http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html" target="_blank">http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html</a> ). Thereafter, I used its pytorch implementation <a href="https://github.com/biubug6/Pytorch_Retinaface" target="_blank">https://github.com/biubug6/Pytorch_Retinaface</a> .</p> <p>To save data loading time when training the network, I first processed each <code>original</code> videos and saved them as compressed joblib files at disk. I extract 20 frames from each video, detect face, crop them, and with additional meta information extracted, save them. The saved video will be a numpy array with shape <code>(n_people, n_detected_frames, height, width, 3)</code>. There were a lot of trivial issues and bugs when processing videos. 
I'll introduce some.</p> <ul> <li>Cropped frames don't have equal size -&gt; Just resized to match the maximum size within one video</li> <li>Some videos have 2 people -&gt; Set threshold to confidence score and if the confident score of second confident face detected is bigger than the threshold, confirm there are 2 people</li> <li>Detector finds face in some frames but doesn't in other frames within one video -&gt; just ignore no-detected frames (so the output can have &lt;20 frames)</li> <li>In some videos, detector fails to detect face -&gt; lower the confidence threshold for first confident face</li> </ul> <p>After processing all original videos, I used the bounding box information of original videos to process corresponding fake videos. I could use multiprocessing to speed up this process, since it didn't use cuda. After I processed all videos, I updated given metadata with the new extracted information.</p> <p>It took about 12 hours on my computer to process all of the train dataset. Then I trained the model with processed data.</p> <p>Inferencing in Kaggle notebook was another obstacle. Initially my notebook failed many times with varying error messages. I concluded there are some corrupt videos in public test set, so I introduced <code>try except</code> block and the submission went well thereafter.</p> <h1>Phase 2. UNet Architecture (0.6~)</h1> <p>In the Understanding Cloud Organization competition( <a href="https://www.kaggle.com/c/understanding_cloud_organization/discussion" target="_blank">https://www.kaggle.com/c/understanding_cloud_organization/discussion</a> ), I learned that multi-task learning with classification and segmentation helps improve model's classification score.</p> <p>I could generate masks by selecting pixels from the fake video that have large differences from corresponding real video. All of the training videos are compressed, so the masks were not perfect, but the strategy worked in my validation set and public test set (about 0.01 improvement in lb). </p> <p>By using mask information, the model was able to know which part of the image is modified and I guess it helped boost performance.</p> <p>I used UNet with classification branch. Also, since efficientnets are the state of the art architectures in imagenet, I used them as the encoder of Unet.</p> <p>At this moment, I was using first frame from each videos.</p> <h1>Phase 3. CV-LB Discrepancy (0.6~)</h1> <p>Then this huge problem came. My validation score and public lb score didn't correlate and had large gap (0.2 vs 0.6). It was the main problem in this competition from the start to the end. I managed to decrease the gap to ~0.05 at the end, but still don't fully understand how public test data differ from train data.</p> <h1>Phase 4. Augmentations (0.40)</h1> <p>I was only using horizontal flip since I thought augmentation will distort features that were introduced by manipulation. But it was wrong after all. Augmentation made the model robust on public test set.</p> <p>I added some basic augmentation such as shift, scale, rotate, rgbshift, brightness, contrast, hue, saturation, value. Also, referencing dfdc preview paper( <a href="https://arxiv.org/abs/1910.08854" target="_blank">https://arxiv.org/abs/1910.08854</a> ), I added JpegCompression and Downscale augmentation too.</p> <p>Local validation score improved a bit but the public score jumped massively to 0.4.</p> <h1>Phase 5. 
Large Models &amp; Learning Rate (0.33)</h1> <p>I was using <code>efficientnet-b0</code> and I switched to <code>efficientnet-b3</code>. It scored 0.36, then I switched to <code>efficientnet-b7</code> and it scored 0.33.</p> <p>By that time I thought the improvement was solely due to model capacity, but some of the improvement was actually due to learning rate - batch size relationship.</p> <h1>Phase 6. Experiments (0.3)</h1> <p>I thought I had some correlation with cv and lb by this time, so I tried a lot of experiment with local validation. I'll introduce some.</p> <h3>What Didn't Work for LB</h3> <ul> <li>Different learning rate between encoder - decoder</li> <li>Pad rather than resize when feeding the image</li> <li>CNN-LSTM architecture</li> <li>Construct model so that it uses statistical information across frames (ex. mean, std)</li> <li>Tune segmentation loss weight</li> <li>Guarantee fake-original pair to exist in each batch</li> <li>Use scse option in decoder</li> <li>Decrease face margin</li> <li>Use Vggface2 pretrained inceptionresnetv1 provided by FaceNet pytorch as encoder</li> <li>Validating with compress/downscaled validation set</li> <li>Split validation set with actors grouped by face encoding + KMeans</li> <li>One cycle learning rate scheduler</li> <li>Integrate retinaface confidence score as additional feature</li> <li>Stacking across frames with LGBM</li> </ul> <h3>What Worked for LB</h3> <ul> <li>Tune learning rate considering batch size (batch size 24 - lr 0.0002)</li> <li>RAdam + ReduceLROnPlateau</li> <li>Tune ratio of decreasing the learning rate when plateau (0.1-&gt;0.3)</li> <li>Use different frames of each video every epoch (#0-&gt;#7-&gt;#14-&gt;#1-&gt;#8-&gt;…)</li> <li>Increase resolution</li> <li>Use <code>conservative fix</code>: multiply constant(&lt;1) to the logits than take sigmoid. It helps improve logloss when the training and testing distributions differ.</li> </ul> <h1>Phase 7. More Augmentations (0.27)</h1> <p>After testing the tuned model on public lb, I once again found out cv and lb didn't correlate at most experiments I did. I needed to select cv vs lb at this time, and decided to value lb more, since the hosts implied that the private test set will be different than train set. So I started to validate experiments on public lb. Also, I decided to use b5 for my experiment, since I judged b0 had too low capacity when augmentations are applied.</p> <p>Then I remembered I got significant boost when I did more augmentations, so I added extensive augmentations including noise and blur, and increased the degree of augmentations. Score jumped to 0.269. I did harder augmentations but it didn't improve, so I stopped tuning augmentations.</p> <h1>Phase 8. More Experiments (0.27)</h1> <p>I did more experiments, but sadly nothing worked.</p> <ul> <li>Mixup fake-original pair</li> <li>Label smoothing</li> <li>Undersample rather than weighted loss</li> <li>Generate a mask using landmark information extracted by retinaface, and use it as an augmentation</li> <li>Use difference itself as a segmentation target, not (difference&gt;threshold)</li> </ul> <h1>Phase 9. Increase Face Margin (0.214)</h1> <p>I had an experience of increasing the face margin from 0.05-&gt;0.1, and it seemed to have increased overall public lb score. So I decided to increase face margin more.</p> <p>I experimented face margin of 0.5, and single b5 scored astonishing 0.222. 
So I increased face margin to 1.0 and started to tune how much I should crop the margins.</p> <p>With margin crop of 0.15 (margin of 0.7), I could get 0.218, and could get 0.214 with b4.</p> <h1>Phase 10. Ensemble (0.199)</h1> <p>I trained 7 b4s and 3 b5s with different seeds and did simple average. Also I increased nframes to 30 when inferencing. It resulted in my final public lb score of 0.199.</p> <p>I found increasing too much margin worsen the results.</p> <p><a href="/olegtrott">@olegtrott</a> I didn't understand your question. I increased face margin so that it contains more outer part of face and some backgrounds.</p> <p>Yeah I also read them. Uncertainty over licensing was an obstacle to us too. The model is publicly available so I decided to use it.</p> <p>0.9 and yes it is tuned on public score</p> <p>You can explore <a href="https://paperswithcode.com/sota">https://paperswithcode.com/sota</a> which compares models on diverse datasets.</p> <p>I used kfold or sometimes just fold0 with folder-wise split for the experiments</p> <p>1) Yeah I used that one for experiments. 2) I used all training data and used <code>pos_weight</code> parameter in <code>nn.BCEWithLogitsLoss</code> parameter to balance the loss. I didn't gain anything from changing the data loading method, and didn't dig further, so I cannot consult your specific situation.</p> <p>Thanks, <a href="/samusram">@samusram</a> . I used 256 resolution for my experiments. I don't recall precisely, but b0 took about 3 hours and b7 took about 12 hours. My final model b4 with resolution 352 took about 9 hours to train.</p> <p>1 2080ti With slight exaggeration, I can say that local computer ran 24 hours daily for last 1~2 months. Approximately 9 hours per model</p> <p>Thanks! Like said in the post, augmentations and face margin decreased the cv-lb gap. I think these two methods made the model learn robust features. And no I didn't have such problem that many real faces are classified to fake in my cv.</p> <p><a href="/catochris">@catochris</a> Thanks. The saved faces are resized to match the maximum face size within each video so that I can resize flexibly in the dataloader.</p> <p>If 2 faces are spotted and <code>n_frames=20</code>, I get 40 outputs, and I simply average them. I tried max between prob of each face and mean of 20 largest predictions but they didn't work.</p> <p>I used default config</p> <p><a href="/gaohaibin">@gaohaibin</a> I use apex fp16 <a href="/lookcc">@lookcc</a> I sample one frame per epoch, like I said in the post <code> Use different frames of each video every epoch (#0-&gt;#7-&gt;#14-&gt;#1-&gt;#8-&gt;…) </code></p> <ol> <li>I used my own machine</li> <li>1.5tb</li> <li>It is quite ambiguous to tell, but I tried to make my computer run 24 hours.</li> </ol> <p>Taking median didn't work for my validation set, so I abandoned it, but it seems it worked on public lb! Achieving your score within only 1 month or so is very impressive. Looking forward for your solution. </p> <p>I tuned manually</p> <p><a href="/godaibo">@godaibo</a> Yeah I used unet in all of my models. If the hosts want to utilize mask informations, they have uncompressed videos, so they know ground true modified regions, and the process will be much cleaner.</p> <p>Yep</p> <p>My single pytorch B5 only takes 54.51MB. I did <code>model.half()</code> to save space.</p> <p>Thanks <a href="/carlolepelaars">@carlolepelaars</a> . I did label smoothing, but it strangely gave me significantly bad cv. 
I thought mixup or label smoothing will make model more conservative and generalize better, but it didn't work for my case. I found B4 did better than B5 in my last version and settled on b4+b5.</p> <p>Thanks <a href="/youhanlee">@youhanlee</a> Hoping for your last step to GM!</p> <p>I'll update code after private leaderboard reveals. Since I didn't need decoder part when inferencing, I excluded them. Also I did <code>model.half()</code>. but since my dataset is less than 500MB, halving wasn't needed after all.</p> <p>Yeah if it is real mask contains only 0. I generated mask (segmentation target) by following pseudo code. <code> (abs(frame - original).mean(axis=-1) &amp;gt; adaptive_threshold).astype(int) </code></p> <p><a href="/maralski">@maralski</a> I used fixed pos_weight</p> <p><a href="/xavierlin">@xavierlin</a> No I didn't</p> <p>No I didn't</p> <p>I followed basic approach - select best epoch with val set logloss.</p> <p>I used multi-task in the final solution.</p> <p>hmm I didn't know the paper and also didn't try that method.</p> <p><a href="/jaideepvalani">@jaideepvalani</a> 1) every <code>frame</code>. I'll edit it to <code>frame</code>. 2) <code>constant*mean_of_abs_pixel_differences</code> for each frame. 3) as it is. <code>frame - original</code> will yield 0 mask if frame is original 4) like I mentioned in the post, I worked in <code>channel_last</code> format except in training. 5) albumentations library BTW, I cannot possibly explain every details of my methods. You should consult the code later(I'm planning to share it when I retain similar position in private lb)</p> <p><a href="/olegtrott">@olegtrott</a> about 700GB. I saved them as compressed numpy arrays, using joblib.</p> <p><a href="/xavierlin">@xavierlin</a> 1. Yeah but shuffled 2. k=5 3. No</p> <p><a href="/cesb45">@cesb45</a> I set adaptive threshold <code>constant*mean_of_abs_pixel_differences</code> for each frame.</p> <p>What do you mean by <code>multi-fullyConnected net</code>?</p> <p><a href="/user155897">@user155897</a> you've done great too! I've done nothing special with two faces. Just used averaging. I didn't use attention.</p> <p>Like I said in another reply, I split folder-wise, but in the end pretty much ignored local validation and rather validated on public lb.</p>
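<p>Two of the ingredients above are compact enough to sketch: the difference-based segmentation target and the "conservative fix". The code below is an illustrative reconstruction from the pseudo-code in the replies, not the author's exact implementation; the threshold constant is an assumption, while the 0.9 temperature is the value quoted above.</p> <pre><code>import numpy as np

def make_mask(fake_frame, real_frame, constant=3.0):
    """Segmentation target: pixels where the fake frame differs markedly from the real one.

    Per the replies above, the adaptive threshold is constant * mean abs pixel
    difference for the frame; the value of `constant` here is an assumption.
    """
    diff = np.abs(fake_frame.astype(np.float32) - real_frame.astype(np.float32)).mean(axis=-1)
    adaptive_threshold = constant * diff.mean()
    return (diff &gt; adaptive_threshold).astype(np.uint8)   # all zeros for a real frame

def conservative_fix(logit, temperature=0.9):
    """Shrink the logit toward 0 before the sigmoid to soften over-confident
    predictions when the train and test distributions differ (0.9 was tuned
    on the public LB)."""
    return 1.0 / (1.0 + np.exp(-temperature * logit))
</code></pre>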
Deepfake Detection Challenge
Public 34th/Private 47th place insights - Part1 (Inference)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,</p> <p>There is a lot to say about this competition. First I would like to share our inference pipeline here, and then I will make another post about the training part. Our solution uses only videos/frames (no audio at all), and the total number of frames used in inference was quite important for us: the more the better. We tried to move frames onto the GPU early in the pipeline to benefit from fast GPU computation in the next stages. We fought with NVidia DALI and Decord, as they can load/decode videos on the GPU; unfortunately it never worked: Decord had a memory leak and DALI finished with a timeout during submission (impossible to troubleshoot). So we used OpenCV and moved the decoded frames to the GPU right after. </p> <p>Then, for <strong>face extraction</strong>, we used MTCNN from facenet-pytorch, with small changes to make it support GPU input tensors. Prior to MTCNN, we applied a gamma correction to the frames (still on the GPU) to help MTCNN with dark videos or bad contrast. The outputs are faces resized to 256x256, with aspect ratio preserved and a large margin (30 pixels on each border).</p> <p>The next step is <strong>face tracking</strong> to identify how many faces are in a video and to clean up artifacts (detections that are not faces) with different rules (based on tracked confidences and the maximum number of faces). It is based on box centroids tracked across evenly spaced frames.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F698363%2F682f224bba7f3b69371c8903ac717158%2Fpipeline_inference.png?generation=1585728553095780&amp;alt=media" alt=""></p> <p>Next is <strong>model inference</strong>. We have 3 to 4 EfficientNet CV5 models with different input shapes (256x256, 240x240, 224x224), cropped by the normalizers. Each model was trained with a different validation strategy: some with full data, some with partial data to allow hold-out validation.</p> <p>The final step is <strong>post-processing</strong>. One may notice that some videos have blinking fake faces (a few frames real and a few frames fake within the same video). The probabilities of our models moved up and down, which means the models worked, but using a simple average for the final probability would not work:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F698363%2F58a2b09dc3e876009533a523b95eb571%2Ffake_frames.png?generation=1585729302156617&amp;alt=media" alt=""></p> <p>So we decided to evaluate thresholds to detect such behavior and to use the maximum probability instead of the average in this case only. The question was how many frames with high probability we should require to consider a video fake:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F698363%2Fae7596e7d77a9f827e375d39a0ebedcb%2Ftheshold.png?generation=1585729646472824&amp;alt=media" alt=""></p> <p>And the answer was around 20%. 
So if 20% of the frames have a probability higher than around 0.85, then we prefer the maximum probability instead of the average.</p> <p>There is an additional/optional step with ridge regression (not classification) built on hold-out data and applied to the models' outputs.</p> <p>This pipeline works only if we have enough frames; we were able to run it with up to 100 frames per video. Each single EFB model got around LB=0.32. Ridge regression on the ensemble got LB=0.29, and the thresholds provided a boost to LB=0.27.</p> <p>I would like to add that:</p> <ul> <li>TTA was not used, as it was more important to have various frames than the same frames with TTA. The score with TTA was only around +0.001.</li> <li>Floating point 16 was used to speed up inference. It did not modify the probabilities (minor differences only).</li> </ul> <p>We tried the 0.7 to 0.9 range and the variation was around 0.002. But true, if the private set is totally different, then it could fall back to the average of probabilities and then -0.02.</p> <p><a href="/moshel">@moshel</a> Average of probabilities &gt; 0.85 if more than 20%, correct? It should give similar results.</p> <p><a href="/hmendonca">@hmendonca</a> Our implementation is:</p> <pre><code># A few frames detected as fake then it's a fake video
# Probability of each frame (mean of each model), around 65 frames
filename_prob = np.nanmean(clf_predict_probas, axis=1)  # (65 rows)

# Number of frames above prob 0.85
T = 0.85
TH = np.sum(filename_prob &gt; T)/len(filename_prob)
if TH &gt; 0.20:
    filename_prob = np.nanmax(filename_prob)
else:
    filename_prob = np.nanmean(filename_prob)
</code></pre> <p>Yes, our HO data is balanced 1:1. In the graph you can notice that any value between 0.05 and 0.25 should work. On the LB we tried TH in [0.20, 0.22] with T in [0.7, 0.8, 0.85] and it provided around a +0.02 boost whatever the ensemble. We also tried TH=0.02 (a blind test prior to working on the HO data) and it was not good: -0.03.</p> <p>We also have another threshold that I've not described because the benefit was very low: if 100% of frames are below 0.5 and 90% of frames are below 0.2, then use min(probabilities).</p> <p>Finally, if a video has 2 faces detected, then we use the highest face probability, because one (or more) face could be fake and the other not. </p> <p>34th after LB cleanup 😏 . </p> <p>X = frame ratio threshold: 10%, 20%, 30% of frames per video with high probability. Y = log loss. T = different values for the high-probability cutoff (0.7, 0.8, 0.9, ...)</p> <p>The dataset used to build this graph is our hold-out data.</p> <p>49th on private. Not so bad, we survived the shake-up!</p> <p><a href="/philculliton">@philculliton</a> But only one of our submissions had been evaluated. The other failed and we don't know why (we didn't remove any dataset and it was running fine for the public LB). It would be great if Kaggle/Organizers would allow us (and all other teams with the same issue, around 6.5%) to have it evaluated.</p> <p>First, they should provide the underlying reason/exception that led to the missing submission.csv. Second, we could instruct them to try minor changes in the code (e.g. number of workers or total frames) to have it run successfully. As soon as it works, it's considered the result; no fine-tuning allowed.</p> <p>I'm sure each evaluation failure is due to a minor issue, such as one video bigger than expected causing an Out Of Memory error (RAM or GPU).</p>
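<p>The gamma correction applied on the GPU before MTCNN (mentioned in the face-extraction step above) can be sketched in a few lines. The gamma value, tensor layout, and function name here are illustrative assumptions, not the team's implementation.</p> <pre><code>import torch

def gamma_correct(frames, gamma=0.8):
    """Brighten dark frames before face detection.

    frames: uint8 tensor of shape (n_frames, height, width, 3) already on the GPU.
    gamma &lt; 1 brightens, gamma &gt; 1 darkens; 0.8 is an illustrative value.
    """
    x = frames.float() / 255.0
    x = x.pow(gamma)
    return (x * 255.0).round().to(torch.uint8)

# usage sketch
# frames = torch.from_numpy(decoded_frames).cuda()   # frames decoded with OpenCV
# frames = gamma_correct(frames)                     # then fed to MTCNN
</code></pre>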
Deepfake Detection Challenge
Silver zone solo, no credits used.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My congratulations too for your first silver!</p> <p>I regret I did not join this competition earlier, I had just one month of intensive work. This should be my first silver medal. I have read many write ups about getting good score and many of them lack one detail: some guys work pretty hard to climb high - this is very important detail. When I read comments like: "I worked 10 hrs/day on this competition" - I could not believe(maybe it is a bit exaggerated), before that I thought - well, ok, 2-3 hrs/week. This time I quit my job partially because I wanted to join some Kaggle competition and take it seriously plus I had no computer vision experience at my job and DFDC seemed very interesting to me. Finally I can say that I spent 30-40 hrs/week in average on this competition, working also over the weekends. I did not use any external machines for training, I joined late and could not use credits. My PC specs are 32 GB RAM, 128 GB SSD + 256 GB SSD, GeForce GTX 1080 Ti. SSD helped a lot in this competition.</p> <p>I believe the main "magic thing" for me was proper train-validation technique. For example, I have trained EfficientNetB1 which receives 224x224 image as input and outputs a binary number(fake or real) using the next data organization: For each epoch: 1. Select single face image from unique real video(random frame). 2. Select random fake video which was originated from a real video in a way that you <em>do not take two or more fake videos</em> originated from one unique real video. Then you pick the random frame and random face from each fake. Eventually you get the same number of subsampled fakes as for real samples. 3. Every epoch you pick faces/frames/videos randomly. 4. Also you split train/validation set by fixed folder-wise split for all epochs. 1 EPOCH takes just 5-10 minutes(if you have all images extracted on your SSD)! And whole training takes just 1-1.5 hours! Finally I had validation score ~0.22-~0.25 for different models. I extracted 20 frames during inference, classified them and calculated the median of predicitons.</p> <p>I trained LRCNN(EFNB1 as backbone) as well, its input is only 3 consequent frames. My final solution is the average of three single frame models multiply by 0.85 and one LRCNN model multiply by 0.15.</p> <p>I tried toooooons of things, I trained Increption-ResNet V2 during 2 days, attached the fan to my PC and I even was afraid that I will wake up in the house full of smoke :) I believe I could do better if a day had 50 hours instead of 24, if I had teamed up with someone. This competition has no end, we can apply here infinite number of approaches.</p> <p>I would like to thank guys like Human Analog and Shangqiu Li - they deserve many upvotes. And thank you all for participating!</p> <p>Resource-limited sounds offensive :D Haha, just kidding. Yes, JPGs on SSD in their original size. As I remember it occupied like 60 GiB, but not sure. I wanted also to test the idea with PNGs and used my external USB Hard Drive(1 TiB) in the last 3 days of competition. But that was the idea full of pain and I stopped checking it.</p>
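<p>The per-epoch sampling scheme described above (one random face crop per unique real video, plus at most one fake per original, re-drawn every epoch) is easy to sketch. The code below is an illustrative reconstruction, not the author's code; the dictionary layout and helper names are assumptions.</p> <pre><code>import random

def sample_epoch(real_to_fakes, frames_of):
    """Build one epoch of (image_path, label) pairs with balanced reals and fakes.

    real_to_fakes: dict mapping each real video id to the list of fake video ids
                   derived from it (assumed to come from the metadata files).
    frames_of:     dict mapping a video id to its list of extracted face-crop paths.
    """
    samples = []
    for real_id, fake_ids in real_to_fakes.items():
        # one random face crop (random frame) from the unique real video
        samples.append((random.choice(frames_of[real_id]), 0))
        # at most one fake per original, so reals and fakes stay 1:1
        if fake_ids:
            fake_id = random.choice(fake_ids)
            samples.append((random.choice(frames_of[fake_id]), 1))
    random.shuffle(samples)
    return samples
</code></pre> <p>At inference, the write-up extracts 20 frames per video, classifies them, and takes the median of the predictions.</p>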
Diabetic Retinopathy Detection
3rd Place Solution Report
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Diabetic Retinopathy Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We&#8217;d like to thank the California Healthcare Foundation and EyePACs for sponsoring and providing data for the competition. Many thanks to the Kaggle community. We leaned heavily on work shared by past competition winners.</p> <p>Our solution was an ensemble of 9 convolutional neural networks. We used a variety of model architectures. Our best performing were variations of <a href="http://arxiv.org/abs/1409.1556">Simonyan and Zisserman</a>, closely followed by models featuring the <a href="http://arxiv.org/abs/1412.6071">fractional max pooling layers</a> developed by the 1st place finisher (<a href="https://www.kaggle.com/btgraham">Graham</a>). Also included was a model utilizing <a href="http://benanne.github.io/2015/03/17/plankton.html">cyclic pooling</a> as described in the winning solution to the National Data Science Bowl (<a href="https://www.kaggle.com/sedielem">Dieleman</a>). We found that using large image sizes and combining information from both eyes were key to getting good performance.</p> <p>A more in-depth summary and submission code are attached.</p> <p>Edit: Fixed documentation around cyclic pooling model. See v2 of the PDF.</p>
Diabetic Retinopathy Detection
6th place solution - deepsense.io
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Diabetic Retinopathy Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here's a short summary of our approach: <a href="http://deepsense.io/diagnosing-diabetic-retinopathy-with-deep-learning/">http://deepsense.io/diagnosing-diabetic-retinopathy-with-deep-learning/</a></p> <p>I know it's been a long time since the competition ended, but somehow we've forgotten to put the link on the forum ;) </p>
Diabetic Retinopathy Detection
82nd place solution summary :)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Diabetic Retinopathy Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We used LevelDB. Please check the last paragraphs of the <a href="http://yerevann.github.io/2015/08/17/diabetic-retinopathy-detection-contest-what-we-did-wrong/#choosing-training--validation-sets">training and validation sets</a> section. The following command created the DB:</p> <pre><code>./build/tools/convert_imageset -backend=leveldb -gray=true -shuffle=true data/train.g/ train.g.01v234.txt leveldb/train.g.01v234 </code></pre> <p>This was our very first project in computer vision and in machine learning in general. We didn't manage to reach the top 10% (as we hoped), but we learned a lot from this contest. Now we would like to share our experience with everyone. Hopefully this will be helpful for beginners.</p> <p>Also, we would be very grateful if you point out any more mistakes we have made :)</p> <p><a href="http://yerevann.github.io/2015/08/17/diabetic-retinopathy-detection-contest-what-we-did-wrong/">http://yerevann.github.io/2015/08/17/diabetic-retinopathy-detection-contest-what-we-did-wrong/</a></p> <p>Thank you all for the great contest.</p> <p>Usually overfitting started between 40000-60000 iterations.</p> <p>No… I don't know any way to compensate for the class imbalance by modifying the architecture.</p> <p>Hi,</p> <p>I think we did it manually: we copied some of the lines to train.csv and the remaining ones to val.csv. Caffe's <code>convert_imageset</code> tool (which creates a LevelDB or LMDB database) requires a space-separated file which contains lines like this:</p> <pre><code>image1.jpg 0
image2.jpg 3
</code></pre> <p>where <code>0</code> and <code>3</code> are the class labels (they must be integers!). So technically it's not a CSV (comma-separated) anymore.</p>
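<p>As a small illustration of preparing such a label list for <code>convert_imageset</code>, here is a hedged sketch that writes space-separated "image label" lines from the competition's trainLabels.csv. The file names, column names, and the simple every-10th-image validation split are assumptions, not the authors' exact script.</p> <pre><code>import csv

# Write "image.jpeg label" lines for Caffe's convert_imageset from trainLabels.csv
# (assumed columns: image, level). The split rule here is purely illustrative.
with open("trainLabels.csv") as f, \
     open("train.txt", "w") as train_out, \
     open("val.txt", "w") as val_out:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader):
        line = "{}.jpeg {}\n".format(row["image"], int(row["level"]))
        # hold out every 10th image for validation
        (val_out if i % 10 == 9 else train_out).write(line)
</code></pre>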
Diabetic Retinopathy Detection
Competition report (min-pooling) and thank you
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Diabetic Retinopathy Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Julian, Daniel: Hello. I am hoping to improve upon my competition solution so suggestions are welcome.</p> <p>Daniel: 1) For this competition, I initialised the networks using the training data, at each layer scaling the Uniform[-1,+1] distribution so that the hidden units roughly had standard deviation 1.</p> <p>2) I found that ensembling did not add much. This surprised me as on things like CIFAR-10 and ImageNet, ensembling is massively useful. (I don't want to compare fractional max-pooling with VGG-style convnets on the basis of this competition as I didn't have enough time to be as thorough as I would have liked.)</p> <p>3) I don't think there is anything fundamentally wrong with very small batches. It is just a trade-off between training a smaller network for more epochs or a larger network for fewer.</p> <p>4) Up to a week. I used SparseConvNet because it's familiar and it implements fractional max-pooling. It is designed to handle sparse input data efficiently, which has not really relevant for this competition. For dense images, SparseConvNet cannot compete speed-wise with Caffe+cuDNN. But it is a work in progress, and I have high-hopes for optimizing it further.</p> <p>Hello. Attached is my competition report. </p> <p>Many thanks to the competition organisers at Kaggle, EyePACS and the California Healthcare Foundation. Thank you also to everyone who posted interesting observations about the dataset on the forums.</p> <p>Some links:</p> <ul> <li><p>The SparseConvNet library (GPL): <a href="https://github.com/btgraham/SparseConvNet">https://github.com/btgraham/SparseConvNet</a></p></li> <li><p>A Git branch for the competition with pre-/post-processing scripts, etc <a href="https://github.com/btgraham/SparseConvNet/tree/kaggle_Diabetic_Retinopathy_competition">https://github.com/btgraham/SparseConvNet/tree/kaggle_Diabetic_Retinopathy_competition</a></p></li> <li><p>Fractional max-pooling <a href="http://arxiv.org/abs/1412.6071">http://arxiv.org/abs/1412.6071</a></p></li> </ul>
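<p>The data-dependent initialisation mentioned in answer 1) above (scale a Uniform[-1,+1] init layer by layer so the hidden units have roughly unit standard deviation on training data) can be sketched as below. This is only an illustration of the idea on a toy fully-connected network with ReLU activations; the real models were convnets trained with SparseConvNet, and the batch, sizes, and activation choice here are placeholder assumptions.</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)

def init_with_data(layer_sizes, x):
    """Initialise each layer from Uniform[-1, 1], then rescale its weights so the
    layer's pre-activations on a batch of training data have roughly unit std."""
    weights = []
    h = x
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w = rng.uniform(-1.0, 1.0, size=(n_in, n_out))
        z = h @ w
        w /= z.std() + 1e-8          # rescale so hidden units have std ~ 1
        h = np.maximum(h @ w, 0.0)   # placeholder ReLU feeds the next layer
        weights.append(w)
    return weights

# usage sketch: a batch of flattened training images (values are placeholders)
x_batch = rng.standard_normal((128, 3072))
weights = init_with_data([3072, 512, 256, 5], x_batch)
</code></pre>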
Diabetic Retinopathy Detection
My Experience Writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Diabetic Retinopathy Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Kaggle and fellow competitors!</p> <p>Here's a <a href="http://ilyakava.tumblr.com/post/125230881527/my-1st-kaggle-convnet-getting-to-3rd-percentile">writeup of my experience</a>. Looking forward to reading other people's experiences too!</p>
Diabetic Retinopathy Detection
Team o_O Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Diabetic Retinopathy Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@old-ufo thanks. Yes it's maxout, pooled over 2 units. I updated the summary.</p> <p>@Jeffrey, I agree it's probably what made up for most of the difference in score. It took some time to get the get the learning rate, L1 and L2 factors and most importantly resampling strategy to work well together. What I quite liked about it was that I could iterate on it rapidly without having to wait days to see results because it only takes a few minutes to train the blending network. When we trained a better/different network for feature extraction at a later point most of the improvements made to the blending network would persist.</p> <p>@Andrey, I think it originally appeared in the <a href="http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf">Imagenet Paper</a> in section 4.1. I used standard deviation 0.5 for net A and standard deviation 0.25 for net B as opposed to 0.1 in the paper. I guess our numbers are rather high but the images in this dataset looked like they vary a lot in terms of color composition and I was trying to take this into account. I'm uncertain if adding the color augmentation helped in the end but I figured even if didn't improve the results there's a good chance it would help against overfitting so we kept it.</p> <p>Thanks guys.</p> <p>@sedielem At this point I can't say with confidence if it made any difference at all. Around 2 months ago I tried replacing all MaxPool layers with RMSPool layers but that didn't work very well. But when using RMS pooling only in the last pooling layer the results were similar to using only max pooling layers and from then on I just stuck with it. It would be interesting to run it once more without the RMS pooling layer but the entire process of training one network takes 3-4 days. I'll let you know if I get around to it. I also tried starting with leaky rate 0.5 and then gradually decreasing it down to zero using a shared variable. During training this worked great, but when trying to extract features or make predictions I noticed I was unable to affect the results by changing the leaky rate. Not knowing what was going on at this point I didn't pursue it any further.</p> <p>One thing I wanted to look at but didn't get around to yet was how much variance there was in the trained filter weights within each layer for different training strategies and rectifiers. I was also wondering if it's possible to apply a kind of penalty that &quot;forces&quot; the filters within each conv layer to become more diverse. Hoping this could help speed up training or allow smaller networks with similar performance. I trained my first neural net around the beginning of this year so I'm not sure how much sense this all makes.</p> <p>@julian I didn't try to use the same weights to bootstrap larger nets with different architecture so I can't comment on that. 
But I can give you some numbers of how much (or little) we gained from ensembling</p> <pre><code>net set of weights blend iter augment averages public LB private LB B 1 1 20 0.84803 0.83916 B 1 1 50 0.84963 0.83973 A+B 1 1 50 0.85090 0.84291 A+B 1 10 50 0.85201 0.84308 A+B 2 1 50 0.85425 0.84379 A+B 2 10 50 0.85199 0.84425 A+B 3 1 50 0.85339 0.84479 A+B 2 1 50 0.85316 0.84674 * (*in the last row thresholds were optimized for kappa on validation set) </code></pre> <p>@Hrant I did try using softmax classification in the beginning but the results weren't encouraging. I wondered quite a bit about how one could implement something close to kappa in theano but I never figured it out. At the end of the training phase both networks had a kappa score of about 0.80 (using info from one eye only). When feeding the output of the RMSPool layer (without averaging over different augmentations or blending patient eyes) into the blend network using only the features for one eye kappa is 0.805. When doing the same but using features averaged over 50 pseudo random augmentations kappa is 0.812. Then adding the standard deviation of the features brings kappa to 0.816. These numbers are however subject to some fluctuations.</p> <p>@Ali we haven't released it yet. As far as I understand we'll first submit it to kaggle and when we fulfill all the requirements release it on github.</p> <p>First I'd like to thank team mate Stephan, the hosts and everyone else who joined and congratulate the other winners! </p> <p>Below is a brief solution summary while things are fresh before we get time to do a proper writeup and publish our code.</p> <h2>Team o_O Solution Summary</h2> <p>Neural networks trained using <a href="https://github.com/Lasagne/Lasagne">lasagne</a> and <a href="https://github.com/dnouri/nolearn">nolearn</a>. Both libraries are awesome, well documented and easy to get started with.</p> <p>A lot of inspiration and some code was taken from winners of the national data science bowl competition. 
Thanks!</p> <ul> <li><a href="https://github.com/benanne/kaggle-ndsb">&#8779; Deep Sea &#8779;</a>: data augmentation, RMS pooling, pseudo random augmentation averaging (used for feature extraction here)</li> <li><a href="https://www.kaggle.com/c/datasciencebowl/forums/t/13166/happy-lantern-festival-report-and-code">Happy Lantern Festival</a> Capacity and coverage analysis of conv nets (it's built into nolearn)</li> </ul> <h3>Preprocessing</h3> <ul> <li>We selected the smallest rectangle that contained the entire eye.</li> <li>When this didn't work well (dark images) or the image was already almost a square we defaulted to selecting the center square.</li> <li>Resized to 512, 256, 128 pixel squares.</li> <li>Before the data augmentation we scaled each channel to have zero mean and unit variance.</li> </ul> <h3>Augmentation</h3> <ul> <li>360 degrees rotation, translation, scaling, stretching (all uniform)</li> <li>Krizhevsky color augmentation (gaussian)</li> </ul> <p>These augmentations were always applied.</p> <h3>Network Configurations</h3> <pre><code> net A | net B units filter stride size | filter stride size 1 Input 448 | 4 448 2 Conv 32 5 2 224 | 4 2 224 3 Conv 32 3 224 | 4 225 4 MaxPool 3 2 111 | 3 2 112 5 Conv 64 5 2 56 | 4 2 56 6 Conv 64 3 56 | 4 57 7 Conv 64 3 56 | 4 56 8 MaxPool 3 2 27 | 3 2 27 9 Conv 128 3 27 | 4 28 10 Conv 128 3 27 | 4 27 11 Conv 128 3 27 | 4 28 12 MaxPool 3 2 13 | 3 2 13 13 Conv 256 3 13 | 4 14 14 Conv 256 3 13 | 4 13 15 Conv 256 3 13 | 4 14 16 MaxPool 3 2 6 | 3 2 6 17 Conv 512 3 6 | 4 5 18 Conv 512 3 6 | n/a n/a 19 RMSPool 3 3 2 | 3 2 2 20 Dropout 21 Dense 1024 22 Maxout 512 23 Dropout 24 Dense 1024 25 Maxout 512 </code></pre> <ul> <li>Leaky (0.01) rectifier units following each conv and dense layer.</li> <li>L2 weight decay factor 0.0005</li> <li>Mean squared error objective.</li> <li>Untied biases.</li> </ul> <h3>Training</h3> <ul> <li>We used 10% of the patients as validation set.</li> <li>128 px images -&gt; layers 1 - 11 and 20 to 25.</li> <li>256 px images -&gt; layers 1 - 15 and 20 to 25. Weights of layer 1 - 11 initialized with weights from above.</li> <li>512 px images -&gt; all layers. Weights of layers 1 - 15 initialized with weights from above.</li> <li>Training started with resampling such that all classes were present in equal fractions. We then gradually decreased the balancing after each epoch to arrive at final &quot;resampling weights&quot; of 1 for class 0 and 2 for the other classes.</li> <li>Nesterov momentum (0.9) with fixed learning rate schedule over 250 epochs. 
<ul><li>epoch 0: 0.003</li> <li>epoch 150: 0.0003</li> <li>epoch 220: 0.00003</li> <li>For the nets for 256 and 128 pixels images we stopped training already after 200 epochs.</li></ul></li> </ul> <p>Models were trained on a GTX 970 and a GTX 980Ti (highly recommended) with batch sizes small enough to fit into memory (usually 24 to 48 for the large networks and 128 for the smaller ones).</p> <h3>&quot;Per Patient&quot; Blend</h3> <p>Extracted mean and standard deviation of RMSPool layer for 50 pseudo random augmentations for three sets of weights (best validation score, best kappa, final weights) for net A and B.</p> <p>For each eye (or patient) used the following as input features for blending,</p> <pre><code>[this_eye_mean, other_eye_mean, this_eye_stddev, other_eye_stddev, left_eye_indicator] </code></pre> <p>We standardized all features to have zero mean and unit variance and used them to train a network of shape,</p> <pre><code>Input 8193 Dense 32 Maxout 16 Dense 32 Maxout 16 </code></pre> <ul> <li>L1 regularization (2e-5) on the first layer and L2 regularization (0.005) everywhere.</li> <li><a href="http://arxiv.org/abs/1412.6980">Adam Updates</a> with fixed learning rate schedule over 100 epochs. <ul><li>epoch 0: 5e-4</li> <li>epoch 60: 5e-5</li> <li>epoch 80: 5e-6</li> <li>epoch 90: 5e-7</li></ul></li> <li>Mean squared error objective.</li> <li>Batches of size 128, replace batch with probability <ul><li>0.2 with batch sampled such that classes are balanced</li> <li>0.5 with batch sampled uniformly from all images (shuffled)</li></ul></li> </ul> <p>The mean output of the six blend networks (conv nets A and B, 3 sets of weights each) was then thresholded at <code>[0.5, 1.5, 2.5, 3.5]</code> to get integer levels for submission.</p> <h3>Notes</h3> <p>I just noticed that we achieved a slightly higher private LB score for a submission where we optimized the thresholds on the validation set to maximize kappa. But this didn't work as well on the public LB and we didn't pursue it any further or select it for scoring at the end.</p> <p>When using very leaky (0.33) rectifier units we managed to train the networks for 512 px images directly and the MSE and kappa score were comparable with what we achieved with leaky rectifiers. However after blending the features for each patients over pseudo random augmentations the validation and LB kappa were never quite as good as what we obtained with leaky rectifier units and initialization from smaller nets. We went back to using 0.01 leaky rectifiers afterwards and gave up on training the larger networks from scratch.</p> <p>That is the expected output you get when installing with pip.</p> <p>Newer versions of lasagne don't have the Objective class. You can use the lasagne version specified in requirements.txt , there is some more information in the README file.</p> <p>Maybe it would work if you install the following theano commit instead:</p> <pre><code>pip install git+https://github.com/Theano/Theano.git@dfb2730348d05f6aadd116ce492e836a4c0ba6d6 </code></pre> <p>It does regression instead of classification and the output is then thresholded at 0.5, 1.5, 2.5, 3.5 to get integer levels for predictions.</p>
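<p>To make the final step concrete: the blend networks are trained as regressors, and their averaged continuous outputs are cut at <code>[0.5, 1.5, 2.5, 3.5]</code> to obtain integer levels 0-4 for submission (the Notes above mention a variant where these cut points are tuned on the validation set to maximise kappa). The snippet below is an illustrative sketch, not the team's code.</p> <pre><code># Illustrative numpy sketch (not the team's code) of the thresholding step.
import numpy as np

def to_levels(y_pred, thresholds=(0.5, 1.5, 2.5, 3.5)):
    """Map continuous regression outputs to integer class levels."""
    return np.digitize(y_pred, thresholds)

blend_mean = np.array([0.1, 0.7, 1.9, 2.4, 3.8])   # mean of the 6 blends
print(to_levels(blend_mean))                        # -> [0 1 2 2 4]
</code></pre>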
Diabetic Retinopathy Detection
Write-up of my competition experience
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Diabetic Retinopathy Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,</p> <p>I had a lot of fun in this competition and learned quite a lot. I prepared some write-up of what I've learned at: <a href="http://jeffreydf.github.io/diabetic-retinopathy-detection/">http://jeffreydf.github.io/diabetic-retinopathy-detection/</a>.</p> <p>Feel free to let me know if something isn't clear enough or if you have any questions. :-)</p> <p>Best,</p> <p>Jeffrey</p> <p>I can't add it as a GitHub link on Kaggle but I published <a href="https://github.com/JeffreyDF/kaggle_diabetic_retinopathy">some code with a trained model</a>. Most interesting will probably be the <a href="https://github.com/JeffreyDF/kaggle_diabetic_retinopathy/blob/master/notebooks/Sample_prediction.ipynb">sample notebook</a>.</p>
Display Advertising Challenge
3 Idiots' Solution & LIBFFM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Display Advertising Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=jwjohnson314;59345]</p> <p>Very nice solution. I'm curious - how did you determine the average CTR's on the public/private leaderboards (slide 16)?</p> <p>[/quote]</p> <p>Suppose we have an impression whose label is 1. If the prediction for this<br>impression is 0, then we should get an infinite logloss. In practice this is<br>not desired, so on the submission system there is a ceiling (C) for logloss.<br>If the logloss for an impression is greater than this ceiling, then it will be<br>truncated. Note that we do not know the value of this ceiling.</p> <p>Using this feature, we can hack the average CTR by two submissions.</p> <p>The first submission contains all ones, so the logloss we get is P1 =<br>(nr_non_click*C)/nr_instance.</p> <p>The second submission contains all zeros, so the logloss we get is P2 =<br>(nr_click*C)/nr_instance.</p> <p>We can then get the average CTR = P2/(P1+P2).</p> <p>[quote=rokh;58729]</p> <p>Thanks for such a great solution! Hope you don't mind another little question.<br><br>How and when did you choose threshold for transforming infrequent features?</p> <p>[/quote]</p> <p>This threshold was selected based on experiments. We tried something like 2, 5, 10, 20, 50, 100, and chose the best one among them.</p> <p>[quote=wugui;57754]</p> <p>congratulations&#65292;i am a newer in competition, please bear with me.</p> <p>i have a question about click rate feature.</p> <p>because i think click rate feature may be useful for predicting, and i do not find it in your ppt, so i want to know why you don't use click rate feature in your solution?</p> <p>[/quote]</p> <p>Yes, we tried what you mentioned, but did not get better results.</p> <p>[quote=clustifier;57412]</p> <p>@guestwalk,</p> <p>Does&nbsp;special care is needed for generating the gbdt features to avoid overfitting?</p> <p>I didn't thought it needed&nbsp;but it looks like I do have some kind of overfitting problem when trying to implement if naively.</p> <p>Thanks,</p> <p>C.</p> <p>[/quote]</p> <p>Sorry for the late reply. If you find gbdt overfits, I think reducing the number of trees and the depth of a tree may help.&nbsp;</p> <p>1. Right.</p> <p>2. Yes.</p> <p>[quote=saurk;56356]</p> <p>The 4 tuple (f1,j1,f2, j2) uniquely indexes the parameter dot product, right ? Can we construct a unique 2 tuple &nbsp;(f1__j1, &nbsp;f2_j2) from the given 4 tuple which indexes to the same dot product value ?</p> <p>[/quote]</p> <p>[quote=Torgos;56807]</p> <p>Thank you for posting your solution, but the linked explanation pdf currently returns the error &quot;You don't have permission to access /~r01922136/kaggle-2014-criteo.pdf on this server.&quot;</p> <p>[/quote]</p> <p>It should be back now. Please try again. Thanks.</p> <p>I do not quite understand what do you mean. May you give a&nbsp;mathematical formulation or an example? Thanks!</p> <p>[quote=saurk;56266]</p> <p>If we incorporate the field awareness in the feature value itself, say the last two digits of each feature value maps one on one with the source column from which it came, will this FM be same as Rendel's libFM implementation ??</p> <p>[/quote]</p> <p>You are welcome. 
:)</p> <p>1. Yes. A 0.001 improvement is very significant in this competition.</p> <p>2. I see. The same as your previous question (3). We have tried that, but it did not</p> <p>give us better result.</p> <p>[quote=irsneg;56264]</p> <p>Thanks for the quick answer :).</p> <p>1. OK, interesting. The improvement is rather marginal though. Is it even significant?</p> <p>2. I meant w^T * x from slide 13 (without interaction terms), so that the overall model is with phi(w,x) = w^T * x + sum &lt; w_j1,f1,w_j2,f2&gt; * x_j1 * x_j2 and not only the sum as in the slides, right (I cannot remove the strange =&quot;&quot; which was added automatically...)?</p> <p>3. OK.</p> <p>[/quote]</p> <p>Thanks. :) Please see my opinions:</p> <p>1. As mentioned on page 14 of our slide, this approach is firstly proposed by<br> Michael Jahrer et al. in KDD Cup 2012 Track 2. In our experiments, this<br> approach is better than the standard FM model. (0.001 - 0.002 improvement)<br> The standard FM shares the same latent space, and the field-aware FM has<br> dedicated latent space for each pair of fields. I think this is the reason<br> why the field-aware FM outperforms the standard FM in this competition.</p> <p>2. Sorry I do not quite get what you mean by &quot;simple features&quot;. May you explain<br> more about it?</p> <p>3. We tried that, but not including these terms gave us better<br> results. I do not remember what the difference was, so I cannot report a<br> number.</p> <p>[quote=irsneg;56198]</p> <p>Hi!</p> <p>Congrats! Nice and creative approach :). Thanks for sharing and answering all the previous questions; that's very interesting. I have a few addition questions that might interest the community as well; please have a look:</p> <p>- How did you come up with this &quot;field-aware&quot; version of FM? Is it performing much better than Rendle's original approach on this problem? Do you have any thoughts on why this is the case?<br>- What are you doing with simple features? (They are not in the formula of phi(w,x) on slide 14)?<br>- Why did not you include interactions terms xj * xj (when j1 == j2 on slide 14)?</p> <p>Thanks!</p> <p>[quote=guestwalk;55762]</p> <p>Dear all,</p> <p>We have prepared our codes and documents.</p> <p>For the codes, please see <a href="https://github.com/guestwalk/kaggle-2014-criteo">here</a>; and for the documents, please see <a href="http://www.csie.ntu.edu.tw/~r01922136/kaggle-2014-criteo.pdf">here</a>.</p> <p>Your comments are very welcome. Thanks!</p> <p>-- 3 Idiots</p> <p>[/quote]</p> <p>[/quote]</p> <p>Sorry for the late reply!</p> <p>For your question, I don't think it is a binary classification problem. That<br>is, from some points of view, they are generic; but from some others, they are<br>customized.</p> <p>It is generic because the input to these solvers are just a label vector and a<br>matrix. Therefore, for any problem, as long as it can be transformed into the<br>required format, our solvers work.</p> <p>On the other hand, it is customized because:</p> <p>1. So far we only support binary sparse matrices (sparse matrices whose values<br> are either zero or one). Actually it is not difficult to support generic<br> sparse matrices. Two main reasons we support binary sparse matrices only are<br> that it consumes less memory and that it runs faster.</p> <p>2. There are some usability issues. 
For example, we do not save model, and<br> test set is forced to be specified during training.</p> <p>To make these solvers more generic, I think there is still a long way to go.<br>Though it is not difficult to address these issues, it definitely needs some<br>hard works.</p> <p>BTW, we just wrote README to introduce the input file formats of GBDT and<br>FM. We are still improving the documents and the codes.</p> <p>[quote=saurk;56066]</p> <p>Congratulations ! Wondering if the FM and GBDT source code you have here is generic or it's customized to Criteo problem ? &nbsp;If generic, maybe we can use this instead of R or Python's GBM packages.&nbsp;</p> <p>[/quote]</p> <p>[quote=Inspector;56305]</p> <p>I am struggling to see the difference. Are their any tutorials or explanations online you know of that explains field aware FM versus standard FM?</p> <p>[/quote]</p> <p>The only reference I know is <a href="https://storage.googleapis.com/kaggle-competitions/kddcup2012/2748/media/OperaSlides.pdf">these slides</a>&nbsp;provided by&nbsp;Opera Solutions&nbsp;in KDD Cup 2012. Please check page 11. Though the symbols are quite different from ours, the concept behind is the same.</p> <p>No, field-aware FM is nothing to do with hashing collision. For the difference between field-aware FM and standard FM, please see&nbsp;<a href="https://www.kaggle.com/c/criteo-display-ad-challenge/forums/t/10555/3-idiots-solution/55932#post55932">this post</a>.</p> <p>[quote=saurk;56287]</p> <p>Please correct me if wrong but my understanding was that the reason you introduce field-awareness is because the hashing on feature-pairs result is collision. If the dimensions were much less so there was no need for hashing, would we still use field-aware FMs or thats redundant ?</p> <p>[/quote]</p> <p>Dear all,</p> <p>We have prepared our codes and documents.</p> <p>For the codes, please see <a href="https://github.com/guestwalk/kaggle-2014-criteo">here</a>; and for the documents, please see <a href="http://www.csie.ntu.edu.tw/~r01922136/kaggle-2014-criteo.pdf">here</a>.</p> <p>In this competition and <a href="https://www.kaggle.com/c/avazu-ctr-prediction">Avazu's CTR competition</a>, we find field-aware factorization machines (FFM) are powerful models. We have implemented it in the package <a href="http://www.csie.ntu.edu.tw/~r01922136/libffm/">LIBFFM</a>.</p> <p>Your comments are very welcome. Thanks!</p> <p>-- 3 Idiots</p> <p>I see. Many thanks for your comments! :)</p> <p>[quote=Tianqi Chen;55966]</p> <p>Having to do early stopping seems to be a very common thing that can not be avoided in factorization model. Sometimes using an alternative regularization(e.g max-norm on latent factor) can make the long tail effect comes slower, however it will still come, and the best solution is usually worse than what you can have in optimal iteration.&nbsp;</p> <p>Among the factorization models, bayesian version is stable to long-tail effect, however they are not practical(at least not &nbsp;the plain version) because you have to store all the generated samples for prediction.&nbsp;<span style="line-height: 1.4">I have no idea how this can be avoided easily:) Maybe using other stable model to calibrate what is generated from &nbsp;FM models...</span></p> <p>The regularization you use appears to be the common one used by MF folks. Since lambda is applied only when nonzero feature occurs. &nbsp;It is not standard L2 regularizer. 
It is a weighed L2 regularization where weight on feature i is lambda times number of nonzero feature i</p> <p>Thanks for answering so many questions! I learned a lot from your solution</p> <p>[/quote]</p> <p>Tianqi:</p> <p>Because I do not know how other people used FM, sorry I have no idea about the<br>&quot;difference&quot;. I guess the following three settings might be helpful:</p> <p>1. Transforming infrequent features into a special tag. Conceptually, infrequent<br> features should only include very little or even no information, so it<br> should be very hard for a model to extract those information. In fact, these<br> features can be very noisy. We got significant improvement (more than 0.0005)<br> by transforming these features into a special tag.</p> <p>2. Transforming numerical features (I1-I13) to categorical features.<br> Empirically we observe using categorical features is always better than<br> using numerical features. However, too many features are generated if<br> numerical features are directly transformed into categorical features, so we<br> use</p> <p>v &lt;- floor(log(v)^2)</p> <p>to reduce the number of features generated.</p> <p>3. Data normalization. According to our empirical experience, instance-wise<br> data normalization makes the optimization problem easier to be solved.<br> For example, for the following vector x</p> <p>(0, 0, 1, 1, 0, 1, 0, 1, 0),</p> <p>the standard data normalization divides each element of x by by the 2-norm<br> of x. The normalized x becomes</p> <p>(0, 0, 0.5, 0.5, 0, 0.5, 0, 0.5, 0).</p> <p>The hashing trick do not contribute any improvement on the leaderboard. We<br>apply the hashing trick only because it makes our life easier to generate<br>features. :)</p> <p>Tianqi:</p> <p>(1) We have not submitted one using the global latent factor version. On our validation set, the difference was around 0.001. (Both of them did not use GBDT features.)</p> <p>(2) We compare them in the last page of our slide. &quot;FM+GBDT&quot; and &quot;FM&quot; respectively represent training FM with/without adding the GBDT features. We can see the around 0.001 improvement by adding the GBDT features.</p> <p>For the questions about the sensitivity and the situation after the best<br>iteration, please see the attachment. (tr_loss and va_loss respectively<br>represent the logloss on the training and the validation set.) I do this on our<br>own validation set; also, I removed GBDT features so that the experiments can<br>run faster.</p> <p>We can see the setting eta = 0.1 indeed achieve the best performance on the<br>validation set. For eta = 0.2, eta = 0.05, and eta = 0.02, a similar<br>performance can be achieved. The smaller eta is, the more iterations are<br>required to achieve a good performance; it is the classic SGD behavior. If eta<br>is too large (in this case, 0.5), then the result is significantly worse; for<br>the setting eta = 0.01, it do not converge at 100th iterations. My opinion is<br>choosing a good eta is still crucial to SGD. For a large eta, the accuracy can<br>be hurt; for a small eta, it takes a long time to converge.</p> <p>On the other hand, the &quot;long tail&quot; iterations indeed hurts a lot. In fact, we<br>also used this learning rate schedule to solve the simple matrix factorization<br>(MF) problem, and the performance in the long tail iterations is quite stable<br>(i.e., close to the best performance). 
I think the difference may be that FM is<br>much complicated compared to MF, so the overfitting problem is much harder to<br>be avoided. (In some sense this FM problem is a combination of 39*38/2 MF<br>problem, given there are 39 features for each impression.) This problem is<br>really annoying because it makes our solution less useful in practice. Any<br>suggestion?</p> <p>For regularization, our implementation is very simple. Every time we choose an<br>instance, we only update the corresponding coordinate of the non-zero features.<br>For example, if the non-zero coordinates of an instance x are 376, 248, 571,<br>and 942, then only w376, w248, w571, and w942 are updated. Take w248 as an<br>example, the corresponding gradient is</p> <p>g248 = lambda*w248 + (the gradient of the loss term)</p> <p>Note that lambda is a fixed parameter, we don't change it during training at<br>all. Not sure if it is one of the approaches you mentioned.</p> <p>Finally, the gradient of regularization is also accumulated.</p> <p>[quote=Tianqi Chen;55931]</p> <p>Interesting! This is the first time I see AdaGrad to work on factorization models.</p> <p>Sometimes&nbsp;it works great for convex models and some neuralnets. But I wasn't able get it work better than constant learning rate or decayed learning rate on factorization models.</p> <p>How sensitive is the result to the initial learning rate, I suspect it is just for lower rate, you run for longer rounds, but still get similar results?</p> <p>I get your point about choosing the stopping condition. Since you use AdaGrad, I guess the raising up period after optimal point will be more smoother? Sometimes Adagrad in convex models tends to less overfit, say 50 rounds gets you there, and running for 70 rounds won't hurt too much. I am wondering what is in your case.</p> <p>For L2 regularizer, how do you implement it? If it is standard L2 regularizer, you don;t want to regularize parameters every rounds, so either some lazy trick or divide the regularizer by #occurance can be used. In many MF codes, sometimes ppl simply take another regularizer lambda #occraunce_of_fi w_i^2 and run weight decay every time you see the parameter.</p> <p>Does the gradient of regularizer goes into the accumulated gradient?</p> <p>Tianqi</p> <p>[/quote]</p> <p>You are very welcome!</p> <p>(2) They are similar but not the same. The FM model in the paper you provided<br> is field unaware. The difference between equation 1 in the paper and the<br> formula on page 14 of our slide is that our w is not only indexed by j1 and<br> j2, but also indexed by f1 and f2. Consider the example on page 15, if<br> Rendle's FM is applied, it becomes:</p> <p>w376^Tw248x376x248 + w376^Tw571x376x571 + w376^Tw942x376x942<br> + w248^Tw571x248x571 + w248^Tw942x248x942<br> + w571^Tw942x571x942</p> <p>BTW, we use k = 4, not k = 2.</p> <p>Please let me know if you have more questions.&nbsp;:)</p> <p>[quote=Inspector;55906]</p> <p>1) I think this helped a lot. I was confused what the hashing trick was doing. I was thinking perhaps the value of a feature, say 5a9ed9b0, was REPLACED by an integer. So, I understand now that one-hot-encoding is still being used, it is just the indexing of the data which is improved (memory wise) when hashed.</p> <p>2) So this is essentially the same model as shown in the Rendle paper http://www.ismll.uni-hildesheim.de/pub/pdfs/Rendle2010FM.pdf (equation 1) , where you used k=2 (the number of factors)? 
Is this correct?</p> <p>Thanks very much!!</p> <p>[/quote]</p> <p>You are very welcome!</p> <p>Here are my answers:</p> <p>(1) For the weight decay of the learning rate (step size), we use<br> per-coordinate learning rate schedule. For a specific coordinate, the<br> update rule is:</p> <p>G = G + g*g<br> <br> w = w - eta*(1/sqrt(G))*g,</p> <p>where g is the gradient of that coordinate, G records the accumulated<br> squared sum of the gradients in the past updates, eta is the initial<br> learning rate, and w is the model. Note that the initial value of G is set<br> to 1.<br> <br> This learning rate schedule has been studied in, for example, <a href="http://www.magicbroom.info/Papers/DuchiHaSi10.pdf">this paper</a>.<br> Because we know this learning rate schedule before the competition, we did<br> not spend any time tuning.</p> <p>(2) For the value of initial learning rate (eta), we tried 0.5, 0.2, 0.1, 0.05,<br> 0.02, and 0.01 at the early stage we used FM. Then we found that 0.1 is a<br> good balance between the performance and the training time, so we just<br> fixed it and had never changed it.</p> <p>Because at that time we haven't included GBDT features, FM ran about three<br> times faster compared to the current setting. These experiments took us<br> around 6 hours.</p> <p>(3) For the stopping criterion, we did not find a good one. What we did was:</p> <p>1. Run experiments on our validation set, and record the number of<br> iterations to achieve the best performance.<br> <br> 2. Use this number of iterations to train the model on the whole<br> training data.</p> <p>So for each of our submissions, we run experiments at least two times: one<br> on validation set and the other on the whole data set.</p> <p>(4) There is another parameter need to be tuned: lambda (regularization<br> penalty). Like other machine learning models such as SVM or standard LR,<br> this parameter need to be tuned. We ran experiments very similar to what we<br> mentioned in tuning eta. The time cost of tuning this parameter was also<br> around 6 hours.</p> <p>Please feel free to let me know if you have more questions. :)</p> <p>[quote=Tianqi Chen;55890]</p> <p>Thanks for detailed answer!</p> <p>I like your comment on the data normalization trick. This is also what I observed repeatly in experimenting with factorization models. I think this is mainly because the MF models applies L2 projection and L2 inner product, so keeping the features' L2 norm constant makes different product roughly in the same scale without introducing instance bias.</p> <p>Here is another question:) I took a look over your solver and seems it is SGD based solver, so I assume you need to tune the step size and weight decay and do early stopping on validation set? How does the three factors affect your experiment? I guess usually different step size and weight decay could lead to slightly different stoping round.&nbsp;How much effort do you spent on tuning them?</p> <p>Tianqi</p> <p>[/quote]</p> <p>(1) Actually the fact is completely contrary to what you mentioned. We one-hot<br> encode everything. The &quot;integer&quot; you mentioned is actually the index of a<br> feature. For example, consider the feature &quot;C1:5a9ed9b0&quot;, and it is hashed<br> to an integer, say, 872342. It does not mean that a feature's value is<br> 872342; it actually means that the value of 872342nd feature is non-zero.</p> <p>(2) Consider the example in the 15th page of the slide. x is an instance of<br> training data, and w is the model. 
Because x is very sparse, we use sparse<br> format to represent it. That is, only the 376th, the 248th, the 571st, and<br> the 942nd features are non-zero. phi(w,x) can be obtained as the slide<br> shows. So, for your question, x_j1, x_j2, w_j1,f2, and w_j2,f1 are symbols<br> used to represent the corresponding w and x in each term. For example, in<br> the first term, x_j1 is x_376, x_j2 is x_248, w_j1,f2 is w_376,2, and<br> w_j2,f1 is w_248,1.</p> <p>Not sure if you think the questions are answered. Please let me know if it is<br>not clear. :)</p> <p>[quote=Inspector;55862]</p> <p>For those of us (me ;-)) just starting to learn about FM, hope you don't mind a couple elementary questions.</p> <p>1) Are you only using numeric variables in the model - no one-hot encoding at all? Everything is hashed to an integer?</p> <p><br>2) On Slide #14-15 I am unsure what {x_j1, x_j2 } and {wj1,f2, wj2,f1} are. Would you explain?</p> <p>[/quote]</p>
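<p>A short sketch of the field-aware FM score phi(w, x) for a binary sparse instance, following the four-feature slide example discussed above (k = 4, one latent vector per feature per field). This is only an illustration of the formula, not LIBFFM; the feature indices are taken from the example and the field indices are 0-based.</p> <pre><code># Illustration of the field-aware FM score phi(w, x); not LIBFFM.
import numpy as np

k, n_features, n_fields = 4, 1000, 4
rng = np.random.RandomState(0)
w = rng.normal(scale=0.1, size=(n_features, n_fields, k))

# non-zero (feature, field) pairs of one instance, 0-based field indices
x = [(376, 0), (248, 1), (571, 2), (942, 3)]

def ffm_phi(w, x):
    score = 0.0
    for a in range(len(x)):
        for b in range(a + 1, len(x)):
            j1, f1 = x[a]
            j2, f2 = x[b]
            # latent vector of j1 dedicated to j2's field, and vice versa
            score += w[j1, f2].dot(w[j2, f1])    # x_j1 * x_j2 == 1
    return score

print(ffm_phi(w, x))
</code></pre>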
Display Advertising Challenge
Document and code for the 3rd place finish
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Display Advertising Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Aymenj;56289]</p> <p>Hi,</p> <p>Thanks a lot for sharing your methodology with us. I have a few questions regarding your approach:</p> <ul> <li>How do you create your feature groups? What kind of basic statistic do you use to reduce collinearity?</li> <li>What do you mean by hierarchical features?</li> <li>How do you deal with missing values?&nbsp;e.g. do you &quot;fill&quot; the categorical features with 'NaN'?</li> </ul> <p>Thanks in advance for your answers.</p> <p><br>Aymen</p> <p>[/quote]</p> <p>For Q1: in fact, by eyeballing, I found that features &quot;C3&quot;, &quot;C4&quot;, &quot;C12&quot;, &quot;C16&quot;, &quot;C21&quot; and &quot;C24&quot; had missing values at the same time, and so I put them in one group. Is that not scientific enough? OK, it is not hard to turn the intuition into a scientific approach. E.g., you can define a unified distance between two features as cardinality(f1 x f2) / max(cardinality(f1), cardinality(f2)), and then do clustering. The basic approach to avoid further (second-order) collinearity is to not build joint features among collinear features.</p> <p>For Q2, please look at page 11 of <a href="http://people.csail.mit.edu/romer/papers/TISTRespPredAds.pdf">Oliver's paper</a>.</p> <p>For Q3: yes.</p> <p>The document is attached here.</p> <p>The code can be found at <a href="https://github.com/songgc/display-advertising-challenge">github</a>.</p>
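<p>A hedged sketch of the &quot;unified distance&quot; from the answer to Q1, as I read it: count the distinct (f1, f2) value pairs and divide by the larger of the two individual cardinalities. The minimum is 1, reached when one feature (almost) determines the other, so small values flag candidates for the same group. The sample file name below is an assumption.</p> <pre><code># Sketch of the "unified distance" between two categorical features.
import pandas as pd

def unified_distance(df, f1, f2):
    joint = df[[f1, f2]].drop_duplicates().shape[0]
    return joint / float(max(df[f1].nunique(), df[f2].nunique()))

df = pd.read_csv("train_sample.csv")      # assumed sample of the data
cats = ["C3", "C4", "C12", "C16", "C21", "C24"]
for i, a in enumerate(cats):
    for b in cats[i + 1:]:
        print(a, b, round(unified_distance(df, a, b), 2))
</code></pre>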
Display Advertising Challenge
Required model documentation and code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Display Advertising Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello, a team declined the prize so for me to get it I must submit my solution.</p> <p><a href="https://github.com/juliandewit/kaggle_criteo/blob/master/ModelDocumentation.pdf" target="_blank">Here is the model documentation</a></p> <p><a href="https://github.com/juliandewit/kaggle_criteo" target="_blank">Here is the code repository</a></p> <p>Note that the neural network was home-grown and still work-in-progress. I &nbsp;fixed bugs and added features as needed during the competition. It's also written in C# which is rather... &quot;exotic&quot; here I think :)</p> <p>Any feedback and tips are welcome.</p> <p>If it&nbsp;was that simple you would see more neural network solutions in this competition.<br>You can read the paper why some modifications were necessary.</p> <ol> <li>Caffe does not support sparsity. Some features had 100000+ categories.<br>My net had 250000 inputs. With dense&nbsp;matrices this won't work speedwise and memorywise.&nbsp;This was the reason why you did not see&nbsp;many neural nets in this competition</li> <li>I like to build stuff myself :)</li> </ol>
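<p>The sparsity point deserves a small illustration (in Python rather than the author's C#): with roughly 250,000 mostly-zero inputs per example, the first-layer product only needs to touch the few weight rows whose inputs are non-zero, which a sparse matrix handles cheaply while a dense one-hot matrix does not. The sizes follow the post; the code itself is only a generic sketch, not the author's network.</p> <pre><code># Python illustration of sparse first-layer input; not the author's C# code.
import numpy as np
import scipy.sparse as sp

n_inputs, n_hidden, batch = 250000, 64, 32
rng = np.random.RandomState(0)
W1 = rng.normal(scale=0.01, size=(n_inputs, n_hidden)).astype(np.float32)

rows, cols = [], []
for i in range(batch):                    # ~40 active categories per row
    cols.extend(rng.choice(n_inputs, size=40, replace=False))
    rows.extend([i] * 40)
x = sp.csr_matrix((np.ones(len(rows), dtype=np.float32), (rows, cols)),
                  shape=(batch, n_inputs))

hidden = x.dot(W1)    # (32, 64); a dense one-hot input would be 32 x 250000
</code></pre>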
Don't Overfit!
Lessons Learned
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Don't Overfit! <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone. Thanks, Phil, for holding such an interesting competition. Congratulations to all the winners who didn't overfit! It was the best contest I have ever participated in. I think everyone who participated in this competition has won. Thanks to the competitors who shared their methods in the forum.</p> <p>I learned some lessons in this competition:</p> <ol> <li>I now understand that the prize is not the only thing competitors try for. </li><li>For the first time, I used R. </li><li>I got more familiar with NN, SVM, LR and SVM-RFE (I had used them as black boxes before). </li><li>It was also a great feature selection challenge, and most of the feature selection algorithms failed in the face of overfitting. There is no good general feature selection algorithm that can tackle such problems. </li><li>I shouldn't trust cross-validation on the train set to avoid overfitting. For example, I used an RFE method for feature selection and got an AUC of 0.997-0.999 on the train set using 5-fold CV in several runs with different random seeds, but these variables didn't have an AUC better than 0.9 on the test set. </li><li>Tim showed us that sometimes, with a low number of submissions, we can perform well if we think better. </li><li>Good competitions teach hidden knowledge which you can't even learn in college or read in papers. </li></ol>
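<p>Lesson 5 is the classic feature-selection leak, and the mechanism is easy to demonstrate: if RFE sees all the training rows before cross-validation, the CV AUC looks far better than it should; keeping the selection inside the CV pipeline gives an honest estimate. The sketch below uses synthetic data of roughly the competition's shape, not the actual dataset.</p> <pre><code># Sketch of lesson 5 on synthetic data: honest vs leaky feature selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=250, n_features=200,
                           n_informative=5, random_state=0)

# Honest: feature selection is refit inside every CV fold.
pipe = Pipeline([
    ("select", RFE(LogisticRegression(max_iter=1000), n_features_to_select=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
print(cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())

# Leaky: selecting on all rows first, then cross-validating.
mask = RFE(LogisticRegression(max_iter=1000),
           n_features_to_select=20).fit(X, y).support_
print(cross_val_score(LogisticRegression(max_iter=1000),
                      X[:, mask], y, cv=5, scoring="roc_auc").mean())
</code></pre>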
Draper Satellite Image Chronology
1st place - How to win the competition if you know nothing about image processing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Draper Satellite Image Chronology <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Dmitry Grishin;125485]</p> <p>Vladimir, can you share day A compiled images for all 4 locations please? Do they all have same rotation within one day or mostly at least?</p> <p>[/quote]</p> <p>Stitching to the background (background = satellite images from Google Maps).</p> <p><strong>Location 1:</strong> Point Loma (dayA = 5th day after solving) + San Diego (dayC = 5th day)</p> <p><strong>Location 2:</strong> Landfill + Sycamore Estates (both dayA = 2nd day)</p> <p><strong>Location 3 (LA):</strong> Port + Carson (3rd day)</p> <p><strong>Location 4:</strong> La Jolla (dayA = 1st day, dayB = 2nd day; see the post above)</p> <p>Landfill didn't stitch properly because the landfill on the aerial images is little bit different than the landfill on the satellite images. Maybe if I used that image from Draper's web as a background it would be a better fit.</p> <p>Port have a lot of water around + the arrangement of the containers change every day, so the stitching is epic fail. I chose dayC because it produced best results.</p> <p>For this competition I trained the neural network (the one inside my skull). </p> <p>It took about 2 weeks of staring at the images until the magic happened.</p> <p>Here are the main ideas.</p> <h2>High level algorithm:</h2> <ol> <li><p>Split the data by location</p></li> <li><p>For each location match together the pictures that were taken in the same day</p></li> <li><p>Solve each location using any clues from any picture set within that location</p></li> </ol> <h2>Ad 1:</h2> <p>We know that the pictures were taken in the southern California and on the pictures we see ports, airports, landfill, golf field, Point Loma Stadium ...</p> <p>So using Google Maps it is relatively easy to find all the locations:</p> <p>Train set: Carson (LA County), Port of Long Beach (LA County)</p> <p>Test set: San Diego (near downtown), Point Loma (W peninsula of SD), La Jolla (N of SD), Sycamore Landfill (E of San Diego/Santee), Sycamore Estates (N of landfill)</p> <p>It is not necessary to point the exact locations of the set, discovering the locations that were taken within one flight is sufficient. (I think that Sycamore Landfill + Sycamore Estates were taken within one flight, similarly San Diego + Point Loma and Carson + Long Beach, so there might be only 4 locations if we merge these together).</p> <h2>Ad 2:</h2> <p>The idea is that the plane flew into the area, took all the pictures within a couple of minutes and then flew away. Thus if the pictures were taken in the morning, all the shadows should point to the left, if it was evening then they should point to the right, and during the noon to the top. If it was cloudy during the flight over that location we should see only weak shadows (there was one such a day over the whole San Diego area, that was very easy to spot). 
If we look at the map with marked locations, we see that the plane followed the straight lines when it took the pictures, so the neighboring images within that line should have very very similar orientation and zoom and in many cases also some overlap.</p> <p>Overlaps and shadows were most important features in this step (shadows were used mainly in cities because they very easier to evaluate by hand + they could bind the areas that didn't overlap; overlaps were the most reliable, but it took a little bit longer to match the pictures by hand using overlap than using shadows, they were mainly used in forests when the shadows very hardly visible, and when the two shadows seemed very similar).</p> <p>In some cases we could also use secondary features like presence/absense of cars at some location (indicator of the weekend), presense of clouds, presense of puddles after a rain. In the train set the zoom was an important feature.</p> <h2>Ad 3:</h2> <p>In this step we could use any clue from any set from the location to solve the whole location.</p> <p>Building sites, tracks in the mud, piles of sand were the most important and reliable features.</p> <p>Presence/absence of the cars in from of hypermarkets (weekend), presence of the cars in front of church (Sunday), presence of puddles after a rain (Point Loma) were also very important.</p> <p>Parking patterns of personal cars were not very reliable, because people use them every day and they may have a reserved or preffered parking place and the presence of the car at the very same place might not be a strong indicator of the closeness of that days. However the parking patterns of the bigger trucks and boats were pretty reliable.</p> <p>[quote=the1owl;125397]</p> <p>Can you send me a copy of your Neural Network :) ?</p> <p>[/quote]</p> <p>I wouldn't send a copy of my brain to anyone!</p> <p>But this competition clearly showed that humans are superior to machines for this kind of task - for now.</p> <p>[quote=the1owl;125397]</p> <p>How do we know they were all taken in southern California (before solutions were posted)? </p> <p>[/quote]</p> <p>As Rich already said, it was in the data description.</p> <p>We even knew how the landfill looks like:</p> <p><a href="http://www.draper.com/news/cracking-code-satellite-data">http://www.draper.com/news/cracking-code-satellite-data</a></p> <p>[quote=the1owl;125397]</p> <p>Did you use any automation or just browse through Google maps to find locations? </p> <p>[/quote]</p> <p>For the Step 1 I took advantage of how human brain works (associative memory). I first browsed throught all images in test set and tried to describe every image (e.g. pentagon building, route layout that resembles trident, rounded rectangles etc.) and kept those descriptions in the LibreOffice sheet. Then when I discovered a new location I just wandered around on Google Maps and the associations started to emerge. I then looked in the LibreOffice sheet for the reference so I don't have to browse through all image sets over and over again.</p> <p>The easiest locations to find was Port of Long Beach and Port of San Diego - I just followed the coastline. Carson was near the Long Beach and Point Loma near San Diego (plus it has also this eye-catching Point Loma Stadium). The others were a little bit harder, but when you look carefully you see that all sets in test set have similar light conditions (e.g. one day with practically no shadow). So it was clear that they should be somewhere near San Diego. 
La Jolla was detectable by its golf field and the landfill is such a big object that it should catch your eyes sooner or later. The last one (Sycamore Estates) was the hardest, I discovered it by following the power lines from landfill area.</p> <p>After my first submission (score ~0.95) I used stitching to the background and visual inspection, but it turned out that I had it correct on the first attempt, I just made a mistake for Step3 in San Diego and Point Loma.</p> <p>I don't know if it comply with the Google rules, because they gave me 24h ban for auto-downloading their maps and after that I had to increase the sleep period significantly in order to not receive it again (or switch to Bing Maps).</p> <p>Thankfully this is only optional step. Moreover it didn't work well over the landfill and other locations that looks differently on aerial pictures than on satellite images of Google. It also had problems with the cloudy pictures.</p> <p>Alternative to this is intra-set stitching like you did here:</p> <p><a href="https://www.kaggle.com/the1owl/draper-satellite-image-chronology/stitch-and-predict">https://www.kaggle.com/the1owl/draper-satellite-image-chronology/stitch-and-predict</a></p> <p>and then measuring relative rotation, zoom and shift; the problem is that there are some sets that do not follow the common pattern, so you need to do outlier detection and then pass these sets to human for the fix. Plus there is still problem with cloudy pictures.</p> <p>Or you can do inter-set stitching, but this time you need to define the neighbors (otherwise it would be O(n^2)), but not all neighbors overlaps, some only have the same rotation/zoom, but shows very little or no overlap.</p> <p>So the best approach would be to do combination of intra and inter-set stitching with predefined neighbor relationships backed-up with the outlier detection so that human can fix possible problems and this is so difficult that given the size of the dataset you would prefer to do it by hand.</p> <p>By hand it wasn't that hard, once I trained &quot;the neural network&quot; it went pretty fast. There was one day with practically no shadow, one day with the shadow to the left, one with the shadow to the right and this held true for practically every image set. The last two days could be usually separated in the sets that had an asphalt road (when you looked carefully one shadow was usually weaker or was oriented a little bit more to the left than the other shadow). And for the other sets I matched them to the already resolved sets by other features like overlap/orientation or in one case (landfill) I had to use &quot;presence of the cars&quot; feature to distinguish between two days.</p>
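<p>The stitching itself was only an optional sanity check for the winner, but for anyone who wants to automate that part, a common recipe is keypoint matching plus a RANSAC homography, sketched here with OpenCV. The file names are placeholders and this is a generic illustration, not the approach actually used in the winning solution.</p> <pre><code># Generic OpenCV sketch of registering an image to a reference basemap;
# file names are placeholders, not competition data paths.
import cv2
import numpy as np

ref = cv2.imread("basemap_tile.png", cv2.IMREAD_GRAYSCALE)
img = cv2.imread("set107_day_a.jpeg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img, None)
kp2, des2 = orb.detectAndCompute(ref, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warping onto the reference frame lets you compare relative
# rotation/zoom/shift between sets, or eyeball the alignment.
warped = cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))
</code></pre>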
Draper Satellite Image Chronology
Pure ML solution 0.768 Public/ 0.692 Private
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Draper Satellite Image Chronology <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Pure ML (blind) analysis getting 0.76786 on the Public and 0.69209 on the Private Leaderboard. I&#8217;m convinced that with more training data, this methodology could reach a score of .8x or even .9x.</p> <p><strong>FEATURE GENERATION</strong> (most of the tools are available at <a href="https://www.kaggle.com/vicensgaitan/draper-satellite-image-chronology/image-registration-the-r-way">https://www.kaggle.com/vicensgaitan/draper-satellite-image-chronology/image-registration-the-r-way</a> )</p> <p><strong>a) Key point detection:</strong> For each image, detect keypoints using Harris. <br> Generate descriptors: a 30x30 (downsampled to 9x9) oriented patch for each keypoint.</p> <p><strong>b) Intra-set registering:</strong> For every pair of images within each set, identify common points using RANSAC and fit a homography. Keep the values of the transformation and the patches for the identified inliers.</p> <p><strong>c) Inter-set registering:</strong> Sampling from the full database of descriptors, use knn to find candidate neighbour sets, and then register every possible image from one set against every image of the second set. Look for &#8220;almost perfect&#8221; matches, taking a combination of the number of identified common keypoints and the rmse in the overlapping region as the matching value.</p> <p><strong>d) Set Clustering:</strong> Using the previous matching information, build a graph with sets as nodes and edges between neighbouring sets, weighted proportionally to the matching value. The connected components of this graph give the set clusters without the need to explicitly georeference the images.</p> <p><img src="https://github.com/gaitanv/Draper/blob/master/Clusters.png?raw=true" alt="enter image description here" title></p> <p><strong>PATCH LEVEL MODEL (Siamese gradient boosting)</strong> The model is trained over pairs of common keypoints in images of the same set, trying to predict the number of days between one image and the other (with sign). The features used are:</p> <p><em>Patch im1 (9x9), Patch im2 (9x9), coefficients of the homography (h1..h9), number of inliers between both images, and rmse in the overlapping region</em></p> <p>The model is trained with XGBoost on a reg:linear objective using 4-fold cross-validation (ensuring that images in the same cluster belong to the same fold). Surprisingly, this model is able to discriminate the time arrow quite well.</p> <p><img src="https://github.com/gaitanv/Draper/blob/master/holdout.png?raw=true" alt="enter image description here" title></p> <p><strong>IMAGE LEVEL MODEL</strong></p> <p>Average the contribution of all patches in an image. Patches with an absolute prediction below a certain threshold level <em>alpha</em> are discarded. If the Patch Level Model were &#8220;perfect&#8221;, this would give the average difference in days from one image to the rest of the images in the set, so the expected values for an ordered set would be -2.5, -1.5, 0, 1.5, 2.5 respectively. In practice, the ordering is calculated by sorting this average contribution.</p> <p>Additionally, the average is refined by adding the average of images from overlapping sets (using the graph previously defined), weighted with the edge value times a coefficient <em>beta</em>, and iterating until convergence.</p> <p>The local CV obtained is around 0.83 +/- 0.05.</p> <p>The optimal <em>alpha</em> and <em>beta</em> were adjusted using public leaderboard feedback (the training set is not representative enough of the test set) and, as expected, there was some overfitting and a drop of five positions on the private leaderboard&#8230; :(</p>
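<p>The set-clustering step (d) is easy to express with a graph library: one node per image set, a weighted edge for every pair of sets that registered to each other, and connected components as the location clusters. The <code>matches</code> list and its set IDs below are stand-ins for the real inter-set registration output.</p> <pre><code># Sketch of step (d): connected components over the set-matching graph.
import networkx as nx

# (set_a, set_b, matching_value) triples from inter-set registration
matches = [(101, 207, 35.0), (207, 311, 12.5), (400, 402, 50.0)]

g = nx.Graph()
for a, b, score in matches:
    g.add_edge(a, b, weight=score)

clusters = list(nx.connected_components(g))
print(clusters)        # e.g. [{101, 207, 311}, {400, 402}]
</code></pre>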
Driver Telematics Analysis
Trip Matching - our methods
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Driver Telematics Analysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Update - here's an example of the cluster mapping - this is driver 1 - a cluster of 141 trips, they don't all match each other but you can walk a path from one to the other. &nbsp;If you start with the most connected trip you can rotate and align every other trip back that that trip. &nbsp;In this way I was able to create a customized map of the driver's routes. &nbsp;Note the map spans 30 kilometers and the method I used for matching matched on sub portions. So for some of the really long trips (15 km) small initial errors in alignment add up - but it's good for the short trips, and even on the longer ones you can understand the layout. &nbsp;</p> <p>Cardal - thanks for posting! &nbsp;Yeah the trip matches contained a lot of data. That now answers a question I had, was there any other information I missed, now I can sleep at night!</p> <p>2 questions for you - did you try using trip matching to identify the mixed in trips? &nbsp;If you submit you matches directly without using any other classifier (I.e. using velocity/acceleration etc), what do you score? &nbsp;I'm really curious to compare. &nbsp;</p> <p>Congratulations, that was very impressive.&nbsp;</p> <p>I'm curious how the other teams in the top 5 did it. &nbsp;We received a pretty good boost by matching trips. &nbsp;I did several different methods, I think this was the order I did them in, but I'm sure I'm forgetting some dead ends.</p> <p>Approach 0:</p> <p>Align the start and end to the x axis.&nbsp;</p> <p>Approach 1:</p> <p>Simplify trips using RDP (Ramer&#8211;Douglas&#8211;Peucker), then create just a list of the distances between turns over a threshold along with a second list giving the size of the turn - ignore right or left. &nbsp;This is a great first pass - very quick and you can weed out a lot of trips this way &nbsp;I would compare forward and backward (this solved the starting from 0,0 every time bit) just looking at distance between turns, then compare the size of the turn within a threshold (this ignores mirroring) - all together that took care of rotation as well. &nbsp;We used&nbsp;42 sets of tolerances for RDP and angles - at one point I think I was up to 180 tolerance / angle pairs, but found most were redundant and wasted time. &nbsp;</p> <p>By setting a minimum number of required turns to match we could generate quickly a set of matches. &nbsp;Testing revealed this worked very well. &nbsp;I can't recall the jump, but when I first teamed up with Ben we went from 11th to 3rd using just an override setting matches to 1.0's &nbsp;- the most basic way of using matches.</p> <p>Approach 2:</p> <p>Widen the matching angle to find more matches, then use Dynamic Time Warping to score the trips. &nbsp;This worked OK, but not really any better.</p> <p>Approach 3:</p> <p>Interpolate the trip into points spaced at an even interval, then trip DTW, this just couldn't score better than approach 1.</p> <p>Approach 4:</p> <p>Align the matched sub-trips based on the locations of the turns, and use Singular Value Decomposition to align the sub-trips and get the RSME error for the points. 
&nbsp;This worked very well.</p> <p>Approach 5:</p> <p>Keep track of who matches to whom - an adjacency tree - and use it to create clusters of matched trips. &nbsp;Try various features based on the size of the cluster a trip is in; I even did a page rank style analysis of the matched trip network...maybe I wasn't smart enough but that feature didn't help (though we didn't start using it until the last 2 days or so, not much time to adjust it).</p> <p>Approach 6:</p> <p>Use RDP turn / angle approximations and SVD for alignment, score with RMSE for the sub-trip, then align the full trip and measure the distance between the start and end of the trip.</p> <p>Approach 6a:</p> <p>If we know the adjacency tree to walk from one trip to another we can rotate and align all the trips in a cluster to each other! &nbsp;Then by viewing the map of the driver's matches we could adjust the parameters for RMSE and the minimum distance between the start and end of a trip (to account for trimming). If the minimum distance between the start or end of the aligned trip is greater than a certain value, ignore the match. &nbsp;It's a really elegant idea (if I do say so myself) and it failed [edit, turns out it failed when used as a feature given to our ML classifier, however I've done a few more submissions now that the contest is over, with the matches from this approach boosting the classifier's results, and it turns out this would have worked]....in the end just using an RMSE score for the sub-trip alignment found by SVD worked best.</p> <p>Take away - RDP to simplify the trip, break up into distances between turns (rounded to the nearest 100 meters), record turn sizes (not directions), look for matches forward, backward and with some allowance for slightly larger / smaller turns, then align using SVD, compare the alignment, keep track of how many matches a trip has, and in the end feed that feature into a ML algorithm - worked well.</p> <p>The speed of the basic method made it possible to check for matches between all the drivers. &nbsp;We started with the bottom 50,000 (per our classifier), but in the end I ran it on all trips paired against each other; on an i7 quad core it would take about 20 hours.</p> <p>I would later move on to EC2 clusters which made it possible to iterate over small changes in the algorithm and have the results in 1 to 2 hours for a basic version - last night I had 16 machines running at one time, eleven&nbsp;- 32 core machines and five - 36 core machines - and managed to run two different matching algorithms in about 3.5 hours total - one of which helped us to finally climb over 97%.</p> <p>Derek - we used the results from trip matching as features for the machine learning algorithm. &nbsp;We tried various features, but in the end settled on the number of times a stretch of 5 turns was matched (distance between turns and turn size) with other trips - this worked well for the trip matching done using the approach 1 I described at the top of this thread. &nbsp;For the last approach we used the RMSE score to count the number of sub-trip matches and used these as additional features. &nbsp;</p> <p>After the competition ended I found that some of our later matching algorithms were able to find some matches that the others missed, and if we did a post-training boost on those matches it would improve the score &nbsp;(just a simple submission *= 1.1 if a match was found).</p>
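<p>To make the turn-signature matching of Approach 1 concrete, here is a minimal illustrative sketch (not the code we actually ran; the function names and tolerances are made up). It assumes the trip has already been simplified with RDP, builds the list of distances between large turns (rounded to the nearest 100 meters) together with the unsigned turn sizes, and compares two trips forward and backward within tolerances:</p> <pre><code>import numpy as np

def turn_signature(points, min_turn_deg=30.0):
    """points: (N, 2) array of an RDP-simplified trip.
    Returns (distances between kept turns rounded to 100 m, unsigned turn angles in degrees)."""
    pts = np.asarray(points, dtype=float)
    seg = np.diff(pts, axis=0)                                 # vectors between consecutive vertices
    headings = np.degrees(np.arctan2(seg[:, 1], seg[:, 0]))
    turns = (np.diff(headings) + 180.0) % 360.0 - 180.0        # wrap heading changes to (-180, 180]
    keep = np.abs(turns) >= min_turn_deg                       # only turns over the threshold
    cum = np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))[:-1]       # distance from start to each interior vertex
    dists = np.diff(np.concatenate(([0.0], cum[keep])))
    return np.round(dists / 100.0) * 100.0, np.abs(turns[keep])

def signatures_match(sig_a, sig_b, dist_tol=100.0, angle_tol=15.0):
    """Angles are unsigned, so mirrored trips still match."""
    d_a, a_a = sig_a
    d_b, a_b = sig_b
    if len(d_a) != len(d_b) or len(d_a) == 0:
        return False
    return bool(np.all(dist_tol >= np.abs(d_a - d_b)) and np.all(angle_tol >= np.abs(a_a - a_b)))

def trips_match(trip_a, trip_b, min_turns=5, **kw):
    """Compare trip_b forward and reversed, so the common (0, 0) start point does not matter."""
    sig_a = turn_signature(trip_a)
    if min_turns > len(sig_a[0]):
        return False
    trip_b = np.asarray(trip_b, dtype=float)
    return any(signatures_match(sig_a, turn_signature(t), **kw) for t in (trip_b, trip_b[::-1]))
</code></pre> <p>The real pipeline also has to cope with GPS jitter, trimming at the trip ends and the many tolerance pairs mentioned above; this sketch only shows the shape of the signature comparison.</p>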
Dstl Satellite Imagery Feature Detection
0.51 public LB approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Dstl Satellite Imagery Feature Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Preprocessing</strong></p> <p>All images were rescaled to the RGB image size and concatenated into 20-channel images.</p> <p><strong>ANNs</strong></p> <p>I used two UNET-like ANNs with 160x160 input and a binary cross-entropy objective. The first one (7c) learned to predict simultaneously: Buildings, Misc Manmade structures, Road, Track, Trees, Crops, Waterway + Standing water as one class. The second ANN (2c) learned to predict simultaneously: Vehicle Large, Vehicle Small.</p> <p><strong>Training</strong></p> <p>For ANN 7c I made a corresponding autoencoder and trained it on random patches from all images. Then I transferred the encoder part weights to ANN 7c, fixed the weights, and trained the net on randomly rotated patches. Then I unfixed the weights and trained the net for some additional time on randomly rotated patches. I started training ANN 2c on randomly rotated patches that contained a vehicle with probability ~0.5. Then I trained ANN 2c on randomly rotated patches for some additional time.</p> <p><strong>Predicting</strong></p> <p>To construct the prediction for an entire image I used only the central part of the ANN predictions. The coverage was 4x (varied offsets, rotations). To predict Waterway and Standing water I used the ANN predictions and the CCCI index from the awesome kernel by Vladimir Osin. To separate Waterway from Standing water I used not just the area, but a slightly more complex score which includes shape properties (a Waterway is more likely to be elongated).</p>
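<p>A minimal Keras sketch of the encoder-transfer step described above (an illustration, not the author's code: layer names, shapes and the trivial decoder are made up, and the Keras 2 channels-last convention is used). Train an autoencoder on random patches, save its weights, load them into the segmentation network by layer name, and freeze the shared encoder layers for the first training stage:</p> <pre><code>from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D

def encoder(x):
    # shared layers get fixed names so their weights can be matched by name later
    x = Conv2D(32, (3, 3), activation='relu', padding='same', name='enc_conv1')(x)
    x = MaxPooling2D((2, 2), name='enc_pool1')(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='enc_conv2')(x)
    return x

# 1) autoencoder trained to reconstruct random 160x160x20 patches
inp = Input((160, 160, 20))
code = encoder(inp)
rec = UpSampling2D((2, 2))(code)
rec = Conv2D(20, (3, 3), activation='sigmoid', padding='same')(rec)
autoencoder = Model(inp, rec)
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(patches, patches, ...)
autoencoder.save_weights('ae_weights.h5')

# 2) segmentation net (stand-in for the 7-class ANN) reusing the same encoder layer names
inp2 = Input((160, 160, 20))
feat = encoder(inp2)
up = UpSampling2D((2, 2))(feat)
out = Conv2D(7, (1, 1), activation='sigmoid')(up)     # 7 classes predicted jointly
segnet = Model(inp2, out)
segnet.load_weights('ae_weights.h5', by_name=True)    # copies only the matching encoder weights

# 3) freeze ("fix") the transferred encoder, train, then unfreeze and fine-tune
for layer in segnet.layers:
    layer.trainable = not layer.name.startswith('enc_')
segnet.compile(optimizer='adam', loss='binary_crossentropy')
</code></pre>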
Dstl Satellite Imagery Feature Detection
0.51276 Public LB Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Dstl Satellite Imagery Feature Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Currently preparing the code.</p> <p>I made a video with segmentation examples:</p> <p><strong><a href="https://www.youtube.com/watch?v=rpp7ZhGb1IQ">https://www.youtube.com/watch?v=rpp7ZhGb1IQ</a></strong></p> <p>Yes, I used it (Dropout(0.1)) in my latest models. I can't say for sure if it was good or not, but my score became better.</p> <p>I couldn't use big batches because I chose too big an image as input. My batch was 22 - bigger batches didn't fit in memory. If I were starting from scratch I'd probably use 112x112 input.</p> <p>I'm afraid of float16 )</p> <p>Good question. If I made a straightforward random sample from the image, it could lead to bad results due to class imbalances. Most batches would contain only a small amount of the predicted class. What is worse, in some cases a batch won't contain the predicted class at all. Probably using pregenerated positive cases in union with a random sample from the images would work better than my solution. But I think my solution is ok, due to the large variance of samples. The other good things about my approach: </p> <p>1) I generated samples one time with balanced classes and can use them "as is" just with the random() function.</p> <p>2) Reading all train images into memory (to randomly sample from them) requires ~32 GB of RAM. My current solution doesn't require a big amount of memory, but it requires an SSD because of heavy reading from disk.</p> <p>To solve this problem I used: Python + Keras 1.0.8 + Theano under Windows 10. As hardware I had an NVIDIA GTX 980 8 GB. I used only one GPU, which worked almost 24h a day during the competition. After joining the team I used 2 additional GPUs (TITAN 12GB). In short my solution can be described like this:</p> <ol> <li>My main CNN is a modified UNET with input shape (20, 224, 224). It has higher depth, with added batch normalization and dropout layers. The Jaccard coefficient is used as the loss function. I used grid search with different parameters and chose the best model with the highest validation score. </li> <li>I created separate models for each class (so 10 independent models) and tuned them independently. I actually think that I lost some information about class interaction this way, but it was much easier to tune the models. I partially fixed it with a postprocessing step. </li> <li>Each model is actually a set of K different weights obtained with KFold. For most classes I used 5 KFold. For class 7 - 2 KFold. For class 9 – 4 KFold. I split the train set by image ID (there were only 25 images). I made the split once by hand, independently for each class. Each fold contains the same number of images where the class exists, so it’s actually a stratified split.</li> </ol> <h2>Data preprocessing</h2> <p>Each “image object” (with a particular img_id) had a set of images made with different wavelengths, in total 20 channels. I resized all images to 3360x3360 pixels (3360 = 15*224 and 3360 is closest to the panchromatic image resolution) and joined them along the channel axis. So at the end I had a tensor of the following shape: 20x3360x3360. 
I created the pixel masks from given polygons with same size: 3360x3360 pixels.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6037/Tensor_example_700px.png" alt="Set of images example" title=""></p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6029/Mask_example.png" alt="Mask example" title=""></p> <p>I decided to use UNET with input shape 20x224x224. I choose 224 for 2 reasons:</p> <ol> <li>224 = 2*2*2*2*2*7 – I have 5 MaxPooling layers in UNET and on lowest layer has 7x7 pixel size. Which in my experience the best.</li> <li>The same size I could use with pretrained VGG16 or ResNet.</li> </ol> <p>The next step was made independently for each class:</p> <ol> <li>Split input images on 15*15 parts forming tensors of size 20x224x224. </li> <li>Because of class imbalances we need to increase number of cases where mask exists. For some cases like class 10 (with vehicles) I have around 99% of empty masks after first step. So I added more cases using sliding window around non-zero mask points.</li> </ol> <p>Final statistics for classes:</p> <pre><code>Number of tests for class 1: 14010. Empty files: 4431 Percent: 31.62% Number of tests for class 2: 9835. Empty files: 3997 Percent: 40.64% Number of tests for class 3: 20548. Empty files: 5238 Percent: 25.49% Number of tests for class 4: 12411. Empty files: 2976 Percent: 23.97% Number of tests for class 5: 10161. Empty files: 294 Percent: 2.89% Number of tests for class 6: 10716. Empty files: 3720 Percent: 34.71% Number of tests for class 7: 10328. Empty files: 5504 Percent: 53.29% Number of tests for class 8: 11349. Empty files: 5469 Percent: 48.18% Number of tests for class 9: 12035. Empty files: 5574 Percent: 46.31% Number of tests for class 10: 16173. Empty files: 5336 Percent: 32.99% </code></pre> <p>UNET requires the distribution to be close to normal. Ranges for different channels was: P_3, P_P, P_M – 2048, P_A – 16384. At first I divide every channel to its maximum possible value. I’ve seen on forum that some people removed some values from the both ends of histogram but I didn’t do it.</p> <p>Then I needed to calculate mean and stdev. 
I calculated it for each channel independently, but at the end used the same mean and stdev for whole tensor: mean = 0.219613 stdev = 0.110741</p> <h2>Creating models</h2> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6038/model_700px.png" alt="Modified UNET" title=""></p> <pre><code>____________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================== input_1 (InputLayer) (20, 224, 224) 0 ____________________________________________________________________________________ conv2d_1 (Convolution2D) (32, 224, 224) 5792 input_1 ____________________________________________________________________________________ batchnorm_1 (BatchNormal (32, 224, 224) 64 conv2d_1 ____________________________________________________________________________________ activation_1 (Activation) (32, 224, 224) 0 batchnorm_1 ____________________________________________________________________________________ conv2d_2 (Convolution2D) (32, 224, 224) 9248 activation_1 ____________________________________________________________________________________ batchnorm_2 (BatchNormal (32, 224, 224) 64 conv2d_2 ____________________________________________________________________________________ activation_2 (Activation) (32, 224, 224) 0 batchnorm_2 ____________________________________________________________________________________ maxpool2d_1 (MaxPooling2D)(32, 112, 112) 0 activation_2 ____________________________________________________________________________________ conv2d_3 (Convolution2D) (64, 112, 112) 18496 maxpool2d_1 ____________________________________________________________________________________ batchnorm_3 (BatchNormal (64, 112, 112) 128 conv2d_3 ____________________________________________________________________________________ activation_3 (Activation) (64, 112, 112) 0 batchnorm_3 ____________________________________________________________________________________ conv2d_4 (Convolution2D) (64, 112, 112) 36928 activation_3 ____________________________________________________________________________________ batchnorm_4 (BatchNormal (64, 112, 112) 128 conv2d_4 ____________________________________________________________________________________ activation_4 (Activation) (64, 112, 112) 0 batchnorm_4 ____________________________________________________________________________________ maxpool2d_2 (MaxPooling2D)(64, 56, 56) 0 activation_4 ____________________________________________________________________________________ conv2d_5 (Convolution2D) (128, 56, 56) 73856 maxpool2d_2 ____________________________________________________________________________________ batchnorm_5 (BatchNormal (128, 56, 56) 256 conv2d_5 ____________________________________________________________________________________ activation_5 (Activation) (128, 56, 56) 0 batchnorm_5 ____________________________________________________________________________________ conv2d_6 (Convolution2D) (128, 56, 56) 147584 activation_5 ____________________________________________________________________________________ batchnorm_6 (BatchNormal (128, 56, 56) 256 conv2d_6 ____________________________________________________________________________________ activation_6 (Activation) (128, 56, 56) 0 batchnorm_6 ____________________________________________________________________________________ maxpool2d_3 (MaxPooling2D)(128, 28, 28) 0 activation_6 
____________________________________________________________________________________ conv2d_7 (Convolution2D) (256, 28, 28) 295168 maxpool2d_3 ____________________________________________________________________________________ batchnorm_7 (BatchNormal (256, 28, 28) 512 conv2d_7 ____________________________________________________________________________________ activation_7 (Activation) (256, 28, 28) 0 batchnorm_7 ____________________________________________________________________________________ conv2d_8 (Convolution2D) (256, 28, 28) 590080 activation_7 ____________________________________________________________________________________ batchnorm_8 (BatchNormal (256, 28, 28) 512 conv2d_8 ____________________________________________________________________________________ activation_8 (Activation) (256, 28, 28) 0 batchnorm_8 ____________________________________________________________________________________ maxpool2d_4 (MaxPooling2D)(256, 14, 14) 0 activation_8 ____________________________________________________________________________________ conv2d_9 (Convolution2D) (512, 14, 14) 1180160 maxpool2d_4 ____________________________________________________________________________________ batchnorm_9 (BatchNormal (512, 14, 14) 1024 conv2d_9 ____________________________________________________________________________________ activation_9 (Activation) (512, 14, 14) 0 batchnorm_9 ____________________________________________________________________________________ conv2d_10 (Convolution2D) (512, 14, 14) 2359808 activation_9 ____________________________________________________________________________________ batchnorm_10 (BatchNorma (512, 14, 14) 1024 conv2d_10 ____________________________________________________________________________________ activation_10 (Activation)(512, 14, 14) 0 batchnorm_10 ____________________________________________________________________________________ maxpool2d_5 (MaxPooling2D)(512, 7, 7) 0 activation_10 ____________________________________________________________________________________ conv2d_11 (Convolution2D) (1024, 7, 7) 4719616 maxpool2d_5 ____________________________________________________________________________________ batchnorm_11 (BatchNorma (1024, 7, 7) 2048 conv2d_11 ____________________________________________________________________________________ activation_11 (Activation)(1024, 7, 7) 0 batchnorm_11 ____________________________________________________________________________________ conv2d_12 (Convolution2D) (1024, 7, 7) 9438208 activation_11 ____________________________________________________________________________________ batchnorm_12 (BatchNorma (1024, 7, 7) 2048 conv2d_12 ____________________________________________________________________________________ activation_12 (Activation)(1024, 7, 7) 0 batchnorm_12 ____________________________________________________________________________________ upsamp2d_1 (UpSampling2D) (1024, 14, 14) 0 activation_12 ____________________________________________________________________________________ merge_1 (Merge) (1536, 14, 14) 0 upsamp2d_1 activation_10 ____________________________________________________________________________________ conv2d_13 (Convolution2D) (512, 14, 14) 7078400 merge_1 ____________________________________________________________________________________ batchnorm_13 (BatchNorma (512, 14, 14) 1024 conv2d_13 ____________________________________________________________________________________ activation_13 (Activation)(512, 14, 14) 0 batchnorm_13 
____________________________________________________________________________________ conv2d_14 (Convolution2D) (512, 14, 14) 2359808 activation_13 ____________________________________________________________________________________ batchnorm_14 (BatchNorma (512, 14, 14) 1024 conv2d_14 ____________________________________________________________________________________ activation_14 (Activation)(512, 14, 14) 0 batchnorm_14 ____________________________________________________________________________________ upsamp2d_2 (UpSampling2D) (512, 28, 28) 0 activation_14 ____________________________________________________________________________________ merge_2 (Merge) (768, 28, 28) 0 upsamp2d_2 activation_8 ____________________________________________________________________________________ conv2d_15 (Convolution2D) (256, 28, 28) 1769728 merge_2 ____________________________________________________________________________________ batchnorm_15 (BatchNorma (256, 28, 28) 512 conv2d_15 ____________________________________________________________________________________ activation_15 (Activation)(256, 28, 28) 0 batchnorm_15 ____________________________________________________________________________________ conv2d_16 (Convolution2D) (256, 28, 28) 590080 activation_15 ____________________________________________________________________________________ batchnorm_16 (BatchNorma (256, 28, 28) 512 conv2d_16 ____________________________________________________________________________________ activation_16 (Activation)(256, 28, 28) 0 batchnorm_16 ____________________________________________________________________________________ upsamp2d_3 (UpSampling2D) (256, 56, 56) 0 activation_16 ____________________________________________________________________________________ merge_3 (Merge) (384, 56, 56) 0 upsamp2d_3 activation_6 ____________________________________________________________________________________ conv2d_17 (Convolution2D) (128, 56, 56) 442496 merge_3 ____________________________________________________________________________________ batchnorm_17 (BatchNorma (128, 56, 56) 256 conv2d_17 ____________________________________________________________________________________ activation_17 (Activation)(128, 56, 56) 0 batchnorm_17 ____________________________________________________________________________________ conv2d_18 (Convolution2D) (128, 56, 56) 147584 activation_17 ____________________________________________________________________________________ batchnorm_18 (BatchNorma (128, 56, 56) 256 conv2d_18 ____________________________________________________________________________________ activation_18 (Activation)(128, 56, 56) 0 batchnorm_18 ____________________________________________________________________________________ upsamp2d_4 (UpSampling2D) (128, 112, 112) 0 activation_18 ____________________________________________________________________________________ merge_4 (Merge) (192, 112, 112) 0 upsamp2d_4 activation_4 ____________________________________________________________________________________ conv2d_19 (Convolution2D) (64, 112, 112) 110656 merge_4 ____________________________________________________________________________________ batchnorm_19 (BatchNorma (64, 112, 112) 128 conv2d_19 ____________________________________________________________________________________ activation_19 (Activation)(64, 112, 112) 0 batchnorm_19 ____________________________________________________________________________________ conv2d_20 (Convolution2D) (64, 112, 112) 36928 activation_19 
____________________________________________________________________________________ batchnorm_20 (BatchNorma (64, 112, 112) 128 conv2d_20 ____________________________________________________________________________________ activation_20 (Activation)(64, 112, 112) 0 batchnorm_20 ____________________________________________________________________________________ upsamp2d_5 (UpSampling2D) (64, 224, 224) 0 activation_20 ____________________________________________________________________________________ merge_5 (Merge) (96, 224, 224) 0 upsamp2d_5 activation_2 ____________________________________________________________________________________ conv2d_21 (Convolution2D) (32, 224, 224) 27680 merge_5 ____________________________________________________________________________________ batchnorm_21 (BatchNorma (32, 224, 224) 64 conv2d_21 ____________________________________________________________________________________ activation_21 (Activation)(32, 224, 224) 0 batchnorm_21 ____________________________________________________________________________________ conv2d_22 (Convolution2D) (32, 224, 224) 9248 activation_21 ____________________________________________________________________________________ batchnorm_22 (BatchNorma (32, 224, 224) 64 conv2d_22 ____________________________________________________________________________________ activation_22 (Activation)(32, 224, 224) 0 batchnorm_23 ____________________________________________________________________________________ conv2d_23 (Convolution2D) (1, 224, 224) 33 activation_22 ____________________________________________________________________________________ batchnorm_23 (BatchNorma (1, 224, 224) 2 conv2d_23 ____________________________________________________________________________________ activation_23 (Activation)(1, 224, 224) 0 batchnorm_23 ==================================================================================== Total params: 31459619 </code></pre> <p>LOSS Function:</p> <pre><code>def jacard_coef(y_true, y_pred): y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (intersection + 1.0) / (K.sum(y_true_f) + K.sum(y_pred_f) - intersection + 1.0) </code></pre> <p>Typical example of training process: Class 1 (Fold 2). Parameters: lr=0.05, optim=SGD, rotation=False, dropout=enabled, UNET version=BatchNorm, patience=8, samples train: 9997, samples valid: 4013</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6031/Training.png" alt="Typical example of training process" title=""></p> <h2>Tuning models and validation</h2> <p>The main problem was that training process is very unstable. Sometimes it can go to local minimum or start predicting as mask full image etc. So most of the time I spend on finding optimal parameters to maximize validation score. It was the most time consuming part. I used grid search with following parameters: </p> <ul> <li>Optimizer (Adam, SGD) </li> <li>LR SGD: (0.05, 0.01, 0.001) </li> <li>LR Adam: (0.01, 0.001, 0.0001) </li> <li>Rotation (enabled, disabled) </li> <li>Type of model: UNET 224x224, UNET with dropout 0.1, UNET with Batch Normalization </li> <li>Number of samples per epoch (Fraction from ½ up to 1)</li> </ul> <p>Due to limitation of computational power I checked only some of parameters. At the end I use mostly Adam optimizer with BatchNormalization version of UNET with rotation enabled. Just tune learning rate. I use loss function value to stop training with early stopping with patience 8-15 epochs. 
After I obtained all 5 fold models I had the predicted segmentation for each of the train images. So I was able to compute the score for the full image set using the same method as on the Kaggle leaderboard. The obtained score was very representative, so increasing the score locally almost always led to an increase of the score on the leaderboard. Also I was able to find the optimal threshold value for the heatmap with this method.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6032/LB-Scores.png" alt="LS vs LB Scores" title=""></p> <h2>Processing the test data</h2> <p>The test set consists of 429 images. Each test image was processed separately. Each image was resized and normalized the same way as the training data into a 20x3360x3360 tensor. </p> <p>In the beginning I create two zero arrays, HEATMAP and COUNT, with 3360x3360 shape, then use a sliding window approach to predict the segmentation. All 5 folds predict on the patch extracted at the current position, the obtained probabilities are summed into the HEATMAP array, and the COUNT array is incremented by 1 at the same positions. At the end HEATMAP is divided by COUNT to get the real heatmap array. After this the heatmap is thresholded at a value obtained from validation, typically 0.5. On the basis of this 2D array I created polygons with rasterio and shapely. (A small illustrative sketch of this sliding-window averaging is given at the end of this write-up.)</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6033/Sliding%20Window.png" alt="Sliding window" title=""></p> <p>Additional ideas to increase the accuracy are provided below. Not all of them were used in the final submission because processing the test images was a very long process, around 8-10 hours for the whole test set.</p> <ol> <li><p>Decreasing the sliding step from 112px to 56px and below always increased the accuracy of predictions, but increases the computational cost as O(N^2). </p> <ul><li>For example class 4 on validation: 0.385714 vs 0.403683.</li></ul></li> <li><p>It looks like UNET predicts worse on the edges, so it’s a good idea to use only the central part for prediction. Example for class 5 below. </p> <ul><li>Default score: 0.507446 </li> <li>200x200 center part from 224x224 square + 100 px sliding window step: 0.512844 </li> <li>160x160 center part from 224x224 square + 80 px sliding window step: 0.514557</li></ul></li> <li><p>I usually used threshold 0.5 without checking the optimum on validation, but this can be useful too. </p> <ul><li>Class 6 example: </li> <li>THR 0.1: 0.738943 </li> <li>THR 0.5: 0.757858 </li> <li>THR 0.9: 0.759850</li></ul></li> </ol> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6039/Masks_700px.png" alt="Masks for class 5" title=""></p> <h2>Ensembles</h2> <p>The most obvious way to do ensembles is to use heatmaps. But our team of 2 people was merged at the last stage of the competition, so we actually didn’t have heatmaps for all classes at this stage. So we made ensembles directly on polygons, using UNION and INTERSECTION from shapely. We had different training processes and models, so for some classes we got a good boost after a merge of this type. </p> <pre><code>Class 6 LB: 0.08149 + LB: 0.08103 (Intersection) Score: 0.08179 Class 8 LB: 0.03890 + LB: 0.05322 (Intersection) Score: 0.06194 Class 9 LB: 0.02113 + LB: 0.02713 (Union) Score: 0.03254 </code></pre> <p>The bad thing is that we mostly used the leaderboard to check if our ensemble gave a boost, which could lead to overfitting. 
It would be much easier if we have 3rd independent solution to use voting mechanism.</p> <h2>Post processing</h2> <p>After analysis of class interactions in train I made the following postprocess types: - Remove too large polygons (from class 8, 9 and 10 for example) - Subtract predicted classes polygons like water from car classes</p> <h2>Class 10 with small objects</h2> <p>For class 10 default approach works not very well. The main problems with this class that it’s have very small objects, bad train segmentation and it’s total area is too small comparing to full area. So I created other UNET CNN for this class. It had input of 32x32 pixels. Data for this class was heavily augmented, rotations, different shifts etc. I also used loss function with big penalty for false positives based on <a href="https://en.wikipedia.org/wiki/Tversky_index">Tversky index</a>.</p> <pre><code># https://en.wikipedia.org/wiki/Tversky_index def tversky_coef(y_true, y_pred): y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) alfa = 0.1 false_positive = K.sum(y_pred_f * (1 - y_true_f)) false_negative = K.sum((1 - y_pred_f) * y_true_f) true_positive = K.sum(y_true_f * y_pred_f) return (true_positive + 1.0) / (false_negative*alfa + (1 - alfa) * false_positive + true_positive + 1.0) </code></pre> <p>It allowed me to get 0.00362 score on LB.</p> <h2>Other ideas I tried</h2> <ul> <li>At the early beginning I made single XGBoost model which works on squares of size 10x10. Model tries to predict to which class given square is belong. It easily got me higher than baseline. Probably creating independent XGBoost models for each class could give good results. But I switch to CNN without additional experiments with XGBoost approach.</li> <li>Pre-trained VGG16 for rare class localization. I tried to predict if car exists in given area with VGG16 on RGB images for later ensemble with UNET predictions.</li> </ul> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6040/vgg16_test_6060_3_2_cls_10_700px.png" alt="Example of class 10 localization with VGG16" title=""></p> <ul> <li>Pre-trained VGG16 for segmentation. The main idea here is that we use final layer (which is now sigmoid instead of softmax) as indication in which part of 224x224 image given class exists. I used 16x16 = 256 neurons for this task. But result wasn’t very good, may be because of low resolution - 14x14 square as single pixel.</li> <li>There are big bunch of different indexes used for automatic segmentation without CNN: <a href="http://www.indexdatabase.de/db/i.php">http://www.indexdatabase.de/db/i.php</a> It works great for water as shown in <a href="https://www.kaggle.com/resolut/dstl-satellite-imagery-feature-detection/waterway-0-095-lb">Waterway Kernel</a>. And these indexes can be added as additional planes during learning process. Unfortunately they didn’t help much in my experiments, so I abandon them. </li> <li>I tried to use CRF (Conditional random field) methods with <a href="https://pystruct.github.io/">pystruct</a> to improve predicted masks. 
But somehow it didn’t work for me.</li> </ul> <h2>Independent classes best scores</h2> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6036/Best-LB.png" alt="Best Public LB" title=""></p> <h2>Segmentation example</h2> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6048/6100_1_2_mask.png" alt="enter image description here" title=""> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/166431/6049/6100_1_2_proj_700px.jpg" alt="enter image description here" title=""></p> <p>Check out the video: <strong><a href="https://www.youtube.com/watch?v=rpp7ZhGb1IQ">https://www.youtube.com/watch?v=rpp7ZhGb1IQ</a></strong></p> <h2>Code</h2> <p>GitHub link will be here later. I plan to release the code after publication of the final results.</p> <p>Actually I had the same issue - a very long run time for the full set of my train images. And a CRF run on a small subset led to bad results. Maybe the link you provided is better for this task.</p> <p>I asked kaggle support about sharing the code and they said that as a prize winner I can't publish it. Sorry. (</p>
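<p>As mentioned in the test-data section above, here is a small illustrative sketch of the sliding-window HEATMAP/COUNT averaging (a toy re-implementation, not the original code; <code>predict_patch</code> stands in for the averaged prediction of the fold models):</p> <pre><code>import numpy as np

def predict_full_image(image, predict_patch, patch=224, step=112, thr=0.5):
    """image: (channels, H, W) tensor, e.g. 20 x 3360 x 3360.
    predict_patch: maps a (channels, patch, patch) crop to (patch, patch) probabilities."""
    _, h, w = image.shape
    heatmap = np.zeros((h, w), dtype=np.float64)
    count = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            heatmap[y:y + patch, x:x + patch] += predict_patch(image[:, y:y + patch, x:x + patch])
            count[y:y + patch, x:x + patch] += 1.0
    heatmap /= np.maximum(count, 1.0)      # every pixel is covered when H and W are multiples of the step
    return heatmap, heatmap > thr          # probabilities and the thresholded binary mask

# toy usage with a dummy "model" that outputs the mean channel intensity per pixel
image = np.random.rand(20, 448, 448).astype(np.float32)
heat, mask = predict_full_image(image, lambda crop: crop.mean(axis=0))
</code></pre> <p>The binary mask would then go to rasterio/shapely for polygonization exactly as described above; keeping only the central part of each patch is a straightforward extension of this loop.</p>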
Dstl Satellite Imagery Feature Detection
0.53 Public LB Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Dstl Satellite Imagery Feature Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Summary of the DSTL Challenge</h1> <p>In this document I will review all my work in this challenge.</p> <h2>1. First steps</h2> <h3>Preprocessing the images</h3> <p>When I looked at the rgb images I noticed that the white balance was not good. So I applied white balance like in the link below. <br> <a href="https://docs.gimp.org/en/gimp-layer-white-balance.html">https://docs.gimp.org/en/gimp-layer-white-balance.html</a> <br> Instead of applying white balance to single images I made collages (for example, all the images that start with 6010) and I applied white balance to the whole collage. I did this for all the satellite channels. I also corrected the misalignment between the different channels using simple translations.</p> <h3>First trainings</h3> <p>At the beginning I tried to train a single model to predict all the classes. I found it very difficult and I think it was because of the great imbalance between the classes. <br> <img src="http://imgur.com/exlz22p.png" alt="enter image description here" title=""> <br> I decided that I was going to train a different model for each class; I tried that and it worked better, so I continued with that approach until the end.</p> <h3>Sampling</h3> <p>The low frequency of the classes makes them hard to train. So I decided that instead of training each epoch with all the samples I was going to take some samples with positive instances and some samples without. That way I could change the frequency of the classes and help the model to learn.</p> <h3>Creating the submission</h3> <p>This was probably the most painful part of the challenge. The slow evaluation method of kaggle forced me to simplify the predictions. I could not make a successful submission for trees without simplifying. <br> I used shapely for making the simplifications, and at the start I was simplifying without preserving the topology, so the scores were badly hurt. I discovered this when I made a tool for visualizing submissions; until then I was just visualizing predictions, so this shows how important it is to visualize each step of the pipeline. <br> My last submission was made with tolerance 8e-6 ~ 4 pixels (see the short simplification sketch at the end of this write-up).</p> <h2>2. Tools</h2> <p>I have used python and the main libraries used are: * Keras * Opencv * Shapely </p> <p>I started the challenge using a laptop with a 980m gpu and 8GB of RAM. Halfway through the challenge I bought a pc with a gpu 1080 and 32GB of RAM, and in the last week of the challenge I bought another gpu 1080 and another 32GB of RAM :)</p> <h2>3. Models</h2> <p>I think this competition was enjoyable because it was really nice to watch how the model was learning to paint a map just by using the images. Below you can find some visualizations of images from the test set. </p> <p><img src="http://imgur.com/1vdZ5em.png" alt="enter image description here" title=""> <img src="http://imgur.com/8jPC2UM.png" alt="enter image description here" title=""> <img src="http://imgur.com/R5QQiJG.png" alt="enter image description here" title=""></p> <p>Main characteristics of the models </p> <ul> <li>I used the unet architecture in all my models. </li> <li>I trained using cross-validation with 3 and 5 folds. 
</li> <li>For each class I tuned the depth of the model, the window size and the resize of the images. </li> <li>I used as input all satellite bands except for the models with finer detail. In the case of structures and vehicles I only used rgb images and the p band. </li> </ul> <p><img src="http://imgur.com/7LHvaOr.png" alt="enter image description here" title=""></p> <h3>Water</h3> <p>This is the model I worked on the most. At the beginning I tried to train a model for rivers and another for lakes. But then I realized that it was really hard for a model to segment the image pixel by pixel and also make an abstraction and decide if something is a lake or a river. </p> <p>So I decided to train a model just for predicting water, and after that add a high level classifier to make the division between lakes and rivers.</p> <p>Another idea was to use images with different scales for training the model. I realized that unlike the other classes a lake or a river can have all sorts of shapes and sizes (all cars will have a similar size). So I trained the model with images at scales 1:1, 1/2, 1/4 and 1/8. That way my training set was much bigger.</p> <p>Before feeding the images to the high level classifier I made collages with them, because sometimes in a single image we don't have enough information to decide to which category they belong. </p> <p><img src="http://imgur.com/9K6XUdg.png" alt="enter image description here" title=""> The high level classifier had only two steps: in the first I use a threshold on the contour size to search for big rivers. In the second I use the distance to the big rivers to decide if a contour was a lake or a river. </p> <p>Having a good water model was important, because later I used this model to clean the output of some of the others.</p> <h3>Building, road, tree, crops, track</h3> <p>I don't think these models have anything special. I just tried different parameters such as resize, window size, depth and sampling until I found a good configuration.</p> <h3>Structure</h3> <p>This model was hard to train. I had to use an extra cost of 2 or 4 on white pixels to force the model to learn. Otherwise it painted everything black.</p> <h3>Small and big vehicles</h3> <p>I spent the last days of the contest trying to improve these two categories. <br> My final submission consisted of averaging the predictions of a lot of models (up to 15 different models). For training the different models I used cross-validation with different random seeds. <br> I cleaned the prediction using the predictions of the other models. I used the water, building and tree models for cleaning.</p> <h2>4. Scores</h2> <p><img src="http://imgur.com/9EJggch.png" alt="enter image description here" title=""></p> <p>Please let me know if something is not clear or if I have to elaborate more on some topic. <br> I will be adding more info to the first post if necessary. </p> <p>Thanks to the organizers and the participants. <br> ironbar</p> <p>I upsize the different bands to match the size of the RGB channels and simply stack all the layers together.</p> <p>I used as input all satellite bands except for the models with finer detail. In the case of structures and vehicles I only used rgb images and the p band. </p> <p>Hi Raymond, <br> I did it just like the gimp link. 
<br> <a href="https://docs.gimp.org/en/gimp-layer-white-balance.html">https://docs.gimp.org/en/gimp-layer-white-balance.html</a> </p> <blockquote> <p>To do this, it discards pixel colors at each end of the Red, Green and Blue histograms which are used by only 0.05% of the pixels in the image and stretches the remaining range as much as possible.</p> </blockquote> <p>Yes you are right on the second question. I did this because there are some images that for example only have trees. I thought that by combining the 25 images I will have a better white balance. <br> I also tried combining all the images of the dataset but the results seemed to be worse. </p> <p>Very easy, make a submission leaving all the other classes empty. The score is the average of all the classes, so when you make a submission of only one class the score is 1/10 of the score of the class.</p>
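<p>The topology-preserving simplification mentioned in the submission section (and forward-referenced above) can be done directly with shapely; a small sketch, with a made-up polygon and the 8e-6 tolerance quoted above:</p> <pre><code>from shapely.geometry import MultiPolygon, Polygon
from shapely.wkt import dumps

# made-up prediction: a tiny square (in geo coordinates) with a small bump on its top edge
square_with_bump = Polygon([(0, 0), (0, 1e-4), (5e-5, 1.02e-4), (1e-4, 1e-4), (1e-4, 0)])
prediction = MultiPolygon([square_with_bump])

# preserve_topology=True keeps the simplified geometry valid (no self-intersections),
# which matters because invalid geometries break the submission
simplified = prediction.simplify(8e-6, preserve_topology=True)
if not simplified.is_valid:
    simplified = simplified.buffer(0)   # common shapely trick to repair a geometry

print(len(dumps(prediction)), len(dumps(simplified)), simplified.is_valid)
</code></pre> <p>The WKT string of the simplified geometry is shorter, which is what makes the submission and its evaluation tractable.</p>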
Dstl Satellite Imagery Feature Detection
133rd place solution with code;)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Dstl Satellite Imagery Feature Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is code derived from my 133rd place solution trained on M images <a href="https://github.com/danzelmo/dstl-competition">https://github.com/danzelmo/dstl-competition</a> (0.39 public, 0.35 private)</p> <p><strong>Edit</strong> in description of what this model is: cnn used here is based on dilated convolutions, it does not use max pooling. No padding is used in order to have good border predictions. Dilation is used early in network to reduce size. Above score is from a very simple version that is not tuned, higher scores achievable if resolution is increased. This is something I am experimenting with. </p> <p>My actual solution was made up of 9 of these models one for each class trained on P image resolution data. </p> <p>Thanks to everyone who posted solutions, and helpful starter code. I learned so much about image processing and cnn from reading the forums. </p> <p>This competition reminded me of various important ideas. </p> <p><strong>Look at the full data before anything else.</strong> spent a lot of time looking at images, didn't put the grids together and take a look until half way through, and realizing this perspective simplified thinking about the problem immensely. </p> <p><strong>Good software design, and engineering habits are really important</strong> Wrote a bit of brittle code that required re-engineering whenever I changed my mind on how to solve things. Wasted time and got frustrated. Also unit testing would have helped a lot initially, spent a lot of time using pdb which is a plus. </p> <p><strong>Reduce the problem size, get the simplest full solution done immediately, then iterate.</strong> Most of the complications I encountered were because I neglected to follow this very good advice. And I knew I wasn't the whole time, so doubly silly of me. I didn't downsample the images. I assumed that the polygon labels would correspond best to the rgb resolution, and went about resizing and then aligning the M, and A images. Now my dataset is 200+ gigs and I have no ssd drive:)</p> <p><strong>Wandering around, and trying weird solutions can be fun and informative</strong> Because I chose to neglect some of the the above advice, I read many interesting papers, learned several things that I would not have if I took the practical route. Some times following whatever aspect is interesting can be very rewarding. </p> <h3>Somewhat chronological notes on things I tried.</h3> <ul> <li><p>I made the decision to base my first solution off of logistic regression and lightgbm. </p> <ul><li>But with 4 billion pixels to predict this actually takes a while with lightgbm. 
</li> <li>With subsampling was able to do hold one image out cross validation using logistic regression and lightgbm</li></ul></li> <li><p>Found this paper: Features, color spaces, and boosting: New insights on semantic classification of remote sensing images <a href="https://www.researchgate.net/profile/Jan_Wegner2/publication/264564101_Features_Color_Spaces_and_Boosting_New_Insights_on_Semantic_Classification_of_Remote_Sensing_Images/links/553e2b7e0cf294deef6fc6bf.pdf">link to paper</a></p> <ul><li>Seemed easy enough to implement, and would be faster with lightgbm</li> <li>Learned a tiny bit of cython and what integral images are so that I could implement a lot of this feature processing code.</li> <li>Realized I could do it much faster with opencv</li> <li>Saving anything intermediate to disk was not feasible(up to 14k features per pixel location)</li> <li>Implemented this so all features computed on-line</li> <li>Still way to cumbersome so abandoned idea. </li></ul></li> <li>Wrote various sampling code for training lightgbm. </li> <li>Wrote too much code before having a good idea what my solution would be like. <ul><li>Alternate the above with coming up with new approaches + too much data equals a ton of frustration with my code base.</li> <li>Several failed approaches</li> <li>Ended up just using jupyter notebooks, and made sure each one was somewhat self contained and left something on disk when finished.</li></ul></li> <li><p>Learned a bit about <a href="https://github.com/spotify/luigi">luigi</a> and used it to organize feature generation batch jobs.</p></li> <li><p>I struggled with ethical concerns regarding looking at the test data, as this seemed like a really good way to validate my methods. There was no explicit hand labeling, but a quick glance at the test data could give a lot of information.</p></li> <li>Used kmeans to cluster individual images <ul><li>much easier to do this on full grids.</li></ul></li> <li>Figured I could reduce, computation time if I just never predicted on images that had low probability of having certain classes. </li> <li><p>Generated 128x128 patches of the rgb resolution data</p> <ul><li>generate descriptive features for patches, train lightgbm for each class, and multiplied by four because I split the data into forest, desert, mountain, and deforested groups. Thinking like a decision tree. <ul><li>Lead to engineering headaches.</li></ul></li> <li>At test time only predict on patches that are classified to contain the given class. <ul><li>Implies I also need individual classifiers for each class, and for the four partitions of the data, plus keep everything organized <ul><li>too much overhead, eventually abandon to work on neural nets.</li> <li>Even if cnn take more effort to learn to use they are very efficient time wise for segmentation.</li></ul></li></ul></li></ul></li> <li><p>got into using <a href="http://toolz.readthedocs.io/en/latest/">cytoolz</a>, and learned about currying. This was really fun. </p> <ul><li>Learned that some api's(like sklearn) work really nicely with currying and functional style.</li> <li>Others do not pandas was a pain to try to wrap. 
Things didn't translate well.</li> <li>Wrote a lot of silly bad functions, this was fun:) </li> <li>Abandoned this because anything I tried I could do faster and easier using default api for numpy/pandas</li></ul></li> <li><p>I didn't have any experience using convolutional neural nets, and wanted to learn.</p></li> <li>Saw this <a href="http://juliandewit.github.io/kaggle-ndsb/">blog post</a> by Julian de Wit, using unet. Translated his mxnet code into keras.</li> <li>Unet posted on kaggle, I found I had bugs in my unet code. This helped get it running.</li> <li><p>Decided to try batchnormalization, forgot to add activation function</p> <ul><li>cnn can learn something even under bad circumstances, makes debugging a pain</li> <li>I found several bugs, that should have made learning very difficult, but still somehow it learns a little.</li> <li>Unit testing critical since the net won't tell you when its broken and only half learning.</li></ul></li> <li><p>Read this <a href="https://arxiv.org/abs/1606.00915">deeplab crf</a> paper posted by Vladimir Iglovikov</p></li> <li><p>Found <a href="https://github.com/lucasb-eyer/pydensecrf">pydensecrf</a> a partial wrapper for the crf code used in the deeplab paper</p> <ul><li>This gave quite a bit of improvement(validation on separate held out images), for some classes, not much for others</li> <li>Also took quite a while to run on full resolution images. </li></ul></li> <li><p>Read <a href="https://arxiv.org/abs/1511.07122">Multi-Scale Context Aggregation by Dilated Convolutions</a></p></li> <li><p>Read many github comments, and academic papers. </p> <ul><li>Noticed a trend of people being dissatisfied with using max pooling for segmentation networks.</li> <li>Noticed that all current segmentation nets are based on architectures designed explicitly for classification</li> <li>Noticed classification nets use a lot of padding which is not good for edge predictions.</li> <li>Decided to tinker around with network architectures that could bypass these issues. </li></ul></li> <li><p>network architecture doesn't use padding and has cropping combined with dilated convolution early in the network to cut down on image size. </p> <ul><li>Doesn't have coarse grained view of data(no max pooling)</li> <li>Nice details but miss larger relationships</li> <li>Tried some average pooling for decimation. </li> <li>Endless variation to try. </li></ul></li> <li><p>Designing architecture for cnn is time consuming. 1.5 months, I just tinkered and tried things, was a ton of fun</p> <ul><li>Eventually competition is ending and I realized I still have not got to probably the most important part for having a good score in the competition: conversion between raster and polygon.</li> <li>Many ways to get creative with post processing(subtracting water from other class, remove small object, etc etc)</li> <li>Make one submission consisting of 9 of my nets, still experimenting with architecture so not are all the same, </li> <li>Ran out of time several of the nets were not fully trained</li></ul></li> <li><p>Get a very nice score on public leaderboard;) </p> <ul><li>Since I haven't been paying attention to practicalities of kaggle don't realize public dataset is tiny(evidence was there from some of the stats people were posting)</li> <li>Blend my solution with public benchmark for safety. </li> <li>Drop 113 places</li></ul></li> <li><p>Post competition, simplified everything, then simplified it some more. 
</p></li> <li>Net on all classes, on only M images gets 0.39 public, 0.35 private(I wouldn't have selected such a model with bad understanding of leaderboard.) </li> </ul> <p>Overall this competition was really fun. Managing goals, whether caring about learning various things verses just getting done what works was a little frustrating. Lots of useful code posted on the forums helped me bypass steps and improve naive solutions. It would be nice if the competition was basically a collaborative effort, still learning how to balance information sharing with competition.</p> <h3>A couple of unanswered questions,</h3> <ul> <li>low resolution labels generate bad solutions on the leaderboard and in validation <ul><li>I didn't notice this so much when working with full resolution, very noticeable at M resolution</li> <li>Just upsampling the final low resolution predictions before conversion to polygons gave a large increases in score.</li> <li>It seems like there should be better ways to handle label conversion and creation to prevent mismatch. Did anyone have a great solution to this?</li></ul></li> <li><p>Did anyone else split the images into groups based on terrain type(I had four groups: desert, mountain, forest, and deforested). </p> <ul><li>Selecting thresholds by group instead of over all the images seemed to improve validation score.</li> <li>It seems like splitting like this would simplify things the learning algorithm has to learn, or would it not learn as general of a solution, thus degrading performance on new types of terrain in test set?</li></ul></li> <li><p>Are there any other segmentation architectures out there that don't use max pooling?</p></li> <li>Any ideas on creating the equivalent to pretrained weights similar to imagenet but with segmentation nets?. Has this been done anywhere?</li> </ul>
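<p>The "dilated convolutions, valid padding, no max pooling" idea from the notes above can be sketched in a few Keras lines (this is only an illustration, not the architecture in the linked repo; the filter counts, dilation rates and input shape are arbitrary):</p> <pre><code>from keras.models import Model
from keras.layers import Conv2D, Input

inp = Input((116, 116, 8))   # e.g. an M-band patch, channels-last
x = Conv2D(32, (3, 3), padding='valid', activation='relu')(inp)
x = Conv2D(32, (3, 3), dilation_rate=2, padding='valid', activation='relu')(x)   # receptive field grows...
x = Conv2D(64, (3, 3), dilation_rate=4, padding='valid', activation='relu')(x)   # ...without any pooling
x = Conv2D(64, (3, 3), dilation_rate=2, padding='valid', activation='relu')(x)
x = Conv2D(64, (3, 3), padding='valid', activation='relu')(x)
out = Conv2D(1, (1, 1), activation='sigmoid')(x)    # per-pixel probability for one class
model = Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()   # the 96x96 output is smaller than the 116x116 input, so training masks are
                  # center-cropped and test images are predicted in overlapping tiles
</code></pre>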
Dstl Satellite Imagery Feature Detection
4th place overview from deepsense.io
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Dstl Satellite Imagery Feature Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, it was 4 256x256 patches per batch. We used a GPU with 8GB of memory. It should be enough to train with a larger batch size, but because of some implementation details we quickly hit the memory limit. When I think of it now, I wish I had optimized the code for a larger batch size.</p> <p>After the competition we found a simple way to train models with an arbitrarily large batch size in PyTorch. Because in PyTorch you control when the optimizer does updates and when gradients are zeroed out, the algorithm is straightforward:</p> <pre><code>1 zero out the gradient values 2 run the forward pass and backward pass a couple of times (sampling a new batch each time) - gradients are automatically accumulated 3 call the optimizer, which does the weight update 4 Repeat 1-&gt;2-&gt;3 </code></pre> <p>It's definitely something I will try in further competitions. It may be slow, but good for convergence. (A short PyTorch sketch of this loop is given at the end of this post.)</p> <p>Our results per class in the final submission:</p> <pre><code>classid,private,public 1,0.06295,0.07791 2,0.02311,0.01732 3,0.04618,0.07758 4,0.04410,0.04219 5,0.06445,0.05492 6,0.08549,0.07358 7,0.08887,0.09295 8,0.01817,0.05559 9,0.01414,0.03328 10,0.01140,0.01590 total,0.45886,0.54122 </code></pre> <p>We've just posted a high-level overview of our solution. It's written for people who haven't participated in the competition, so it may lack a lot of details. But if you have any questions, I'll try to answer them here, in this thread.</p> <p><a href="https://deepsense.io/deep-learning-for-satellite-imagery-via-image-segmentation/">https://deepsense.io/deep-learning-for-satellite-imagery-via-image-segmentation/</a></p> <p>The final architecture was implemented in PyTorch. For convolutional upsampling we used this: <a href="http://pytorch.org/docs/nn.html#convtranspose2d">http://pytorch.org/docs/nn.html#convtranspose2d</a> but Keras has something similar: <a href="https://keras.io/layers/convolutional/#conv2dtranspose">https://keras.io/layers/convolutional/#conv2dtranspose</a></p> <p>It's a transposed convolution with a 3x3 kernel which learns to upsample the image. Here you'll find details of the UPCONV layer: <a href="https://deepsense.io/wp-content/uploads/2017/04/architecture_details.png">https://deepsense.io/wp-content/uploads/2017/04/architecture_details.png</a></p> <p>We normalized each input channel independently to have zero mean and unit variance. This approach requires some pre-computed mean and std per channel. You can get these numbers in a couple of ways:</p> <p>1) for each input image compute its own mean and std and use them during normalization</p> <p>2) compute the mean and std for the entire dataset and always use the same numbers for any input image</p> <p>3) compute the mean and std for each 5x5 grid region and normalize input images by the statistics of the region to which the image belongs (I think this worked the best in our case)</p>
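<p>As forward-referenced above, the accumulation recipe translates almost line for line into PyTorch; a minimal sketch (the model, data loader and loss are placeholders, not our training code):</p> <pre><code>import torch

def train_with_accumulation(model, loader, criterion, optimizer, accum_steps=8):
    """Effective batch size = loader batch size * accum_steps."""
    model.train()
    optimizer.zero_grad()                                # 1) zero out gradients
    for step, (x, y) in enumerate(loader, 1):
        loss = criterion(model(x), y) / accum_steps      # scale so the sum behaves like one big batch
        loss.backward()                                  # 2) gradients accumulate across iterations
        if step % accum_steps == 0:
            optimizer.step()                             # 3) one weight update for the accumulated batch
            optimizer.zero_grad()                        # 4) repeat

# usage sketch (names are placeholders):
# optimizer = torch.optim.Adam(unet.parameters(), lr=1e-4)
# train_with_accumulation(unet, train_loader, torch.nn.BCEWithLogitsLoss(), optimizer)
</code></pre>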
Dstl Satellite Imagery Feature Detection
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Dstl Satellite Imagery Feature Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I built a model for each class. This makes it possible to train each model carefully for its class.<br> I didn't have enough time to research the differences between SegNet and Unet.<br> You can find the code of my models here.<br> <a href="https://github.com/toshi-k/kaggle-satellite-imagery-feature-detection/blob/master/source/02_class_1-6_8-10/2_model.lua">https://github.com/toshi-k/kaggle-satellite-imagery-feature-detection/blob/master/source/02_class_1-6_8-10/2_model.lua</a></p> <p>I think the 3-band images are made from the M-band, so they seem to be redundant in some way.<br> I hear that ZFTurbo used all of them, which may improve the score.<br> <a href="https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection/discussion/29747">https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection/discussion/29747</a></p> <p>Hi Everyone,</p> <p>Now that the final validation has ended, I am sharing my solution.<br> I used 3-band images and SWIR images.<br> The main approach is deep learning based on SegNet (for classes 1~6 and 8~10).<br> I also used V. Osin's kaggle script (for class 7).</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-satellite-imagery-feature-detection">https://github.com/toshi-k/kaggle-satellite-imagery-feature-detection</a></p> <p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-satellite-imagery-feature-detection/master/img/solution.png" alt="conceptual diagram" title=""></p>
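<p>To illustrate the kind of input preparation this implies (upsampling the coarse SWIR / A bands to the 3-band resolution and stacking everything into one tensor), here is a small sketch; the file layout follows the competition's three_band / sixteen_band folders, but treat the exact paths, shapes and normalization as assumptions rather than the author's code:</p> <pre><code>import cv2
import numpy as np
import tifffile as tiff

def load_stacked(image_id, data_dir='input'):
    """Returns a float32 tensor of shape (3 + 8, H, W): RGB plus upsampled SWIR bands."""
    rgb = tiff.imread('{}/three_band/{}.tif'.format(data_dir, image_id))        # typically (3, H, W)
    swir = tiff.imread('{}/sixteen_band/{}_A.tif'.format(data_dir, image_id))   # typically (8, h, w), much coarser
    _, h, w = rgb.shape
    upsampled = [cv2.resize(band, (w, h), interpolation=cv2.INTER_CUBIC) for band in swir]
    stacked = np.concatenate([rgb.astype(np.float32), np.stack(upsampled).astype(np.float32)], axis=0)
    # simple per-channel scaling to [0, 1]; the write-ups above differ on the exact normalization
    stacked /= np.maximum(stacked.reshape(stacked.shape[0], -1).max(axis=1), 1.0)[:, None, None]
    return stacked
</code></pre>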
Dstl Satellite Imagery Feature Detection
Basic UNet approach, good for small objects
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Dstl Satellite Imagery Feature Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is my approach which I used before teaming up with @alno and deepsystems.io. It gives ~0.476 public LB score, or ~0.495 if water from <a href="https://www.kaggle.com/resolut/dstl-satellite-imagery-feature-detection/waterway-0-095-lb">this kernel</a> by Vladimir Osin is used. This score is without voting of several models, which can improve the score quite a bit for many classes. Here are the public LB scores by class:</p> <pre><code>0.07679 Buildings 0.02179 Structures 0.07970 Road 0.02548 Track 0.04688 Trees 0.07776 Crops 0.07540 Waterway 0.03897 Standing Water 0.02582 Vehicle Large 0.00713 Vehicle Small 0.47572 Total </code></pre> <p>So it's a far cry from 0.55698 on the public LB which we managed to achieve as a team - kudos to my teammates Alexey Noskov, Renat Bashirov and Ruslan Baikulov.</p> <h2>General approach</h2> <p>UNet network with batch-normalization added, training with Adam optimizer with a loss that is a sum of 0.1 cross-entropy and 0.9 dice loss. Input for UNet was a 116 by 116 pixel patch, output was 64 by 64 pixels, so there were 16 additional pixels on each side that just provided context for the prediction. Batch size was 128, learning rate was set to 0.0001 (but loss was multiplied by the batch size). Learning rate was divided by 5 on the 25-th epoch and then again by 5 on the 50-th epoch, most models were trained for 70-100 epochs. Patches that formed a batch were selected completely randomly across all images. During one epoch, network saw patches that covered about one half of the whole training set area. Best results for individual classes were achieved when training on related classes, for example buildings and structures, roads and tracks, two kinds of vehicles.</p> <p>Augmentations included small rotations for some classes (±10-25 degrees for houses, structures and both vehicle classes), full rotations and vertical/horizontal flips for other classes. Small amount of dropout (0.1) was used in some cases. Alignment between channels was fixed with the help of <code>cv2.findTransformECC</code>, and lower-resolution layers were upscaled to match RGB size. In most cases, 12 channels were used (RGB, P, M), while in some cases just RGB and P or all 20 channels made results slightly better.</p> <h2>Validation</h2> <p>Validation was very hard, especially for both water and both vehicle classes. In most cases, validation was performed on 5 images (6140_3_1, 6110_1_2, 6160_2_1, 6170_0_4, 6100_2_2), while other 20 were used for training. Re-training the model with the same parameters on all 25 images improved LB score.</p> <h2>Some details</h2> <ul> <li>This setup provides good results for small-scale classes (houses, structures, small vehicles), reasonable results for most other classes and overfits quite badly on waterway.</li> <li>Man-made structures performed significantly better if training polygons were made bigger by 0.5 pixel before producing training masks.</li> <li>For some classes (e.g. 
vehicles), it helped a bit to make the first downscaling in UNet 4x instead of the default 2x, which also made training 1.5x faster.</li> <li>Averaging of predictions (of one model) with small shifts (1/3 of the 64 pixel step) was used for some classes.</li> <li>Predictions on the edges of the input image (closer than 16 pixels to the border) were bad for some classes and were left empty in this case.</li> <li>All models were implemented in pytorch; training for 70 epochs took about 5 hours, submission generation took about 30 minutes without averaging, or about 5 hours with averaging.</li> </ul> <h2>Other things tried</h2> <p>A lot of things that either did not bring noticeable improvements, or made things worse:</p> <ul> <li>Losses: jaccard instead of dice, trying to predict distance to the border of the objects.</li> <li>Color augmentations.</li> <li>Oversampling of rare classes.</li> <li>Passing lower-resolution channels directly to lower-resolution layers in UNet.</li> <li>Varying UNet filter sizes, activations, number of layers and upscale/downscale steps, using deconvolutions instead of upsampling.</li> <li>Learning rate decay.</li> <li>Models: VGG-like modules for UNet, SegNet, DenseNet</li> </ul> <p>I ignored the border both during training and during prediction, throwing it away in the output layer. Here is an example (it's much simpler than UNet but the idea is exactly the same); <code>x[:, :, b:-b, b:-b]</code> does that:</p> <pre>class MiniNet(BaseNet):
    def __init__(self, hps):
        super().__init__(hps)
        self.conv1 = nn.Conv2d(hps.n_channels, 4, 1)
        self.conv2 = nn.Conv2d(4, 8, 3, padding=1)
        self.conv3 = nn.Conv2d(8, hps.n_classes, 3, padding=1)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = self.conv3(x)
        b = self.hps.patch_border
        return F.sigmoid(x[:, :, b:-b, b:-b])
</pre>
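<p>For readers who want to see the loss described above (0.1 cross-entropy + 0.9 dice) in code, here is a minimal PyTorch sketch for the binary-mask case. It is my own illustration of the idea rather than the author's implementation; the exact dice formulation and the smoothing constant <code>eps</code> are assumptions.</p> <pre><code>import torch
import torch.nn.functional as F

def soft_dice_loss(logits, targets, eps=1e-6):
    # 1 - soft dice, computed on predicted probabilities
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    union = probs.sum() + targets.sum()
    return 1.0 - (2.0 * intersection + eps) / (union + eps)

def combined_loss(logits, targets, bce_weight=0.1, dice_weight=0.9):
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    return bce_weight * bce + dice_weight * soft_dice_loss(logits, targets)

# Example: a batch of 64x64 single-class predictions.
logits = torch.randn(8, 1, 64, 64, requires_grad=True)
targets = (torch.rand(8, 1, 64, 64) > 0.5).float()
combined_loss(logits, targets).backward()
</code></pre>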
Dstl Satellite Imagery Feature Detection
Meetup in San Francisco. 3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Dstl Satellite Imagery Feature Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The Russian version is just a story that describes the path to the final solution. </p> <p>I am planning to finish a blog post for the Kaggle forum this week, and it will be much richer in technical details.</p> <ol> <li><p>We tried to use Tiramisu, but it showed performance similar to UNet, and due to its complexity it converged much slower. @alno had some success with a simplified version of it, but I did not ask him about the details.</p></li> <li><p>Mask R-CNN looks interesting and I am planning to implement it, at least to check how it works on the ImageNet dataset, which is ultra clean and nice. Maybe once I get comfortable with it I will try to apply it to this problem.</p></li> </ol> <p>It would be great to see you there.</p> <p>To my knowledge, there were many participants from Russia, and they already had drinks in Moscow, Saint Petersburg and other cities with similar discussions. :)</p> <p>Kaggle is not really popular in the Bay Area; I mean, nearly every Data Scientist has heard about it, but I would say only one tenth of one percent have tried to do an actual submission. =&gt; I am not expecting more than 2-5 people who actually tried to work on this problem at the meetup.</p> <p>In a couple of weeks I will give a talk about our solution at a Meetup in San Francisco.</p> <p><a href="https://www.meetup.com/San-Francisco-Kagglers/events/239113801/">https://www.meetup.com/San-Francisco-Kagglers/events/239113801/</a></p> <p>I already wrote a <a href="https://habrahabr.ru/company/ods/blog/325096/">blog post in Russian</a> that describes our approach, the problems we faced and how we overcame them. </p> <ul> <li>UPD blog post: <a href="http://blog.kaggle.com/2017/05/09/dstl-satellite-imagery-competition-3rd-place-winners-interview-vladimir-sergey/">http://blog.kaggle.com/2017/05/09/dstl-satellite-imagery-competition-3rd-place-winners-interview-vladimir-sergey/</a></li> <li>UPD preprint: <a href="https://arxiv.org/abs/1706.06169">https://arxiv.org/abs/1706.06169</a></li> </ul> <p>I am willing to share the code, but we cannot get a clear answer from the organizers on whether it is allowed by the winner's license.</p> <p>The talk will be repeated at the H2O headquarters on May 24th.</p> <p><a href="https://www.meetup.com/Silicon-Valley-Big-Data-Science/events/239683067/">https://www.meetup.com/Silicon-Valley-Big-Data-Science/events/239683067/</a></p> <p>Video: <a href="https://www.youtube.com/watch?v=lCcLjPOCof0">https://www.youtube.com/watch?v=lCcLjPOCof0</a></p> <p>Presentation: <a href="http://slides.com/vladimiriglovikov/using-deep-learning-for-satellite-imagery-feature-detection-challenge/fullscreen">http://slides.com/vladimiriglovikov/using-deep-learning-for-satellite-imagery-feature-detection-challenge/fullscreen</a></p> <p>We have already submitted it to Kaggle. I believe it is a question of a few days.</p>
Dstl Satellite Imagery Feature Detection
My scalable and industrializable solution with Scala+Spark
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Dstl Satellite Imagery Feature Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The score is a bit weaker than some other Python-based solutions, but my version is ready for industrialization --&gt; it would just fly on massive datasets.</p> <p>Enjoy the read at <a href="https://blog.kensu.io/2017/03/scalable-geospatial-data-analysis-with-geotrellis-spark-sparkling-water-and-the-spark-notebook/">https://blog.kensu.io/2017/03/scalable-geospatial-data-analysis-with-geotrellis-spark-sparkling-water-and-the-spark-notebook/</a></p>
ECML/PKDD 15: Taxi Trajectory Prediction (I)
How to get top 10 with a non-ML approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ECML/PKDD 15: Taxi Trajectory Prediction (I) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@bluebalam - Are you implying my title is click-bait? ;-)</p> <p>Yes I will admit that anyone with a ML background will see step 4 as KNN, but the concept of closest starting point on a map is also understandable as a simple geometric exercise to anyone without a ML background.&nbsp;</p> <p>I was trying to point out that compared to the beat-the-benchmark code I saw you can do well simply by focusing on the geometric data and doing what amounts to trip matching.</p> <p>This scores a&nbsp;2.17807 on the private leader board (would get 10th).</p> <p>The approach only uses the trajectory data - nothing else.</p> <p>1. Convert the lat / lon data to meters (I used python and basemap for this).</p> <p>2. Simplify the trip data - using&nbsp;<a href="https://en.wikipedia.org/wiki/Ramer–Douglas–Peucker_algorithm">RDP</a>&nbsp;on the gps data to simplify the routes (I used an RDP threshold of 30 meters). &nbsp;</p> <p>3. Convert the trip point data from the sparse points into something more continuous - the idea being we want to compare trips to see if the taxi is on the same road or not. &nbsp;Here I interpolated between the RDP points and placed a point every 40 meters.</p> <p>4. Find the closest 5000 training trips based on the starting point of the test trip.</p> <p>5. Ignore any training trips that don't have a point within 50 meters of the last point of the test trip, that travel &gt; 1km in 15 seconds or that are less than 95% of the length of the test trip.</p> <p>6. Use Dynamic Time Warping to compare the trips (this helps to remove training trips that happen to pass by the final point of a test trip, but otherwise are very different - for example they pass the point by are going in the opposite direction) - drop trips that have a low DTW score - defined by (DTW / trip length) &lt; 3.</p> <p>7a. Check to see if a trip is 'returning home' because the taxi operator forgot to turn off the GPS unit, or they're doing a round trip or whatever (do this by seeing if the end point is closer to the start than the mid-point of the trip). &nbsp;If the trip is 'returning home' use the starting point as the estimated destination.</p> <p>7b. If not 'returning home' use the matched trips average their final location.</p> <p>7c. Finally, if no matching trips are found use the average value of all the other successfully matched test trips.</p> <p>Attached are a few plots that show the starting point for the test trip in GREEN the end point of the test trip in RED and the end points for the matched training trips in BLUE.</p>
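<p>A simplified sketch of the matching steps (4, 5 and 7b) above, assuming each trip is already a numpy array of (x, y) points in metres as produced by steps 1-3. The DTW filtering of step 6 and the "returning home" check of step 7a are omitted for brevity, and the toy data is a placeholder:</p> <pre><code>import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train_trips = [rng.normal(size=(int(n), 2)) * 500
               for n in rng.integers(20, 60, size=10000)]
test_trip = rng.normal(size=(30, 2)) * 500

starts = np.array([t[0] for t in train_trips])
ends = np.array([t[-1] for t in train_trips])
tree = cKDTree(starts)

# Step 4: the 5000 training trips with the closest starting point.
_, idx = tree.query(test_trip[0], k=5000)

# Step 5 (partial): keep trips that pass within 50 m of the test trip's last
# point and that are at least 95% as long as the (partial) test trip.
def trip_length(t):
    return np.linalg.norm(np.diff(t, axis=0), axis=1).sum()

last_pt = test_trip[-1]
keep = [i for i in idx
        if np.linalg.norm(train_trips[i] - last_pt, axis=1).min() < 50
        and trip_length(train_trips[i]) >= 0.95 * trip_length(test_trip)]

# Step 7b: average the final locations of the matched trips
# (falling back to all candidates if nothing matched).
prediction = ends[keep].mean(axis=0) if keep else ends[idx].mean(axis=0)
</code></pre>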
ECML/PKDD 15: Taxi Trajectory Prediction (I)
Method sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ECML/PKDD 15: Taxi Trajectory Prediction (I) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Jun,</p> <p>Roughly speaking, I remember it took about 2-3 hours to create features for all the training data, then about 1-2 hours to do the correlations that made the predictions for the 320 testing trajectories. </p> <p>I saved the training features after making them so I didn't have to redo that part every time. For my validation sets, I used 2k-5k samples, but only 150k training samples, so that took maybe 1 hr per validation.</p> <p>AdeB,</p> <p>1.94 - wowsa!&nbsp; You guys had a very interesting approach that really shows the power of ML - I look forward to reading your paper.&nbsp;&nbsp;</p> <p>Can you comment on your CV process?&nbsp; How do you think you missed out on selecting the 1.94 for one of your final submissions?</p> <p>Congratulations to all the winners!&nbsp; This was a fun and challenging contest. My solution placed 5th on the private leaderboard, though only 22nd (after cheater removal) on the public leaderboard.&nbsp;</p> <p>Here's how I approached the problem:</p> <p>1)&nbsp; I divided the area covered by the training cases up into a grid in lat/long in order to bin different locations.&nbsp; The grid resolution was about 100m.&nbsp; The jist of my method was to compute the grid points that each trajectory passed through and correlate the test trajectories with the training trajectories.&nbsp;</p> <p>2)&nbsp; For each grid bin, four feature values were possible, corresponding to passing through that grid with a north-westerly, south-westerly, north-easterly, or south-easterly velocity.</p> <p>3)&nbsp; Given the number of shared &quot;features&quot; (i.e., a lat/long bin with velocity direction) between the test case and the training set, the training samples with a correlation above a particular threshold were chosen to generate the prediction.&nbsp; Each sample was weighted according to its number of shared features.</p> <p>4) *important*&nbsp; The prediction was generated by minimizing the weighted sum of the distances between the sample destinations and the guess.&nbsp; This is not the same as minimizing the sqrt of the sum of the squares of the distance difference (i.e., taking the mean)!&nbsp; It makes a big difference for this contest's evaluation function.</p> <p>A few helpful tweaks I found were:</p> <p>1)&nbsp; I weighted the last feature in the test trajectory at 10-15x the other features.</p> <p>2)&nbsp; If the taxi was stopped for more than a certain time, I only used features in the test sample after that time.</p> <p>As far as setting up a CV goes, I was successful in that my CV seemed to indicate the relative (though not absolute) worth of different approaches well.&nbsp; It was clear to me that longer trips are more likely to end up in the test set because they are more likely to be active at the chosen sample time.&nbsp; I ended up using the same method as the moderators to create my validation set - just take the trips that were active at a certain time and cut their POLYLINE at that time.&nbsp; I wanted to create a training set the same way, but I could only get about 50k unique data points out of the 1.7M training set before distorting the 
probability distribution across trip time (a power law fit very well).</p>
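<p>The "*important*" point 4 above - minimizing the weighted sum of distances rather than the mean of squared distances - amounts to taking a weighted geometric median of the candidate destinations. Here is a small numpy sketch of the classic Weiszfeld iteration as an illustration; it is not the competitor's code, and the iteration count and tolerance are arbitrary:</p> <pre><code>import numpy as np

def weighted_geometric_median(points, weights, n_iter=100, eps=1e-7):
    # Minimize sum_i w_i * ||x - p_i|| with Weiszfeld's iterative algorithm.
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    x = np.average(points, axis=0, weights=weights)   # start from the weighted mean
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(points - x, axis=1), eps)
        w = weights / d
        x_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

# Destinations of matched training trips, weighted by shared-feature counts.
dests = np.array([[0.0, 0.0], [1.0, 0.2], [0.9, -0.1], [10.0, 10.0]])
wts = np.array([3.0, 5.0, 4.0, 1.0])
print(weighted_geometric_median(dests, wts))  # pulled towards the dense cluster
</code></pre> <p>Because the contest metric averages plain distances rather than squared ones, this weighted median is far less sensitive to a few far-away candidate destinations than the weighted mean would be.</p>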
ECML/PKDD 15: Taxi Trip Time Prediction (II)
1st Place Solution write-up (+code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ECML/PKDD 15: Taxi Trip Time Prediction (II) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Anthony,</p> <p>I trained two kinds of expert models:</p> <ul> <li>Three models for very short trips, e.g. for trips where only one, two or three positions of the trip are known. </li> <li>The second kind of expert model was for every end position in the test data set. I collected all trips crossing the end position and trained a specific model for this end position. I used these models only for trips with high enough training size (e.g. where I have at least 1000 example trips in the training data).</li> </ul> <p>1st Place Solution </p> <p>First, thanks to Organizers and Kaggle for such great competition.</p> <p>Finally I found some time to summarize my approach (see the attachment) and to clean-up the code. You can find it on github:</p> <p><a href="https://github.com/hochthom/kaggle-taxi-ii">https://github.com/hochthom/kaggle-taxi-ii</a></p> <p>if you have any questions I will be happy to answer them.</p> <p>All the best, <br> Fluxus</p>
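<p>A rough sketch of how the second kind of expert model could be wired up, using a discretized end position (grid cell) as the key and a global model as fallback when a cell has fewer than 1000 example trips. The features, the cell discretization and the use of scikit-learn's gradient boosting are all my assumptions for illustration, not the author's actual setup:</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
trips = pd.DataFrame({
    'end_cell': rng.integers(0, 5, size=20000),       # discretized end position
    'dist_so_far': rng.random(20000) * 10,
    'n_points': rng.integers(2, 200, size=20000),
    'trip_time': rng.random(20000) * 3000,
})
features = ['dist_so_far', 'n_points']

# One expert model per end position, but only where enough training trips exist.
experts = {}
for cell, grp in trips.groupby('end_cell'):
    if len(grp) >= 1000:
        experts[cell] = GradientBoostingRegressor().fit(grp[features], grp['trip_time'])

# Global fallback model for end positions with too few example trips.
fallback = GradientBoostingRegressor().fit(trips[features], trips['trip_time'])

def predict_trip_time(row):
    # e.g. predict_trip_time(trips.iloc[0])
    model = experts.get(row['end_cell'], fallback)
    return model.predict(row[features].to_frame().T)[0]
</code></pre>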
ECML/PKDD 15: Taxi Trip Time Prediction (II)
Method Sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ECML/PKDD 15: Taxi Trip Time Prediction (II) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@Dessy, I have not seen that error before. &nbsp;It may be related to this&nbsp;<a href="http://r.789695.n4.nabble.com/data-table-package-problem-td4708890.html">this</a>&nbsp;or&nbsp;<a href="http://stackoverflow.com/questions/31142280/unusual-error-in-setkeyv-with-data-table">this</a>. &nbsp;I don't have the computer that I tested the code with so I can't verify the versions. &nbsp;Just tried it with&nbsp;R 3.2 and data.table 1.9.4 and it works. &nbsp;Try upgrading to R 3.2.1 and reinstalling your packages to see if that solves the problem.</p> <p>And just in case,&nbsp;I've updated the code to call setkey on train and poly before that particular statement. &nbsp;This shouldn't be the problem though.</p> <p>Posted my solution on&nbsp;<a href="https://github.com/WillieLiao/Kaggle_Taxi_Trajectory">GitHub</a>. &nbsp;Cleaning the code took almost as much time as creating it since I introduced a bug and also found another bug. &nbsp;Clean code got scores of 0.55903 for time duration and 2.39573 for location.</p> <p>Congrats to all the winners.</p> <p>For my submission, the idea was to do something quick and dirty without the usual time-consuming steps of modeling, cross validating, and ensembling. Whereas my posted benchmark code used 1 point to find similar trajectories, the final submission used 2 points.</p> <p>Below is the recipe.</p> <ul> <li><span style="line-height: 1.4">Only use the most relevant rows. For example, to predict for 2014-12-21 at 14:30, only use train data from the afternoons of Sundays before public holidays.</span></li> <li><span style="line-height: 1.4">Only use the most relevant columns. Basically, just timestamp and polyline.</span></li> <li><span style="line-height: 1.4">Look up the most similar trajectories in train for each test case. &nbsp;</span>For simplicity, only the starting locations and the cut-off locations are compared. (Train trajectory is cut to about the same length as test).</li> <li>Similarity features include distance, overall bearing, and last bearing.</li> <li>Calculate the average of coordinates and log(polyline length), weighted by similarity.</li> <li>Thresholds chosen&nbsp;based on intuition and looking at a couple of examples to make sure enough similar trajectories are used.</li> </ul>
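<p>To illustrate the similarity-weighted averaging in the recipe above, here is a toy numpy sketch that scores training trips by start distance, overall bearing and last bearing, then takes a weighted average of their destinations and of log(polyline length). The kernel shapes and bandwidths are arbitrary placeholders rather than the values used in the actual solution, and trips are assumed to be arrays of (x, y) points in metres with one point every 15 seconds (in the real solution, train trajectories are first cut to about the same length as the test trip):</p> <pre><code>import numpy as np

def bearing(p0, p1):
    d = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
    return np.arctan2(d[1], d[0])

def angle_diff(a, b):
    # absolute difference between two angles, wrapped to [0, pi]
    return np.abs((a - b + np.pi) % (2 * np.pi) - np.pi)

def similarity(test_trip, train_trip):
    start_dist = np.linalg.norm(test_trip[0] - train_trip[0])
    overall = angle_diff(bearing(test_trip[0], test_trip[-1]),
                         bearing(train_trip[0], train_trip[-1]))
    last = angle_diff(bearing(test_trip[-2], test_trip[-1]),
                      bearing(train_trip[-2], train_trip[-1]))
    return np.exp(-start_dist / 200.0) * np.exp(-overall) * np.exp(-last)

def predict(test_trip, train_trips):
    w = np.array([similarity(test_trip, t) for t in train_trips])
    dests = np.array([t[-1] for t in train_trips])
    log_len = np.array([np.log(len(t)) for t in train_trips])
    dest = (dests * w[:, None]).sum(axis=0) / w.sum()
    trip_time = 15.0 * np.exp((log_len * w).sum() / w.sum())  # 15 s per GPS point
    return dest, trip_time
</code></pre>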
EMC Israel Data Science Challenge
Third winner solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: EMC Israel Data Science Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>&nbsp;</p> <p>Hi everybody,<br> <br> following the rules of the competition, we upload the files of our solution. You all can see &quot;description.pdf&quot; document describing our models and &quot;scripts.zip&quot; file containing all Matlab scripts needed to train our models (including the license).<br> <br> If someone has any doubt about the models or any other part of the document or scripts, please ask any of our team members.<br> <br> See you in more contests,<br> <br> Bye</p>
EMC Israel Data Science Challenge
What is your methods?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: EMC Israel Data Science Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Cool.</p> <p>I have done PCA and SVD.</p> <p>For the model TF, SVD imporved SVM to 0.79/ PCA 1.56/ Init 1.08</p> <p>&nbsp;For SVM, I use libSVM.</p> <p>&nbsp;</p> <p>I am using L1 Logistic Regression. The best single model is 0.38</p> <p>Best KNN model is 0.40</p> <p>Best SVM model is 0.41</p> <p>What is your method?</p> <p>linear binding is enough. For most cases, half/half works very well.</p> <p>Also I tried Kalman Filter, it worked well when I combined a random forest result with a bad SVM result. Unfortunatily, it didn't work on later models.</p> <p>Can you tell me more about the process&nbsp;<em><strong>scaled Linear SVC's answer to [0,1] and took to the power of 11 and renormalized</strong></em>?</p> <p>&nbsp;</p> <p>Is it:</p> <p>init [0.5, 0.3, 0.2] -&gt; [11^0.5, 11^0.3, 11^0.2] -&gt; make it sum to 1?</p> <p>Why 11 works?</p> <p>&nbsp;</p> <p>Thanks!</p> <p>@<a class="x_profilelink" title="View LI, Wei's profile" href="http://www.kaggle.com/users/25648/li-wei">LI, Wei</a>&nbsp;, For KNN is implemented by my teammate, I can't tell more without his&nbsp;permission. I am sorry. I will send him an email and ask for his opinion :)</p> <p>&nbsp;</p>
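<p>On the "took to the power of 11 and renormalized" question above: the usual reading of that calibration trick is p**11 followed by renormalization (not 11**p). A tiny numpy sketch of that reading, for the example probabilities discussed above (this is my interpretation, not a confirmed description of the other team's code):</p> <pre><code>import numpy as np

def sharpen(probs, power=11):
    # Raise class probabilities to a power, then renormalize so they sum to 1.
    p = np.asarray(probs, dtype=float) ** power
    return p / p.sum()

print(sharpen([0.5, 0.3, 0.2]))   # roughly [0.996, 0.004, 0.00004]
</code></pre> <p>Raising to a large power pushes almost all of the probability mass onto the most confident class, which acts as a confidence recalibration of the SVC outputs.</p>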
EMC Israel Data Science Challenge
Winning Solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: EMC Israel Data Science Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was wondering when the winning solutions will be posted to the forums. I thought the rules stated that as a requirement?</p> <p>Its been well over a month since the contest close. The third place winners have posted their solution in a separate thread and its great to be able to learn from their approach. How about the others? </p>
EMI Music Data Science Hackathon - July 21st - 24 hours
Code/approach sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: EMI Music Data Science Hackathon - July 21st - 24 hours <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>yuenking, there was code to replace -1 with NAs in the R script, but the improvement was only marginal (within the magnitude of the measurement error), so I removed it completely. </p> <p>First, thanks to the organizers, it was a lot of fun to work on such a short timeline, without needing to invest much time in the competition!</p> <p><strong>Feature generation (very simple)</strong>:</p> <p>1. Use (artist id, user id) from each train/test entry to get features from words.csv, and user id to get features from users.csv, then join them</p> <p>2. For each (artist_id, user_id), find its set of ratings in train.csv, and use their mean/max/min/median as features for each train/test entry (removing the rating of the current entry)</p> <p>Do the same, but aggregating by artist_id and by user_id separately.</p> <p><strong>Model</strong>: apply R gbm (gradient boosted trees) with some parameter tuning by hand.</p> <p>Uploaded: main.py (feature generation, a lot of dumb/simple code), train_gbm.r (model building)</p>
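<p>A small pandas sketch of feature 2 above - aggregated ratings for the (artist, user) pair with the current entry's own rating left out. The toy column names mirror the EMI data but are assumptions, and only the leave-one-out mean is shown exactly; min/max/median are computed as plain group aggregates here:</p> <pre><code>import pandas as pd

# Toy training data; real column names in train.csv may differ.
train = pd.DataFrame({
    'Artist': [1, 1, 1, 2, 2, 3],
    'User':   [10, 10, 11, 10, 12, 11],
    'Rating': [50, 70, 30, 90, 60, 40],
})

def loo_mean(df, keys, target='Rating'):
    # Leave-one-out mean of `target` within groups defined by `keys`.
    grp = df.groupby(keys)[target]
    s, c = grp.transform('sum'), grp.transform('count')
    return (s - df[target]) / (c - 1)      # NaN when the group has a single row

train['artist_user_loo_mean'] = loo_mean(train, ['Artist', 'User'])
train['artist_loo_mean'] = loo_mean(train, ['Artist'])
train['user_loo_mean'] = loo_mean(train, ['User'])

# Plain per-artist aggregates, merged back as extra features.
agg = train.groupby('Artist')['Rating'].agg(['mean', 'min', 'max', 'median'])
agg.columns = ['artist_rating_' + c for c in agg.columns]
train = train.merge(agg.reset_index(), on='Artist', how='left')
</code></pre>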
Elo Merchant Category Recommendation
#228 Solution - ELO(utliers)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Data Preprocessing</h1> <p>We know that the purchase amount has been Normalized by ELO. I used the following steps to back transform the purchase amount.</p> <ul> <li>Step 1 - Get Unique values of purchase amount. Sort the unique values in ascending order.</li> <li><p>Step 2 - Scaling purchase_amount back </p></li> <li><p>(a) Look at differences between neighbours</p></li> <li>(b) Group the differences and get counts.</li> <li>(c) Divide purchase_amount by the difference with maximum count in step (b)</li> <li>(d) Finally add the minimum value </li> </ul> <p><code> n_merc = fread('new_merchant_transactions.csv',select = c('card_id','purchase_amount')) </code> <code> x = data.table('purchase_amount' = unique(n_merc$purchase_amount)) </code> <code> cols = 'purchase_amount' </code> <code> x[,c("lag1"):=shift(.SD,1),.SDcols=cols] </code> <code> x$diff = round(x$purchase_amount - x$lag1,8) </code> <code> c = x %&amp;gt;% group_by(diff) %&amp;gt;% summarise(cnt = n()) %&amp;gt;% arrange(desc(cnt)) </code> <code> x$transformed = (x$purchase_amount / 0.00001503) </code> <code> x$transformed = x$transformed - min(x$transformed) </code> <code> x$transformed = log1p(x$transformed) </code></p> <p>Applying log1p over the transformed purchase amount gave 0.04 boost along with aggregate features derived from it. </p> <h1>Feature Engineering</h1> <h2>CLV Features</h2> <p><a href="https://www.kaggle.com/prashanththangavel/c-ustomer-l-ifetime-v-alue">CLV</a> - This is my kernel explaining the basics of it. Lifetime package in python automatically calculates R (Recency), F (Frequency) , T (Time) and M (Monitary value). (<a href="https://towardsdatascience.com/whats-a-customer-worth-8daf183f8a4f">Lifetimes</a>) This also helps us calculate predicted purchases using regression techniques (you will have to specify time frame in days as prediction period for purchase amount) In addition, two way interactions between R,F,M and T were also included. (+,-,*,)</p> <h2>Regression Features</h2> <p><a href="https://www.kaggle.com/c/elo-merchant-category-recommendation/discussion/79217">Oleg's</a> regression features. Transformed the purchase amount as mentioned above, before running regression.</p> <h2>Ratio and Percentage change Features</h2> <ol> <li>Subset for purchase amount, purchase date and card_id from historical transactions.</li> <li>Sort by purchase date and card_id.</li> <li>Ratio of current-purchase_amount / prev-puchase_amount.</li> <li>Percentage change from prev-puchase_amount to current-purchase_amount.</li> <li>Repeat the step (3) using time stamp to get ratios. (Ideal time between each transaction)</li> <li>Finally used mean, min and max over each card_id for the features obtained in the steps above. </li> </ol> <h2>Aggregate Features</h2> <p>Aggregate features from hist_transactions, splitting based on authorized and unauthorized transactions before aggregation was very useful. Similar aggregates for new_transactions table. 
</p> <h1>Feature Selection and Models</h1> <p>Shout out to <a href="https://www.kaggle.com/ogrellier">Olivier</a> for sharing this remarkable kernel on feature selection using <a href="https://www.kaggle.com/ogrellier/feature-selection-with-null-importances">NULL IMPORTANCE</a>. Several thresholds for feature importance were used iteratively to remove insignificant features.</p> <p><strong>Regression Model trained without outliers</strong><br> 1. LightGBM<br> 2. XGBoost </p> <p><strong>Classification Model</strong><br> 1. LightGBM model for binary classification.<br> 2. The optimal threshold for classifying probabilities into class 1 (outlier) and 0 (not outlier) was determined using <a href="https://www.saedsayad.com/model_evaluation_c.htm">KStat</a>.<br> 3. Built a Ridge regression over the 2207 outliers plus some additional rows (having low loyalty scores) and predicted over the rows assigned class 1 in step 2.</p> <p>Combined both regression and classification model results. The best private score was 3.611, which would have helped our team finish in a silver position instead of a bronze. Nonetheless, it was an amazing contest. </p> <h1>Thanks for your time.</h1>
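<p>For readers working in Python, here is a rough pandas/numpy equivalent of the R de-normalization above. The idea is the same: recover the grid step that ELO's scaling introduced, rescale, shift to zero and apply log1p. The post quotes the step value 0.00001503; here it is re-estimated from the data, and everything else is a sketch rather than the team's exact code:</p> <pre><code>import numpy as np
import pandas as pd

new_merchant = pd.read_csv('new_merchant_transactions.csv',
                           usecols=['card_id', 'purchase_amount'])

# Step 1: sorted unique values of the normalized purchase amount.
vals = np.sort(new_merchant['purchase_amount'].unique())

# Step 2a/2b: the most frequent gap between neighbouring unique values is the
# step size introduced by the normalization.
diffs = pd.Series(np.round(np.diff(vals), 8))
step = diffs.value_counts().idxmax()

# Step 2c/2d: divide by the step, shift so the minimum is zero, then log1p.
amount = new_merchant['purchase_amount'] / step
amount = amount - amount.min()
new_merchant['purchase_amount_log1p'] = np.log1p(amount)
</code></pre>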
Elo Merchant Category Recommendation
#5 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As I know this method became popular after BNP Paribas competition, where Stas Semenov explain the idea. I found many sources about it, but mostly in Russian. I'm not sure but may be youtube auto-translate could help - this video was very helpful for me when I studied target encoding <a href="https://www.youtube.com/watch?v=NVKDSNM702k">https://www.youtube.com/watch?v=NVKDSNM702k</a> This article could be helpful too <a href="https://habr.com/ru/company/ods/blog/426227/">https://habr.com/ru/company/ods/blog/426227/</a></p> <p>Congrats to all winners and all who survived shakeup.</p> <p>My solution consists of a few models:</p> <ol> <li>Regression (full dataset)</li> <li>Binary classification for outliers prediction (cards without repeated transactions in test period). <br> This model was used to split train and test datasets by threshold 0.015 for creation 2 other regression models (low and high probabilities). I also used predictions from this model as a feature for high prob model (stacking with predictions from full regression model)</li> <li>Low prob model - regression with small concentration of outliers based on predicts of binary classification. For rare outliers I used decreased weights 0.4.</li> <li>High prob model - regression with high concentration of outliers based on predictions of binary classification. The main features of this model were predictions from binary classification and full regression. Plus some other features. This model helped me to avoid post processing which was main source of shakeup. </li> <li>For submit I combine together predictions from low and high prob models and blend them with full regression.</li> </ol> <p>Features</p> <p>I created and checked a few thousands features. Finally, after selection, I used about one hundred features.</p> <p>Some of the features were created by target encoding (target as is or binary target). Target encoding was two step process - first I calculated target means for features combinations and joined results with transactions and after that I aggregated to cards level. For target means and aggregation I used weights with decline for early month_lags.</p> <p>I believe that train-test split was random and it was a good strategy to trust local cross validation instead of public leaderboard. </p> <p>we could calculate mean target for some features combination and use it as a new feature. For this competition we need to use two step process. For example we calculate mean target for every merchant_id (merchant_id_TE). After that we join new column merchant_id_TE to transactions data by merchant_id as a key. And now we could aggregate merchant_id_TE by card_id and use it as a feature to train. Similar for features combination (for example authorized_flag + category1 + subsector_id)</p> <p>When we use target encoding we should be careful about overfitting - to calc target means in double out of fold mode plus regularization. I recommend to find more about target encoding in solution of previous competitions. 
</p> <p>My binary classification predicted outliers, but it was not perfect, and a small number of outliers ended up in the low-probability part because my classifier didn't catch them. </p> <p>No special methods - I just tried different thresholds from .007 to .015 and checked them on local validation. For binary classification I used LightGBM (GOSS), stratified 5-fold CV, with logloss as the metric for early stopping. The best single model with 54 features had a ROC AUC of 0.9131. A blend of 4 classifiers with slightly different parameters/features had a ROC AUC of 0.9141. The final combined prediction had a local RMSE of 3.609</p>
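<p>Since the two-step target encoding is the most unusual part of this write-up, here is a simplified pandas sketch of it: out-of-fold target means are computed per merchant_id from the training cards, joined back to the transactions, and then aggregated per card_id. The fold logic, the lack of smoothing and the choice of aggregations are simplifications of the description above, not the author's code (which also weights early month_lags down):</p> <pre><code>import pandas as pd
from sklearn.model_selection import KFold

def two_step_target_encode(train, transactions, key='merchant_id',
                           target='target', n_splits=5):
    # Step 1: out-of-fold target mean per `key`, computed only from training cards.
    trans = transactions.merge(train[['card_id', target]], on='card_id', how='inner')
    encoded = pd.Series(index=trans.index, dtype=float)
    global_mean = train[target].mean()

    for tr_idx, val_idx in KFold(n_splits, shuffle=True, random_state=0).split(train):
        tr_cards = train.iloc[tr_idx]['card_id']
        val_cards = train.iloc[val_idx]['card_id']
        means = (trans[trans['card_id'].isin(tr_cards)]
                 .groupby(key)[target].mean())
        mask = trans['card_id'].isin(val_cards)
        encoded[mask] = trans.loc[mask, key].map(means).fillna(global_mean)

    trans[key + '_te'] = encoded
    # Step 2: aggregate the encoded transactions back to one row per card.
    return trans.groupby('card_id')[key + '_te'].agg(['mean', 'min', 'max'])
</code></pre> <p>For the test cards, the per-key means would be computed on the full training set; the same two-step pattern works for feature combinations such as authorized_flag + category1 + subsector_id.</p>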
Elo Merchant Category Recommendation
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It depends on what params you are using with lgbm. I used these params in one of my model. As you can see it's very simple. No hyperparams tuned</p> <p>params = {'iterations': 15000, 'learning_rate': 0.01, 'eval_metric': 'RMSE', 'metric_period': 200, 'l2_leaf_reg': 10, 'od_type':'Iter', 'od_wait': 1000, 'task_type': 'GPU', 'depth':8, 'random_seed': 0, 'use_best_model': True}</p> <p>Thanks to all competitors for this wonderfull day) I am very happy to get first Gold and become Kaggle Master)</p> <p><strong>PREPROCESSING and FE</strong></p> <ol> <li>Merging historical data and new_data with authorized_flag=1</li> <li>Count Vectorizer merchant id, subsector id, merchant category id + PCA, SVD</li> <li>Grouping month lag data (auth month-13, auth month-12 and etc.) for count purchases, and sum adjusted purchase_amount, std, min, max</li> <li>Ratio every auth_month_purchase_amount-13,...auth month purchase amount+2 to previous with interval 2,4,6</li> <li>Predicting month lag 3 and 4 and calculating ratio for them</li> <li>Simple Rolling mean and SimpleExpSmoothing over auth purchase amount-13, ...auth purchase amount+4 over month lags</li> <li>Calculating avg, min, max purchase time elapsed periods beetween transactions</li> </ol> <p><strong>FEATURE SELECTION</strong></p> <p>Finally, i created almost 6500 features and created different features sets using</p> <ol> <li>Boruta (it took almost 8 hours to select 500 best features)</li> <li>Selecting best features based on Catboost, LGBM, XGB</li> <li>Deleting features based on Adversarial validation</li> </ol> <p><strong>Best Single Model and Ensembling</strong></p> <p>My best single model is Catboost over 5 stratified folds CV 3.645 std 0.012 LB 3.671. Also tried DeepFM but CV 3.656 std 0.016 and LB 3.682. Other models, including LGBM sometimes got even better CV score, but were worse on LB. </p> <p><strong>Finally, i collected 47 models with outliers, and 35 models without them</strong>. For combining submissions, also created classification model for outliers ROC 0.907 F1 0.1025</p> <ol> <li>Stacking models with outliers CV 3.637 std 0.013 LB 3.669</li> <li>Stacking models without outliers CV 1.544 std 0.011</li> <li>Combining models, replacing 10000 values</li> </ol> <p><strong>Thanks for sharing ideas end experience</strong></p> <p><a href="https://www.kaggle.com/raddar"></a><a href="/raddar">@raddar</a> Post about <a href="https://www.kaggle.com/raddar/target-true-meaning-revealed/">real meaning of target </a> and <a href="https://www.kaggle.com/peterhurford"></a><a href="/peterhurford">@peterhurford</a> for this <a href="https://www.kaggle.com/c/elo-merchant-category-recommendation/discussion/73937">feature selection post</a></p> <p>Thanks. Yes, i deleted features which were most important in classifying Train and Test</p> <p>I tried Count Vectorizer. PCA for merchantid (15 components), SVD for subsectorid, merchant_subcategory_id (10 components). So, i tried to reach cum explained_variance_ratio_ at least 0.8.</p> <p>So in some features sets i used Count Vectorizer matrix + top50 features, in other features + pca, svd. 
It helped my model improve CV a little, by about 0.0007 for a single model.</p> <p>Yes, you are right.</p>
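<p>A minimal sketch of point 2 of the preprocessing (Count Vectorizer over merchant_id per card, followed by SVD), assuming a transactions frame with card_id and merchant_id columns. The toy data and the 2 components are placeholders (the post uses roughly 10-15 components):</p> <pre><code>import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

hist = pd.DataFrame({
    'card_id':     ['C1', 'C1', 'C1', 'C2', 'C2', 'C3'],
    'merchant_id': ['M_a', 'M_b', 'M_a', 'M_c', 'M_a', 'M_b'],
})

# One "document" per card: the space-joined merchant ids it visited.
docs = hist.groupby('card_id')['merchant_id'].apply(' '.join)

# Bag-of-merchants counts, then a low-rank SVD projection as dense card features.
counts = CountVectorizer(token_pattern=r'[^ ]+').fit_transform(docs)
svd = TruncatedSVD(n_components=2, random_state=0)
card_features = pd.DataFrame(svd.fit_transform(counts), index=docs.index,
                             columns=['merchant_svd_0', 'merchant_svd_1'])
</code></pre>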
Elo Merchant Category Recommendation
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is the 11th place solution writeup. </p> <p>Hello, I would like to congratulate everyone for the efforts they put into this competition. And ELO for hosting such a great competition with interesting data and hair pulling outliers :D </p> <p>I would like to give special thanks to my teammates <a href="https://www.kaggle.com/kangzhang">@woo</a> <a href="https://www.kaggle.com/nabilblk">@Nabil</a> <a href="https://www.kaggle.com/a763337092">@Lindada</a> for their skills, perseverance and hardwork. </p> <p>The huge gap between CV and LB did not make it easy for us (and for everyone I imagine). Those that persevered until the end would perfectly understand. </p> <p>Congrats for all the medalists, you did a tremendous job :D </p> <h2><strong>SUMMARY</strong></h2> <p>So basically we were two separate teams that merged a couple of hours before the deadline. Each had had completed a tremendous amount of work, and after almost running out of new ideas, a merge was deemed necessary (and fruitful). </p> <h2><strong>FEATURE ENGINEERING</strong></h2> <p>Like most public kernels, we constructed aggregation features. Here's a list of the our strongest ones: </p> <pre><code>1. I refer the Kaggle Rank System Compute Formula(link:[https://www.kaggle.com/progression][4]) df_data['duration_sqrt_counts'] = df_data['durations']/sqrt(df_data['card_id_counts']) df_data['duration_log1p_counts'] = df_data['durations']/log1p(df_data['card_id_counts']) df_data['duration_counts'] = df_data['durations']/df_data['card_id_counts'] 2. Categorical features: frequence, Maxfrequence, MaxfrequenceRatio 3. card_id/merchant_id/mechant_category_id/city_id (visit sequence to sequence embedding) 4. purchase_amount:hist/new 5. features interactions between hist/new df['purchase_amount_ratio_v3'] = df['new_purchase_amount_max']/df['hist_purchase_amount_sum'] df['purchase_amount_diff_v1'] = df['new_purchase_amount_sum']-df['hist_purchase_amount_sum'] df['purchase_amount_diff_v2'] = df['new_purchase_amount_mean']-df['hist_purchase_amount_mean'] df['purchase_amount_diff_v3'] = df['new_purchase_amount_max']-df['hist_purchase_amount_max'] df['purchase_amount_diff_v4'] = df['new_purchase_amount_min']-df['hist_purchase_amount_min'] df['pa_mlag_ratio'] = df['new_purchase_amount_sum']/(df['month_lag_mean'] - 1) df['pa_new_hist_ratio'] = df['new_purchase_amount_sum']/(df['hist_purchase_amount_sum']) df['pa_new_hist_mean_ratio'] = df['new_purchase_amount_mean']/(df['hist_purchase_amount_mean'] ) df['pa_new_hist_min_ratio'] = df['new_purchase_amount_min']/(df['hist_purchase_amount_min'] ) df['pa_new_hist_max_ratio'] = df['new_purchase_amount_max']/(df['hist_purchase_amount_max'] ) </code></pre> <p>We had two separate feature sets. One with +1000 features and another one with +200 features</p> <p>At this point our best models scored around: CV 3.642X and LB: 3.688 and CV 3.644X and LB: 3.686</p> <p>After that, we took the correlation matrix of the +200 feature set and paired each feature with the feature it's the least correlated with. Then we applied a bunch of aggregations on each pair and it resulted in pretty strong features. 
</p> <p>So we ended up with two feature sets with +1000 features each. </p> <h2><strong>FEATURE SELECTION</strong></h2> <p>For feature selection we did some manual feature selection based on the features importance feedback we got from lgb. Then we used this (simple yet effective) <a href="https://towardsdatascience.com/a-feature-selection-tool-for-machine-learning-in-python-b64dd23710f0">method here</a> for some further filtering. What it basically does is remove:</p> <ul> <li>Features with a high percentage of missing values according to a threshold </li> <li>Collinear (highly correlated) features </li> <li>Features with zero importance in a tree-based model </li> <li>Features with low importance</li> <li>Features with a single unique value</li> </ul> <p>The CV score got better in both feature sets. </p> <p>After this, our best models scored around CV:3.639X LB:3.682</p> <h2><strong>MODELS</strong></h2> <p>We used different models for training: LightGBM / Catboost / Xgboost / H2oRF / H2oGBM. We tried a couple of NN architectures but it didn't work out for us (Check the last section)</p> <h2><strong>STACKING</strong></h2> <p>We stacked around 32 models using bayesian regression. Our models were well varied that it yielded a score of CV:3.630X LB :3.675</p> <h2><strong>POST PROCESSING</strong></h2> <p>During the last day, we focused on doing some post processing and this is how we cherrypicked our outliers.</p> <p>we carefully designed a Classifying module that combined four different classifers.</p> <p>We picked the top 100 outliers predicted from every classifier and then intersected the four sets. This resulted in 21 final outliers. </p> <p>This gave us a devilish boost on LB. We went from 3.675 to 3.666 (See what I did there?)</p> <h2><strong>SUBMISSIONS</strong></h2> <p>Finally we chose:</p> <ul> <li>A model without post processing (Our best stacking sub): CV:3.63019 LB:3.675 and Private LB: 3.599</li> <li>A model with post processing: LB:3.666 and Private:3.599 </li> </ul> <h2><strong>STUFF THAT DID NOT WORK</strong></h2> <p>Of course these last two months were not all roses and rainbows. We pulled our hair trying a lot of things and we failed miserably.</p> <p>Here are the bloopers of our participation :D : </p> <ul> <li>NN. We tried designing different architectures with the main focus on having a simple NN with heavy regularization (BatchNorm and Strong Dropout)</li> <li>In the middle of the competition, we tried tackling the outliers detection as an anomaly detection problem using AutoEncoders trained only on the non outliers data </li> <li>We tried PCA for more features. And it didn't work</li> <li>We tried TSNE. It didn't work </li> <li>We tried FM and FFM. It did not work</li> <li>We tried isolation forest. Nope. Did not work. </li> <li>We had a Ridge-based pairwise ranker that we intended to use for outliers detection but it didn't match with the approach we had.</li> <li>We tried a lot of weak models in the hope of adding diversity (simple tree-based, linear, svm, etc.). And guess what? It did not work.</li> </ul> <p>Anyway, that was a rough wrap up of all the things we did. </p> <p>Thank you for sparing the time to read this. </p> <p>And remember to always trust CV :D </p>
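<p>As a small illustration of the "pair each feature with its least-correlated feature" trick described above, here is a pandas sketch. The specific pairwise combinations used here (+, -, *, /) are placeholders for the "bunch of aggregations" mentioned in the write-up:</p> <pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1000, 5)),
                 columns=[f'feat_{i}' for i in range(5)])

# For every feature, find the feature it is least correlated with (absolute value).
corr = X.corr().abs()
corr = corr.mask(np.eye(len(corr), dtype=bool))   # ignore self-correlation
least_corr_partner = corr.idxmin(axis=1)

# Build simple interactions for each (feature, least-correlated partner) pair.
new_cols = {}
for f, p in least_corr_partner.items():
    new_cols[f'{f}_plus_{p}'] = X[f] + X[p]
    new_cols[f'{f}_minus_{p}'] = X[f] - X[p]
    new_cols[f'{f}_times_{p}'] = X[f] * X[p]
    new_cols[f'{f}_div_{p}'] = X[f] / X[p].replace(0, np.nan)
X = pd.concat([X, pd.DataFrame(new_cols)], axis=1)
</code></pre>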
Elo Merchant Category Recommendation
16th Place Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>By looking at the CV. I didn't spend much time adjusting it actually.</p> <p>For predicted outliers: the output is taken directly from the regression trained on all samples. I wasn't doing any post processing here, because my outlier detector wasn't super good.</p> <p>For predicted non-outliers: the output is the naive average; it could be done better, e.g. using a Ridge etc...</p> <h2>Feature Selection</h2> <p><strong>Just before training</strong></p> <p>I used the code from here: <a href="https://www.kaggle.com/c/elo-merchant-category-recommendation/discussion/77537">https://www.kaggle.com/c/elo-merchant-category-recommendation/discussion/77537</a></p> <p>It removes features whose distribution differs between the train and test sets.</p> <p><strong>After training</strong></p> <p>I used the null importances idea (modified). <a href="https://www.kaggle.com/ogrellier/feature-selection-with-null-importances">https://www.kaggle.com/ogrellier/feature-selection-with-null-importances</a></p> <p>It is basically a permutation test to adjust the feature ranking, trying to avoid features that work suspiciously well on data with a shuffled target.</p> <p><strong>After getting the feature ranking</strong></p> <p>I cut the features in a loop to see which feature-set size works best, i.e. 90%, 80%, ..., 10% of the features.</p> <h2>Modified 3-model Model</h2> <p><img src="https://i.imgur.com/6otyev8.png" alt="enter image description here"></p> <p><strong>Cross-validating the path of the outlier detection -&gt; model without outliers:</strong></p> <p>For the original public kernel (training with the manually selected non-outliers, target &gt; -33), if the outlier classifier is not good enough, the CV and public LB error from the "model without outliers" will be huge.</p> <p>That's why I came up with a (relatively more stable but with less improvement) modified stacking method to simulate the error "inherited" from the outlier classifier by the "model without outliers".</p> <p>I used the predicted non-outliers to train the model without outliers, so that the local cross-validation covers the outlier prediction of the test set:</p> <p><code> all_train_df['outliers'] = all_train_df['prediction'] &gt;= all_train_df['prediction'].quantile(0.9) </code></p> <p><code> all_test_df['outliers'] = all_test_df['prediction'] &gt;= all_train_df['prediction'].quantile(0.9) </code></p> <h2>Blending</h2> <p><img src="https://i.imgur.com/xTZjWms.png" alt="enter image description here"></p> <p>The final blend consists of 16 models: single models, 3-model models, outlier detection models and 2 public kernels.</p> <h2>Models Summary</h2> <p><strong>Best Single Model</strong>: CV: 3.63964, Public: 3.690, Private: 3.606</p> <p><strong>Best 3-model Model</strong>: CV: 3.63469, Public: 3.685, Private: 3.604</p> <p>Thanks so much for your kind comment. </p> <p>My bad, I didn't explain that part clearly.</p>
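<p>To make the routing in the "Modified 3-model Model" concrete, here is a small sketch of how the pieces described above can be combined: cards above the 90th-percentile outlier probability take the full-data regression, the rest take the naive average. The column names and random data are placeholders, and the exact blending of the non-outlier branch is my simplification:</p> <pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
preds = pd.DataFrame({
    'outlier_prob': rng.random(1000),       # binary outlier classifier
    'pred_full':    rng.normal(size=1000),  # regression trained on all samples
    'pred_no_out':  rng.normal(size=1000),  # regression trained on predicted non-outliers
})

# Mirror the quantile(0.9) snippet above: top 10% by predicted outlier probability.
threshold = preds['outlier_prob'].quantile(0.9)
is_outlier = preds['outlier_prob'] >= threshold

# Predicted outliers: take the full-data regression directly.
# Predicted non-outliers: take a naive average of the two regressions.
preds['final'] = np.where(is_outlier,
                          preds['pred_full'],
                          (preds['pred_full'] + preds['pred_no_out']) / 2)
</code></pre>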
Elo Merchant Category Recommendation
18th place story (pocket's side)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!!</p> <p>Thanks :) <br> Actually, I expected you to shake up into gold, but maybe we were really lucky this competition. <br> Wish you luck in the next one!</p> <p>Thanks! I actually thought there will be about 30% chance that I will survive, based on my experience :)</p> <p>Thanks!</p> <p>You know what? <br> I was like "If our team fails to stay in gold, the blame will be on me, because they managed to get gold in Plasticc with CPMP..." so I tried really hard. hahaha</p> <p>Thanks! <br> I think Marcus would be the better person to explain, but for example, I made 3 out-of-fold models of (binary, non-outlier, all-data) and used the 3 predictions as the sole input into a ridge regression, which made the final prediction.</p> <p>Thanks!</p> <p>Thanks :) <br> I never thought that anyone could be following all my solutions :)</p> <p>Thanks senkin-san! <br> Nowadays, I never seem to be able to beat you ;)</p> <p>Yes the former. I split the hist/newtrans by the value of categories, and aggregated within the subset. You could make a lot of features with different categories, but those were the ones that survived my feature selection.</p> <p>Just like everyone is doing, I re-labeled the target to 0(not -33) and 1(is -33), and did a prediction. <br> So the output is an array of length(card_id), each element being a float between 0 and 1</p> <h2>My journey</h2> <p>So this competition has been a long long journey for me. <br> It felt way longer than it actually was, which is just 40 days. 40days ago, I wanted to take a break from another competition, and decided to do a "40days challenge".</p> <p>I started off by aggregating a lot of the time-series, but it was hard to even beat the kernel. The data seemed to have so little signal, almost everything that I did didn't work. <br> After a week or two, I somehow managed to score 3.636-3.630 locally, which happened to be around 3.608-3.605 private. I was running out of ideas, so I decided to team merge.</p> <p>After merging with my team, we merged our features and re-run our models. Our best single model improved about +0.002 at that time. We searched for a good way to ensemble our models, but couldn't find a good way, especially with all the post-process that was going on. <br> Just one week before the deadline, <a href="/marcuslin">@marcuslin</a> finally found that a simple ridge including non-outlier model + binary-prediction model works really well. Our score jumped to the 3.66x range in public, which was 3.600 range in private.</p> <p>For our final submission, we were torn between 3subs. One with heavy post-process, one with conservative post-process, and one without post-process. <br> We were pessimistic about our final position, so we took the risk and selected the two with post-process. This was actually a mistake, but still good enough for 18th place. 
<br> Had we selected our no-post-process sub, we would have placed around 10th place, but that is just another story.</p> <p>I really feel lucky/happy about the final result, since in the heat of the competition, I could neither trust the CV nor could I trust the LB, which means I was flying blind.</p> <p>Since some people may have interest, I will describe the technical side of things, my features and my models below.</p> <hr> <h2>Features</h2> <p><strong>Used features</strong> <br> 1)Basic features: very good <br> Aggregations like the kernel. Not much to say.</p> <p>2)Time features: includes magic features <br> Features about When there was a purchase. The kernel lack "the last day of transaction", which is magically good.</p> <p>3)Conditional features: good <br> The aggregation when (authorized_flag == 1, 0), (city_id == -1)</p> <p>4)Recent features: good <br> The aggregation when (month_lag &gt;= -2, or 0)</p> <p>5)Prediction of time: probably good <br> Predict the magic_feature:(last day of month_lag2 - last purchase_day) from only the historical_transaction features. Then do (predicted_days - actual days), and drop the predicted days.</p> <p>6)Time features: maybe good? <br> <a href="https://www.kaggle.com/denzo123/a-closer-look-at-date-variables">https://www.kaggle.com/denzo123/a-closer-look-at-date-variables</a> Hour and DayOfWeek seems to be very important (but hard to improve the cv-score). I binned "hour" to [6-12, 12-17, 17-21, 21-05], which seems to add very small value(maybe). By the way, 00:00:00 has a very high target value (probably because it is periodic purchases), but I couldn't improve the CV score with a flag, maybe because there are too few occurrence...</p> <p>7)Target-encoding features: maybe good? destablized the score... <br> OOF-target encoding of categories(merchant_id, subsector_id).</p> <p>8)(+, -, /, *) of two features: maybe stablizes the score? not sure...</p> <hr> <p><strong>Unusued features</strong> <br> I had tons of ideas which didn't work. Here are some of the interesting ones</p> <p>LDA features: thrown away <br> the LDA topic of each categories (like in TalkingData competition). I guess categories have no information!</p> <p>Sequential authorized_flag="N": thrown away <br> Sometimes, there are many authorized_flag="N", in a sequence.</p> <p>Periodic buys: thrown away <br> Sometimes, there are periodic purchases. Like, always once a month with the same price, at the same merchant.</p> <p>Encode features: thrown away <br> Encode the categories by mean_purchase_amount. Take the difference (purchase_amount - mean_purchase_amount).</p> <hr> <h2>Models</h2> <p><strong>Models/post-process</strong> <br> In general, stacking with simpler algorithms worked better, so Ridge/Linear regression seemed better than LGBM. My teammates were very successful with stacking, so I will leave it for them to write.</p> <hr> <p><strong>Models which didn't work/weren't used</strong> <br> Time-series prediction: didn't work <br> Like in the HomeCredit competition, predict the target based on the raw time-series/monthly time-series. Neither worked.</p> <p>Isotonic regression: didn't work <br> I probably did this wrong, but I couldn't make it work</p> <p>Random stacking: didn't use <br> For stacking, I stacked upon automatically-generated-random-models by randomly selecting a subset of my features. We did not select the stack, since the publicLB score was bad, but the CV and privateLB score(3.660) is actually better than our final submission. 
Need some more experiments to confirm.</p> <p>Pseudo-labeling: didn't use <br> Pseudo-labeling with our best stack seems to improve our base model on the CV and privateLB. But it did bad on the publicLB, so we didn't use it. PL probably worked both on LB/CV earlier with NN, but we didn't have enough time to squeeze it into our final stack.</p> <hr> <h2>Conclusion</h2> <p>I really want to thank my teammates for doing a fantastic job in this competition. <br> Without my team, I would not have been in this place. <br> I also want to thank and congratulate <a href="/philculliton">@philculliton</a> for his first competition without a leak! I hope you won't ban more than 221 people so that I remain in gold :)</p> <p>And.. I guess that is it. Thank you all for participating in this competition, and good luck ;)</p> <p>Thanks! Nice to hear that from you :) You guys were such a beast.</p> <p>I usually upload them to Github few weeks after the competition ends.</p> <p>1) The day of the month of the last transaction. 2) Google is your friend. You can also read the Toxic Comment competition winning solution for more detail.</p>
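<p>The ridge stack mentioned above (three out-of-fold predictions - binary outlier probability, non-outlier regression and all-data regression - as the only inputs to a ridge regression) can be sketched roughly as below. The synthetic data and the alpha value are placeholders; the real stack of course uses the team's actual out-of-fold predictions:</p> <pre><code>import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 5000
y = rng.normal(size=n)
y[rng.random(n) < 0.01] = -33.22               # a few outlier targets, as in Elo

# Stand-ins for the three out-of-fold prediction vectors on the train set.
oof_binary = np.clip((y == -33.22) * 0.8 + rng.random(n) * 0.2, 0, 1)
oof_all = y + rng.normal(scale=2.0, size=n)
oof_nonout = np.where(y == -33.22, 0.0, y) + rng.normal(scale=1.5, size=n)

X_stack = np.column_stack([oof_binary, oof_nonout, oof_all])
stacker = Ridge(alpha=1.0)
oof_stack = cross_val_predict(stacker, X_stack, y, cv=5)
print('stacked OOF RMSE:', np.sqrt(np.mean((oof_stack - y) ** 2)))

stacker.fit(X_stack, y)   # refit on all rows, then apply to the test-set predictions
</code></pre>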
Elo Merchant Category Recommendation
19th place memo
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for the reply!</p> <p>train = train.merge(new-transaction-data, how='left') test = test.merge(new-transaction-data, how='left') len(train) -&gt; 1241616 len(test) -&gt; 756954</p> <p>These data have not unique card_id. You can get 1241616 train-predict-values and 756954 test-predict-values.</p> <p>train['predictions'] = train-predict-values test['predictions'] = test-predict-values</p> <p>You can aggregate 'predictions' by card_id.</p> <p>I’m sorry, but I can't reproduce your problem. on=card_id In my kaggle kernel, this code works. <a href="https://i.imgur.com/qmIe7ut.png">https://i.imgur.com/qmIe7ut.png</a></p> <p>Thanks a lot!</p> <p>Thank you for the reply! I agree with you. I think merchantIds are important and there are a lot of way to handling them.</p> <p>Thank you for your reply! Yes, you are right.</p> <p>train-set: train.merge(transaction_data, how='left')</p> <p>test-set: test.merge(transaction_data, how='left')</p> <p>You can build model in ordinary way. (cross-validation ~ prediction)</p> <p>I created feature dataset. If you want, you can try them. <a href="https://www.kaggle.com/hmdhmd/elo-features">https://www.kaggle.com/hmdhmd/elo-features</a></p> <p>Yes, my xgboost parameters may not have been good. I was not able to spend much time tuning parameters.</p> <p>This competition was difficult because CV and LB did not work together... Next time, I would like to participate in the competition which's CV and LB work together ;)</p> <p>Thank you! Your discussion of feature selection is useful for me.</p> <p>There are several possible reasons in difference between xgb and lgb. Parameters, used features...etc But in my case, xgboost always make score worse, private 0.004, public 0.003 ;)</p> <p>Thanks to ELO for hosting such an interesting competition, and congratulations to all the top teams. I will write a brief memo.</p> <h1><strong>Final Submission</strong></h1> <p>Blending some lgb with different features Private : 3.600 Public : 3.681</p> <h1><strong>Best Single Model</strong></h1> <p>12 fold lgb with 300 features Private : 3.605 Public : 3.688 CV : 3.634 ・Best cv model has best private score. ・I didn't any post processing.</p> <h1><strong>Useful</strong></h1> <p>Aggregate one-hot-encoded merchant_id. I had to select some merchant_id because there was no sufficient RAM.</p> <pre><code>train['outlier'] = (train['target'] &amp;lt; -33).astype(int) hist = hist.drop_duplicates(['card_id','merchant_id']) hist = hist.merge(train) hist_outlier = hist.groupby('merchant_id')['outlier'].agg([len, sum, np.mean]) </code></pre> <p>Now, you can see some merchant_id have large value. So you can select them. For exsample. 
<code> hist_outlier[(hist_outlier['len']&amp;gt;500) &amp;amp; (hist_outlier['mean']&amp;gt;0.02)] </code></p> <h1><strong>Didn't work</strong></h1> <p>-- Modeling with NN, RNN, xgboost, catboost, FFM -- Direct target encoding -- Feature decomposition (TruncatedSVD, LDA) -- Feature selection with boruta -- Stacking models by logistic regression, lightgbm</p> <h1><strong>Addition</strong></h1> <p>I found out later that it was effective to predict the target for each transaction row. 1. Merge hist (or new) transaction data and train data. 2. Build a lightgbm model to predict the target of each row. 3. Aggregate the predictions by card_id. <code> df.groupby('card_id')['predictions'].agg([min, np.mean, np.median, max, sum, np.std]) </code> These features improved Private by 0.003 but worsened Public by 0.003, so I couldn't use them in the final submission. <br> That's all. Thank you for reading.</p>
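<p>A rough sketch of the "Addition" idea above (row-level prediction, then aggregation back to card level). This is an illustration, not the author's code: it assumes <code>train</code> holds card_id and target, <code>hist</code> holds numeric transaction-row features, and in practice the row-level predictions should be generated out-of-fold rather than on the same rows the model was fit on.</p> <pre><code>import lightgbm as lgb
import pandas as pd

# 1. Merge transaction rows with the card-level target.
rows = hist.merge(train[['card_id', 'target']], on='card_id', how='inner')
feature_cols = [c for c in rows.columns if c not in ('card_id', 'target')]

# 2. Fit a row-level model (out-of-fold in practice).
model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(rows[feature_cols], rows['target'])
rows['predictions'] = model.predict(rows[feature_cols])

# 3. Collapse the row-level predictions back to one feature vector per card.
card_feats = rows.groupby('card_id')['predictions'].agg(
    ['min', 'mean', 'median', 'max', 'sum', 'std'])
card_feats.columns = ['row_pred_' + c for c in card_feats.columns]
</code></pre>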
Elo Merchant Category Recommendation
21st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my teammates: my single model scored about 3.680 on the LB, and my teammates' single models about 3.677 and 3.79.</p> <p>Feature engineering: aggregation features; +-*/ combinations of new-data agg features and hist-data agg features; CountVectorizer features plus feature selection; some date features; word2vec and NN OOF features.</p> <p>Stacking model: the OOF predictions of about 70 different lgb and NN models are used for stacking. Stacking CV: 3.627, LB: 3.669, PB: 3.601.</p> <p>Our code is released here: [1]: <a href="https://github.com/bestpredicts/ELO">https://github.com/bestpredicts/ELO</a>. </p>
Elo Merchant Category Recommendation
22nd place (some 10k feats): sharing some engineering and a trick (not used)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Two parts: the outlier value multiplied by its probability, plus the inlier value multiplied by its probability.</p> <p>thanks</p> <p>Oh! In my teammate's model it goes up to 5k features.</p> <p>w2v gave about 2 thousandths of score.</p> <p>A format is a single event; sentences are made up of several events.</p> <p>First of all, congratulations to the winners; second, thanks to the kagglers who shared in this competition, I have learned a lot.</p> <p>Law: trust your local CV.</p> <p>Here is a trick I want to share: final_pred = binary_pred_prob * -33 + pred_without_outlier * (1 - binary_pred_prob). As I saw in a previous post, the 1st place team used it; we used this trick too, but did not select it for our final sub.</p> <p>Engineering:</p> <ol> <li>Choose a good baseline. I took this kernel as my baseline (here --&gt; <a href="https://www.kaggle.com/roydatascience/elo-merchant-recommendation-fathers-day-specials),tks">https://www.kaggle.com/roydatascience/elo-merchant-recommendation-fathers-day-specials),tks</a> to Ashish Gupta)</li> <li>Reproduce features from the baseline feats (except holidays) on subsets, selected mainly through lag &gt; threshold (-3, -6) as the subset condition; reproducing feats we had already built, such as month_diff ---&gt; lag_3_month_diff, lag_6_month_diff, gave me at least 10 thousandths of improvement</li> <li>About merchants.csv [user preference]: find each user's preferred merchant (id) and use it, i.e. use its features calculated from merchants.csv</li> <li><p>Ratios, basically new/his: a feature calculated on new_merchant divided by the same feature calculated on hist</p></li> <li><p>word2vec, using a what-and-when-and-where format to build sentences (see the sketch at the end of this write-up)</p></li> <li>Timestamp: use feats built from the raw timestamp</li> </ol> <p>Apart from modeling, I think the most important things are to do more EDA, understand the feats well, and ask what the company's business is. 10k feats: 1. new_mount_sum/his new_date_max_tonow/his new_date_max_tonow/his 2.
pref_merchant_purchase_amount_percard</p> <p>pref_merchants_avg_sales_lag3_ratio</p> <p>new_pref_merchants_avg_sales_lag3_ratio</p> <p>pref_merchants_avg_sales_lag3_div_6</p> <p>new_pref_merchants_avg_sales_lag3_div_6</p> <p>new_pref_merchants_avg_sales_lag3</p> <p>pref_merchants_avg_sales_lag3</p> <p>new_pref_merchants_numerical_1</p> <p>new_pref_merchants_numerical_1_ratio</p> <p>pref_merchants_numerical_1</p> <p>pref_merchants_numerical_1_ratio</p> <p>pref_merchants_most_recent_sales_range_ratio</p> <p>new_pref_merchants_most_recent_sales_range</p> <p>new_pref_merchants_most_recent_sales_range_ratio</p> <p>pref_merchants_most_recent_sales_range</p> <p>new_pref_merchants_category_4</p> <p>pref_merchants_category_4</p> <p>thank you</p> <p>thanks</p> <p>TOP 50th feats</p> <ol> <li>user_his-new_transactions_purchase_amount_mean</li> <li>user_new_transactions_purchase_amount_max</li> <li>card_merchant_pay_hist_card_pay_count_mean</li> <li>user_new_transactions_purchase_amount_mean</li> <li>user_new-his_transactions_purchase_timestamp2</li> <li>user_transactions_user_merchant_purchase_timestamp_min_skew</li> <li>user_his_transactions_merchant_purchase_month_frequence</li> <li>liu_new_hist_duration_max</li> <li>user_his_transactions_timedelta_skew</li> <li>liu_new_hist_amount_month_ratio_max</li> <li>user_new-his_transactions_purchase_timestamp1</li> <li>purchase_amount5</li> <li>Lag_3purchase_amount_min</li> <li>purchase_amount0</li> <li>user_his/new_transactions_purchase_amount_std</li> <li>purchase_amount2</li> <li>merchant_category_id0</li> <li>Lag_3purchase_amount_std</li> <li>user_his_transactions_month_lag_0_ratio</li> <li>liu_ratio_2</li> <li>month_lag6</li> <li>purchase_min</li> <li>user_his_transactions_timedelta_std</li> <li>user_history_transactions_purchase_amount_skew</li> <li>Lag_3purchase_amount_mean</li> <li>user_transactions_user_merchant_purchase_timedelta_skew</li> <li>liu_ratio_4</li> <li>user_new_transactions_purchase_amount_min</li> <li>user_new_transactions_month_lag_mean</li> <li>merchant_category_id1</li> <li>user_purchase_count_in_same_merchant_skew</li> <li>subsector_id8</li> <li>user_new_transactions_purchase_timestamp_mean</li> <li>purchase_amount4</li> <li>subsector_id2</li> <li>user_transactions_merchant_purchase_month_frequence</li> <li>subsector_id0</li> <li>month_lag5</li> <li>month_lag9</li> <li>merchant_category_id6</li> <li>city_id8</li> <li>purchase_max</li> <li>user_category_all_purchase_timestamp_min_std</li> <li>Lag_3weekend_mean</li> <li>user_his/new_transactions_purchase_amount_mean</li> <li>user_transaction_timedelta_mean_before_Black_Friday_2047</li> <li>user_authorized_flagN_transactions_ratio</li> <li>card_merchant_pay_hist_card_pay_mean_min</li> <li>city_id2</li> <li>purchase_amount8</li> </ol> <p>sure. but I am doing microsoft competition now, not a good time now . I will make an kernel sharing it with you later</p> <p>it's something like slides window teck in time series, mainly throuh Lag variable's values to choose subsets to redo what has bend done on total dateset</p>
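<p>A rough sketch of the what-and-when-and-where word2vec idea referenced in the list above. The token format and column choices are illustrative assumptions, not the author's exact recipe:</p> <pre><code>import pandas as pd
from gensim.models import Word2Vec

hist = pd.read_csv('historical_transactions.csv', parse_dates=['purchase_date'])

# One "word" per transaction: what (merchant), when (weekday), where (city).
hist['token'] = (hist['merchant_id'].astype(str) + '_'
                 + hist['purchase_date'].dt.dayofweek.astype(str) + '_'
                 + hist['city_id'].astype(str))

# One "sentence" per card: its transactions in time order.
sentences = (hist.sort_values('purchase_date')
                 .groupby('card_id')['token']
                 .apply(list)
                 .tolist())

# gensim 4 uses vector_size; older gensim versions use size.
w2v = Word2Vec(sentences, vector_size=16, window=5, min_count=1, workers=4)

# Card-level features can then be built by averaging each card's token vectors.
</code></pre>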
Elo Merchant Category Recommendation
28th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is my first time playing on Kaggle, and I want to share something about our model. We built two models: a regression model trained without outliers, and a binary model whose target is the outlier flag. The final result = regression result + (-33 * binary result). Our best score on the PB is 3.677.</p>
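<p>A minimal sketch of this two-model combination, assuming card-level features <code>X</code>, <code>X_test</code> and a target Series <code>y</code> are already built; in practice both models would be fit with cross-validation and out-of-fold predictions:</p> <pre><code>import lightgbm as lgb

is_outlier = y.lt(-30).astype(int)  # flags the -33.21928... targets

# Regression model trained without the outliers.
reg = lgb.LGBMRegressor(n_estimators=1000, learning_rate=0.01)
reg.fit(X[is_outlier == 0], y[is_outlier == 0])

# Binary model whose target is the outlier flag.
clf = lgb.LGBMClassifier(n_estimators=1000, learning_rate=0.01)
clf.fit(X, is_outlier)

# Final result = regression result + (-33 * outlier probability).
p_outlier = clf.predict_proba(X_test)[:, 1]
final_pred = reg.predict(X_test) + (-33) * p_outlier
</code></pre>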
Elo Merchant Category Recommendation
31st place, Some of Feature Engineering
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here you go, one of the models out of many: <a href="https://www.kaggle.com/mks2192/fork-of-train-test-features-old-new-combin-150-fea/notebook">https://www.kaggle.com/mks2192/fork-of-train-test-features-old-new-combin-150-fea/notebook</a></p> <p>Here is some of the feature engineering:</p> <p><a href="https://www.kaggle.com/mks2192/feature-engineering/notebook">https://www.kaggle.com/mks2192/feature-engineering/notebook</a></p> <p>It takes more than the kernel time limit, so I had to run it on my PC.</p> <p>This is mainly FE for cityID, subsectorID, merchantID and merchant_categoryID. During FE I did not use target information the way we do in mean encoding, so the features are free from the fear of overfitting.</p> <p>The features coming out of this are here: <a href="https://www.kaggle.com/mks2192/elo-features-data-7-feb">https://www.kaggle.com/mks2192/elo-features-data-7-feb</a></p> <p>Thanks,</p> <p>PS: some of the code is adapted from others, especially <a href="/mitsuru">@mitsuru</a>_fujiwara</p> <p>Hi Mike, I see it as shared publicly, can you check again?</p>
Elo Merchant Category Recommendation
55th solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone, congrats all medal winners - you all well deserve the rewards for this exciting competition!!!</p> <p>I’d also like to thank my teammate rcarson. Sorry buddy we didn’t do very well this time but let’s makenit better next time!!</p> <p>While there's nothing to boast as a 55th, I do think there could be something in my solution/experience that might interest you. So there you go:</p> <h1>Feature engineering</h1> <p>Most of the features were derived from Chau Huynh's 3.699 public LB kernel (<a href="https://www.kaggle.com/chauhuynh/my-first-kernel-3-699">https://www.kaggle.com/chauhuynh/my-first-kernel-3-699</a>) with minor modifications. I had tried quite a lot of feature engineerings but only a few ended up being helpful including:</p> <ol> <li>Transaction counts by month lag. This is probably, as many of you may have been aware, the most significant engineered feature. (0.005+) </li> <li>Ridge stackings on tf-idf of a variety of entities count by card id (historical transactions only). e.g. merchant id, merchant category id, and combinations of them. In fact, adding those features directly to the model didn't seem to help, nor more complicated embeddings such as word2vec and fasttext. However, adding an additional ridge stacking layer on top of those sparse features yielded considerable improvements, both on public board and private board. (0.002+).</li> <li>Similarities calculated between historical and new transactions using word2vec and fasttext gave minor improvements as well. (0.001+)</li> </ol> <h1>Modeling</h1> <p>I used primarily LightGBM (should have spent more time on NN using various embeddings but didn't get a chance to implement due to busy schedule (laziness indeed LOL).</p> <h1>Feature selection</h1> <p>Inspired by @Yifan Xie's post (<a href="https://www.kaggle.com/c/elo-merchant-category-recommendation/discussion/75935">https://www.kaggle.com/c/elo-merchant-category-recommendation/discussion/75935</a>), I started exploring feature selections and after playing different strategies I ended up with RFE + LightGBM (random forest booster) which gave a decent boost about 0.006+.</p> <p>I then developed a bit crazy (dummy) stackings using similar RFE framework but different parameters and the number of features selected are between 60-100 and each ended up with comparable results. However, stacking of them didn't make any advances on LB.</p> <h1>Postprocessing</h1> <p>Just like most of top, and probably bottom rankers - and I'm sorry for the latter - I applied post-processing using a 3-models solution originally proposed by 李胃疼疼疼 (<a href="https://www.kaggle.com/waitingli/combining-your-model-with-a-model-without-outlier">https://www.kaggle.com/waitingli/combining-your-model-with-a-model-without-outlier</a>). 
Given that the original solution didn't seem to be able to allow us to perform CV so I modified it a bit particularly on the model without outliers: instead of predicting on test data only I made it generate out-of-fold predictions on training data too, which was quite similar to stacking with a small trick which allows it to make predictions on "all" training data rather than "without outliers" training data.</p> <p>This allows me to test out the best threshold instead of having to guess. In addition, instead of using predictions from the model trained with all data for those samples with p(outlier)&gt; threshold, I used the average of predictions from both models (all and without outliers), which seemed to be more stable in the end. (0.003+)</p> <h1>Blending</h1> <p>I had to confess that my final submissions were blended with some of the outputs from public kernels, which gave me a boost of 0.001~.</p> <p>I have been enjoying the competitions and shared the same self-doubting and frustrating moments as many of you may have had. In the end, it's not only the PB position, nor the medal, but <strong>these hard working moments</strong> made us stronger than what we were at the beginning of this competition.</p> <p>Cheers,</p> <p>Chris,</p>
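<p>A minimal sketch of the RFE + LightGBM (random forest booster) feature-selection step described above, assuming a feature matrix <code>X_train</code> and target <code>y_train</code>; the parameter values are placeholders, not the ones actually used:</p> <pre><code>import lightgbm as lgb
from sklearn.feature_selection import RFE

# Random-forest-style boosting in LightGBM requires bagging to be enabled.
base = lgb.LGBMRegressor(
    boosting_type='rf',
    n_estimators=200,
    bagging_freq=1,
    bagging_fraction=0.8,
    feature_fraction=0.8,
)

# Recursively drop the least important features until about 80 remain,
# 10 features at a time, using the booster's feature_importances_.
selector = RFE(estimator=base, n_features_to_select=80, step=10)
selector.fit(X_train, y_train)

selected_cols = X_train.columns[selector.support_]
</code></pre>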
Elo Merchant Category Recommendation
7th Place Solution (Updated Late Submission Finding)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, tabular data competitions are so attractive that you never know which technology can work which not</p> <p>No, I think best mse equal to best rmse ,isn't it? I never have shakeup experience,so I think should make the single model strong enough to prevent huge shakedown.</p> <p>Thanks , I am waiting for your team's solution.</p> <p>yes, single lgb is better than stacking on private score,so I think stacking overfit too much.</p> <p>Thanks, trial and error are necessary</p> <p>yes,kaggle is the best place to learn pratical skills</p> <p>normal features worked unwell that have to try new ideas</p> <p>I read your github and learned something useful :)</p> <p>thanks,your 4 days journey is impressive</p> <p>Same to you.</p> <p>Congrats! nice idea of using Ridge and tfidf,I will try to use that at next comp.</p> <p>you are right,after testing late submission,I found isotonic regression overfit too much,that's why I shakedown.</p> <p>Congratulations your first silver medal.</p> <p>your team did well</p> <p>similar with this post from homecredit solution <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64503">https://www.kaggle.com/c/home-credit-default-risk/discussion/64503</a></p> <p>no it's feature numbers, we use <code>scores_df = pd.DataFrame(feature_scores, columns=['feature', 'split_score', 'gain_score']) </code> the gain_score is 1.0 or 1.5 or 0 due to your cv performance</p> <p>thanks</p> <p>Also congratulations your first silver medal.</p> <p>thanks, the door is half opened</p> <p>I saw your mail,I have added source code to post</p> <p>I add source code to post</p> <p>when new features can not improve the best model,I start to build another model with those features and add new from different idea that give me 12 different feature sets. I didn't spend too much time to choose features, just try some thresholds to choose top N important features.</p> <p>Congratulations your first silver medal.</p> <p>yes,local cv.I merged all the features seems good to create a best single model.</p> <p>yeah, reproduce solution is good way to learn.</p> <p>Thanks</p> <p>Basicly I don't select features due to LB, I think I got 0.002 cv boost from svd features when my cv is 3.632,I combinate only w2v features to train.csv ,can boost from 3.84 to 3.67</p> <p>I like it too,unfortunately I stop to deepdive </p> <p>Congrats to all winner teams and new masters,experts.It's a very hard competition,I think every team work hard to try everything and fight with overfitting every day,no one fails who does his best. 
I started at very early stage so maybe I just did more work than others,let me share my solution briefly.</p> <h1><strong>Preprocessing</strong></h1> <p>sort the historical and new data by timestamp then create 8 kinds of dataset for different features.</p> <ol> <li>Only historical data</li> <li>Only historical data with authorized_flag=1</li> <li>Only new data </li> <li>Merge of historical data with authorized_flag=1 and new data </li> <li>Merge of historical data and new data </li> <li>Merge of historical data and merchants data</li> <li>Merge of new data and merchants data</li> <li>Merge of historical data and new data and merchants data</li> </ol> <h1><strong>Feature Engineering</strong></h1> <h2>Aggregation</h2> <p>I think there are not big difference with everyone's aggregation features, I want to thank raddar's magic that purchase_amount_new can bring additional features beside of original purchase_amount.</p> <h2>Interval</h2> <p>days interval(the difference from current purchase_date to next purchase_date) purchase_amount interval(the difference from current purchase_amount to next purchase_amount)</p> <h2>Interaction</h2> <p>new.purchase_date.max() / historical.purchase_date.max() is very strong</p> <h2>SVD</h2> <p>dimession reducing from original card_id's merchant|merchant_category transaction sequence to 5 ``` def svd_feature(prefix, df, traintest, groupby, target,n_comp): tfidf_vec = TfidfVectorizer(ngram_range=(1,1), max_features=None) df_bag = pd.DataFrame(df[[groupby, target]]) df_bag = df_bag.groupby(groupby, as_index=False)[target].agg({'list':(lambda x: list(x))}).reset_index() df_bag[target + '_list']=df_bag['list'].apply(lambda x: str(x).replace('[','').replace(']','').replace(',',' '))</p> <pre><code>df_bag = df_bag.merge(traintest,on=groupby,how='left') df_bag_train = df_bag[df_bag['target'].notnull()].reset_index(drop=True) df_bag_test = df_bag[df_bag['target'].isnull()].reset_index(drop=True) tfidf_full_vector = tfidf_vec.fit_transform(df_bag[target + '_list']) tfidf_train_vector = tfidf_vec.transform(df_bag_train[target + '_list']) tfidf_test_vector = tfidf_vec.transform(df_bag_test[target + '_list']) svd_vec = TruncatedSVD(n_components=5, algorithm='arpack') svd_vec.fit(tfidf_full_vector) svd_train = pd.DataFrame(svd_vec.transform(tfidf_train_vector)) svd_test = pd.DataFrame(svd_vec.transform(tfidf_test_vector)) svd_train.columns = ['svd_%s_%s_%d'%(prefix,target,x) for x in range(n_comp)] svd_train[groupby] = df_bag_train[groupby] svd_test.columns = ['svd_%s_%s_%d'%(prefix,target,x) for x in range(n_comp)] svd_test[groupby] = df_bag_test[groupby] #df_svd = pd.concat([svd_train,svd_test],axis=0) print ('svd_train:' + str(svd_train.shape)) print ('svd_test:' + str(svd_test.shape)) return svd_train,svd_test </code></pre> <p>```</p> <h2>Word2vec</h2> <p>create each merchant_id's , merchant_category_id's , purchase_date's ... 
word embedding,then group by card_id's min/max/mean/std <code> def word2vec_feature(prefix, df, groupby, target,size): df_bag = pd.DataFrame(df[[groupby, target]]) df_bag[target] = df_bag[target].astype(str) df_bag[target].fillna('NAN', inplace=True) df_bag = df_bag.groupby(groupby, as_index=False)[target].agg({'list':(lambda x: list(x))}).reset_index() doc_list = list(df_bag['list'].values) w2v = Word2Vec(doc_list, size=size, window=3, min_count=1, workers=32) vocab_keys = list(w2v.wv.vocab.keys()) w2v_array = [] for v in vocab_keys : w2v_array.append(list(w2v.wv[v])) df_w2v = pd.DataFrame() df_w2v['vocab_keys'] = vocab_keys <br> df_w2v = pd.concat([df_w2v, pd.DataFrame(w2v_array)], axis=1) df_w2v.columns = [target] + ['w2v_%s_%s_%d'%(prefix,target,x) for x in range(size)] print ('df_w2v:' + str(df_w2v.shape)) return df_w2v </code></p> <h2>Transaction-based Model</h2> <p>use above 6,7,8 dataset, merge train's target to each transaction row,then create a lightgbm to output each transaction's meta feature , then groupby card_id's min/sum to create card_id's meta feature , it can boost 0.005 ~ 0.006 both on cv and lb.</p> <h2>Feature Selection</h2> <p>at early stage I add new features one by one,sometimes group by group,at late stage I use target permutation learned from <a href="https://www.kaggle.com/ogrellier/feature-selection-with-null-importances">https://www.kaggle.com/ogrellier/feature-selection-with-null-importances</a> I created thousands of features,finally get 12 different feature sets with number from 200 ~ 700</p> <h2>Top 20 Feature Importance</h2> <p><code> ('reborn_new_min_submodel', 3212145.126355231) ('reborn_hist_new_min_submodel', 2897647.0672301054) ('reborn_hist_min_submodel', 2571620.7657690495) ('reborn_hist_sum_submodel', 1556242.6892476082) ('reborn_hist_new_sum_submodel', 1495420.6608867645) ('card_purchase_date_max_ratio', 1176560.0546371937) ('reborn_new_sum_submodel', 988300.9342432022) ('hist_new_purchase_date_max_ratio', 751098.8485645056) ('hist_new_sum_category_1_1_purchase_amount', 634349.3938679695) ('hist_sum_category_1_1_purchase_amount', 626576.9190101624) ('new_card_sum_purchase_amount_new', 573281.6908314228) ('hist_card0_max_month_diff', 539578.2926783562) ('new_card_var_day_diff', 537170.215294838) ('new_card_last_day', 532604.059334755) ('hist_card_sum_month_lag_0_purchase_amount_new', 530102.9405331612) ('hist_card0_mean_month_diff', 506467.2120089531) ('svd_hist_new_merchant_id4', 454442.183167696) ('hist_card0_sum_month_diff', 445077.63865828514) ('hist_card_std_merchant_category_id_w2v_4', 436928.8732688427) ('hist_new_card_std_merchant_category_id_w2v_1', 434849.51894950867) </code></p> <h1><strong>Single Model</strong></h1> <h2>Lightgbm</h2> <p>My best single model is a lightgbm has 385 features with CV:3.6144,LB:3.668,PB:3.593,maybe it can reach to 3th place.It also happened at homecredit competition,best single model is much better than stacking,I will doubt if stacking is a good choice.</p> <h2>Nueral Netowork</h2> <p>My best nn has CV:3.6407,LB:3.680,PB:3.605,it's just a simple 3layers-dense network.</p> <p><code> def ann(input_shape): model = Sequential() model.add(Dense(2 ** 10, input_dim = input_shape, init='random_uniform', activation='relu')) model.add(Dropout(0.25)) <br> model.add(BatchNormalization()) model.add(Dense(2 ** 9, init='random_uniform', activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.25)) model.add(Dense(2 ** 5, init='random_uniform', activation='relu')) model.add(BatchNormalization()) 
model.add(Dropout(0.25)) <br> model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') return model </code></p> <h2>Others</h2> <p>Xgboost and Catboost got not bad results but almost no improvement to stacking. Also Tried Ridge,KNN,FFM...no success to me.</p> <h1><strong>Ensemble</strong></h1> <p>I built 12 lightgbm models and 40 nn models for Stacking stage1,then use lightgbm and nn as stage2,finally blend them with average.CV can reach to 3.603,but lb got worse,and final pb also got worse.</p> <h1><strong>Postprocessing</strong></h1> <p>Isotonic regression give me 0.005 ~ 0.006 boost both on cv and lb but pb score become worse,that's the reason I shakedown. Combination with a non-outliers model using binary model's best threshold give me 0.001 ~ 0.002 boost both on cv and lb and pb.</p> <h1><strong>Leaderboard Probing</strong></h1> <p>I used 100 subs to probe the lb,that let me grasp 24 public-test outliers and 6 non-outliers. I tried Pseudo-Labelling but no boost,finally it just can be used to decide a "perfect threshold" to override the outliers but failed.</p> <h1><strong>Other tricks</strong></h1> <p>at very early stage I tried target transformation,using minmaxscaler to transform from 0 ~ 1, then use xentropy boosting type,finally reversed to original value range,it didn't improve cv and lb, but I found pb is very good, a very eary stage lightgbm model is enough to get a gold medal.</p> <h1><strong>Late Submission Founding</strong></h1> <p>Isotonic regression overfit both on cv and lb, should be careful. Stacking can work if stage-2 model is nn or ridge ,lgb and extratree overfit too much.</p> <p>I manully start different features combination then use target permutation,choose the best importance threshold,the threshold is from 200 ~ 700</p> <p>Thanks, I think Olivier is deserved every kagglers' upvote</p> <p>your team is very strong</p> <p>Congratulations your first silver medal.</p> <p>Congratulations your first gold medal.</p> <p>Come with me</p> <p>about 300 features</p> <p>don't use groupby features,like this <code>historical_train = historical.merge(train,on='card_id',how='right') historical_test = historical.merge(test,on='card_id',how='right')</code></p> <p>thanks</p> <p>after trained a word2vec model,one merchant_id should has a vector like [w2v_merchant_1,w2v_merchant_2,w2v_merchant_3,w2v_merchant_4,w2v_merchant_5],then you can use historical or new data to create aggregation features like groupby(['card_id'])['w2v_merchant_1'].mean()</p> <p>``` for f in [ 'new_card_purchase_date_max', 'hist_card_purchase_date_max',]: df[f +'_int'] = df[f].astype(np.int64) * 1e-9</p> <p>df['card_purchase_date_max_ratio'] = df['new_card_purchase_date_max_int'] / df['hist_card_purchase_date_max_int'] ```</p>
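<p>The "Interval" features above (the gap from one purchase to the next per card) can be sketched roughly as follows; column names follow the Elo transaction files and the aggregation list is illustrative:</p> <pre><code>import pandas as pd

hist = pd.read_csv('historical_transactions.csv', parse_dates=['purchase_date'])
hist = hist.sort_values(['card_id', 'purchase_date'])

# Gap between consecutive purchases of the same card, in days and in amount.
grp = hist.groupby('card_id')
hist['day_interval'] = grp['purchase_date'].diff().dt.days
hist['amount_interval'] = grp['purchase_amount'].diff()

interval_feats = hist.groupby('card_id')[['day_interval', 'amount_interval']].agg(
    ['mean', 'std', 'max', 'min'])
interval_feats.columns = ['_'.join(c) for c in interval_feats.columns]
</code></pre>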
Elo Merchant Category Recommendation
Congratulations to everyone!!!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Glad it helped, and CONGRATULATIONS on the prize-winning performance!</p> <p>all right, all right, I get the message :)</p> <p>yup</p> <p>Not being able to compete for the last month definitely helped, although I had honestly run out of ideas by that time :)</p> <p>Sorry, it was meant to be a multiplication formula, I put X in now.</p> <p>Hello all, congratulations to the winner, the gold medalists, and especially to those who survived the shake-up :)</p> <p>I got busy with my work and had to stay away for 4 weeks without doing anything; looking back now, that may have prevented me from overfitting my model further!</p> <p>The shake-up seems to be quite strong, and I am not entirely sure what worked in my solution and what didn't. I am pretty sure the following were helpful:</p> <ul> <li>feature selection by means of feature permutation</li> <li>a significant amount of pseudo labelling, even concatenating multiple copies of the test data and predicting their labels (see the sketch at the end of this post). One of my best single models was 3 x pseudo labels with public 3.668 / private 3.600, but sadly I didn't select it</li> <li>a little moderated post-processing for predictions with more than 0.50 outlier probability seems to help - I used a formula of binary_probabilityX1.1XOUTLIER_VALUE, which helped to improve my best final submission</li> </ul> <p>Anyway, I have learnt a lot throughout this competition -- even though I didn't have as much fun as I would have liked to - competing by yourself is hard, turning down team-up invitations from friends is HARD, getting that solo gold is VERY HARD!</p> <p>See you all around the LB!!!!</p>
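<p>A rough sketch of the pseudo-labelling idea above (concatenating several copies of the pseudo-labelled test set), assuming card-level <code>X_train</code>, <code>y_train</code>, <code>X_test</code> already exist; the model choice and parameters are placeholders:</p> <pre><code>import lightgbm as lgb
import pandas as pd

# Fit a base model and use its test predictions as pseudo labels.
base = lgb.LGBMRegressor(n_estimators=2000, learning_rate=0.01)
base.fit(X_train, y_train)
pseudo_y = pd.Series(base.predict(X_test), index=X_test.index)

# "3 x pseudo labels": append three copies of the pseudo-labelled test set.
n_copies = 3
X_aug = pd.concat([X_train] + [X_test] * n_copies, axis=0)
y_aug = pd.concat([y_train] + [pseudo_y] * n_copies, axis=0)

# Refit on the augmented data and predict the test set again.
final = lgb.LGBMRegressor(n_estimators=2000, learning_rate=0.01)
final.fit(X_aug, y_aug)
pred = final.predict(X_test)
</code></pre>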
Elo Merchant Category Recommendation
First silver zone, Some ideas sharing, Congrats to everyone!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@Manchoi , 6 in total, as I mentioned in the models part: each section has one lgbm and one NN using exactly the same features! Hope the explanation helps!</p> <p>Thanks! Hope you enjoy the competition as well!</p> <p>Hi @Manchoi, I’m not sure if this makes sense to you or not. As the classifier provides a probability of whether it’s an outlier or not, I use that probability as one feature for the final stacking Ridge regression model. By doing so, the ridge regression decides the coefficient of the probability and I won’t need to manually adjust the outlier threshold.</p> <p>Hi Kagglers,</p> <p>Congrats to everyone who participated in this difficult competition! This is my first serious competition and first medal after I joined Kaggle. I indeed learned a lot from everyone's posts and kernels in this competition. To be honest, I am very surprised by my final result because I expected to fall out of the bronze zone according to the public LB score.</p> <p>Although my final result might not be very outstanding compared to other talents, I would like to share my solution in the hope it could help.</p> <p><strong>FE</strong> </p> <ul> <li>a. Aggregate the ' hist_transaction data ', choosing only ' authorized_flag ' == 1</li> <li><p>b. Aggregate the ' new_transaction data ' using the same features as ' hist_transaction data ' but grouped by ' month_lag' [ 1 , 2], ' authorized_flag' == 1 </p></li> <li><p>c. Merge a, b. and use log1p to normalize the data</p></li> <li>e. I select <strong>abs(target) &gt;= 10 as outliers instead of only -33.2</strong></li> </ul> <p><strong>Models</strong>, with some feature-importance selection done using the lgbm results.</p> <ul> <li>a. 1 lgb model and 1 NN model for all data including outliers (5 fold , StratifiedKFold)</li> <li>b. 1 lgb model and 1 NN model for all data excluding outliers (5 fold , StratifiedKFold, separated by target values abs()&gt;5)</li> <li>c. 1 lgb model and 1 NN model for outlier detection (5 fold , StratifiedKFold)</li> <li>d. Using Ridge() to stack the a, b, c models (see the sketch at the end of this post)</li> </ul> <p>I think I am very lucky this time because my CV is only 3.63X but the final result is even lower than my CV. I would like to try more of the tricks shared by other kagglers to see if the score can improve.</p> <p>Thanks for reading!</p>
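<p>A minimal sketch of the Ridge stacking step (item d), assuming the out-of-fold predictions of the three sections above are already available as 1-D arrays; all names here are illustrative:</p> <pre><code>import numpy as np
from sklearn.linear_model import Ridge

# Out-of-fold predictions on train / full predictions on test from:
# the all-data models, the no-outlier models, and the outlier classifier (probability).
meta_train = np.column_stack([oof_all, oof_no_outlier, oof_outlier_prob])
meta_test = np.column_stack([pred_all, pred_no_outlier, pred_outlier_prob])

stacker = Ridge(alpha=1.0)
stacker.fit(meta_train, y_train)         # Ridge assigns a weight to each input,
final_pred = stacker.predict(meta_test)  # including the outlier probability.
</code></pre>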
Elo Merchant Category Recommendation
Learning is everything - 173rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone, thanks for the great competition and discussion. Much was was learned and put into practice.</p> <p>I went into this competition marked by <a href="https://www.kaggle.com/yifanxie">Yifan Xie</a> <a href="https://www.kaggle.com/c/elo-merchant-category-recommendation/discussion/75935">words</a>: </p> <blockquote> <p>Each and every kaggle competition shall be treated as a work/study project in which you should setup a proper workflow to structure your work.</p> </blockquote> <p>With that in mind I hope someone can take from this just as I learnt a lot from all the discussions.</p> <h2>Organization</h2> <p>Github for code, reproducible jupyter notebook, lgbm/xgboost for testing things, ensemble of 4 models for submitting. The train_model function <a href="https://www.kaggle.com/artgor/elo-eda-and-models">here</a> was a great find.</p> <h2>Feature Engineering</h2> <p>Much feature engineering was done. It would not have been possible with the much used <a href="https://www.kaggle.com/gemartin/load-data-reduce-memory-usage">memory usage script</a> frequently shared here. Here is everything that wen into the final models, in order of what I tried. Since most ideas are not new, I will link to other users example implementation</p> <ul> <li><p>Basic time features were initially done, things <a href="https://www.kaggle.com/ashishpatel26/lightgbm-goss-dart-parameter-tuning">such as this</a></p></li> <li><p>More heavy time feature engineering, things such as <a href="https://www.kaggle.com/rayidura/elo-lag-and-lgbm-scores/code">here</a>, <a href="https://www.kaggle.com/tunguz/eloda-with-feature-engineering-and-stacking">here</a> and <a href="https://www.kaggle.com/gpreda/elo-world-high-score-without-blending/codea">here</a>.</p></li> <li><p>Negative <a href="https://www.kaggle.com/rayidura/elo-don-t-forget-the-past/code">month lag</a> features </p></li> <li><p><a href="https://en.wikipedia.org/wiki/RFM_%28customer_value%29">RFM</a>, from both the customer_id and merchant_id, weighted and not weighted</p></li> <li><p>Aggregations that only had few examples. 
For this, special care was taken not to include information from whatever was in the current validation set, i.e., for each pass of the fold I would calculate it from the training data of that fold only.</p></li> <li><p>More and more aggregations and stats </p></li> </ul> <h2>Models</h2> <p>The final model was based on combining a model with and a model without outliers, as mentioned <a href="https://www.kaggle.com/waitingli/combining-your-model-with-a-model-without-outlier">here</a>, with some extra modifications; basically, with a 3rd model to predict outliers, I found an ideal threshold amount to set as outliers, and for those instances the prediction was the average of the two models; for all others the prediction was based on the model without outliers.</p> <p>Each model was in itself an ensemble of xgboost, catboost, lightgbm - stacking was tried, but the results weren't an improvement.</p> <h2>Things I didn't have the time to do</h2> <p>Late into the competition I had to take a break due to work/traveling, so some things in the pipeline could not be tried to the end. </p> <p>I was playing with the idea of manually identifying states (and possibly a number of cities from the correlation of spending/population) and including external economic and social data, based on <a href="https://www.kaggle.com/tapioca/category-1-2-3-in-transactions">my findings</a> that category 2 identifies the 5 different regions in Brasil, which other people also found.</p> <p>Neural networks were also one of my next go-to things to try but alas, no time :/</p> <h2>Ideas that failed</h2> <p>FM and <a href="https://www.youtube.com/watch?v=1cRGpDXTJC8&amp;t=1s">FFM -</a> spent some time understanding the theory, but in the end all the models built were very subpar.</p> <p><a href="https://docs.featuretools.com/">Featuretools</a> - It kind of worked and some of the features were slightly useful, but not what I expected, although I was also lacking in computing power by this point</p> <p>Thanks everyone for the great competition and discussion!</p>
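<p>A rough sketch of the fold-aware aggregation mentioned above (group statistics computed only from each fold's training portion, so validation rows never leak into their own features). The grouping column <code>merchant_id_mode</code> is a hypothetical example:</p> <pre><code>import numpy as np
from sklearn.model_selection import KFold

# Assumed input: a train DataFrame with 'merchant_id_mode' and 'target' columns.
train['merchant_mean_target'] = np.nan
kf = KFold(n_splits=5, shuffle=True, random_state=42)

for tr_idx, val_idx in kf.split(train):
    tr = train.iloc[tr_idx]
    val = train.iloc[val_idx]
    # Statistics come from the fold's training part only, then are mapped
    # onto the fold's validation part.
    means = tr.groupby('merchant_id_mode')['target'].mean()
    train.loc[train.index[val_idx], 'merchant_mean_target'] = (
        val['merchant_id_mode'].map(means).values)
</code></pre>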
Elo Merchant Category Recommendation
My simple trick for this competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for kale's explanation. I'm poor in English; kale is right.</p> <p>The most important features come from this post: <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64503#378162">https://www.kaggle.com/c/home-credit-default-risk/discussion/64503#378162</a></p> <p>Maybe your features are not a good fit for the outlier classifier; my outlier classifier reaches 0.914.</p> <p>Classifier AUC: 0.914, no-outlier regression CV: 1.545</p> <p>train['final'] = train['bin_predict']*(-33.21928)+(1-train['bin_predict'])*train['no_outlier']</p> <p>This gives a 0.015 boost in local CV compared with training directly on the same features.</p> <p>I think this trick works because a binary objective is better than regression when the label is 1&amp;0, even though the metric is RMSE.</p> <p>It is like Kaza's post <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59885">https://www.kaggle.com/c/avito-demand-prediction/discussion/59885</a>: that competition was RMSE too, but Kaza used xentropy as their objective.</p> <p>add: Classifier AUC: 0.914, no-outlier regression CV: 1.545</p> <p>The most important features for binary_predict come from this post: <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64503#378162">https://www.kaggle.com/c/home-credit-default-risk/discussion/64503#378162</a></p> <p>I think this trick works because a binary objective is better than regression when the label is 1&amp;0, even though the metric is RMSE.</p> <p>It is like Kaza's post <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59885">https://www.kaggle.com/c/avito-demand-prediction/discussion/59885</a>: that competition was RMSE too, but Kaza used xentropy as their objective.</p>
Elo Merchant Category Recommendation
Sharing ideas and results
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Elo Merchant Category Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for my teammates and all kagglers.</p> <p>I want to share some ideas used in my model (mainly feature engineering) and result.</p> <h2><strong>Feature Engineering</strong></h2> <p><strong>1. Split data</strong></p> <p>I think the most important part in this competition is to derive useful features from chunk of data split by various standards.</p> <p>I split data based on : </p> <ol> <li>Historical / New</li> <li>month_lag (each month_lag, range of month_lag, rank..)</li> <li>purchase_date</li> <li>authorized_flag</li> <li>categorical columns (category_1, category_3, category_4)</li> </ol> <p>For some cases, i split data using two or three standards simultaneously (ex. with category_1 = 1 at month_lag = 0). In my model, features from month_lag and category_1 are most powerful.</p> <p>One of interesting methods i tried is using ranking method for month_lag and purchase_date.</p> <pre><code>df_his_tran['month_rank'] = df_his_tran.groupby(['card_id'])['month_lag'].rank(method='dense',ascending=False) df_his_tran['date_rank'] = df_his_tran.groupby(['card_id'])['purchase_date'].rank(method='dense',ascending=False) </code></pre> <p>Features based on these rank numbers can give you compact information for the latest purchase of each card_id or vice versa. They improved both my CV and LB.</p> <p>BTW, i also tried to rank purchase_amount grouped by card_id, so i can use information such as "the gap between the date with highest purchase_amount and now (or reference date)". but it didn't work.</p> <p><strong>2. Adding true categorical features to each card_id</strong></p> <p>Since each card_id has multiple transactions, categorical features in transactions such as category_1, 2, 3, 4, and identifier columns(merchant_id, subsector_id, ..etc) are represented as aggregated values (mean, sum, nunique..) in normal ways.</p> <p>But, we can obtain true categorical features by getting representative category values which have highest frequency in transactions of each card_id. I got improvement by using category_2 and subsector_id in this way.</p> <p><strong>3. Using <em>merchants</em>.csv</strong></p> <p>I tried basic aggregations of all columns in <em>merchants</em>.csv. Only category_4 and merchant_group_id worked for me. FYI, you should be careful when you use <em>merchants</em>.csv because it has duplicate merhcant_id.</p> <p><strong>4. What didn't work (based on LB)</strong></p> <ul> <li>Tfidf, CountVectorizer for identifiers</li> <li>Categorical embedding for identifiers</li> <li>Target encoding (naive way, leave-one-out, ...)</li> <li>FFM on categorical features</li> </ul> <p><strong>Total 93 features selected by GBM importances, RFE, manual works.</strong> </p> <h2><strong>Models</strong></h2> <ul> <li>Lightgbm</li> <li>Xgboost</li> <li>Catboost</li> <li>NN (for categorical embedding / FFM)</li> </ul> <p>Like most of teams, i employed both single model method and 3-model method with outlier handling. All of three GBM algorithms showed some potential for improvement by blending. 
But I only used Lightgbm as my final model for our team's ensemble result.</p> <h2>Best score</h2> <ul> <li>CV : 3.621</li> <li>Public : 3.666</li> <li>Private : 3.601</li> </ul>
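<p>A short sketch of the "true categorical features" idea from point 2 above (taking, for each card, the most frequent value across its transactions); the helper name is illustrative and edge cases such as all-missing groups are ignored:</p> <pre><code>import pandas as pd

hist = pd.read_csv('historical_transactions.csv')

def most_frequent(df, col):
    # For each card, keep the value of `col` that occurs most often
    # in its transactions, i.e. a genuinely categorical card-level feature.
    return (df.groupby('card_id')[col]
              .agg(lambda s: s.value_counts().idxmax())
              .rename(col + '_mode'))

card_cats = pd.concat([
    most_frequent(hist, 'category_2'),
    most_frequent(hist, 'subsector_id'),
], axis=1)
</code></pre>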
Event Recommendation Engine Challenge
Solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Event Recommendation Engine Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@Matthew - I tried adding the age and gender, as they were or by building an event profile (with the average age and gender of attendants). They were ranked as important by the random trees, but crossvalidation error increased.</p> <p>@hellofys - You don't have to do anything. Missing values were represented as None in the feature vectors. The Random Forest Classifier from scikit-learn will work with that. For SVM or Logistic Regression, you'll have to remove those features or use some averaging technique.</p> <p>@Dzidas - I also struggled with those locations. Having an API to get coordinates was so comfortable. But you could get the same results without. There are 1.6 million events with both location string and GPS coordinates. That's a big dataset. I used them to match the other locations to coordinates. Nonetheless, I also think that this sort of data should be provided already processed or tools that use public information should be allowed.</p> <p>Well this was fun :)</p> <p>Congrats to the winners! GG</p> <p>I'd like to start this topic for participants that want to share their solutions. I'm really curious how others approached the problem.</p> <p>My code and comments are here: http://webmining.olariu.org/event-recommendation-contest-on-kaggle</p>
Expedia Hotel Recommendations
1st Place Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Expedia Hotel Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@vtKMH, @Alpha</p> <p>(note: try refreshing the page a couple times if the math doesn't display correctly)</p> <p>Let me try to explain the gradient descent part in more detail and let's agree that a unique user location \\(u\\) is a triple (user_location_country, user_location_region, user_location_city) and a unique hotel location \\(h\\) is a pair (srch_destination_id, hotel_cluster).</p> <p>Now let's place the distinct user and hotel locations on the globe completely at random. So each user \\(u_i\\) will have a latitude and longitude \\((\phi_{i,1},\phi_{i,2})\\). Likewise, each hotel \\(h_j\\) has a latitude and longitude \\((\theta_{j,1},\theta_{j,2})\\). It's possible to write a down a formula which gives the distance along the surface of a sphere:</p> <p>$$D(\phi_{i,1},\phi_{i,2},\theta_{j,1},\theta_{j,2})=r*\arccos(\sin \phi_{i,1} \sin \theta_{j,1} + \cos \phi_{i,1} \cos \theta_{j,1}\sin(\phi_{i,2}-\theta_{j,2}) )$$ Where \\(r\\) is the radius of the sphere.</p> <p>From the training set we collect all the combinations of \\((u_i,h_j,d_{ij})\\) where \\(d_{ij}\\) is the orig destination distance. Our goal is to adjust the placement of each \\(u_i\\) and \\(h_j\\) in order to make the computed distance \\(D\\) as close as possible to the actual distance \\(d\\). One way to quantify this is to define a loss \\(L\\): $$L=(D(\phi_{i,1},\phi_{i,2},\theta_{j,1},\theta_{j,2})-d_{ij})^2$$ Whatever placement of users and hotels minimizes this loss function should make our predicted distances very close to the actuals.</p> <p>From calculus, you may recall the notion of the gradient. The gradient of \\(L\\), denoted \\(\nabla L\\), is just the vector formed from the partial derivatives of \\(L\\):$$\nabla L=(\frac{\partial L}{\partial \phi_{i,1}}, \frac{\partial L}{\partial \phi_{i,2}},\frac{\partial L}{\partial \theta_{i,1}}, \frac{\partial L}{\partial \theta_{i,2}})$$</p> <p>For our purposes, the important fact is that the negative of the gradient points in the direction the loss is decreasing most rapidly.</p> <p>If the gradient isn't sounding familiar, don't panic, there is an easy geometric description of what's going on. Since we have two points on a sphere, we can draw the geodesic great circle through them. If predicted distance along the great circle \\(D\\) is larger than the actual distance \\(d\\), then we can think of the negative gradient as a pair of vectors emanating from the user and hotel and pointing toward each other along the shorter arc of the great circle. Conversely, when \\(D\\) is smaller than \\(d\\) then the negative gradient is a pair of vectors pointing toward each other along the bigger arc of the great circle.</p> <p>Either way you want to think about the gradient, taking the current position for our user-hotel pair and moving in the direction of the negative gradient should make our loss a little smaller. All that really happens in gradient descent is that we iterate through each triple \\((u_i,h_j,d_{ij})\\) and make a very small step from our current configuration in the direction of the negative gradient. 
After many, many iterations, the hope is that we end up with a configuration of users and hotels whose distances agree with the actual distances reasonably well.</p> <p>@Watts</p> <p>By &quot;burst&quot; I just mean that each booking instance was repeated 100 times, once for each hotel cluster. This was done because learning-to-rank models have an additional group structure in which comparisons are made. In this case, the group consists of a single booking instance and the items to be ranked are the hotel clusters.</p> <p>@CPMP</p> <p>You're right the distance minimization problem is very far from convex. Even after all the parameter tuning, my final solution was definitely not the global minimum. One technique used to mitigate this problem, which was glossed over in the summary, was to run both versions of the matrix completion 18 times in parallel with different seeds. The average and minimum distances across the different seeds were what actually got used in the model. </p> <p>@Tagarin</p> <p>My understanding of FM models is that they are motivated by <a href="https://en.wikipedia.org/wiki/Non-negative_matrix_factorization">matrix factorization</a> which one could consider as a sort of dimension reduction technique for interacting pairs of categorical variables. </p> <p>In terms of increasing the dimensionality, it does improve results up to a point. Past that point overfitting tends to become an issue. The threshold in question will depend on the data.</p> <p>@FengLi</p> <p>The choice of the &quot;rank:pairwise&quot; algorithm was motivated by the evaluation metric. The metric belongs to a class of information retrieval metrics where learning-to-rank models such as &quot;rank:pairwise&quot; perform quite well. Had the metric been something else, multiclass logloss for instance, I would choose a different algorithm.</p> <p>A similar approach can work for situations with thousands of classes. It depends on whether large number of classes can be rejected up-front. The <a href="https://www.kaggle.com/c/icdm-2015-drawbridge-cross-device-connections">ICDM 2015</a> is an example with millions of potential classes where similar techniques were successful. </p> <p>@aldente</p> <p>(1) Yes, the indicator is binary</p> <p>(2) Perhaps someone more familiar with the python interface can chime in, but that sounds right. One thing to note is that you'll also want your input data sorted by group.</p> <p>(3) I built the model on a dual socket E5-2699v3 with ~700GB ram and was only able to use about half the bookings after '2014-07-01' due to memory limitations. The training set was around 58 million rows and it took 38 hours to fit 1200 trees. Honestly though, it was a huge waste of electricity because there was virtually no improvement over using a training set 1/10th the size.</p> <p>@data_js</p> <p>Yes, the features were one-hot encoded. Even user_id with its 1,198,786 distinct values. It might seem a little nutty to add a million columns to your data and then try fitting a linear model on it, but the &quot;trick&quot; to FM models is that they always project the one-hot encoding to some lower dimensional subspace. I used libFFM's default 4-dimensional space in this case.</p> <p>I'd like to note that I'll be donating 10% of the prize to the American Cancer Society in honor of <a href="https://www.kaggle.com/leustagos">Lucas</a>. 
He was always an inspiration to me here on Kaggle.</p> <p>In terms of the solution, let me describe a couple of the components before getting into the model.</p> <ul> <li>Distance Matrix Completion</li> </ul> <hr> <p>The idea was to map users and hotels to locations on a sphere. If we can do this successfully, then we can 'widen the leak' not just to previously occurring distances but to potentially any example where the distance between user and hotel is known.</p> <p>One immediate issue with this approach is the question of what combinations of columns to use in uniquely identifying user and hotel locations. For users, the tuple U=(user_location_country, user_location_region, user_location_city) was a natural choice. For hotels things are less clear. Two parallel notions of hotel location were developed H1=(hotel_country, hotel_market, hotel_cluster) and H2=(srch_destination_id, hotel_cluster). Both the H1 and H2 versions were used as features and the final model was relied upon to sort out which was more useful.</p> <p>For both H1 and H2, user and hotel locations were randomly initialized on a sphere and gradient descent was applied to the <a href="https://en.wikipedia.org/wiki/Great-circle_distance#Formulas">spherical law of cosines formula</a> on the distinct combinations of (U, H, orig_destination_distance). </p> <p>Convergence for the gradient descent was not quick at all. Using nesterov momentum and a gradual transition from squared error to absolute error, the process took about 10^11 iterations and 36 hours.</p> <p>In the end, the average errors for H1 and H2 were around 1.8 and 3.7 miles respectively and I'm looking forward to finding out what tricks the other teams had here.</p> <ul> <li>Factorization Machines</li> </ul> <hr> <p>These tend to pick up on interactions between categorical variables with a large number of distinct values. The hope was that they would find some user-hotel preferences in-line with Expedia's description of the problem.</p> <p>The implementation used was <a href="http://www.csie.ntu.edu.tw/~r01922136/libffm/">LIBFFM</a>. For each of the 100 hotel clusters, a seperate factorization machine model was built using the categorical features in the training set. The attributes for each model were the same but the target for k-th model was an indicator of whether the hotel cluster was equal to cluster k.</p> <p>These FM submodels added about 0.002 to the validation map@5. Aside from leak related features, these provided the most lift over the base click and book rates.</p> <ul> <li>Training/Validation Setup</li> </ul> <hr> <p>The strategy here was to build a model on the last 6 months of 2014, from '2014-07-01' onwards. In order to mirror the situation in the test set, the features used to train the model were generated using only hotel cluster and click data prior to '2014-07-01'. </p> <p>When scoring the model for the test set, all the features were refreshed using all available training data.</p> <p>A portion of the user_ids from '2014-07-01' onward was set aside as validation. This did not line up precisely with the leaderboard, but it probably agrees directionally. </p> <ul> <li>Learn-to-Rank Model</li> </ul> <hr> <p>In order to turn this into an xgboost &quot;rank:pairwise&quot; problem, each booking in the training sample was &quot;burst&quot; into 100 rows in the training set. 
The features for each row were generated relative to the corresponding cluster.</p> <p>The features input into the model were essentially the historical book and click rates, distances derived from the matrix completion, and the factorization machine scores. So, the <strong><em>i</em></strong>-th row corresponding to the <strong><em>j</em></strong>-th booking would be something like: </p> <ul> <li>the historical click and book rates of cluster <strong><em>i</em></strong> for someone having the attributes of the <strong><em>j</em></strong>-th booking</li> <li>the difference between the matrix-completion-predicted distance and the distance provided for the <strong><em>j</em></strong>-th booking</li> <li>the factorization machine score for the <strong><em>i</em></strong>-th cluster based on the attributes of the <strong><em>j</em></strong>-th booking.</li> </ul> <hr> <p>For the final scoring, the test set is also &quot;burst&quot; into 100 rows per booking instance and each booking-cluster combination provides a score. The top 5 scoring clusters per booking are submitted as the solution.</p> <p>A few details were left out in the interest of brevity as this post is already quite long, but the above more or less summarizes the ideas in play. I'm looking forward to hearing about the approaches of the other teams.</p> <p>@Laurae</p> <p>The loss distribution for gradient descent, in my experience, had a fat tail. L-BFGS probably suffers from the same issue.</p> <p>It was possible to get much faster convergence by capping the (actual - expected) term in the gradient. Unfortunately, capping too aggressively had a negative impact on the eventual quality of the fit. The compromise I came up with was to lower the cap very gradually, hence the long fit time.</p> <p>I'm using C++ as my gradient descent library. The user interface leaves a lot to be desired, but the execution time is hard to beat.</p>
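<p>A minimal Python sketch of the distance matrix completion and the capped-gradient trick, assuming hypothetical arrays <code>user_idx</code>, <code>hotel_idx</code> and <code>dist_obs</code> in place of the distinct (U, H, orig_destination_distance) triples; the gradients here are taken by finite differences and the residual cap is held fixed rather than lowered gradually, so this only illustrates the setup, not the C++ implementation described above:</p> <pre><code>import numpy as np

R = 3958.8  # Earth radius in miles

def sphere_dist(lat1, lon1, lat2, lon2):
    # Spherical law of cosines.
    c = np.sin(lat1) * np.sin(lat2) + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2)
    return R * np.arccos(np.clip(c, -1.0, 1.0))

# Hypothetical observations: "user u_i reported distance d_i to hotel h_i".
rng = np.random.default_rng(0)
n_users, n_hotels, n_obs = 1000, 500, 20000
user_idx = rng.integers(0, n_users, n_obs)
hotel_idx = rng.integers(0, n_hotels, n_obs)
dist_obs = rng.uniform(1.0, 5000.0, n_obs)

# Random initialization on the sphere.
u_lat = rng.uniform(-np.pi / 2, np.pi / 2, n_users); u_lon = rng.uniform(-np.pi, np.pi, n_users)
h_lat = rng.uniform(-np.pi / 2, np.pi / 2, n_hotels); h_lon = rng.uniform(-np.pi, np.pi, n_hotels)

lr, cap, eps = 1e-9, 500.0, 1e-5  # cap on the residual; in the write-up it is lowered gradually
for step in range(1000):
    ul, uo = u_lat[user_idx], u_lon[user_idx]
    hl, ho = h_lat[hotel_idx], h_lon[hotel_idx]
    pred = sphere_dist(ul, uo, hl, ho)
    err = np.clip(pred - dist_obs, -cap, cap)  # capped (actual - expected) term
    # Finite-difference gradients of the predicted distance w.r.t. each coordinate.
    g_ul = (sphere_dist(ul + eps, uo, hl, ho) - pred) / eps
    g_uo = (sphere_dist(ul, uo + eps, hl, ho) - pred) / eps
    g_hl = (sphere_dist(ul, uo, hl + eps, ho) - pred) / eps
    g_ho = (sphere_dist(ul, uo, hl, ho + eps) - pred) / eps
    # Scatter-add the per-observation gradients back onto the shared coordinates.
    np.add.at(u_lat, user_idx, -lr * err * g_ul)
    np.add.at(u_lon, user_idx, -lr * err * g_uo)
    np.add.at(h_lat, hotel_idx, -lr * err * g_hl)
    np.add.at(h_lon, hotel_idx, -lr * err * g_ho)
</code></pre> <p>Because this objective is far from convex, the seed matters a lot, which is why the write-up runs both versions of the completion many times in parallel and feeds the average and minimum distances across seeds into the final model.</p>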
Expedia Hotel Recommendations
A List of Counter Features and Their Strengths
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Expedia Hotel Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I see, did you evolve your layers? Because I let evolution take care of running the same counter-type model multiple times if that was preferable. Using the &quot;only add one&quot; method you can always replicate &quot;add as many as you've got&quot; by repeating the counter 5 times. And evolution was able to pick up on that and sometimes use the same list 4 times in a row and sometimes just use it once.</p> <p>28 layers, huh? That's pretty deep! Could you elaborate on geometric averages?</p> <p>@Isreal It's Map@5 since that's what's measured in the competition. So a feature that does 0.3113 in validation should do slightly better than 0.3113 in LB, because these counting features/grouping features have more data in the LB case than in the validation case.</p> <p>@epiplus1 - that's a brilliant feature, wish I would have thought of that!</p> <p>@Army of Darkness, here's my last evolving run - it wasn't my best run but it wasn't much worse. It got 0.45775 on full validation and 0.45915 on a 10% validation run. It did significantly better in the public leader board, about 0.50395 - that's not great but I've lost the list that got me the best result. But basically, it would look the same.</p> <p>As you can see, it's a very long list and some items are repeated - this turned out to be significant. Below you'll find the first row (hotel_country, hotel_market, orig_destination_distance - which is basically a version of the leak) is repeated 4 times. Each time, the counter got to add just <em>one</em> new vote to each search. So even if the counter produced 5 votes, it was only allowed to add the first one that didn't already exist. This allowed the evolving solution to be much more powerful. That made my results jump significantly. 
And landed me at #10 for a brief while.</p> <p>Hope that explains things.</p> <p>93 (27): full= ** 0.45776 **, fast=0.45915</p> <ul> <li>hotel_country, hotel_market, orig_destination_distance</li> <li>hotel_country, hotel_market, orig_destination_distance</li> <li>hotel_country, hotel_market, orig_destination_distance</li> <li>hotel_country, hotel_market, orig_destination_distance</li> <li>hotel_market, srch_destination_id, user_id</li> <li>hotel_market, srch_destination_id, user_id</li> <li>hotel_continent, hotel_market, srch_destination_id</li> <li>hotel_continent, hotel_market, srch_destination_id</li> <li>hotel_continent, hotel_market, user_id</li> <li>hotel_market, pca-1, pca-3</li> <li>hotel_continent, hotel_market, srch_destination_id</li> <li>hotel_market, pca-1, pca-3</li> <li>month, srch_destination_id, user_location_city</li> <li>hotel_country, hotel_market, pca-2</li> <li>hotel_country, hotel_market, pca-2</li> <li>hotel_country, hotel_market, pca-1</li> <li>hotel_continent, hotel_market, pca-3</li> <li>hotel_continent, hotel_market, pca-3</li> <li>hotel_continent, hotel_market, pca-3</li> <li>hotel_market, month, season</li> <li>hotel_country, hotel_market</li> <li>hotel_continent, hotel_market</li> <li>hotel_continent, hotel_market</li> <li>dayOfWeek, hotel_continent, hotel_market</li> <li>hotel_continent, month, pca-3</li> <li>hotel_continent, month, pca-3</li> <li>pca-1, season</li> </ul> <p>As part of my evolving strategy for this competition, I created a huge number of features that I evaluated for their total and specific strength. Total is how well the feature would do on its own. Specific is how well the feature does, measured only on the rows where it made predictions. </p> <p>I found that grouping by month, orig_destination_distance, srch_destination_id was the most exact predictor I was able to find, with a specific score of 0.9044 in my validation set. But it didn't make many predictions so it wasn't super strong. I should have used it in my final submission though, stupid of me not to.</p> <p>The strongest non-leaky feature I was able to find was grouping on &quot;hotel_country, hotel_market, srch_destination_id&quot;, which gave me 0.3131 specific and 0.3106 total. I did have slightly better ones, but they used PCA data from the destinations file, and not every search had that.</p> <p>Here's a link to a spreadsheet of some of the different features I tried. I had other features (like day of week for ci/co/book, hour of book, stuff like that). But since all individual features permute with all other individual features, the number of permutations exploded and I got rid of the weakest individual features along the way.</p> <p><a href="https://docs.google.com/spreadsheets/d/1Y_jLHZXmVa5hmwkrvysOhaK5u5Vx0FV-VMApmHN1_CE/pubhtml">https://docs.google.com/spreadsheets/d/1Y_jLHZXmVa5hmwkrvysOhaK5u5Vx0FV-VMApmHN1_CE/pubhtml</a></p>
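<p>A rough sketch of what one of these counter features can look like in code, assuming hypothetical <code>train</code> and <code>test</code> dataframes and a running list of votes per test row; the booking-vs-click weighting is an assumption, and the &quot;only add one new vote&quot; rule is what made the evolved ordering of counters so powerful:</p> <pre><code>import pandas as pd

# Hypothetical frames: `train` has the grouping columns plus `hotel_cluster` and `is_booking`;
# `test` has the grouping columns. `preds` holds the running vote lists per test row.
def apply_counter(train, test, preds, group_cols, max_new_votes=1):
    # Score each cluster within a group; the 4x weight for bookings over clicks is an assumption.
    scores = (train.assign(w=train["is_booking"] * 4 + 1)
                   .groupby(group_cols + ["hotel_cluster"])["w"].sum()
                   .reset_index()
                   .sort_values("w", ascending=False))
    top = (scores.groupby(group_cols)["hotel_cluster"]
                 .apply(list).rename("votes").reset_index())
    merged = test.merge(top, on=group_cols, how="left")   # left merge keeps test row order
    for i, votes in enumerate(merged["votes"]):
        if not isinstance(votes, list):
            continue
        added = 0
        for c in votes:
            if added >= max_new_votes or len(preds[i]) >= 5:   # Map@5 needs at most 5 clusters
                break
            if c not in preds[i]:          # only add votes that are not already present
                preds[i].append(c)
                added += 1
    return preds

# Usage: run the counters in the evolved order, each allowed to add at most one vote.
# preds = [[] for _ in range(len(test))]
# for cols in [["hotel_country", "hotel_market", "orig_destination_distance"], ...]:
#     preds = apply_counter(train, test, preds, cols)
</code></pre> <p>Running the evolved list top to bottom, with each counter allowed to add at most one missing cluster, reproduces the behaviour described above where repeating a strong counter several times effectively gives it more votes.</p>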
Eye Movements Verification and Identification Competition
Results?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Eye Movements Verification and Identification Competition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey Killian, thanks for posting! I read through the paper you referenced too, and that was very eye-opening. Can I ask you a question about the tuning process though? Here's what I'm doing:</p> <p>Cross-Validation Model Building:<br> - Partition the data into 10 subsets.<br> - Fit a model to 9 of the 10, predict on 10th (and do this for each set of 9).<br> - With each of the 10 models, predict on the test data and average those for my overall prediction.<br> - For the training set, calculate a probability for each example using the one model that didn't use that example. This is then my list of probabilities.<br> Tuning<br> - Using the novel calibration method, do I just find the A and B parameters for the sigmoid that minimize the logloss between my probabilities and the observed values (on the full training set)? Then, just use those same parameters to tune the test set predictions?</p> <p>Again, thanks so much for your help!</p> <p>Is anyone planning on posting their methodology?&nbsp; I'd be interested to hear about any techniques that other people tried...</p> <p>My model didn't perform too well.&nbsp; I first tried cleaning up the data (reducing it to a standard range, removing linear trends, etc).&nbsp; I added some new features, such as the Fourier transform of each person's eye movement and then ultimately combined all the predictors using a random forest.</p> <p>What other models did people use?&nbsp; Did ensembling help?&nbsp; What about feature selection/engineering?</p>
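<p>For reference, one common way to implement the sigmoid calibration being asked about is a Platt-style fit: find A and B that minimize the log loss of sigmoid(A*p + B) against the training labels, then apply the same A and B to the test predictions. This is a minimal sketch with hypothetical arrays <code>p_train</code> (out-of-fold probabilities), <code>y_train</code> (0/1 labels) and <code>p_test</code> (the averaged test predictions), not necessarily the exact method from the referenced paper:</p> <pre><code>import numpy as np
from scipy.optimize import minimize

def platt_calibrate(p_train, y_train, p_test):
    """Fit sigmoid(A * p + B) on out-of-fold probabilities, then apply it to test predictions."""
    eps = 1e-12

    def neg_logloss(params):
        a, b = params
        q = 1.0 / (1.0 + np.exp(-(a * p_train + b)))
        q = np.clip(q, eps, 1 - eps)
        return -np.mean(y_train * np.log(q) + (1 - y_train) * np.log(1 - q))

    a, b = minimize(neg_logloss, x0=[1.0, 0.0], method="Nelder-Mead").x
    return 1.0 / (1.0 + np.exp(-(a * p_test + b)))

# p_train: out-of-fold probabilities from the 10-fold models, y_train: observed 0/1 labels,
# p_test: averaged test predictions from those models.
# calibrated = platt_calibrate(p_train, y_train, p_test)
</code></pre>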
Feedback Prize - English Language Learning
1,782nd place - Multitasking solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>💡 Leaderboard: Top&nbsp;68%&nbsp;(1,782 out of 2,654)</p> <h1>Task</h1> <p>The task was to score an essay in&nbsp;<strong>6 targets</strong>: cohesion, syntax, vocabulary, phraseology, grammar, and conventions. No other labels are provided. But we can use data from past Feedback competitions. That includes argument segmentation (FB 1.0) and classification (FB 2.0).</p> <h1>Final Solution</h1> <p>I trained a multitasking model using RoBERTa to do three tasks from all FB competitions. That includes:</p> <ul> <li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/data" target="_blank">FB1.0</a>: Detecting discourse boundaries. This is like a NER task. The labels are lead, position, evidence, claim, concluding statement, counterclaim, and rebuttal.</li> <li><a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/data" target="_blank">FB2.0</a>: Classifying each discourse as being effective, ineffective, or adequate.</li> <li><a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/overview" target="_blank">FB3.0</a>: Overall essay scoring.</li> </ul> <p>Here is the link to the&nbsp;<a href="https://www.kaggle.com/code/vitouphy/fb3-multi-task-prediction" target="_blank">notebook</a>, and here is its performance:</p> <ul> <li>Public: 0.457696</li> <li>Private: 0.458945</li> </ul> <h1>What I learned</h1> <p>All the scores reported below are private scores. I also ran some other experiments before this multitasking model. So I’ll document the overall concepts I learned below.</p> <p><strong>RoBERTa followed by Conv helps a bit</strong></p> <p>The Conv layer on top of RoBERTa boosts the performance&nbsp;<strong>(0.9% better)</strong>.</p> <ul> <li>RoBERTa: 0.465965</li> <li>RoBERTa-Conv: 0.461717</li> </ul> <p><strong>DistillRoBERTa is not too bad</strong></p> <p>Fine-tuning DistillRoBERTa can achieve similar performance to RoBERTa with significantly fewer parameters and lower inference latency&nbsp;<strong>(0.1% worse)</strong>.</p> <ul> <li>RoBERTa-Conv: 0.461717</li> <li>DistillRoBERTaConv: 0.462217</li> </ul> <p><strong>Word-Level embedding is essential</strong></p> <p>I tried using SentenceTransformer or a frozen RoBERTa to get an embedding for the whole essay without updating the encoder’s weights. The error jumps to more than 0.5.</p> <ul> <li>RoBERTa: 0.465965</li> <li>SentenceTransformer: 0.52097</li> <li>Fixed pretrained-NER-RoBERTa: 0.515841</li> </ul> <p><strong>Fine-tuning on previous tasks helps</strong></p> <p>I tried fine-tuning RoBERTa on FB1.0, which is about detecting the boundaries of arguments in the essay. It improves results by&nbsp;<strong>0.5%</strong>.</p> <ul> <li>(Pretrained NER)-RoBERTa-Conv: 0.459103</li> <li>RoBERTa-Conv: 0.461717</li> </ul> <p>I didn’t get to fine-tune on FB2.0 though.</p> <p><strong>Use hyperparameter tuning</strong></p> <p>I started using hyperparameter tuning instead of sitting and tweaking each parameter. It saves a lot of time as it just runs automatically and returns the result at the end. 
Here are some of the experiment results:</p> <ul> <li>RoBERTa-Conv:<ul> <li>Plain: 0.472317</li> <li>HParams: 0.461717&nbsp;<strong>(+2.2%)</strong></li></ul></li> <li>RoBERTa:<ul> <li>Plain: 0.487746</li> <li>HParams: 0.465965&nbsp;<strong>(+4.4%)</strong></li></ul></li> </ul> <p>However, I was not able to use this hyperparameter search all the way. For some experiments, I ended up doing them manually. For example, I was testing using the top 4 Transformer layers versus 4 layers selected evenly from different depths. I did hyperparameter tuning for each one, but not all together. I should have just thrown everything into the hyperparameter search and looked at the results all in one go.</p> <p><strong>Multitasking is about balancing</strong></p> <p>Conceptually, multitasking makes the encoder more aware of all the tasks, and hopefully it learns to use all that knowledge and improve the scores altogether. As it turns out, setting the right trade-off parameter can be tricky. Often, it tips toward one loss function and that just worsens the others. I’m still searching for the right way to balance these objectives. The current result favors the main scoring task and it makes discourse detection way worse. But yes, for this task, the multitasking model was able to achieve higher performance than the RoBERTa one.</p> <ul> <li>MultiTask (3 tasks): 0.458945</li> <li>RoBERTa-Conv: 0.461717&nbsp;<strong>(+0.6%)</strong></li> <li>RoBERTa: 0.465965&nbsp;<strong>(+1.5%)</strong></li> </ul>
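<p>A minimal sketch of what such a multitask setup can look like, assuming a shared encoder with a token-level head for FB1.0, a discourse-effectiveness head for FB2.0 and a 6-score regression head for FB3.0; the model name, head sizes and loss weights are illustrative, not the ones from the notebook linked above:</p> <pre><code>import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskRoberta(nn.Module):
    """Shared encoder with one head per Feedback task (illustrative, not the exact notebook)."""
    def __init__(self, name="roberta-base", n_ner=15, n_eff=3, n_scores=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        h = self.encoder.config.hidden_size
        self.ner_head = nn.Linear(h, n_ner)        # FB1.0: token-level discourse boundaries
        self.eff_head = nn.Linear(h, n_eff)        # FB2.0: discourse effectiveness
        self.score_head = nn.Linear(h, n_scores)   # FB3.0: six essay scores

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out * mask).sum(1) / mask.sum(1)  # mean pooling over valid tokens
        return self.ner_head(out), self.eff_head(pooled), self.score_head(pooled)

# Balancing the objectives (the weights below are hypothetical placeholders):
# loss = 1.0 * mse(score_preds, y_scores) + 0.1 * ce(ner_logits, y_ner) + 0.1 * ce(eff_logits, y_eff)
</code></pre> <p>The balancing problem described above lives entirely in those loss weights: give the auxiliary tasks too much weight and the main scoring loss degrades, give them too little and the shared encoder barely benefits from them.</p>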
Feedback Prize - English Language Learning
10th place solution...my quick write-up!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Acknowledgement</strong></p> <p>Phew, the shakeup was massive! :) I want to thank the organizers, the Kaggle community, and the Kaggle team for this competition. Competitions like this are great opportunities for me to learn new things and update my machine-learning knowledge.</p> <p>I also want to thank my teammates, <a href="https://www.kaggle.com/runningz" target="_blank">@runningz</a>, <a href="https://www.kaggle.com/kunihikofurugori" target="_blank">@kunihikofurugori</a>, and <a href="https://www.kaggle.com/habedi" target="_blank">@habedi</a>, for their efforts and cooperation in this competition. </p> <p><strong>Quick Summary</strong></p> <p>Our final submission was a blend of two submission files:</p> <ul> <li>The first one was the model trained by <a href="https://www.kaggle.com/runningz" target="_blank">@runningz</a>, which was an ensemble of three deberta-v3-base models (his models were trained in PyTorch, and he used pseudo-labeling to improve the results). His models were based on his knowledge of the previous feedback competitions.</li> <li>The second one was the stacked ensemble of 22 models from the rest of the team. The information about the stacking process is as follows:<ul> <li>We used <code>iterativestratification</code> with the number of folds set to four and the seed set to 42.</li> <li>A set of oofs based on<ul> <li>a model trained with deberta-v3-small, </li> <li>two models trained with deberta-v3-base, </li> <li>a model trained with deberta-v3-xsmall,</li> <li>a model trained with distilroberta-base,</li> <li>a model trained with opt-125m,</li> <li>a model trained with roberta-base,</li> <li>two models trained with opt-350m,</li> <li>a model trained with electra-base-discriminator.</li></ul></li> <li>We used a couple of transformers without training, just embedded the text, and trained simple regressors on the numerical vectors. These models were<ul> <li>a model based on deberta-v3-base embeddings along with a BayesianRidge regressor,</li> <li>a model based on deberta-base-mnli embeddings along with a BayesianRidge regressor,</li> <li>a model based on deberta-v3-large embeddings along with a BayesianRidge regressor,</li> <li>seven models based on deberta-v3-large embeddings along with a KNN regressor with different values of k (k=2, 4, 8, 16, 32, 64, 128),</li> <li>a model based on deberta-base-mnli embeddings along with a BayesianRidge regressor,</li> <li>a model based on google-electra-base-discriminator embeddings along with a BayesianRidge regressor,</li></ul></li> <li>Finally, we trained a Lasso regressor on the oofs. 
</li> <li>The CV of the stacked model was 0.4473, and the LB was 0.43.</li> <li>In the case of the models trained on different transformers, the CV ranged from 0.45+ to 0.49+, with deberta-v3-base being the best model in terms of CV and LB.</li> <li>In the case of the text embedding models, the CV of our oofs (even the KNN models) was between 0.45+ and 0.54+, and the best CV belonged to deberta-v3-large with a CV of 0.4518 and an LB of 0.44 (it is fascinating to see that with just embeddings and no training one can get to 0.44!)</li> <li>We tried different regression models as the stacker, and the best in terms of CV was Lasso.</li> <li>Performing a log transformation on the oofs and running the RobustScaler from SKlearn improved the CV a bit.</li></ul></li> </ul>
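<p>A minimal sketch of this kind of second-stage model, assuming a hypothetical oof matrix of shape (n_essays, n_base_models * 6), the (n_essays, 6) target matrix, and a matching test-time prediction matrix; the Lasso alpha is an arbitrary placeholder, and the log transform + RobustScaler step mirrors the trick mentioned in the last bullet:</p> <pre><code>import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import RobustScaler

# Hypothetical inputs: `oof` is (n_essays, n_models * 6) out-of-fold predictions from the
# base models, `y` is (n_essays, 6) true scores, `test_preds` is the matching test matrix.
def stack_with_lasso(oof, y, test_preds, alpha=0.001):
    # Log transform + RobustScaler on the oof features, as mentioned in the write-up.
    scaler = RobustScaler()
    X = scaler.fit_transform(np.log1p(oof))
    X_test = scaler.transform(np.log1p(test_preds))

    out = np.zeros((test_preds.shape[0], y.shape[1]))
    for t in range(y.shape[1]):                     # one Lasso per target column
        model = Lasso(alpha=alpha)
        model.fit(X, y[:, t])
        out[:, t] = model.predict(X_test)
    return out
</code></pre> <p>In practice the second-stage model should only ever be fit on out-of-fold predictions, with the same fold splits as the base models, otherwise the stacker overfits to leaked information.</p>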
Feedback Prize - English Language Learning
11th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for the question. <br> I used Ridge regression to determine the model weights for each target.<br> I tried stacking on the transformers' predictions (6 targets) too, but that resulted in a worse score.</p> <h1>11th Place Solution</h1> <p>First of all, I'd like to thank Kaggle and the host for hosting this competition.<br> I still can't believe I got a solo gold.</p> <h2>Overview</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2648907%2F0d8799582d6ad0461e9ec995a3a1d3f7%2F2022-12-06%2013.47.10.png?generation=1670305587215632&amp;alt=media" alt=""></p> <p>I ensembled 21 models (including models trained with pseudo labels). I used Ridge regression and the <a href="https://kaggler.readthedocs.io/en/latest/_modules/kaggler/ensemble/linear.html#netflix" target="_blank">Netflix method</a> for ensembling.</p> <p>For all of my transformer models I used <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">this notebook</a> and <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference/notebook" target="_blank">this notebook</a>.</p> <h2>Code</h2> <ul> <li>Inference Notebook: <a href="https://www.kaggle.com/code/irrohas/final-many-deberta-ridge" target="_blank">https://www.kaggle.com/code/irrohas/final-many-deberta-ridge</a></li> <li>Code (Training): I will upload it later.</li> </ul> <h2>What Worked</h2> <ul> <li><p>[High Impact] Ridge regression and the Netflix method per target for ensembling</p> <ul> <li>Result: about -0.009 </li> <li>Reasoning and context: Adding the predictions of various models (even if each score was low) to the ensemble, not just Deberta-v3, improved the subsequent CV. So I used 10 other models besides deberta-v3. </li></ul></li> <li><p>[Low Impact] Pseudo Label</p> <ul> <li>Result: about -0.003</li> <li>Reasoning and context: I created pseudo labels with the 'Feedback Prize - Evaluating Student Writing' (FB1) competition data. However, there were too many pseudo-labels relative to the training data, and using all of them was likely to result in a large leakage, so we limited the pseudo-data that could be used in each fold.<br> I did up to 2 steps of learning with pseudo labels.</li></ul></li> </ul> <h2>What Didn't Work</h2> <ul> <li>Stacking for 2nd stage prediction<ul> <li>What I did: I tried LGBM, 2DCNN, MLP, and GCN for the 2nd stage prediction.</li> <li>Result: worse</li> <li>Reasoning and context: All of them made the CV worse. 
The weight optimization by the Nelder-Mead method did score better, but not as well as Ridge regression or the Netflix method, so we did not use it for the final submission.</li></ul></li> <li>Pretrained on previous competition datasets<ul> <li>What I did: Before training, I pretrained the model with data from the previous competitions ('Feedback Prize - Evaluating Student Writing' and 'Feedback Prize - Predicting Effective Arguments').</li> <li>Result: no change or worse</li> <li>Reasoning and context: Scores were almost the same or slightly worse, so I stopped, considering the cost of training.</li></ul></li> <li>Changing the model depending on the length of the input text<ul> <li>Result: worse</li> <li>Reasoning and context: Deberta-v3 models can handle sentences of arbitrary length, while Roberta and Funnel were limited to 512 and 1024 tokens. However, it turned out that even with the <code>max_len</code> limit the model score remained almost the same, so these models could still be used to add diversity to the ensemble.</li></ul></li> </ul> <h2>Additional Context</h2> <h3>CV Strategy</h3> <p>I used <a href="https://www.kaggle.com/code/abhishek/multi-label-stratified-folds" target="_blank">abhishek's CV strategy</a> with 4 and 5 folds and various seeds.</p> <h3>Training Config</h3> <ul> <li>Optimizer: AdamW<ul> <li>weight decay: 0.01</li> <li>initial lr: 2e-5</li></ul></li> <li>Loss: SmoothL1Loss</li> <li>No dropout</li> </ul> <h3>My Score</h3> <p>Each model's score is listed here (including models not used in the final submission).<br> I mainly use mean pooling for the model's head. (I tried an LSTM head and an attention head, but the score did not change.)</p> <table> <thead> <tr> <th>model</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>Deberta-v3-base</td> <td>0.4590</td> </tr> <tr> <td>Deberta-v3-large</td> <td>0.4519</td> </tr> <tr> <td>Roberta-large</td> <td>0.4583</td> </tr> <tr> <td>luke-large</td> <td>0.4590</td> </tr> <tr> <td>bigbird-roberta-large</td> <td>0.4643</td> </tr> <tr> <td>Deberta-large</td> <td>0.4615</td> </tr> <tr> <td>Deberta-large-mnli</td> <td>0.4629</td> </tr> <tr> <td>Funnel-large</td> <td>0.4686</td> </tr> <tr> <td>Funnel-large-base</td> <td>0.4644</td> </tr> <tr> <td>Bart-large</td> <td>0.4606</td> </tr> <tr> <td>GPT2-medium</td> <td>0.4658</td> </tr> <tr> <td>GPT2-large</td> <td>0.4882</td> </tr> <tr> <td>mpnet-base</td> <td>0.4952</td> </tr> <tr> <td>electra-large</td> <td>0.4731</td> </tr> <tr> <td>Deberta-v3-large (1st PL)</td> <td>0.4473</td> </tr> <tr> <td>Deberta-v3-large (2nd PL)</td> <td>0.4467</td> </tr> </tbody> </table> <p>My submissions' CV scores and their private-score ranks are here:</p> <table> <thead> <tr> <th>submission</th> <th>CV</th> <th>My Inner Private Score Rank</th> </tr> </thead> <tbody> <tr> <td>20 models + Ridge</td> <td>0.4417</td> <td>2</td> </tr> <tr> <td>21 models + (0.5Ridge + 0.5Netflix)</td> <td>Ridge:0.4417, Netflix:0.4411</td> <td>3</td> </tr> <tr> <td>13 models + Ridge</td> <td>0.4421</td> <td>10</td> </tr> <tr> <td>16 models + Ridge</td> <td>0.4420</td> <td>1</td> </tr> </tbody> </table> <h3>Model Selection</h3> <p>I made a very large number of models, but had to choose a subset due to dataset storage limits. Deberta-v3 had the highest score, so about half of my picks came from that family, and the rest I chose by looking at the CV of the various models. So electra and mpnet are not in the final sub. 
But the best private scores were the models that included them :_(</p> <p>We also considered the need for a variety of Deberta-v3 models, so we used model weights from this <a href="https://www.kaggle.com/code/kojimar/fb3-deberta-family-inference-9-28-updated" target="_blank">public code</a>. I really appreciate it.</p> <h2>Important Citations:</h2> <ul> <li>Code:<ul> <li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a></li> <li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference/notebook" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference/notebook</a></li> <li><a href="https://www.kaggle.com/code/kojimar/fb3-deberta-family-inference-9-28-updated" target="_blank">https://www.kaggle.com/code/kojimar/fb3-deberta-family-inference-9-28-updated</a></li> <li><a href="https://www.kaggle.com/code/abhishek/multi-label-stratified-folds" target="_blank">https://www.kaggle.com/code/abhishek/multi-label-stratified-folds</a></li></ul></li> <li>Others:<ul> <li>netflix method: <a href="https://kaggler.readthedocs.io/en/latest/_modules/kaggler/ensemble/linear.html#netflix" target="_blank">https://kaggler.readthedocs.io/en/latest/_modules/kaggler/ensemble/linear.html#netflix</a></li></ul></li> </ul> <h2>Thanks and Acknowledgements:</h2> <p>Finally, I'd like to thank Kaggle and the host for hosting such an interesting competition, and all who participated in lively discussions.<br> I hope my solution helps you in your future NLP competitions!</p> <p>The following is my opinion. I'm sorry if I'm wrong. </p> <p>Stacking uses all the targets for prediction. For example, I used the 'syntax' and 'vocabulary' predictions besides the 'cohesion' predictions for the 'cohesion' prediction in the 2nd stage.</p> <p>On the other hand, the ensemble uses only the 'cohesion' predictions for the 'cohesion' prediction in the 2nd stage.<br> In other words, I simply determine the weights by regression.</p> <p>I think the reason why stacking made the score worse is that the other features (i.e. the 'syntax' or 'vocabulary' predictions from the 1st stage) are noisy. (Or there may be another reason.)</p> <p>Thank you for the question.</p> <blockquote> <p>how many data number of events did you use for each fold?</p> </blockquote> <p>About 3,000. In 4-fold training, I divided the pseudo-label data into 4 parts and added each of them to the training data in each fold.</p> <blockquote> <p>When you say "2 steps of learning with pseudo labels.", do you mean that you trained with pseudo labels for 2 epochs and then used train data from FB3?</p> </blockquote> <p>No. I trained with both the pseudo-label data and the FB3 data in all epochs.<br> "2 steps of learning with pseudo labels" means multi-step pseudo labeling. In other words, I created new data (= 2nd pseudo-label data) by using the model trained on the 1st data.</p>
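<p>A minimal sketch of the per-target Ridge ensembling described in the reply above, assuming hypothetical arrays where <code>oof_by_target[t]</code> holds the out-of-fold predictions of all models for target t; only target t's own predictions are used to blend target t, which is what distinguishes this from the stacking variant that hurt the CV:</p> <pre><code>import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical inputs: oof_by_target[t] is (n_samples, n_models) out-of-fold predictions
# for target t, y[:, t] the true labels, test_by_target[t] the matching test predictions.
def ridge_ensemble_per_target(oof_by_target, y, test_by_target, alpha=1.0):
    blended = []
    for t in range(y.shape[1]):
        # Only the predictions for target t are used to blend target t (ensemble, not stacking).
        model = Ridge(alpha=alpha, fit_intercept=True)
        model.fit(oof_by_target[t], y[:, t])
        blended.append(model.predict(test_by_target[t]))
    return np.stack(blended, axis=1)   # (n_test, n_targets)
</code></pre> <p>The learned Ridge coefficients play the same role as blend weights, and they can go negative, which is how a weaker model can still help the blend.</p>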
Feedback Prize - English Language Learning
12th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all I would like to thank the competition organizers and Kaggle, and congratulate all the winners.<br> I would also like to thank all the kagglers who provided useful information in the various discussions.<br> I have been working hard to achieve a Solo Gold and am happy to finally win it.</p> <h1>Overview</h1> <p>My solution is an ensemble of various deberta models and RAPIDS SVR.<br> I have been referring to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s excellent notebook on RAPIDS SVR. Thank you very much.<br> <a href="url" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a></p> <h1>Preprocessing</h1> <p>Two types of text data were prepared, one with \n converted to [BR] and the other without.<br> The model trained on the text data converted to [BR] scored higher for both Public and Private.</p> <h1>Cross validation</h1> <p>I used 5 folds with MultilabelStratifiedKFold, stratified by target.</p> <h1>Model</h1> <p>backbone</p> <ul> <li>deberta-v3-base</li> <li>deberta-v3-large</li> <li>deberta-large</li> <li>deberta-xlarge</li> <li>deberta-xlarge-mnli</li> </ul> <p>pooling</p> <ul> <li>mean pooling &amp; layernorm</li> </ul> <h1>RAPIDS SVR</h1> <p>Features were extracted from the following models and an SVR was trained on them.</p> <ul> <li>deberta-v3-base</li> <li>deberta-large</li> <li>deberta-v3-large</li> <li>deberta-xlarge</li> <li>deberta-base-mnli</li> <li>deberta-large-mnli</li> <li>deberta-xlarge-mnli</li> <li>deberta-v2-xlarge</li> <li>deberta-v2-xlarge-mnli</li> </ul> <h1>ensemble</h1> <p>Weights were adjusted by target using the Nelder-Mead method.</p> <h1>What worked</h1> <ul> <li>Soft Labeling</li> <li>Pseudo Labeling</li> <li>Small batch size (batchsize=2)</li> </ul> <h1>What didn't work</h1> <ul> <li>MLM</li> <li>Non-deberta models</li> </ul> <h1>What worked in Private</h1> <p>The ensemble with GPR (Gaussian process regression) had the highest private score, but was not selected because the public score was not high (public: 0.437852 / private: 0.433646).</p> <h1>Efficiency Prize (4th place)</h1> <ul> <li>deberta-v3-xsmall</li> <li>Soft labeling</li> <li>Pseudo Labeling</li> <li>Use the model with the highest CV score out of 5 folds</li> </ul> <p>Thank you!</p> <p>Thank you!</p> <p>For soft labeling, I mixed a certain percentage of the oof predictions into the labels of the training data.<br> For the train:oof ratio, 0.5:0.5 was well balanced, while 0.2:0.8 showed a strong tendency toward overfitting.</p> <p>For pseudo-labeling, I labeled the FB1 data and trained on it.<br> We also always used the same CV splits to avoid leakage.</p> <p>[BR] is converted in preprocessing as follows.</p> <p><code>test["full_text"] = [text.replace("\n", "[BR]") for text in test["full_text"]]</code></p> <p>Add it to the tokenizer as a special token.</p> <p><code>tokenizer.add_special_tokens( {"additional_special_tokens": ['[BR]']} )</code></p> <p>The following may be of interest regarding Nelder-Mead.</p> <p><a href="url" target="_blank">https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328</a>.</p>
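<p>A rough sketch of the per-target Nelder-Mead weight search mentioned above, assuming hypothetical arrays of out-of-fold predictions; RMSE per target is used as the objective since the competition metric is the mean column-wise RMSE:</p> <pre><code>import numpy as np
from scipy.optimize import minimize

def optimize_weights(oof_preds, y_true):
    """Find blend weights for one target by minimizing RMSE with Nelder-Mead.
    oof_preds: (n_samples, n_models) out-of-fold predictions, y_true: (n_samples,)."""
    n_models = oof_preds.shape[1]

    def rmse(weights):
        blend = oof_preds @ weights
        return np.sqrt(np.mean((blend - y_true) ** 2))

    x0 = np.full(n_models, 1.0 / n_models)          # start from an equal-weight blend
    return minimize(rmse, x0, method="Nelder-Mead").x

# Usage per target (the weights may go negative and need not sum to one):
# w_cohesion = optimize_weights(oof_cohesion, y[:, 0])
# test_blend_cohesion = test_preds_cohesion @ w_cohesion
</code></pre>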
Feedback Prize - English Language Learning
13th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your comment !!</p> <h3>14th Place Solution</h3> <p>First of all, I'd like to thank the Kaggle team and the host for hosting this competition, and thank many other participants for sharing code, ideas and datasets.</p> <p>I'm really happy to get a solo gold.</p> <h3>Overview</h3> <p>My final submission is a weighted average: 0.5 * (an 11-model Nelder-Mead blend) + (an 11-model Bayesian Ridge stacking model).</p> <p>Each model is trained with various hyperparameters such as seed, num_fold, max_length, and so on.</p> <p><img alt="Screen Shot 2022-11-30 at 21 07 06" src="https://user-images.githubusercontent.com/38504303/204792257-95877fde-4e82-45cd-852d-0df26a138adc.png"></p> <p>My final code is here:</p> <ul> <li><p><a href="https://www.kaggle.com/code/kaerunantoka/fb3-private-14th-solution-code?scriptVersionId=112497983" target="_blank">https://www.kaggle.com/code/kaerunantoka/fb3-private-14th-solution-code?scriptVersionId=112497983</a></p></li> <li><p>I'm sorry that a lot of my notebooks are public, because everyone can run this code.</p></li> </ul> <p>My training repo is here:</p> <ul> <li><a href="https://github.com/osuossu8/FeedbackPrize3" target="_blank">https://github.com/osuossu8/FeedbackPrize3</a></li> </ul> <h3>What Worked</h3> <ul> <li><p>[High Impact] Idea</p> <ul> <li><p>Trust CV</p> <ul> <li>I added a new model to the ensemble only when the oof CV score improved.</li></ul></li> <li><p><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">rapids SVR</a></p></li> <li><p><a href="https://docs.scipy.org/doc/scipy/reference/optimize.minimize-neldermead.html#optimize-minimize-neldermead" target="_blank">nelder-mead optimized weighted blending</a></p> <ul> <li>I first heard of this method in <a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">this previous NLP competition's solution.</a></li></ul></li> <li><p>longer max length (1536) and use head-and-tail tokens</p> <ul> <li>many public notebooks used 512 max_length</li> <li>however, the max token length in the train full_text is around 1430.</li> <li>I decided to use a longer token length for my training.</li> <li>a sketch of this head-and-tail encoding is shown after this write-up.</li></ul></li> <li><p>freeze embedding layer and first 2 layers</p> <ul> <li><pre><code>def freeze(module):
    for parameter in module.parameters():
        parameter.requires_grad = False

if self.cfg.freezing:
    freeze(self.model.embeddings)
    freeze(self.model.encoder.layer[:2])
</code></pre></li></ul></li> <li><p>evaluate twice per epoch</p></li></ul></li> <li><p>[Medium Impact] Idea</p> <ul> <li><p>I replaced SVR with XGBoost and added it as a final model.
This is contributed to my model diversity.</p></li> <li><p>add special token </p> <ul> <li>I noticed full_text has many "\n\n".</li> <li><code> def setup_tokenizer(CFG): CFG.tokenizer.add_tokens([f"\n"], special_tokens=True) CFG.tokenizer.add_tokens([f"\r"], special_tokens=True) </code></li></ul></li></ul></li> </ul> <h3>What Didn’t Work</h3> <ul> <li>Idea<ul> <li>Use my finetuned embeddings when I try rapids svr method.</li> <li>Use Universal Sentence Encoder Embedding </li> <li>Use text features as meta feature when I finetuneing hugging face models and my 2nd stage stacking models.</li></ul></li> </ul> <h3>Additional Context:</h3> <ul> <li><p>CV Strategy</p> <ul> <li>I used <a href="https://www.kaggle.com/code/abhishek/multi-label-stratified-folds" target="_blank">abiheshark's cv strategy</a> with 5, 10 folds various seed.</li></ul></li> <li><p>Models</p></li> </ul> <table> <thead> <tr> <th>Name</th> <th>CV</th> <th>max length</th> <th>num_fold</th> <th>link</th> </tr> </thead> <tbody> <tr> <td>deberta-v3-large</td> <td>0.4584</td> <td>1536</td> <td>5</td> <td><a href="https://github.com/osuossu8/FeedbackPrize3/blob/main/exp/024.py" target="_blank">https://github.com/osuossu8/FeedbackPrize3/blob/main/exp/024.py</a></td> </tr> <tr> <td>deberta-v3-large</td> <td>0.4585</td> <td>1536</td> <td>5</td> <td><a href="https://github.com/osuossu8/FeedbackPrize3/blob/main/exp/026.py" target="_blank">https://github.com/osuossu8/FeedbackPrize3/blob/main/exp/026.py</a></td> </tr> <tr> <td>deberta-large</td> <td>0.4617</td> <td>512</td> <td>5</td> <td><a href="https://www.kaggle.com/code/kaerunantoka/feedback3-n004" target="_blank">https://www.kaggle.com/code/kaerunantoka/feedback3-n004</a></td> </tr> <tr> <td>roberta-base</td> <td>0.4622</td> <td>512</td> <td>5</td> <td><a href="https://www.kaggle.com/code/kaerunantoka/feedback3-n005" target="_blank">https://www.kaggle.com/code/kaerunantoka/feedback3-n005</a></td> </tr> <tr> <td>longformer-large-4096</td> <td>0.4670</td> <td>512</td> <td>5</td> <td><a href="https://www.kaggle.com/code/kaerunantoka/feedback3-n009" target="_blank">https://www.kaggle.com/code/kaerunantoka/feedback3-n009</a></td> </tr> <tr> <td>deberta-v3-base</td> <td>0.4554</td> <td>512</td> <td>5</td> <td><a href="https://www.kaggle.com/code/batprem/deberta-layerwiselr-lastlayerreini-infer" target="_blank">original</a>, <a href="https://www.kaggle.com/code/kaerunantoka/fb3-public-save-oof" target="_blank">my copy to save oof</a></td> </tr> <tr> <td>deberta-v3-large</td> <td>0.4670</td> <td>512</td> <td>10</td> <td><a href="https://www.kaggle.com/datasets/kojimar/20221012-123357-deberta-v3-large" target="_blank">https://www.kaggle.com/datasets/kojimar/20221012-123357-deberta-v3-large</a></td> </tr> <tr> <td>deberta-v2-xlarge-mnli</td> <td>0.4675</td> <td>512</td> <td>10</td> <td><a href="https://www.kaggle.com/datasets/kojimar/0919-deberta-v2-xlarge-mnli" target="_blank">https://www.kaggle.com/datasets/kojimar/0919-deberta-v2-xlarge-mnli</a></td> </tr> <tr> <td>deberta-v3-large</td> <td>0.4548</td> <td>512</td> <td>10</td> <td><a href="https://www.kaggle.com/datasets/kojimar/0926-deberta-v3-large-unscale" target="_blank">https://www.kaggle.com/datasets/kojimar/0926-deberta-v3-large-unscale</a></td> </tr> <tr> <td>pretrained hugging face embedding &amp; rapids SVR</td> <td>0.4499</td> <td>-</td> <td>15</td> <td><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">original</a>, <a 
href="https://www.kaggle.com/code/kaerunantoka/fb3-public-svr-024-infer" target="_blank">my copy to save oof and models</a></td> </tr> <tr> <td>pretrained hugging face embedding &amp; XGB</td> <td>0.4552</td> <td>-</td> <td>15</td> <td><a href="https://www.kaggle.com/code/kaerunantoka/fb3-pretrained-bert-family-xgb-train" target="_blank">https://www.kaggle.com/code/kaerunantoka/fb3-pretrained-bert-family-xgb-train</a></td> </tr> <tr> <td>bayseian ridge</td> <td>0.4446</td> <td>-</td> <td>10</td> <td><a href="https://www.kaggle.com/code/kaerunantoka/fb3-stacking-nelder-mead" target="_blank">https://www.kaggle.com/code/kaerunantoka/fb3-stacking-nelder-mead</a></td> </tr> </tbody> </table> <h3>Important Citations:</h3> <ul> <li><p>notebook</p> <ul> <li><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a></li> <li><a href="https://www.kaggle.com/code/batprem/deberta-layerwiselr-lastlayerreini-infer" target="_blank">https://www.kaggle.com/code/batprem/deberta-layerwiselr-lastlayerreini-infer</a></li> <li><a href="https://www.kaggle.com/code/quangphm/lb-0-43-simple-ensemble-deberta-base-svr" target="_blank">https://www.kaggle.com/code/quangphm/lb-0-43-simple-ensemble-deberta-base-svr</a></li></ul></li> <li><p>discussion</p> <ul> <li><a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328</a></li></ul></li> <li><p>dataset</p> <ul> <li><a href="https://www.kaggle.com/datasets/kojimar/20221012-123357-deberta-v3-large" target="_blank">https://www.kaggle.com/datasets/kojimar/20221012-123357-deberta-v3-large</a></li> <li><a href="https://www.kaggle.com/datasets/kojimar/0926-deberta-v3-large-unscale" target="_blank">https://www.kaggle.com/datasets/kojimar/0926-deberta-v3-large-unscale</a></li> <li><a href="https://www.kaggle.com/datasets/kojimar/0919-deberta-v2-xlarge-mnli" target="_blank">https://www.kaggle.com/datasets/kojimar/0919-deberta-v2-xlarge-mnli</a></li></ul></li> </ul> <p>Thanks and Acknowledgements:</p> <p>Finally, I'd like to thank Kaggle, host for hosting such an iteresting competition and all who participated in lively discussions.<br> I wish my solution helps you in your future NLP competitions!</p> <p>Thanks for comment and question !!<br> My answers are below.</p> <p>1 . I'm sorry that my figure confused you. I fixed my figure. I got oof from both pretrained embedding models and finetuned models.<br> 2 . Yes, there are a little leakage as you thought. Using the same split data (from the same seed, num_folds, … and so on) is better than as I did. By the way, we sometimes use seed averaging to avoid overfitting. I did it with that intention.</p> <p>Thank you for your encouraging comment :)</p> <p>Yes, I thought more diversity is important than more precise experiment in such a shakey cometition.</p> <p>Thank you for sharing great dataset. Without your datasets, both my CV and LB are not improved !!</p>
Feedback Prize - English Language Learning
14th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to the host for an interesting competition and congratulations to all the winners!</p> <h1>Overview</h1> <p>My final submission is ensemble of 21 models. The weights of the models are determined using nelder-mead. </p> <h1>Model</h1> <p>Using <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">this notebook</a> as a reference, I experimented a lot, changing model and training method.</p> <p>The following three things were important</p> <ul> <li>Add lstm after bert. And freeze the bert part for the first few epochs of training (model 15)</li> <li>Ensemble of low-cv models with negative weights(model 4, 12)</li> <li>Train model for each target or two targets.(model 20, 21)</li> </ul> <table> <thead> <tr> <th>model</th> <th>cv</th> <th>weight</th> </tr> </thead> <tbody> <tr> <td>1. bart-large</td> <td>0.456</td> <td>-0.009</td> </tr> <tr> <td>2. deberta-large</td> <td>0.456</td> <td>-0.089</td> </tr> <tr> <td>3. deberta-xlarge</td> <td>0.454</td> <td>0.2175</td> </tr> <tr> <td>4. deberta-base</td> <td>0.471</td> <td>-0.148</td> </tr> <tr> <td>5. luke-large</td> <td>0.456</td> <td>0.072</td> </tr> <tr> <td>6. gpt2-large</td> <td>0.460</td> <td>0.109</td> </tr> <tr> <td>7. deberta-v2-xlarge</td> <td>0.456</td> <td>0.073</td> </tr> <tr> <td>8. deberta-v3-large (add lstm)</td> <td>0.452</td> <td>0.025</td> </tr> <tr> <td>9. gpt2</td> <td>0.476</td> <td>0.047</td> </tr> <tr> <td>10. deberta-v3-large(Re-initializing layer)</td> <td>0.452</td> <td>0.073</td> </tr> <tr> <td>11. deberta-v3-base (add lstm)</td> <td>0.453</td> <td>0.014</td> </tr> <tr> <td>12. t5-large</td> <td>0.462</td> <td>-0.180</td> </tr> <tr> <td>13. gpt-neo-125M</td> <td>0.479</td> <td>0.052</td> </tr> <tr> <td>14. deberta-v3-large(pseudo label)</td> <td>0.450</td> <td>-0.013</td> </tr> <tr> <td>15. deberta-v3-large(add lstm + freeze bert(2epoch))</td> <td>0.451</td> <td>0.162</td> </tr> <tr> <td>16. deberta-v3-base(pseduo label + Re-initializing layer)</td> <td>0.454</td> <td>-0.055</td> </tr> <tr> <td>17. deberta-v3-large(add /n token)</td> <td>0.454</td> <td>0.105</td> </tr> <tr> <td>18. deberta-v3-base(add /n token)</td> <td>0.458</td> <td>0.032</td> </tr> <tr> <td>19. deberta-v3-base</td> <td>0.456</td> <td>0.109</td> </tr> <tr> <td>20. deberta-v3-large (train model for each target)</td> <td>0.452</td> <td>0.243</td> </tr> <tr> <td>21. roberta-large (train model for two targets)</td> <td>0.458</td> <td>0.159</td> </tr> <tr> <td>ensemble</td> <td>0.4435</td> <td></td> </tr> </tbody> </table> <p>Thank you for your comment. Your U.S. Patent Phrase to Phrase Matching solution was quite helpful.</p> <p>Thank you !<br> Yes, all models are the same fold cuts and the same seeds.</p>
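<p>A minimal sketch of the most important trick above (model 15): an LSTM head on top of the transformer, with the transformer frozen for the first couple of epochs; the backbone name, LSTM size and mean pooling are illustrative assumptions, not the exact training code:</p> <pre><code>import torch
import torch.nn as nn
from transformers import AutoModel

class BertLstmRegressor(nn.Module):
    """Transformer encoder followed by a BiLSTM head, with an option to freeze the encoder."""
    def __init__(self, name="microsoft/deberta-v3-large", n_targets=6, lstm_dim=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        h = self.encoder.config.hidden_size
        self.lstm = nn.LSTM(h, lstm_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * lstm_dim, n_targets)

    def set_encoder_frozen(self, frozen: bool):
        for p in self.encoder.parameters():
            p.requires_grad = not frozen

    def forward(self, input_ids, attention_mask):
        x = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        x, _ = self.lstm(x)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (x * mask).sum(1) / mask.sum(1)      # mean-pool the LSTM outputs
        return self.head(pooled)

# Training loop sketch: freeze the encoder for the first 2 epochs, then unfreeze.
# model.set_encoder_frozen(epoch < 2)
</code></pre> <p>Freezing the encoder at the start lets the randomly initialized LSTM and regression head settle before the pretrained weights start moving, which is presumably why this variant was stronger than simply adding the LSTM.</p>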
Feedback Prize - English Language Learning
15th Place Solution : 6 kinds of AttentionPooling for each metric work well for us
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>First of all</h3> <p>We appreciate this fortunate luck, and are truly honored to receive the gold medal!!<br> From the results, Yuichi Tateno ( <a href="https://www.kaggle.com/hotchpotch" target="_blank">@hotchpotch</a> ) and Masaki Aota ( <a href="https://www.kaggle.com/masakiaota" target="_blank">@masakiaota</a> ) became the Kaggle Competitions Master.</p> <h3>Overview</h3> <p>Our final submission achieved Private Score: 0.4346, Public Score: 0.4351, Local CV: 0.4431. The inference is ensembling of 11 models. Our inference notebook is here. <a href="https://www.kaggle.com/code/hotchpotch/fb3-infer-ens043-no-pl" target="_blank">https://www.kaggle.com/code/hotchpotch/fb3-infer-ens043-no-pl</a></p> <p>The summary of 11 models is following. Abbreviations appearing in column names will be explained in <strong>What Worked</strong>.</p> <table> <thead> <tr> <th>exp num</th> <th>backbone model</th> <th>CV</th> <th>Pooling</th> <th>AWP</th> <th>LLRD</th> <th>ReInit</th> <th>SW</th> <th>max_len</th> <th>LayerNorm</th> </tr> </thead> <tbody> <tr> <td>072</td> <td>deberta-v3-large</td> <td>0.4491</td> <td>Mean</td> <td>Yes</td> <td>No</td> <td>No</td> <td>No</td> <td>Inf</td> <td>No</td> </tr> <tr> <td>089</td> <td>deberta-v3-large</td> <td>0.4487</td> <td>Mean</td> <td>Yes</td> <td>Yes</td> <td>No</td> <td>No</td> <td>Inf</td> <td>No</td> </tr> <tr> <td>136</td> <td>deberta-v3-large</td> <td>0.4488</td> <td>6AP</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Inf</td> <td>No</td> </tr> <tr> <td>182</td> <td>deberta-v3-base</td> <td>0.4496</td> <td>6AP</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>640</td> <td>No</td> </tr> <tr> <td>214</td> <td>deberta-v1-large</td> <td>0.4484</td> <td>6AP</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>640</td> <td>Yes</td> </tr> <tr> <td>215</td> <td>deberta-v3-large</td> <td>0.4481</td> <td>6AP</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>640</td> <td>Yes</td> </tr> <tr> <td>226</td> <td>deberta-v3-large</td> <td>0.4477</td> <td>6AP</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>No</td> <td>Inf</td> <td>Yes</td> </tr> <tr> <td>231</td> <td>deberta-v3-base</td> <td>0.4487</td> <td>6AP</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>640</td> <td>Yes</td> </tr> <tr> <td>232</td> <td>deberta-v3-base</td> <td>0.4508</td> <td>6AP</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>640</td> <td>Yes</td> </tr> <tr> <td>233</td> <td>deberta-v1-large</td> <td>0.4486</td> <td>6AP</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>No</td> <td>640</td> <td>Yes</td> </tr> <tr> <td>234</td> <td>deberta-v1-large</td> <td>0.4495</td> <td>6AP</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Inf</td> <td>Yes</td> </tr> </tbody> </table> <p>We use the following ensemble weights:</p> <table> <thead> <tr> <th>exp num</th> <th>cohesion</th> <th>syntax</th> <th>vocabulary</th> <th>phraseology</th> <th>grammar</th> <th>conventions</th> </tr> </thead> <tbody> <tr> <td>072</td> <td>0.0186</td> <td>0.1387</td> <td>0.1042</td> <td>0.1474</td> <td>0.1135</td> <td>0.1757</td> </tr> <tr> <td>089</td> 
<td>0.0022</td> <td>0.0123</td> <td>0.0001</td> <td>0.1071</td> <td>0.2395</td> <td>0.0512</td> </tr> <tr> <td>136</td> <td>0.1295</td> <td>0.1666</td> <td>0.0589</td> <td>0.0950</td> <td>0.1235</td> <td>0.0012</td> </tr> <tr> <td>182</td> <td>0.0011</td> <td>0.1051</td> <td>0.0749</td> <td>0.2280</td> <td>0.0794</td> <td>0.0577</td> </tr> <tr> <td>214</td> <td>0.2944</td> <td>0.0930</td> <td>0.1365</td> <td>0.0485</td> <td>0.0259</td> <td>0.0922</td> </tr> <tr> <td>215</td> <td>0.0965</td> <td>0.0036</td> <td>0.0244</td> <td>0.1390</td> <td>0.0772</td> <td>0.1055</td> </tr> <tr> <td>226</td> <td>0.1654</td> <td>0.1212</td> <td>0.1700</td> <td>0.0469</td> <td>0.0394</td> <td>0.0325</td> </tr> <tr> <td>231</td> <td>0.0562</td> <td>0.0801</td> <td>0.0157</td> <td>0.0216</td> <td>0.0958</td> <td>0.1178</td> </tr> <tr> <td>232</td> <td>0.0105</td> <td>0.0434</td> <td>0.1012</td> <td>0.0827</td> <td>0.1154</td> <td>0.0830</td> </tr> <tr> <td>233</td> <td>0.1543</td> <td>0.2058</td> <td>0.1807</td> <td>0.0194</td> <td>0.0317</td> <td>0.1106</td> </tr> <tr> <td>234</td> <td>0.0704</td> <td>0.0318</td> <td>0.1335</td> <td>0.0662</td> <td>0.0596</td> <td>0.1727</td> </tr> </tbody> </table> <h3>Our Code</h3> <p>Inference: <a href="https://www.kaggle.com/code/hotchpotch/fb3-infer-ens043-no-pl" target="_blank">https://www.kaggle.com/code/hotchpotch/fb3-infer-ens043-no-pl</a></p> <p>Training: <a href="https://www.kaggle.com/code/masakiaota/fb3-15th-place-solution-train-exp231/notebook" target="_blank">https://www.kaggle.com/code/masakiaota/fb3-15th-place-solution-train-exp231/notebook</a></p> <p>If the codes are helpful, don't forget to upvote not only mine but also Y.Nakama's one (<a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a>) .</p> <h3>What Worked</h3> <ul> <li>[High Impact] 6 kinds of AttentionPooling for each metric (6AP)<ul> <li>CV : -0.0015</li> <li>This allows our model to focus on different tokens for different metrics.</li> <li>The idea can be illustrated like this (right one).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1655570%2F16fd4987360a19c9c775105b0164ac7d%2F6AP_M.jpg?generation=1670065314021872&amp;alt=media" alt=""></li></ul></li> <li>[High Impact] Adversarial Weights Perturbation (AWP)<ul> <li>CV : -0.002</li> <li>Original paper : <a href="https://proceedings.neurips.cc/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Paper.pdf" target="_blank">https://proceedings.neurips.cc/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Paper.pdf</a></li> <li>My slide (in Japanese) : <a href="https://speakerdeck.com/masakiaota/kaggledeshi-yong-sarerudi-dui-xue-xi-fang-fa-awpnolun-wen-jie-shuo-toshi-zhuang-jie-shuo-adversarial-weight-perturbation-helps-robust-generalization" target="_blank">https://speakerdeck.com/masakiaota/kaggledeshi-yong-sarerudi-dui-xue-xi-fang-fa-awpnolun-wen-jie-shuo-toshi-zhuang-jie-shuo-adversarial-weight-perturbation-helps-robust-generalization</a></li></ul></li> <li>[High Impact] Layer-wise Lerarning Rate Decay (LLRD)<ul> <li>CV : -0.0005</li> <li>Decay by *0.85 for large model, *0.7225 for base model.</li></ul></li> <li>[High Impact] Last Layer Re-initialization (ReInit)<ul> <li>CV: -0.0006</li></ul></li> <li>([High Impact] Of Cource, Ensembling)</li> <li>[Medium Impact] Max Token Length to 640 (max_len)<ul> <li>CV: - 0~0.001</li> <li>Only work for base model</li> <li>Not 
effective for the large model, but training speed becomes faster</li></ul></li> <li>[Medium Impact] SlidingWindowTransformerModel (SW)<ul> <li>CV: - 0~0.001</li> <li>But a bad combination with AWP.</li> <li>My implementation is similar to this one: <a href="https://www.kaggle.com/code/aerdem4/xgb-lgb-feedback-prize-cv-0-7322" target="_blank">https://www.kaggle.com/code/aerdem4/xgb-lgb-feedback-prize-cv-0-7322</a></li></ul></li> <li>[Medium Impact] Pseudo Labeling<ul> <li>CV: - 0.0001~0.0006</li> <li>Our submission with the highest private score did not contain the model trained with pseudo-labels.</li></ul></li> <li>[Medium Impact] High Frequency Validation and Saving<ul> <li>CV: - 0.0008</li></ul></li> <li>[Medium Impact] Layer Normalization after the Pooling Layer, before the Fully Connected Layer (LayerNorm)<ul> <li>CV: - 0~0.001</li> <li>Only works for the base model</li></ul></li> <li>[Medium Impact] Multi sample dropout<ul> <li>CV: +- 0.0</li> <li>The validation loss curve becomes a little more stable.</li></ul></li> <li>[Medium Impact] Optimize Ensembling Weights for Each Metric Independently<ul> <li>CV: - 0.0001 ~ 0.0004</li></ul></li> </ul> <h3>What Didn’t Work</h3> <ul> <li>SiFT<ul> <li>Maybe there are bugs in our code…</li></ul></li> <li>SVR<ul> <li>No leakage, but too low a CV and a bad public score. So, we didn't use SVR in the ensemble.</li></ul></li> <li>Any Augmentations</li> <li>Any Losses except L1 Smooth</li> <li>Pretraining using Feedback 1,2 Sentences<ul> <li>I’m not sure this really doesn't work…</li></ul></li> <li>Dealing with “Generic_Name” and others</li> <li>Changing the Way to Initialize Weights<ul> <li>Parameters sampled from a normal distribution were the best for us</li></ul></li> </ul> <h3>Additional Context</h3> <ul> <li>Our Best Single model (in private and CV)<ul> <li>Private: 0.4361, Public: 0.4375, CV: 0.4450</li> <li>This model was <strong>Learning from Ensembled Pseudo-Labels.</strong></li> <li>But our ensemble inference, which got the highest private score, did not contain this model. 
If we had leveraged this model in the ensemble, we might have gotten a better standing.</li> <li>This idea may be useful especially for the efficiency prize (in my opinion).</li></ul></li> </ul> <h3>Important Citations</h3> <ul> <li>Y.Nakama’s baseline <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a></li> <li>AI Shift’s tech blog (Japanese) <a href="https://www.ai-shift.co.jp/techblog/2170" target="_blank">https://www.ai-shift.co.jp/techblog/2170</a></li> <li>Discussion about Attention Pooling <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/361678" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/361678</a></li> <li>XGB+LGB Feedback Prize CV 0.7322 <a href="https://www.kaggle.com/code/aerdem4/xgb-lgb-feedback-prize-cv-0-7322" target="_blank">https://www.kaggle.com/code/aerdem4/xgb-lgb-feedback-prize-cv-0-7322</a></li> </ul> <h3>Thanks and Acknowledgements</h3> <p>We would like to thank the organizers of the competition for hosting this amazing event.<br> I personally would like to mention and thank Taro Masuda ( <a href="https://www.kaggle.com/taromasuda" target="_blank">@taromasuda</a> ), Raiii ( <a href="https://www.kaggle.com/skraiii" target="_blank">@skraiii</a> ) and MPEG ( <a href="https://www.kaggle.com/mpeg31" target="_blank">@mpeg31</a> ), who did not participate in this competition, as I have learnt so much from them in past competitions. I certainly was able to utilize that knowledge this time!</p> <h3>Team Members</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1655570%2Fb36ba8b5675df4b7e1b29ea114c0e8d6%2FTeamN.png?generation=1670066208813739&amp;alt=media" alt=""><br> From left in the above figure:</p> <ul> <li>Masaki Aota <a href="https://www.kaggle.com/masakiaota" target="_blank">@masakiaota</a></li> <li>Olivine-Ryo <a href="https://www.kaggle.com/olivineryo" target="_blank">@olivineryo</a> </li> <li>Yuichi Tateno <a href="https://www.kaggle.com/hotchpotch" target="_blank">@hotchpotch</a></li> </ul>
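<p>A minimal sketch of the "6 kinds of AttentionPooling for each metric" head from the What Worked section above, assuming the shared encoder's hidden states and attention mask are already computed; module names and sizes are illustrative, not the team's exact code:</p> <pre><code>import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh(),
                                  nn.Linear(hidden_size, 1))

    def forward(self, hidden_states, attention_mask):
        scores = self.attn(hidden_states).squeeze(-1)                  # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e4)         # ignore padding tokens
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return (weights * hidden_states).sum(dim=1)                    # (batch, hidden)

class SixHeadPooling(nn.Module):
    """One attention pooling + linear head per metric, so each target attends to its own tokens."""
    def __init__(self, hidden_size, n_targets=6):
        super().__init__()
        self.pools = nn.ModuleList(AttentionPool(hidden_size) for _ in range(n_targets))
        self.heads = nn.ModuleList(nn.Linear(hidden_size, 1) for _ in range(n_targets))

    def forward(self, hidden_states, attention_mask):
        outs = [head(pool(hidden_states, attention_mask))
                for pool, head in zip(self.pools, self.heads)]
        return torch.cat(outs, dim=-1)                                 # (batch, 6)
</code></pre> <p>The point of having six separate pooling modules is exactly what the figure above illustrates: the tokens that matter for grammar are not necessarily the ones that matter for vocabulary, so each metric gets its own attention weights over the shared encoder output.</p>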
Feedback Prize - English Language Learning
17th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the competition hosts. The Feedback competitions were a lot of fun to work on, and it is great to hear that more competitions will be launched soon. </p> <h1>Overview</h1> <p>My solution for this competition is an ensemble of 3 deberta models - one deberta-v3-base and two deberta-v3-large models. I relied on different training methods for diversity and ensembled them based on weighted average. For validation, I used MultilabelStratifiedKFold and my CV and LB were fairly correlated. Also, the essays in the training data are based on 42 topics and some topics had a lot of essays (100 - 300) while others had only a few essays (&lt; 50). Models that performed well on the topics with fewer samples also performed well on the public LB, and on the private LB as well. </p> <h1>What Worked</h1> <ul> <li>Training with AWP<ul> <li>Result: +.001</li> <li>A lot of top solutions in recent NLP competitions used AWP and it worked well for me in this competition. My strongest model was deberta-v3-large with AWP starting from the 2nd epoch. </li></ul></li> <li>Using differential learning rates<ul> <li>Result: .0008</li> <li>Based on the discussion forums, this was one of the first things I tried and I ended up using it in all my models.</li></ul></li> <li>Reinitialising the last layer<ul> <li>Result: Improved the ensemble CV by .001</li> <li>I trained this model based on the suggestions in <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/360819" target="_blank">this post</a> and it worked well in the ensemble. </li></ul></li> </ul> <h1>What Didn’t Work</h1> <ul> <li>Pseudo labels <ul> <li>I tried pretraining on pseudo labels, and using them while training as well but couldn’t get it to work. Probably needed to tune it some more as it seemed to have worked for other teams. </li></ul></li> <li>MLM</li> <li>Adding essay prompts while training</li> </ul> <h1>Important citations:</h1> <p>AWP: <a href="https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook" target="_blank">https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook</a></p> <h1>Thanks and Acknowledgements</h1> <p>Thanks to everyone who shared training tips in the discussion forums, especially <a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a>. I joined a little late and reading all the discussions really helped. </p>
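<p>A rough sketch of the differential learning rates mentioned above, implemented as layer-wise learning-rate decay over optimizer parameter groups; it assumes a hypothetical wrapper model with <code>encoder</code> (a DeBERTa/BERT-style Hugging Face model) and <code>head</code> attributes, and the decay factor and head learning rate are placeholders rather than the values used in the solution:</p> <pre><code>import torch

def llrd_param_groups(model, base_lr=2e-5, head_lr=1e-4, decay=0.85):
    """Layer-wise LR decay: encoder layer i gets base_lr * decay**(num_layers - 1 - i)."""
    groups = []
    layers = model.encoder.encoder.layer                 # assumes a DeBERTa/BERT-style encoder attribute
    n = len(layers)
    # Embeddings get the smallest learning rate of all.
    groups.append({"params": model.encoder.embeddings.parameters(),
                   "lr": base_lr * decay ** n})
    for i, layer in enumerate(layers):                   # deeper (later) layers get larger rates
        groups.append({"params": layer.parameters(), "lr": base_lr * decay ** (n - 1 - i)})
    # The freshly initialized regression head trains at the full head rate.
    groups.append({"params": model.head.parameters(), "lr": head_lr})
    return groups

# optimizer = torch.optim.AdamW(llrd_param_groups(model), weight_decay=0.01)
</code></pre> <p>The intuition is that the lower layers of the pretrained encoder already hold general language features and should move slowly, while the top layers and the new head need to adapt to the scoring task.</p>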