Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Mechanisms of Action (MoA) Prediction
|
Thank you all! First time competition, silver medal! My blending nn+tabnet+resnet
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank everyone, I learned a lot from your great notebooks and discussions.</p>
<p>My final solution is blending 3 models:<br>
NN + Tabnet + Resnet</p>
<p>NN public score 0.01832</p>
<p>Tabnet: Actually I don't know its score since I ran out of submissions; I just put it into my blending model. But I think it is around 0.0184x, since it is a modified version of this notebook from kushal:<br>
<a href="https://www.kaggle.com/kushal1506/moa-tabnet-inference" target="_blank">https://www.kaggle.com/kushal1506/moa-tabnet-inference</a></p>
<p>Resnet: public score 0.01861</p>
<p>Thank you all!</p>
|
Mechanisms of Action (MoA) Prediction
|
lucky!!! 234th to 48th. Sharing my solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. The same to you 😄</p>
<p>Congratulations!! You are an excellent kaggler 😄</p>
<p>First of all, thank you all!!! I learned a lot from the public kernels. I think participating and learning about what we are unclear on matters more than medals. Just enjoy it!<br>
This is my solution.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3841226%2Ff7998f184fc791c388cebfb82a50048d%2F1606787683512.jpg?generation=1606788356625920&alt=media" alt=""></p>
|
Melbourne University AES/MathWorks/NIH Seizure Prediction
|
Single-subject models vs general models
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Melbourne University AES/MathWorks/NIH Seizure Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all,</p>
<p>I developed a general model for this competition, partly because general models are presumably more clinically useful, and partly because it simply worked better than my implementation of single-subject models.</p>
<p>I'd like to try and gain some insight as to why, as conceptually, single-subject models should work better. It's possible that there was a bug in my single-subject model code, or that I tried to combine the predictions in an inefficient way.</p>
<p>Single models should work better for a number of reasons:</p>
<ul>
<li>There's no correspondence between the channels across subjects. Although the devices used to record the data were presumably the same, and hence have the same channel layouts, they were not necessarily positioned on the same parts of the cortex.</li>
<li>Even if there was correspondence between the channels, individual brains are very different. </li>
<li>Individual subjects are also very different as a whole, and they probably had their brains doing very different things throughout the data collection period.</li>
</ul>
<p>There are also reasons why general models may work well:</p>
<ul>
<li>They're likely to over-fit less to the test set, particularly when there's relatively little preictal data and only 30% of the test data available publicly.</li>
<li>One good predictor of seizures is band power in the lower frequencies. Low frequency power is more global than higher frequencies, making the lack of correspondence between channels less of a problem.</li>
<li>The models I used, particularly the RUS boosted tree ensemble, could clearly tell the difference between subjects, although they actually performed worse when given specific subject information (like subject number as a feature). </li>
</ul>
<h3>Raw predictions (RUS Boosted Tree ensembles)</h3>
<p><strong>Single-subject models:</strong> <br>
<img src="https://s11.postimg.org/elrh8rbkz/RBTs.png" alt="enter image description here" title="Single-subject (RUS boosted trees)"> <br>
This model would score ~0.74 in local CV and ~0.63 on the LB. </p>
<p><strong>General model:</strong> <br>
<img src="https://s17.postimg.org/vovhkpyf3/RBTg.png" alt="enter image description here" title="General (RUS boosted trees)"> <br>
This model would score ~0.73 in local CV and ~0.74 on the LB. </p>
<p>Note that epoch predictions are the predictions of the ~100 s windowed training data (corresponding to the x-axis) and segment preds are the average of these, representing rows in the test set (and don't line up with the x-axis).
In both sets of predictions, the divisions between subjects in the test set are quite clear. For the single models, it seems the relative information between subjects is lost. I tried adjusting the prior when fitting models, but got better performance using the proportions in each training group (i.e. 30% preictal for subject 1, ~10% for subjects 1 and 2, and some weighted average of these for the general model) than with any other value.</p>
<p><img src="https://s14.postimg.org/d8jtl9ytd/RBTs_vs_RBTg.png" alt="enter image description here" title="Scatter of single-subject model predictions vs. general model predictions, for each subject."> <br>
Scatter of single-subject model predictions vs. general model predictions, for each subject.</p>
<h3>Questions</h3>
<ul>
<li>Did anyone else have more success with a general model than subject-specific models?</li>
<li>How did people combine the predictions for single models? </li>
</ul>
<p>Regarding combining the predictions together from the single models, I didn't do any post-processing. I simply combined them, which, as you say fergusoci, probably means the general model would be better "calibrated". I was a bit worried about how different the predictions looked for subject 1, though.</p>
<p>It doesn't necessarily follow that the predictions from different models would be directly comparable, but I didn't attempt anything apart from simply combining them as this seemed to work ok for the first test set. Also, I wasn't really sure how to go about doing it. Did you guys both use single-subject models? How did you combine the predictions for the submission?</p>
|
Melbourne University AES/MathWorks/NIH Seizure Prediction
|
Solution (5th place)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Melbourne University AES/MathWorks/NIH Seizure Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>We would like to express our gratitude to organizers and all participants.</strong> It was really interesting competition and it would be just great to participate in other similar competitions related to fields of biomedical data analysis.</p>
<p>Here is a description of our solution, which scored <strong>AUC 0.81423</strong> on the public leaderboard and <strong>0.79363</strong> on the private leaderboard.</p>
<h2>Software</h2>
<p>All data analysis and models were built using Python. Libraries used: scikit-learn, pandas, xgboost. </p>
<h2>Preprocessing</h2>
<p>The signal from each file was divided into 30-second epochs without any filtering, and features were extracted from each epoch. We also tried 15- and 60-second epoch lengths, but the results were worse. </p>
<h2>Feature extraction</h2>
<p>We tried many features in different combinations during this competition, but not all of them were used in the final models. <strong>Feature sets</strong> we tried: </p>
<ol>
<li><a href="https://www.kaggle.com/deepcnn/melbourne-university-seizure-prediction/feature-extractor-matlab2python-translated">Deep’s kernel</a> for features extraction.</li>
<li><a href="https://www.kaggle.com/treina/melbourne-university-seizure-prediction/feature-extractor-matlab2python-translated">Tony Reina’s kernel</a> for features extraction.</li>
<li>Correlation between all channels (120 features).</li>
<li>Correlation between the spectra of all channels (120 features).</li>
<li>Spectral features version 1: total energy (sum of all elements in range 0-30 Hz), energy in delta (0-3 Hz), theta (3-8 Hz), alpha (8-14 Hz) and beta (14-30 Hz) bands, energy in delta, theta, alpha and beta bands divided by total energy, ratios between energies of all bands.</li>
<li>Spectral features version 2: the same as spectral feature set 1, plus the low and high gamma bands were included in the calculation of total energy, band energies and ratios between band energies. In addition, the mean energy in each band was extracted.</li>
<li>Spectral features version 3: power spectral density was calculated for the whole epoch. It was then divided into 1 Hz ranges and the energy in each range was calculated (30 features).</li>
</ol>
|
Melbourne University AES/MathWorks/NIH Seizure Prediction
|
Solution (8th)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Melbourne University AES/MathWorks/NIH Seizure Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yep, that looks exactly the same. Just the eegData in your case seems to be transposed, my data matrix has dimensions 240000x16 but that does not matter.</p>
<p>So I finally cleaned up the code a little and uploaded it, you can find it <a href="https://github.com/nahlik-michal/kaggle-melbourne-seizure-prediction">here</a>. Some things might look strange but it was part of a framework I made that allowed me to easily switch models and settings and I just cut it out and cleaned.</p>
<p>When I was testing the code I found out that if you use the general model with features calculated on data including dropouts, you can get a surprisingly high score on the private leaderboard (like 0.82382 AUC). But I'm not sure the result should be taken seriously, as the general model favors the overall AUC and it's trained on data with dropouts.</p>
<p>Not a single tree per channel but a single model per channel. The <em>fitctree</em> actually creates a model with a tree for every fold in cross validation, so for every channel I ended up with 10 trees in the model and I used the mean output across these trees (maybe I should have added that to the summary to make it clear). I guess it's similar to your ensemble, but in total there were 160 trees per subject (10 per channel), so slightly more than in your model. Earlier I tried to fit the model across all channels, but the per-channel approach seemed to give better results, so I did not spend much time on it.</p>
<p>I was quite surprised that the general model performed the same as the per-subject models, and this was the only explanation that occurred to me. Testing it on an unseen subject seems to confirm this explanation. And it is kind of a letdown regarding making a model that could be used in general for new data.</p>
<p>Thanks for the recommendation. I was thinking about upgrading my quite old Matlab to a newer one before this competition; unfortunately I did not have enough time to play with it anyway, so I stuck with the old version, but for future projects I am going to upgrade.</p>
<p>First I would like to thank the organizers and all the participants, this competition was very interesting, I really enjoyed it and it would be great to see some useful ideas coming out of this competition.</p>
<p>So here is a summary of my final solution based on <strong>Classification decision trees</strong> which scored <strong>0.80396 AUC</strong> on public leaderboard and <strong>0.79074 AUC</strong> on private leaderboard:</p>
<h2>Software</h2>
<p>Matlab 2014a</p>
<h2>Data</h2>
<p>I used all data files marked as safe in <em>train_and_test_data_labels_safe.csv</em>. No preprocessing was done.</p>
<h2>Features</h2>
<p>Features were calculated on the whole 10 minute files for each channel without splitting into any shorter epochs. </p>
<p>I basically took all the features from the sample submission script and added a few more based on my hunches and some articles about this topic. The features included:</p>
<ul>
<li>mean value, standard deviation, skewness, kurtosis, spectral edge, Shannon’s entropy (for signal and Dyads), Hjorth parameters, several types of fractal dimensions</li>
<li><del>eigenvalues</del> singular values of 10 scale wavelet transformation using Morlet wave</li>
<li>maximum correlation between channels in interval -0.5,+0.5 seconds (the maximum was pretty much always at 0 second delay anyway), correlation between channels in frequency domains, correlation between channels power spectrums at each dyadic level</li>
</ul>
<p>I had 73 features in total for each channel, only the real part of features was used.</p>
<h2>Cross validation</h2>
<p>I used <em>cvpartition</em> from Statistical toolbox which can create random partitions where each subsample has equal size and roughly the same class proportions. I did not care about sequences which caused my local AUC results to be around 0.1 higher than the leaderboard ones.</p>
<h2>Model</h2>
<p>A classification decision tree model was created for each channel and patient; the mean output across channels for the patient was used as the outcome. Models were trained with 10-fold cross validation to prevent overfitting.
Because we had only 2 classes, the <em>Exact</em> training algorithm was used (see Matlab documentation for <em>fitctree</em> for more details).</p>
<p>Training models and generating output for each patient took in all cases under one minute.</p>
<p>The most important predictors across all decision trees were: correlation between channel power spectrums at dyadic levels, spectral edge, Shannon’s entropy for Dyads, mean value and 3rd estimate of fractional Brownian motion.</p>
<h2>Discussion</h2>
<p>I tried to make both single-subject models and a general model, as the general model would be more useful. Both approaches actually reached the same scores on the leaderboard, which is probably caused by the fact that the general model was trained on all the subjects on which it was then tested. That might allow the decision trees to grow different branches for each subject at some point and thus reach the same score as the single-subject models. When I tried to cross-validate the general model by subject (2 subjects used for training, 1 subject for testing) the results got much worse, around 0.6 AUC. I guess I would have to normalize the features to make the general model work for unseen subjects, but I did not have time to pursue this idea.</p>
<p>I am sorry, that is my mistake, there should be written "singular values" instead of eigenvalues (I will fix that) as the eigenvalues are just for squared matrices. <a href="https://en.wikipedia.org/wiki/Singular_value">Singular values</a> are a similar concept as eigenvalues but can be used for non-squared matrices. To calculate them you can use <em>singular value decomposition</em> (<a href="https://uk.mathworks.com/help/matlab/ref/svd.html">svd</a>) function in Matlab. Thanks for pointing that out.</p>
<p>I tried splitting into shorter epochs (30 seconds, 1 minute and 5 minutes) but it did not bring any significant improvement, so I decided not to use them and just go with the 10-minute samples. I was thinking about giving it a try again, but as the competition was coming to an end I did not have time for that.</p>
<p>Hi,</p>
<p>I am sorry, I probably introduced a bug there during refactoring; I forgot to put a parameter into the function call. I fixed it in the git repo, so it should work when you update your project.
Thanks for reporting that.</p>
<p>I was not able to replicate that. Would you mind copying the whole error trace here? Line 25 of get_morel_perf is plotting a bar; I can't see a reason it would not work.
Could you also upload your features to my google drive so I can run it on your data, here is a shared folder where you can upload it: <a href="https://drive.google.com/drive/folders/0B-ERL8FuD3kEcklYbUphdTdtZ0k?usp=sharing">https://drive.google.com/drive/folders/0B-ERL8FuD3kEcklYbUphdTdtZ0k?usp=sharing</a></p>
|
Melbourne University AES/MathWorks/NIH Seizure Prediction
|
Solution(4th place)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Melbourne University AES/MathWorks/NIH Seizure Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Dear all,</p>
<p>I have uploaded my code. The parameters were set to generate the best solution of the ones I tried, but you can try to alter the features and enable parameter searching so that better LB scores can be achieved. <a href="https://github.com/GainaTang/Melbourne-University-AES-MathWorks-NIH-Seizure-Prediction-4th-solution-.git">You can access my code here</a></p>
<p>There is a hot discussion in the forum about probability tuning and averaged AUC vs. combined AUC. I didn't use any probability tuning across the patients, and I don't think there is any gain in doing that. To the best of my knowledge, a single model with slight differences in parameters outperforms stacked models with overly complicated features. Model stacking increases the required computation, and complicated features are hard to generalise; minimal computation and good generalisation are both key to accurate seizure prediction.</p>
<p>My strategy was to train a model on each patient independently and pick the best set of parameters to make predictions. This was done in the hope that the model can make good predictions for every patient, instead of an averaged good prediction for all. In the latter scenario, the prediction may be bad for some patients but extremely good for others. That is not a realistic case: in real-world prediction we wouldn't want any patient to suffer due to inaccurate prediction.</p>
<p>Hi all, now that the private LB scores are available, I can better see which submission yields a higher score on the whole data set. I want to share the method of my best submission (0.79458 private and 0.79650 public):</p>
<ol>
<li><strong>model</strong>: gradient boosting classifier</li>
<li><strong>feature</strong>: the original data (denoted D) were first split into 75s windows, then resampled to 100Hz. A real FFT was applied to the resampled data, resulting in a frequency spectrum from 0-50Hz (denoted F).
1) F was divided into 50 equally spaced bands from 0.67-46.67 Hz; the power of these bands, as well as their correlation matrix between channels and its eigenvalues, was extracted as the first feature set. 2) The second feature set was extracted by dividing F into five frequency bands: delta (0.1-4Hz), theta (4-8Hz), alpha (8-12Hz), beta (12-30Hz), low-gamma (30-50Hz). High gamma was not included as it exceeds the maximum frequency of F. The power and entropy of the five bands serve as the second feature set. 3) The spectral edge power at 50% of the power up to 50Hz was added as the third feature set. 4) The time-domain correlation matrix between channels of D and its eigenvalues were the fourth feature set. 5) <strong>Last but not least</strong>, all the features were squared and added as the final feature set. </li>
<li><strong>data used:</strong> all data with safe label=1</li>
<li>some comments: only the best-scoring solution is described here; if anyone is interested in the other things I tried and their relative performance, I'd be happy to share. Overall the performance of these features should be very stable, as you can see the private and public LB are very close.</li>
</ol>
<p>@Benjamin Chu, I was using Gradient Boosting, not XGB. The parameters are as follows: n_estimators=6000, max_depth=10, min_samples_leaf=5, min_samples_split=2, learning_rate=0.001, max_features=40, subsample=0.65.
@buptxiaofeng cross-validation was applied with a self-written algorithm, so that sequences from the same hour are kept in the same fold and the class labels are also stratified in each fold. </p>
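<p>For readers who want to reproduce this setup, here is a minimal sketch of the quoted configuration, assuming the scikit-learn <code>GradientBoostingClassifier</code> (the post only says "Gradient Boosting, not XGB", so the exact library is an assumption; X_train/y_train are placeholders):</p>
<pre><code>from sklearn.ensemble import GradientBoostingClassifier

# Parameters quoted in the post; everything else is left at library defaults.
gbc = GradientBoostingClassifier(
    n_estimators=6000,
    max_depth=10,
    min_samples_leaf=5,
    min_samples_split=2,
    learning_rate=0.001,
    max_features=40,
    subsample=0.65,
)
# gbc.fit(X_train, y_train)
# preictal_prob = gbc.predict_proba(X_test)[:, 1]
</code></pre>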
<p>I am in the process of commenting my code and it will be shared shortly. One interesting thing I found in earlier candidate submissions, which I did not include in the final solution, is that if you resample the data to 160Hz and extract the equal-band power from 0.67~33.33 Hz, you get an even better result. See attached. Anyone interested can try this approach. </p>
|
Melbourne University AES/MathWorks/NIH Seizure Prediction
|
Winning solution (1st)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Melbourne University AES/MathWorks/NIH Seizure Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2><strong>Overview of the winning solution</strong></h2>
<p>The winning solution is a blend of 11 models created by the team members after we teamed up. All models were created subject-specific. No usage of test data and no cross-subject probabilistic tuning was performed. The blend is done using an average of the ranked predictions of each individual model. The blend has been designed to reduce overfitting and improve the robustness of the solution. To this end, we limited ourselves to the minimum of weight tuning, choosing a weight of 1 for all models.
Each model will be described in detail below.</p>
<h2><strong>Alex / Gilberto Models</strong></h2>
<p>A total of 4 models were selected for the final ensemble.</p>
<p>For all models, preprocessing consisted of segmentation of the 10-minute segment into 30 non-overlapping 20-second segments. No filtering or artifact rejection was applied. After modeling, the maximum of the predictions over these 30 smaller time windows was assigned to the 10-minute segment.
Total training time (including feature extraction) is estimated at less than half a day for these 4 models. We used python, scikit-learn, pyRiemann, xgboost, mne-python and pandas.</p>
<p><strong>Model 1 : XGB, 10 bags</strong></p>
<p>Features : This dataset consists of the normalized log power in 6 different frequency bands (0.1 - 4 ; 4 - 8 ; 8 - 15 ; 15 - 30 ; 30 - 90 ; 90 - 170 Hz) for each channel. Power spectral density was estimated using Welch’s method (window of 512 samples, 25% overlap). PSD was averaged in each band and normalized by the total power before applying a logarithm. Total size of this dataset is 6 x 16 = 96 features.</p>
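<p>A minimal sketch of this relative log-power feature set, assuming 16-channel segments sampled at 400 Hz (240000 samples per 10-minute file) and scipy's Welch estimator; the exact normalisation details are an assumption:</p>
<pre><code>import numpy as np
from scipy.signal import welch

BANDS = [(0.1, 4), (4, 8), (8, 15), (15, 30), (30, 90), (90, 170)]

def relative_log_power(segment, fs=400):
    """segment: (n_channels, n_samples) -> 6 features per channel (96 for 16 channels)."""
    feats = []
    for ch in segment:
        freqs, psd = welch(ch, fs=fs, nperseg=512, noverlap=128)  # 512-sample window, 25% overlap
        total = psd.sum()
        for lo, hi in BANDS:
            band = psd[(freqs >= lo) & (freqs < hi)].mean()       # average PSD in the band
            feats.append(np.log(band / total + 1e-12))            # normalise by total power, then log
    return np.asarray(feats)
</code></pre>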
<p><strong>Model 2 : XGB, 5 bags</strong></p>
<p>Features : This dataset includes the relative log power dataset mentioned before with the addition of various measures including signal statistics (mean, min, max, variance, 90th and 10th percentiles), AR error coefficient (order 5), Petrosian and Higuchi fractal dimensions and the Hurst exponent. Total size of this dataset is 21 x 16 = 336 features.</p>
<p><strong>Model 3 : XGB, 4 bags.</strong></p>
<p>Features : For each channel, an auto-correlation matrix was estimated by concatenating time-delayed single-channel signals before estimating the correlation matrix. The signal was downsampled by 2 and 8 logarithmically spaced delays were used (0, 1, 2, 4, 8, 16, 32, 64). Each of the autocorrelation matrices was projected into its riemannian tangent space (see [1]; this operation can be seen as a kernel operation that unfolds the natural structure of symmetric positive definite matrices) and vectorized to produce a single feature vector of 36 items. Total size of this dataset was 36 x 16 = 576.</p>
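<p>A rough sketch of these per-channel features, assuming pyRiemann's <code>TangentSpace</code> and channels that have already been downsampled by 2 (the synthetic data at the bottom is only there to make the example runnable):</p>
<pre><code>import numpy as np
from pyriemann.tangentspace import TangentSpace

DELAYS = [0, 1, 2, 4, 8, 16, 32, 64]

def delay_autocorrelation(channel):
    """Correlation matrix of the time-delay embedded single-channel signal (8 x 8)."""
    n = len(channel) - max(DELAYS)
    embedded = np.stack([channel[d:d + n] for d in DELAYS])
    return np.corrcoef(embedded)

rng = np.random.default_rng(0)
epochs = rng.standard_normal((10, 4000))                     # placeholder single-channel epochs
mats = np.stack([delay_autocorrelation(e) for e in epochs])  # (10, 8, 8) SPD matrices
feats = TangentSpace(metric="riemann").fit_transform(mats)   # (10, 36): 36 features per channel
</code></pre>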
<p><strong>Model 4: XGB, 10 bags.</strong></p>
<p>Features : This feature set is composed of the cross-frequency coherence (in the same 6 sub-bands as the relative log power features) of each channel, i.e. the coherence is estimated between pairs of frequencies of the same channel instead of between pairs of channels for each frequency band. This produces a set of 6x6 coherence matrices, which are then projected into their tangent space and vectorized. Total size of this dataset is 21 x 16 = 336.</p>
<p><strong>Models that did not make it into the ensemble</strong></p>
<p>The difficulty (or impossibility) of building a reliable cross-validation procedure made it very difficult to properly select the best-performing feature sets. Other features developed during this challenge include coherence, correlation, spectral edge frequency, cumulative power and peak frequency covariance. In addition to XGB modeling, logistic regression was used but led to slightly lower performance (depending on the feature set).</p>
<p><strong>Remark</strong></p>
<p>When slicing the larger 10-minute segments with a 20s time window, the question arises of how to optimally combine the window predictions. We chose here to use the maximum probability to represent the probability of the 10-minute segment. The rationale was that patterns predictive of seizures were likely not stationary during the whole 10 minutes. Post-competition analysis revealed that using the mean of the probabilities led to a significant decrease in performance (0.674 Public / 0.722 Private). Very interestingly, using the standard deviation of the probabilities led to an increase in performance (0.805 Public / 0.776 Private). This finding seems to validate our initial hypothesis. However, this observation only holds true for the 4 models described above.</p>
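<p>The aggregation step itself is tiny; a sketch with placeholder window probabilities:</p>
<pre><code>import numpy as np

window_probs = np.random.rand(30)        # 30 window-level preictal probabilities for one segment

segment_score_max = window_probs.max()   # rule used by the 4 models above
segment_score_mean = window_probs.mean() # post-competition: noticeably worse
segment_score_std = window_probs.std()   # post-competition: surprisingly better
</code></pre>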
<h2><strong>Feng models</strong></h2>
<p>A total of 4 models were selected for the final ensemble </p>
<p><strong>Preprocessing</strong> : A Butterworth filter (5th order with 0.1-180 Hz cutoff) was first applied to the raw data, and then I partitioned the raw data into non-overlapping 30s windows (50s windows for GLM). The arithmetic mean over the individual analysis windows was used to aggregate them into a single probability score for each 10-minute segment.
Total training time (including feature extraction) is estimated at less than 6 hours for these 4 models on my 8 GB RAM MacBook Pro.</p>
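<p>A minimal sketch of this preprocessing, assuming 16-channel data sampled at 400 Hz (240000 samples per 10-minute file) and scipy's Butterworth filtering:</p>
<pre><code>from scipy.signal import butter, sosfiltfilt

FS = 400  # assumed sampling rate (240000 samples per 10-minute file)

def preprocess(raw, win_sec=30):
    """raw: (n_channels, n_samples). Band-pass 0.1-180 Hz, then cut into non-overlapping windows."""
    sos = butter(5, [0.1, 180], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, raw, axis=1)
    win = win_sec * FS
    n_win = filtered.shape[1] // win
    windows = filtered[:, :n_win * win].reshape(raw.shape[0], n_win, win)
    return windows.transpose(1, 0, 2)   # (n_windows, n_channels, win_samples)
</code></pre>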
<p><strong>Features :</strong></p>
<p><em>Feature 1:</em></p>
<p>The standard deviation and average spectral power in delta (0.1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), low gamma (30–70 Hz) and high gamma (70–180Hz)</p>
<p><em>Feature 2:</em></p>
<p>The correlations in the time domain and frequency domain (upper-triangle values of the correlation matrices) together with their eigenvalues.</p>
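<p>A short sketch of the time-domain half of this feature set (the frequency-domain version would apply the same computation to the spectra); 16 channels are assumed:</p>
<pre><code>import numpy as np

def correlation_features(window):
    """window: (n_channels, n_samples) -> upper-triangle correlations plus eigenvalues."""
    corr = np.corrcoef(window)
    iu = np.triu_indices_from(corr, k=1)      # 16*15/2 = 120 upper-triangle values
    eig = np.sort(np.linalg.eigvalsh(corr))   # 16 eigenvalues of the correlation matrix
    return np.concatenate([corr[iu], eig])
</code></pre>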
<p><strong>Models :</strong></p>
<p><em>Model 1:</em> XGB with feature 1</p>
<p><em>Model 2:</em> KNN with feature1</p>
<p><em>Model 3:</em> KNN with feature1+feature2</p>
<p><em>Model 4:</em> Logistic Regression with L2 penalty with feature1+feature2</p>
<p><strong>Remark</strong></p>
<p>I read relevant papers at the beginning of this competition and found one could generate thousands of features from the raw EEG. Because I don't have a background in digital signal processing, I generated those features based on common features used in some important papers and my intuition. Too many noise features and correlated features would damage the performance of most classifiers.</p>
<p>Actually, a simple weighted blend of Model 1, Model 2 and Model 4 could get a 0.8+ combined AUC score. </p>
<h2><strong>Andriy models</strong></h2>
<p>A total of 3 models were selected for the final ensemble . </p>
<p><strong>Preprocessing:</strong>
For all models, preprocessing consisted of a) demeaning the EEG signal, b) filtering the EEG signal between 0.5 and 128 Hz with a notch filter set at 60Hz, c) downsampling to 256 Hz, d) segmentation of the 10-minute segment into non-overlapping 30-second segments. After modeling, the maximum probability was taken to represent the probability of preictal for the whole 10-minute window. </p>
<p><strong>Feature extraction:</strong> the features can be divided into two groups, per-channel features (sometimes called univariate) and cross-channel features (multivariate). From each EEG channel, 111 features were extracted from the time, frequency and information theory domains to capture energy, frequency, temporal and structural information and to form a generic description of the EEG signal. These features have been previously used in several EEG applications, including seizure detection in newborns and adults [2-3]. These include: peak frequency of spectrum, spectral edge frequency (80%, 90%, 95%), fine spectral log-filterbank energies in 2Hz-width sub-bands (0-2Hz, 1-3Hz, ...30-32Hz), coarse log filterbank energies in delta, theta, alpha, beta, gamma frequency bands, normalised FBE in those sub-bands, wavelet energy, curve length, number of maxima and minima, RMS amplitude, Hjorth parameters, zero crossings (raw epoch, Δ, ΔΔ), skewness, kurtosis, nonlinear energy, variance (Δ, ΔΔ), mean frequency, bandwidth, Shannon entropy, singular value decomposition entropy, Fisher information, spectral entropy, autoregressive modelling error (model order 1-9). These led to 111*16=1776 features in a concatenated feature vector.</p>
<p>Apart from univariate measures, autoregressive modelling error (model order 1-9) was extracted from a single channel following common spatial filtering. The cross-channel features consisted of the following characteristics: lag of maximum cross correlation, correlation, brain asymmetry, brain synchrony index, coherence, and frequency of maximum coherence. These 6 features were extracted for the five conventional EEG subbands (delta, theta, alpha, beta, gamma) for 6 different montages (horizontal, vertical, diagonal, etc) leading to 180 features. </p>
<p>Both univariate and multivariate features form a pool of 1965 features. </p>
<p><strong>Feature selection:</strong> for feature selection, the pool of features was fed to an XGB classifier from which feature importances were computed, and the top N features were used for some models. </p>
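<p>A hedged sketch of that selection step (the XGB hyperparameters and variable names are placeholders, not the author's settings):</p>
<pre><code>import numpy as np
import xgboost as xgb

def top_n_feature_indices(X, y, n):
    clf = xgb.XGBClassifier(n_estimators=200, max_depth=4)  # placeholder hyperparameters
    clf.fit(X, y)
    return np.argsort(clf.feature_importances_)[::-1][:n]   # indices of the n most important columns

# e.g. Model 2 (SVM) on the top 300 and Model 3 (GLM) on the top 100:
# X_svm = X[:, top_n_feature_indices(X, y, 300)]
# X_glm = X[:, top_n_feature_indices(X, y, 100)]
</code></pre>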
<p><strong>Model:</strong></p>
<p><em>Model 1:</em> All features were used in a bagged XGB classifier (XGB). </p>
<p><em>Model 2</em>: Linear SVM was trained with top 300 features (SVM)</p>
<p><em>Model 3</em>: GLM was trained with top 100 features (Glmnet)</p>
<h2><strong>CV method</strong></h2>
<p>Building a representative CV was one of the most challenging tasks in this competition, especially after the leak. We tried many CV approaches and most of them gave us too optimistic AUC scores. </p>
<p>Finally, our CV estimate mainly came from observing the scores of 2 approaches. Neither of them was perfect, and they served as an indicator of whether to trust the LB or not. Both CV approaches are based on preserving the integrity of 1-hour segments, and only "safe" data were used. </p>
<p>2-Fold CV: The first fold is the first half of the interictal data and the train preictal data. The second fold is the second half of the interictal data and the old test preictal data. The problem with this approach is that we have a time-based split of the preictal data and no such split for the interictal data. Also, as only 2 folds are used, the results carry natural measurement noise.</p>
<p>26-Fold CV: Because we have 25 one-hour preictal segments in the training data per patient, each 1-hour segment forms one of the first 25 folds, whereas, as no sequence was provided for the old test data, all of it was placed in the 26th fold. Interictal data are split into 26 folds similarly, preserving the 1-hour sequence integrity and keeping the proportion of preictal to interictal roughly equal in all folds. </p>
<h2><strong>Whole team ensemble</strong></h2>
<pre><code>Wholeteam_ensemble 0.81308 0.80701
</code></pre>
<h2><strong>Reference</strong></h2>
<ol>
<li><p>Barachant, A., et al. "Classification of covariance matrices using a Riemannian-based kernel for BCI applications." Neurocomputing 112 (2013): 172-178.</p></li>
<li><p>A. Temko, E. Thomas, W. Marnane, G. Lightbody, G. Boylan, EEG-based neonatal seizure detection with support vector machines, Clin. Neurophysiol. 122 (2011) 464-473.</p></li>
<li><p>A. Temko, E. Thomas, W. Marnane, G. Lightbody, G. Boylan, Performance assessment for EEG-based neonatal seizure detectors, Clin. Neurophysiol. 122 (2011) 474-482.</p></li>
</ol>
<hr>
<p>Thanks to all the organizers and kagglers in this research competition. </p>
<p>We will put a more detailed description with score for each model and code by the end of this week. </p>
|
Mercari Price Suggestion Challenge
|
13th Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's quite tricky actually. I made 20 kernels and only ver-19 passed the time limit.</p>
<p>Kernel : <a href="https://www.kaggle.com/muhammadalfiansyah/push-the-lgbm-v19/code">https://www.kaggle.com/muhammadalfiansyah/push-the-lgbm-v19/code</a></p>
<p>This model is an ensemble of Ridge, RNN and LGBM. </p>
<p>I train Ridge and RNN with 2 folds, then use their out-of-fold predictions as input for my LGBM model.</p>
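<p>A simplified sketch of that stacking step, using scikit-learn's out-of-fold predictions for the Ridge part (the RNN out-of-fold predictions would be appended the same way; names and hyperparameters are placeholders):</p>
<pre><code>import numpy as np
import lightgbm as lgb
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def stack_ridge_into_lgbm(X_text, X_tabular, y):
    """X_text: sparse n-gram matrix, X_tabular: dense features, y: log1p(price)."""
    ridge_oof = cross_val_predict(Ridge(alpha=1.0), X_text, y, cv=2)   # 2-fold out-of-fold predictions
    X_lgb = np.hstack([X_tabular, ridge_oof.reshape(-1, 1)])           # OOF preds become an extra column
    booster = lgb.LGBMRegressor(n_estimators=2500, learning_rate=0.1)  # placeholder settings
    return booster.fit(X_lgb, y)
</code></pre>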
<p>Ridge : Mostly 2-3 n-grams from name and description. I also trained another Ridge with a transformed target.</p>
<p>RNN : <a href="https://www.kaggle.com/nvhbk16k53/associated-model-rnn-ridge">https://www.kaggle.com/nvhbk16k53/associated-model-rnn-ridge</a></p>
<p>Nothing special added. I just modified it to use only one dense layer, a smaller GRU and a smaller sequence length to speed up the training process. </p>
<p>LGBM : <a href="https://www.kaggle.com/muhammadalfiansyah/simple-lgbm-2500-boost">https://www.kaggle.com/muhammadalfiansyah/simple-lgbm-2500-boost</a></p>
<p>I found out that LGBM works best using only 1-grams and CountVectorizer; adding more n-grams degrades the performance.
I also found out that the more I overfit my NN, the better the ensemble score is.
I found out that a better score for each individual model doesn't mean a better ensemble score. </p>
<p>Ridge Only = 0.48xx</p>
<p>LGBM Only = 0.419x</p>
<p>RNN Only = 0.44xx</p>
<p>Ridge OOF+ LGBM = ~0.407</p>
<p>RNN OOF+ LGBM = ~0.408</p>
<p>Ridge OOF+ RNN OOF+ LGBM = 0.400x</p>
<p>I think the key in my ensemble is having a different set of inputs for each model.</p>
<p>Hope that helps!</p>
<p>Congrats to your great performance as well!!!!!! </p>
|
Mercari Price Suggestion Challenge
|
14th solution,CNN+finetune+FM
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My solution is an ensemble of CNN and FM. The most important part is finetuning on CNN.</p>
<h2>1. CNN</h2>
<p>1) embeddings of the fields <code>name</code> and <code>item_description</code> are initialized from glove.twitter.27B.d.txt; I chose the <code>twitter</code> version since Twitter is UGC, which may be similar to customer reviews.
<br>2) each feature, including shipping and item_condition_id, is embedded into a 50-dim vector.
<br>3) single-layer CNN, using 40 filters of size 2 and 40 filters of size 3; I tried filters of size 1 and size 4 without much improvement; GlobalAveragePooling1D was used.
<br>4) concatenate every feature and feed them to BN-dense(512, relu, he_normal)-dropout(0.1)-BN-dense(64, relu, he_normal)-dropout(0.1)-BN-dense(32, relu, he_normal)-dropout(0.1)-dense(1, linear, he_normal)
<br>5) I found RMSProp worked better than Adam with my features; it could improve ~0.001
<br>6) epochs=3, batch_size=2048
<br>7) rmsle is between 0.415~0.418 on my CV; the result is not stable.</p>
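<p>A condensed Keras sketch of the architecture described above; vocabulary sizes, sequence lengths and the category cardinalities are placeholder assumptions, and loading the GloVe twitter matrix into the text embeddings is omitted:</p>
<pre><code>from tensorflow.keras import Model, layers, optimizers

def build_cnn(name_len=10, desc_len=75, vocab=50000, emb_dim=50):
    name_in, desc_in = layers.Input((name_len,)), layers.Input((desc_len,))
    ship_in, cond_in = layers.Input((1,)), layers.Input((1,))

    def text_branch(inp):
        x = layers.Embedding(vocab, emb_dim)(inp)   # initialised from glove.twitter.27B in the write-up
        c2 = layers.GlobalAveragePooling1D()(layers.Conv1D(40, 2, activation="relu")(x))
        c3 = layers.GlobalAveragePooling1D()(layers.Conv1D(40, 3, activation="relu")(x))
        return layers.concatenate([c2, c3])

    ship = layers.Flatten()(layers.Embedding(2, 50)(ship_in))   # shipping -> 50-dim embedding
    cond = layers.Flatten()(layers.Embedding(6, 50)(cond_in))   # item_condition_id -> 50-dim embedding

    x = layers.concatenate([text_branch(name_in), text_branch(desc_in), ship, cond])
    for units in (512, 64, 32):
        x = layers.BatchNormalization()(x)
        x = layers.Dense(units, activation="relu", kernel_initializer="he_normal")(x)
        x = layers.Dropout(0.1)(x)
    out = layers.Dense(1, activation="linear", kernel_initializer="he_normal")(x)

    model = Model([name_in, desc_in, ship_in, cond_in], out)
    model.compile(loss="mse", optimizer=optimizers.RMSprop())   # RMSProp worked better than Adam here
    return model
</code></pre>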
<h2>2. Finetune</h2>
<p>1) since different categories have different price distributions, after training the CNN model for 3 epochs I trained each category for 2 more epochs with the same architecture, only on items of that exact category.
<br>2) This improved rmsle to ~0.408 on my CV. The result is more stable.</p>
<h2>3. FM</h2>
<p>1) forked from <a href="https://www.kaggle.com/anttip/wordbatch-ftrl-fm-lgb-lbl-0-42555?scriptVersionId=2122276">anttip's solution</a>. It is amazing, thanks a lot.
<br>2) rewrote the <code>wordbatch.WordBatch</code> function to make it run faster
<br>3) rmsle is ~0.425</p>
<h2>4 Ensemble</h2>
<p>1) CNN(finetune) * 0.6 + FM * 0.4, to get 0.399x~0.400x on my CV, and 0.401 on LB</p>
|
Mercari Price Suggestion Challenge
|
16th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Everyone,</p>
<p>Now that final validation has ended, I am sharing my solution.<br>
<a href="https://www.kaggle.com/toshik/test-60?scriptVersionId=2459048">https://www.kaggle.com/toshik/test-60?scriptVersionId=2459048</a></p>
<p>My solution consists of three parts.</p>
<p>[1] NN (Embedding-Sum-MLP) part</p>
<p>I use Chainer in this part. The words in name and item_description are replaced by word vectors.
After that, the aligned word vectors are aggregated. They are connected to a subsequent MLP and the output. The initial embedding matrix is set to Fasttext pre-trained vectors.</p>
<p>[2] NN (Embedding + Convolution) part</p>
<p>I use Tensorflow in this part. The initial embedding matrix is set to Word2Vec pre-trained vectors. Word vectors are connected to a convolution, an MLP and the output.</p>
<p>This part is forked from this starter kernel. Thanks to ololo.<br>
<a href="https://www.kaggle.com/agrigorev/tensorflow-starter-conv1d-embeddings-0-442-lb">https://www.kaggle.com/agrigorev/tensorflow-starter-conv1d-embeddings-0-442-lb</a></p>
<p>[3] N-Gram + Regressor part</p>
<p>I use LightGBM, Ridge and FastFM in this part. TfidfVectorizer, LabelBinarizer and CountVectorizer are used for feature extraction, which makes a sparse feature matrix. The prices predicted by LightGBM are used as an input for Ridge (stacking!). At last, the outputs from Ridge and FastFM are averaged.</p>
<p>This part is forked from this public kernel. Thanks to dangtao.<br>
<a href="https://www.kaggle.com/dangtao/go-go-go">https://www.kaggle.com/dangtao/go-go-go</a></p>
<p>In addition, I tried some other work (cleaning product names, making additional features ... and so on).
The final prediction is a weighted logarithmic average.</p>
<p><code>[ALL] = 0.25 * np.log1p([1]) + 0.3 * np.log1p([2]) + 0.45 * np.log1p([3])</code></p>
<p>I utilize multiprocessing and chunk-reading to save kernel resources (time and memory).</p>
<p>Thank you !</p>
|
Mercari Price Suggestion Challenge
|
18th Place with no Neural Nets - LGB and FM - 0.40604
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><3</p>
<p>Thanks! I just couldn't figure out how to make a NN go sub-0.43. And I could make a sub-0.42 FM that runs in a very short amount of time. So it was an easy choice!</p>
<p>As far as I can tell, I think this solution is the best solution to not feature any neural nets!</p>
<p>So we didn't figure out how to get to 0.39X, but given that at the beginning of the competition I didn't expect anyone (least of all me) to crack 0.42X, I'd say we did pretty well.</p>
<p>We also made a solution without any neural nets -- just a Wordbatch FM scoring ~0.419 (thanks anttip!) and a custom LGB scoring ~0.410. The weighted average gets our score of ~0.406.</p>
<p><strong>Our kernel is here: <a href="https://www.kaggle.com/peterhurford/lgb-and-fm-18th-place-0-40604">https://www.kaggle.com/peterhurford/lgb-and-fm-18th-place-0-40604</a></strong></p>
<p>When DataGeek and I merged, we were surprised to see we both had roughly the same solution (LGB and FM average with similar LGB architectures), so we tried to combine the best features from each of our approaches. The general idea was building a LGB on all the categorical data and using Ridge models as variables in the LGB to represent the text data. We did 2-fold CV to get out-of-fold predictions to use in the LGB. We also did a lot of feature engineering for both the FM and LGB to learn what the TFIDF and Wordbatch would not learn, which is things like spacing, punctuation, ratios, etc.</p>
<p>Two great boosts to our model included using a Multinomial Naive Bayes submodel in the LGB to correct for the tendency of the Ridge models to underestimate, and using <code>SelectKBest</code> and a regression to find the top-performing individual wordbatch vectors and include them in our LGB. Both of these tricks improved our position by ~0.003 each.</p>
<p>Overall, our LGB had 48000 individual wordbatch vectors found via <code>SelectKBest</code> and a regression; 37 custom feature engineering variables made and tested by hand; target encoded versions of brand, category 1, category 2, category 3, and the first word of the item description; and 4 2-fold submodels (one Ridge trained on everything, one MNB trained on everything, one Ridge trained on just the name vectors, and one Ridge trained on just the item description vectors). We tried running the FM as a submodel in the LGB also, but it ended up surprisingly more valuable as a standalone model that we would average with.</p>
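<p>The wordbatch-vector selection can be sketched roughly as follows, with <code>f_regression</code> as the univariate scoring function (the exact regression used for scoring isn't specified here, so that part is an assumption; k=48000 is from the write-up):</p>
<pre><code>from sklearn.feature_selection import SelectKBest, f_regression

def select_top_wordbatch(X_wordbatch, y_logprice, k=48000):
    selector = SelectKBest(f_regression, k=k)
    X_top = selector.fit_transform(X_wordbatch, y_logprice)   # sparse matrix restricted to the top k columns
    return X_top, selector.get_support(indices=True)          # column indices to reuse on the test set
</code></pre>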
<p>We had a big scramble at the end to get Stage 2 ready (definitely should've started preparing for that earlier!) and took a 0.005 drop from our best non-ready submission. I'm still not exactly sure what happened there and would've loved more time to push us up further, but oh well. We ended up submitting two different approaches -- one with batched predictions and one without. I was surprised the one without batched predictions made it and didn't OOM or run out of time, but both our submissions crossed the finish line. The one running in batch was only 0.001 worse.</p>
<p>There's also a lot more I would have loved to tune -- I wish I knew more about wordbatch params and FM params, as we weren't able to get much lift from hand tuning in those areas. We did notice however that we were able to get pretty good <em>speed</em> improvements from some FM tuning relative to what was in the public kernels.</p>
<p>Thanks for the awesome competition and kudos to everyone who managed to be Stage 2 ready. Big congrats to everyone who cracked to 0.39X - I can't wait to see how they did that! I wish there was less stress and uncertainty around Stage 2, but I'm glad we made it, and I overall did really like the Kernel-only format.</p>
<p>And thank you for the TargetEncoding implementation that gave us a good boost! Congrats on your place as well!</p>
|
Mercari Price Suggestion Challenge
|
1st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1) Build system template <a href="https://gist.github.com/pjankiewicz/5d1f4a142dc23f65badcf4cbe67af0d2">https://gist.github.com/pjankiewicz/5d1f4a142dc23f65badcf4cbe67af0d2</a> this should be enough to get started. I redacted the code data but the template should be usable - it was autogenerated.</p>
<p>2) Almost no framework supports sparse inputs at this moment and even fewer support sparse gradient updates:</p>
<ul>
<li>pytorch is limited and from what I understood it only supports sparse tensors for embeddings</li>
<li>cntk we haven't looked into</li>
</ul>
<p>Mxnet has, right now, the best implementation of sparse architectures for both forward and backward gradient calculations that works with the Adam optimizer (you need to have sparse updates). Tensorflow is the second best choice (no sparse updates at this moment).</p>
<p>I meant relatively: for code that is so much bigger, you gain only 0.01. We also struggled to improve at some point. Especially the 0.38 was hard to crack.</p>
<p>We updated the description of the solution</p>
<p>I was also wondering this :). 0.38 was really hard to break. We did work very well together. Even when we were clearly ahead we were still pushing, making improvements. About your result. I'm really sorry. I know how you feel.</p>
<p>Hello everyone, thanks for an amazing competition. Together with Konstantin we are very excited, probably too much to describe our solution in greater detail - tomorrow we will write more. </p>
<p>However we have something to whet your appetites. Konstantin did an amazing job and created a very small kernel based on our work over those 3 months that achieves 0.3875 on the validation ... and it is only 83 LOC - how cool is that!</p>
<p><a href="https://www.kaggle.com/lopuhin/mercari-golf-0-3875-cv-in-75-loc-1900-s">https://www.kaggle.com/lopuhin/mercari-golf-0-3875-cv-in-75-loc-1900-s</a></p>
<p>UPDATE 2</p>
<p>Link to solution <a href="https://github.com/pjankiewicz/mercari-solution">https://github.com/pjankiewicz/mercari-solution</a></p>
<p>~~~~~~~~~ UPDATE ~~~~~~~~~</p>
<h2>Full description </h2>
<h2>Pawel’s premerge solution (~0.3950)</h2>
<p>I had a pretty complicated model before teaming up with Konstantin. It consisted of 3 parts:</p>
<ol>
<li>Models per category - I created ridge models for category level 1, 2, 3, combinations of category 2 + shipping and item_condition_id, etc. They were quite fast to train and created a 0.4050 solution in about 20 minutes.</li>
<li>Residual models MLP - based on the models from 1., I trained the next models on sparse inputs (neural networks). The target was the difference between the predictions and the real prices (think of boosting but with strong models).</li>
<li>Residual model LGBM - the same idea as 2. </li>
</ol>
<p>While the concept of training models per category seems like a good one, it is really not. Training models like this is cool, but it underestimates the model one is using. You might think that a neural network can't be trained to understand the interaction between categories and descriptions, brands, etc. This is a wrong assumption. Our example shows that a single well-tuned neural network can learn what it wants.</p>
<h2>Konstantin’s premerge solution. (~0.3920)</h2>
<p>Before the merge, I had two base models:</p>
<ol>
<li>Sparse MLP implemented in Tensorflow, without any sophisticated features; compared to public kernels I tweaked only the number of features and the tokenizer, using eli5 explanations of the Ridge model. 3 models were trained sequentially.</li>
<li>CNN with conv1d, similar to what was in many kernels, implemented in Keras. The model itself was not great, but it was very different from an MLP model, so gave a nice boost.</li>
</ol>
<h2>The merge</h2>
<p>At the moment of merging Konstantin was first and I was second. Someone would say “why did we merge at all”. But we did and it was a good decision. We expected that in order to succeed it wasn’t enough to blindly merge 2 solutions. In fact we had to create 1 solution which uses ideas of both of us. Making it work under the constraints was very challenging. The time wasn’t such a big deal for us, we focused more on the memory and being able to run 4 neural networks at the same time.</p>
<p>It took us maybe 2 weeks to come up with a unified solution (probably we worked the hardest at that point). It turned out that 2 different preprocessing schemes (2 datasets) created much needed variance in the solutions and we easily improved by 0.01 which was quite a big jump.</p>
<p>In the end we used 3 datasets and 4 models in each dataset. We tried to diversify the models:</p>
<ul>
<li>different tokenization, with/without stemming</li>
<li>countvectorizer / tfidfvectorizer</li>
</ul>
<h2>Build system</h2>
<p>It was quite impossible to manage this project without a way to split our solution into modules. We independently created a build system for our code. Eventually we used the system that created a Python package with the solution. Our script was strange and looked like this (pseudocode):</p>
<pre><code>ENCODED_FILES = {some base64 encoded characters}
DECODE_FILES
INSTALL THE MERCARI PACKAGE
RUN MAIN FUNCTIONS
</code></pre>
<p>It was really important to run models in 3 sequential processes because Python has a tendency not to clean the memory after some operations (especially data preprocessing).</p>
<h2>Feature preprocessing</h2>
<p>Some tricks/no-tricks that worked:</p>
<ul>
<li>name chargrams - We don’t know why exactly but using character n-grams from name improved the score. Maybe it was because it produced relatively dense features</li>
<li>stemming - we used a standard PorterStemmer</li>
<li>numerical vectorization - we noticed that a very big source of errors was bundle items with descriptions like "10 data 5 scientists", which were vectorized to data=10, scientists=5. This vectorizer, applied in only 1 dataset, improved the ensemble by 0.001. We didn’t have much time to test this idea further. </li>
<li>text concatenation - to reduce the dimensionality of the text fields by just concatenating them together - we tested all configurations {name, item_description, category, brand}. This was a reason for the 0.37xx push. </li>
</ul>
<p>Whatever cool idea we had about additional feature engineering didn’t work. To name a few:</p>
<ul>
<li>Extraction of features like “for [Name]”. We noticed that many items were designated to a particular person. We weren’t sure what it meant exactly but it seemed important enough to create a feature. We created a list of names from nltk and searched for similar strings with AhoCorasick algorithm.
<ul><li>We noticed that there were issues with new lines in descriptions. Wherever someone used a newline in description it concatenated the words likethis.</li>
<li>Spell checking.</li></ul></li>
</ul>
<p>Quoting Pawel:
neural networks are like "ok I guess I can use your <em>feature engineering</em> here you are 0.0003 increase"</p>
<h2>Models</h2>
<p>Pawel discovered that with a properly tuned sparse MLP model, it’s better to achieve diversity in the ensemble by training the same model on different datasets, compared to different models on the same dataset. So we decided to stick to one model, which really simplified our solution. The base model is a multilayer feedforward neural network with sparse inputs: nothing fancy at all, so it’s surprising it works so well. I think one reason is that it’s more efficient compared to other approaches: the dataset is large, so the model must have big capacity and be able to capture feature interactions. But in the time that a conv1d CNN could train on 4 cores with embedding size 32, we could train 4 MLP models with hidden size 256.</p>
<p>Careful learning schedule tuning is crucial: most importantly, we doubled the batch size after each epoch, which made the model train faster with each epoch and also led to better final performance. We also lowered the learning rate in addition to the batch size increase. Everything was tuned to get the best validation score after the second epoch, and then overfit after the third epoch: this made the model much stronger in the ensemble.</p>
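<p>A minimal Keras-style sketch of that schedule (the actual models were plain TF/MXNet, so this is only an illustration; the starting batch size is a placeholder and the accompanying learning-rate decrease is omitted):</p>
<pre><code>def fit_with_doubling_batches(model, X, y, epochs=3, batch_size=512):
    for epoch in range(epochs):
        model.fit(X, y, batch_size=batch_size, epochs=1, shuffle=True, verbose=2)
        batch_size *= 2   # 512 -> 1024 -> 2048: each epoch runs faster than the previous one
</code></pre>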
<p>Compared to the model in the kernel (<a href="https://www.kaggle.com/lopuhin/mercari-golf-0-3875-cv-in-75-loc-1900-s">https://www.kaggle.com/lopuhin/mercari-golf-0-3875-cv-in-75-loc-1900-s</a>), we had some extra tweaks:</p>
<ul>
<li>We had two model variations that we used: one was trained with Huber loss (it’s better here as it’s less sensitive to outliers), and another was doing classification instead of regression. For classification, we split all prices into 64 buckets, and created a soft target for prediction: first calculate the L2 distance from the centers of the buckets, and then apply softmax to this (with a high softmax temperature, which was a hyperparameter); a small sketch follows this list. This classification model achieved a better score on its own due to less overfitting, and also added diversity.</li>
<li>For 2 out of 4 models for each dataset, we binarized input data during training and prediction by setting all non-zero values to 1. This is like getting an extra dataset with a binary CountVectorizer instead of TFIDF for free. I think it’s a great idea, kudos to Pawel! We also tried to binarize using non-zero thresholds, but this didn’t bring much improvement.</li>
<li>L2 regularization helped a little (we applied it only to the first layer), also Tensorflow models worked better with PRELU activations instead of RELU.</li>
</ul>
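<p>A rough numpy sketch of the soft-target construction for the classification variant (bucket placement, the sign convention for the distance and the temperature value are illustrative assumptions):</p>
<pre><code>import numpy as np

def soft_targets(log_prices, n_buckets=64, temperature=2.0):
    centers = np.linspace(log_prices.min(), log_prices.max(), n_buckets)
    dist = np.abs(log_prices[:, None] - centers[None, :])   # L2 distance to each bucket center
    logits = -dist / temperature                             # higher temperature -> softer targets
    logits -= logits.max(axis=1, keepdims=True)              # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)                  # one soft distribution per item
</code></pre>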
<p>Model implementation and optimization: as it’s a kernel competition with very hard constraints for this dataset size, it was important to make training maximally efficient: by making the first hidden layer bigger, it was possible to improve the score, and we also had 12 models to train in addition to preparing the datasets. In the end, we managed to train 12 models with hidden size 256 on about 200k features. Here is how we did it:</p>
<ul>
<li>TF can use multiple cores, but it’s very far from linear scaling, especially for sparse models. So it is better to use one core for each model and train 4 models in parallel: we restricted TF to one core with OMP_NUM_THREADS=1 and usual TF config variables, and then allowed TF to use multiple cores with threading using undocumented use_per_session_threads=1: this meant that we didn’t need to start multiple processes and used less memory.</li>
<li>It turns out that MXNet allows for a more efficient CPU implementation of a sparse MLP, because it has support for sparse updates (maybe TF has it too, but we were not able to use it). Pawel wrote an initial MXNet version that was about 2x faster, and then we added all features from the TF models and made it work even better. The problem was that the MXNet executor engine is not thread safe: if you try to parallelize using threading you get either 1 core used or a segfault. So we had to move to multiprocessing and also write our own data generator, because the MXNet one made a lot of copies and used too much memory. We also had a version putting data into shared memory, but it was way too close to the disk space limit, so we had to scratch that.</li>
</ul>
<p>Overall, the MXNet solution was faster and allowed us to use a smaller initial batch size without loss in speed, but it used more memory and looked less reliable. So in the end we had two submissions using the same datasets, one with MXNet (0.37758 private LB / 0.37665 public), and one with TF (0.38006 private / 0.37920 public).</p>
<p>At the end, we had 12 predictions and needed to merge them. An average worked well, but it was better to tune blending weights, so we used 1% of the dataset for validation (5% locally) and tuned the weights using a Lasso model. Lasso uses L1 regularization, so it was a bit funny when you have some idea and add more models, but Lasso says: meh, I don’t want your new models, and sets their weights to zero.</p>
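<p>A minimal sketch of that Lasso blending step, assuming a matrix of per-model validation predictions (all variable names are illustrative):</p>

<pre><code>from sklearn.linear_model import Lasso

# val_preds: (n_val, 12) matrix of per-model predictions on the held-out slice
# y_val: true targets for that slice; test_preds: (n_test, 12) matrix for the test set
blender = Lasso(alpha=1e-4, positive=True, fit_intercept=False)
blender.fit(val_preds, y_val)
weights = blender.coef_              # the L1 penalty sets weak models' weights to exactly 0
final_pred = test_preds.dot(weights)
</code></pre>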
<p>Models that did not work:</p>
<ul>
<li>MoE (mixture of experts): this is a really cool paper <a href="https://arxiv.org/abs/1701.06538">https://arxiv.org/abs/1701.06538</a> which describes how to train a model which has more capacity while using the same compute budget: exactly what we want here! But at the end it turned out that TF lacks sparse support for some operations required to make it work.</li>
<li>We tried to add some exotic model blends for example: merging FM and MLP in the same architecture, adding a skip layer with linear regression to the output of MLP. It all eventually converged to a simple MLP. </li>
</ul>
<p>We reached a point where the only way to improve was to train bigger MLP networks, but due to the kernel constraints we couldn’t. This was really evident because models improved only when we added more data/features to the ensemble.</p>
<h2>Meta Statistics</h2>
<ul>
<li>Slack messages: 4500</li>
<li>Git commits: 152 (317 across all branches)</li>
<li>PR merged: 31</li>
<li>Python LOC: 2015</li>
</ul>
<p>Complete solution is basically the kernel Konstantin published + some tweaks. It is probably 20x the code and the improvement is only 0.01.</p>
<p>We've updated the post with a link to the solution </p>
<p>We've added a compiled pdf into the repo: <a href="https://github.com/pjankiewicz/mercari-solution/raw/master/presentation/build/yandex.pdf">https://github.com/pjankiewicz/mercari-solution/raw/master/presentation/build/yandex.pdf</a></p>
|
Mercari Price Suggestion Challenge
|
22nd Around the world solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>@Peter Hurford, no coordination only Joule effect !</p>
<p>Thanks @Mark Peng for your kind words. Your ideas and positive energy gave us the strength to go through stage 2 ! </p>
<p>@Zhaowen Xu as far as I know Gradient Boosters focus is on non-linear relations between features but are not good at finding linear ones. So the idea was to use pure linear predictors and to send that to LightGBM to get better prediction estimates. You can see the code here : <a href="https://www.kaggle.com/ogrellier/multi-process-queues-and-workers-test">https://www.kaggle.com/ogrellier/multi-process-queues-and-workers-test</a></p>
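<p>A rough sketch of that idea (not the team's exact code, which is in the linked kernel): out-of-fold predictions from a linear model become an extra column for LightGBM. X, y and numeric_feats are placeholder names.</p>

<pre><code>import numpy as np
import lightgbm as lgb
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

oof = np.zeros(X.shape[0])                               # X: sparse hashed text matrix, y: log prices
for trn_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(oof):
    ridge = Ridge(alpha=20.0).fit(X[trn_idx], y[trn_idx])
    oof[val_idx] = ridge.predict(X[val_idx])

# the OOF column carries the linear signal; LightGBM models what is left
X_lgb = np.column_stack([numeric_feats, oof])            # numeric_feats: dense numeric features
booster = lgb.train({'objective': 'regression', 'learning_rate': 0.1},
                    lgb.Dataset(X_lgb, label=y), num_boost_round=500)
</code></pre>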
<p>First of all I would like to congratulate the winners who were impressive all along in terms of sharing and scoring in such a constrained environment.</p>
<p>I would like to warmly thank @Kueipo who made this whole experience possible. He is very talented, with lots of energy. His knowledge and determination have been so precious to the team. He deserves the Kaggle master grade he just obtain with his 2nd silver medal.</p>
<p>Our solution is based on the following ;</p>
<ol>
<li><p>Hashing the text data and not using any vocabulary-based vectorizer.
Dimension is 2**24. The hash matrix is the sum of (see the sketch after this list):</p>

<ul><li>name ngram 1, 2</li>
<li>mix of category + name ngram 1, 2 <= this is huge </li>
<li>description (prefixed to separate it from words in name)</li>
<li>brand (prefixed to separate it from words in name)</li></ul></li>
<li><p>Compute 5-fold OOF predictions from Ridge and LinearSVR</p></li>
<li><p>Train 2 LGBMs with different depths and regularization,
using the name hashing vectorizer, a few numerical features and the above-mentioned OOF predictions</p></li>
</ol>
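<p>A sketch of the hashing scheme in step 1 above. The column names match the Mercari dataset; the token-prefix helper is one simple way to keep description and brand words separate from name words, and the preprocessing is simplified compared to the real kernel:</p>

<pre><code>from sklearn.feature_extraction.text import HashingVectorizer

D = 2 ** 24

def prefixed(prefix):
    # prefix every token so that e.g. brand words cannot collide with name words
    return lambda text: ' '.join(prefix + w for w in text.lower().split())

name_vec  = HashingVectorizer(n_features=D, ngram_range=(1, 2), alternate_sign=False)
cat_vec   = HashingVectorizer(n_features=D, ngram_range=(1, 2), alternate_sign=False)
desc_vec  = HashingVectorizer(n_features=D, preprocessor=prefixed('d_'), alternate_sign=False)
brand_vec = HashingVectorizer(n_features=D, preprocessor=prefixed('b_'), alternate_sign=False)

X = (name_vec.transform(df['name'].fillna(''))
     + cat_vec.transform(df['category_name'].fillna('') + ' ' + df['name'].fillna(''))
     + desc_vec.transform(df['item_description'].fillna(''))
     + brand_vec.transform(df['brand_name'].fillna('')))
</code></pre>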
<p>The final prediction is a weighted merge of the LightGBM outputs.</p>
<p>To make it possible within a 1-hour time frame we extensively used the multiprocessing library with numerous workers, managers and queues. The threading package was used to fit Ridge and LinearSVR in parallel.</p>
<p>Even though team members were spread from UTC-6 to UTC+8, I think we managed to get everybody involved and most of us brought something to the mix. In particular I had great pleasure working with Yifan, Mark and Rand. </p>
<p>I let you guys add your own feeling on the competition and team work.</p>
|
Mercari Price Suggestion Challenge
|
2nd Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This line is not needed, as the item_condition_id feature has no NULL values in stage 1. But since we don't know whether there will be null values in stage 2, we added this line. The value 2 was chosen because it's close to the median, if I remember well.</p>
<p>We found that model scores are not really relevant in this competition, but rather how well models blend together.
Our NN model scores probably range from .424 to .43 (I didn't have time to test them individually on the LB).
You can, for example, take our ridge model which scores .418 on the LB, change the alpha parameter to something like 5 instead of 20, and you will get an LB score close to .428 for that model. This new score doesn't affect our final ensemble score.
We have an LGB model that scores less than .41 but didn't work well in the ensemble (no improvement), so we removed it.</p>
<p>So the way we approached the problem is by optimizing the ensemble score rather than individual model scores</p>
<p>So here's our second place solution (the kernel we are publishing is almost identical to the winning kernel).
The kernel scored 0.38888 on public and 0.38948 on private and ran in 2800s</p>
<p><a href="https://www.kaggle.com/mchahhou/second-place-solution">https://www.kaggle.com/mchahhou/second-place-solution</a></p>
<p>As a preprocessing step, we kept a maximum of 60 words for all names and descriptions and applied the same cleaning process to all texts.
We label-encoded all brands and categories and concatenated the encoded values with the name feature</p>
<p>We made 4 models :
1) Ridge model : trained on 1-grams and custom bigrams.
The 1-grams use both name and description. Our custom bigrams work as follows : we concatenate name with the first 5 words from description, then apply np.unique() on the list of words to sort them and remove duplicates, then create all 2-way possible combinations of words.
The ridge model is able to score .418 on public LB</p>
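<p>A tiny illustration of that custom-bigram construction (function and variable names are mine, not from the kernel):</p>

<pre><code>import numpy as np
from itertools import combinations

def custom_bigrams(name, description, n_desc_words=5):
    words = (name + ' ' + ' '.join(description.split()[:n_desc_words])).split()
    words = np.unique(words)                       # sorts and removes duplicates
    return [' '.join(pair) for pair in combinations(words, 2)]

print(custom_bigrams('iphone 7 case', 'brand new red case for iphone'))
# ['7 brand', '7 case', '7 for', '7 iphone', '7 new', '7 red', 'brand case', ...]
</code></pre>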
<p>2) Sparse NN model trained on a CountVectorizer with ngram_range=(1,2). The NN is fit with sparse data from name and description features</p>
<p>3) a fastText NN model with a shared_embedding layer for name and description features</p>
<p>4) another Sparse NN model fit with character Ngrams using name and description features.</p>
<p>We wasted half our time building an LGB model that didn't help, even though it was our best single model. We found that using different models with the same data representation is not going to work. We then chose to use different NNs with different inputs</p>
<p>To make our NNs fast, we double the batch size after each epoch. This is quite similar to reducing the learning rate after each epoch, with the advantage of a speed gain</p>
<p>Kernel is working now</p>
|
Mercari Price Suggestion Challenge
|
3rd solution. 3300s to 0.3905 at public
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I don't know why the kernel display seems wrong sometimes. My browser is fine now.
You can fork and edit it to see the code.
About tuning the CNN: the main ideas are using a suitable kernel_size for the CNN, preprocessing the text to adjust the sequence length, and reusing the embedding layer for item_name and item_description.</p>
<p>haha. ID geili.</p>
<p>Hi, everyone. it is really an exciting competition.</p>
<p>Here is my kernel:<a href="https://www.kaggle.com/whitebird/mercari-price-3rd-0-3905-cv-at-pb-in-3300-s">https://www.kaggle.com/whitebird/mercari-price-3rd-0-3905-cv-at-pb-in-3300-s</a></p>
<p>Description: My solution has two models: a CNN and a FTRL-FM based on @anttip's share: <a href="https://www.kaggle.com/anttip/wordbatch-ftrl-fm-lgb-lbl-0-42555?scriptVersionId=2164491">https://www.kaggle.com/anttip/wordbatch-ftrl-fm-lgb-lbl-0-42555?scriptVersionId=2164491</a>. I did a lot of work on text normalization to make my CNN work, which cost much time. It also improved the FTRL-FM model. The CNN is very slow on CPU, so you can see I did some tricks with the network structure and the training batch size. I got my CNN to 0.400 and the FTRL-FM to 0.415, while I'm not a pro at traditional models. Maybe the feature ETL could be done better; however, I am just a software engineer and good at optimizing algorithms and NN models. The code is a bit long; you can post anything here and I will try my best to answer it.</p>
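<p>A minimal Keras sketch of the shared-embedding idea mentioned above (reusing one embedding layer for item_name and item_description); the sizes, filter counts and the rest of the architecture are illustrative, not the author's exact network:</p>

<pre><code>from tensorflow.keras import layers, Model

vocab_size, emb_dim, name_len, desc_len = 100000, 32, 10, 60    # illustrative sizes

shared_emb = layers.Embedding(vocab_size, emb_dim)
name_in = layers.Input(shape=(name_len,), dtype='int32')
desc_in = layers.Input(shape=(desc_len,), dtype='int32')

name_feat = layers.GlobalAveragePooling1D()(layers.Conv1D(64, 3, activation='relu')(shared_emb(name_in)))
desc_feat = layers.GlobalAveragePooling1D()(layers.Conv1D(64, 3, activation='relu')(shared_emb(desc_in)))

x = layers.Dense(128, activation='relu')(layers.concatenate([name_feat, desc_feat]))
out = layers.Dense(1)(x)                                        # predicts log(price + 1)
model = Model([name_in, desc_in], out)
model.compile(optimizer='adam', loss='mse')
</code></pre>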
<p>Hope you can find sth useful. Enjoy it. lol</p>
|
Mercari Price Suggestion Challenge
|
6th place solution!!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You could do that as well! But that would increase feature dimension space and make the code run slower(might even lose some generalisation power compared to doing embeddings for them)</p>
<p>Thanks!</p>
<p>all these little things :)</p>
<p>First of all! Thanks to everyone who participated here for making this a great learning experience. Special mentions to @Konstantin, @Pawel, @RDizzl @ololo @Muhammad @ThyKhueLy @Serigne and ofcourse my teammates @Ajay @huiqin</p>
<p>A big thanks to all my teammates for putting in lot of effort which helped us finish 6th!</p>
<p>Thanks to organizers and admins, I think they did a wonderful in handling their first kernel competition! </p>
<h2>Solution Summary</h2>
<p>We had three models in our ensemble.</p>
<ol>
<li><p><strong>NN with extra features</strong> (Our main model which gets 0.400x on plb in 2500s):
(<a href="https://www.kaggle.com/tezdhar/nnet-v1001-attention-dbbd10">https://www.kaggle.com/tezdhar/nnet-v1001-attention-dbbd10</a>)</p>
<ul><li><p>text processing: We tried some manual text normalization, but it yielded
very little improvement; nevertheless we decided to keep it.</p></li>
<li><p>Name and description fields were padded, followed by an embedding layer and average pooling (we tried GRU with attention on the name field,
and that yielded little improvement)</p></li>
<li><p>2-gram features - Initially we started with a CNN (filter sizes 2 and 3) to capture 2- and 3-grams, but that resulted in a longer running
time, so I decided to encode the name and description fields into 2-gram text and directly use an embedding layer followed by average pooling.
The preprocessing time and running time of the latter were lower than for the CNN
model and resulted in an almost similar score.</p></li>
<li><p>I explicitly added interaction features, e.g. category*condition, category*shipping etc. (see the sketch after this list). This improved the score by
0.002-0.003 (maybe I could have added a dot-product layer for capturing interaction effects, but I got that idea too late to implement)</p></li>
<li><p>We just concatenate all layers and pass through two sets of batch_norm and PRelu</p></li>
<li><p>Used different learning rate for each epoch with Adam optimizer</p></li>
<li><p>Optimized hyperparameters using gaussian processes in scikit-optimize, locally on a validation set (5 splits with 5% data
concatenated)</p></li>
<li><p>The next major thing was to use all 4 cores to run 4 NN's and bag them, we used multiprocessing to do this (we took the risk)</p></li>
<li><p>We used slightly modified targets in 2 NN's for added diversity, a standardized target and target with category means subtracted from them.</p></li></ul></li>
<li><p><strong>NN with conv1d</strong>:</p>
<ul><li>We started with <a href="https://www.kaggle.com/agrigorev/tensorflow-starter-conv1d-embeddings-0-442-lb">public kernel</a> and tuned the architecture and hyperparameters using our learnings from above model</li></ul></li>
<li><p><strong>Fasttext NN</strong>:
Same as the 1st but without any text normalization and extra features. </p></li>
</ol>
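<p>A rough sketch of the explicit interaction features mentioned in the first model: two categorical columns are crossed into one new categorical column and then treated like any other embedded (or one-hot) input. The column names follow the Mercari dataset; the helper function is mine:</p>

<pre><code>import pandas as pd

def cross(df, a, b):
    # concatenate the two categorical values and re-encode the result as integer codes
    return (df[a].astype(str) + '_x_' + df[b].astype(str)).astype('category').cat.codes

df['cat_x_cond'] = cross(df, 'category_name', 'item_condition_id')
df['cat_x_ship'] = cross(df, 'category_name', 'shipping')
# each crossed column then gets its own embedding (or one-hot encoding) in the network
</code></pre>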
<p>Both 2 and 3 ran very fast (combined under 20 minutes and scores 0.408x on plb). It can be found here:
<a href="https://www.kaggle.com/tezdhar/tf-conv1d-tf-fasttext">https://www.kaggle.com/tezdhar/tf-conv1d-tf-fasttext</a></p>
<p>Final solution was a simple weighted average of above 3.</p>
<p>I also started with Keras based FM (based on script from qianqian) but was not able to give it time for it to be included in ensemble. It can be found here:
<a href="https://www.kaggle.com/tezdhar/kerasfm/code">https://www.kaggle.com/tezdhar/kerasfm/code</a>
In current state, it scores 0.421 on plb</p>
<p>Thanks for reading! :)</p>
<p>Oops! Thanks for reminding! :D</p>
|
Mercari Price Suggestion Challenge
|
9th place solution 0.3975
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>9th place solution is available now</p>
<p>Code Link: <a href="https://www.kaggle.com/leeyun/ensemble-model?scriptVersionId=2460425">https://www.kaggle.com/leeyun/ensemble-model?scriptVersionId=2460425</a></p>
<p>My final submission is the 106th version, It's an ensemble of Conv1d, FastText and FM+FTRL. </p>
<p><strong>1. Offline Validation and Data Preprocess</strong></p>
<p>Many thanks to <strong>chenglong</strong> 's share: <a href="https://github.com/ChenglongChen/Kaggle_HomeDepot">https://github.com/ChenglongChen/Kaggle_HomeDepot</a>, I really learn a lot from this tutorial, such as cross validation, feature engineer, hyperopt, stacking and so on.</p>
<p>My offline code is based on this tutorial, I use 5 fold cv. I use hyperopt to find good parameters.</p>
<p><strong>2. NN models</strong></p>
<p>For neural network, I've tried some models from public kernels, many thanks to the authors: <strong>yanglu</strong> , <strong>ololo</strong> and <strong>nzw</strong> .</p>
<p>GRU: <a href="https://www.kaggle.com/yyll008/gru-25-12-12-with-keras-512-64-relu-sgdr-lb0-432?scriptVersionId=1956235/code">https://www.kaggle.com/yyll008/gru-25-12-12-with-keras-512-64-relu-sgdr-lb0-432?scriptVersionId=1956235/code</a></p>
<p>Conv1d: <a href="https://www.kaggle.com/agrigorev/tensorflow-starter-conv1d-embeddings-0-442-lb">https://www.kaggle.com/agrigorev/tensorflow-starter-conv1d-embeddings-0-442-lb</a></p>
<p>Fasttext: <a href="https://www.kaggle.com/nzw0301/simple-keras-fasttext-val-loss-0-31">https://www.kaggle.com/nzw0301/simple-keras-fasttext-val-loss-0-31</a> </p>
<p>I put a lot of effort into the GRU, but its accuracy is lower and its runtime higher compared to conv1d and fasttext, so I abandoned it. The conv1d is a very simple revised version of ololo's; it achieves 0.413 on the public LB. The fasttext was inspired by anttip's topic: <a href="https://www.kaggle.com/c/mercari-price-suggestion-challenge/discussion/49430">https://www.kaggle.com/c/mercari-price-suggestion-challenge/discussion/49430</a>. I just removed the conv1d layer from my conv1d NN; it achieves 0.414 alone and is very fast. The ensemble of the conv1d NN and fasttext achieves 0.405.</p>
<p><strong>3. Non NN models</strong></p>
<p>For non-NN models, I've tried many linear models, such as ridge and lasso, many non-linear models, such as XGBoost and LightGBM, and large-scale sparse models FTRL and FM+FTRL. Many thanks to the public kernels' authors: <strong>ynqa</strong> , <strong>Bojan Tunguz</strong> , <strong>anttip</strong> . Especially thanks to <strong>anttip</strong> 's work, which is a really useful tool.</p>
<p>Ridge: <a href="https://www.kaggle.com/object/more-effective-ridge-script/code">https://www.kaggle.com/object/more-effective-ridge-script/code</a></p>
<p>LGBM: <a href="https://www.kaggle.com/tunguz/more-effective-ridge-lgbm-script-lb-0-44823?scriptVersionId=1855115/versions">https://www.kaggle.com/tunguz/more-effective-ridge-lgbm-script-lb-0-44823?scriptVersionId=1855115/versions</a></p>
<p>FM+FTRL: <a href="https://www.kaggle.com/c/mercari-price-suggestion-challenge/discussion/47295">https://www.kaggle.com/c/mercari-price-suggestion-challenge/discussion/47295</a></p>
<p><strong>Konstantin</strong> gave a kernel <a href="https://www.kaggle.com/lopuhin/eli5-for-mercari">https://www.kaggle.com/lopuhin/eli5-for-mercari</a> demonstrating the usage of ELI5, but I'm too stupid to improve my models with this useful tool :P. </p>
<p>I couldn't improve the linear models to 0.425. The LGBM's performance was very bad and time-consuming, and XGB's performance was even more terrible. I couldn't improve FTRL to 0.43, so I abandoned it. </p>
<p>The FM+FTRL is a very powerful tool; it achieves 0.407 alone. The only risk is running out of memory, so it needs careful monitoring of memory usage offline with simulated data. </p>
<p><strong>4. Conclusion</strong></p>
<p>I really learned a lot from this competition: the sharing spirit, the effective usage of memory and time, the skill to build neural network models, the concepts of FM and FTRL, cross validation, feature engineering, and most importantly, to keep learning and improving. </p>
<p>Thanks to my teammates; they gave me much help and many valuable suggestions, especially when I felt frustrated, depressed, and lost my way. Thanks to all of you who competed together over the last two months; it's a great honor for me to fight alongside you! </p>
<p>It seems it will take a long time to understand what you guys meant and did; I'm confused by many of the released kernels and discussions after the competition completed. There are so many concepts and ideas I didn't know before</p>
<p>I can understand your feeling. I was very nervous while waiting for the stage 2 score, although I had done many simulations for stage 2. Although this kernel competition is exciting, it still has room to improve, especially in how to minimize the loss for competitors who work for months but gain nothing in the end. And we need to be more cautious with this type of competition. I believe you can achieve a high score in the next competition!</p>
|
Mercari Price Suggestion Challenge
|
Bullet-proof elegant solution variant
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That can be pickled and deployed to production making live predictions
<a href="https://www.kaggle.com/jszymczak/final-submission-no1?scriptVersionId=2512952">https://www.kaggle.com/jszymczak/final-submission-no1?scriptVersionId=2512952</a></p>
<p>It achieves 126th place on private leaderboard :)</p>
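<p>The deployable idea in a nutshell, assuming a fitted sklearn Pipeline object (all names here are illustrative):</p>

<pre><code>import joblib

joblib.dump(pipeline, 'mercari_price_model.pkl')      # pipeline: fitted sklearn Pipeline
model = joblib.load('mercari_price_model.pkl')        # reload anywhere, e.g. behind an API
predicted_log_price = model.predict(new_items_df)     # new_items_df: raw rows with the training columns
</code></pre>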
|
Mercari Price Suggestion Challenge
|
Detailed Post and Kernel about a Deep NLP solution that is worth 35th place
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercari Price Suggestion Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi guys,</p>
<p>I am sharing with you a detailed <a href="https://medium.com/@gabrielpm_cit/how-i-lost-a-silver-medal-in-kaggles-mercari-price-suggestion-challenge-using-cnns-and-tensorflow-4013660fcded">blog post</a> and the <a href="https://www.kaggle.com/gspmoreira/cnn-glove-single-model-private-lb-0-41117-35th">Kernel</a> of my solution for Mercari Price Suggestion - <strong>a single Deep NLP model (no ensembling)</strong>, which scores 0.41117 and places the <strong>35th position</strong> in Private Leaderboard (Late Submission).
If the material is useful for you, please upvote the Kernel and this discussion topic.</p>
<p>Ps. My final score for this competition was not computed because I missed Phase 1 deadline for submission (due to the inconsistent deadlines).</p>
|
Mercedes-Benz Greener Manufacturing
|
10th solution, average 4 predictions
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercedes-Benz Greener Manufacturing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Try different weights on local cv.
The first principle is TRUST LOCAL CV.</p>
<p>1. Trust local CV.
I use shuffled 5-fold cross-validation.</p>
<p>2. Grid search.
lgb parameters: lgb_params = {'learning_rate': 0.05, 'seed': 0, 'subsample': 1, 'colsample_bytree': 0.8, 'num_leaves': 7, 'n_estimators': 90}
xgb parameters: xgb_params = {'learning_rate': 0.05, 'seed': 0, 'subsample': 1, 'colsample_bytree': 0.7, 'max_depth': 3, 'n_estimators': 130}
random forest: rf_params = {'max_depth': 5, 'n_estimators': 100, 'max_features': 0.7}</p>
<p>These parameters are different from those in most kernels. </p>
<p>3. Feature engineering: test different feature sets.
(1) Test add_feature from <a href="https://www.kaggle.com/rupeshwadibhasme/lb-0-578-upgraded-from-stack-lgblasgbdt-kernal">https://www.kaggle.com/rupeshwadibhasme/lb-0-578-upgraded-from-stack-lgblasgbdt-kernal</a>
(2) Test removing outliers using <a href="https://www.kaggle.com/tilii7/you-want-outliers-we-got-them-outliers">https://www.kaggle.com/tilii7/you-want-outliers-we-got-them-outliers</a>
(3) Use the [10, 20, 30, 50, 80, ...] most important columns
(4) one_hot on ['X0',...,'X8']</p>
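<p>A small sketch of point (3), keeping only the k most important columns according to a fitted model (k and the model are placeholders):</p>

<pre><code>import pandas as pd

imp = pd.Series(model.feature_importances_, index=X_train.columns)   # model: fitted lgb/xgb/rf regressor
top_20 = imp.sort_values(ascending=False).head(20).index
X_train_top = X_train[top_20]
X_test_top = X_test[top_20]
</code></pre>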
<p>I tried lgb, xgb, random forest and Huber regression on my local cv. The 3 best predictions are:
(1) lgb with the 20 most important features, cv: 0.601026, keep 'ID', no add_feature, no outlier removal, no one_hot
(2) xgb with the 120 most important features, cv: 0.599090, drop 'ID', no add_feature, no outlier removal, no one_hot
(3) xgb with the 20 most important features dropped, cv: 0.593299, drop 'ID', no add_feature, no outlier removal, no one_hot</p>
<p>4. Average.
pred1 = (lgb_imp20 * 0.7 + xgb_imp120 * 0.3) * 0.9 + xgb_drop20 * 0.1, cv: 0.602019
pred2 comes from a public kernel <a href="https://www.kaggle.com/hakeem/stacked-then-averaged-models-0-5697?scriptVersionId=1236940">https://www.kaggle.com/hakeem/stacked-then-averaged-models-0-5697?scriptVersionId=1236940</a>
pred2's private LB is 0.55184
pred1's private LB is 0.55242 (180th)</p>
<p>Final submission:
pred1 * 0.7 + baseline * 0.3 : private LB is 0.55396 (10th)</p>
|
Mercedes-Benz Greener Manufacturing
|
1st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercedes-Benz Greener Manufacturing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sherry, thank you for your interest!
1. Yes, I use features or variables, interchangeably.
2. The gain is, loosely speaking, the relative contribution of a feature to the model, taking its contribution across all the trees in the model. A higher gain compared to another variable indicates it is more important for generating a prediction. The cut value is an arbitrary value I used to discard features. You could think of it like an F-value in stepwise regression. </p>
<p>Thank you very much, Alan!</p>
<p>First of all, congrats to all Kagglers who participated and enjoyed this competition; enjoying and learning during the trip is what really matters, not just the landing which was frustrating for some. Thanks to many of you who participated in discussions and helped in my understanding of the problem; I am proud of being part of this community.</p>
<p>Now, to my solution.</p>
<p><strong>The Competition Problem</strong></p>
<p>Before modelling, I try to understand the process behind the data and figure out what could affect the response. For this task I had very good aids: listening to others in the forum and viewing wonderful EDAs in kernels. I had a good time with this learning stage. </p>
<p>I set up some <strong>working hypotheses</strong>:</p>
<ol>
<li><p>Lack of synchronization between manufacturing subprocesses leads to time delays. </p></li>
<li><p>Mercedes process must be a highly controlled one, only focalized perturbances. </p></li>
<li><p>Test time changes attributable to a small set of variables and/or short parts of few subprocesses.</p></li>
</ol>
<p>I consider that in any modelling problem, feature extraction and selection are the primary issues one has to consider. What kind of new features would be interesting to explore?</p>
<p><strong>The Feature Exploration Problem</strong></p>
<p>Data features are: </p>
<ol>
<li><p>Anonymized (no hints with names)</p></li>
<li><p>Only binary and categorical (prone to overfitting)</p></li>
<li><p>Just 2-way interactions of binary variables: 67528</p></li>
<li><p>Small data set relative to number of features</p></li>
<li><p>High quality data: apparently no need to deal with contaminated data</p></li>
</ol>
<p>I used some hints to explore for more features:</p>
<p><strong>Parameters in kernels</strong>
The low value of max_depth in XGBoost suggested to me that perhaps a few 2- or 3-way interactions and a small set of variables would be relevant, in the sense that test time changes seemed to be attributable to a small set of variables and/or parts of a few subprocesses. </p>
<p><strong>Sequencing of feature names: 9 sequences, 8 categorical features</strong>
I imagined that X0 to X8, the categorical features, were some sort of summary of parts of the manufacturing process. The holes in the sequencing of the binary features led me to define nine groups of binary variables, consistent with the eight categorical ones. </p>
<p><strong>Interactions</strong>
I found that discussions and kernels tended to confirm my view of the process, but notably, besides one kernel dealing specifically with interactions, I found no reference to any 2 or n-way interactions. The kernel on this subject did not deal with the features I used later. </p>
<p><strong>Initial Features in Training Method</strong></p>
<p>Search for <strong>interesting interactions looking at patterns</strong> in preliminary XGBoost runs. Pairs of variables (X314, X315), (X118, X119) and (X47, X48) appeared always “near” in the variable importance reports. So, these interactions were included and, additionally (X118, X314, X315). This last interaction did not conform to my understanding of the process, but I could be totally wrong.</p>
<p>As a <strong>tool to try to identify subprocesses</strong> and catch some information of them related to test time, <strong>cumulative sums of binary variables within</strong> each of the nine groups were formed. So in a first step the number of variables increased substantially. This was a risky decision, since adding correlated features to data can also add too much noise. Decision tree algorithms are relatively robust to these features.</p>
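<p>One plausible reading of that construction, as a pandas sketch (the group boundaries below are made up; the write-up derives nine groups from the gaps in the X10..X385 numbering):</p>

<pre><code>import pandas as pd

groups = {'g1': ['X10', 'X11', 'X12', 'X13'],
          'g2': ['X54', 'X55', 'X56']}                  # ... up to nine groups

for name, cols in groups.items():
    csum = df[cols].cumsum(axis=1)                      # running sum of the binary flags, left to right
    csum.columns = ['{}_csum_{}'.format(name, c) for c in cols]
    df = pd.concat([df, csum], axis=1)
</code></pre>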
<p>To <strong>include or not ID</strong> was a question I tried to answer in preliminary runs. Discussions in the forum suggested that including ID was totally consistent with my thoughts on the Mercedes process. I detected modest improvements in preliminary runs and so it was included.</p>
<p>After some playing with the data, I decided to <strong>recode eleven of the levels of X0</strong> (trigger of the process?) into new ones and added the recoded variable named recX0 </p>
<p>One-hot encoding (dummies) of categorical features was applied, that is, the original X0 through X8 and the ones created for interaction variables. One-hot encoding variables were kept if sum of ones exceeded 50. Since this value looks reasonable, but arbitrary, it is subject to tests.</p>
<p>It is known that decision tree algorithms can handle categorical features transformed to numerical, something that makes no sense in other models. These features were also included, so extra weight is given to categorical features.</p>
<p><strong>Training Method</strong></p>
<p>Two models were trained with XGBoost, named hereafter Model A and Model B. Both are built in a sequence of feature selection steps, like backward elimination. Model B uses a stacked predictor formed in a step of Model A. Any decision point in this sequence is preceded by a 30-fold cross validation (CV) to find the best rounds. The steps are simple:</p>
<ol>
<li><p>Preliminary model with all features included, Model A, 900 features and Model B, 900+1, the stacked predictor.</p></li>
<li><p>Feature selection. Keep the variables used by XGBoost as seen on variable importance reports (229 in Model A, 208 in Model B).</p></li>
<li><p>Feature selection. Include features with gain above a cut value in the models; <strong>0.1%</strong> of total gain was the cut value used, keeping 53 features in Model A and 47 in Model B (see the sketch after this list).</p></li>
</ol>
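<p>The author worked in R; the same gain-threshold selection step looks roughly like this in Python/xgboost (the 0.1% cut is from the write-up, everything else is illustrative):</p>

<pre><code>import xgboost as xgb

dtrain = xgb.DMatrix(X, label=y)                         # X, y: the prepared features and target
booster = xgb.train(params, dtrain, num_boost_round=best_rounds)

gain = booster.get_score(importance_type='total_gain')   # features never used simply do not appear
total = sum(gain.values())
kept = [f for f, g in gain.items() if g / total > 0.001] # keep features above 0.1% of total gain
</code></pre>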
<p>Both models use XGBoost and a 30-fold CV throughout the model building process. The rationale for a 30-fold validation was to use the folds for a 30-fold stacking in Model B. With 30 folds, the CV-mean results in different runs were too optimistic (0.59xx), in contrast with a 5-fold CV (0.57xx). On the other hand, CV standard deviations increased substantially, going from (0.04xx) in 5-fold CV runs to (0.11xx) in 30-fold CV runs. </p>
<p>Fortunately, the last three of the final five submissions would have won. The first two are previous versions of Model A with private scores 0.55292 and 0.55351; the third submission comes from Model A (0.55530). The fourth is from Model B (0.55536) and the last, a simple average of A and B (0.55551).</p>
<p>In the models proposed <strong>Interactions played an important role</strong> </p>
<pre><code>a) By far, pair (X314, X315), jointly and pair levels
b) 3-way interaction (X118, X314, X315)
c) X314
d) (X118,X314,X315), levels (1,1,0)
e) Individual features: X279, X232, X261, X29
f) Two levels of recoded X0 and recoded X0
g) Sum of X122 to X128
h) X127
</code></pre>
<p><strong>Note: Fifteen features account for 85% of gain</strong></p>
<p><strong>Tools and Process Time</strong> </p>
<p>Tools: R Version 3.4.0, Windows version and required packages</p>
<p>Time to train both models in a desktop I7-3770 @3.40 GHz, 8 cores, 16 MB RAM: ~8 minutes</p>
<p>Time to process new data: Load packages and prepare model, ~6 seconds; predictions, for 4209 observations, ~2 seconds </p>
<p><strong>Conclusions</strong></p>
<ol>
<li><p>Try including more two or three-way interactions </p></li>
<li><p>Model B does not really add value since both models are at the same <strong>R2 level of 0.555xx</strong>. Discard Model B; better ways can be found to improve results. </p></li>
<li><p>Try reducing features from the ones in Model A. Single test with 24 features: 0.5553</p></li>
<li><p>Some sensitivity checks on parameters for XGBoost might lead to slight gains.</p></li>
<li><p>The variables used are quite different from others in discussions, so there is a good chance to do some blending to capture synergies with one or both of the 2nd and 3rd places solutions.</p></li>
<li><p>Using feature ID in the model may result in poor results with new data. Time must be relevant, but needs special treatment. Single test without ID: Public LB, 0.55799, Private LB: 0.55421</p></li>
</ol>
<p>I hope this post answers most of your questions, and thanks for taking the time to read it. Please upvote it if you liked it.</p>
<p>Oscar Takeshita: Here is a summary of ModelA:</p>
<p>Min. 76.82, 1st Qu. 94.16, Median 95.21, Mean 100.99, 3rd Qu. 112.01, Max. 132.19 </p>
<p>Thank you! Good point! No, I had no time for this task but looked surely promising as a next step. </p>
<p>I am not sure, Oscar, but I don't think it adds any value to publish them. Thanks again.</p>
<p>Thank you. Orhan!</p>
<p>Thanks, Roberto!</p>
<p>Hi Keuipo, thanks!</p>
<p>I use the CV-folds for training and tuning parameters checking for internal consistency. The only parameter left in the steps I mentioned before was the best ntreelimit. For the stacking predictor, each fold serves to predict the holdout sample. I don't choose the best fold for anything.</p>
<p>Hi Oscar, sorry for not answering before...</p>
<p>If you take a fold from your CV folds and look at the residuals of the holdout sample, you will find they have a highly skewed distribution, indicating that models do not predict high test times very well, not just the extreme ones. A model that can better catch high test times may outperform the ones in the contest. So if I understood your question well, yes, the job is in predicting the general behaviour better. I predicted ID624 with 132.19, but a model that predicts this particular observation with, say, 127 and a small bulk of the data better, would do a better job.</p>
<p>Thank you, Kanishk!</p>
|
Mercedes-Benz Greener Manufacturing
|
2nd Place Solution (And How to Score .55571 on Private LB)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercedes-Benz Greener Manufacturing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I think that consistently lower scores don't suggest the data is different. If you split any data set in half, randomly perhaps, one half will always score higher than the other. What we look for is more of a correlation between higher CV scores and higher LB scores. I do think that with smaller datasets, you cannot take the CV so literally that very incremental improvements will translate. The smaller the dataset, the more likely those small improvements to CV are attributable to randomness. It's sort of like the concept of statistical significance we're all familiar with from more traditional statistical modelling. So the likelihood that model A is better than model B is a function of the difference in their CV scores and the number of samples. Given the sample size, I assumed CV improvements less than .001 were 'not significant.'</p>
<p>Thank you @Cro-Magnon!</p>
<p>Hi @den3b, </p>
<p>I certainly tried to make a fair CV comparison. My CV for his script was 0.5676 versus 0.5735 on my sub. Even if I were to take some kind of weighted average of the CVs and the public score, which I'm sure many people do, my sub would have still scored ~.003 higher. Normally I would include my highest scoring sub on the public LB as one of my two subs, but given the widespread use of probed LB values, there was no real indication of how competitive a sub was on the public LB. In other words, the public LB was effectively useless as an indicator of the current standings. </p>
<p>The big question for me was what about his script is creating the over-performance? After experimenting with it I was fairly certain the secret sauce was in his model selection, including the actual parameters. So while I could get slightly higher CVs with minor adjustments to the model parameters, those improvements were not material enough for me to divert from his model selection. </p>
<p>Thank you Oscar. Only 1: for ID 624, y = 139.9532533 in the 2nd place sub versus y = 153.591637 in the highest scoring sub.</p>
<p>Perhaps this was the difference between 1st and 2nd? </p>
<p>Thank you Heads or Tails! I decided on 155 early on as it seemed to keep the y's distribution somewhat continuous. I did test a few other levels at the end but they did not improve the CV so I left it unchanged. </p>
<p>Thank you @KBA, Early on I noticed the process would focus in on two features and increase their weights to around .65 and .25. Those weights would generally remain at those levels so I figured I should remove them from the process and add to a permanent list so the process could then promote other features with the freed up weight. </p>
<p>At this point I'm not sure if I'm allowed to share the actual solution. However, I would like to offer some insight into how I was able score .55498 on the private LB (and as high as .55571 with one minor tweak.) I will note that these two score were fairly consistent with their respective public LB scores of .55796 and .55819. </p>
<p>Given the size and noise of the data, I decided early on that I wanted to focus the majority of my efforts on feature reduction. I found that the features I trained on had a greater effect on my CV than traditional parameter tuning. With this much noise, I somewhat disagree with the idea that you can "regularize out" bad features. Generally speaking, models will use the features you feed into them if the training error is reduced and your complexity restraints are maintained (i.e. "garbage in garbage out"). So, feature selection / reduction needs to occur through cross validation. <br>
With that said, one thing I've learned from previous competitions is you cannot ignore the highest-scoring public kernels. So, I made sure that one of my two selections incorporated Hakeem's Kernel: <a href="https://www.kaggle.com/hakeem/stacked-then-averaged-models-0-5697?scriptVersionId=1236940">https://www.kaggle.com/hakeem/stacked-then-averaged-models-0-5697?scriptVersionId=1236940</a> </p>
<p>I worked in two parallel tracks with the goal of selecting one submission from each. One track was a stacked ensemble including GradientBoostingRegressor, RandomForestRegressor, and SVR with a reduced set of features customized for each model. The other track was modifying Hakeem's kernel to maximize my CV. In the end, my 2nd place solution was a hybrid of these two. I was able to maximize my CV in Hakeem's kernel by replacing all of the features with the union of the features used in my stacked ensemble. </p>
<p>The features I used in my 2nd place solution were selected after experimenting with a learning process I created, which I coined "feature learner," which I will describe at the end. </p>
<p>With respect to cross validation, I used a simple 5-fold KFold with shuffle set to true and the same random seed. I tried increasing the number of folds to 10 on multiple occasions and the results were always consistent - so I generally stuck with 5 folds. I should note that, given the noise, I relied on my CV but tried to focus on changes that brought material improvements to it. For example, I assumed CV scores within .001 of each other to be effectively equal. </p>
<p>It's also important to note that instead of throwing out outliers, I clipped all y's at 155. 155 was selected from a visual inspection of y's distribution. </p>
<p>Below is sample code of my Feature Learner process. The idea is to randomly bag features and increase / decrease the likelihood that features are bagged ('weights') based on the CV score for the models tested with those features. When the weight exceeds a threshold, the feature is added to the list 'top_feats'. The top_feats are appended to each feature bag ('sample_feats'). The final list of features ('best_feats') is simply the one from the iteration ('epoch') which produced the highest CV score ('best_score'). Another layer to this problem was selecting the features that go into this process. I found that including principal components or other dimension-reducing features did not improve the best_score. I also found that categorical features were better represented with dummy matrices as opposed to a single array of factors. My solution included the following features: </p>
<ul>
<li>'ID', and 'X0' as a single factorized array </li>
<li>X0: 15 of the 47 unique categories </li>
<li>X1: 6 of the 27 unique categories</li>
<li>X2: 13 of the 44 unique categories</li>
<li>X3: 2 of the 7 unique categories</li>
<li>X4: no categories</li>
<li>X5: 9 of the 29 unique categories</li>
<li>X6: 4 of the 12 unique categories</li>
<li>X8: 5 of the 25 unique categories</li>
<li>X10 - X385: 78 of the 357 binary features</li>
</ul>
<p>Although I acknowledge it creates a risk of overfitting to a reduced set of features, I do believe it has potential given further exploration. The best_feats will depend on the n_features and the model trained, including the modelling parameters selected. I recommend running the process multiple times, with multiple models / parameters, and using the union of all of the best_feats. This approach should reduce your risk of overfitting and eliminate counterproductive features. In order for this process to be computationally feasible with larger data sets, one would likely need to use a combination of linear models and low-complexity tree ensembles. </p>
<p>With respect to the small tweak for a 1st place score, I noticed that I would have scored .55571 (private LB) if I had not included 'X0' as a single factorized array. Unfortunately I made the mistake of including this even though it was not one of the best_feats! </p>
<pre><code>import numpy as np
import pandas as pd
from numpy.random import choice
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score as r2
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor as GBR
from sklearn.ensemble import RandomForestRegressor as RFR

# `train` is the loaded training DataFrame; columns 9 onward are the candidate features
n_elements = train.shape[0]
feats = train.columns.values[9:]
n_feats = len(feats)
weights = np.zeros(n_feats) + 1.0 / n_feats        # start with uniform sampling weights

y = train['y'].values
clipped_y = np.minimum(y, 155)                     # clip targets at 155, as described above

best_feats = []
best_score = 0
epochs = 3000          # number of rounds / feature bags
n_features = 10        # features per bag; try out different values
wt_g = .2              # weight growth rate
w_threshold = .2       # weight needed to promote a feature to top_feats
# the original used the pre-0.18 API KFold(n_elements, n_folds=3, shuffle=True);
# materialize the splits so they can be reused in every epoch
kf = list(KFold(n_splits=3, shuffle=True, random_state=1776).split(np.arange(n_elements)))
scores = []
top_feats = np.array([])

for ind in range(epochs):
    # sample a bag of features according to the current weights
    sample_feat_ids = choice(a=n_feats, size=n_features, replace=False, p=weights)
    sample_feats = np.append(top_feats, feats[sample_feat_ids])

    # 3-fold out-of-fold predictions for this feature bag
    tst_P = np.zeros(n_elements)
    for trn, tst in kf:
        X = train.loc[:, sample_feats]
        trn_X, tst_X = X.iloc[trn, :], X.iloc[tst, :]
        trn_Y, tst_Y = clipped_y[trn], clipped_y[tst]
        # mod = SVR(C=50, epsilon=3, gamma=.2)
        # mod = GBR(alpha=.01, n_estimators=50, max_depth=5, min_samples_leaf=15,
        #           subsample=.5, random_state=1776)
        mod = RFR(n_estimators=100, max_depth=12,
                  max_features=min(20, len(sample_feats)),   # cannot exceed the number of columns in the bag
                  min_samples_leaf=4, n_jobs=2, random_state=1776)
        mod.fit(trn_X, trn_Y)
        tst_P[tst] = mod.predict(tst_X)

    # score against the clipped target so outliers don't dominate
    tst_rsq = r2(y_pred=tst_P, y_true=clipped_y)

    if ind > 29:
        scores.append(tst_rsq)
        ma_rsq = np.mean(scores[-30:])             # moving average of the last 30 bags
        if ind % 25 == 0:
            print(ind, ma_rsq, tst_rsq)
        if tst_rsq > ma_rsq:
            # reward the sampled features
            weights[sample_feat_ids] *= (1 + wt_g)
            weights /= np.sum(weights)
            if tst_rsq > best_score:
                best_score = tst_rsq
                best_feats = sample_feats
        else:
            # penalize the sampled features
            weights[sample_feat_ids] *= (1 - wt_g)
            weights /= np.sum(weights)

        # promote features whose weight exceeds the threshold to top_feats
        mx_w = np.max(weights)
        if mx_w > w_threshold:
            feat_imps = pd.Series(index=feats, data=weights)
            new_feats = feat_imps[feat_imps > w_threshold].index.values
            top_weights = feat_imps[feat_imps > w_threshold].values
            top_feats = np.append(top_feats, new_feats)
            feats, weights = list(feats), list(weights)
            for f, w in zip(new_feats, top_weights):
                print(f, w)
                feats.remove(f)
                weights.remove(w)
            n_feats = len(weights)
            feats, weights = np.array(feats), np.array(weights)
            weights /= np.sum(weights)
        if ind % 25 == 0:
            print(mx_w)
    else:
        # warm-up period: just collect scores
        scores.append(tst_rsq)
        print(ind, tst_rsq)
</code></pre>
<p>Thanks Mataflight, yes! I didn't catch the conversion. </p>
|
Mercedes-Benz Greener Manufacturing
|
33th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercedes-Benz Greener Manufacturing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank the organizers for this competition! And thanks a lot the competitors for valuable discussions!</p>
<p>My final submission was an average of five models. Two of them I made myself (weights 0.25 and 0.2): one model uses all features except some outliers,
and the other one uses the genetic programming features = ['X118', 'X127', 'X47', 'X315', 'X311', 'X179', 'X314', 'X232', 'X29', 'X263', 'X261']
<a href="https://www.kaggle.com/tilii7/four-blob-tsne-with-legal-supplements">https://www.kaggle.com/tilii7/four-blob-tsne-with-legal-supplements</a>
Another two models use different parameters, but the data is the same as above (weights 0.25 and 0.2 (genetic features)); the difference is:</p>
<p>I did PCA->GRP->SRP->ICA, where the result of PCA is also included in the input of GRP, the result of GRP is included in the input of SRP, and likewise for ICA (see the sketch below). That is really interesting: the order of these four steps cannot be changed, otherwise the LB would be worse. </p>
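<p>A sketch of that chained construction (the component count of 12 matches what is used elsewhere in this thread; random states and the feature frame X are placeholders):</p>

<pre><code>import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.random_projection import GaussianRandomProjection, SparseRandomProjection

X0 = X.values.astype(float)     # X: label-encoded categoricals + binary columns
pca = PCA(n_components=12, random_state=0).fit_transform(X0)
grp = GaussianRandomProjection(n_components=12, random_state=0).fit_transform(np.hstack([X0, pca]))
srp = SparseRandomProjection(n_components=12, dense_output=True, random_state=0).fit_transform(np.hstack([X0, pca, grp]))
ica = FastICA(n_components=12, random_state=0).fit_transform(np.hstack([X0, pca, grp, srp]))

X_aug = np.hstack([X0, pca, grp, srp, ica])   # original features plus all chained components
</code></pre>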
<p>The last model is based on the public kernel by Jason, "LB 0.57+ Stack lgbLasGBDT and average with xgb" (weight 0.1): <a href="https://www.kaggle.com/jiashenliu/lb-0-57-stack-lgblasgbdt-and-average-with-xgb">https://www.kaggle.com/jiashenliu/lb-0-57-stack-lgblasgbdt-and-average-with-xgb</a></p>
<p>You can access all code here: <a href="https://github.com/arvidzt/Kaggle-Mercedes-Benz-Greener-Manufacturing-33th-Solution">https://github.com/arvidzt/Kaggle-Mercedes-Benz-Greener-Manufacturing-33th-Solution</a></p>
<p>you can refer scirpus code <a href="https://www.kaggle.com/scirpus/four-blob-tsne">https://www.kaggle.com/scirpus/four-blob-tsne</a></p>
<p>might 300+</p>
|
Mercedes-Benz Greener Manufacturing
|
94th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercedes-Benz Greener Manufacturing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>here <a href="https://www.kaggle.com/c/mercedes-benz-greener-manufacturing/discussion/36125">https://www.kaggle.com/c/mercedes-benz-greener-manufacturing/discussion/36125</a> my team tried to describe in English our solution for this competition </p>
|
Mercedes-Benz Greener Manufacturing
|
A third place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercedes-Benz Greener Manufacturing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, sorry if I didn't respond to all the comments on my public kernel; I haven't made an effort worth submitting to this competition since that kernel, so you can see how detached I was from the competition this last month.</p>
<p>Second, I want to point out that there was some kind of non-determinism with the script, which led people who ran the same kernel to not get the score I got on either the private or the public LB; even with the same random seeds fixed across many runs, you might get different results from one run to another.</p>
<p>Lastly, I've seen @DDgg getting the 6th place position on the private LB using a modified version of my kernel but with a huge number of components for PCA and the other clustering and dimensionality reduction methods. I want to say that instead of trying to increase the number of components, you can get the 3rd place position in this competition with the same kernel but with some small modifications: </p>
<ul>
<li>we keep the number of components the same (12)</li>
<li>we change the loss of the GradientBoostingRegressor to 'ls' instead of huber.</li>
<li>we change the weights of the averaged model to 0.65 - 0.35 instead of 0.75 - 0.25.</li>
</ul>
<p>When I was building this script, I hadn't yet done any serious feature engineering except for the dimensionality reduction features (PCA ...), and I wanted to find the best machine learning algorithm combinations to run on them. I had just learned from a Slack group about the tpot library: TPOT is a Python automated machine learning tool that optimizes machine learning pipelines using genetic programming. So I used the class tpot.TPOTRegressor() to pick for me the best machine learning and feature engineering combinations that got the best validation score on the train data, using the combination of features I gave it (categorical variables, binary ones and PCA, tsvd ...)</p>
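<p>A minimal sketch of such a TPOT search (generation and population sizes are illustrative; the real searches ran much longer):</p>

<pre><code>from tpot import TPOTRegressor

tpot = TPOTRegressor(generations=20, population_size=50,
                     scoring='r2', cv=5, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)            # X_train: label-encoded categoricals + binaries + PCA/tSVD/ICA columns
print(tpot.score(X_valid, y_valid))
tpot.export('best_pipeline.py')       # writes the winning pipeline out as plain sklearn code
</code></pre>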
<p>After some long runs of this class using many generations and population sizes (it takes less time if you have more computing power), the best combination it gave me is the one with GradientBoostingRegressor with 'ls' used as the loss function and LassoLarsCV at the entry and the exit of the pipeline, of course. Sadly xgboost isn't supported in tpot yet, so I thought about averaging those models to get a better public LB score. This model gave me a great public LB score with the first weights that came to my mind (0.75 - 0.25); after that I tried some other weights (which have gotten some great results on the public LB too). </p>
<p>Because I was running the kernel on my PC first before I ran it on Kaggle, one night I managed to get a great public LB score with a combination of weights and averaging of my models, but I couldn't reproduce the same results so that I could share a public kernel on it with the group. So while searching for the perfect combination to get the best public LB score, I decided to change the loss to 'huber' instead of 'ls', which I thought would be better at the time because it is less sensitive to outliers. And I got my best public LB score with it and shared that kernel with you. I wasn't using cross validation when I was using the 'ls' loss because I was focusing on public LB fitting, and I used CV after I changed the loss to 'huber'.</p>
<p>I wasn't sure Kaggle kernels had tpot available so that I could use the pipeline suggested to me by calling its StackingEstimator, so I took the class from their available source code, used it in my kernel, and fed it to make_pipeline from sklearn as they suggest in the tpot documentation.</p>
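<p>For reference, the exported pipeline shape described above looks roughly like this; the StackingEstimator import assumes a TPOT version that ships it in tpot.builtins (otherwise copy the class from TPOT's source as the author did), and the boosting hyperparameters are illustrative:</p>

<pre><code>from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LassoLarsCV
from sklearn.ensemble import GradientBoostingRegressor
from tpot.builtins import StackingEstimator

stacked_pipeline = make_pipeline(
    StackingEstimator(estimator=LassoLarsCV()),
    StackingEstimator(estimator=GradientBoostingRegressor(
        loss='squared_error',        # the least-squares ('ls') loss referred to above
        max_depth=3, n_estimators=300, subsample=0.7)),
    LassoLarsCV()
)
stacked_pipeline.fit(X_train, y_train)
# final prediction = 0.65 * stacked_pipeline.predict(X_test) + 0.35 * xgb_predictions
</code></pre>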
<p>the photo I'm sharing here is of the code you'll get using my kernel and the configuration that is shared here.</p>
<p>Finally, if you're interested about the slack group I was talking about it is one where many great kagglers gather to share knowledge and have some fun, insightful and also helpful discussions and it is named 'kagglenoobs' :D</p>
<p>You're welcome @zr, and thank you for your contribution to correcting the kernel too. I did go check that and found there's no problem with it, as it always returned the same column names across 3 runs (I did not check that much :D), but the solution or fix you proposed is probably great and safer to use every time; it is a nice trick and I'll remember to use it all the time just to be sure ;).
Actually, I did go see the documentation of the GradientBoostingRegressor of sk-learn that I used in my script and found there's an argument called 'random_state' whose default value is None; that means if it is not set by the user (which I didn't do in my script either), the random number generator is the RandomState instance used by np.random. So I guess this can explain the different predictions when the same kernel is run on Kaggle or other machines. There's another parameter that could also be responsible for this kind of non-deterministic behavior when combined with random_state=None, and that is the 'max_leaf_nodes' parameter, whose default value None means an unlimited number of leaf nodes.</p>
<p>I transformed them to numerical values in my script, so I'm not sure what you meant; we'll surely lose some amount of information in the data if we pick only 12 components as I did in my script, but I'm not sure how that affects categorical values that have been transformed to numerical ones. Maybe if we had binarized those categorical values instead of transforming them to numerical ones, and then done PCA on them, it would have represented the data better with the same number of components, give or take one or two.</p>
<p>You're welcome; actually the slack group isn't mine and I'm just a member of it :D. I did pull out of the competition because I had a family trip for 7 days, and after that I was thinking I still had time, so I spent a week playing video games as I am on a sort of vacation this time of year :D. After that I spent so much time doing EDA on the data and thinking about feature engineering that I gathered too many ideas, but sadly it was the last day so I didn't have time to apply any :/, but lesson learned for the next competition ;) !</p>
<p>You're actually right; PCA, as far as I remember from my college courses, should be done on centred (and probably also normalized) values. I was hoping to do it afterwards in my next attempt but I didn't have the chance. We were lucky that this somewhat worked on this dataset :) </p>
<p>That's right, thank you for your explanation @Serigne. Also, I just want to point out that this kernel could be made more useful for the private LB score by just altering some other parameters, and there is no need to set the number of components to 320 so that you get more than 1500 columns in your data to reach the 6th place solution (judging by the kernel attached to the 6th place on the private LB, it differs from mine by using n_comp=320 (I used 12), which is in my humble opinion crazy given the number of rows we have in the data; we needed dimension reduction, not augmentation)</p>
<p>You're welcome, I'm happy you found my kernel useful :) </p>
<p>You're welcome, Thank you ;) !</p>
<p>You're welcome, Thank you :)!</p>
<p>You're welcome @Binil George, it feels great that you found my kernel useful :). I wish I had stayed in the competition too :/. I had spent so much time doing and reading EDA on the data and thinking about feature engineering ideas, but it was too late to implement them, and I only chose the models and did the stacking on the last day. </p>
|
Mercedes-Benz Greener Manufacturing
|
The 11th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mercedes-Benz Greener Manufacturing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you, congrats on your medal at this competition as well :)</p>
<p>I meant just replacing each category with the group mean of the target with respect to this category. It's a very common technique, but I failed to find any source with a brief explanation of it in English (I was really surprised). If you're still confused with this, just let me know, I'll try to explain it.</p>
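<p>For illustration, a minimal pandas sketch of what I mean (<code>train</code>, <code>test</code>, <code>'X0'</code> and <code>'y'</code> are placeholder names, not my exact code):</p>
<pre><code># Mean target encoding: replace each category with the group mean of the target.
import pandas as pd

means = train.groupby('X0')['y'].mean()            # group mean of the target per category
train['X0_te'] = train['X0'].map(means)            # replace the category with its mean
test['X0_te'] = test['X0'].map(means)              # unseen categories become NaN
test['X0_te'] = test['X0_te'].fillna(train['y'].mean())  # fall back to the global mean
</code></pre>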
<p>Yes, that's right, thank you. I've also updated the notebook a bit to make the encoder creation more explicit.</p>
<p>Thank you!</p>
<p>Thank you very much and congratulations for you as well for your medal :)</p>
<p>Thank you.
I actually used t-test for cv as well, but I published only the code to reproduce my final solution, without the cv section.
I used 10 repetitions with 5 folds, so I got 50 results on the same subsets for each of the two candidates. The t-test answers the question of whether one of them is better and whether the difference is statistically significant.</p>
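<p>For illustration, a rough sketch of such a comparison (model and data names are placeholders, not my actual code):</p>
<pre><code># Paired t-test over the same 50 CV folds for two candidate models.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.model_selection import RepeatedKFold, cross_val_score

cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)  # identical folds for both calls
scores_a = cross_val_score(model_a, X, y, scoring='r2', cv=cv)
scores_b = cross_val_score(model_b, X, y, scoring='r2', cv=cv)

t_stat, p_value = ttest_rel(scores_a, scores_b)    # paired: fold i of A vs fold i of B
print(np.mean(scores_a), np.mean(scores_b), p_value)  # only move forward if p_value < 0.05
</code></pre>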
<p>olivier, yes, I've done cross-validation without these bad objects. Doing this I found that my models which were tuned via CV on the entire dataset were overfitted (too large n_estimators, too small feature subsets). And I tried to maximize the performance on the dataset without outliers rather than with them. The r2 on CV on the entire dataset decreased from 0.58 to 0.57 after this, but I decided to trust the CV without outliers. And that was a good choice, because the model with the best CV score on the entire dataset has only 0.54883 on the private leaderboard. </p>
<p>dmi3kno, OK, with the quotation I got this ;)</p>
<p>Sure)
Just go here: <a href="https://github.com/Danila89/kaggle_mercedes">https://github.com/Danila89/kaggle_mercedes</a> and download sep_estimator.py as well as cluster_target_encoder.py. Or just clone the repository.
sep_estimator is a class which creates four separate estimators, fits them on the DataFrame when the fit method is called, and when predicting uses a separate estimator for each value of the 'labels' column.</p>
<p>Thank you. I'm not sure I understand what you mean by 'dropping error objects "under the hood" in xgboost'. If you're speaking about the colsample_ parameters: they drop features, not objects.</p>
<p>The solution is here: <a href="https://github.com/Danila89/kaggle_mercedes/blob/master/solution_11_place.ipynb">https://github.com/Danila89/kaggle_mercedes/blob/master/solution_11_place.ipynb</a>.
I found that putting the data on github is against the rules, so you can't just clone the repository and ran the code. You have to put the data in the right place (../input/), sorry for that.</p>
<p>The brief description:</p>
<ol>
<li><p>I did not use probed values. At all.</p></li>
<li><p>The target's distribution consists of four components. Every estimator captures it: if you do cross_val_predict with any estimator and plot a histogram – you will observe 4 distinct components.</p></li>
<li><p>These components can be estimated particularly well using the X0 feature. This solution is a good baseline, here we are returning the group means by X0 and by doing this we split the components: <a href="https://www.kaggle.com/paulorzp/one-line-solution-in-python-lb-0-55453">https://www.kaggle.com/paulorzp/one-line-solution-in-python-lb-0-55453</a>. Here you can find the approach I used to split the components: <a href="https://www.kaggle.com/daniel89/mercedes-cars-clustering/">https://www.kaggle.com/daniel89/mercedes-cars-clustering/</a></p></li>
<li><p>The main question is how well we can do inside these components. Actually, not very well.</p></li>
<li><p>The main point of my solution is to investigate the clusters independently. I built four algorithms, I've tuned them independently, I've selected for them subsets of features and then combined the answers.</p></li>
<li><p>I ended up with three clusters predicted by xgboost. They all used the entire training set to fit, but they had different parameters and their feature sets also differed a bit. The fourth cluster was really hard: I just returned the cluster mean for all such objects in the test set. Luckily it was the least populated cluster.</p></li>
</ol>
<p>Now some details if someone is interested.</p>
<p>Cross-validation. It was really essential here. The leaderboard was absolutely uninformative due to the probing. The test set was of the same size as the train set and the public test set was only 0.2 of the test, so we couldn't do anything without CV here. I used 10 times 5-fold cross-validation, 50 folds in total. I did not compare means; instead I calculated the t-statistic (scipy.stats.ttest_rel does it well in this case) and tried to guide myself with this statistic. I tried not to go forward if the p-value was more than 0.05. And I tried to reduce the number of fits because I'm aware of the multiple testing problem. This framework was robust, it gave good results, and I managed to reach a 0.58 local score without dropping the outliers.</p>
<p>However, although I tried to do fewer fits and changed my random seeds for CV regularly, overfitting still crept in. The problem was that by improving the score we were fitting better almost only on the outliers. That's how I came to the second CV framework, suggested by Evgeny Patekha (<a href="https://www.kaggle.com/johnpateha">https://www.kaggle.com/johnpateha</a>) after the Sberbank competition. I did cross_val_predict with xgboost and removed the 1% of objects with the biggest errors, then tried to do everything the same on this subset. I found that my feature selection was good mostly for outliers and the n_estimators was far too large (though the best on the entire dataset). I decided to trust the second CV framework, changed the parameters and feature subsets, and that was my final submission.</p>
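<p>For illustration, a rough sketch of that outlier-removal step (<code>xgb_model</code>, <code>X</code> and <code>y</code> are placeholders, not my exact code):</p>
<pre><code># Drop the 1% of training objects with the largest out-of-fold errors,
# then re-run the usual repeated-KFold CV on the cleaned subset.
import numpy as np
from sklearn.model_selection import cross_val_predict

oof = cross_val_predict(xgb_model, X, y, cv=5)     # out-of-fold predictions
errors = np.abs(y - oof)
keep = errors < np.quantile(errors, 0.99)          # keep everything except the worst 1%
X_clean, y_clean = X[keep], y[keep]
</code></pre>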
<p>The best feature selection for this problem for me was colsample_bytree=0.6, colsample_bylevel=0.8. I tried adding features greedily, tried the add-delete method (adding, deleting, adding, etc.), and tried shuffling a feature's values to see how the CV changes. But in my case all these methods caused overfitting, specifically overfitting to the outliers.
I found that the gamma parameter helps a lot in controlling the depth of the trees in xgboost. This parameter helped me a lot to increase my score.</p>
<p>I didn't use categorical features in the final submit. However I tried target encoding and it showed good results on the private leaderboard (0.55263). But it was not so good on cv and I didn't use it.</p>
<p>Thanks for reading, questions, suggestions and discussions are appreciated</p>
|
Microsoft Malware Classification Challenge (BIG 2015)
|
2nd place code and documentation
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Classification Challenge (BIG 2015)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone,</p>
<p>We cleaned up our code, ran everything again and after hours and hours of waiting it was quite a relief to reproduce the same score - even slightly better: 0.0030 instead of 0.0032.</p>
<p>Attached you find our code and documentation, feel free to ask for more details!</p>
<p>Marios & Gert</p>
|
Microsoft Malware Classification Challenge (BIG 2015)
|
6th place code and description
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Classification Challenge (BIG 2015)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone, here is the link to <a href="https://github.com/sash-ko/kaggle-malware-classification">github repo</a> with code and brief solution description. I concentrated mostly on feature engineering and in the end, I got around 28K of them grouped into 9 feature sets: opcode ngrams, bytecode ngrams, section names from asm files and some others.</p>
|
Microsoft Malware Classification Challenge (BIG 2015)
|
Brief Description of 10th Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Classification Challenge (BIG 2015)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Unfortunately I didn't have much time for this competition, but here is my solution. It still has a LOT of room for improvement.</p>
<p>Basically I created 4 feature groups.</p>
<p>1) - Converted each .bytes file into an image matrix and extracted 320 features according to the SARVAM method. Then used k-means and other clustering algorithms like Affinity Propagation, Spectral Clustering, etc., to extract groups with different numbers of classes from those 320 features. This generated 10 categorical features. They don't have great performance on their own, but together with other features they can improve the final results a little.</p>
<p>2) - Counts of many manually chosen words from the .asm files by section (.header, .text, .data, etc.), like: "import, sub, loc, call, push, xor, add, jmp, security, dll, ret, unk, extrn, re, LZX, Decode, bf, obfuscator, rwe, COLLAPSED, dd, dw, db, nop, export, dllentry, reg, crypt, CWinApp, dllimport, Cookie, Thread, bb, jumptable, File, Delphi, Microsoft". In my tests these features have good performance. This generated 407 count features. Interestingly, I found more than one file with the word "obfuscator" inside, and it is indeed an obfuscator.</p>
<p>3) - The most interesting feature: mean line length by section in the .asm files. It gives me 11 features from all sections (.header, .text, .data, etc.) and performed well. Using the standard deviation of line length would probably also perform well, but I didn't test it. To build these features I assumed the mean line length by section could work as a malware signature.</p>
<p>4) - File sizes, compressed and uncompressed, of the .bytes and .asm files, plus the first byte of each .bytes file. 5 features.</p>
<p>My last step was to join all those features and run a multiclass random forest with many trees to find the feature importances. Then I picked around the 300 most important features and trained on them with xgboost using cross-validation. A high number of trees and a low eta give a more stable response in GBDT algorithms. My local CV was 0.0103.</p>
<p>Then I applied a trim technique to those results, testing it on my cross-validated train set. For each instance, if the maximum probability was higher than 0.9998 I set it to 1 and the other 8 classes to 0. If any class had a probability below 0.0002 I set it to 0 and added that probability to the class with the highest probability in that instance. It improved my CV train set performance from 0.0103 to ~0.009, and private to 0.00684.</p>
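<p>For illustration, a rough numpy sketch of that trim step (the thresholds are the ones mentioned above; <code>probs</code> is a placeholder for the 9-class probability matrix):</p>
<pre><code># Trim the predicted probabilities: hard 1/0 for very confident rows,
# and move near-zero class probabilities onto the most likely class.
import numpy as np

def trim(probs, hi=0.9998, lo=0.0002):
    probs = probs.copy()
    for i in range(len(probs)):
        row = probs[i]
        top = row.argmax()
        if row[top] > hi:            # very confident prediction -> hard labels
            row[:] = 0.0
            row[top] = 1.0
        else:
            small = row < lo         # tiny probabilities -> add them to the top class
            row[top] += row[small].sum()
            row[small] = 0.0
    return probs
</code></pre>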
<p>As the trim technique is very risky and a single mislabeled instance can destroy your score, I trusted my local CV, so I chose those two submissions as my final models. It seems my trimmed model worked well.</p>
<p>Also, I sent a lot of emails (via Kaggle) to other competitors (more than 10) to team up and nobody replied... does anybody know if that email system is working?</p>
|
Microsoft Malware Classification Challenge (BIG 2015)
|
Brief Description of 3rd Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Classification Challenge (BIG 2015)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone, thanks for this competition! Special thanks to my teammates - you are amazing!</p>
<p>In this post I will briefly describe my features and semi-supervised trick.</p>
<p>The features we used consist of ~200 features from me and 800 features from Dmitry. <br>My features are:</p>
<ol>
<li>counts of lines for every section ('.data', '.rsrc', '.text' and so on) in *.asm and the entropy of this distribution. I found this kind of feature very useful!</li>
<li>asm instruction counts, like 'jmp', 'mov', … I used the 30 most frequent instructions.</li>
<li>sizes of *.asm and *.bytes and their ratio.</li>
<li>sys calls (grepped by “__stdcall” from *.asm) + TF-IDF + NMF(n_components=10). I found that TF-IDF works well and NMF performs better than PCA for dimensionality reduction (at least for this task)</li>
<li>like the previous one, but for functions (grepped by “FUNCTION”)</li>
<li>4-grams. We performed feature selection by frequency analysis + LinearSVC(penalty='l1').transform() + RF.feature_importance(). This allowed us to select ~130 4-grams which alone gave 0.01193/0.00958 on the public/private LB using a single model!</li>
<li>10-grams. The process was similar to the 4-gram extraction and selection, but we used hashing and some magic (see below) at the last stage. Finally we used only <strong>14</strong> 10-grams, and they improved our scores!</li>
</ol>
<p>Using these features (without the 10-grams), xgboost and some kind of bagging magic (thanks Stanislav for this!) we got public/private scores of 0.00847 / 0.00559</p>
<p>Similarly to <a href="http://www.kaggle.com/c/malware-classification/forums/t/13490/say-no-to-overfitting-approaches-sharing/72311#post72311">LittleBoat&Co's approach</a>, we did a semi-supervised trick, but instead of choosing the label by the max probability, we sampled labels based on the test predictions and averaged several runs. This trick improved our score from 0.0055 to 0.0045 on the private LB (single model)</p>
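<p>For illustration, a rough sketch of this label-sampling trick (all names here, like <code>test_probs</code> and <code>model</code>, are placeholders rather than our actual code):</p>
<pre><code># Sample pseudo-labels from the predicted class distribution (instead of argmax),
# retrain on train + pseudo-labeled test, repeat several times, and average.
import numpy as np

def sample_pseudo_labels(test_probs, rng):
    return np.array([rng.choice(9, p=row) for row in test_probs])

rng = np.random.RandomState(0)
run_preds = []
for _ in range(5):
    pseudo = sample_pseudo_labels(test_probs, rng)
    X_aug = np.vstack([X_train, X_test])
    y_aug = np.concatenate([y_train, pseudo])
    model.fit(X_aug, y_aug)
    run_preds.append(model.predict_proba(X_test))
final = np.mean(run_preds, axis=0)   # averaged over the sampled runs
</code></pre>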
<p>We were able to go down from 0.0045 to 0.0041 on the private LB by adding the 14 10-gram features. We selected them as follows: we generated out-of-fold predictions for train, sorted objects by true-class probability, labeled the 100 worst as '1', and ran an RF on 500 preselected 10-grams. We selected the most important features, and it turned out we could reach 1.0 accuracy for this binary task with just 14 features. Since the features were naturally 10-grams, the objects of class '1' were rather different, and the number of features was 10 times smaller than the number of objects, we wanted to believe we would not overfit using these features (but in general you will).</p>
<p>Dmitry will describe his approach and some mixing stuff soon.</p>
<p>We are going to share our code later</p>
|
Microsoft Malware Classification Challenge (BIG 2015)
|
Brief Description of 7th-Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Classification Challenge (BIG 2015)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, congratulations to the winners. This was a fun competition with an interesting dataset and great communication in the forums. I’m posting a description of my solution because it seems to use many fewer features (25) than the winning solutions and includes an interesting calibration technique for RF prediction probabilities.</p>
<p><strong>General Approach</strong></p>
<p>1. Generate features from the .asm and .byte files. <span style="line-height: 1.4">Features included:</span></p>
<p style="padding-left: 30px">• General characteristics, such as the length of the files, start address of the asm file, etc.</p>
<p style="padding-left: 30px">• Counts of various objects such as hex codes, hex 2grams, assembly language commands, etc.</p>
<p style="padding-left: 30px">• Hashes of specific parts of the asm file, such as each of the first 50 lines and specific properties of the asm file (e.g., segment type, segment permissions, etc.)</p>
<p>2. Reduce the feature set (which started at about 2000 features) with greedy feature selection. CV was done with 10-fold cross-validation repeated 10 times with re-randomized folds, so 100 folds altogether.</p>
<p>3. Generate prediction probabilities with random forest (RF) algorithm (sklearn implementation, estimators ranging from 500 – 1,500)</p>
<p>4. Arguably the most important step, calibrate the prediction probabilities by raising them to a power selected via grid search (actually 9 different powers, one for each class). See below for details of this technique.</p>
<p><strong>Performance </strong></p>
<p>Public LB Performance was consistently a little better than CV performance but almost always moved in the same direction. The final CV and LB log losses were .005 and .004, respectively. The Private LB result was .006.</p>
<p><strong>Model </strong></p>
<p>The final solution was a single RF model consisting of the mean of 10 forests. Each forest consisted of 1,500 trees. The purpose of producing 10 different forests was to randomly change the hash seed for each forest to discourage RF from treating the hash features as numerical values. In approximate order of importance, the features were:</p>
<p>- hash of the 11th line of the asm file (this is the line in most of the asm files that identifies the target processor)</p>
<p>- length of HEADER section in asm file</p>
<p>- count of '??' 2grams in bytes file</p>
<p>- count of '00' 2grams in bytes file</p>
<p>- length of .rsrc section in asm file</p>
<p>- count of ‘A6’ 2grams</p>
<p>- count of ‘DE’ 2grams</p>
<p>- sum of all hex values in bytes files (excluding ‘?’ chars)</p>
<p>- count of ‘E8’ 2grams</p>
<p>- count of movzx commands</p>
<p>- count of align commands</p>
<p>- length of .idat section</p>
<p>- hash of segment type</p>
<p>- length of .data section</p>
<p>- count of proc commands</p>
<p>- first address of asm file</p>
<p>- hash of Segment Permissions</p>
<p>- count of jb commands</p>
<p>- hash of line 10 in asm file</p>
<p>- count of xor commands</p>
<p>- count of dll calls</p>
<p>- length of DATA section</p>
<p>- count of sbb commands</p>
<p>- count of fstp commands</p>
<p>- hash of 34th line in DATA section</p>
<p><strong>Probability Calibration</strong></p>
<p>While RF did a very good job of separating the classes, it did a poor job of predicting probabilities. This is a well-known problem with the RF algorithm, as described in [1]. Because each tree contains only a subset of features, even a very good model will include some trees with poor predictions, which will skew the probabilities towards the center. I now know that there are a number of existing techniques to re-calibrate the probabilities produced by an RF classifier, but I was unaware of this research at the time I encountered the issue, so I devised a heuristic to solve the problem.</p>
<p>Since the prediction probabilities produced by RF will be too conservative (e.g., too close to .5), it seemed reasonable that raising the probabilities to a power <em>n</em> and then re-normalizing might improve the results. For example, in a two-class problem, suppose the probabilities are .95 and .05. After raising both to the power of 2 and then renormalizing, the probabilities would be .997 and .003.</p>
<p>Applying this technique to the results of the Malware model dramatically improved the log loss score. Before applying the calibration, CV scores from my best model were about .02. Applying a standard power of 2.3 to all probabilities reduced the CV score to about .006. The LB scores fell by a similar amount. </p>
<p><span style="line-height: 1.4">I found further that I could improve the log loss score even more by applying a different calibration power to each class. This makes sense since the model is better at identifying some classes than others, so some classes need more calibration than others. Interpreting the calibration powers as tuning parameters and using a CV grid search to find the best ones resulted in the following list of calibration powers:</span></p>
<p>Class1: 3.1</p>
<p>Class2: 1.8</p>
<p>Class3: 4.4</p>
<p>Class4: 2.8</p>
<p>Class5: 1.9</p>
<p>Class6: 2.9</p>
<p>Class7: 2.7</p>
<p>Class8: 2.4</p>
<p>Class9: 1.8</p>
<p>For example, all predictions for Class 1 were raised to the power of 3.1, and all predictions for Class 2 were raised to the power of 1.8, etc.</p>
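<p>For illustration, a small numpy sketch of this calibration (the variable <code>rf_probs</code> is an assumed placeholder for the raw RF probabilities):</p>
<pre><code># Per-class power calibration: raise each class probability to its own power,
# then re-normalize each row so the probabilities sum to 1 again.
import numpy as np

powers = np.array([3.1, 1.8, 4.4, 2.8, 1.9, 2.9, 2.7, 2.4, 1.8])

def calibrate(probs, powers):
    sharpened = probs ** powers                              # one power per class
    return sharpened / sharpened.sum(axis=1, keepdims=True)  # row-wise re-normalization

calibrated = calibrate(rf_probs, powers)
</code></pre>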
<p><span style="line-height: 1.4">Applying this heuristic to the raw probability scores produced by the RF model reduced the CV score to about 0.005 and the public LB score to about .004. Further, the calibration powers were relatively stable. Changing them a little had correspondingly small effect on the log loss value. This makes makes the technique far superior to threshholding, which can have dramatic and unexpected effects on the logloss score. </span></p>
<p><span style="line-height: 1.4">Since my feature set was relatively small and not especially creative, I think the calibration technique had the biggest impact on producing a top-10 result. I suspect that if others who used RF (or other ensemble methods) had applied this technique, they could have easily surpassed my score.</span></p>
<p>[1] H. Boström. Estimating class probabilities in random forests. In Proc. of the International Conference on Machine Learning and Applications, pages 211–216, 2007.</p>
<p>@Bats & Robots</p>
<p>1) Just took the mean of the predictions from all 10 forests</p>
<p>2) Yes, I did use some 2gram counts -- ??, 00, etc. See list above.</p>
<p>3) I used a sort of funny system for greedy selection. I started by adding features based on CV scores, but used a relatively low number of trees (500) and folds (10) for this step so it tended to add more features than were optimal. Then I removed features using cv with more trees (1000) and more folds (100). I found that this method was much more efficient than starting with all potential features. </p>
|
Microsoft Malware Classification Challenge (BIG 2015)
|
First place code and documents
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Classification Challenge (BIG 2015)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi here is our code and documents. Our reproduced result can get 0.0029 and 0.0023 for our best single and ensemble models, respectively.</p>
<p>Edit: you can also find it in Little Boat's github repo, https://github.com/xiaozhouwang/kaggle_Microsoft_Malware</p>
|
Microsoft Malware Prediction
|
#1435th Solution - Public LB 0.707 - Private LB 0.631
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the compliments and encouragement. I posted my kernel <a href="https://www.kaggle.com/cdeotte/embeddings-network-malware-0-697-0-773">here</a>. </p>
<p>Hi Carlos, thanks for your renewed interest. You motivated me to make diagrams and I'll post my finished kernel today. Below are pictures. A neural network without activations mimics PCA (because PCA is a linear model). Note, when I say "no activation", that means that the "identity activation" is used. In the first diagram, you can see the 2:1 ratio. The output from the country embeddings is 6 and the output from their dense layer is 3. </p>
<p>An embedding mimics inputing one-hot-encoding into your neural network. Therefore in the diagram below, <code>Country_1</code> is the boolean which says if the variable is county 1. The actual geographical grouping takes in 8 variables: <code>CountryIdentifier</code>, <code>CityIdentifier</code>, <code>OrganizationIdentifier</code>, <code>GeoNameIdentifier</code>,
<code>LocaleEnglishNameIdentifier</code>, <code>Census_OSInstallLanguageIdentifier</code>, <code>Census_OSUILocaleIdentifier</code>,
<code>Wdft_RegionIdentifier</code> <br>
<img src="http://playagricola.com/Kaggle/geographicalB31919.jpg" alt="image"> </p>
<p>Below is the hardware grouping. The actual module takes in 18 variables: <code>Processor</code>, <code>Census_MDC2FormFactor</code>, <code>Census_DeviceFamily</code>, <code>Census_ProcessorCoreCount</code>, <code>Census_ProcessorClass</code>,
<code>Census_PrimaryDiskTypeName</code>, <code>Census_HasOpticalDiskDrive</code>, <code>Census_TotalPhysicalRAM</code>, <code>Census_ChassisTypeName</code>,
<code>Census_InternalPrimaryDiagonalDisplaySizeInInches</code>, <code>Census_InternalPrimaryDisplayResolutionHorizontal</code>,
<code>Census_InternalPrimaryDisplayResolutionVertical</code>, <code>Census_PowerPlatformRoleName</code>, <code>Census_InternalBatteryType</code>,
<code>Census_InternalBatteryNumberOfCharges</code>, <code>Census_IsTouchEnabled</code>, <code>Census_IsPenCapable</code>,
<code>Census_IsAlwaysOnAlwaysConnectedCapable</code>
<img src="http://playagricola.com/Kaggle/hardwareB31919.jpg" alt="image"> </p>
<p>After creating the 6 grouping, they send all their outputs into the final 3 hidden layers:
<img src="http://playagricola.com/Kaggle/outputB31919.jpg" alt="image"></p>
<p>Ok, thank you <a href="/cpmpml">@cpmpml</a> and <a href="/sheriytm">@sheriytm</a> for the encouragement. I posted my insights in the Discussions section.</p>
<p>I don't believe that I did overfit. Every decision was made using Time Split Validation. I restrained from making decisions based on LB. I believe my model has strong power to predict into the future. I posted more information <a href="https://www.kaggle.com/c/microsoft-malware-prediction/discussion/84227">here</a>.</p>
<p>(I dropped in private LB because my average malware rate throughout October and November 2018 is a constant 0.5 as indicated in my plots above. If I added any downward slope, I would have had a high private LB. The original NFFM had a downward slope but I manually adjusted it to level before ensembling with my NN and LGBM)</p>
<p>I know the definition and I answered your question above. I "overfit" because I adjusted the <code>HasDetection</code> rate to be an average 0.5 during October and November 2018. My NN and LGBM learned this on their own without any post adjustment. My NFFM was adjusted manually (see full model description above). </p>
<p>When I was making my final submission, I knew that the average rate over time would decide the competition winner. My first final submission had constant 0.5 thru October and November. For my second final submission I added an upward slope, unfortunately. What I didn't try was a downward slope. </p>
<p>(After the competition ended, I tried a downward slope and saw that it was the correct choice.)</p>
<p>CPMP:</p>
<blockquote>
<p>Yes, and Onodera shared a related feature. My immediate reaction was to ask about what could cause this correlation. Correlation without causation is a trap for machine learning models.</p>
<p>In this case, the only plausible cause would be the way data was sampled. For this correlation to carry on, it would mean that private test data was sampled independently, but with the same methodology than train data. Betting on that is a bet. Bets are not always won, that's why they are bets. </p>
</blockquote>
<p>There are two reasons for this correlation. One is artificial and one is real causation. I could explain, but does anyone really want to know now? (and from me, position 1435) Therefore my decision to add an upward slope wasn't a lottery. It was based on more insights that I have not yet revealed.</p>
<p>Thank you Michael</p>
<p>Thank you Kaggle and Microsoft for providing data, computing resources, and community. Thank you Kagglers for the collaboration (too many wonderful people to name). This competition was fun and the dataset was endlessly fascinating.</p>
<h1>Ensemble</h1>
<p>My solution is the ensemble of 3 ensembles! NFFM, LGBM, NN. First I apply my <a href="https://www.kaggle.com/cdeotte/private-leaderboard-0-703">trick</a> to update gdys's <a href="https://www.kaggle.com/guoday/nffm-baseline-0-690-on-lb?scriptVersionId=10342653">NFFM baseline kernel</a> from 0.690 to 0.695 LB (and I ensemble his 4 submission files). Next I ensemble 7 of my LGBM that score 0.702 each (50 CPU hours). Next I ensemble 7 of my NN that score 0.698 each (50 GPU hours). Lastly I build <code>0.25*NFFM + 0.25*NN + 0.5*LGBM</code> to score 0.707 LB. (Just NN + LGBM scores 0.705)</p>
<h2>NFFM</h2>
<p>Here is NFFM before updating <br>
<img src="http://playagricola.com/Kaggle/nffm31419.png" alt="image">
NFFM after updating
<img src="http://playagricola.com/Kaggle/nffm31519.png" alt="image"></p>
<h2>LGBM</h2>
<p>My LGBM is the result of many discussions with Kagglers. Thank you. (LGBM kernel posted <a href="https://www.kaggle.com/cdeotte/high-scoring-lgbm-malware-0-702-0-775">here</a>)</p>
<p>I removed <code>MachineIdentifier</code> and 17 variables posted <a href="https://www.kaggle.com/jiegeng94/everyone-do-this-at-the-beginning">here</a> (actually I changed 3 choices). Next I label encode all variables and remove values less than 0.01% total data or 3x more or 3x less frequent between train and test. Explained <a href="https://www.kaggle.com/cdeotte/time-split-validation-malware-0-68">here</a>. (I found 3x better than 4x) Next I do not turn these variables into <code>category</code> dtypes nor one-hot-encode. Instead I feed them into my LGBM as integers. I set <code>num_leaves=2048</code>, <code>learning_rate=0.025</code>, <code>colsample_bytree=0.5</code>, and <code>early_stopping_rounds=100</code>. All this achieves around 0.693 LB (i think). </p>
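<p>For illustration, a rough sketch of that value cleaning (the helper and thresholds below are an approximation of the idea, not my exact code):</p>
<pre><code># Replace category values that are very rare in train (< 0.01% of rows) or whose
# train/test frequencies differ by more than 3x with a single "other" value (-1).
import numpy as np

def clean_column(train_col, test_col, min_frac=0.0001, max_ratio=3.0):
    vc_tr = train_col.value_counts(normalize=True)
    vc_te = test_col.value_counts(normalize=True)
    rare = vc_tr[vc_tr < min_frac].index
    ratio = (vc_tr / vc_te.reindex(vc_tr.index)).fillna(np.inf)
    unstable = ratio[(ratio > max_ratio) | (ratio < 1 / max_ratio)].index
    bad = set(rare) | set(unstable)
    return (train_col.where(~train_col.isin(bad), -1),
            test_col.where(~test_col.isin(bad), -1))
</code></pre>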
<p>Next I add frequency encoded <code>Census_OSVersion</code>, <code>Census_OSBuildRevision</code>, <code>Census_InternalBatteryNumberOfCharges</code>, <code>AvSigVersion</code>. Important: I encode train and test <strong>separately</strong>. Next I add frequency encoded <code>CountryIdentifier</code>, <code>Census_InternalBatteryNumberOfCharges</code> and encode <strong>together</strong>. This brings score to around 0.696. Time FE works because there is an induced frequency correlation leak resulting from Microsoft up-sampling. Country FE works because there is a true correlation between Country and frequency. Not sure why Battery FE works, but validation says so.</p>
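<p>For illustration, a small sketch of separate versus joint frequency encoding (the helper name is just for illustration):</p>
<pre><code># Frequency encoding: "separately" builds one map per split, "together" builds
# a single map on train+test combined.
import pandas as pd

def freq_encode(train, test, col, together=False):
    if together:
        vc = pd.concat([train[col], test[col]]).value_counts(normalize=True)
        return train[col].map(vc), test[col].map(vc)
    return (train[col].map(train[col].value_counts(normalize=True)),
            test[col].map(test[col].value_counts(normalize=True)))

train['AvSigVersion_FE'], test['AvSigVersion_FE'] = freq_encode(train, test, 'AvSigVersion')
</code></pre>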
<p>Next I convert the following variables to dtype category: <code>CountryIdentifier</code>, <code>SkuEdition</code>, <code>Firewall</code>, <code>Census_ProcessorCoreCount</code>, <code>Census_OSUILocaleIdentifier</code>, <code>Census_FlightRing</code>. This prioritizes them. LB now 0.697. </p>
<p>Next come the big features. I engineer <code>Lag1</code> equal to the difference between <code>AvSigVersion Date</code> and <code>Census_OSVersion Date</code>. And <code>Lag2</code> equal to the difference between <code>AvSigVersion Date</code> and July 26, 2018, the beginning of the training data sampling window (and replace negatives with zero). I encode test's <code>Lag2</code> as <code>AvSigVersion Date</code> minus September 27, 2018. This brings LB to 0.701 I think. Both of these variables indicate whether a user updates their computer or not. (Contrary to AvSigVersion time stamps, I believe there is no training data before September 27. All earlier virus AvSigVersion dates are un-updated computers. Thus Lag2 finds them. These have lower detection rates because either the user doesn't use the computer often or has other, better virus software.)</p>
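<p>For illustration, a small sketch of these two features (it assumes the version columns have already been mapped to dates, e.g. into placeholder columns <code>AvSigVersion_date</code> and <code>Census_OSVersion_date</code>):</p>
<pre><code># Lag1: days between the virus-signature date and the OS-version date.
# Lag2: days since the start of each split's sampling window (clipped at 0).
import pandas as pd

train['Lag1'] = (train['AvSigVersion_date'] - train['Census_OSVersion_date']).dt.days
train['Lag2'] = (train['AvSigVersion_date'] - pd.Timestamp('2018-07-26')).dt.days.clip(lower=0)

test['Lag1'] = (test['AvSigVersion_date'] - test['Census_OSVersion_date']).dt.days
test['Lag2'] = (test['AvSigVersion_date'] - pd.Timestamp('2018-09-27')).dt.days.clip(lower=0)
</code></pre>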
<p>Finally, I engineer the difference <code>Census_SystemVolumeTotalCapacity</code> minus <code>Census_PrimaryDiskTotalCapacity</code>. And the ratio. And I frequency encode <code>Lag1</code> <strong>separately</strong>. Responsible hard drive use indicates responsible computer use. </p>
<p>In total, I have 82 - 18 + 6 + 2 + 3 = 75 features. All of these choices increase my <a href="https://www.kaggle.com/cdeotte/time-split-validation-malware-0-68">Time Split Validation</a> and yield 0.718 AUC. And my LGBM scores 0.702 LB public. Look at my LGBM's beautiful detection rate over time. This is original output, no updating! From the <code>Lag</code> variables, the model learned the shape itself.</p>
<p><img src="http://playagricola.com/Kaggle/lgbm31419.png" alt="image"></p>
<h2>Embedding Neural Network</h2>
<p>I'm very proud of my embedding network and am thankful to many Kagglers who helped me. (NN kernel posted <a href="https://www.kaggle.com/cdeotte/embeddings-network-malware-0-697-0-773">here</a>.)</p>
<p>First I remove 17 variables and MachineIdentifier. Next I remove the 5 time variables <code>EngineVersion</code>, <code>AvSigVersion</code>, <code>AppVersion</code>, <code>Census_OSVersion</code>, <code>Census_OSBuildRevision</code>. I set aside the two truly numeric variables <code>Census_SystemVolumeTotalCapacity</code> and <code>Census_PrimaryDiskTotalCapacity</code>. I preprocess each of the remaining 59 category variables. Within each category, I remove all values that are less than 0.1% of total data and/or less than 1 standard deviation from probability of detection 0.5. Explained <a href="https://www.kaggle.com/cdeotte/neural-network-malware-0-67">here</a>. I label encode the remaining values and send each into an <code>Embedding</code> layer with input output ratio 1:1. (There are a total of 1500 unique values among all variables) </p>
<p>Now the creative part. There are 8 geographical variables. I output all their Embeddings into a common dense layer with 2:1 input output ratio (and no activation thus mimicking PCA). There are 18 hardware variables. I input them into their own separate dense layer 2:1. Then I do this with the 6 name/model variables. Then I do this with the 15 software/virus related variables. And finally with the leftover 12 variables. So I'm squeezing similar variables together in a 2:1 reduction thus forcing the network to represent data in a reduced latent space.</p>
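<p>For illustration, a rough Keras sketch of one such group (column list and cardinalities are placeholders, not the exact architecture):</p>
<pre><code># One variable group: each categorical column gets its own 1:1 Embedding,
# then the whole group is squeezed 2:1 with a Dense layer and no activation
# (identity activation), mimicking a linear PCA-like reduction.
from keras.layers import Input, Embedding, Flatten, Concatenate, Dense

geo_cols = {'CountryIdentifier': 160, 'CityIdentifier': 120}   # placeholder cardinalities
inputs, flats = [], []
for col, n in geo_cols.items():
    inp = Input(shape=(1,), name=col)
    emb = Embedding(input_dim=n, output_dim=n)(inp)            # 1:1 embedding
    inputs.append(inp)
    flats.append(Flatten()(emb))

geo = Concatenate()(flats)
geo = Dense(sum(geo_cols.values()) // 2)(geo)                  # 2:1 squeeze, no activation
</code></pre>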
<p><img src="http://playagricola.com/Kaggle/geographicalB31919.jpg" alt="image"></p>
<p>Next I log frequency encode all variables (including removed time variables) with cardinality over 10. There are around 30. I feed all those numeric variables into their own dense layer with 2:1 input output. I also feed the two disk capacities into that dense layer. And I create the variables <code>Lag1</code> and <code>Lag2</code> described above and send them into this layer too.</p>
<p>I take all the outputs from the 6 separate dense layers and feed into a single hidden layer of 100 units. I finish the model like this </p>
<pre><code>x = Dense(100,kernel_initializer='he_uniform')(x)
x = Dropout(0.2)(x)
x = BatchNormalization()(x)
x = Activation('elu')(x)
x = Dense(100,kernel_initializer='he_uniform')(x)
x = Dropout(0.2)(x)
x = BatchNormalization()(x)
x = Activation('elu')(x)
x = Dense(100,kernel_initializer='he_uniform')(x)
x = Dropout(0.2)(x)
x = BatchNormalization()(x)
x = Activation('elu')(x)
x = Dense(1,activation='sigmoid')(x)
</code></pre>
<p>I compile with Adam optimization and learning rate 0.01. I train this network for 10 epochs using a random 80% of the overall training data. This takes 7 hours on Kaggle's Tesla P100 GPU. To fit the training data in GPU memory, I alternate training with 4.5 million rows at 256 batch, then the other 4.5 million rows at 256 batch. I shuffle each 4.5 million before using. Look at the beautiful perfect distribution (original no updates): </p>
<p><img src="http://playagricola.com/Kaggle/nn31419.png" alt="image"> </p>
<h1>Result</h1>
<p>Everything runs in Kaggle kernels and once I add comments I will post my code so Kagglers can play with it. This model results in 0.707 LB and the illusion of a Gold Medal :-)
<img src="http://playagricola.com/Kaggle/result31419.png" alt="image"></p>
<p>Yes. I am curious about those solutions too.</p>
|
Microsoft Malware Prediction
|
#25th Solution by finally_overfitting (Say No to Gradient Boosting)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot <a href="/cpmpml">@cpmpml</a> , I never expected such a good result, its possible only because of the wonderful Kaggle community</p>
<p>From my team and from everyone participating in this competition, I would first like to thank all those whose contributions to the Discussion Forums and Kernels gave us a lot of insights:
<a href="/cpmpml">@cpmpml</a> , <a href="/titericz">@titericz</a> <a href="/ogrellier">@ogrellier</a> <a href="/tunguz">@tunguz</a> , <a href="/cdeotte">@cdeotte</a> , <a href="/abhishek">@abhishek</a> , <a href="/tilii7">@tilii7</a>, <a href="/longyin2">@longyin2</a> <a href="/nroman">@nroman</a>
<a href="/bogorodvo">@bogorodvo</a> <a href="/guoday">@guoday</a> <a href="/adityaecdrid">@adityaecdrid</a>. I am really sorry if I missed out any of you. </p>
<p>Secondly a shout out to all my awesome team-mates, without whose contribution this could not have been possible <a href="/alonbochman">@alonbochman</a> <a href="/jaiwei98">@jaiwei98</a> <a href="/datahobbit">@datahobbit</a> <a href="/paragkale">@paragkale</a> (names in just alphabetical order), all of us worked really hard for this.</p>
<p>Our top 20 public LB subs had at least a 0.64350 private LB score, which speaks quite well of our validation approach.</p>
<p><strong>Validation Approach</strong></p>
<p>Using <a href="/ogrellier">@ogrellier</a>'s idea of Adversarial Validation, I used around 15-20 features (mostly time-dependent ones) to get P(test) for each sample. Then I broke the samples into 10 bins of equal size: Bin 1: 0 < P(test) < 0.1, Bin 2: 0.1 < P(test) < 0.2, etc. Each bin was then broken into 2 bins (one with the positive class and the other with the negative class). In total there were 20 bins, and the training and validation sets were stratified using these bins (a rough sketch follows below).
One more idea to try could be to create bins of unequal size. We used 5 folds for each of our models, and my idea was that both the training and validation set of each model should contain every type of sample (as determined by P(test)). Initially, when the distribution of data in the private and public LB was not known, having all folds contain samples of each kind was the safest way to go, and it turned out to be better in the later stages too. We did not spend too much time on time-split validation as it was not promising in terms of PLB in our initial tries; we returned to it after <a href="/cdeotte">@cdeotte</a>'s awesome kernel, but abandoned it again.</p>
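<p>For illustration, a rough sketch of this stratification (here <code>p_test</code> is the adversarial-validation probability and <code>y</code> the target; names are placeholders):</p>
<pre><code># Build 10 equal-width P(test) bins, split each bin by the target into 20 strata,
# and use the strata for StratifiedKFold so every fold sees every kind of sample.
import numpy as np
from sklearn.model_selection import StratifiedKFold

bins = np.clip((p_test * 10).astype(int), 0, 9)   # bin index 0..9 from P(test)
strat = bins * 2 + y                              # 20 strata: (bin, target) pairs

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, valid_idx in skf.split(X, strat):
    pass  # train one model per fold here
</code></pre>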
<p><strong>Feature Engineering</strong></p>
<p><strong>LGBM:</strong>
Our best LGBM model scored 0.695 on the PLB using only 81 features, almost all of them original features, each encoded in some different way. For almost all of the Identifier columns I used frequency encoding (some log and some non-log), thanks to <a href="/longyin2">@longyin2</a>'s insights, and used <a href="/cpmpml">@cpmpml</a>'s great and simple code to sort the features with Version in them and split some of them up. Three features shown to be of no importance to LightGBM were dropped. There was no removal of unbalanced values in categorical features in LGB; I wonder if it would have led to higher private scores.</p>
<p><strong>Note</strong>: Don't do frequency encoding or feature interactions blindly, try to find which kind of frequency encoding and interactions correlate most with your target. Blind frequency encoding can have an adverse effect on your CVs.</p>
<p><strong>NNs:</strong>
Our best NN scored 0.691 on the PLB and 0.669 on the private LB. From the discussions, I found that most of the NNs scored high on the private LB too, which verified the general idea that NNs were overfitting less. We used entity embeddings with 3 layers, and 0.691 was achieved in the final week of the competition; it was a long struggle from the initial 0.684. Kagglers getting 0.693/0.697, please do share how you managed to get such high scores. The NNs used a total of 105 features, which were also created in the last week of the competition. I tried using them on LGBMs but, thanks to Kaggle's memory limits, I was constrained. The main advantage of NNs for me was that each fold took only 30 minutes to train, so, training in parallel, NNs were much better in terms of speed and accuracy (both considered) than any other model.</p>
<p><strong>FTRLS</strong>
FTRLs would not have been possible had it not been for <a href="/tunguz">@tunguz</a>. Thanks again Bojan for your great kernel. The adaptation of his great kernel to our validation scheme of 5 folds helped boost our public LB by 0.001. FTRLs took the same basic features as Bojan's kernel.</p>
<p>Other Models:</p>
<p>We gave a shot at using sparse matrices thanks to <a href="/bogorodvo">@bogorodvo</a>, and also FTRLs using NNs thanks to <a href="/guoday">@guoday</a>. We did not experiment very much with these, but they helped in blending/stacking.</p>
<p><strong>Other Ideas</strong></p>
<p>One more idea was regarding misclassifications/good classifications. You can choose to remove either one of them and get 2 new models. This worked quite well in giving us diversity, at least on the PLB ;)</p>
<p><strong>Final Submission</strong></p>
<p>We were quite concerned about choosing our last 2 subs and had a lengthy debate. The final idea was to choose an aggressive and a defensive one.</p>
<p><em><strong>Aggressive</strong></em>: was the best PLB of 0.704, which gave us a private LB of 0.64729.
This was a weighted blend of 2 stacks(both at PLB 0.700) and one blend(at PLB 0.702),
this would land us at 48th in the current private LB.</p>
<p><em><strong>Defensive</strong></em> was at 0.700 with a correlation of 95% with the LGBMs. This was a stack of only NNs and FTRLs, following our idea that NNs and FTRLs were overfitting less. It was a harsh decision, choosing the other sub so much lower on the PLB (a 0.004 difference), but it worked out quite well</p>
<p>I would like to specially mention my teammate <a href="/datahobbit">@datahobbit</a>, who worked very hard on stacking in spite of his office hours and gave us the NN + FTRL stack of 0.700 on the last day of the competition, which led to our rank of 25th.</p>
<p><strong>Challenges Faced</strong></p>
<ol>
<li><p>The biggest challenge for me, and for all those without any powerful machine or resources, was the memory limitation :D. Also, the time it took for each model to run was huge.</p></li>
<li><p>Our issue with stacking was that many of our models with high CVs were not performing equally well on the PLB. I admit we went a bit after the PLB after seeing sudden increases in PLB scores in the last 2 weeks. One model at 0.745 CV would score 0.691 on the LB, another 0.694, and there were many more such disparities. This would lead them to get equal weight in the stacking process, which we definitely did not want. We used thresholds and filters to try to get some subset of the data which correlated well with the public LB. This was quite risky because the number of samples was shrinking a lot and the chances of overfitting were increasing. Even this worked only to some extent. The best stacking alone could get was only 0.700. I would love to know if any of you faced similar kinds of problems and what you tried.</p></li>
</ol>
<p>PS: Forgive me for any error in my write-up and it being so long, this is my first, I hope there are many more to come.
Also <a href="/alonbochman">@alonbochman</a> <a href="/jaiwei98">@jaiwei98</a> <a href="/datahobbit">@datahobbit</a> <a href="/paragkale">@paragkale</a>, please add anything you think should have been included, but I missed. </p>
|
Microsoft Malware Prediction
|
#6 Solution (Giba view)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Some time-dependent variables I just added to my models as categorical features, not target encoded. The original versioning features (AvSig, Engine, App, etc...) I dropped completely. I just used some transformations of Engine and App that showed a small difference between the train and Private distributions. </p>
<p>sorry, not planning to share. It would need a lot of working hours to clean it to make public.</p>
<p>Thanks Alon.</p>
<p>1) To perform HPO, it's first necessary to have a reliable cross-validation split, which is not possible with this data set. Using a simple KFold CV on this dataset leads to overfitted and unreliable 0.74+ scores. Performing time-split validation, one month to train and one to validate, is also not ideal, but it's better than 5-fold here. In this scheme the validation AUC is around 0.71. Using that scheme I was able to try some HPO with more realistic results.</p>
<p>2) Regarding the validation scheme: yes. It produces a better correlation between CV and LB. Most of the time (but not always) improvements in this CV correspond to improvements on the LB.</p>
<p>3) I tested the AUC feature by feature, tried many thresholds for abs(AUC1-AUC2), and picked 0.015 as the threshold to define whether a feature is time-invariant or not. </p>
<p>4) Stage 3 uses 2x the number of rounds because it has ~2x the number of rows. The weights I got from an optimization procedure. First I trained the dataset with 5-fold CV and predicted the test set. After that I trained with the 3-stage scheme and optimized the weights to maximize the correlation between both predictions. The optimization found weights like 0.21xx, 0.16xx, 0.63xx, which I rounded to .2 .2 .6.</p>
<p>Thanks </p>
<p>Steps 2 and 3 use exactly the same hyperparameters. Only step 3 uses 2x num_rounds because its train set is about 2x bigger.</p>
<p>Geometric or arithmetic should work fine. Usually geometric performs a little better when the metric is AUC.</p>
<p>The coefficients .2 .2 .6 I got by optimizing the correlation between the test predictions of the same dataset trained with 5-fold CV and with the 3-stage approach. An advantage of this approach is that it brings some diversity at the same LB score. Blending the 5-fold and 3-stage predictions of the same dataset brings even more improvement in LB scores.</p>
<p>I picked my best submission and hard-coded zeros on all rows with dates after 2018-10-26. I submitted it and the public LB score was the same as before, which confirms my hypothesis that dates after 2018-10-26 are all in the private LB.</p>
<p>Thank you Oliver and sorry for your drop on Private.</p>
<p>Thanks Chris, your public contribution in this competition was amazing. </p>
<p>Actually I used Bayesian probability to calculate the Target Encoding of all categories in August using a KFold strategy (split by time within August as well). This gives the August AUC of all features. Then I calculated the same Bayesian probability for September but using August as the base, which gives the September AUC for all features. Comparing both AUCs I can say which features are time-variant and which aren't. Of course it's not perfect since we have only 2 months available in train. To mimic Train/Private it would be necessary to have at least 3 months of data in train: 1 month to train, 1 month gap and 1 to validate. </p>
<p>To calculate Bayesian prob I use frequency and a smoothing factor, so I can use original features to do it.</p>
<p>Regarding my 3-stage training, it's similar to training 5-fold CV on this data set, but a lot faster. First I train only on 56% of train and optimize my hyper-parameters and features according to AUC. In stages 2 and 3 I can't use AUC for early stopping, so I disable metrics during training and it fits faster. </p>
<p>thanks</p>
<p>In real life the test set usually follows the same distribution as train, or doesn't change much over time, so you can expect the same behavior as in train. This is not the case for the Private test here.</p>
<p>For 1) I tried truncation but everything failed on the time-validation split.</p>
<p>For 2) the way you imagined is valid, but take into account that some combinations, even rank-normalized, are not present in both train and test.</p>
<p>To validate features, a good strategy is to try adding them one by one to the model, comparing performances and using a good validation split (which was impossible in this competition due to the lack of a precise time variable). Also take into account that adding a new feature can change the hyper-parameters a little.</p>
<p>Thanks!</p>
<p>I used a time-validation split to check time consistency. First split train into 2 folds based on time. Then calculate the Target Encoding internally in fold 1, and using fold 1 statistics apply the TE to fold 2. Compare the fold 1 AUC with the fold 2 AUC: if both AUCs are close, then the feature doesn't change much over time.</p>
<p>For some features the AUC in fold 1 is around 0.54 and in fold 2 it is 0.51. There are other cases where the AUC is 0.50x for fold 1 and 0.49x for fold 2. This is the idea. Of course the math is not perfect due to the imperfect time splits for folds 1 & 2, but the information is useful when selecting features for TE.</p>
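<p>For illustration, a rough sketch of this check (<code>fold1</code>/<code>fold2</code> are the two time-based halves, and the smoothed encoder is a placeholder, not my exact code):</p>
<pre><code># Fit a smoothed target encoding on fold1, apply it to fold2, and compare the AUCs:
# a large gap means the feature's relationship with the target drifts over time.
from sklearn.metrics import roc_auc_score

def smoothed_te(df, col, target='HasDetections', k=20):
    global_mean = df[target].mean()
    agg = df.groupby(col)[target].agg(['mean', 'count'])
    return (agg['mean'] * agg['count'] + global_mean * k) / (agg['count'] + k)

for col in candidate_cols:                       # candidate_cols assumed to exist
    te = smoothed_te(fold1, col)
    auc1 = roc_auc_score(fold1['HasDetections'], fold1[col].map(te))
    auc2 = roc_auc_score(fold2['HasDetections'],
                         fold2[col].map(te).fillna(fold1['HasDetections'].mean()))
    time_stable = abs(auc1 - auc2) < 0.015       # threshold mentioned above
</code></pre>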
<p>First of all congratulations to the prize winners, gold, silver and bronze medalists. And a special congrats to all who kept the top ranking on both Public and Private sets: Jamon Serrano power, ItaiPink, mlandry, Dmitry Larko, Robin Smits, emorsstuu, ozoneiscool, T.U.V, Kevin Durant is better than St, finally_overfitting, [ods.ai] SMPP, just to name a few.
Also I would like to thanks my team mate <a href="/cpmpml">@cpmpml</a> , I'm sure without your ideas and perseverance we wouldn't be here. By the way, <a href="/cpmpml">@cpmpml</a> solution is <a href="https://www.kaggle.com/c/microsoft-malware-prediction/discussion/84069">HERE</a>.</p>
<p>This competition was really tough. The data is really cool, thanks to Microsoft and Kaggle. CPMP and I tried experiments with the most diverse approaches. The first thing we found is that cross-validation scores are not reliable, so we looked for better ways to validate models. Before it was shared in the forums I was already using AvSigVersion release dates to build a time variable to help validate models better. I picked the latest date among AvSigVersion, OsVersion and BuildLab as a good estimate of the sampling date for each MachineIdentifier. Probing the LB, we were able to detect the switching date from the Public to the Private test set around 2018-10-26, so we know approximately the sizes of Public (5438982) and Private (2414271). We compared the distribution of features in Public/Train and Private/Train and found many features with completely different distributions between Public and Private compared with train. This made us think we must build models that generalize well for the Private set, not for the Public test set. Taking another look at the dates, it looks like train is sampled from 2 consecutive months, from 2018-07-25 to 2018-09-25, Public from 2018-09-26 to 2018-10-25, and Private from 2018-10-26 to 2018-11-25. From a time-series point of view there is no way to mimic the Private set's time behaviour using only that 2-month train set. To model it correctly we would need a 1-month gap between the train and validation folds to mimic the Private set time split. Even though a TS solution would be much better, I believe Microsoft was not looking for TS approaches, as stated in the problem description.</p>
<p>Taking dates into account, I started to build features and models that don't change much over time. To do it I split the train set into 2 parts: [2018-07-25 to 2018-08-25] and [2018-08-26 to 2018-09-25]. Even split in 2 parts it is not perfect and some time leakage is present in both folds, but even with this small leakage I found it better for testing feature engineering than the regular 5-fold technique. So in train we use the first month to fit models and the second month to validate. Using these splits and Target Encoding, I found many features whose behaviour doesn't change much from one month to the next. The most time-dependent features are: "AvSigVersion","EngineVersion", "AppVersion", "Census_OSBuildRevision", "Census_OSVersion", "OsVer", "OsBuild", "OsBuildLab", "OsPlatformSubRelease", "ProductName", "IsBeta", "DefaultBrowsersIdentifier", "IeVerIdentifier", "UacLuaenable", "Census_OSBranch", "Census_OSInstallTypeName", "HasTpm", "Platform", "Census_OSWUAutoUpdateOptionsName", "Census_FlightRing", "IsProtected", "Census_DeviceFamily", "Census_ProcessorClass", "Census_PrimaryDiskTypeName", "Census_IsPortableOperatingSystem", "Census_IsSecureBootEnabled", "Census_FlightRing", "Census_IsFlightsDisabled".</p>
<p>On all my models I dropped the original versioning features ("AvSigVersion", "EngineVersion", "AppVersion", "Census_OSBuildRevision", "Census_OSVersion") because most of the versions in test are not present in train, and I only used some minor feature engineering based on those features. Feature engineering based on them is normalized by the sizes of each fold:
a)- Train: 4885834
b)- Valid: 4035649
c)- Public: 5438982
d)- Private: 2414271
This makes count and rank based features better distributed across the train and test splits.</p>
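<p>A rough sketch of a size-normalized count feature (my reading of the idea, with an illustrative column, not the exact code):</p>

```
def normalized_count(df, col):
    # Category frequency divided by the number of rows in the split, so the
    # feature lives on a comparable scale in train, public and private.
    counts = df[col].value_counts()
    return df[col].map(counts) / len(df)

train['OsBuildLab_freq'] = normalized_count(train, 'OsBuildLab')
test['OsBuildLab_freq'] = normalized_count(test, 'OsBuildLab')
```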
<p>My training pipeline was based on time splits, so I had 3 stages:
1) Train on a) Train and validate on b) Valid
2) Train on b) Valid with no validation, using the same number of rounds found by early stopping in 1)
3) Train on the full train set (a)+(b) using 2x the number of rounds from 1)
Result: blend using geometric average (1)^0.2 * (2)^0.2 * (3)^0.6</p>
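<p>In code, the blend is just a weighted geometric mean of the three stage predictions (here p1, p2, p3 stand for the prediction arrays of stages 1-3):</p>

```
import numpy as np

# p1, p2, p3: predicted probabilities from stages 1-3 (assumed numpy arrays)
blend = (p1 ** 0.2) * (p2 ** 0.2) * (p3 ** 0.6)
# equivalent in log space, numerically safer for very small probabilities
blend = np.exp(0.2 * np.log(p1) + 0.2 * np.log(p2) + 0.6 * np.log(p3))
```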
<p>This pipeline achieves the same results in terms of Public LB as training with a regular 5-fold CV, but gives predictions with low correlation on the test set compared with my teammate's models.</p>
<p>I built several models, including some libFM, FFM and SVCs at the beginning, plus Keras DL and LightGBM. We found that only Keras DL and LightGBM added value when blended, so our final solution used around 10 LGBs and 4 Keras DL models in a ranked geometric weighted average.</p>
<p>What didn't work for me:
- Denoising autoencoder,
- Training a model on the Private set with AvSigVersion, EngineVersion and AppVersion as a multiclass target and then predicting on the Public and train sets as new features. It just added noise to train.
- Target Encoding on Versioning features and time dependent features.
- N-Way target Encoding with N>2</p>
<p>Obs.: the trick of our solution was simply realizing that the Private test set is completely different from the Public test set and preparing the models with that in mind. It would be great if Kaggle in the future explained the methodology used to split and sample Public and Private. It could have avoided the huge shakeup that happened here.</p>
<p>Giba 👍 </p>
|
Microsoft Malware Prediction
|
#7th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks so much <a href="/cdeotte">@cdeotte</a>. We should have formed a team ;-)</p>
<p>Thanks <a href="/chairborneranger">@chairborneranger</a> </p>
<p>My 7th place submission was my first Kaggle competition. I really learned a lot.</p>
<p>I used LGBM with 40 features and used count encoding for most of the features.</p>
<p>Public score: 0.68563
Private score: 0.66370</p>
<p>I thought it would be nice to make a kernel of it. I only had to change the number of rows used (4 mln instead of 8 mln), because of a kernel memory issue.</p>
<p>You can find the Kernel here:
<a href="https://www.kaggle.com/sanderf/7th-place-solution-microsoft-malware-prediction">7th place solution Kernel</a></p>
|
Microsoft Malware Prediction
|
#9 Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello Chris,</p>
<p>Effectively, when I speak of "groupbys" I mean encoding by frequency, but over groupings of columns, for example the version columns, or the columns that are binary, and some more that right now I don't even remember.</p>
<p>Regarding the kmeans, we ran kmeans with different numbers of clusters (2, 4, 8 ... up to 64) on the numerical columns, but these did not have much predictive value.</p>
<p>On the dates we added the STD across the previous date difference, the current date difference, and the following date difference, ordered by AvSigVersion, of course. The date-based columns we used are: differences between dates, the mentioned STD, and the maximum of the differences and STD grouped by AvSigVersion and OSBuildLab.</p>
<p>Finally, what contributed most to our model, according to the basic LightGBM feature importance, was the "groupbys"; the rest did not contribute much.</p>
<p>Thanks for all your contributions, you have helped us a lot.</p>
<p>MapsaTeam solutions, made with <a href="/carlosevi94">@carlosevi94</a>:</p>
<p><strong>DATA</strong></p>
<p><em>PREPROCESSING:</em></p>
<p>Firstly, we transformed some categorical columns where the same value is written in different ways. E.g.: the feature SmartScreen contains "of", "Off", "off" and "OFF". We transformed these to the same value: "OFF".</p>
<p>For NaN values, we imputed with several criteria: the median, the value -1, and more.</p>
<p><em>PROCESSING:</em></p>
<p>We transformed the categorical data with Label Encoding, except for the version columns.
For the version columns we used two substrings of the values: the first contains the first and second numbers, and the second contains the first, second and third numbers. E.g.: the version "1.2.3.4" is transformed into "1.2" and "1.2.3". If the first number is always the same across the whole dataset, we take the following numbers instead, "2.3" and "2.3.4", and transform these values with Label Encoding.</p>
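<p>A small sketch of that version truncation (illustrative columns, not our exact code):</p>

```
from sklearn.preprocessing import LabelEncoder

def truncate_version(s, n_parts):
    # '1.2.3.4' -> '1.2' for n_parts=2, '1.2.3' for n_parts=3
    return s.astype(str).str.split('.').str[:n_parts].str.join('.')

for col in ['EngineVersion', 'AppVersion']:      # illustrative version columns
    for n in (2, 3):
        train[f'{col}_first{n}'] = LabelEncoder().fit_transform(truncate_version(train[col], n))
```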
<p><em>KMEANS:</em></p>
<p>We added some KMeans cluster features using only the truly numerical columns (on the union of train and test).</p>
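<p>For example, a sketch with illustrative columns and cluster count (not our exact code):</p>

```
import pandas as pd
from sklearn.cluster import KMeans

num_cols = ['Census_TotalPhysicalRAM', 'Census_ProcessorCoreCount',
            'Census_PrimaryDiskTotalCapacity']            # illustrative numeric columns
both = pd.concat([train[num_cols], test[num_cols]]).fillna(-1)
labels = KMeans(n_clusters=8, random_state=0).fit_predict(both)
train['kmeans_8'] = labels[:len(train)]
test['kmeans_8'] = labels[len(train):]
```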
<p><em>DATES:</em></p>
<p>We used the dates shared in Discussions by Chris Deotte and added to the dataset some values like differences between dates, STD, etc. (using the union of train and test).</p>
<p><em>GROUPBYS:</em></p>
<p>We added eight groupings of columns. For example, we grouped the version columns: if there are 14 machines with the same versions, we add the number 14 to those records. Our intention with this strategy is to remove time dependence from the dataset. (Computed on all data, train and test.)</p>
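<p>A sketch of one such grouping count computed over train and test together (the grouping columns are illustrative):</p>

```
import pandas as pd

group_cols = ['EngineVersion', 'AppVersion', 'AvSigVersion']   # one illustrative grouping
both = pd.concat([train[group_cols], test[group_cols]])
combo_count = both.groupby(group_cols).size().rename('versions_count').reset_index()
train = train.merge(combo_count, how='left', on=group_cols)
test = test.merge(combo_count, how='left', on=group_cols)
```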
<p><strong>MODEL</strong></p>
<p><em>LIGHTGBM:</em></p>
<p>We mainly used LightGBM for prediction. We set the hyperparameters that could overfit the model to low values, e.g. low MaxDepth, few EarlyStopping rounds, etc.</p>
<p>We used random CV with different numbers of splits (3, 5, 7…) and then ensembled all our results.</p>
<p>Our gold medal AUC results: CV: ~0.73 | Public LB: ~0.697 | Private LB: ~0.662.</p>
<p>Our best non-selected AUC results: CV: ~0.72 | Public LB: ~0.697 | Private LB: ~0.664.</p>
<p><img src="https://i.imgur.com/lgilU5x.png" alt=""></p>
<p><strong>-- EDIT --</strong>
We forgot to mention that in the end we removed the version columns from our final model.</p>
|
Microsoft Malware Prediction
|
12th - whaouh ! - my first real kaggle game
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I only attended 2 of her talks in Paris and read her bio; she was the patron of the first Data Science for Actuaries class, which I attended 3 years ago. But her advice (beware of overfitting, among other things) was very relevant in this competition.
Thank you, and congratulations to you as well! Only 3 teams are in the top 20 on both leaderboards; what mastery, respect!</p>
<p>Hi,</p>
<p>Congratulations to the winners ! </p>
<p>I have not participated in the forum earlier because I thought I didn't have anything to bring.
What a surprise this morning! I would have been proud of myself with a bronze medal, but I've got a gold one!
Thank you to Kaggle, and to Microsoft for LGBM and for this game.</p>
<p>Special thanks to :
Andrew Lukyanenko
<a href="https://www.kaggle.com/artgor/is-this-malware-eda-fe-and-lgb-updated">https://www.kaggle.com/artgor/is-this-malware-eda-fe-and-lgb-updated</a>
I began with your kernel, thank you.</p>
<p>Long Yin: for your explanations. I tried them, and if I had kept the model inspired by your recommendation, I would be at 0.687 on the private LB!!
In that solution, I fit my model on the train dataset, with frequency encoding obtained on the test data, for AppVersion and AvSigVersion.
I chose those two features (and not CensusVersion etc.) by CV.</p>
<p>Bojun Tunguz
<a href="https://www.kaggle.com/tunguz/ms-malware-adversarial-validation">https://www.kaggle.com/tunguz/ms-malware-adversarial-validation</a>
Of course, it has helped me to implement Yong Yin's explanations.</p>
<p>Vladislav Bogorod
<a href="https://www.kaggle.com/bogorodvo/lightgbm-baseline-model-using-sparse-matrix">https://www.kaggle.com/bogorodvo/lightgbm-baseline-model-using-sparse-matrix</a>
Whaouh, what a kernel !
Your parameters were completely different from mine. Yours: deep trees with small leaves; mine: small trees (30) with big leaves (2000).
Your parameters disturbed me, because I always have in mind the following sentence: "Experience so far indicates that 4 ≤ J ≤ 8 works well in the
context of boosting" (T. Hastie, R. Tibshirani, J. Friedman - Elements of Statistical Learning, p. 363). And because I wanted to prevent over-fitting.
But your kernel was very good on the public LB.</p>
<p>Olivier and CPMP, whom I followed a lot.</p>
<p>My solution to finish 12th (or 10th, incredible!):
First, transform the dataset with a lot of frequency encoding (sometimes "rank encoding", like Yan I believe), and splits of the "Versions" features into version, subversion, and booleans "is it a major subversion?". Big memory problems...
Second, fit a LGBM model with big leaves and small depth. I spent a lot of time on this. (0.691 on the public LB)
Third, as I fell in the ranking, I read Vladislav's kernel and tried it with my dataset (0.693 on the public LB).
And last, a blend of these two models.</p>
<p>My configuration: at home after my job with the children in bed, a PC with 8 cores and 16 GB RAM, and some EC2 machines with 16 cores.
Python 3 - Pandas, scikit-learn & LGBM.
3-fold or 5-fold CV.
I chose this competition because of the categorical features, which I encounter often in my job.</p>
<p>At the end, I did some backward selection on my first model; some features were really bad: </p>
<p>My notebooks (even the one which give a 0.687 private LB) are available on gitlab : <a href="https://gitlab.com/adaubas/microsoft-malware-prediction">https://gitlab.com/adaubas/microsoft-malware-prediction</a></p>
<p>I'm lucky. It's just a game. We all worked so hard.</p>
<p>Thank you !</p>
<p>A nod to my classmates from the 2015/2016 Data Science for Actuaries class
and to the teachers of that time: Arthur Charpentier, Jeremy Jakubowicz, Romuald Ellie and of course Xavier & Sébastien Conort, who gave me the desire to do this.
And to the class patron (god mother) Françoise Soulié Fogelmane: "feature engineering, attention to over-fitting, and feature engineering and attention to over-fitting, and ..."</p>
<p>Thank you !</p>
<p>I spent approximately $120 on EC2, my initial budget. I know less expensive services exist, but I know that one, so I chose EC2.
How much faster: I don't really know. In fact, I often had memory problems with my 16 GB laptop (I got some memory problems on EC2 too, because there is no swap partition, before I found the right machine for me: $0.8 per hour). And at the beginning, I left my personal laptop running at home while I went to my job, and I didn't like that: EC2 is a solution.</p>
|
Microsoft Malware Prediction
|
18th place solution overview
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="/cdeotte">@cdeotte</a>. You're welcome. I did some test runs leaving out a couple of features but didn't notice any significant change in local CV AUC, so I made the decision to just use all features... also in the hope that this type of network would be able to use any possible pattern between features. I also tried increasing the number of folds used for the CV; that worked a little on the public board but, as it turns out, did very badly on the private board.</p>
<p>When I have time I will give it a try with leaving features out...curious to see how that will work out.</p>
<p>Good luck with your further competitions! You did deliver some of the better kernels in this competition :-)</p>
<p>First of all congratulations to all gold winners. This was again a truly amazing and really enjoyable competition. I would like to thank Kaggle and Microsoft for hosting maybe the most interesting competition of this year. For me it was almost shocking to see the shake up... if that is what you can call it.</p>
<p>It is interesting and at the same time frustrating to see that probably most Kagglers in this competition had 1 or more submissions that could have given them a Gold or almost-Gold medal.
But there was just no way of knowing that.</p>
<p>For example .. my best scoring private model did 0.687 and had a public score of 0.688. It was a simple 80/20 train test split in which I changed some features and hyperparameters.
I ignored it further after submission since the score dropped. How should I possibly have chosen a model like that?</p>
<p>In the first few weeks of the competition I basically tried out various things like select randomly half of the features, all features or may'be 10 or 20 features, different seeds etc. Sometimes some changes to the hyperparameters for LightGBM but most of the parameters were from the script
that I also used for the Home Credit Default Risk competition from last year. They worked then...they also did very nicely now. Surprisingly a lot of those models did a very good job on the private board. Best guess...they were simple and generic solutions.</p>
<p>During February I increasingly focused on filtering data, trying out some feature engineering, and also trying to see if I could use the by then (sort of..) known time split as was published in multiple excellent kernels. In an effort to keep up with the ever increasing public leaderboard scores it was sometimes almost frustrating to see that, at least for me, those 'advanced' steps did not seem to work. Looking at the private board now there is indeed a large fluctuation between rather good and also very bad models based on all those additional steps.</p>
<p>Around 2 weeks ago I basically decided not to invest much more time in things like filtering data and feature engineering. I focused on creating multiple very different LightGBM models with some variety in feature selection and hyperparameter tuning. From most models I removed the feature 'AvSigVersion' as I really doubted whether I could use it properly. By just removing it I would skip the largest part of the time split distribution in the test set. Most LightGBM models used somewhere between 60 and 70 features. No feature engineering.. only some cleanup of the values for 'SmartScreen'. For encoding I used either factorize or count encoding, but most features kept the numerical values they already had. Cross validation was mostly based on 5 or 6 folds, but I also tried out 7, 8 and 10 folds. I also used some heavily modified XdeepFM models which turned out to be very important for the final diversity.</p>
<p>My final 2 selections were both a blend of 6 very different submissions: 4 LightGBM and 2 XdeepFM. They only differed in the weight factors, which were chosen basically on 'feeling'.</p>
<p>Overall I guess my choice of mostly not using the time split as an advantage and making generic solutions, combined with a necessary dose of luck, was useful for a very nice and exciting 20th place.</p>
<p><a href="/sheriytm">@sheriytm</a> yes, I noticed that thread already and made some comments :-)
It seems that using the time factor or not had a big influence... also, more generic models seem to be doing a good job. Personally I don't have better explanations.</p>
<p>It seems that what Microsoft stated earlier, that they would like to be able to use the different submissions for education purposes and analysis, succeeded quite well... all they had to do was create a very twisted test set.</p>
<p>Thanks, you're welcome. I looked at your CV scores... and yes, local CV and private/public LB scores just don't make any sense. I basically didn't trust my local CV and only used the public LB up to some degree. I trusted more in the simplicity of my models and that it would be a rather generic blend.</p>
<p><a href="/sheriytm">@sheriytm</a> you're welcome.. again ;-) I've been able to read most of the discussions so far and it seems that Microsoft got exactly what they wanted :-)</p>
|
Microsoft Malware Prediction
|
200 iq competition! 4th Public Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Alon,
<strong>1º</strong> When I thought about using those rates, I had already succeeded in using SkuEdition with dates (points 2 and 3), so I thought they were very informative variables. At the same time I thought there was value in the localization and organization variables that I could squeeze out, because there are similarities between countries and organizations that the algorithms could not capture with the categorical data.
So I thought, let's try to compare countries and organizations using SkuEdition, because richer countries will possibly have higher "Pro/Home" ratios. After those ratios succeeded I tried to do the same with other ratios but they didn't work.</p>
<p><strong>2º</strong> I used the weighting 0.7*model-trained-on-early-data + 0.3*model-trained-on-late-data (the weights weren't calculated with any special technique xD, I thought validating on September was more interesting).</p>
<p><strong>3º</strong> I one-hot encode the input.</p>
<p>Greetings.</p>
<p>Thank you Chris!
I calculate #2 and #3 for the following variables: AppVersion, OsBuild, AvSigVersion, IeVerIdentifier, Census_OSVersion, Census_OSBuildNumber, Census_OSBuildRevision, OsBuildLab and EngineVersion (the time dependent variables), and I compute the average only by SkuEdition. I share the messy code XD (I use quantile cuts and later one-hot encode):</p>
```
import numpy as np
import pandas as pd
from datetime import datetime

# AvSigVersion -> release timestamp map shared in the forum
datedict = np.load('AvSigVersionTimestamps.npy', allow_pickle=True)
datedict = datedict[()]
train['Date'] = train['AvSigVersion'].map(datedict)
test['Date'] = test['AvSigVersion'].map(datedict)

first = datetime(2018, 1, 1)
datedict2 = {}
for x in datedict:
    datedict2[x] = (datedict[x] - first).days
train['days'] = train['AvSigVersion'].map(datedict2)
test['days'] = test['AvSigVersion'].map(datedict2)

def diffmeantime_sku(df, Var_name):
    # mean release day of each (version value, SkuEdition) pair
    table = df[['days', Var_name, 'SkuEdition']]
    table = table.groupby([Var_name, 'SkuEdition']).mean()
    table = table.unstack().transpose()
    table = table[df[Var_name]].transpose()
    table = table['days']
    table = table.rename(columns=str).reset_index()
    # keep only the column that matches each machine's own SkuEdition
    table['Education'][df['SkuEdition'] != 'Education'] = 0
    table['Home'][df['SkuEdition'] != 'Home'] = 0
    table['Pro'][df['SkuEdition'] != 'Pro'] = 0
    table['Invalid'][df['SkuEdition'] != 'Invalid'] = 0
    table['Enterprise'][df['SkuEdition'] != 'Enterprise'] = 0
    table['Enterprise LTSB'][df['SkuEdition'] != 'Enterprise LTSB'] = 0
    table['Cloud'][df['SkuEdition'] != 'Cloud'] = 0
    table['Server'][df['SkuEdition'] != 'Server'] = 0
    table[Var_name] = None
    # difference between the machine's own release day and its SkuEdition mean
    difdays = df['days'] - table.sum(axis=1)
    return pd.qcut(difdays.rank(method='first'),
                   q=[0, .05, .1, .3, .5, .7, .9, .95, 1.], labels=False)
```
<p>Then you simply add a variable for each of the variables I wrote before. Example:</p>
<p><code>train['diffdaysCensusOS']=diffmeantime_sku(train,'Census_OSVersion')</code></p>
<p>The point #5 wasn't clear xD. I thought that there could be different classes of hardware setups (for example, a gamer's setup is not the same as a graphic designer's or a financial company's)... following this I thought clustering could add value by capturing those classes. I did K-means and DIANA clusters.</p>
<p>Yes point #4 worked like that, I thought that as categories of many variables were appearing and disappearing in different months it would be interesting to have models trained in different parts of the year. I tried and it worked well xD.</p>
<p>hahahahahha thank you!</p>
<p>Thank u! Congrats too, u did great job building robust models</p>
<p>Muchas gracias Santiago!</p>
<p>Wow! This seems like a nightmare... I was dreaming of renewing my 2008 computer and I woke up and saw this!!!
Congratulations to the winners and to those wise people who did not trust their CV and LB (200 IQ).
Sorry for the top teams ;( they worked very hard and do not deserve this final result.
I want to share the features and strategies that took me to 4th public place. I only tuned and used the sparse LightGBM and NFFM baseline kernel models. Thanks to their great developers.</p>
<ol>
<li><p><strong>The importance of SKU_Edition</strong>: I found that some Census OS versions and AVs were much more popular in "Home", others in "Enterprise"... for me this was the most relevant feature. I encoded organizations, cities, countries... using the ratios "Pro/Home" and ("Enterprise"+"Server")/"Home" as an approximation of wealth or technical development.</p></li>
<li><p><strong>Use of dates</strong> (I always suspected this could cause overfitting at the end): I used the timestamps (OS and AV dates) but in a very general way. First I calculated which features were most susceptible to time (AVSig, Census_OSVers, EngineVersion, ...). For each of them I calculated the time difference between it and the mean of the dates of people with the same SKU_Edition.
For example: if a person who has Windows Home has a version released in Aug 2018 and the average of versions of Windows Home in the given data set is Sep 2018, I would have written -30 days.</p></li>
<li><p><strong>Difference between dates</strong>. It seemed clear that people with a very modern AV and an old Census_OSVers were significantly different from people with an old AV and a modern Census_OSVers. Again I used the SKU_Edition variable to quantify how big the difference was depending on the edition (servers normally use very stable Census_OSVers and don't change easily).</p></li>
</ol>
<p>4. <strong>Mixing models trained on early and late data</strong>. A very easy way to climb positions on the LB was mixing models trained with early data and validated with late data, and vice versa.</p>
<ol>
<li><strong>Clusters in Hardware</strong>. I built clusters with the hardware data; the clusters were very different from each other and had high predictive power.</li>
</ol>
<p>I also included some other less relevant variables like if it is a multinational organization, if the person who installs the software speaks the native language of the country...</p>
<p>This is the first Kaggle competition I have worked hard on and the experience was really satisfying.
Next time I will be prepared, technically and psychologically, for the private LB.</p>
<p>Greetings! any question is welcome =)</p>
<p>Gracias Carlos y enhorabuena!</p>
|
Microsoft Malware Prediction
|
2th place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks
And thank you for sharing so much knowledge </p>
<p>Didn't do much there. Just trained our models with 80% of the data, validated on the other 20% and compared.
We did learn a lot about how we could improve that for future projects, but it might not have been helpful in this competition anyway.</p>
<p>You are right. But as I wrote:
```
(I didn't have time to find a way to do it for Train and then the same way for Test afterwards)

Any hints for doing this separately first for Train, and later for any other data sets, are very welcome.
```</p>
<p>Hey, I did not change them in any way.
But for these I did frequency encoding, and I did it over the Train and Test data together. (I didn't have time to find a way to do it for Train and then the same way for Test afterwards)</p>
<p>Any hints for doing this separately first for Train, and later for any other data sets are very welcome.</p>
```
import numpy as np
import pandas as pd
from tqdm import tqdm

frames = [train, test]
df = pd.concat(frames)

list_frequency_encoding = ['AppVersion',
                           'AvSigVersion',
                           'Census_OSVersion',
                           'EngineVersion',
                           'OsBuildLab']

def frequency_encoding(feature):
    # rank categories by frequency; categories appearing only once share one label
    t = df[feature].value_counts().reset_index()
    t = t.reset_index()
    t.loc[t[feature] == 1, 'level_0'] = np.nan
    t.set_index('index', inplace=True)
    max_label = t['level_0'].max() + 1
    t.fillna(max_label, inplace=True)
    return t.to_dict()['level_0']

for feature in tqdm(list_frequency_encoding):
    freq_enc_dict = frequency_encoding(feature)
    df[feature] = df[feature].map(lambda x: freq_enc_dict.get(x, np.nan))
    df[feature] = df[feature].astype('int64')
```
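<p>As a sketch (not part of what we actually did), fitting the mapping on Train only and then reusing it on any later data set could look like this:</p>

```
def fit_frequency_encoding(train_df, feature):
    # Rank categories by frequency on Train only; categories seen once and
    # anything unseen later share a single "rare" label.
    counts = train_df[feature].value_counts()
    mapping = {cat: rank for rank, (cat, cnt) in enumerate(counts.items()) if cnt > 1}
    rare_label = len(mapping)
    return mapping, rare_label

for feature in list_frequency_encoding:
    mapping, rare = fit_frequency_encoding(train, feature)
    for part in (train, test):
        part[feature] = part[feature].map(lambda x: mapping.get(x, rare)).astype('int64')
```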
<p>Thx,
Will try to build it in the next days</p>
<p>Hello everybody,</p>
<p>We were very surprised when we saw the private leaderboard. We reached #4, wow!!! (public rank: 1084).</p>
<p>We registered with Kaggle to gain practical experience in ML and then decided to enter this competition because it was just starting.</p>
<p>We've learned a lot from other kernels, but our goal was not to get the best possible competition score just to be high up in the table.
We didn't take the test data into consideration for building models like many others did. We wanted to build a solution like we would in a real production environment.</p>
<p><strong>Data Cleaning:</strong>
First we dropped features with too many NaN values.
(AutoSampleOptIn, Census_InternalBatteryNumberOfCharges, Census_InternalBatteryType, Census_IsFlightingInternal, Census_IsFlightsDisabled, Census_IsWIMBootEnabled, Census_ProcessorClass, Census_ThresholdOptIn, DefaultBrowsersIdentifier, IsBeta, ProductName, PuaMode, UacLuaenable)</p>
<p>We have divided the features into 3 categories.</p>
<p>Numerical Features = Census_InternalPrimaryDiagonalDisplaySizeInInches, Census_PrimaryDiskTotalCapacity, Census_ProcessorCoreCount, Census_SystemVolumeTotalCapacity, Census_TotalPhysicalRAM
With these features we set the NaN values to "-1".</p>
<p>Binary Feature: Census_HasOpticalDiskDrive, Census_IsAlwaysOnAlwaysConnectedCapable, Census_IsPenCapable, Census_IsPortableOperatingSystem, Census_IsSecureBootEnabled, Census_IsTouchEnabled, Census_IsVirtualDevice, Firewall, HasTpm, IsProtected, IsSxsPassiveMode, SMode, Wdft_IsGamer
With these features we replaced the NaN values with the most frequent one, and added it to a list to do the same later for Test Data.</p>
<p><code>
for feature in binary_features:
    df[feature] = df[feature].fillna(df[feature].mode()[0])
    if df[feature].mode()[0] == 0:
        list_nan_0.append(feature)
    else:
        list_nan_1.append(feature)
</code></p>
<p>Categorical Features: the rest.
First we lowercased all values, which helped against different spellings.</p>
<p><code>
for feature in categorical_features:
    df[feature] = df[feature].str.lower()
</code></p>
<p>Then we manually corrected some values, and changed some that rarely occur to "unknown".
After that we replaced the NaN value with "unknown".</p>
<p>We did some basic label encoding, and then tried different algorithms.
LightGBM gave the best score, so we did parameter tuning with 10% of the training data.
We set the categorical_feature parameter, and trained the model with the full training data.</p>
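<p>A minimal sketch of that last step (variable names and parameter values are illustrative, not our exact settings):</p>

```
import lightgbm as lgb

dtrain = lgb.Dataset(X_train, label=y_train,
                     categorical_feature=categorical_features)  # label-encoded columns
params = {'objective': 'binary', 'metric': 'auc', 'learning_rate': 0.05}
model = lgb.train(params, dtrain, num_boost_round=1000)
```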
<p>As you can see, we did nothing special here; in the end we were lucky to land in a top rank.
I hope that the top teams from the public leaderboard will publish their kernels, because I'm sure that we can learn a lot from them, and that's why we are here ;)</p>
<p>Big thanks to the community, we learned a lot and we will try to give back.</p>
<p>Edit: The code is uploaded to GitHub: <a href="https://github.com/imor-de/microsoft_malware_prediction_kaggle_2nd">https://github.com/imor-de/microsoft_malware_prediction_kaggle_2nd</a></p>
<p>Thx
We did CV for parameter tuning. And for a different model I trained with CV, but the result was worse (though that was with a different data set). I didn't have time to try it on our best model.</p>
|
Microsoft Malware Prediction
|
4th Place Solution With Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the prize winners, and gold, silver, and bronze medalists! Thank you Kaggle and Microsoft for hosting the competition.</p>
<p>My 4th place submission was a very simple one. I converted every categorical feature to a count encoding which included both train and test data in the counts. I then averaged the predictions of LightGBM models trained over a 5 fold CV. </p>
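<p>For a concrete picture, a minimal sketch of this kind of setup (the column lists, feature set and parameters below are illustrative placeholders, not the exact code in my repo):</p>

```
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold

# count encoding computed over train and test together
for col in categorical_cols:
    counts = pd.concat([train[col], test[col]]).value_counts()
    train[col] = train[col].map(counts)
    test[col] = test[col].map(counts)

preds = np.zeros(len(test))
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for trn_idx, val_idx in skf.split(train[features], y):
    model = lgb.LGBMClassifier(n_estimators=2000, learning_rate=0.05)
    model.fit(train[features].iloc[trn_idx], y.iloc[trn_idx],
              eval_set=[(train[features].iloc[val_idx], y.iloc[val_idx])],
              callbacks=[lgb.early_stopping(100)])
    # average the predictions of the 5 fold models
    preds += model.predict_proba(test[features])[:, 1] / skf.n_splits
```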
<p>Public LB: 0.68406
Private LB: 0.66474</p>
<p>It’s possible to improve on these scores with more model tuning.</p>
<p>This was one of my first competitions on Kaggle. I did not have a special validation strategy that helped me select a winning submission, so I know I am lucky to end up in 4th place with such a simple approach. The leaderboard shakeup worked in my favor on this one.</p>
<p>This competition was a great learning opportunity for me so I’d like to thank everyone for sharing
their ideas in the discussions and kernels!</p>
<p>UPDATED:
Below is a link to a github repo to reproduce my fourth place submission.</p>
<p><a href="https://github.com/Johnnyd113/Microsoft-Malware-Prediction">https://github.com/Johnnyd113/Microsoft-Malware-Prediction</a></p>
<p>Please let me know if you have any issues reproducing the submission. As you'll see, the data processing and modeling is fairly simple! After downloading the competition data, the code should take about 10 hours to process data, train the models, and make predictions.</p>
<p>I do not know the exact hyperparameters I used to produce the above results. Unfortunately, I stopped working on the competition over a month ago, and my hyperparameter tuning was on an AWS instance that I later terminated. I am now able to recreate better results with the hyperparameters used in <a href="https://www.kaggle.com/artgor/is-this-malware-eda-fe-and-lgb-updated"> Andrew Lukyanenko's kernel</a> and setting max depth to -1. </p>
<p>I did not bag many runs on different Kfold seeds. I only used the five models trained from one 5 fold CV.</p>
<p>Every column that pandas read in as an object dtype by default was considered a categorical feature. There were 29 columns in total ('MachineIdentifier' excluded). I did not specify the dtype= in pd.read_csv().</p>
<p>I am working to clean up my code to share with everyone now. It’s fairly short and simple, but takes a long time to run.</p>
|
Microsoft Malware Prediction
|
My second competition in kaggle. (#54th solution)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! That's a long way to go. BTW, you did a great job. I learned a lot from your kernels!!</p>
<p>```
to_merge = train.groupby(['AvSigVersion','Wdft_IsGamer','Census_OSInstallLanguageIdentifier']
).AvSigVersion.count().to_frame('new_column').reset_index()
train = train.merge(to_merge, how='left', on=['AvSigVersion','Wdft_IsGamer','Census_OSInstallLanguageIdentifier'])
train['HasDetections']=df1</p>
<p>to_merge = test.groupby(['AvSigVersion','Wdft_IsGamer','Census_OSInstallLanguageIdentifier']
).AvSigVersion.count().to_frame('new_column').reset_index()
test = test.merge(to_merge, how='left', on=['AvSigVersion','Wdft_IsGamer','Census_OSInstallLanguageIdentifier'])
test['MachineIdentifier'] = test.index.astype('uint32')
test['HasDetections']=[0]*len(test)</p>
<p>```
This code suit for me</p>
<p>First, I didn't expect that I would get a medal in MMP, because it is such a huge problem and I didn't have a good machine (only a laptop with a 1050Ti GPU). That way, I had to wait 4~5 hours (or even more) to train a model every time. As you can see, I only have 14 entries. So I don't really think I deserve this medal; it should belong to those who put in more effort.</p>
<p>One of my best private scores is 0.657, which would have reached 15th on the private board. It's a single model edited from @gdy5's work:
<a href="https://www.kaggle.com/guoday/xdeepfm-baseline/output">https://www.kaggle.com/guoday/xdeepfm-baseline/output</a>. I used his model and added some features which I got from the forum (apologies, I can't find the thread now). I tried models like LGB and NN. I also built a DeepFM, but it got a worse LB score than gdy5's xDeepFM, so I just used it for blending in my final submission. And I finally got a private score of 0.646.</p>
<p>But this reminds me of my first competition on Kaggle (Elo). I really spent time on it, and before the end of the competition I still had a 100/4129 ranking. But after the shake, I dropped to rank 2640th, just because I chose my two best LB score results rather than trusting my CV score. It made me so upset, but it also gave me an impressive lesson. So in this competition (MMP), I tried to choose my submissions carefully rather than trust the LB score blindly.</p>
<p>Anyway, congratulations to everyone who got a medal, and also to those who really put in the effort, because they must have gotten something that is more important than a medal from this competition (just like what I got in Elo).</p>
<p>Kaggle is an interesting place. I like the people here; most of them are very nice and like to share. I am so excited to meet more friends here! <strong>Let's enjoy the competitions~</strong></p>
<p>Aha, it occurs to me that I saw this code in your comments! Thanks~ I am not sure about your question; maybe you can give it a try. :-) Also, congratulations on getting another gold medal!</p>
|
Microsoft Malware Prediction
|
The Winning Solution I didn't submit!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey All,</p>
<p>I was just chatting with a colleague today and realized that the competition had come to its natural conclusion. I didn't have the time to make a submission in the last two months, but I wanted to share an insight I had that I believe turned out to be correct --</p>
<p>Some individuals provided version label to timestamp maps at some point in the competition. Looking at the distribution of the timestamps in the train and test I realized that they had the same shape but there was a constant offset between them.</p>
<p>My interpretation was that Microsoft had taken a snapshot of all their users at two fixed points and provided us with this information as our dataset. I.e. the data was collected in one go at a fixed point in time -- this was the only thing I could imagine that would lead to such a distribution. Please correct me if I am wrong.</p>
<p>Anyway, I converted all variables with associated timestamps and then proceeded to subtract the median of the distribution -- this was then made an input to the learning algorithm. I then combined this with one of the starter algorithms to build a simple 2-stack approach where I (1) make predictions and (2) include the cluster prediction mean for all time-related variables.</p>
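<p>A rough sketch of the median-centering step (the timestamp maps are the ones shared in the forum; the column names and the <code>ts_maps</code> dictionary below are illustrative placeholders):</p>

```
from datetime import datetime

first = datetime(2018, 1, 1)
for col in ['AvSigVersion', 'Census_OSVersion']:           # illustrative time-stamped columns
    # ts_maps[col] is assumed to map each version string to its release datetime
    days = df[col].map(lambda v: (ts_maps[col].get(v, first) - first).days)
    df[col + '_centered'] = days - days.median()           # subtract the median of the distribution
```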
<p>The last submission I made with my code alone had a 0.74 CV score, and scored 0.68967 publicly and 0.68308 privately. I believe this could have been a competition winner, but I actually didn't pick my submissions -- rather, Kaggle selected my highest public scores, which corresponded to a blend I had computed with some publicly available kernels. I'm not sure I would have had the confidence to pick this solution if I had logged in before the competition's end, but still, I wanted to share this approach.</p>
<p>The attached html of my jupyter notebook may be a slightly modified version, I didn't have time to completely check that this indeed produced the last submission I submitted.</p>
<p>edit : for clarity</p>
|
Microsoft Malware Prediction
|
[Updated] Solution sharing: Private LB 0.695 in 12 mins
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Microsoft Malware Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your kind comments. </p>
<p>We need to revenge this damn thing. Any <a href="https://www.kaggle.com/competitions">idea</a>? :P </p>
<p>You can also find clues of my failure in the repo. I was doomed the moment I tried timing based frequency encoding. 🦀 </p>
<p>I'm so desperate to save my job! 😂 Edit: come on guys. I'm joking 😆 .</p>
<p>Wow, what a crazy competition. It still hurts so much == </p>
<p>Anyway, I learned so much more from this loss than I could ever imagine. For what it's worth, I found a 0.69+ submission of mine and I quickly reproduced it in a notebook, <a href="https://github.com/daxiongshu/Rapids-microsoft-malware/blob/master/notebooks/rapids_solution_gpu_only.ipynb">https://github.com/daxiongshu/Rapids-microsoft-malware/blob/master/notebooks/rapids_solution_gpu_only.ipynb</a></p>
<p>Updated! A speedup comparison with pandas on CPU. <a href="https://github.com/daxiongshu/Rapids-microsoft-malware/blob/master/notebooks/rapids_solution_gpu_vs_cpu.ipynb">https://github.com/daxiongshu/Rapids-microsoft-malware/blob/master/notebooks/rapids_solution_gpu_vs_cpu.ipynb</a>
And with rapids we could get <strong>40x speedup with reading csv and label encoding!</strong></p>
<p>Any comments are welcome!</p>
<p>The code requires a Cuda 9.2+ GPU with more than 13 GB GPU memory.</p>
<p>I will translate it to pandas tomorrow and have a more complete timing comparison.
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/489714/11552/image.png" alt="lb"></p>
<p>Haha, thank you so much. I am joking all around 😆. Actually Nvidia is a wonderful place to work for. We have more flexible policies for kaggling than the DENA folks! </p>
<p>Thank you Olivier! Chris's <a href="https://www.kaggle.com/cdeotte/private-leaderboard-0-703">kernel</a> shows what went wrong. There is a significant downsampling of malware rates in the private test data.</p>
<p>Thank you LongYin for your very insightful post early on! Hope we could team up in future.</p>
|
Million Song Dataset Challenge
|
my solution to the MSD challenge..
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Million Song Dataset Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>@sedielem<br>
<br>
The technique you used for the band recommendation problem seems in fact quite similar to mine. I didn't try other conditional probability combinations since, initially, my aim was to generalize the cosine similarity, which alone already gave good results.
I thought that adding an external parameter to tune would likely give better results. I added further motivation for this particular approach in the paper.<br>
<br>
Note that I also added the "locality" parameter q which (as you can see in the paper) significantly improves the results obtained with the probability combination approach alone. <br>
<br>
I perfectly agree with you: Keep It Simple (and Short) !<br>
<br>
Hope you understand the code. If you have time I think you can easily try your idea yourself, starting from my code and making minor modifications.<br>
<br>
I have also done some experiments trying to combine metadata based rankings (using artist and tag info) but the results were not satisfying. My personal and vague opinion on this is that the *implicit* information contained in the users' history is much more
than the *explicit* information you have in metadata. In other words, my feeling is that the information extracted from metadata can be useful only when either the hidden (true) prediction rules are simple or when little user data is available.<br>
<br>
thanks!</p>
<p> </p>
<p>@IbrahimJumkhawala</p>
<p>thanks for your comment!!</p>
<p> </p>
<p>@Foxtrot<br>
<br>
I ran the experiments on a cluster. The nodes I used have multi-core Intel Xeon processors. The computation of each user's recommendations required (on average) 10-15 secs. Each job was defined to compute the recommendations for 1000 users. Completion of each job
required about 4 hours.</p>
<p>Hi everybody,</p>
<p>for the winning solution I basically adopted an item-based collaborative filtering approach with some "crucial" modifications:
<br>
<br>
1) a new parametric similarity between songs (or users), which led to 0.16665 on the leaderboard</p>
<p>2) final calibration of the scores for ranking (0.17712 on leaderboard)</p>
<p>3) ranking aggregation with a user-similarity based predictor (roughly 0.178 on leaderboard)</p>
<p>As you can see, the first two were crucial for the high score!</p>
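<p>Roughly, the parametric similarity is an asymmetric cosine with a "locality" exponent; a minimal sketch of my understanding of it (U[i] is the set of users who listened to song i; alpha and q are the parameters to tune, the values below are only indicative):</p>

```
def asym_cos(i, j, U, alpha=0.15):
    # |U(i) ∩ U(j)| / (|U(i)|^alpha * |U(j)|^(1-alpha)); alpha = 0.5 gives the plain cosine
    inter = len(U[i] & U[j])
    if inter == 0:
        return 0.0
    return inter / (len(U[i]) ** alpha * len(U[j]) ** (1 - alpha))

def item_score(user_songs, candidate, U, alpha=0.15, q=3):
    # the locality exponent q emphasises the most similar songs in the user's history
    return sum(asym_cos(candidate, j, U, alpha) ** q for j in user_songs)
```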
<p>You can find a quite exhaustive description of the method in this paper:<br>
<br>
F. Aiolli, A Preliminary Study on a Recommender System for the Million Songs Dataset Challenge<br>
Preference Learning: Problems and Applications in AI (PL-12), ECAI-12 Workshop, Montpellier<br>
http://www.ke.tu-darmstadt.de/events/PL-12/papers/08-aiolli.pdf<br>
<br>
also available at my web: page http://www.math.unipd.it/~aiolli/paperi.html<br>
<br>
Unfortunately, the calibration step is not fully documented and it is not discussed in the paper above. I am just preparing a new paper which describes the whole method (see the code referenced below to get a rough idea of this very simple method).<br>
<br>
I also published (a cleaned version of) the code I used for the winning submissions. It can also be used for validation. Hope it works!! There are three source files:
<br>
1) MSD_util.py, MSD utility functions <br>
2) MSD_rec.py, MSD implementation of the basic classes: Pred (predictor) and Reco (recommender)<br>
3) MSD_subm_rec.py, Example of a script for the computation of user recommendations</p>
<p>The code is not optimized and probably can be made far more efficient. I apologize that the code is not commented appropriately. I hope it is not too cryptic anyway. It might be easier to understand the code if you have previously read the paper :)<br>
<br>
I am very busy in this period and not sure I can maintain and correct the code in the future. However, I would appreciate comments and suggestions. Also, I am very curious to hear about other people's solutions..
<br>
<br>
Cheers,<br>
-- Fabio.<br>
<br>
</p>
<pre>sorry guys for the delay of my answers..<br><br></pre>
<pre>@dbv: Thanks! I think you can find the answers to your questions in my paper: http://www.math.unipd.it/~aiolli/PAPERS/MSDaiolli.pdf</pre>
<pre>Specifically, you can find a possible answer for question (a) in the result section (case q=1), and for question (b) in section 2.3 of the same paper.</pre>
<pre>Unfortunately, I do not have the precise score obtained by my technique on the challenge validation/test set. However I published results on a validation set of myself and they should not be too different in value. <br><br></pre>
<pre>@nhan vu: Thank you! I posted the link to the solution up here (<a href="http://www.math.unipd.it/~aiolli/CODE/MSD/">http://www.math.unipd.it/~aiolli/CODE/MSD/</a>). Hope it is sufficient :) </pre>
<pre>I'll be delighted to discuss the topic with you. Feel free to contact me. </pre>
|
Multi-label Bird Species Classification - NIPS 2013
|
Reflections
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Multi-label Bird Species Classification - NIPS 2013
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>[quote=walt;35190]</p>
<p><span style="line-height: 1.4">Some thoughts, we try to capture the correlation of 87 classifiers instead of training the 87 classifiers independently. We can train 87 classifiers sequentially and use the previous output probabilities as next classifiers features input. The order is random. We trained it multiple times. Final result is the average of each run. I am not sure anyone try this before, we tried it but fails....</span><span style="line-height: 1.4">still don't know why.. </span></p>
<p>[/quote]</p>
<p>Hi Walt, Matt attempted this and we likewise did not get any useful information out of it. My guess is that correlation between the classes in the training data is overwhelming the classifier from using some more-noisy, but yet more-valuable, acoustic information. So this would be a case of overfitting decision trees due to too many features to select from. </p>
<p>Great job Mario, Rafael, and everyone else.</p>
<p>We learned a lot and had great fun. Here's a rough writeup of our progression. All scores are for the private leaderboard unless noted otherwise.</p>
<p> </p>
<p>short version: you can make a ~0.895 model by</p>
<p>*split each clip into half-second windows overlapping by 80%. for each window, determine the average value of each mfcc coefficient. (so you have 16 features for each window)</p>
<p>*use a random forest classifier with 87 yes/no outputs</p>
<p>*average the probabilities for each window to get the probability for the entire clip</p>
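<p>A condensed sketch of that short-version pipeline, assuming the window features and labels have already been computed (array names are illustrative):</p>

```
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X: (n_windows, n_mfcc) mean MFCC per half-second window (80% overlap)
# Y: (n_windows, 87) 0/1 labels copied from each window's parent clip
# clip_id_test: (n_test_windows,) id of the clip each test window came from
clf = RandomForestClassifier(n_estimators=250, n_jobs=-1)
clf.fit(X, Y)

# predict_proba returns one (n_windows, 2) array per species; keep P(species present)
window_probs = np.stack([p[:, 1] for p in clf.predict_proba(X_test)], axis=1)
clip_probs = {c: window_probs[clip_id_test == c].mean(axis=0)   # average windows -> clip
              for c in np.unique(clip_id_test)}
```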
<p> </p>
<p> </p>
<p>long version:</p>
<p>On Oct 27, my teammate Matt submitted empirical probabilities for each bird, which got an AUC of 0.64 and jumped us into second place (of 6) :). During this time (Oct 20-Oct 30) we worked on trying to understand how to use scikit-learn and I tried to remember what a Fourier transform is.</p>
<p>Our first "real" model (sorry Matt) was as close of a copy as we could make of <a href="http://sis.univ-tln.fr/~odufour/fichiers/DufourICMLWshp_v6.pdf">Olivier Dufour's paper</a>. Thank you, Mr. Dufour! The way we plugged this model into a multilabel problem was using scikit-learn's OneVsRestClassifier with SVC as the base (I still don't know what a OneVsRestClassifier or a SVC is; I think the lesson I learned here is that by reading API documentation of input and output, you can just try things and maybe they'll work). This achieved a score of 0.87.</p>
<p>At this stage, we pursued two paths. One path was working on developing a multi-instance classifier which we dubbed Quandorff (using a Hausdorff algorithm with quantile clusters); Matt can tell you more about this because I don't really understand it. We also worked on modifying the existing model until very little of it remained. The first modification we made was to turn the problem into a 87 yes-or-no output problem, and plugged it into a random forest. We think we got this idea from reading <a href="https://github.com/gaborfodor/MLSP_2013">beluga's code</a>; thank you very much for publishing it. This achieved a result of 0.893. At this point we decided that we could not substantially improve the classifier (although Matt added a custom gain ratio criterion to scikit-learn's random forest based on the results from <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CC4QFjAA&url=http:%2F%2Fwww.dsic.upv.es%2Fusers%2Felp%2Fcferri%2F105.pdf&ei=PaKSUsHEJ-KJjAKpl4HwBg&usg=AFQjCNEqZaDg56SCy6pTQ88uCftsLfrxgQ&sig2=_880W_RhfBEMxWaGhOtoTA">Cesar Ferri's paper</a>, sadly this did not help us).</p>
<p>Mr. Dufour's feature engineering algorithm has four parts: generating MFCC's, splitting clips into windows, forming features out of MFCC's for each window, and then recombining the classifier results from each window to get a clip result.</p>
<p>We first made advances on the MFCC generation front. We adapted <a href="https://github.com/jameslyons/python_speech_features">James Lyons' code</a> for MFCC generation. We cut off frequencies below 500hz (we discovered this empirically, but <a href="eecs.oregonstate.edu/research/bioacoustics/syllable_tech_report.pdf">Forrest Briggs' paper</a> talks about it), changed the scale from Mel (incidentally designed for human speech, not bird speech) to include a custom corner frequency (we used 1500hz). None of these parameters should be assumed tuned; they're just better than the default ones. We also reduced the coefficients used to 14 of 21. These changes improved our public score by 0.005 and our CV score by 0.005, but did not change the private score (so we were still at 0.89). We still suspect that this is a small anomaly in the private set data, but we also suspect from reading more papers that using 13-14 of ~26 coefficients would have been better (but we returned the computer we borrowed for our more cpu-intensive runs, so we might not get around to determining this). </p>
<p>Our 2nd advance was on making features out of each coefficient window. Mr. Dufour's algorithm involved the absolute value sums and variances of the MFCC coefficients and Delta and Delta Delta MFCC's. Through experimentation, we discovered that feature reduction was a good thing, and that instead of all of these, we should just average the MFCC's for each window. This got us to 0.905. Matt remarked that MFCC Delta and Delta Delta features might be designed for voice recognition, where it is important not only who is speaking, but what is spoken. This led us to read about musical instrument analysis; we recently switched from averaging to principal component analysis thanks to <a>this paper</a>. (It improved results by only 0.003 but in theory I do think it is more robust, if overkill.)</p>
<p>Switching to shorter windows and using the mean, rather than max, window-probabilities to determine the clip-probabilities got us to 0.91. Borrowing 16GB of ram and thus upping forests from 25 to 250 got us to 0.915.</p>
<p>There are a few weak points in our model which we haven't had the insight to attack. Our windowing and recombining algorithm is tremendously rudimentary and has no theoretical backing. We attempted determining syllables rather than all possible windows, but did not succeed. While principal component analysis adds some temporal analysis to our otherwise entirely spectral model, our model is still not substantially better than the entirely static MFCC component averaging. Perhaps merging it with some spectrogram image-based features (which could capture temporal evolution) would help.</p>
<p>We didn't do a lot of leakage testing, but if for each clip we submit (0.8*probability of that clip + 0.1*probability of previous clip + 0.1*probability of next clip) both our public and private scores would go up by 0.002 (we tried this once, prompting my leakage question; we didn't select that submission and stopped investigating). It'd probably be pretty easy to improve this substantially by tuning and/or figuring out where the site transitions are. Site grouping the test set would help in general, but by the time we found this would be allowed, we were a combination of tired and uninterested in that aspect of the problem. We never tried combining empirical probabilities or sound clip lengths (but we did note the 0.65 auc for empirical probabilities and 0.6 auc for sound clip length).</p>
<p>We optimized mostly for CV score rather than leaderboard score (though the trends matched pretty well); our current 10-fold cv is 0.943 +/- 0.010. Not sure how much of this is overfitting and how much is differences between the test/train set. We'd be curious to know how the organizers split the test and train sets, and/or if they have more data.</p>
<p>We have a private git repository but it's currently a mess; we may or may not clean it up and will post a link regardless within ~5 days. We can also privately send messy code for verification sooner to anyone who requests it. (python/scikit-learn/pandas stack; It will currently give results within 0.0005 of our best results)</p>
<p>Thanks again for all your previous publications and the competition you provided. Thanks also to the organizers for the dataset, and to Kaggle for the platform.</p>
|
NBME - Score Clinical Patient Notes
|
#2 solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>Could you provide any general advice on how to generate these checkpoints?</p>
</blockquote>
<p>I tried to do that in the writeup above,</p>
<p>As I wrote, the latest notebook version has all datasets <a href="https://www.kaggle.com/code/cpmpml/sub-ensemble-010/data?scriptVersionId=95933493" target="_blank">https://www.kaggle.com/code/cpmpml/sub-ensemble-010/data?scriptVersionId=95933493</a> What is it you don't see?</p>
<p>The datasets are uploaded and used in the last version of the notebook I point to in the above comment. in sub_ensemble I only share the submission code. </p>
<p>Seed values should not matter. I usually use 0,1,2,etc.</p>
<p>You should follow the link I gave. It points to code you can try: <a href="https://paperswithcode.com/paper/smart-robust-and-efficient-fine-tuning-for" target="_blank">https://paperswithcode.com/paper/smart-robust-and-efficient-fine-tuning-for</a></p>
<p>Hi Bro, I did.</p>
<blockquote>
<p>I am surprised with the lf token not working in the end. </p>
</blockquote>
<p>I wonder if it is not because I only trained it on annotated data. When MLM on all notes is used then token embeddings are better.</p>
<p>Squadv2 pretraining was also used by Chai competition winner. I may have got the idea from there, I don't remember exactly.</p>
<p>Thanks for the fast awp link, I'll have a look.</p>
<p>Rnn improved a bit, but not much. I don't have the exact figure.</p>
<p>I used the models trained on folds to evaluate model performance. I made the hypothesis that the gain from training on all data was consistent across models.</p>
<p>First of all, thanks to NBME host and Kaggle for this challenging competition. I really liked that we got 10x more unlabelled data than labelled data to try unsupervised or semi supervised techniques. Without that the competition would have been way less interesting.</p>
<p>Let me describe my solution. Most of the code can be seen in my inference notebook. For some reason Kaggle doesn't want me to put the link here as I shared it in another post. You can access it from the leaderboard. Then look for the latest version where I shared all checkpoints in public datasets.</p>
<p><strong>Data processing</strong></p>
<p>I quickly saw that annotations were inconsistent, as most of the participants did. My guess is that annotators sometimes only identified some of the occurrences of a given feature, and not all of them. This can be seen with Adderall, for instance. While fixing these annotations would make much sense in a real life project, it is probably a bad idea here. Indeed, there is no reason to believe that test annotations are not as inconsistent as training data annotations.</p>
<p>I further made the hypothesis that annotators were more likely to miss repeated annotations than first occurrence. It means that the annotations have a sequence dependency, and that using a RNN at some point could make sense.</p>
<p>Another thing I noticed is that using uncased modeling would be better, as some text was in uppercase and other text in lowercase without any semantic significance to the difference. I therefore decided to use lower case text.</p>
<p>Medical abbreviations were more of a headache. I contemplated expanding the most common ones (as shared in the competition forum) but this could lead to complex token offset mapping. I decided to only map the most common variants to each other. My preprocessing text function is this:</p>
<pre><code>def clean_abbrev(text):
text = text.replace('FHx', 'FH ')
text = text.replace('FHX', 'FH ')
text = text.replace('PMHx', 'PMH ')
text = text.replace('PMHX', 'PMH ')
text = text.replace('SHx', 'SH ')
text = text.replace('SHX', 'SH ')
text = text.lower()
return text
</code></pre>
<p>The other data analysis I did was to check where annotation boundaries fall within a token span. When this happens there is no way a token based model can correctly learn how to predict these boundaries. I did this for roberta, deberta and debertav2 tokenizers. There are 2 cases that cover most occurrences:</p>
<ul>
<li>spaces before some non space text</li>
<li>'\r\n' before some text<br>
I therefore created a post processing function that expanded on <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> space removal function:</li>
</ul>
<pre><code>spaces = ' \n\r'
def post_process_spaces(pred, text):
text = text[:len(pred)]
pred = pred[:len(text)]
if text[0] in spaces:
pred[0] = 0
if text[-1] in spaces:
pred[-1] = 0
for i in range(1, len(text) - 1):
if text[i] in spaces:
if pred[i] and not pred[i - 1]: # space before
pred[i] = 0
if pred[i] and not pred[i + 1]: # space after
pred[i] = 0
if pred[i - 1] and pred[i + 1]:
pred[i] = 1
return pred
</code></pre>
<p>There is a third case that covers most of the remaining ones:</p>
<ul>
<li>'yom' and 'yof' sub strings meaning year old female, or year old male. The span boundary is right after 'yo' but all tokenizers create a token for 'of', and roberta tokenizer creates a token for 'om'. In both cases the boundary is inside the 'of' or 'om' token.</li>
</ul>
<p>I post processed the tokens ‘om’ and ‘of’ depending on the feature. If the feature is about gender then I only keep the second character. If the feature is about age, then I only keep the first character. Here's the code I used.</p>
<pre><code>import numpy as np

# feature_female, feature_male, feature_year are sets of feature numbers defined elsewhere
def pred_to_chars(token_type_logits, len_token, max_token, offset_mapping, text, feature_num):
token_type_logits = token_type_logits[:len_token]
offset_mapping = offset_mapping[:len_token]
char_preds = np.ones(len(text)) * -1e10
for i, (start,end) in enumerate(offset_mapping):
if text[start:end] == 'of' and start > 0 and text[start-1:end] == 'yof':
if feature_num in feature_female:
char_preds[end-1:end] = 1
elif feature_num in feature_year:
char_preds[start:start+1] = token_type_logits[i-1]
else:
char_preds[start:end] = token_type_logits[i]
elif text[start:end] == 'om' and start > 0 and text[start-1:end] == 'yom':
if feature_num in feature_male:
char_preds[end-1:end] = 1
elif feature_num in feature_year:
char_preds[start:start+1] = token_type_logits[i-1]
else:
char_preds[start:end] = token_type_logits[i]
else:
char_preds[start:end] = token_type_logits[i]
return (char_preds, text)
</code></pre>
<p>The one thing I didn't use in my selected submissions is to handle line feed via a new token. I did <a href="https://www.kaggle.com/code/cpmpml/sub-nbm-315/settings?scriptVersionId=93957412" target="_blank">only one sub</a> with it. It is a 4 fold model that scores 0.894 on private LB! I give more details about it <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/322812" target="_blank">here</a>.</p>
<p><strong>Model architecture</strong></p>
<p>I mostly used <code>deberta-large</code> and <code>deberta-v3-large</code> backbones. </p>
<p>I also tried <code>deberta-v2-xlarge</code> and <code>roberta-large</code> backbones, but they were weaker. My best selected sub did not use them.</p>
<p>Training data is rather small and NLP models are large transformers. This is a recipe for training instability or for overfitting, or both. I added various regularization tricks. </p>
<ul>
<li>First, using classification heads with 5 levels of dropout as shared by <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> in Feedback competition. </li>
<li>Second, using a linear combination of all hidden states with a dropout on the linear coefficients. The dropout forces the model heads to learn from all hidden states and not just the last one.</li>
<li>Third, using a primary head for token classification together with two auxiliary heads for predicting annotation span boundaries (one head for span starts and one head for span ends). Using the span boundary predictions was not so bad, but was about 0.01 - 0.02 worse than using the token classification head.</li>
</ul>
<p>As discussed above, I thought that annotations depend on the order in which they appear in the text. I therefore inserted a RNN after the backbone last hidden state and before the three classification heads. I used a single layer bidirectional RNN, I used either GRU or LSTM, which gives me two variant for the same backbone.</p>
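<p>To make the head concrete, below is a minimal sketch of the ideas described above (a weighted combination of hidden states with dropout on the mixing coefficients, followed by a bidirectional RNN). It is not the exact competition code: the softmax over layer weights, the dropout rate, and the single classifier (instead of the five-dropout head plus the two auxiliary heads) are assumptions made for brevity.</p>
<pre><code>import torch
import torch.nn as nn

class WeightedLayersRNNHead(nn.Module):
    # Sketch only: mixes all backbone hidden states with learnable weights,
    # applies dropout to the mixing weights, then a 1-layer bi-RNN and a token classifier.
    def __init__(self, num_layers, hidden_size, rnn="gru"):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.weight_dropout = nn.Dropout(0.1)  # forces the head to use all hidden states
        rnn_cls = nn.GRU if rnn == "gru" else nn.LSTM
        self.rnn = rnn_cls(hidden_size, hidden_size // 2, num_layers=1,
                           batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(hidden_size, 1)  # token-level logit

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (batch, seq_len, hidden) tensors from the backbone
        stacked = torch.stack(all_hidden_states, dim=0)               # (layers, B, L, H)
        weights = self.weight_dropout(torch.softmax(self.layer_weights, dim=0))
        mixed = (weights[:, None, None, None] * stacked).sum(dim=0)   # (B, L, H)
        rnn_out, _ = self.rnn(mixed)
        return self.classifier(rnn_out).squeeze(-1)                   # (B, L) logits
</code></pre>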
<p><strong>Model training</strong></p>
<p>The most important piece I think was to pretrain my backbones on the Squadv2 dataset. Why Squadv2? Because the input to the model is the same as in qa: a question (the feature text), and a context (the patient history). The annotations are all answers to the question.</p>
<p>I tried some pretrained models but found that pretraining them myself gave better results. I used Huggingface <code>run_qa.py</code> script. I modified the script to use DebertaV2TokenizerFast as shared by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> in his dataset. I saved checkpoints at the end of the first 3 epochs and picked the one that led to the best downstream result. </p>
<p>Before training my models I loaded the backbone weights from the Squadv2 ones.</p>
<p>I trained my models with AdamW optimizer together with a custom scheduler similar to cosine with warmup.</p>
<p>As said above, I was afraid of overfitting. Therefore I used token dropout, i.e. replacing 10% of input tokens by a mask token, as in MLM pretraining. I did this only for the pn history text and not for features. When using dropout for features, the CV score dropped, probably because some feature texts are very small and could be entirely masked.</p>
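<p>A minimal sketch of this token dropout follows; the 10% rate and the restriction to pn-history tokens are the settings stated above, while the helper name and the eligibility mask are illustrative assumptions:</p>
<pre><code>import torch

def token_dropout(input_ids, mask_token_id, history_mask, p=0.10):
    # history_mask: 1 where the token belongs to the pn history (eligible), 0 for feature tokens
    input_ids = input_ids.clone()
    rand = torch.rand_like(input_ids, dtype=torch.float)
    drop = rand.lt(p) & history_mask.bool()
    input_ids[drop] = mask_token_id  # replace dropped tokens by the tokenizer's mask token
    return input_ids
</code></pre>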
<p>I then used pseudo labeling, using the model to predict annotations for unlabelled data. I used the token class probabilities, i.e. I did not round them. I found that focal loss was better than BCE and wondered why. Then I thought it was because focal loss down weights the pseudo labels that are close to 0.5. I then decided to try a custom loss that ignores samples with pseudo label 0.5 and increases the sample weight up to 1 when the label is 0 or 1. It is equivalent to BCE when labels are 0 or 1:</p>
<pre><code>import torch
import torch.nn.functional as F

def criterion(preds, targets):
    # sample weight goes to 0 when the soft target is 0.5 and to 1 when it is 0 or 1
    targets2 = torch.where(targets >= 0.5, 2*targets -1, 1 - 2*targets)
    # flip logits for targets below 0.5 so a single logsigmoid handles both sides
    preds2 = torch.where(targets >= 0.5, preds, -preds)
    loss = - targets2 * F.logsigmoid(preds2)
    return loss
</code></pre>
<p>I did one round of pseudo labeling. A second run did not improve CV.</p>
<p>In general I only kept a new model if both its CV score and its public LB score improved. This seems to have been a good practice, but it led me to ignore line feed token unfortunately.</p>
<p>I trained my models using 4 folds. At the end I also trained selected models on full train data with 4 different seeds.</p>
<p>Using the above I could get a public LB score of 0.893 and a private LB score of 0.894. I selected my best LB submission as final submission.</p>
<p><strong>Adversarial training</strong></p>
<p>For my second submission I wanted something quite different, to mitigate the risk of LB overfitting. One of the Feedback competition top teams shared that they used adversarial training. I decided to deep dive on it and discovered a whole new world. I experimented with lots of variants: fgm, vat, sift, smart, etc, and selected my best sub out of it. Although its CV score was quite a bit higher than the one I selected above, both its public LB score and private LB score were lower. It seems that my way of doing pseudo labeling was better. It may be that, being quite new to these techniques, I didn't tune them correctly. I will try them in future competitions for sure.</p>
<p>Main issue of these methods is that they are slower than pseudo labeling as they require multi forward passes per sample.</p>
<p>For those interested here are some references I recommend:</p>
<p>VAT: <br>
<a href="https://arxiv.org/pdf/1704.03976v2.pdf" target="_blank">https://arxiv.org/pdf/1704.03976v2.pdf</a><br>
<a href="https://arxiv.org/pdf/1605.07725.pdf" target="_blank">https://arxiv.org/pdf/1605.07725.pdf</a><br>
<a href="https://arxiv.org/pdf/1507.00677.pdf" target="_blank">https://arxiv.org/pdf/1507.00677.pdf</a></p>
<p>SMART:<br>
<a href="https://arxiv.org/abs/1911.03437" target="_blank">https://arxiv.org/abs/1911.03437</a></p>
<p>AWP:<br>
<a href="https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook" target="_blank">https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook</a></p>
<p>FGM:<br>
<a href="https://arxiv.org/abs/1412.6572" target="_blank">https://arxiv.org/abs/1412.6572</a><br>
<a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/143764" target="_blank">https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/143764</a> </p>
<p>SIFT:<br>
<a href="https://github.com/microsoft/DeBERTa/tree/master/DeBERTa/sift" target="_blank">https://github.com/microsoft/DeBERTa/tree/master/DeBERTa/sift</a></p>
<p><strong>Ensembling and final prediction</strong></p>
<p>I used a weighted combination of token type probabilities. Coefficients were found by a crude grid search with CV score. </p>
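<p>The search itself is not shown here, but a crude grid search over blend weights can be sketched as follows (the grid resolution, weight normalization, and the <code>score_fn</code> callback are illustrative assumptions, not the exact code I ran):</p>
<pre><code>import itertools
import numpy as np

def crude_grid_search(oof_probs, score_fn, grid=np.linspace(0, 1, 6)):
    # oof_probs: list of per-model OOF probability arrays with identical shapes
    # score_fn: computes the CV metric (e.g. micro-F1 after thresholding) for a blended array
    best_w, best_score = None, -np.inf
    for w in itertools.product(grid, repeat=len(oof_probs)):
        if sum(w) == 0:
            continue
        w = np.array(w) / sum(w)  # normalize so weights sum to 1
        blend = sum(wi * p for wi, p in zip(w, oof_probs))
        score = score_fn(blend)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score
</code></pre>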
<p>I used the fold models to find the weights then applied the same weights to the models trained on the full data. I later tuned some extra weight for the full-data models using the public LB. I did this for only one of my two selected subs.</p>
<p>Probabilities were thresholded same way for all cases. I was afraid of overfitting if using different thresholds per cases. Once thresholded I applied the post processing described above.</p>
<p>That's it! I hope that some of the above will be useful for future competitions.</p>
<p><strong>Edit:</strong> post comp submission shows that using lf token adds lots of noise in LB score. Not using it was probably a good thing.</p>
<p><strong>Edit2.</strong> Maybe lf token did not work for me because I didn't pretrain using MLM on all notes, hence my lf token embedding was not trained properly.</p>
<p>I could not make AWP work at all. I am not sure why.</p>
<p>And I wrote:</p>
<blockquote>
<p>I used Huggingface run_qa.py script.</p>
</blockquote>
<p><a href="https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py" target="_blank">https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py</a></p>
<p>This is not a MLM task, this a QA task.</p>
<p>I didn't do any MLM. Not sure why you think I did.</p>
<p>Maybe host wants explanations of why the feature is there or isn't. Providing the words identifying the feature is more explicit than a binary prediction.</p>
<p>I have a PhD in machine learning but it is from a long long time ago and not much relevant. I then took Andrew Ng's Coursera course on machine learning (the first one), and started practicing on Kaggle.</p>
<p>A good intro to pseudo labeling is given here: <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129840" target="_blank">https://www.kaggle.com/c/google-quest-challenge/discussion/129840</a></p>
<p>Pseudo labeling is called noisy student in academic papers like this one: <a href="https://arxiv.org/pdf/1911.04252.pdf" target="_blank">https://arxiv.org/pdf/1911.04252.pdf</a> The paper was published AFTER PL became popular on Kaggle.</p>
<p>I use exactly the same training procedure for full train. The only change is full train vs fold based.</p>
<p>Cross validation is used to evaluate the performance of my training procedure. Once I know its value there is no need to evaluate it again. To be crystal clear: the point of CV is to get an evaluation of training procedure that is as independent as possible of the data you use for training.</p>
<p>I don't rely on LB probing. The only exception I made for that is, as explained in the write up, to decide how to weight full train vs fold models in the final ensemble.</p>
<p>You should look at the latest version of the notebook. I made all checkpoint datasets public.</p>
|
NBME - Score Clinical Patient Notes
|
11th Solution: Trust CV
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. I trained the model using the huggingface trainer with the specified hyperparameters. The hyperparameters come from my previous chaii competition solution. Basically the heuristic was to set a relatively small learning rate and disable weight decay.</p>
<p>Save the out-of-fold logits when training each model, and then optimize with</p>
<pre><code>import numpy as np

# train, train_labels, cfg, get_spans and get_score are defined elsewhere in the training code
def objective(trial):
n = len(cfg.result_dirs)
weights = np.ones(n)
for i in range(n):
weights[i] = trial.suggest_float(f'w{i}', 0, 1)
char_logits_blend = []
for i in range(len(train)):
logits = []
for j in range(len(cfg.result_dirs)):
logits.append(train.loc[i, f'char_logits{j}'])
char_logits_blend.append(np.average(logits, weights=weights, axis=0))
spans = get_spans(char_logits_blend, train['pn_history'].tolist())
score = get_score(train_labels, spans)
return score
</code></pre>
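<p>For reference, a minimal sketch of how such an objective is typically run with Optuna (direction is "maximize" because the objective returns the competition score; the trial count is a placeholder):</p>
<pre><code>import optuna

study = optuna.create_study(direction="maximize")  # maximize the CV score returned by objective
study.optimize(objective, n_trials=200)            # n_trials is a placeholder
best_weights = [study.best_params[f'w{i}'] for i in range(len(cfg.result_dirs))]
</code></pre>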
<p>I forgot to mention that I used an early stopping callback that computes token level micro-f1. It correlates well with the actual score, so I think it helped.</p>
<pre><code>def compute_metrics(eval_pred):
logits, labels = eval_pred
logits = logits.reshape(logits.shape[0], -1)
assert logits.shape == labels.shape
assert len(logits.shape) == 2
predictions = (logits > 0).astype(int)
predictions_masked = []
labels_masked = []
for i in range(logits.shape[0]):
for j in range(logits.shape[1]):
if labels[i][j] != -100:
predictions_masked.append(predictions[i][j])
labels_masked.append(labels[i][j])
return {
'nbme_f1': f1_score(labels_masked, predictions_masked)
}
trainer = Trainer(
...
compute_metrics=compute_metrics
)
</code></pre>
<p><a href="https://github.com/Zacchaeus14/nbme" target="_blank">source code available</a><br>
Firstly, I'd like to express my gratitude to Kaggle and NBME for organizing such a fantastic competition and congratulate all the winners. I began working on this project approximately 10 days before the deadline, initially aiming for a silver medal; achieving gold was thrilling. My solution, grounded in pure engineering rather than any secret tricks, stemmed from the <a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train" target="_blank">top-rated baseline</a>. Due to time constraints, I had to rely entirely on my CV strategy, which was adapted from the baseline (5-fold GroupKFold).</p>
<h1>Tokenizer</h1>
<p>The baseline encountered minor issues with tokenizers, which I identified and resolved throughout the process.</p>
<h2>FN in labels</h2>
<p>The <code>create_label</code> function in the baseline inaccurately labeled sentences ending with a single key phrase. Post-tokenization, these phrases were marked as negative. To address this, I modified the following code:</p>
<p>from</p>
<pre><code> if start_idx == -1:
start_idx = end_idx
</code></pre>
<p>to</p>
<pre><code> if start_idx == -1:
start_idx = end_idx - 1
</code></pre>
<h2>Trimmed offsets</h2>
<p>As discussed, roBERTa tokenizers handle offsets with trimmed spaces by default. To address this, we need to load them with <code>trim_offsets=False</code>. However, I discovered that for some models (e.g., Electra, Ernie, Albert, Funnel, MPNet), offsets remain trimmed even when <code>trim_offsets</code> is set to <code>False</code>. I created a small function to manually correct this:</p>
<pre><code>def fix_offsets(offset_mapping, text):
n = len(offset_mapping)
if n == 0:
return []
if n == 1:
return offset_mapping[0]
re = [offset_mapping[0]]
last_e = 0
for i in range(1, n-1):
s, e = offset_mapping[i]
if text[s] != ' ' and s != last_e:
s = last_e
last_e = e
re.append((s, e))
re.append(offset_mapping[n-1])
return re
</code></pre>
<h2>Misaligned annotations</h2>
<p>In the baseline's <code>get_results</code> function, there's a line that adds 1 to all indices:</p>
<pre><code>result = np.where(char_prob > th)[0] + 1
</code></pre>
<p>This assumes that all tokens start from spaces, which can cause the first character to be lost. To fix this, I removed the "add 1" and instead stripped spaces from both sides:</p>
<pre><code>import itertools
import numpy as np

def my_get_results(char_logits, texts, th=0):
results = []
for i, char_prob in enumerate(char_logits):
result = np.where(char_prob > th)[0]
result = [list(g) for _, g in itertools.groupby(result, key=lambda n, c=itertools.count(): n - next(c))]
temp = []
for r in result:
s, e = min(r), max(r)
while texts[i][s] == ' ':
s += 1
while texts[i][e] == ' ':
e -= 1
temp.append(f"{s} {e+1}")
result = temp
result = ";".join(result)
results.append(result)
return results
</code></pre>
<h1>Model</h1>
<p>My models are straightforward, trained using the Hugging Face trainer. With limited time for hyperparameter tuning, I adopted the following settings and kept all other defaults:</p>
<ul>
<li>For large models: <code>lr=1e-5, bs=4, weight_decay=0, warmup_ratio=0.2, epoch=10</code>. For models that couldn't fit into 24GB RAM (xlarge ones), I used <code>bs=2, lr=5e-6</code>.</li>
<li>For base models: <code>lr=3e-5, bs=16, weight_decay=0, warmup_ratio=0.2, epoch=10</code>.</li>
</ul>
<p>I combined 14 models to create the pseudolabels, with weights determined by Optuna within the range of (0, 1).</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. roberta-large</td>
<td>0.8823</td>
<td>0.269</td>
</tr>
<tr>
<td>2. muppet-roberta-large</td>
<td>0.8817</td>
<td>0.450</td>
</tr>
<tr>
<td>3. roberta-base</td>
<td>0.8742</td>
<td>0.016</td>
</tr>
<tr>
<td>4. bart-large</td>
<td>0.8782</td>
<td>0.147</td>
</tr>
<tr>
<td>5. bart-large-mnli</td>
<td>0.8786</td>
<td>0.291</td>
</tr>
<tr>
<td>6. deberta-large</td>
<td>0.8844</td>
<td>0.760</td>
</tr>
<tr>
<td>7. deberta-base</td>
<td>0.8753</td>
<td>0.373</td>
</tr>
<tr>
<td>8. deberta-large-mnli</td>
<td>0.8845</td>
<td>0.919</td>
</tr>
<tr>
<td>9. deberta-v3-large</td>
<td>0.8833</td>
<td>0.584</td>
</tr>
<tr>
<td>10. ernie-large</td>
<td>0.8794</td>
<td>0.059</td>
</tr>
<tr>
<td>11. electra-large-discriminator</td>
<td>0.8809</td>
<td>0.020</td>
</tr>
<tr>
<td>12. funnel-large</td>
<td>0.8797</td>
<td>0.953</td>
</tr>
<tr>
<td>13. deberta-v3-base</td>
<td>0.8773</td>
<td>0.491</td>
</tr>
<tr>
<td>14. roberta-large-mnli</td>
<td>0.8819</td>
<td>0.086</td>
</tr>
</tbody>
</table>
<p>The ensemble was achieved by merely averaging character logits, ensuring that models were not blended across folds to avoid leakage. Each fold model labeled one-fifth of the unlabeled data (approximately 110k). Following pseudolabeling, I trained the initial 14 models for one epoch, in addition to two more deberta-xlarge models for the final prediction. The process was largely unchanged.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. roberta-large</td>
<td>0.8912</td>
</tr>
<tr>
<td>2. muppet-roberta-large</td>
<td>0.8910</td>
</tr>
<tr>
<td>3. roberta-base</td>
<td>0.8872</td>
</tr>
<tr>
<td>4. bart-large</td>
<td>0.8893</td>
</tr>
<tr>
<td>5. bart-large-mnli</td>
<td>0.8901</td>
</tr>
<tr>
<td>6. deberta-large</td>
<td>0.8930</td>
</tr>
<tr>
<td>7. deberta-base</td>
<td>0.8911</td>
</tr>
<tr>
<td>8. deberta-large-mnli</td>
<td>0.8927</td>
</tr>
<tr>
<td>9. deberta-v3-large</td>
<td>0.8928</td>
</tr>
<tr>
<td>10. ernie-large</td>
<td>0.8916</td>
</tr>
<tr>
<td>11. electra-large-discriminator</td>
<td>0.8917</td>
</tr>
<tr>
<td>12. funnel-large</td>
<td>0.8896</td>
</tr>
<tr>
<td>13. deberta-v3-base</td>
<td>0.8895</td>
</tr>
<tr>
<td>14. roberta-large-mnli</td>
<td>0.8911</td>
</tr>
<tr>
<td>15. deberta-xlarge</td>
<td>0.8948</td>
</tr>
<tr>
<td>16. deberta-v2-xlarge-mnli</td>
<td>0.8940</td>
</tr>
</tbody>
</table>
<h2>Final submissions</h2>
<h3>Submission 1: CV 0.8967 Public LB 0.890 Private LB 0.892</h3>
<table>
<thead>
<tr>
<th>Model</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>8. deberta-large-mnli</td>
<td>0.240</td>
</tr>
<tr>
<td>9. deberta-v3-large</td>
<td>0.609</td>
</tr>
<tr>
<td>15. deberta-xlarge</td>
<td>0.968</td>
</tr>
<tr>
<td>16. deberta-v2-xlarge-mnli</td>
<td>0.188</td>
</tr>
</tbody>
</table>
<h3>Submission 2: CV 0.8961 Public LB 0.889 Private LB 0.891</h3>
<table>
<thead>
<tr>
<th>Model</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. roberta-large</td>
<td>0.339</td>
</tr>
<tr>
<td>7. deberta-base</td>
<td>0.692</td>
</tr>
<tr>
<td>8. deberta-large-mnli</td>
<td>0.306</td>
</tr>
<tr>
<td>9. deberta-v3-large</td>
<td>0.821</td>
</tr>
<tr>
<td>10. ernie-large</td>
<td>0.645</td>
</tr>
</tbody>
</table>
<h1>Conclusion</h1>
<p>As I heavily relied on my CV, I felt disheartened when the CV/LB correlation deteriorated after my CV surpassed 0.89. Nevertheless, I had no choice but to trust it, as I had limited submission quota to probe the leaderboard. Special thanks to Prof. Ross at NYU Shanghai for providing me with the necessary GPU resources, which enabled me to train numerous models within a short time frame.</p>
<p>I mainly used the data pipeline (generating labels, evaluation functions, etc.). For training, I used the huggingface trainer to make things simpler.</p>
|
NBME - Score Clinical Patient Notes
|
13th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, thanks to kaggle team and organizers for hosting this competition!<br>
I learned a lot from this competition, especially for pseudo labeling because this is my first time to use pseudo labeling.</p>
<h1>Overview of our solution</h1>
<p>Our models are basically same as others. Our final submissions consist of the ensemble of deberta-based QA-style models trained with pseudo labeled instances.<br>
The network structure is bert (last hidden state) -> dropout -> fc (output_dim=1) -> BCE loss.<br>
We simply took ensemble over the output probabilities with the uniform weight.</p>
<h1>Pseudo labeling</h1>
<p>What we tried for PL at the beginning was selecting trustworthy pseudo instances by setting a threshold over the predicted probabilities to reduce the noise.<br>
Unfortunately, we found that setting any threshold got a worse LB score than no threshold (precisely, no-threshold is effectively the same as threshold=50%).<br>
We also tried the confidence-based selection method (<a href="https://openreview.net/forum?id=-ODN6SbiUU" target="_blank">https://openreview.net/forum?id=-ODN6SbiUU</a> ), but it also didn't outperform the one with no threshold (I was personally sad about this result).<br>
My thought is that we need difficult instances to improve the model's decision ability, but selection methods remove such instances, so there is a trade-off. As many prior works have succeeded by selecting reliable instances, there should be some factor that makes selection not work well for this dataset, but I'm not sure which it is because of my lack of knowledge (maybe if the teacher model is good enough, the selection becomes relatively unnecessary?)</p>
<p>Although threshold-based selection doesn't work well, it's also true that the pseudo instances are noisy.<br>
Instead of setting a threshold, we tried building pseudo instances for each CV split and taking the intersection of them (i.e., retain an instance only if all CV models output the same predictions for it). This removes roughly 1/3 of the pseudo instances and boosted LB by +0.001~0.002.<br>
The drawback is that this leaks more OOF information, so our CV scores go to the moon🚀 (0.91~0.92).<br>
We also tried building pseudo instances from the ensemble output over CV and it gives similar CV/LB scores to the intersection version. I feel the intersection version is a little further from direct leakage of OOF than the ensemble version, so I decided to use the intersection version.</p>
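<p>A minimal sketch of the intersection idea (array shapes and the hard-label threshold are assumptions): keep a pseudo-labelled instance only if every CV model makes the same hard prediction for every token of that instance.</p>
<pre><code>import numpy as np

def intersect_pseudo_labels(fold_preds, threshold=0.5):
    # fold_preds: list of arrays, one per CV model, each of shape (n_instances, seq_len)
    hard = [(p > threshold).astype(int) for p in fold_preds]   # per-model hard labels
    agree = np.all([h == hard[0] for h in hard], axis=0)       # token-wise agreement across models
    keep = agree.all(axis=-1)                                  # keep instance only if all tokens agree
    return keep, hard[0]
</code></pre>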
<h1>Training settings</h1>
<p>We used RAdam for the optimizer. We kept the learning rate fixed during training and didn't use any learning rate scheduler such as cosine annealing.<br>
We clipped the gradient by norm with the maximum norm of 1.0.<br>
We initialized the weight of the output fc layer by transformers.PreTrainedModel._init_weights method, and this really helped to stabilize the training.<br>
No MLM training.</p>
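<p>A minimal sketch of that output-layer initialization (the model name and wrapper are placeholders; <code>_init_weights</code> is the private initializer that transformers models apply to their own sub-modules):</p>
<pre><code>import torch.nn as nn
from transformers import AutoModel

backbone = AutoModel.from_pretrained("microsoft/deberta-v3-large")
fc = nn.Linear(backbone.config.hidden_size, 1)
backbone._init_weights(fc)  # apply the backbone's standard initialization to the fresh head
</code></pre>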
<h1>Tuning</h1>
<p>When I used deberta-v3-large, I tried tuning all hyper-parameters but most parameters affected the CV score only a little. The exception is the learning rate, so for other deberta-variants, I used same values as deberta-v3-large for hyper-parameters other than the learning rate and only tuned the learning rate for each model.</p>
<h1>Ensemble</h1>
<ul>
<li>v3-large + v3-large (other CV seed) + v1-xlarge + v1-large<br>
LB 0.893, Private 0.892</li>
<li>v3-large + v3-large (other CV seed) + v3-large (one more other CV seed) + v1-xlarge + v2-xlarge<br>
LB 0.893, Private 0.892</li>
</ul>
<p>Single (w/ PL):</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>LB</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>v3-large</td>
<td>0.889</td>
<td>0.889</td>
</tr>
<tr>
<td>v3-large (other CV seed)</td>
<td>0.890</td>
<td>0.891</td>
</tr>
<tr>
<td>v3-large (one more other CV seed)</td>
<td>0.888</td>
<td>0.889</td>
</tr>
<tr>
<td>v1-xlarge</td>
<td>0.888</td>
<td>0.891</td>
</tr>
<tr>
<td>v1-large</td>
<td>0.886</td>
<td>0.888</td>
</tr>
<tr>
<td>v2-xlarge</td>
<td>0.888</td>
<td>0.889</td>
</tr>
</tbody>
</table>
<p>CV n_splits=4 for all.</p>
<p>Again, thank you all for this great competition!</p>
<p>Thanks! fixed!</p>
<p>I used RTX 3090 (24GB).</p>
|
NBME - Score Clinical Patient Notes
|
14th Place Solution : pretrained with pseudo labels
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank kaggle and the competition organizers for hosting such an interesting competition.<br>
Secondly, I want to thank my teammates, <a href="https://www.kaggle.com/harshit92" target="_blank">@harshit92</a> , <a href="https://www.kaggle.com/laplaceplanet" target="_blank">@laplaceplanet</a> , <a href="https://www.kaggle.com/xbustc" target="_blank">@xbustc</a> . It was really nice to be in a team with you guys.</p>
<h1>1. Summary</h1>
<p>1.1 The main part of our solution is pre-training with pseudo-labels.</p>
<p>1.2 Pseudo-labels were used not only for 1.1, but also to increase training data.</p>
<p>1.3 Submissions:</p>
<ul>
<li><p>We chose first submission with highest <strong>LB</strong>.<br>
(public score 0.893: <strong>Public rank10th</strong> but private score 0.892)<br>
This one had a better public rank, but the private rank was better with the following second submission:</p></li>
<li><p>We chose second submission with the highest <strong>CV</strong>.<br>
(public score 0.893: private score 0.892 (<strong>Private rank 14th</strong>).<br>
<strong>Trust cv</strong> is very important. </p></li>
</ul>
<p>This strategy is referred by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/competitions/petfinder-pawpularity-score/discussion/301015" target="_blank">PetFinder solution</a>.</p>
<pre><code>Final submission models (the second one) : Ensemble weights were all 0.25.
- deberta v3 large (pre-training with pseudo-labels) lb 0.889
- electra (pre-training with pseudo-labels) lb 0.886
- deberta v3 large (increase training data with pseudo-labels) lb 0.888
- deberta v1 xlarge (pre-training with pseudo-labels & increase training data with pseudo-labels) lb 0.889
</code></pre>
<p>------------------Details below-------------</p>
<h1>2. Model making</h1>
<p>My model is mainly created by merging the <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> <a href="https://github.com/abhishekkrthakur/long-text-token-classification" target="_blank">code</a> made in Feedback competition and <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> <a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train" target="_blank">code</a>.<br>
Main tips are as follows.</p>
<ul>
<li>mixup <a href="https://www.kaggle.com/shujun717" target="_blank">@shujun717</a> <a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313235" target="_blank">code</a></li>
<li>random mask</li>
<li>multi sample drop out</li>
<li>5 times validation in one epoch</li>
<li>8bit adam <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> <a href="https://www.kaggle.com/code/nbroad/8-bit-adam-optimization/notebook?scriptVersionId=85409379" target="_blank">code</a></li>
<li>gradient checkpoint</li>
<li>change lr between backborn and head</li>
<li>no max_len in the tokenizer function (just align sizes within each batch)</li>
<li>sort validation data by the length of the sentence</li>
</ul>
<p>Our best models without pseudo labels are public lb 0.887 for deberta v1 xlarge and 0.887 for deberta v2 xlarge.</p>
<h1>3. Pseudo labeling</h1>
<h2>3.0 How to avoid the leakage from pseudo-labeling</h2>
<p>As already reported in other solutions and <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129840" target="_blank">forums</a>, wrong usage of pseudo-labels will cause a leak in this competition. Specifically, if there are a lot of leaks, the cv will be 0.9 or higher.<br>
This is likely to occur if there is text in the pseudo-label data that is similar to the validation text.<br>
Therefore, to avoid this, the pseudo-labels should not be calculated as an average of the fold predictions; each fold's own predictions should be used as-is.</p>
<p><img src="https://raw.githubusercontent.com/chumajin/NBME/main/leak1.jpg" alt=""><br>
<img src="https://raw.githubusercontent.com/chumajin/NBME/main/leak2.jpg" alt=""></p>
<h2>3.1 Method 1 : pre-training with pseudo labeling(Public LB~+0.003)</h2>
<p>This is <a href="https://www.kaggle.com/xbustc" target="_blank">@xbustc</a> 's great idea. He made the strong model for us as follows.</p>
<p>Step 1 : normal training from train.csv and making models.</p>
<p>Step 2 : run inference on the 612602 ids from the patient notes in order to make pseudo labels</p>
<p>Step 3 : 50% of the pseudo label data made in Step 2 was used as train data,<br>
and training was done for just 1 epoch</p>
<p>Step 4 : He trained the 14300 ids from train.csv again by loading the weights made in Step 3.</p>
<p>As a result, deberta v3 large is boosted from public lb 0.886 to 0.889.</p>
<p>electra is boosted from public lb 0.883 to 0.886. </p>
<p>These models are very strong when we ensemble them.</p>
<h2>3.2 Method 2 : increase training data with pseudo-labels(Public LB~+0.004)</h2>
<p>This is just increasing the training data in my case.</p>
<p>Step 1 : normal training from train.csv and making models.</p>
<p>Step 2 : run inference on 14300×5 ids from the patient notes in order to make pseudo labels</p>
<p>Step 3 : For each fold, I added a different set of pseudo-labels.</p>
<p>The 14300 pseudo labels for each fold were added to that fold's train data (5 kfold)<br>
(all train ids in 1 epoch = 14300 ids × 4/5 from train.csv and 14300 from pseudo labels).</p>
<p>This boosted a deberta v3 large from lb 0.884 (cv 0.8869) to 0.888 (cv 0.89084).</p>
<p>And the deberta v1 xlarge was boosted from lb 0.887 (cv 0.8871783) to 0.888 (cv 0.890706).</p>
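<p>A minimal sketch of Method 2 (column and variable names are assumptions): each fold gets its own 14,300 pseudo-labelled notes appended to its training split, so no fold sees pseudo labels derived from its own validation notes.</p>
<pre><code>import pandas as pd

def make_fold_train(train_df, pseudo_chunks, fold):
    # train_df: labelled data with a "fold" column; pseudo_chunks[fold]: 14300 pseudo-labelled rows
    tr = train_df[train_df["fold"] != fold]            # ~14300 * 4/5 labelled rows
    ps = pseudo_chunks[fold]                           # pseudo labels reserved for this fold
    return pd.concat([tr, ps], ignore_index=True)
</code></pre>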
<h2>3.3 The combination of #3.1 and #3.2 (~ more Public LB +0.001 )</h2>
<p>This is just the combination of Method 1 and 2.<br>
And a deberta v1 xlarge boosted from lb 0.888(cv 0.890706) to 0.889(cv 0.89112)</p>
<h1>4. Ensemble</h1>
<h2>4.1 charpred ensemble</h2>
<p>We used the char-pred ensemble. We tried Weighted Box Fusion (WBF), but char-pred was better. This is because the resolution of the prediction signal is higher than WBF in our case, and the target of this competition is the char location.</p>
<h2>4.2 model selection</h2>
<p>To be honest, we had the hardest time here. After our team's rank went up to 2, it was necessary to select models by cv in order to avoid a big shake-down and get a better shake-up. We spent a lot of time there (e.g. making leak-free models and so on), but we couldn't obtain completely pseudo-leak-free models. Still, we had to choose the optimal solution from these models.</p>
<p>Models Used:</p>
<ul>
<li>deberta v1 xlarge, v2 xlarge, v3 xlarge, large-mnli</li>
<li>roberta large</li>
<li>electra</li>
</ul>
<p>Methods adopted for Training:</p>
<ul>
<li>using not only BCE but cross-entropy</li>
<li>remove pseudo samples whose pn_history is similar to train samples</li>
<li>2nd around pseudo</li>
<li>pretrain with pseudo</li>
<li>increasing with pseudo</li>
</ul>
<p>Basically, the models were selected by calculating cv using the hill-climb method. However, for models that were suspected of leaking even a little, we set a rule to adopt only those that improved both cv and lb.<br>
As a result, we were able to select submissions whose private scores were close to optimal, and we were able to prevent a large shake-down.<br>
The one with the best public score was bad on the private score, so we think we made the right choice.</p>
<h2>4.3 ensemble weights</h2>
<p>We tried to use the Nelder-Mead method, but since there was a possibility of a leak, we decided not to use it. The weights were all set to 0.25.</p>
<h2>4.4 final submission</h2>
<p>Our final cv was 0.898527, with a small pseudo-label leak. The final models are listed in the Summary chapter above.</p>
<h1>5. post-process</h1>
<p>We just use the <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> public <a href="https://www.kaggle.com/code/theoviel/roberta-strikes-back" target="_blank">code</a>. I think there were improvements, as in other people's solutions.</p>
<h1>6. What didn't work for us</h1>
<ul>
<li>MLM (other team is working…)</li>
<li>LSTM on output layer of transformer</li>
<li>Add feature text at the start, instead of later</li>
<li>Added class weights based on feature text frequency, and misclassification based on oof data</li>
<li>Adding new tokens to vocabulary (like myocardial / medical terms)</li>
<li>lower case of input</li>
<li>LGBM for post process</li>
</ul>
<h1>Acknowledgments</h1>
<p>We couldn't get this score on our own. Thanks to everyone who shared past knowledge and code! We really appreciate that.<br>
Special thanks to <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/shujun717" target="_blank">@shujun717</a>, <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>, <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>, <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a></p>
|
NBME - Score Clinical Patient Notes
|
18th place with only Kaggle and free Colab
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I trained all final models on Kaggle, Colab was too slow and unstable for that so I used it only for experiments.</p>
<p>I was able to use only around 20 000 samples from pseudo labels (it's just like 3%). <br>
I did finetuning for 1 epoch with 20_000 pseudo labels + ~11_000 training data and it took around 100 minutes per fold.</p>
<p>Some other times from my training (for deberta-v3-large):</p>
<ul>
<li>MLM pretraining (1 epoch) - 3 hour</li>
<li>Training 5 folds (4 epochs) - 5 x 4 x 37 minutes - a little over 12 hours in total</li>
<li>Generating 20_000 pseudo labels - 10 minutes per fold</li>
<li>Finetuning on 20k pseudo labels (1 epoch) - 100 minutes per fold</li>
</ul>
<p>So my best model (1) took around 25 hours to train from start to finish.<br>
With the deberta-large-mnli model (2) I had to take a shortcut and just train without folds because the training was even slower than deberta-v3-large.</p>
<p>I estimated the times 2 weeks before end of competition and planned what I can do and how much pseudo labels I will be able to use (unfortunately not much but it did boost the score)</p>
<table>
<thead>
<tr>
<th>model</th>
<th>without pseudo labels public / private</th>
<th>with pseudo labels public / private</th>
</tr>
</thead>
<tbody>
<tr>
<td>1) deberta-v3-large</td>
<td>0.887 / 0.889</td>
<td>0.888/ 0.890</td>
</tr>
<tr>
<td>2) deberta-large-mnli</td>
<td>0.881 / 0.881</td>
<td>0.885 / 0.886</td>
</tr>
</tbody>
</table>
<p>Thanks. You are right I used 11k for training and ~3k for validation but the patient_notes.csv has 42146 records and train.csv contains labels only for 1000 of the patient_notes.<br>
So you can merge patient_notes.csv with features.csv using case_num to create unlabeled data (you will get 612602 additional records) which you can then use to generate pseudo labels.</p>
<pre><code>features = pd.read_csv('../input/nbme-score-clinical-patient-notes/features.csv')
patient_notes = pd.read_csv('../input/nbme-score-clinical-patient-notes/patient_notes.csv')
pseudo_labels_df = patient_notes.merge(features, on=['case_num'], how='left')
# remove records that are in train.cvs from pseudo_labels_df
# predict location using trained model on pseudo_labels_df
# use predictions from pseudo_labels_df to finetune model
</code></pre>
<p>It is mentioned in the competition data description:</p>
<blockquote>
<p>patient_notes.csv - A collection of about 40,000 Patient Note history portions. Only a subset of these have features annotated. You may wish to apply unsupervised learning techniques on the notes without annotations. The patient notes in the test set are not included in the public version of this file.</p>
</blockquote>
<p>First, big thanks to NBME and Kaggle for organizing such an interesting competition.</p>
<p>I am writing this mainly to show it's possible to get good results without owning or renting expensive GPUs even in competitions like this. I did most experiments on free Colab (T4 gpus) with a subsample of data and smaller models; when something looked promising I tested the idea with a bigger model and more data, and eventually trained and tested one full fold on Kaggle.</p>
<p>Because I had limited resources for experiments, I went through all discussions in the NBME competition and read solutions of similar competitions to find out what worked well, so I want to thank everybody for sharing their knowledge in discussions. It was my first NLP competition and without the kaggle community I wouldn't have been able to reach such good results. </p>
<p>What worked for me:</p>
<ul>
<li>Keeping line breaks - changing the tokenizer as described <a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313330" target="_blank">here</a></li>
<li>Converting texts and features to lower case</li>
<li>Adding extra tokens for line breaks: ["\r", "\n", "\t"] (see the sketch after this list)</li>
<li>Adding extra tokens for some shortcuts: ["cc","hpi", "pmhx", "pmh", "rx", "fhx", "shx", "phi", "meds", "sh", "fh", "ros", "psh", "hx", "pshx", "fmhx"]</li>
<li>MLM pretraining (1 epoch, 0.2 mask probability, ~70% accuracy)</li>
<li>Pseudo labels</li>
<li>Post processing to fix spaces from <a href="https://www.kaggle.com/code/theoviel/roberta-strikes-back" target="_blank">Roberta strikes back</a></li>
</ul>
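<p>A minimal sketch of the extra-token trick from the list above (the token lists are copied from it; the model name and everything else are assumptions). New tokens get fresh embeddings, so the embedding matrix must be resized before MLM pretraining or fine-tuning:</p>
<pre><code>from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
tokenizer.add_tokens(["\r", "\n", "\t"])  # line-break tokens
tokenizer.add_tokens(["cc", "hpi", "pmhx", "pmh", "rx", "fhx", "shx", "phi",
                      "meds", "sh", "fh", "ros", "psh", "hx", "pshx", "fmhx"])  # abbreviations

model = AutoModel.from_pretrained("microsoft/deberta-v3-large")
model.resize_token_embeddings(len(tokenizer))  # make room for the new embeddings
</code></pre>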
<p>My ensemble is a bit of mix of everything because I didn't have resources to retrain my models so I worked with what i had:</p>
<ol>
<li>deberta-v3-large (public: 0.888, private: 0.890) - weight: 0.4<br>
5 folds, tokenizer not removing line breaks, lower case, added tokens, MLM pretraining and finetuned on pseudo labels (20k samples)</li>
<li>deberta-large-mnli (public: 0.885, private: 0.886)- weight: 0.2<br>
trained on 95% training data, tokenizer not removing line breaks, lower case, added tokens, MLM pretraining and finetuned on pseudo labels (20k samples)</li>
<li>deberta-v3-large (public: ???, private: ???)- weight: 0.4<br>
2 out of 5 folds (ran out of GPU time), normal tokenizer, no extra tokens, MLM pretraining and finetuned on pseudo labels (20k samples)</li>
</ol>
<p>Models 1 and 2 used pseudolabels that were averaged from all models (1,2,3), which boosted the CV (probably a leak) and the single model score but didn't help in the ensemble as much as I expected. So for model 3 I used pseudo labels only from its own predictions (averaged across folds), which boosted CV (leak), the single model score, and helped in the ensemble more (I think).</p>
<p>Final prediction was weighted sum of character level predictions from the models with post processing to fix spaces.</p>
<p>Special thanks to <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> for <a href="https://www.kaggle.com/code/theoviel/roberta-strikes-back" target="_blank">Roberta strikes back</a>, <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> for great <a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train" target="_blank">baseline</a> and <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for his <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/315707" target="_blank">experiments</a>.</p>
<p>First I used StratifiedGroupKFold similar to splitting the training data (groups=pn_num, y=case_num), but when I realized I had no chance of using so much data, I sampled with pandas on the folds and used just a subsample of fold 0. </p>
<p>Like this</p>
<pre><code>sgkf = StratifiedGroupKFold(n_splits=CFG_pl.n_folds, random_state=CFG_pl.seed, shuffle=True)
for n, (train_index, val_index) in enumerate(sgkf.split(X=pseudo_labels_df , y=pseudo_labels_df['case_num'], groups=pseudo_labels_df['pn_num'])):
pseudo_labels_df.loc[val_index, 'fold'] = int(n)
pseudo_labels_df = pseudo_labels_df.sample(n=100_000, random_state=CFG_pl.seed).reset_index(drop=True)
</code></pre>
<p>I tried just random sampling but it didn't improve the score as much, so I think stratifying across case_num helps.</p>
<p>Thanks, I don't think the upside was very big in the end. With smaller models (deberta-v3-base) and a subsample of data I saw a ~0.004 increase in CV, but with full data and deberta-v3-large the CV difference got smaller; still, the model with additional tokens scored +0.001 on the leaderboard (public and private).</p>
<p>Thanks :-D to be honest I couldn't have done it without Kaggle community and all the people sharing their knowledge and ideas in discussions :-)<br>
But you got me thinking about solo grandmaster :-D not sure I can manage though, there are so many good data scientists here that getting into gold zone is getting really hard.</p>
<p>Hi, thanks and sorry for late reply, got busy with work. </p>
<p>I didn't do anything very special, I just used a smaller batch size with gradient_accumulation_steps. Both deberta-v3-large and deberta-large-mnli were able to run on Kaggle with batch size 3 and gradient_accumulation_steps = 2. I think the important part was to optimize the code so you don't save any output, and to try to clean up memory well. </p>
<p>The huggingface website has a nice article about tricks and techniques how to squeeze big models into single GPU: <a href="https://huggingface.co/docs/transformers/perf_train_gpu_one" target="_blank">https://huggingface.co/docs/transformers/perf_train_gpu_one</a> <br>
and <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/321033" target="_blank">some people</a> managed to train v2-xlarge on colab without freezing layers using <a href="https://github.com/facebookresearch/bitsandbytes" target="_blank">bitsandbytes library</a></p>
<p>Some more details about my MLM for anyone interested:<br>
I did MLM training with 40000 samples (mask probability = 0.2) and used 2146 samples for validation. Both models ran around 3 hours, v3 got 0.71 accuracy and mnli 0.77 (I had high hopes for mnli but it didn't perform as well as v3 in the end). I tried doing MLM for 3 epochs but I didn't see much improvement so I just used 1 epoch in the end.</p>
|
NBME - Score Clinical Patient Notes
|
19th Solution: Pseudo label and PP
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'd like to thank host and kaggle for organizing this great competition! <br>
And thanks to all participants for the useful codes and interesting discussions!<br>
And also thank you to all of my excellent teammates(@naoism, <a href="https://www.kaggle.com/yosukeyama" target="_blank">@yosukeyama</a>, <a href="https://www.kaggle.com/mst8823" target="_blank">@mst8823</a>)!</p>
<h2>Summary</h2>
<p>ensemble of 5 models. some post processing.<br>
cv:0.8971, private lb:0.892</p>
<h2>MLM pretraining</h2>
<p>The accuracy was increased by about 0.002.</p>
<ul>
<li>mlm_probability=0.15. using all unlabeled data.</li>
</ul>
<h2>Models</h2>
<p>three deberta-v3-large. one deberta-v2-xlarge. one deberta-v1-large.<br>
The loss function for all models is BCEwithLogitsLoss (we tried jaccard and KLD, but bce had the best cv).</p>
<ul>
<li><p>model1: </p>
<ul>
<li>deberta-v3-large</li>
<li>cv:0.8875, private lb: 0.889</li>
<li>including \n characters like this <a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313330" target="_blank">discussion</a></li></ul></li>
<li><p>model2: </p>
<ul>
<li>deberta-v3-large</li>
<li>cv:0.8957, private lb: 0.890</li>
<li>including '\n' characters</li>
<li>awp</li>
<li>mask augmentation: prob=0.8, ratio=0.03</li>
<li>pseudo label<ul>
<li>Pretraining with pseudo-labels generated from unlabeled data. Finetune with pseudo-labels generated from labeled data.</li></ul></li></ul></li>
<li><p>model3:</p>
<ul>
<li>deberta-v3-large.</li>
<li>cv: 0.890. private lb: - </li>
<li>awp</li>
<li>mask augmentation: prob=1.0, ratio=0.2</li>
<li>multi sample dropout</li>
<li>pseudo label(soft pseudo label)<ul>
<li>Trained with 11,000 random samples from unlabeled data while maintaining labeled data.</li></ul></li></ul></li>
<li><p>model4:</p>
<ul>
<li>deberta v2 xlarge</li>
<li>cv: 0.882 private lb: -</li>
<li>no mlm.</li>
<li>Using bnb (8-bit optimizer) to increase the batch size, because a small batch size made it difficult to converge.</li></ul></li>
<li><p>model5:</p>
<ul>
<li>debera v1 large</li>
<li>cv: 0.8934, private lb: 0.891</li>
<li>multi sample dropout</li>
<li>pseudo label<ul>
<li>Pretraining with pseudo-labels and labeled(train) data. Finetune with labeled data.</li></ul></li></ul></li>
</ul>
<h2>pseudo label</h2>
<p><a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a> implemented it. Thank you very much.<br>
Only model3 uses the inference results from one model.<br>
Other models were using the ensemble results of three models with good cv without pseudo.</p>
<h2>ensamble</h2>
<ul>
<li>Char-probability-level ensemble: ensemble the 5-fold ensemble results of each model.</li>
<li>The threshold uses the best cv value.</li>
<li>Combining models trained with and without pseudo labels improves accuracy.</li>
</ul>
<h2>post process</h2>
<ul>
<li>Predictions are misplaced if a word is not immediately preceded by a space (e.g., "hello), which happens due to offset mapping issues. In such a situation, the predicted start is shifted by one, so we fix it. </li>
<li>Similarly, predicted spans starting at index 1 are modified to start at 0.</li>
</ul>
<ul>
<li>Correction for continuous whitespace</li>
</ul>
<ul>
<li><p>Knowledge-based pp</p>
<ul>
<li><a href="https://www.kaggle.com/yosukeyama" target="_blank">@yosukeyama</a> found many erros. Thank you very much.</li>
<li>e.g </li></ul>
<pre><code># feature_num 105: Incorrect because it is urine.
# No-bloody-bowel-movements
test.loc[(test["str_annot"].str.contains("urine")) & (test["feature_num"] == 105), "location"] = ""
test.loc[(test["str_annot"].str.contains("URINE")) & (test["feature_num"] == 105), "location"] = ""
</code></pre>
<ul>
<li>cv improved, but the improvement in the test data was smaller, probably because there were fewer errors like that in the test data.</li></ul></li>
</ul>
<h2>Others</h2>
<ul>
<li>Being unable to predict empty annotations ("[]") hurt accuracy, since micro-averaged F1 is the evaluation metric. So we tried multitask learning, but it was difficult to solve the problem because most of the errors were caused by annotation errors.</li>
</ul>
<ul>
<li>For the purpose of automatic pp, we built a model that predicted whether the input inference result would have the correct label, but it did not work.</li>
</ul>
<ul>
<li>Stacking: cv increased by about 0.01, but it had no effect on test data. </li>
</ul>
<ul>
<li>The dynamic padding implemented by <a href="https://www.kaggle.com/mst8823" target="_blank">@mst8823</a> accelerated inference, allowing for ensembles of 25 models or more. Thank you very much.</li>
</ul>
<p>Thanks. It means that there is an 80% chance that 3% of the text will be masked. A harder ratio (20%) was also effective in improving the score.</p>
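<p>A minimal sketch of that mask augmentation (which tokens are eligible and how the indices are drawn are assumptions): with probability <code>prob</code> per sample, a fraction <code>ratio</code> of its tokens is replaced by the mask token.</p>
<pre><code>import random
import torch

def mask_augment(input_ids, mask_token_id, prob=0.8, ratio=0.03):
    # input_ids: 1-D tensor of token ids for one sample
    if random.random() &lt; prob:
        input_ids = input_ids.clone()
        n = input_ids.size(0)
        k = max(1, int(n * ratio))
        idx = torch.randperm(n)[:k]          # positions to mask
        input_ids[idx] = mask_token_id
    return input_ids
</code></pre>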
|
NBME - Score Clinical Patient Notes
|
1st solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! Yes I used optuna. My cv also did not correlate well with Public LB. Therefore, our team chose two final submissions, one with a high cv and one with a high PublicLB. Finally, the best Private LB was the one with the high cv.</p>
<p>Yes, I load my own trained weights in all but the MLM. I don't see any difference between it and the MLM weights.</p>
<blockquote>
<p>When moving from ATL to ATaP, can you explain the process a bit?</p>
</blockquote>
<p>There is no special process. I created the pseudo label as 3 channels and train it as target.</p>
<p>I have learned a lot from your solution, thank you!</p>
<p>Thank you!!<br>
I trained folds 0~4 and built the cv on folds 0~4. I didn't train folds 5~9, because training takes time and there was not enough time to submit. ref: <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/323095#1777962" target="_blank">https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/323095#1777962</a> </p>
<p>Sorry, I have now made it public.</p>
<p>Yes MLM's contribution is very significant. I posted the link in the Notebook section above.<br>
I also tried freezing the parameters, but did not adopt it because the loss did not go down.</p>
<p>First of all, thanks to the competition organizers for hosting this competition, and to my great teammate (@ryuichigt). And thanks to the great notebooks and discussions, I learned a lot and was able to build them into my final solution.</p>
<h2>Solution Summary</h2>
<p>The following six models are blended to maximize cv. Each model uses MLM, pseudo labeling, AWP and augmentation, and is trained on 5 out of 10 GroupStratifiedKFold splits plus all data; we built our CV from the resulting 50% of train-data OOF predictions. The final submit uses some weights of those six (folds 0, 1, 2, 3, 4, all) due to time limitations. <a href="https://www.kaggle.com/code/currypurin/nbme-final-cv-ver2?scriptVersionId=94661636" target="_blank">Notebook</a></p>
<table>
<thead>
<tr>
<th>exp-num</th>
<th>Hugging Fase Model</th>
<th>cv</th>
<th>PubLB<br>(5/10folds+all)</th>
<th>PriLB<br>(5/10folds+all)</th>
<th>final submit<br>used weights</th>
</tr>
</thead>
<tbody>
<tr>
<td>107</td>
<td>deberta-v3-large</td>
<td>0.8925</td>
<td>0.8931</td>
<td>0.8929</td>
<td>4/10folds+all</td>
</tr>
<tr>
<td>092</td>
<td>deberta-v2-xlarge</td>
<td>0.8914</td>
<td>0.8930</td>
<td>0.8911</td>
<td>3/10folds+all</td>
</tr>
<tr>
<td>106</td>
<td>deberta-large</td>
<td>0.8913</td>
<td>0.8922</td>
<td>0.8925</td>
<td>2/10folds+all</td>
</tr>
<tr>
<td>102</td>
<td>deberta-v2-xlarge</td>
<td>0.8925</td>
<td>0.8928</td>
<td>0.8940</td>
<td>3/10folds+all</td>
</tr>
<tr>
<td>101</td>
<td>deberta-large</td>
<td>0.8907</td>
<td>0.8918</td>
<td>0.8934</td>
<td>4/10folds</td>
</tr>
<tr>
<td>104</td>
<td>deberta-v2-large</td>
<td>0.8921</td>
<td>0.8925</td>
<td>0.8920</td>
<td>2/10folds+all</td>
</tr>
</tbody>
</table>
<p><img src="https://user-images.githubusercontent.com/24523856/167264854-d4614212-45f3-4f2b-b9b5-0acdf6841515.jpeg" alt="NBME Model PipeLine 003"></p>
<h2>Things that worked.</h2>
<ul>
<li><p>AWP</p>
<ul>
<li><a href="https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook" target="_blank">Feedback Prize 1st Place Great Notebook</a> is very helpful.</li>
<li>trying several parameters, the following improved the cv<ul>
<li>adv_lr: 1.0, adv_eps: 0.01</li></ul></li>
<li>cv increase about 0.002</li>
<li><a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/323095#1777969" target="_blank">my code</a></li></ul></li>
<li><p><a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/316172" target="_blank">MLM</a></p>
<ul>
<li>Pre-training is performed on 'patient_notes.csv' except for the data in 'train.csv'.</li>
<li>cv increase about 0.002</li></ul></li>
<li><p>pseudo labeling</p>
<ul>
<li>Ratio, pseudo labeling data: 90%, train data: 10%</li>
<li>soft label<ul>
<li>In my experiments, soft labels improved cv score better than hard labels.</li>
<li>We tried restricting the pseudo-labeled data, e.g. keeping only notes where some character had a predicted probability above 0.95, but sampling randomly from the entire dataset improved the CV more.</li></ul></li></ul></li>
<li><p>augmentation</p>
<ul>
<li>Remove one sentence from the text.</li>
<li>Probability of augmentation, p=0.2</li>
<li>The CV score is almost unchanged; however, it seemed effective for this task and data, so we adopted it.</li>
<li>Augmentation was not applied to the pseudo-label data.</li></ul></li>
<li><p>Auxiliary Target Learning</p>
<ul>
<li>Add the beginning and end of the target as extra labels and learn with 3 channels (a minimal sketch follows this list).<ul>
<li>channel_0: Normal target</li>
<li>channel_1: start: 1, else: 0</li>
<li>channel_2: end: 1, else: 0</li></ul></li>
<li>The CV improvement is slight, but it can be used in ensembles (blends) with pseudo labels. (I think averaging the two sets of pseudo labels improves their accuracy.)</li>
<li>Checking the PrivateLB scores, it appears that this attempt was effective.</li></ul></li>
</ul>
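<p>As a rough illustration of the 3-channel auxiliary target above (this is a minimal sketch, not the actual training code; <code>char_labels</code> is a hypothetical 0/1 array marking the characters inside any annotated span):</p>
<pre><code>import numpy as np

def make_three_channel_target(char_labels: np.ndarray) -> np.ndarray:
    inside = char_labels.astype(np.float32)                    # channel_0: normal target
    prev = np.concatenate([[0.0], inside[:-1]])
    nxt = np.concatenate([inside[1:], [0.0]])
    start = ((inside == 1) & (prev == 0)).astype(np.float32)   # channel_1: span starts
    end = ((inside == 1) & (nxt == 0)).astype(np.float32)      # channel_2: span ends
    return np.stack([inside, start, end], axis=-1)             # shape (n_chars, 3)

# spans at positions 2-4 and 7-8 -> starts at 2 and 7, ends at 4 and 8
print(make_three_channel_target(np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0])).shape)  # (10, 3)
</code></pre>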
<h2>Not worked</h2>
<ul>
<li>shuffle augmentation<ul>
<li>shuffling the sentences of the text as augmentation made the CV worse.</li></ul></li>
<li>label smoothing loss </li>
<li>clip_grad_norm<ul>
<li>I tried the following two things, but the cv got worse. Maybe we need to adjust some more parameters.<ul>
<li>torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1000)</li>
<li>torch.nn.utils.clip_grad_norm_(self.model.parameters(), 10)</li></ul></li></ul></li>
</ul>
<h2>postprocessing</h2>
<ul>
<li>Remove leading and trailing spaces from the predicted spans (a minimal sketch follows this list).</li>
<li>This excellent notebook, <a href="https://www.kaggle.com/code/junkoda/be-aware-of-white-space-deberta-roberta" target="_blank">Be aware of white space [DeBERTa+RoBERTa]</a>, was very helpful.</li>
</ul>
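<p>A minimal sketch of this whitespace trimming (illustrative only; representing <code>spans</code> as a list of [start, end) pairs is an assumed format, not the exact submission code):</p>
<pre><code>def strip_space_from_spans(text, spans):
    cleaned = []
    for start, end in spans:
        while start < end and text[start].isspace():    # drop leading whitespace
            start += 1
        while end > start and text[end - 1].isspace():  # drop trailing whitespace
            end -= 1
        if start < end:
            cleaned.append([start, end])
    return cleaned

print(strip_space_from_spans(" chest pain ", [[0, 12]]))  # [[1, 11]]
</code></pre>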
<h2>cv split</h2>
<p>Up until midway through the competition, we were using 5-fold GroupKFold. Switching to 10-fold GroupStratifiedKFold was a huge improvement for our team.</p>
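<p>For reference, a minimal sketch of such a 10-fold grouped and stratified split (scikit-learn >= 1.0 provides <code>StratifiedGroupKFold</code>); stratifying on <code>case_num</code> while grouping on <code>pn_num</code> is an assumption about the exact columns used:</p>
<pre><code>import pandas as pd
from sklearn.model_selection import StratifiedGroupKFold

def assign_folds(train: pd.DataFrame, n_splits: int = 10) -> pd.DataFrame:
    train = train.copy().reset_index(drop=True)
    train["fold"] = -1
    sgkf = StratifiedGroupKFold(n_splits=n_splits, shuffle=True, random_state=42)
    # stratify on case_num, keep all rows of one patient note (pn_num) in one fold
    for fold, (_, valid_idx) in enumerate(
        sgkf.split(train, y=train["case_num"], groups=train["pn_num"])
    ):
        train.loc[valid_idx, "fold"] = fold
    return train
</code></pre>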
<h2>Notebook</h2>
<p>submit: <a href="https://www.kaggle.com/currypurin/nbme-final-submit-currypurin" target="_blank">https://www.kaggle.com/currypurin/nbme-final-submit-currypurin</a><br>
MLM: <a href="https://www.kaggle.com/code/currypurin/nbme-mlm/" target="_blank">https://www.kaggle.com/code/currypurin/nbme-mlm/</a></p>
<h2>Edit</h2>
<p>Edit: cv score added</p>
<p>Thanks!! Good parameters might depend on the batch size. The batch sizes I used are as follows.</p>
<ul>
<li>deberta-v3-large, deberta-large: 8</li>
<li>deberta-v2-large: 4 or 5</li>
</ul>
<p>Thanks!<br>
CV/LB correlation may not be improving. cv score improved about 0.001~0.002 and LB score improved 0.002~0.003. I am not sure if this is due to the change from 5fold to 10fold or from GroupKFold to GroupStratifiedKfold.</p>
<p>My AWP code. I use pl.LightningModule. I tried several patterns, but this method gave me the good cv score.</p>
<pre><code>class Learner(pl.LightningModule):
def __init__(self, model, num_train_steps, num_warmup_steps,
valid_df, wandb):
super().__init__()
self.model = model
self.criterion = nn.BCEWithLogitsLoss(reduction="none")
self.num_train_steps = num_train_steps
self.num_warmup_steps = num_warmup_steps
self.best_f1_score = 0
self.steps = 0
self.automatic_optimization = False
def training_step(self, batch, batch_idx):
opt = self.optimizers(use_pl_optimizer=True)
d_ = batch
label = d_['label']
out = self.model(d_['tokens'], d_['attention_mask'], d_["token_type_ids"])
loss = self.criterion(out.view(-1, 1), label.view(-1, 1))
loss = torch.masked_select(loss, label.view(-1, 1) != -1).mean()
opt.zero_grad()
self.manual_backward(loss)
opt.step()
if self.best_f1_score > 0.83:
adv_loss = self.awp.attack_backward(d_['tokens'], d_['attention_mask'], d_["token_type_ids"],
d_['label'], self.current_epoch)
self.manual_backward(adv_loss)
opt.step()
self.awp._restore()
self.log('Loss/train', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
sch = self.lr_schedulers()
sch.step()
self.steps += 1
lr = float(sch.get_last_lr()[0])
self.log('lr', lr, on_step=True, on_epoch=True, prog_bar=False, logger=True)
class AWP:
def __init__(
self,
model,
optimizer,
adv_param="weight",
adv_lr=1.0,
adv_eps= 0.01,
start_epoch=0,
adv_step=1,
):
self.model = model
self.optimizer = optimizer
self.adv_param = adv_param
self.adv_lr = adv_lr
self.adv_eps = adv_eps
self.start_epoch = start_epoch
self.adv_step = adv_step
self.backup = {}
self.backup_eps = {}
self.criterion = nn.BCEWithLogitsLoss(reduction="none")
def attack_backward(self, tokens, attention_mask, token_type_ids, label, epoch):
if (self.adv_lr == 0) or (epoch < self.start_epoch):
return None
self._save()
for _ in range(self.adv_step):
self._attack_step()
with torch.cuda.amp.autocast():
out = self.model(tokens, attention_mask=attention_mask, token_type_ids=token_type_ids).view(-1, 1)
adv_loss = self.criterion(out, label.view(-1, 1))
adv_loss = torch.masked_select(adv_loss, label.view(-1, 1) != -1).mean()
self.optimizer.zero_grad()
return adv_loss
def _attack_step(self):
e = 1e-6
for name, param in self.model.named_parameters():
if param.requires_grad and param.grad is not None and self.adv_param in name:
norm1 = torch.norm(param.grad)
norm2 = torch.norm(param.data.detach())
if norm1 != 0 and not torch.isnan(norm1):
r_at = self.adv_lr * param.grad / (norm1 + e) * (norm2 + e)
param.data.add_(r_at)
param.data = torch.min(
torch.max(param.data, self.backup_eps[name][0]), self.backup_eps[name][1]
)
# param.data.clamp_(*self.backup_eps[name])
def _save(self):
for name, param in self.model.named_parameters():
if param.requires_grad and param.grad is not None and self.adv_param in name:
if name not in self.backup:
self.backup[name] = param.data.clone()
grad_eps = self.adv_eps * param.abs().detach()
self.backup_eps[name] = (
self.backup[name] - grad_eps,
self.backup[name] + grad_eps,
)
def _restore(self):
for name, param in self.model.named_parameters():
if name in self.backup:
param.data = self.backup[name]
self.backup = {}
self.backup_eps = {}
</code></pre>
<p>Thank you!</p>
<blockquote>
<p>Did I understand correctly that you trained all 10 folds, but only used a subset in the submission ensemble?</p>
</blockquote>
<p>I trained fold 0~4 and all data, fold 5~9 not trained. This is because inference time is "9 hours run-time" and there is no time to submit everything.</p>
<blockquote>
<p>What was your strategy to select the folds for the final ensemble</p>
</blockquote>
<p>It's random and I tried to ensure that all fold are selected the same number of times as possible.</p>
<blockquote>
<p>Did you use the fold "all data" throughout the pipeline or only at the end?</p>
</blockquote>
<p>"all data" is used throughout the pipeline.</p>
<blockquote>
<p>How did you select the schedule for "all data" (same as that of regular folds, which was fixed; some average of the lrs that worked for the regular folds, …)?</p>
</blockquote>
<p>Same. This time I am not using early stopping, so both regular folds and all data are trained for 5 epochs based on the number of data in train.</p>
|
NBME - Score Clinical Patient Notes
|
20th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, and congratulations on the impressive 2nd place ! </p>
<p>I think your models without PLs were much stronger than mine (ours ?), so in the end I mostly did several rounds of PLs to compensate that. Roberta-large didn't help our ensemble much anyways. </p>
<p>Thanks a lot everybody for the competition, and especially to my teammates <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/horsek" target="_blank">@horsek</a> <a href="https://www.kaggle.com/shinomoriaoshi" target="_blank">@shinomoriaoshi</a> for the great experience :)</p>
<h3>Pseudo Labels</h3>
<p>Our solution (like most of the winning ones) heavily relies on pseudo-labelling. <br>
The pipeline described below summarizes it, and we report improvements on CV and LB for a single deberta-(v3)-large. </p>
<p><a href="https://ibb.co/85fj3r3"><img src="https://i.ibb.co/ncYD272/PL-drawio-2.png" alt="PL-drawio-2"></a></p>
<ul>
<li>Additional details :<ul>
<li>Pseudo labels were generated on a blend of models (usually 2 or 3 models)</li>
<li>Some of our models used 3 iterations of PLs, some used 2, some used 1</li>
<li>Models were trained with different proportions of the PL data varying from 20% to 100%</li>
<li>We had different pipelines and used different sets of pseudo-labels to keep some diversity. </li>
<li>Some of the models used soft PLs</li></ul></li>
</ul>
<p>In order to squeeze a bit more from our pipeline, some of the models were trained with pseudo labels averaged for the 5 folds. This made CV useless but resulted in stronger PLs and a consistent +0.001 public LB boost for our single models.</p>
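<p>A minimal sketch of this kind of fold-averaged soft pseudo-labelling (illustrative; <code>fold_char_probs</code> is a hypothetical list of per-fold character-probability arrays for the unlabelled notes, not our actual code):</p>
<pre><code>import numpy as np

def build_soft_pseudo_labels(fold_char_probs):
    # average the character-level probabilities of the fold models
    return np.mean(np.stack(fold_char_probs, axis=0), axis=0)

def to_hard_labels(soft_probs, threshold=0.5):
    # optional hard variant; some of the models above used the soft targets directly
    return (soft_probs > threshold).astype("float32")
</code></pre>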
<h3>Model diversity</h3>
<ul>
<li>Backbones : <ul>
<li>deberta-v3-large, deberta-large, deberta-xlarge, electra-large, roberta-large</li></ul></li>
<li>Heads : <ul>
<li>LSTM head, 2-layer CNN head on the last or 8 last layers, 1 layer fc head on the last or 8 last layers</li></ul></li>
<li>Some models were MLM-pretrained</li>
<li>Some models used data augmentations (cf comments)</li>
</ul>
<h3>Post-processing</h3>
<p>Nakama’s baseline had issues with start characters, and you could get a decent (0.001-0.002) boost simply by fixing that (cf comments). In the end we just used Theo’s Roberta notebook to convert token probabilities to character spans. We couldn’t find a more reliable post-processing.</p>
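<p>For context, a minimal sketch of that token-probability-to-character-span conversion (in the spirit of the public notebooks; the offset-mapping handling below is a simplified assumption, not the exact notebook code):</p>
<pre><code>import numpy as np

def token_probs_to_char_spans(text, token_probs, offset_mapping, threshold=0.5):
    char_probs = np.zeros(len(text))
    for prob, (start, end) in zip(token_probs, offset_mapping):
        if end > start:                   # skip special tokens with (0, 0) offsets
            char_probs[start:end] = prob  # spread the token probability over its characters
    spans, span_start = [], None
    for i, p in enumerate(char_probs):
        if p > threshold and span_start is None:
            span_start = i
        elif p <= threshold and span_start is not None:
            spans.append([span_start, i])
            span_start = None
    if span_start is not None:
        spans.append([span_start, len(text)])
    return spans
</code></pre>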
<h3>Ensembles</h3>
<p>The CV of ensembles did not correlate well with LB, probably because of label noise, so we just decided to blend together all of our strongest models. Scores are public/private LB.</p>
<ul>
<li><p>Tri (weight=0.2)</p>
<ul>
<li>deberta-large : LB 0.890/0.890</li>
<li>electra-Large : LB 0.891/0.889</li></ul></li>
<li><p>Theo (weight=0.6)</p>
<ul>
<li>deberta-v3-large : LB 0.892/0.892</li>
<li>deberta-large : LB 0.892/0.892</li>
<li>deberta-xlarge (2 folds) : LB 0.891/0.892</li>
<li>roberta-large : not submitted but about ~0.891/0.891</li>
<li>deberta-xlarge (2 folds) : not submitted but ~0.891/0.892</li></ul></li>
<li><p>Chris (weight=0.1)</p>
<ul>
<li>deberta-xlarge (2 folds) : LB 0.891/0.892</li></ul></li>
<li><p>Hiro (weight=0.1)</p>
<ul>
<li>deberta-v3-large not submitted</li></ul></li>
</ul>
<p>We had 30 submissions scoring better than the one we chose, including single models in the gold range. But the difference between a mid-range 0.892 sub and a 0.893 sub is actually really small so there’s not really anything we could do about it. </p>
<h3>Final wise words by <a href="https://www.kaggle.com/shinomoriaoshi" target="_blank">@shinomoriaoshi</a></h3>
<blockquote>
<p>Even though we got the shake-down and I missed the chance to go to the Master tier again, the learning opportunity is much more valuable.</p>
</blockquote>
<p><em>Thanks for reading !</em></p>
|
NBME - Score Clinical Patient Notes
|
28th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Kaggle and the competition hosts for a well designed NLP competition. This summarizes the effort of team Y3S and it was a great experience working with my teammates <a href="https://www.kaggle.com/sgalib" target="_blank">@sgalib</a> <a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a> and <a href="https://www.kaggle.com/smnomaan" target="_blank">@smnomaan</a> ! This is a writeup of our collective team effort. </p>
<p>Even though we were hit by the shakeup and dropped from our 11th place gold position, this was a great learning experience. </p>
<h1>Data</h1>
<p>We realized that Pseudo Labeling (PL) was going to be an important piece in the competition so we spent significant time getting good PLs using a 2 stage approach. The first set of PL were based on our best <strong>deberta-v3-large</strong> model. We trained a number of models on 5 folds using these PL and then used an ensemble of models to further improve the PL. We only used per fold models to generate PL to avoid leakage. This approach improved CV and LB significantly (+0.002). For example <strong>roberta-large</strong> was able to get to 0.890 LB with this 2 stage approach and 20% sampled PL. </p>
<h1>Models</h1>
<p>Our team worked on training different architectures with different training procedures including:</p>
<ul>
<li>Adversarial Training (AWP)</li>
<li>LSTM head for certain models</li>
</ul>
<p>The 5 fold mean CV for most of these models were 0.891x so I will include the LB scores</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Derberta-v3-large</td>
<td>0.891</td>
<td>0.892</td>
</tr>
<tr>
<td>Derberta-v2-xlarge</td>
<td>0.891</td>
<td>0.890</td>
</tr>
<tr>
<td>Derberta-v3-base</td>
<td>0.890</td>
<td>0.890</td>
</tr>
<tr>
<td>roberta-large</td>
<td>0.890</td>
<td>0.889</td>
</tr>
<tr>
<td>longformer-large</td>
<td>0.890</td>
<td>0.889</td>
</tr>
<tr>
<td>electra-large</td>
<td>0.890</td>
<td>0.890</td>
</tr>
</tbody>
</table>
<p>What is amazing is that we can even get <strong>0.890</strong> on the LB using a simple <strong>deberta-v3-base</strong> model !</p>
<h1>Post Processing</h1>
<p>We added some custom post processing based on the error analysis of OOF prediction:</p>
<ul>
<li>Dropping spans if they were less than minimum length for that feature num based on training data. </li>
<li>Modify start positions based on if the predicted text is valid (not space etc.) </li>
</ul>
<p>Post processing gave us a <code>+0.001</code> boost in CV and LB. </p>
<h1>Ensembles</h1>
<p>We used char-level averaging similar to the public approaches. We developed a faster version of ensembling which allowed us to ensemble 5-6 five-fold models (25-30 folds in total) across base/large/xlarge backbones. Weights were optimized with Optuna on the OOF CV. </p>
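<p>A minimal sketch of what that Optuna weight search over OOF predictions could look like (illustrative only; <code>oof_probs</code>, <code>labels</code> and <code>span_f1</code> are hypothetical stand-ins for the team's actual arrays and scorer):</p>
<pre><code>import numpy as np
import optuna

def optimize_weights(oof_probs, labels, span_f1, n_trials=200):
    n_models = len(oof_probs)

    def objective(trial):
        w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(n_models)])
        w = w / (w.sum() + 1e-8)                      # normalize weights to sum to 1
        blended = np.tensordot(w, oof_probs, axes=1)  # char-level weighted average
        return span_f1(blended > 0.5, labels)

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)
    best = np.array([study.best_params[f"w{i}"] for i in range(n_models)])
    return best / best.sum()
</code></pre>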
<h1>Final Selection</h1>
<p>Our final selection was an ensemble of diverse models </p>
<p>debv2-xl + debv3-large (awp) + debv3-large (no awp) + longformer-large (lstm head) + roberta-large + debv3-base: <strong>Mean CV 0.8947</strong></p>
<h1>Things that did not work for us</h1>
<ul>
<li>Different ensembling techniques like voting </li>
<li>Post-processing techniques like dropping spans whose text similarity to all train data (including PL) was below an X% threshold: while this showed a good improvement in CV, it didn't work on LB. </li>
</ul>
<p>Looking forward to the next challenge! </p>
<p>AWP was mainly used to avoid overfitting and to add model diversity; one of our final models in the ensemble used it. We saw a slight improvement in CV/LB, but I don't think it was a large boost. <br>
Yes, deberta-xlarge was v2 (fixed now) :) </p>
<p>Yes, all those single models scored between 0.889-0.892 on the private LB. Our selected ensemble degraded the private score because, I think, we relied on a CV that was affected by label noise once ensembled. There was no clear correlation between ensemble CV and LB for us, and I think <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>'s team observed the same based on their solution writeup.</p>
|
NBME - Score Clinical Patient Notes
|
30 th Place Simple Solution & Eid Mubarak
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I mean not to use Kaggle GPU for training if you have other options. Because Kaggle provides only 30 hrs GPU time weekly. It is difficult to train a model, do any experiments and inference or test the model using that short GPU time.</p>
<p>Indeed, I faced those problems a lot in this competition as I had to rely on Kaggle GPU</p>
<p>First, train the MLM model using nbroad's notebook. It will give you the new model weights and other necessary files in a folder (if you use nbroad's notebook). Let's say the folder name is <code>mlm_deberta_weights</code>. Now, to train with librauee's notebook, we need to point that notebook at the MLM model folder. </p>
<p>After that, in the CFG class, change the <code>model</code> attribute to the path of your MLM model folder, something like <code>model = '../input/mlm_deberta_weights/'</code>. Now, train your model.</p>
<p>Sorry for my poor English. I hope you get your answer. </p>
<p>First of all, Eid Mubarak [A common greeting during Eid al-Fitr (the greatest festival of Muslims) is "Eid Mubarak," which means "Blessed feast, festival, or celebration".]</p>
<p>It is a great day for me, as I celebrated <strong>two Eids</strong> at once. One is Eid al-Fitr, and the other is achieving my <strong>first solo silver</strong> in a Kaggle competition. </p>
<p>I would like to thank the organizers for hosting such nice competition. In this competition my goal was to learn from the <em>Kagglers</em>. So, I followed the public notebooks. My solution is based on the notebooks.</p>
<h1>Training</h1>
<p>For training I follow two public notebook. The top voted <a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train" target="_blank">notebook</a> by <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> and another awesome <a href="https://www.kaggle.com/code/librauee/train-deberta-v3-large-baseline" target="_blank">notebook</a> by <a href="https://www.kaggle.com/librauee" target="_blank">@librauee</a> </p>
<h1>MLM</h1>
<p>For MLM I use <a href="https://www.kaggle.com/code/nbroad/deberta-mlm-tests-nbme" target="_blank">this amazing notebook</a>. I keep the configuration unchanged but create two versions of it with different eval data.</p>
<h1>Model</h1>
<p>Basically I focused on only two models as backbone </p>
<ol>
<li>DeBerta-v1-base : For all kinds of experiments </li>
<li>DeBerta-v3-large : As a Final Model</li>
</ol>
<h1>Different Classification Head</h1>
<p>Since I chose only one backbone model for submission, I used different classification heads to bring in a small amount of diversity:</p>
<ol>
<li>Linear Head</li>
<li>Last 4 layers sum</li>
<li>Last 4 layers concat (after projecting those layer outputs; a simplified sketch follows this list)</li>
<li>LSTM Head</li>
<li>1d CNN Head</li>
<li>2 layer MLP Head</li>
</ol>
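<p>As an illustration of one of the heads above, here is a minimal sketch of a "last 4 layers concat" token-classification head (simplified: it skips the projection of the layer outputs, and the model name and sizes are assumptions):</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoModel

class Last4ConcatHead(nn.Module):
    def __init__(self, model_name="microsoft/deberta-v3-large", num_labels=1):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        hidden = self.backbone.config.hidden_size
        self.classifier = nn.Linear(hidden * 4, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        feats = torch.cat(out.hidden_states[-4:], dim=-1)  # concat last 4 layers per token
        return self.classifier(feats)                       # (batch, seq_len, num_labels)
</code></pre>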
<h1>Post Process</h1>
<p>I used exactly the same post-processing techniques as those used in <a href="https://www.kaggle.com/code/theoviel/roberta-strikes-back" target="_blank">this notebook</a> by <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> <br>
It improves LB by +0.003</p>
<h1>Ensemble</h1>
<p>For ensemble I use <a href="https://www.kaggle.com/code/motloch/nbme-ensemble-debertas" target="_blank">this notebook</a> by <a href="https://www.kaggle.com/motloch" target="_blank">@motloch</a> </p>
<h1>Some Tricks</h1>
<ol>
<li>During training time, I use some tricks like AWP that are discussed <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/315707" target="_blank">in this thread</a> by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> </li>
<li>I use 5-fold CV and observe the CV/LB score for each fold. For the final ensemble, instead of making predictions with all folds of every model, I focus on different folds of different models that show good CV/LB correlation</li>
<li>For ensemble more models I used the padding optimization trick used <a href="https://www.kaggle.com/code/anyai28/fast-inference-by-padding-optimization" target="_blank">in this notebook</a> by <a href="https://www.kaggle.com/anyai28" target="_blank">@anyai28</a> </li>
</ol>
<h1>Things I learn</h1>
<ol>
<li>Experiment is all you need</li>
<li>Following the Discussions and Notebooks in a competition </li>
<li>Trust the CV </li>
<li>Exploring competition-related papers </li>
<li>Try to avoid training on the Kaggle GPU (but I had no other option)</li>
</ol>
<p>I want to give a special thank to <a href="https://www.kaggle.com/abdulkadirguner" target="_blank">@abdulkadirguner</a> for suggesting me to submit a model to the competition when I have only few seconds GPU time <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/322593" target="_blank">in this thread</a></p>
|
NBME - Score Clinical Patient Notes
|
3rd Place Solution: Meta Pseudo Labels + Knowledge Distillation
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/thainq07" target="_blank">@thainq07</a>, thanks a lot! For MPL, you can refer to these notebooks: (1) <a href="https://www.kaggle.com/code/hengck23/playground-for-meta-pseudo-label" target="_blank">https://www.kaggle.com/code/hengck23/playground-for-meta-pseudo-label</a> (2) <a href="https://www.kaggle.com/code/conjuring92/nbme-meta-pseudo-labels" target="_blank">https://www.kaggle.com/code/conjuring92/nbme-meta-pseudo-labels</a> . For KD, I have posted the corresponding code snippet in the comment section of this discussion. Hope these references help! </p>
<p>Thanks :) </p>
<p>I used 3 labels as follows: </p>
<ul>
<li>outside tokens -> [0, 0, 0] </li>
<li>inside token which is also start of answer span -> [1, 1, 0]</li>
<li>inside token which is also end of answer span -> [1, 0, 1]</li>
<li>inside token which is also beginning and end of answer span i.e. span with only one token -> [1, 1, 1]</li>
<li>other inside tokens -> [1, 0, 0]</li>
</ul>
<p>thanks a lot :)</p>
<p>Thanks :) Congratulation to you too!</p>
<p>Thanks a lot! Congratulations to you too!</p>
<blockquote>
<p>I think you created 3 type models, teacher models, student models(labeled data finetuned) and deberta-large with KD.<br>
Which model did you use for ensemble ? Of course I think you used student models and deberta-large with KD, but you used teacher models as well?</p>
</blockquote>
<p>Yes, you are right. I didn't use the teacher model (it was strongly overfitted to the labelled data). I used two fine-tuned students in the ensemble, each of which leveraged 180k unlabelled examples during teacher-student training. These two student models then became teachers for the KD. The student model produced by KD was also included in the ensemble.</p>
<blockquote>
<p>In MPL, I think we need validation data to update teacher model. so how did you train the models with all data? Also, you mean you did not calculate CV? (= did you trusted LB score during competition?)</p>
</blockquote>
<p>Initially, I applied MPL to 2 out of my 5 CV folds, each of which gave a ~+0.005 boost. However, the training time was not feasible: each fold was taking >12 hrs in Colab. So I decided to use all train data and train two versions of MPL (each seeing 180k randomly sampled unlabelled examples). During MPL training, I monitored the student and teacher losses and had a rough idea that performance plateaus around 180k unlabelled examples. Beyond this, I trusted the public LB (I felt the public set has enough examples to be reliable; I didn't expect the shake-up).</p>
<p>Thanks a lot <a href="https://www.kaggle.com/lachlangillian" target="_blank">@lachlangillian</a> :)</p>
<p>Hi <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, Thanks a lot!! Many congratulations on your 2nd Place finish, fully deserved!! Highly appreciate your posts and sharings, very informative & insightful.</p>
<p>In the case of gradient checkpointing for deberta, I just use</p>
<p><code>
base_model = AutoModel.from_pretrained(config["base_model_path"])
base_model.gradient_checkpointing_enable()
</code></p>
<p>and <code>import torch.utils.checkpoint</code> at the start</p>
<p>Thanks a lot :) <br>
Here is a snippet from my KD training script. Hope this helps! I plan to make a demo notebook on this later.</p>
<pre><code> student = NbmeKD(config)
swa_model = AveragedModel(student)
teacher_one = NbmeModelMTLNeo.load_from_checkpoint(config["teacher_one_ckpt"], config=config)
teacher_two = NbmeModelMTLNeo.load_from_checkpoint(config["teacher_two_ckpt"], config=config)
s_optimizer = get_optimizer(student, config)
#------- get train, valid and unlabelled dataloaders -------------#
train_dl, valid_dl = get_train_valid_dataloaders(config)
unlabelled_dl = get_unlabelled_dataloader(config)
#------- prepare accelerator ------------------------------------#
accelerator = Accelerator(fp16=True)
student, teacher_one, teacher_two, s_optimizer, train_dl, valid_dl, unlabelled_dl = accelerator.prepare(
student, teacher_one, teacher_two, s_optimizer, train_dl, valid_dl, unlabelled_dl
)
print("=="*40)
print("GPU utilization after accelerator preparation:")
print_gpu_utilization()
print("=="*40)
#------- setup schedulers ---------------------------------------#
num_epochs = 1
grad_accumulation_steps = 1
warmup_pct = config["warmup_pct"]
n_train_steps_per_epoch = len(train_dl)//grad_accumulation_steps
n_mpl_steps_per_epoch = len(unlabelled_dl)//grad_accumulation_steps
n_steps_per_epoch = max(n_train_steps_per_epoch, n_mpl_steps_per_epoch)
num_steps = num_epochs * n_steps_per_epoch
num_warmup_steps = int(warmup_pct*num_steps)
s_scheduler = get_scheduler(s_optimizer, num_warmup_steps, num_steps)
#------- SWA LR & CONFIG ------------------------------------------#
swa_scheduler = SWALR(s_optimizer, swa_lr=1e-6)
swa_start = config["swa_start"] # CHANGE ME
#------- Scorer & Trackers ----------------------------------------#
best_student_score = 0
valid_ds_path = os.path.join(config["output_dir"], config["valid_dataset_path"])
valid_ds = load_from_disk(valid_ds_path)
scorer_fn = partial(
scorer,
valid_dataset=valid_ds,
threshold=0.5,
focus_seq=config["text_sequence_identifier"]
)
#------------- Data Iterators -------------------------------------#
train_iter = iter(train_dl)
unlabelled_iter = iter(unlabelled_dl)
# ------- Training Loop ------------------------------------------#
teacher_one.eval()
teacher_two.eval()
for step in range(num_steps):
#------ Reset buffers After Validation ------------------------#
if step % config["validation_interval"] == 0:
progress_bar = tqdm(range(min(config["validation_interval"], num_steps)))
s_sft_loss_meter = AverageMeter()
s_kdu_loss_meter = AverageMeter()
s_kdt_loss_meter = AverageMeter()
s_tot_loss_meter = AverageMeter()
student.train()
s_optimizer.zero_grad()
#------ Get Train & Unlabelled Batch -------------------------#
try:
train_b = train_iter.next()
except Exception as e: # TODO: change to stop iteration error
train_b = next(train_dl.__iter__())
try:
unlabelled_b = unlabelled_iter.next()
except:
unlabelled_b = next(unlabelled_dl.__iter__())
#------- KD Training Steps ---------------------------------#
# get loss of current student on labelled train data
s_logits_train_b = student.get_logits(train_b)
# get loss of current student on labelled train data
train_b_labels = train_b["labels"]
train_b_masks = train_b_labels.gt(-0.5)
s_loss_train_b = student.compute_loss(
logits=s_logits_train_b,
labels=train_b_labels,
masks=train_b_masks,
)
# get teacher generated pseudo labels for labelled & unlabelled data
with torch.no_grad():
t_logits_one_unlabelled_b = teacher_one.get_logits(unlabelled_b)
t_logits_two_unlabelled_b = teacher_two.get_logits(unlabelled_b)
t_logits_unlabelled_b = (t_logits_one_unlabelled_b + t_logits_two_unlabelled_b)/2.0
pseudo_y_unlabelled_b = torch.sigmoid(t_logits_unlabelled_b) # soft pseudo label
t_logits_one_train_b = teacher_one.get_logits(train_b)
t_logits_two_train_b = teacher_two.get_logits(train_b)
t_logits_train_b = (t_logits_one_train_b + t_logits_two_train_b)/2.0
pseudo_y_train_b = torch.sigmoid(t_logits_train_b) # soft pseudo label
# hard pseudo label
# pseudo_y_unlabelled_b = (t_logits_unlabelled_b > 0).float()
#------ Train Student: Pesudo Label + Train Data ------------------#
unlabelled_b_masks = unlabelled_b["label_mask"].eq(1).unsqueeze(-1)
s_logits_unlabelled_b = student.get_logits(unlabelled_b)
s_kd_loss_unlabelled_b = student.compute_loss(
logits=s_logits_unlabelled_b,
labels=pseudo_y_unlabelled_b,
masks=unlabelled_b_masks,
is_soft=True,
)
s_kd_loss_train_b = student.compute_loss(
logits=s_logits_train_b,
labels=pseudo_y_train_b,
masks=train_b_masks,
is_soft=True,
)
s_loss = 0.15*s_loss_train_b + 0.15*s_kd_loss_train_b + 0.70*s_kd_loss_unlabelled_b
# backpropagation of student loss on unlabelled data
accelerator.backward(s_loss)
s_optimizer.step() # update student params
if step < swa_start:
s_scheduler.step()
#------ Progress Bar Updates ----------------------------------#
s_tot_loss_meter.update(s_loss.item())
s_sft_loss_meter.update(s_loss_train_b.item())
s_kdu_loss_meter.update(s_kd_loss_unlabelled_b.item())
s_kdt_loss_meter.update(s_kd_loss_train_b.item())
progress_bar.set_description(
f"STEP: {step+1:5}/{num_steps:5}. "
f"LR: {get_lr(s_optimizer):.4f}. "
f"LT: {s_tot_loss_meter.avg:.4f}. "
f"LSFT: {s_sft_loss_meter.avg:.4f}. "
f"LKDU: {s_kdu_loss_meter.avg:.4f}. "
f"LKDT: {s_kdt_loss_meter.avg:.4f}. "
)
progress_bar.update()
</code></pre>
<p>Thanks a lot! Congratulations to you too! </p>
<p>Yes, thanks for the explanation, clear as always. I applied MPL to other tasks such as NER recently, there it also works quite well, specifically in low-resource settings.</p>
<p>First of all, thank you NBME and Kaggle for hosting such an amazing competition. It has been an enriching experience with a steep learning curve. My sincere thanks to the community for sharing great ideas and engaging discussions.</p>
<h2>Code</h2>
<ul>
<li><a href="https://github.com/rbiswasfc/kaggle-nbme-3rd-place-solution" target="_blank">https://github.com/rbiswasfc/kaggle-nbme-3rd-place-solution</a></li>
</ul>
<h2>Summary</h2>
<ul>
<li>Task adaptation with MLM pre-training</li>
<li>Multi label token classification (Predicting Inside, Beginning and End of answer spans)</li>
<li>Training Diversity<ul>
<li>Meta Pseudo Labels</li>
<li>Knowledge Distillation from Model Ensemble</li>
<li>Marked tokens: add a prefix token to feature text e.g.: ["QA CASE=0"]</li>
<li>Stochastic Weight Averaging (SWA)</li></ul></li>
<li>Blending: weighted average of character level predictions from different models</li>
<li>Post Processing: time-duration related features, e.g. for <code>feature 309: duration-2-months</code>, filter out spans containing <code>2 weeks</code>, <code>years</code>, etc.</li>
</ul>
<h2>Models</h2>
<ul>
<li>DeBERTa Large</li>
<li>DeBERTa XLarge</li>
<li>DeBERTa V2 XLarge</li>
<li>DeBERTa V3 Large</li>
</ul>
<h2>Workflow</h2>
<h3>1. Task Adaptation</h3>
<p>I continued pre-training of all DeBERTa models using the patient notes. Used standard masked language modelling for pre-training with token masking probability of 0.2. In the subsequent training steps, task-adapted models are used as backbone.</p>
<ul>
<li>References:<ul>
<li>Don't Stop Pretraining: Adapt Language Models to Domains and Tasks (<a href="https://arxiv.org/abs/2004.10964" target="_blank">https://arxiv.org/abs/2004.10964</a>)</li></ul></li>
</ul>
<h3>2. Meta Pseudo Labels (MPL)</h3>
<p>I felt the key to doing well in this competition was to make most out of the unlabelled data. To this end, I experimented with (1) standard Pseudo Labelling, (2) Weak Supervision: Contrastive-Regularized Self-Training Approach (<a href="https://arxiv.org/pdf/2010.07835.pdf" target="_blank">https://arxiv.org/pdf/2010.07835.pdf</a>) and (3) Meta Pseudo Labels (MPL). Among these, MPL worked the best. Thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for sharing this excellent notebook on MPL: <a href="https://www.kaggle.com/code/hengck23/playground-for-meta-pseudo-label" target="_blank">https://www.kaggle.com/code/hengck23/playground-for-meta-pseudo-label</a>. I adapted this notebook for NBME token classification task. I used both soft & hard pseudo labels for diversity during different training runs.</p>
<ul>
<li>References:<ul>
<li>Meta Pseudo Labels (<a href="https://arxiv.org/abs/2003.10580" target="_blank">https://arxiv.org/abs/2003.10580</a>)</li>
<li>Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach (<a href="https://arxiv.org/pdf/2010.07835.pdf" target="_blank">https://arxiv.org/pdf/2010.07835.pdf</a>)</li></ul></li>
</ul>
<h3>3. Fine-tuning of Student</h3>
<p>During MPL, the student model is trained only on unlabelled data using pseudo labels from teacher. So the student can be further fine-tuned on actual training data for additional performance boost. During fine-tuning, I also used Stochastic Weight Averaging (SWA) for better generalization.</p>
<h3>4. Knowledge Distillation (KD)</h3>
<p>For DeBERTa Large, I trained two models using MPL (each with 180k randomly sampled unlabelled data + all labelled data). Next I used these two models as teachers to distill their combined knowledge (mean token predictions) into another DeBERTa Large model. I used the following loss during distillation: <code>0.15 * BCE Loss on Labelled Data (actual annotations) + 0.15 * BCE Loss on Labelled Data (pseudo labels from teachers) + 0.7 * BCE Loss on Unlabelled Data with Pseudo Labels</code>. Rationale for using KD: in the literature, there are examples where the student outperforms its teachers, especially when additional data is involved. Furthermore, it adds to model diversity.</p>
<p>References:</p>
<ul>
<li>Can Students Outperform Teachers in Knowledge Distillation Based Model Comparison? (<a href="https://openreview.net/pdf?id=XZDeL25T12l" target="_blank">https://openreview.net/pdf?id=XZDeL25T12l</a>)</li>
</ul>
<h2>Marked Tokens</h2>
<p>This basically involves modifying the feature text with a prefix token. I created 10 new tokens, one for each case, e.g. "[QA CASE=0]". Some of the feature texts from different cases are very close to each other, e.g. Feature 314: nausea and Feature 508: Associated-nausea. Therefore I felt that giving the additional context of the case number in the feature text would help the model distinguish between these cases.</p>
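<p>A minimal sketch of this marked-token idea (illustrative; apart from the "[QA CASE=0]" example above, the exact token strings and how they were registered with the tokenizer are assumptions):</p>
<pre><code>from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
case_tokens = [f"[QA CASE={i}]" for i in range(10)]   # one new token per case
tokenizer.add_tokens(case_tokens)                     # remember to resize the model embeddings

def mark_feature_text(feature_text, case_num):
    return f"[QA CASE={case_num}] {feature_text}"

print(mark_feature_text("Family-history-of-MI", 0))   # "[QA CASE=0] Family-history-of-MI"
</code></pre>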
<h2>Additional Details</h2>
<ul>
<li>For MPL training, both Teacher and Student models need to be loaded into memory. To avoid memory overflow issues, I used the usual tricks such as mixed precision training, 8-bit adam optimizer, batch size of 4, freezing of lower layers & gradient checkpointing.</li>
<li>In the ensemble, the majority of the models are trained using all data instead of individual folds</li>
<li>For token classification head, concatenated hidden outputs from last 12 transformer layers.</li>
<li>Reinitialized the last 4-12 transformer layers (a sketch of this re-initialization follows the list)</li>
<li>LR Scheduler: cosine schedule with warmup</li>
<li>Multi-label classification: 3 labels -> [token is inside answer span, token is start of answer span, token is end of answer span].</li>
</ul>
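<p>A minimal sketch of the layer re-initialization mentioned in the list above (illustrative; the attribute path <code>model.encoder.layer</code> and the number of re-initialized layers are assumptions that depend on the backbone):</p>
<pre><code>import torch.nn as nn
from transformers import AutoModel

def reinit_top_layers(model_name="microsoft/deberta-v3-large", n_reinit=4):
    model = AutoModel.from_pretrained(model_name)
    std = model.config.initializer_range
    for layer in model.encoder.layer[-n_reinit:]:        # top n_reinit transformer layers
        for module in layer.modules():
            if isinstance(module, nn.Linear):
                module.weight.data.normal_(mean=0.0, std=std)
                if module.bias is not None:
                    module.bias.data.zero_()
            elif isinstance(module, nn.LayerNorm):
                module.weight.data.fill_(1.0)
                module.bias.data.zero_()
    return model
</code></pre>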
<h2>Things that didn't work</h2>
<ul>
<li>Weighted Box Fusion (WBF): With WBF, the LB scores decreased by around 0.002</li>
<li>Non Maximum Suppression (NMS): simple blending of char probs worked the best</li>
<li>Tuning of model weights</li>
<li>Tuning of thresholds (used 0.5 for all features)</li>
<li>Most of my post-processing attempts</li>
</ul>
<p>Glad that you find the post useful, will share my training script soon. Congrats on your performance!</p>
<p>Hi <a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a>,</p>
<p>Sorry for my late reply. NbmeKD was a custom pytorch model for the knowledge distillation:</p>
<pre><code>class NbmeKD(nn.Module):
"""The Multi-task NBME model class for Knowledge Distillation
"""
def __init__(self, config):
super(NbmeKD, self).__init__()
self.config = config
# base transformer
self.base_model = AutoModel.from_pretrained(
self.config["base_model_path"],
)
self.base_model.gradient_checkpointing_enable()
n_freeze = config["n_freeze"]
if n_freeze > 0:
print(f"setting requires grad to false for last {n_freeze} layers")
self.base_model.embeddings.requires_grad_(False)
self.base_model.encoder.layer[:n_freeze].requires_grad_(False)
hidden_size = self.base_model.config.hidden_size
num_layers_in_head = self.config["num_layers_in_head"]
# token classification head
self.tok_classifier = nn.Linear(
in_features=hidden_size * num_layers_in_head,
out_features=self.config['mtl_tok_num_labels'],
)
self.dropout1 = nn.Dropout(0.1)
self.dropout2 = nn.Dropout(0.2)
self.dropout3 = nn.Dropout(0.3)
self.dropout4 = nn.Dropout(0.4)
self.dropout5 = nn.Dropout(0.5)
def get_logits(self, batch):
input_ids = batch["input_ids"]
attention_mask = batch["attention_mask"]
token_type_ids = batch["token_type_ids"]
out = self.base_model(
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
output_hidden_states=True
)
all_hidden_states = out["hidden_states"]
# token classification logits
n = self.config["num_layers_in_head"]
tok_output = torch.cat(all_hidden_states[-n:], dim=-1)
# pass through 5 dropout layers and take average
tok_output1 = self.dropout1(tok_output)
tok_output2 = self.dropout2(tok_output)
tok_output3 = self.dropout3(tok_output)
tok_output4 = self.dropout4(tok_output)
tok_output5 = self.dropout5(tok_output)
tok_output = (tok_output1 + tok_output2 + tok_output3 + tok_output4 + tok_output5)/5
tok_logits = self.tok_classifier(tok_output)
return tok_logits
def compute_loss(self, logits, labels, masks, is_soft=False):
loss = F.binary_cross_entropy_with_logits(
logits, labels,
reduction='none'
)
if is_soft:
# additional confidence mask
conf_mask = torch.abs(labels - 0.5) > 0.25
masks = torch.logical_and(masks, conf_mask)
loss = torch.masked_select(loss, masks).mean()
return loss
</code></pre>
<p>Will create and share a cleaned notebook for this when I get more time. But you can refer to other code snippets from this discussion thread. This is a great article for reference: <a href="https://medium.com/huggingface/distilbert-8cf3380435b5" target="_blank">https://medium.com/huggingface/distilbert-8cf3380435b5</a></p>
<p>Thanks!</p>
<p>sorry for my late reply. AveragedModel is a module from pytorch, you can import it as </p>
<pre><code>from torch.optim.swa_utils import AveragedModel
</code></pre>
<p>NbmeKD is custom model i created for Knowledge Distillation:</p>
<pre><code>class NbmeKD(nn.Module):
"""The Multi-task NBME model class for Knowledge Distillation
"""
def __init__(self, config):
super(NbmeKD, self).__init__()
self.config = config
# base transformer
self.base_model = AutoModel.from_pretrained(
self.config["base_model_path"],
)
self.base_model.gradient_checkpointing_enable()
n_freeze = config["n_freeze"]
if n_freeze > 0:
print(f"setting requires grad to false for last {n_freeze} layers")
self.base_model.embeddings.requires_grad_(False)
self.base_model.encoder.layer[:n_freeze].requires_grad_(False)
hidden_size = self.base_model.config.hidden_size
num_layers_in_head = self.config["num_layers_in_head"]
# token classification head
self.tok_classifier = nn.Linear(
in_features=hidden_size * num_layers_in_head,
out_features=self.config['mtl_tok_num_labels'],
)
self.dropout1 = nn.Dropout(0.1)
self.dropout2 = nn.Dropout(0.2)
self.dropout3 = nn.Dropout(0.3)
self.dropout4 = nn.Dropout(0.4)
self.dropout5 = nn.Dropout(0.5)
def get_logits(self, batch):
input_ids = batch["input_ids"]
attention_mask = batch["attention_mask"]
token_type_ids = batch["token_type_ids"]
out = self.base_model(
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
output_hidden_states=True
)
all_hidden_states = out["hidden_states"]
# token classification logits
n = self.config["num_layers_in_head"]
tok_output = torch.cat(all_hidden_states[-n:], dim=-1)
# pass through 5 dropout layers and take average
tok_output1 = self.dropout1(tok_output)
tok_output2 = self.dropout2(tok_output)
tok_output3 = self.dropout3(tok_output)
tok_output4 = self.dropout4(tok_output)
tok_output5 = self.dropout5(tok_output)
tok_output = (tok_output1 + tok_output2 + tok_output3 + tok_output4 + tok_output5)/5
tok_logits = self.tok_classifier(tok_output)
return tok_logits
def compute_loss(self, logits, labels, masks, is_soft=False):
loss = F.binary_cross_entropy_with_logits(
logits, labels,
reduction='none'
)
if is_soft:
# additional confidence mask
conf_mask = torch.abs(labels - 0.5) > 0.25
masks = torch.logical_and(masks, conf_mask)
loss = torch.masked_select(loss, masks).mean()
return loss
</code></pre>
|
NBME - Score Clinical Patient Notes
|
4th place solution: Deberta models & postprocess
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<blockquote>
<p>how did you do MLM pretraining?</p>
</blockquote>
<p>Your understanding is correct. <br>
There are some sample codes for MLM pretraining like below. I think you can try it.</p>
<ul>
<li>huggingface<ul>
<li><a href="https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm_no_trainer.py" target="_blank">https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm_no_trainer.py</a></li></ul></li>
<li>kaggle<ul>
<li><a href="https://www.kaggle.com/code/rhtsingh/commonlit-readability-prize-roberta-torch-itpt" target="_blank">https://www.kaggle.com/code/rhtsingh/commonlit-readability-prize-roberta-torch-itpt</a></li>
<li><a href="https://www.kaggle.com/code/maunish/clrp-pytorch-roberta-pretrain" target="_blank">https://www.kaggle.com/code/maunish/clrp-pytorch-roberta-pretrain</a></li></ul></li>
</ul>
<blockquote>
<p>How much pretraining improved?</p>
</blockquote>
<p>MLM pretraining improved CV +0.002~0.003, LB +0.001~0.002.</p>
<blockquote>
<p>Did you train MLM from scratch with new tokenizer for this dataset or finetuned MLM with existing pretrained tokenizers?</p>
</blockquote>
<p>We used pretrained tokenizers. </p>
<blockquote>
<p>If you finetuned did you add any new tokens for this dataset?</p>
</blockquote>
<p>No.</p>
<blockquote>
<p>How did you train char model, did you use custom tokenizer?</p>
</blockquote>
<p>We added a character level GRU after Deberta's output.<br>
This <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/315707" target="_blank">thread</a> may be helpful.</p>
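<p>For illustration, here is a minimal sketch of a character-level GRU head on top of the DeBERTa token outputs; the scattering of token states onto characters via offset mappings and all names here are assumptions, not our exact code.</p>
<pre><code>import torch
import torch.nn as nn

class CharGRUHead(nn.Module):
    def __init__(self, hidden_size, gru_layers=4):
        super().__init__()
        self.gru = nn.GRU(hidden_size, hidden_size // 2, num_layers=gru_layers,
                          bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, token_states, offset_mapping, n_chars):
        # token_states: (batch, n_tokens, hidden) -> char_states: (batch, n_chars, hidden)
        batch, _, hidden = token_states.shape
        char_states = token_states.new_zeros(batch, n_chars, hidden)
        for b in range(batch):
            for t, (start, end) in enumerate(offset_mapping[b]):
                if end > start:
                    char_states[b, start:end] = token_states[b, t]
        out, _ = self.gru(char_states)
        return self.classifier(out)   # per-character logits
</code></pre>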
<p>This one.</p>
<pre><code>class FocalLoss(nn.Module):
def __init__(self, reduction='none', alpha=1, gamma=2):
super().__init__()
self.reduction = reduction
self.alpha = alpha
self.gamma = gamma
def forward(self, inputs, targets):
bce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction='none')
pt = torch.exp(-bce_loss)
loss = self.alpha * (1. - pt)**self.gamma * bce_loss
if self.reduction == 'none':
loss = loss
elif self.reduction == 'sum':
loss = loss.sum()
elif self.reduction == 'mean':
loss = loss.mean()
return loss
class SmoothFocalLoss(nn.Module):
def __init__(self, reduction='none', alpha=1, gamma=2, smoothing=0.0):
super().__init__()
self.reduction = reduction
self.focal_loss = FocalLoss(reduction='none', alpha=alpha, gamma=gamma)
self.smoothing = smoothing
@staticmethod
def _smooth(targets:torch.Tensor, smoothing=0.0):
assert 0 <= smoothing < 1
with torch.no_grad():
targets = targets * (1.0 - smoothing) + 0.5 * smoothing
return targets
def forward(self, inputs, targets):
targets = SmoothFocalLoss._smooth(targets, self.smoothing)
loss = self.focal_loss(inputs, targets)
if self.reduction == 'none':
loss = loss
elif self.reduction == 'sum':
loss = loss.sum()
elif self.reduction == 'mean':
loss = loss.mean()
return loss
</code></pre>
<p>Thank you! </p>
<blockquote>
<p>is MLM approach and Pseudo Labeling approach overlapping?</p>
</blockquote>
<p>Yes. It didn't matter.</p>
<p>Thank you!</p>
<blockquote>
<p>could you translate the Japanese comments in pp to English</p>
</blockquote>
<p>Done! Thanks for your suggestion!</p>
<p>First of all, thanks to competition organizers for hosting this competition and great teammates ( <a href="https://www.kaggle.com/takoihiraokazu" target="_blank">@takoihiraokazu</a>, <a href="https://www.kaggle.com/shuheigoda" target="_blank">@shuheigoda</a>, <a href="https://www.kaggle.com/copasta" target="_blank">@copasta</a> ).<br>
Thanks also to the community for sharing many ideas in Notebook and Discussion.</p>
<h1>Summary</h1>
<p>We ensembled 4 token classification models, set a threshold for each case_num, and applied post-processing.<br>
<img src="https://user-images.githubusercontent.com/30746280/168423739-557f36aa-8e52-4006-8c49-03dbafc107de.png" alt=""></p>
<h1>Models</h1>
<p>We trained following 4 token classification models and used them for final submission.</p>
<ul>
<li>token classification Deberta-v3-large 4fold<ul>
<li>MLM(0.15)</li>
<li>finetune</li>
<li>SmoothFocalLoss</li>
<li>or replace augmentation for feature text</li>
<li>lower text</li>
<li>2 times pseudo labeling (This is the one for final submission but 4 times pseudo labeling was better on Private LB)</li>
<li>Public LB: 0.891 with pp</li>
<li>Private LB: 0.892 with pp</li></ul></li>
<li>token classification Deberta-v3-large 5fold<ul>
<li>MLM(0.15)</li>
<li>finetune</li>
<li>SmoothFocalLoss</li>
<li>mask augmentation</li>
<li>pseudo labeling</li>
<li>Public LB: 0.891 with pp</li></ul></li>
<li>token classification Deberta-v2-xlarge 5fold<ul>
<li>MLM(0.15)</li>
<li>finetune</li>
<li>SmoothFocalLoss</li>
<li>mask augmentation</li>
<li>pseudo labeling</li>
<li>Public LB: 0.890 with pp</li></ul></li>
<li>token classification Deberta-v2-xxlarge 4fold<ul>
<li>MLM(0.10)</li>
<li>finetune</li>
<li>SmoothFocalLoss</li>
<li>mask augmentation</li>
<li>pseudo labeling (Deberta-v2-xlarge)</li>
<li>Public LB: 0.892 with pp (5fold)</li></ul></li>
</ul>
<p>We also trained 1 char classification model; the ensemble of 3 token classification models and this 1 char classification model was our best private submission, but we couldn't select it as a final submission.</p>
<ul>
<li>char classification Deberta-v3-large 4fold<ul>
<li>MLM(0.15)</li>
<li>finetune</li>
<li>BCELoss</li>
<li>GRU Head(4 layer)</li>
<li>pseudo labeling<ul>
<li>char classification Deberta-v3-large 4fold</li>
<li>char classification Deberta-xlarge 4fold</li>
<li>char classification Deberta-large 4fold</li></ul></li>
<li>Public LB: 0.890 with pp</li></ul></li>
</ul>
<h1>Ensemble</h1>
<p>Weighted Blending of 4 models based on char probabilities.</p>
<h1>Threshold</h1>
<p>We set a threshold for each case_num.</p>
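<p>A minimal sketch of a per-case_num threshold search restricted to a narrow range (0.47~0.53, as described in the replies further down); <code>oof</code> and <code>score_fn</code> are hypothetical stand-ins for the OOF dataframe and span-F1 scorer:</p>
<pre><code>import numpy as np

def tune_case_thresholds(oof, score_fn, grid=np.arange(0.47, 0.531, 0.005)):
    thresholds = {}
    for case_num, sub in oof.groupby("case_num"):
        best_th, best_score = 0.5, -1.0
        for th in grid:
            score = score_fn(sub, threshold=th)   # span F1 of this case at threshold th
            if score > best_score:
                best_th, best_score = th, score
        thresholds[case_num] = best_th
    return thresholds
</code></pre>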
<h1>Post Process</h1>
<p>Here is our postprocessing code.</p>
<pre><code>def postprocess(texts, preds):
from nltk.tokenize import word_tokenize
preds_pp = preds.copy()
tk0 = tqdm(range(len(preds_pp)), total=len(preds_pp))
for raw_idx in tk0:
pred = preds[raw_idx]
text = texts[raw_idx]
if len(pred) != 0:
# pp1: if a predicted span starts at index 1, shift the start to 0  ## +0.00123
if pred[0][0] == 1:
preds_pp[raw_idx][0][0] = 0
for p_index, pp in enumerate(pred):
start, end = pred[p_index]
if start == 0:
break
# pp2: if start == end, shift start back by 1 (start = start - 1)  ## +0.00012
if start == end:
preds_pp[raw_idx][p_index][0] = start - 1
break
# pp3: if the span starts on a newline (text[start] == '\n'), shift start forward by 1  ## +0.00032
if text[start] == '\n':
preds_pp[raw_idx][p_index][0] = start + 1
start = start + 1
# pp4: "[digit]-[digit]" (e.g. 1-2) is sometimes predicted as "-[digit]"; if so, shift start back by 1  ## +0.00001
if text[start-1].isdigit() and text[start] == '-' and text[start+1].isdigit():
preds_pp[raw_idx][p_index][0] = start - 1
start = start - 1
if text[start-1].isdigit() and text[start] == '/' and text[start+1].isdigit():
preds_pp[raw_idx][p_index][0] = start - 1
start = start - 1
# pp5: "[digit][digit]" (e.g. 67) is sometimes predicted as "[digit]"; if so, shift start back by 1  ## +0.00001
if text[start-1].isdigit() and text[start].isdigit():
preds_pp[raw_idx][p_index][0] = start - 1
start = start - 1
# pp6: fix spans where the character two positions before the start is a punctuation mark (tokenization issue)  ## +0.00001
if text[start-2] == ',' and text[start-1] != ' ':
preds_pp[raw_idx][p_index][0] = start - 1
start = start - 1
if text[start-2] == '-' and text[start-1] != ' ':
preds_pp[raw_idx][p_index][0] = start - 1
start = start - 1
if text[start-2] == '\"' and text[start-1] != ' ':
preds_pp[raw_idx][p_index][0] = start - 1
start = start - 1
if text[start-2] == ':' and text[start-1] != ' ':
preds_pp[raw_idx][p_index][0] = start - 1
start = start - 1
if text[start-2] == '(' and text[start-1] != ' ':
preds_pp[raw_idx][p_index][0] = start - 1
start = start - 1
if text[start-2] == ')' and text[start-1] != ' ':
preds_pp[raw_idx][p_index][0] = start - 1
start = start - 1
# pp7: generic fix for tokenization splits like "heart" -> "h" + "eart"  ## +0.00050
try:
text_token = word_tokenize(text[start-1:end])
text_token = [text for text in text_token]
first = text[start:end].split()[0]
if first not in text_token:
for t in text_token:
if (first == t[-len(first):]) and (t[0].isalpha()):
sub = len(t) - len(first)
preds_pp[raw_idx][p_index][0] = start - sub
start = start - sub
break
except:
None
# pp8: fix the end position  ## +0.00001
if text[end-1:end] == '.':
preds_pp[raw_idx][p_index][1] = end - 1
end = end - 1
if text[end-1:end] == '-' and text[end:end+1].isnumeric():
preds_pp[raw_idx][p_index][1] = end + 1
end = end + 1
return preds_pp
</code></pre>
<p>In my case, sampling roughly the same number of pn_nums containing positive pseudo labels for each feature_text worked steadily.<br>
Here is my sampling code.</p>
<pre><code>def get_sampled_pseudo_df(fold, sample_score=1.0, positive_th=0.5, max_samples=10):
# calcurate oof scores
oof_df = pd.read_pickle(CFG.finetune_dir+'oof_df.pkl')
scores = {}
for feature_text, oof_tmp in tqdm(oof_df.groupby('feature_text')):
oof_tmp = oof_tmp.reset_index(drop=True)
truths = create_labels_for_scoring(oof_tmp)
char_probs = get_char_probs(oof_tmp['pn_history'].values, oof_tmp[[i for i in range(CFG.max_len)]].values, CFG.tokenizer)
results = get_results(char_probs, th=0.5)
preds = get_predictions(results)
try:
score = get_score(preds, truths)
except:
score = np.nan
scores[feature_text] = {'score': score, 'num_labels': len(oof_tmp[oof_tmp['annotation_length'] > 0])}
scores = sorted(scores.items(), key=lambda i: i[1]['score'])
# sample pseudo labels
pseudo_df = pd.read_pickle(CFG.pseudo_dir+f'pseudo_df_fold{fold}.pkl')
pseudo_pn_num_list = []
for text, d in tqdm([(text, d) for text, d in scores if d['score'] < sample_score]):
pseudo_df_tmp = pseudo_df[pseudo_df['feature_text'] == text]
pseudo_df_tmp['positive_count'] = (pseudo_df_tmp[[i for i in range(311)]] > positive_th).sum(1)
pseudo_pn_num = pseudo_df_tmp[pseudo_df_tmp['positive_count'] > 0]['pn_num'].values.tolist()
if len(pseudo_pn_num) > 0:
print(f"{text}({d}) has {len(pseudo_pn_num)} pseudo positive labels")
if len(pseudo_pn_num) > max_samples:
pseudo_pn_num = random.sample(pseudo_pn_num, max_samples)
print(f"sampled by max_samples -> {text}({d}) has {len(pseudo_pn_num)} pseudo positive labels")
pseudo_pn_num_list.append(pseudo_pn_num)
else:
pseudo_pn_num_list.append(pseudo_pn_num)
pseudo_pn_num_list = sum(pseudo_pn_num_list, [])
sampled_pseudo_df = pseudo_df[pseudo_df['pn_num'].isin(pseudo_pn_num_list)].reset_index(drop=True)
print(f"sampled_pseudo_df.shape: {sampled_pseudo_df.shape}")
return sampled_pseudo_df
pseudo_df = get_sampled_pseudo_df(fold, sample_score=1.0, positive_th=0.5, max_samples=10)
</code></pre>
<p>Thank you. <br>
Yes, we didn't use the full 42K patient notes. We randomly sampled an amount that was enough to improve the results.</p>
<blockquote>
<p>Did you see gains in making feature text inclusive of some technical texts used in P notes but not present in feature text</p>
</blockquote>
<p>I tried it but no improvement on CV and LB.</p>
<blockquote>
<p>What in mlm(0.15) ,0.15 indicates?</p>
</blockquote>
<p>It's mlm_probability.</p>
<blockquote>
<p>what is mask augmentation ?</p>
</blockquote>
<p>It means masking some tokens during training for augmentation.</p>
<blockquote>
<p>what is "or replace augmentation for feature text"?</p>
</blockquote>
<p>It means below code.</p>
<pre><code>augmented_feature_text = '-or-'.join(np.random.permutation(feature_text.split('-or-')))
</code></pre>
<p>Thanks! :)</p>
<blockquote>
<p>Have you measured the impact of MLM?</p>
</blockquote>
<p>Yes, it improved CV +0.002~0.003.</p>
<blockquote>
<p>Did you apply any rules?</p>
</blockquote>
<p>No specific rules.</p>
<blockquote>
<p>Could you please share more details on how you did it?</p>
</blockquote>
<p>Here is my training config.</p>
<pre><code>class CFG:
env='kaggle' if os.getcwd() == '/kaggle/working' else 'colab'
wandb=False # True
competition='NBME'
_wandb_kernel='nakama'
debug=False
apex=True
print_freq=100
num_workers=4
model="microsoft/deberta-v3-large"
mlm_dir='./drive/MyDrive/Colab Notebooks/NBME/mlm/kaggle-exp1/' # None
scheduler='cosine' # ['linear', 'cosine']
batch_scheduler=True
num_cycles=0.5
num_warmup_steps=0
epochs=4 # 5
encoder_lr=2e-5
decoder_lr=2e-5
min_lr=1e-6
eps=1e-6
betas=(0.9, 0.999)
batch_size=6 # 8
fc_dropout=0.6
loss='bce' # ['bce', 'ce']
target_size=1 if loss == 'bce' else 2
max_len=512
alpha=1
gamma=2
smoothing=0.0001
p_aug=0.5
p_aug_epoch=2
weight_decay=0.01
gradient_accumulation_steps=1
max_grad_norm=1000
seed=42
n_fold=4
trn_fold=[0, 1, 2, 3]
train=True
# ====================================================
# MLM
# ====================================================
mlm=True
extension='csv'
pad_to_max_length=True
max_seq_length=512
overwrite_cache=True
mlm_probability=0.15
</code></pre>
<p>Yes. Not completely the same because we sampled randomly based on the probability > 0.5 and each model outputs different probability.</p>
<p>Thank you!<br>
As for the case_num threshold, we narrowed the optimization range to 0.47~0.53 to avoid overfitting, and there was a steady boost on LB. As for a per-feature_num threshold, we didn't succeed either. </p>
<p>Focal Loss worked before we used pseudo-labeled data, so that is probably not the reason your pseudo labeling didn't work. In my case, sampling roughly the same number of pn_nums containing positive pseudo labels for each feature_text worked steadily.</p>
<p>My pleasure! :)</p>
<p>Yes, that's correct.</p>
<p>Yes. <br>
1st model - only train labels<br>
2nd model - train labels + pseudo labels of 1st model<br>
3rd model - train labels + pseudo labels of 2nd model<br>
4th model - train labels + pseudo labels of 3rd model <- This</p>
<p>Thank you!</p>
<p>Thank you!</p>
<blockquote>
<p>About "lower text", do you mean that only the short pn_history was used for training?</p>
</blockquote>
<p>"lower text" means that unifying texts as lower case letter.</p>
<pre><code>features['feature_text'] = features['feature_text'].str.lower()
patient_notes['pn_history'] = patient_notes['pn_history'].str.lower()
</code></pre>
<p>This is because tokenization differed between words starting with a capital letter (e.g. Recently) and words starting with a lower case letter (e.g. recently); unifying the case improves generalization.</p>
<blockquote>
<p>About "set threshold for each case_num", we also tried the threshold adjustment for each case_num. In our case, local score boosted a lot, but many times LB score decrease slightly. Did you use this because you believed local score score? Or in your case, LB score also increased?</p>
</blockquote>
<p>We narrowed the optimization range down to 0.47~0.53 to avoid overfitting, and there was a steady boost on LB.</p>
|
NBME - Score Clinical Patient Notes
|
5th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank competition organizers for hosting this interesting competition. And<br>
thanks to my great teammates <a href="https://www.kaggle.com/bestpredict" target="_blank">@bestpredict</a> <a href="https://www.kaggle.com/lxf615712" target="_blank">@lxf615712</a> for more than a month of hard work. And also thanks to the kaggle community; I learned a lot from notebooks and discussions. Thanks to the excellent <a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train/notebook" target="_blank">baseline</a>, <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/315707" target="_blank">experiment results</a>, and <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/315314" target="_blank">case5</a>.</p>
<h3><strong>method</strong></h3>
<ul>
<li>MLM (20 epochs)</li>
<li>Stratified learning rates per model size (large 2e-5, xlarge 1e-5, xxlarge 5e-6)</li>
<li>pseudo labeling (2000-4000 pn_nums, <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129840" target="_blank">5 folds</a>, different models take different samples, resampling of insufficient case_nums; some models did not use pseudo labels, which gave a better LB but a less robust PB)</li>
<li>multidrop</li>
<li>fgm</li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313330" target="_blank">deberta v2/v3 tokenizer</a></li>
</ul>
<h3><strong>Models</strong></h3>
<ul>
<li>Deberta- v1-large </li>
<li>Deberta- v1-xlarge </li>
<li>Deberta-v2-xlarge </li>
<li>Deberta-v2-xxlarge</li>
<li>Deberta-v3-large </li>
</ul>
<h3><strong>Ensemble</strong></h3>
<p>Blending of 5 models based on char probabilities. In the last few days, we added v2 xxlarge; its CV 884 and LB 883 (2 folds) were not good, but the ensemble was better on LB and worse on PB. We had not selected 30+ submissions with better PB. <strong>Trust your CV.</strong></p>
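<p>For reference, a minimal sketch of blending char probabilities (array shapes and the equal default weights are assumptions, not the team's exact code):</p>
<pre><code>import numpy as np

def blend_char_probs(char_probs_per_model, weights=None):
    # char_probs_per_model: list of arrays of shape (num_samples, max_len)
    # holding character-level probabilities from each model
    if weights is None:
        weights = [1.0 / len(char_probs_per_model)] * len(char_probs_per_model)
    blended = np.zeros_like(char_probs_per_model[0])
    for w, probs in zip(weights, char_probs_per_model):
        blended += w * probs
    return blended
</code></pre>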
<h3><strong>Postprocess + Threshold</strong></h3>
<p>Space migration + OOF-based tuning (<a href="https://www.kaggle.com/code/abhishek/two-longformers-are-better-than-1" target="_blank">word probs</a>). We set thresholds for some features (lower than the best all-case_num threshold CV on PB).</p>
<pre><code>def convert_offsets_to_word_indices(preds_offsets, texts, case_nums, feature_nums, th=0.5):
    predicts = []
for text,preds,case_num, feature_num in zip(texts,preds_offsets, case_nums, feature_nums):
encoded_text = tokenizer(text, add_special_tokens=True,max_length=CFG.max_len,padding="max_length",return_offsets_mapping=True)
offset_mapping = encoded_text['offset_mapping']
sep_index = encoded_text["input_ids"].index(tokenizer.sep_token_id)
result=np.zeros(len(preds))
results = np.zeros(sep_index)
for idx, (offset, pred) in enumerate(zip(offset_mapping[:sep_index], preds)):
start = offset[0]
results[idx] = preds[start]
sample_pred_scores = results
if str(feature_num)[-1] == '3' and (str(case_num)=='0' or str(case_num)=='3' ):
result = [1 if s >= 0.54 else 0 for s in results]
elif str(feature_num)[-1] == '3' and (str(case_num)=='1' ):
result = [1 if s >= 0.45 else 0 for s in results]
elif str(feature_num)[-1] == '3' and str(case_num)=='6':
result = [1 if s >= 0.52 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '503' ):
result = [1 if s >= 0.49 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '504' ):
result = [1 if s >= 0.4 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '508' ):
result = [1 if s >= 0.49 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '509' ):
result = [1 if s >= 0.4 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '510' ):
result = [1 if s >=0.55 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '511' ):
result = [1 if s >=0.55 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '512' ):
result = [1 if s >=0.45 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '513' ):
result = [1 if s >=0.59 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '514' ):
result = [1 if s >=0.49 else 0 for s in results]
elif str(case_num)=='5' and ( str(feature_num) == '516' ):
result = [1 if s >=0.41 else 0 for s in results]
elif str(case_num)=='7' and ( str(feature_num) == '702' ):
result = [1 if s >=0.56 else 0 for s in results]
else:
result = [1 if s >= 0.47 else 0 for s in results]
preds = result
sample_text = text
sample_input_ids = encoded_text["input_ids"]
sample_preds = []
if len(preds) < len(offset_mapping):
preds = preds + [0] * (len(offset_mapping) - len(preds))
sample_pred_scores = list(sample_pred_scores) + [0] * (len(offset_mapping) - len(sample_pred_scores))
idx = 0
phrase_preds = []
while idx < sep_index:
start, _ = offset_mapping[idx]
# print(start,idx)
if preds[idx] != 0:
label = preds[idx]
else:
label = 0
phrase_scores = []
phrase_scores.append(sample_pred_scores[idx])
# idx += 1
while idx < sep_index:
if label == 0:
matching_label = 0
else:
matching_label = 1
if preds[idx] == matching_label:
_, end = offset_mapping[idx]
phrase_scores.append(sample_pred_scores[idx])
idx += 1
else:
break
if "end" in locals():
phrase = sample_text[start:end]
phrase_preds.append((phrase, start, end, label, phrase_scores))
temp = []
for phrase_idx, (phrase, start, end, label, phrase_scores) in enumerate(phrase_preds):
newlabel = ''
word_start = len(sample_text[:start].split())
word_end = word_start + len(sample_text[start:end].split())
word_end = min(word_end, len(sample_text.split()))
ps = " ".join([str(x) for x in range(word_start, word_end)])
if label != 0:
if sum(phrase_scores) / len(phrase_scores) >= th and word_end !=0:
if start!=0 and sample_text[start]==' ':
start+=1
temp.append(np.array([start,end]))
if len(temp)==0:
temp = [list(g) for _, g in itertools.groupby(temp, key=lambda n, c=itertools.count(): n - next(c))]
temp = [f"{min(r)} {max(r)}" for r in temp]
# print(temp)
else:
# print(temp)
# print(type(temp[0][0]))
temp = [f"{min(r)} {max(r)}" for r in temp]
# print(temp)
predict = ";".join(temp)
predicts.append(predict)
    return predicts
</code></pre>
<p>Good luck, you can do it!</p>
<p>Thank you, my bro</p>
<p>Thank you, my brothers</p>
<p>Thank you. These are useful tricks, and we tried many other things as well. For example: 50-epoch EMD pretraining was 0.0003 lower than MLM, AWP and SMART adversarial training methods were worse than FGM, and modifying the model structure was not useful. Just for sampling the pseudo data, we tried several methods. Besides CV, you can trust that many small gains add up ("many a little makes a mickle").</p>
<p>Only pn_history, about 13k notes. The thresholds used for these features are lower than the single best all-case_num threshold on PB.</p>
<p>Thank you. I tried it on v3-large; it's useful.</p>
<p>Thank you, mlm+fgm +~0.004</p>
|
NBME - Score Clinical Patient Notes
|
6th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It was nice to see the shake up fall kindly on us :) Congrats all and thank you NBME and Kaggle for hosting such an amazing competition.</p>
<h3>TL;DR</h3>
<p>Our solution is based on knowledge distillation of an ensemble of DeBERTa models and efficient domain-finetuning utilizing pseudo-labeling of a big text-corpus of unlabeled data. We used a 2-level approach: In a first step we trained models on the given training data and used an ensemble of them to pseudo label the given unlabeled patient notes. In a second step we trained 4 models on the training + pseudo-labeled data. </p>
<h3>Pre/Data processing</h3>
<p>During the competition we tried a variety of data processing steps. While most of them did not help significantly we used them in different models just for the sake of creating diversity. For example we used</p>
<ul>
<li>special tokens for line breaks and medical acronyms; </li>
<li>lowercasing all caps text</li>
<li>Mixup of labels </li>
<li>Token dropout</li>
</ul>
<p>What did help significantly was MLM pre-training of weights, but unfortunately the benefit was redundant with pseudo-labeling which we performed at a later stage. Of course we also tried a lot of other things which did not help. We evaluated all models of the same 5 fold split and saw good agreement of CV score and public LB score (and private LB score, too). </p>
<h3>Level1 Models</h3>
<p>We used </p>
<ul>
<li>deberta-large</li>
<li>deberta-v2-xlarge</li>
<li>deberta-v2-xxlarge</li>
<li>deberta-v3-large</li>
</ul>
<p>Backbones from huggingface repository and trained a number of different models on different preprocessing, seen above. We then generated pseudos for each set and saved the probabilities. Based on CV, we built the best blend of models, and created a pseudo set using a blend of the character level probabilities. CV score of this blend was around 0.897.</p>
<h3>Level2 Models</h3>
<p>In a second stage we trained new DeBERTa models on training data + pseudos, namely</p>
<ul>
<li>deberta-large</li>
<li>deberta-v2-xlarge</li>
<li>deberta-v3-large</li>
</ul>
<p>As you can see we did not have deberta-v2-xxlarge in our level2 models because it was too heavy for the final inference kernel. It still contributed significantly via the pseudo labeling and the related knowledge distillation. The final sub was 4 models, 3 folds each. </p>
<p>Two things are quite interesting to note:</p>
<ul>
<li>We experimented with adding more architectures to our level1 blend and tried a variety of backbones including BioBert, BioMegatron, RoBERTa and BART. But they did not improve the CV of our blend because they were too far behind DeBERTa in terms of single-model score.</li>
<li>Given they were trained on (roughly) the same pseudo-blend, the level2 models had little diversity. However, a single one of those models already achieves a Top10 position on LB.</li>
</ul>
<h3>Post processing</h3>
<p>Not much postprocessing except stripping spaces and removing any predictions on line break characters. Both of these did help. <br>
Also, after Feedback we made sure to try WBF, xgboost and rnn stackers, all of which did not help here 😁 </p>
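<p>A minimal sketch of that kind of post-processing (my own illustration, not the team's code): zero out character probabilities on line breaks, then strip spaces at the edges of each predicted span.</p>
<pre><code>import itertools
import numpy as np

def postprocess(char_probs, text, th=0.5):
    # Remove predictions that fall on line-break characters
    char_probs = np.array(char_probs, dtype=float).copy()
    for i, ch in enumerate(text):
        if ch in ('\n', '\r'):
            char_probs[i] = 0.0
    # Build spans and strip spaces at the span boundaries
    idxs = np.where(char_probs >= th)[0]
    groups = [list(g) for _, g in itertools.groupby(idxs, key=lambda n, c=itertools.count(): n - next(c))]
    spans = []
    for g in groups:
        start, end = min(g), min(max(g) + 1, len(text))
        while start < end and text[start] == ' ':
            start += 1
        while end > start and text[end - 1] == ' ':
            end -= 1
        if start < end:
            spans.append(f"{start} {end}")
    return ";".join(spans)
</code></pre>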
<p>both (MLM and pseudo labeling) help to shift the pretrained models towards the vocabulary we have here. Some tokens e.g. "f" and "m" have a more specific meaning in the nbme dataset. Pseudo labeling has the additional benefit in also moving pretrained models in the direction of QA task and is useful for knowledge distillation. </p>
|
NBME - Score Clinical Patient Notes
|
70th place Reflection
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank NBME host and all the participants.<br>
Reflection is as valuable as Solutions. Here is a summary of what I should have reflected on after this competition. What did I not do as compared to others? What were the wasteful aspects of my solution?<br>
By reflection, I hope that the next competition will be a better challenge.</p>
<h1>Reflection</h1>
<p><strong>Not having done MLM</strong><br>
I had assumed that MLM and Pseudo Labeling would not create synergy… I mistakenly thought that Pseudo Labeling alone was sufficient.</p>
<p><strong>Spending too much time on model variations</strong><br>
I ended up with an ensemble of 6 models, which did not significantly improve the LB. I also tried BioBERT, PubMedBERT, and BioClinicalBERT, but they did not produce score improvement.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Folds</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-large</td>
<td>5</td>
<td>0.8892</td>
<td>0.88865</td>
<td>0.89002</td>
</tr>
<tr>
<td>deberta-large</td>
<td>5</td>
<td>0.8888</td>
<td>0.89047</td>
<td>0.89042</td>
</tr>
<tr>
<td>deberta-xlarge</td>
<td>5</td>
<td>0.8898</td>
<td>0.89062</td>
<td>0.89073</td>
</tr>
<tr>
<td>roberta-large</td>
<td>5</td>
<td>0.8892</td>
<td>0.88961</td>
<td>0.88909</td>
</tr>
<tr>
<td>funnel-transformer-medium</td>
<td>5</td>
<td>0.8873</td>
<td>0.88628</td>
<td>0.88773</td>
</tr>
<tr>
<td>funnel-transformer-xlarge</td>
<td>5</td>
<td>0.8901</td>
<td>0.88820</td>
<td>0.89004</td>
</tr>
<tr>
<td>6 models Ensemble</td>
<td>5</td>
<td>0.8942</td>
<td>0.89211</td>
<td>0.89067</td>
</tr>
</tbody>
</table>
<h1>Successes</h1>
<p><strong>Pseudo Labeling</strong><br>
Subsequent training used hard labels. Also, I used all Pseudo Labels (612k) in each subsequent training. However, this seems to have been excessive.</p>
<p><strong>Post-Processing</strong><br>
As is already well known from others' solutions, the first character of the predicted string is often missing. By restoring this, I obtained +0.002.</p>
<pre><code>def get_results(char_probs, th=0.5):
results = []
texts = test.pn_history.values
for text, char_prob in zip(texts, char_probs):
_result = np.where(char_prob >= th)[0] + 1
_result = [list(g) for _, g in itertools.groupby(_result, key=lambda n, c=itertools.count(): n - next(c))]
result = []
for r in _result:
min_r = min(r)
max_r = max(r)
if min_r > 0 and text[min_r - 1] not in [' ', '\r', '\n']:
min_r -= 1
result.append(f"{min_r} {max_r}")
result = ";".join(result)
results.append(result)
return results
</code></pre>
<p><strong>Faster inference</strong><br>
By minimizing the max_len of the text, I was able to complete the LB calculation on 30 models in about 480 minutes. The following notebook was helpful. (It is surprising that it has not received more votes.)<br>
<a href="https://www.kaggle.com/code/hengck23/deberta-v3-large-baseline-fast-version" target="_blank">https://www.kaggle.com/code/hengck23/deberta-v3-large-baseline-fast-version</a></p>
<p>This competition was time consuming for me, but I believe it was an important experience that will lead me to the next step.<br>
See you at the next competition😉</p>
|
NBME - Score Clinical Patient Notes
|
7th place solution: Get 0.892 in just 10 minutes
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Maybe ;)<br>
We can always be inspired by the various discussions and notebooks from this large community.</p>
<p>Thanks! And congrats on finishing in the prize zone!</p>
<p>Thanks for your mention of PL leakage problem !!</p>
<p>Firstly, we apply <code>feature_num</code> mapping to make it consecutive</p>
<pre><code>feature_uniques = train.feature_num.unique()
idx_to_fea = dict()
fea_to_idx = dict()
for ii in range(len(feature_uniques)):
idx_to_fea[ii] = feature_uniques[ii]
fea_to_idx[feature_uniques[ii]] = ii
for ii in range(len(test)):
test.loc[ii, 'feature_num'] = fea_to_idx[test.loc[ii, 'feature_num']]
test['idx'] = 0
for ii in range(len(test)):
test.loc[ii, 'idx'] = ii # store index
</code></pre>
<p>Then we apply the function below on <code>test</code> to get <code>test_folds</code>.</p>
<pre><code>def apply_list(df):
id_list = df.groupby('pn_num')['id'].apply(list)
pn_list = df.groupby('pn_num')['pn_num'].first()
case_list = df.groupby('pn_num')['case_num'].first()
feature_list = df.groupby('pn_num')['feature_num'].apply(list)
idx_list = df.groupby('pn_num')['idx'].apply(list)
# anno_list = df.groupby('pn_num')['annotation'].apply(list)
# loc_list = df.groupby('pn_num')['location'].apply(list)
pn_list = df.groupby('pn_num')['pn_history'].first()
# anno_len_list = df.groupby('pn_num')['annotation_length'].apply(list)
# fold_list = df.groupby('pn_num')['fold'].first()
feature_text_list = df.groupby('pn_num')['feature_text'].apply(list)
tmp_df = pd.DataFrame({'id': id_list,
'idx': idx_list,
'case_num': case_list,
'pn_num': pn_list,
'feature_num': feature_list,
'feature_text': feature_text_list,
'pn_history': pn_list}).reset_index(drop=True)
return tmp_df
</code></pre>
<p><br>
The <code>test_folds</code> has the length of <code>pn_num</code>.</p>
<p>The following code would make token probs to char probs.</p>
<pre><code>prediction = inference_fn(test_loader, model, device) # prediction shape: [len(test_folds), max_len, 143]
prediction = prediction.transpose((0, 2, 1)) # prediction shape: [len(test_folds), 143, max_len]
prediction_list = np.zeros((len(test), CFG.max_len))
for t in range(len(test_folds)):
tmp_feature_num = np.array(test_folds.loc[t, 'feature_num'])
tmp_idx = np.array(test_folds.loc[t, 'idx'])
for k, m in zip(tmp_idx, tmp_feature_num):
prediction_list[k] = prediction[t][m]
char_probs = get_char_probs(test['pn_history'].values, prediction_list, CFG.tokenizer)
</code></pre>
<p><a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> Thanks and congrates to you 2nd place!</p>
<blockquote>
<p>with a 143 multiclass model. It was training fast but was not learning…</p>
</blockquote>
<p>Yeah. If we only use the training data, we get a score of only 0.0x; the model does not learn at all.</p>
<p>We did not use the <code>feature_text</code> ..</p>
<p><a href="https://www.kaggle.com/Naoism" target="_blank">@Naoism</a> Right!</p>
<p><a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a> Thanks!</p>
<blockquote>
<p>I think the model's input is "pn_history" and output is probabilities of 143 features</p>
</blockquote>
<p>Yes, the model output is probabilities of 143 features. </p>
<p>When a 'pn_history' feed into the model, it would ouput an matrix <code>preds</code> of shape (143, max_len). As the <code>feature_num</code> has been already known, we could access the feature probs with <code>preds[k]</code>. Where <code>k</code> is the <code>feature_num</code>. Then the feature probs could be processed by original code to get char_probs.</p>
<p>And we mapping the <code>feature_num</code> to consecutive numbers (0, 1, 2, …, 142) for convenience.</p>
<p>First of all, I would like to thank competition organizers for hosting this interesting competition. And <br>
thanks to my great teammate <a href="https://www.kaggle.com/zzy990106" target="_blank">@zzy990106</a> , we discuss and work hard for the whole last month to explore new methods.. <br>
And also thank to the community of great notebooks and discussions.</p>
<h2>Summary</h2>
<ul>
<li>Baseline</li>
<li>Models</li>
<li>Pseudo labeling</li>
<li>Postprocess</li>
<li>New pipeline (Get 0.892 in just 10 minutes)</li>
<li>Submission</li>
</ul>
<h2>Baseline</h2>
<p>We used the baseline based on the great notebook of <a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train" target="_blank">this one</a>.</p>
<h2>Models</h2>
<p>We used the following models:</p>
<ul>
<li>deberta-v3-large</li>
<li>deberta-large</li>
<li>roberta-large</li>
<li>deberta-v2-xlarge</li>
</ul>
<p>Before training with the baseline, these models were pretrained on unlabeled data, which improves both CV (+~0.005) and LB.</p>
<h2>Pseudo labeling</h2>
<p>The PL were based on ensembling of models mentioned above. To avoid cv leakage, we used single fold to PL. You could see detail <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/315321" target="_blank">here</a>.</p>
<p>For the training process, we randomly sample 100k PL data and combine them with the train data on each epoch.</p>
<h2>Postprocess</h2>
<p>When we did bad case analysis, we found such mismatched case:</p>
<p><a href="https://postimg.cc/753jL58N" target="_blank"><img src="https://i.postimg.cc/vZky3n0C/1651631554207.jpg" alt="Mismatched case"></a></p>
<p><code>j</code> was lost in the prediction. This only happened for words that do not begin with a space. So we changed the postprocess code:</p>
<pre><code>def get_results_with_text(char_probs, texts, th=0.5):
results = []
for char_prob, text in zip(char_probs, texts):
result = np.where(char_prob >= th)[0] + 1
result = [list(g) for _, g in itertools.groupby(result, key=lambda n, c=itertools.count(): n - next(c))]
tmp_result = []
for r in result:
# print(r)
min_r = min(r)
max_r = max(r)
if min_r > 0 and text[min_r - 1] != ' ':
# print(text[min_r])
min_r -= 1
tmp_result.append(f"{min_r} {max_r}")
tmp_result = ";".join(tmp_result)
results.append(tmp_result)
# result = [f"{min(r)} {max(r)}" for r in result]
# result = ";".join(result)
# results.append(result)
# break
temp = []
for pred in results:
if len(pred)>0 and pred[0:2] == '1 ':
pred = '0' + pred[1:]
# print(pred)
temp.append(pred)
results = temp
return results
</code></pre>
<p>It could boost both cv (+0.001~0.002) and lb (+0.001). </p>
<h2>New pipeline</h2>
<p>In the last period of the competition, we realized that the baseline we used did not consider the relationship between <code>features</code> in the same <code>case num</code>. After discussion, <a href="https://www.kaggle.com/zzy990106" target="_blank">@zzy990106</a> proposed that we could use multi-class classification. So we coded a new pipeline with a 143-way (one per feature) classification head.</p>
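<p>To make the idea concrete, here is a minimal sketch of such a head (the backbone name and head details are assumptions; the actual pipeline differs):</p>
<pre><code>import torch.nn as nn
from transformers import AutoConfig, AutoModel

class MultiFeatureModel(nn.Module):
    # One token-classification head with 143 outputs (one per feature),
    # so a single forward pass over a pn_history scores every feature at once.
    def __init__(self, model_name="microsoft/deberta-v3-large", num_features=143):
        super().__init__()
        self.config = AutoConfig.from_pretrained(model_name)
        self.backbone = AutoModel.from_pretrained(model_name, config=self.config)
        self.head = nn.Linear(self.config.hidden_size, num_features)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        return self.head(hidden)  # (batch, seq_len, 143)
</code></pre>
<p>At inference, sigmoid(logits)[:, :, k] gives the token-level probabilities for feature_num k, which can then be converted to char probabilities as in the original pipeline.</p>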
<p>For the new pipeline, we regarded each <code>pn_num</code> text as a sample. Since we have extra PL data (about 40000 samples), we could train the models better. Otherwise, the train dataset would only contain 1000 samples (1000 <code>pn_num</code>)…</p>
<p>The inference for this new pipeline is really fast, only takes ~10mins for deberta-v3-large (5 folds), ~15mins for deberta-v2-xlarge (5 folds).</p>
<p>And the cv and lb are pretty high:</p>
<ul>
<li>deberta-large (cv: 0.895, LB: 0.892, PB: 0.891)</li>
<li>deberta-v3-large (cv: 0.894, LB: 0.891, PB: 0.892)</li>
<li>roberta-large (cv: 0.894, LB: 0.891, PB: 0.892)</li>
<li>deberta-v2-xlarge (cv: 0.8952, LB: 0.891, PB: 0.892)</li>
</ul>
<p>Maybe we could ensemble more than 20 models using this new pipeline…</p>
<h2>Submission</h2>
<p>The submission we selected was based on an ensemble of 8 models, but this was not our best PB. We still had 6 submissions with higher PB that we did not choose… They all have very high CV.</p>
<p>We found that the variance of the LB was pretty high and that there could be a shakeup, so we trusted our CV more than the LB.</p>
|
NBME - Score Clinical Patient Notes
|
8th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry for the typo, fixed.</p>
<p>First of all, I would like to thank the organizers for hosting this competition, and our awesome teammates <a href="https://www.kaggle.com/goldenlock" target="_blank">@goldenlock</a> <a href="https://www.kaggle.com/syzong" target="_blank">@syzong</a> <a href="https://www.kaggle.com/leolu1998" target="_blank">@leolu1998</a> for their continuous efforts in the last two months. (BTW, we are the champion in the number of submissions, lol)</p>
<p>We ended up in 8th place, with scores of 0.893 for both Private LB and Public LB, which is relatively stable. But we moved up a lot in private LB, from 16th place in Public LB to 8th place in Private LB.</p>
<h3>Training and Inference</h3>
<p>We mainly refer to the notebook:<br>
<a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train</a><br>
<a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-inference" target="_blank">https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-inference</a></p>
<p>Big thanks for <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a></p>
<h3>ITPT</h3>
<p>The pn_history data in the patient_notes.csv is utilized to In-Task Pre-Training based on deberta-v3-large and deberta-xlarge. The data is processed to remove the line breaks, since the pre-training data is based on each line as a sentence. The code is as follows:</p>
<pre><code>def clean_spaces(txt):
txt = re.sub('\n', ' ', txt)
txt = re.sub('\t', ' ', txt)
txt = re.sub('\r', ' ', txt)
return txt
</code></pre>
<p>Parameters:</p>
<pre><code>mlm_probability=0.15
num_train_epochs=30
per_device_train_batch_size=4
per_device_eval_batch_size=8
learning_rate=1.5e-5 #(xlarge)
learning_rate=3e-5 #(v3-large)
gradient_accumulation_steps=8
</code></pre>
<p>For the comparison test of the mlm_probability, we tried 0.1, 0.2 and 0.5, and none of them worked as well as the default parameter (0.15). The improvement from ITPT is obvious, from 0.883 to 0.886 on a single model (5 folds).</p>
<h3>Adversarial Training (FGM)</h3>
<p>CV improved by about 0.0005 after adding FGM during fine-tuning of deberta-v3-large, but it didn’t work on deberta-xlarge.</p>
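<p>For readers unfamiliar with FGM, the widely shared reference implementation looks roughly like this (the epsilon and the embedding-name filter are typical defaults, not necessarily the exact values used here):</p>
<pre><code>import torch

class FGM:
    # Fast Gradient Method: perturb the word embeddings along the gradient
    # direction, run an extra forward/backward pass, then restore the weights.
    def __init__(self, model, eps=1.0, emb_name="word_embeddings"):
        self.model = model
        self.eps = eps
        self.emb_name = emb_name
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.eps * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}
</code></pre>
<p>Typical usage per step: loss.backward(), fgm.attack(), compute and backward the adversarial loss, fgm.restore(), then optimizer.step().</p>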
<h3>Post-processing</h3>
<p>In the process of analyzing the data, we found a number of cases with unrecognized first letter of the word, as shown in the figure below:</p>
<p><img src="https://ibb.co/sPZ2myd" alt=""></p>
<pre><code>def get_results_pp2(char_probs, th=0.5, texts=None):
results = []
for idx, char_prob in enumerate(char_probs):
        text = texts[idx] # reference text
result = np.where(char_prob >= th)[0] + 1
result = [list(g) for _, g in itertools.groupby(result, key=lambda n, c=itertools.count(): n - next(c))]
temp = []
for r in result:
start = min(r)
end = max(r)
            if start <= 1: # fix the case where the 0th character of the text is lost
start = 0
elif start == end:
start -= 1
            elif re.match(r'^[a-zA-Z0-9]$', text[start - 1]) and (text[start - 2] in ['\t', '\n', '\r', ',', '.', ':', ';', '-', '+', '"', '(', '/', '&', '*']): # fix losing the first character of a word when there is no space before it
start -= 1
else:
pass
temp.append(f"{start} {end}")
temp = ";".join(temp)
results.append(temp)
return results
</code></pre>
<p>This post-processing can improve 0.002 on a single model and about 0.001 after model ensemble.</p>
<h3>Pseudo-Labeling</h3>
<p>We first ensembled the 5-fold deberta-xlarge and v3-large models with post-processing and got 0.891 on the Public LB. We then used this ensemble to pseudo-label the unlabeled data in patient_notes.csv (sampling 2000 unlabeled pn_num).</p>
<p>This practice of pseudo-labeling without separating folds can lead to overfitting offline scores (0.90+), refer to discussion:</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/315321" target="_blank">https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/315321</a></li>
<li><a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129840" target="_blank">https://www.kaggle.com/c/google-quest-challenge/discussion/129840</a></li>
</ul>
<p>Separating folds when making pseudo-labels: </p>
<p>train_data_fold0 -> models_fold0 -> pl_fold0 … models_fold4 -> pl_fold4<br>
and then train_data_fold0 + pl_fold0 -> models_fold0_pseudo<br>
(each fold using different 1000 pn_num, total 5000 pn_num)</p>
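<p>A minimal sketch of that fold-separated assignment (the helper name and the 1000-per-fold split follow the description above; the actual labeling and training code is omitted):</p>
<pre><code>import numpy as np

def assign_pseudo_pn_nums(unlabeled_pn_nums, n_folds=5, per_fold=1000, seed=42):
    # Give each fold its own disjoint set of unlabeled pn_nums, so the pseudo
    # labels used to train fold k come only from fold k's models and no
    # information leaks across folds.
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(unlabeled_pn_nums)
    return {fold: shuffled[fold * per_fold:(fold + 1) * per_fold] for fold in range(n_folds)}
</code></pre>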
<p>CV and LB score are relatively consistent, CV 0.890~0.891 (LB 0.890) and CV 0.891 (LB 0.892). We also tried the soft label through pseudo-labeling, which is a char prob floating-point number instead of hard label (0,1), and hard label works better on LB. Public LB score reached 0.892 (single model) after pseudo-labeling.</p>
<h3>Ensemble</h3>
<p>We used char prob blending.</p>
<p>We were the first team to exceed 0.893 on LB (based on overfitting pseudo-label model: 5 folds deberta-xlarge+5 folds deberta-v3-large, which scored 0.891 on PB)<br>
In the last month we were stuck in a bottleneck of LB 0.893.<br>
We trained (using separate multi-fold pseudo labeling) 5 folds deberta-xlarge and 5 folds deberta-v3-large two weeks before the competition deadline, and also achieved a LB score of 0.893. (Which also scored 0.891 on PB)<br>
Finally, we ensembled the two 0.893 models as the final submission (2 * 5fold deberta-xlarge, 3 * 5fold deberta-v3-large).</p>
<h3>Threshold</h3>
<p>Several thresholds 0.43, 0.44, 0.45, 0.48, etc. were tested on the Public LB, and the best result was 0.44. But after the Private LB was released, we found that 0.45 was a better threshold than 0.44. Unfortunately, we did not select it.<br>
Meanwhile, we set the threshold for each case_num and feature_num separately, but it did not reach the expected result (maybe our threshold range was not well constrained); refer to the 4th place’s sharing:<br>
<a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/322799" target="_blank">https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/322799</a></p>
<h3>Others</h3>
<p>Some experiments not work (on Public LB):</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Method</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Deberta-v3-large(3 fold)</td>
<td>smoothFocalLoss</td>
<td>0.880</td>
<td>0.880</td>
<td>0.883</td>
</tr>
<tr>
<td>Deberta-v3-large(1 fold)</td>
<td>+LSTM</td>
<td>0.876</td>
<td>0.874</td>
<td>0.878</td>
</tr>
<tr>
<td>Deberta-v3-large(5 fold)</td>
<td>+WeightedLayerPooling</td>
<td>0.882</td>
<td>0.884</td>
<td>0.880</td>
</tr>
<tr>
<td>Deberta-v3-large(5 fold)</td>
<td>+Stage 2 : LGBM</td>
<td>0.949</td>
<td>0.883</td>
<td>0.885</td>
</tr>
</tbody>
</table>
|
NBME - Score Clinical Patient Notes
|
9th Weight search and threshold modification
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, congratulations on your good result too. We selected some feature_nums that are greatly affected by the threshold and have long annotations, because short annotations would be overfitted by threshold fine-tuning. We searched the threshold of the blended model, set the threshold range to 0.4-0.55, and verified it on every fold. PB and LB can increase by about 0.001~0.002.</p>
<p><a href="https://www.kaggle.com/yanbojwang" target="_blank">@yanbojwang</a> <a href="https://www.kaggle.com/qinhui1999" target="_blank">@qinhui1999</a> <a href="https://www.kaggle.com/yankuoaaagmailcom" target="_blank">@yankuoaaagmailcom</a> <a href="https://www.kaggle.com/alex821" target="_blank">@alex821</a></p>
<h1>9th Solution</h1>
<p>First, great thanks to Kaggle and NBME for hosting such an amazing competition, and congrats to all winners!</p>
<p>thanks to my great teammate <a href="https://www.kaggle.com/yanbojwang" target="_blank">@yanbojwang</a> <a href="https://www.kaggle.com/qinhui1999" target="_blank">@qinhui1999</a> <a href="https://www.kaggle.com/yankuoaaagmailcom" target="_blank">@yankuoaaagmailcom</a> <a href="https://www.kaggle.com/alex821" target="_blank">@alex821</a> </p>
<h2>Summary</h2>
<p>We ensembled 6 token classification models, set separate thresholds for feature_nums with large changes, and applied post-processing.</p>
<h2>Models</h2>
<p>We trained following 6 token classification models and used them all for final submission(29 models).</p>
<h3>Here is Public LB</h3>
<ul>
<li>token classification Deberta-v3-large 5fold: 0.887 with pp</li>
<li>token classification Deberta-v1-large 5fold: 0.887 with pp</li>
<li>token classification Deberta-v2-xlarge 4fold: 0.886 with pp</li>
<li>token classification Deberta-v1-xlarge 5fold: 0.890 with pp</li>
<li>token classification funnel-transformer 5fold: 0.883 with pp</li>
<li>token classification electra 5fold:0.886 with pp</li>
</ul>
<h3>Deberta-v3-large, Deberta-v1-large, Deberta-v2-xlarge,Deberta-v1-xlarge Method used:</h3>
<ul>
<li>MLM(0.30)</li>
<li>FocalLoss</li>
<li>2fold pseudo labeling</li>
<li>FGM</li>
<li>multiple-dropout</li>
</ul>
<h3>funnel-transformer, electra Method used:</h3>
<ul>
<li>MLM(0.30)</li>
<li>BCE</li>
<li>Special char (add ‘\n,\s,\t’)</li>
<li>No pseudo labeling</li>
<li>FreeLB</li>
<li>SMA</li>
</ul>
<h2>Threshold</h2>
<p>Based on the text length of different features and the trend of F1, we selected some features to set thresholds for separately. Instead of setting separate thresholds for many features, we chose a small number to be safe; however, setting more values would have given higher private scores.</p>
<h2>Tricks</h2>
<p>We achieved a large inference speed-up by sorting the samples by length separately for each tokenizer (30 models in 9h).<br>
Because our pseudo-label performance on the leaderboard was not good enough, we only used a few folds of data for pseudo-label training and added those models to the final blend to reduce jitter. Based on CV, we gave each model a weight to obtain a better LB.<br>
Setting thresholds for some features adds 0.001-0.002 on PB.<br>
Weight search improves ~0.001.</p>
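<p>As an illustration of the weight search mentioned above (my own sketch; the scoring function and the random-search strategy are assumptions), one simple approach is a Dirichlet random search over out-of-fold char probabilities:</p>
<pre><code>import numpy as np

def search_blend_weights(oof_probs_list, score_fn, n_trials=200, seed=0):
    # Random search over blend weights: sample weights from a Dirichlet
    # distribution and keep the combination with the best OOF score.
    # score_fn is a hypothetical callable that maps blended probabilities
    # to the competition metric on the out-of-fold data.
    rng = np.random.default_rng(seed)
    best_weights, best_score = None, -np.inf
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(len(oof_probs_list)))
        blended = sum(wi * p for wi, p in zip(w, oof_probs_list))
        score = score_fn(blended)
        if score > best_score:
            best_score, best_weights = score, w
    return best_weights, best_score
</code></pre>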
<h2>Post Process</h2>
<pre><code>if len(result) > 0:
    for start, end in result:
        if start < len(valid_text):
            if valid_text[start-1] != ' ':
                start -= 1
            if valid_text[start] == '\n':
                start += 1
</code></pre>
<p>Our score on the private leaderboard could actually have reached 0.894. We expected a big shake-up in this competition, so we chose to be conservative.</p>
|
NBME - Score Clinical Patient Notes
|
Solution Rank 46th with Some Feature corrections..
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We used MLM training of the models and the feature corrections below in one of the ensembled models. Pseudo labels (PL) led to some overfitting of the CV, so we mixed PL models with non-PL models to keep the gain and neutralize the overfitted part.<br>
Our best final solution was a mix of<br>
1) Deb v3 Large<br>
2) Deb large<br>
3) Deb v3 Large PL </p>
<p>Our second-best selected solution was a mix of 4 models:<br>
deb large + debv3 (no PL) + debv3 with feature corrections (no PL) + deberta base (PL)</p>
<p>Based on error analysis of the missing places in the predictions, I came up with the feature corrections below.<br>
They improved some fold-wise CVs dramatically, but unfortunately the LB didn't see that much gain.<br>
<strong>If anyone wants to try their best solution with these corrections, try it out and let me know the results.</strong> </p>
<pre><code> print(features.iloc[132,2])
features.iloc[132,2]='Vomiting-or-V'
print(features.iloc[73,2])
features.iloc[73,2]='Episodes-of-heart-racing-or-heart-poundings-or-palpitations'
print(features.iloc[90,2])
features.iloc[90,2]='Recent-upper-respiratory-infection-symptoms or-nasal-congestion'
print(features.iloc[119,2])
features.iloc[119,2]='Diminished-energy-or-feeling-drained-or-somnolence'
print(features.iloc[76,2])
features.iloc[76,2]='No-illicit-drug-use-or-No-toxic-habbits'
print(features.iloc[6,2])
features.iloc[6,2]='Adderall-use-or-Amphetamine'
print(features.iloc[9,2])
features.iloc[9,2]='heart-pounding-OR-heart-racing-or-palpitations'
print(features.iloc[102,2])
features.iloc[102,2]='LMP-2-months-ago-or-Last-menstrual-period-2-months-ago'
print(features.iloc[[12,51,88],2])
features.iloc[[12,51,88],2]='Male-or-M-or-Mr.'
print(features.iloc[131,2])
features.iloc[131,2]='Neck-pain-or-Pain-neck'
print(features.iloc[135,2])
features.iloc[135,2]='viral-symptoms-OR-rhinorrhea-OR-scratchy-throat(ST)'
#new additions
print(features.iloc[95,2])
features.iloc[95,2]='No-shortness-of-breath-or-denies-dyspnea'
print(features.iloc[63,2])
features.iloc[63,2]='No-depressed-mood(good energy level)'
print(features.iloc[2,2])
features.iloc[2,2]='Chest-pressure-or-chest-pain'
print(features.iloc[4,2])
features.iloc[4,2]='Lightheaded-or-dizziness-or-presyncopal'
#print(features.iloc[3,2])
#features.iloc[3,2]='Intermittent-symptoms-or-random-episode'
#print(features.iloc[74,2])
#features.iloc[74,2]='Recent-visit-to-emergency-department(ED/ER)-with-negative-workup'
print(features.iloc[97,2])
features.iloc[97,2]='No-or-minor-relief-with-asthma-inhaler'
print(features.iloc[82,2])
features.iloc[82,2]='Feels-hot-OR-Feels-clammy-or-sweat'
print(features.iloc[77,2])
features.iloc[77,2]='Associated-nausea(N)'
print(features.iloc[81,2])
features.iloc[81,2]='Associated-throat-tightness-or-throat-narrowing'
print(features.iloc[54,2])
features.iloc[54,2]='No-blood-in-stool-or-BRBPR'
print(features.iloc[129,2])
features.iloc[129,2]='Myalgias-or-muscle-achiness'
#print(features.iloc[50,2])
#features.iloc[50,2]='2-to-3-beers-a-week-or-quit-or-few'
print(features.iloc[142,2])
features.iloc[142,2]='Subjective-fever-or-high-temperature'
print(features.iloc[117,2])
features.iloc[117,2]='Unsuccessful-napping-or-no-naps'
print(features.iloc[[57,134],2])
features.iloc[[57,134],2]='Nausea-or-N'
#features.iloc[5,2]='No-hair-changes-OR-no-nail-changes-OR-no-temperature-intolerance-or-no-
</code></pre>
|
NBME - Score Clinical Patient Notes
|
[129th place] Error analysis did the trick!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/vad13irt" target="_blank">@vad13irt</a>, also thank you for sharing your ideas in one of the other discussion posts, it gave me new insights! The indexes are correct but I might not have explained very clearly, my bad. </p>
<p>For the second picture, it is an example of a case where the word starts on the first line of the patient history, hence the start index = 0. When you run this code <code>np.where(char_prob >= th)[0] + 1</code>, the output becomes [1, 6] because of the +1. </p>
<p>Thank you host and kagglers for this competition, always interesting to see how AI solutions can be applied in a medical/healthcare setting. Big thank you to <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> and <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> for sharing their baseline code, I learn a great deal by reading through them. </p>
<p>This post might seem trivial compared to the many other solution write-ups the top kagglers are posting, but I really wanted to highlight the importance of error analysis, which proved to be a differentiator in my case. As highlighted in other solution write-ups, the <code>get_results()</code> function from the <a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train" target="_blank">baseline notebook</a> can be tweaked to improve CV/LB. This becomes apparent when you do an error analysis and visualize the predictions using Spacy. In fact, one could easily have gotten a private LB of 0.888, which would put you in the bronze medal range, by using the best scoring notebook <a href="https://www.kaggle.com/code/motloch/nbme-ensemble-debertas" target="_blank">here</a> and adding a few more lines of code to the get_results() function. More info in the post-processing section. </p>
<p><a href="https://postimg.cc/LgDQtqfk" target="_blank"><img src="https://i.postimg.cc/nMf8tBhW/Screenshot-2022-05-04-130401.png" alt="Screenshot-2022-05-04-130401.png"></a></p>
<p><strong>Baseline</strong><br>
I used the really insightful notebook from <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> found <a href="https://www.kaggle.com/code/yasufuminakama/nbme-deberta-base-baseline-train" target="_blank">here</a></p>
<p><strong>Models</strong><br>
The following models were ensembled by taking a weighted average of character level probabilities:</p>
<ul>
<li>deberta-v1-large</li>
<li>deberta-v2-xlarge</li>
<li>deberta-v3-large x2</li>
<li>electra-large</li>
</ul>
<p>Weightage and overall threshold were found using Optuna.</p>
<p><strong>Post-processing</strong><br>
I made some adjustments to the get_results() function. This boosted LB by ~0.003. </p>
<p>As DeBERTa tokenizes the white-space together with the words, the white-space has the same character level probability as the token itself. In the case where the word of interest has a white-space before it, the <code>get_results()</code> function works perfectly. </p>
<p>Using an example below, the output would be [1,7] after this line of code in the <code>get_results()</code> function. <br>
<code>result = np.where(char_prob >= th)[0] + 1</code></p>
<p>Since the last index of a list is non-inclusive, the resulting annotation would be 'father' without any white-space. </p>
<p><a href="https://postimg.cc/DW2Z50Sj" target="_blank"><img src="https://i.postimg.cc/DzGXvbnT/Screenshot-2022-05-04-132601.png" alt="Screenshot-2022-05-04-132601.png"></a></p>
<p>However, this becomes an issue if your word of interest is at the start of a new line and does not have a white-space before it. In this case, the output would be [1,6] and the resulting annotation will be 'ather'. </p>
<p><a href="https://postimg.cc/XZCrpmy3" target="_blank"><img src="https://i.postimg.cc/VN7nHcfS/Screenshot-2022-05-04-133401.png" alt="Screenshot-2022-05-04-133401.png"></a></p>
<p>To rectify this, I edited the post-processing function.</p>
<pre><code>def get_results(char_probs, texts, th=0.5):
results = []
for char_prob, text in zip(char_probs, texts):
result = np.where(char_prob >= th)[0] + 1
result = [list(g) for _, g in itertools.groupby(result, key=lambda n, c=itertools.count(): n - next(c))]
result = [f"{min(r)} {max(r)}" for r in result]
adjusted_results = []
for indexes in result:
start_index, end_index = indexes.split()
prev_index = int(start_index) - 1
if prev_index == 0 and char_prob[prev_index] >= th:
new_result = str(prev_index) + ' ' + end_index
adjusted_results.extend([new_result])
elif char_prob[prev_index] >= th and (text[prev_index] != " " and text[prev_index] != ","):
new_result = str(prev_index) + ' ' + end_index
adjusted_results.extend([new_result])
else:
adjusted_results.extend([start_index + ' ' + end_index])
adjusted_results = ";".join(adjusted_results)
results.append(adjusted_results)
return results
</code></pre>
|
NBME - Score Clinical Patient Notes
|
[24th place solution]: Pretrain tasks is the key!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>With all the unlabeled data we trained for 10 epochs; if sampling, I think fewer than 5 epochs is enough.</p>
<p>First, thanks to Kaggle and the competition hosts for a challenging NLP competition after Feedback Prize - Evaluating Student Writing ended. This post summarizes the effort of our team, and it was a great experience working with my teammates! We didn't have much time for the NBME challenge (about 3 weeks) after focusing on the Happywhale competition.</p>
<p>After reading the solutions by the top teams, we realized that what was missing from our solution was pseudo labeling (we actually noticed it before, we just didn't make it work like other teams did).</p>
<h1>Modeling</h1>
<h2>Pretrain task</h2>
<ol>
<li>Task adaptation with MLM (standard masked language modelling pre-training with a token masking probability of 0.15): Deberta-V2-xlarge, Deberta-V2-xxlarge and Deberta-V3-large</li>
<li>Task adaptation with WWM (whole word masking) pre-training: Deberta-V2-xlarge and Deberta-V3-large (a rough sketch of this pre-training setup follows after this list)</li>
</ol>
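<p>A rough sketch of such task-adaptive pre-training with the Hugging Face collators (the hyperparameters, the example input, and the exact whole-word-masking behaviour for DeBERTa's sentencepiece tokenizer are assumptions, not the team's code):</p>
<pre><code>from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForWholeWordMask, Trainer, TrainingArguments)

# Hypothetical input: in practice this would be patient_notes["pn_history"].tolist()
texts = ["17 yo male with recurrent palpitations for the past 3 months"]

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-v3-large")

ds = Dataset.from_dict({"text": texts})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

# DataCollatorForWholeWordMask masks all sub-tokens of a word together;
# swap in DataCollatorForLanguageModeling for standard MLM.
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="wwm_pretrain", num_train_epochs=10,
                         per_device_train_batch_size=8, learning_rate=2e-5,
                         save_strategy="epoch", report_to="none")
Trainer(model=model, args=args, train_dataset=ds, data_collator=collator).train()
</code></pre>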
<h2>Model Experience</h2>
<p>We try a lot of models and custom heads:</p>
<ul>
<li>Deberta-V3-large</li>
<li>Deberta-V2-xlarge</li>
<li>Deberta-V2-xxlarge</li>
<li>Roberta-large</li>
<li>Deberta-V3-large with LSTM/GRU head</li>
<li>Deberta-V3-large with custom head (CNN-1D)</li>
<li>Deberta-V3-large concat last 4/12 layers</li>
</ul>
<p>Some of the results are detailed in the photos below<br>
<a href="https://postimg.cc/XBMMv8z4" target="_blank"><img src="https://i.postimg.cc/D007BpQJ/nbmemodel.jpg" alt="nbmemodel.jpg"></a></p>
<h1>Ensembling</h1>
<p>We used a simple average for the final ensemble solution; a weighted average might have been better, but we didn't try it. I also wanted to try stacking but didn't have time.</p>
<h1>Post Processing</h1>
<p>Our post process method consists of 2 parts:</p>
<ul>
<li>One that is almost the same as the misaligned-annotation fix in the 11th place solution</li>
<li>One that is the same as the thresholding of the 4th place solution: our team uses a threshold for each case_num and feature_num. CV increased from 0.895 to 0.896 but LB decreased a bit, so we restricted the range to 0.45-0.55. We tuned based on CV to reduce overfitting to the public LB, requiring a score increase on both CV and LB. This post-processing increased the LB (both public and private) by about 0.001. Our rankings didn't change much between public and private LB :D</li>
</ul>
<h1>Things that might work but we didn't have time to try</h1>
<ol>
<li>Meta Pseudo Labels (MPL)</li>
<li>Knowledge Distillation (KD)</li>
<li>Mixup, Random mask</li>
</ol>
<h1>What didn't work for our team</h1>
<ol>
<li>Multisample Dropout</li>
<li>Pseudo labeling</li>
<li>LSTM/GRU head</li>
</ol>
<p>Thanks you <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> :D</p>
|
NBME - Score Clinical Patient Notes
|
deberta deberta deberta (68th place solution)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NBME - Score Clinical Patient Notes
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Deberta was my theme for this comp, so it is only fitting that the final predictions were made using only deberta models. </p>
<p>The only notable thing in this write-up is that I figured out how to get the masked language modeling for deberta to be a bit better. My <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/320670" target="_blank">previous post</a> showed how to get slightly better results than the standard implementation, but I did more investigation and I realized that the output hidden states need to be multiplied by the input embeddings to get token logits. If you do this, the fill-mask predictions start to make sense for the pretrained models right out of the box. <a href="https://colab.research.google.com/drive/1fkx_wZ3j-c0N6xZYYSyqCspzNhm4ZnKd?usp=sharing" target="_blank">See here for details</a>. I'll do my best to get this incorporated into <code>transformers</code> as soon as possible. I still think this implementation is incomplete as it does not utilize the EMD (enhanced mask decoder) feature, so more experimentation is needed.</p>
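<p>A minimal sketch of that idea (simplified to a plain matrix product with the input embedding matrix; it ignores the EMD part mentioned above and any bias term, so treat it as an approximation rather than the exact fix):</p>
<pre><code>import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModel.from_pretrained("microsoft/deberta-v3-large")

text = "The patient reports chest [MASK] and shortness of breath."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden_size)
    emb = model.get_input_embeddings().weight    # (vocab_size, hidden_size)
    logits = hidden @ emb.T                      # (1, seq_len, vocab_size)

mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
print(tokenizer.decode(logits[0, mask_pos].topk(5).indices))
</code></pre>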
<p>I used this new implementation to do more MLM for deberta v1 large, v2 xlarge, v2 xxlarge, and v3 large using all of the patient notes. I was able to get 75-85% accuracy on the masked predictions. Reinitializing the top layers helped for the larger models. </p>
<p>Regarding data, I tried to clean up the noisy labels as much as possible. I looked at the training samples that had consistently wrong predictions, and most of them turned out to be improperly labeled. It was very tedious, but I spent a couple of hours correcting about 120 or so. If I spent more time on this, I think I could have had a higher quality data set, but it was tedious and I ran out of time. This helped improve scores by about .003</p>
<p>Some other training tricks</p>
<ul>
<li>multi sample dropout</li>
<li>random masking </li>
<li>inserting newline tokens for v2 and v3 (these were added before MLM so that the embeddings got trained a little)</li>
<li>modifying the feature text ("IUD" --> "IUD or intrauterine device")</li>
</ul>
<p>The final score (0.889/0.890, public/private) was with 2 v1 large, 1 v2 xxlarge, and 3 v3 large models. The predictions were mapped to characters and averaged over all models equally. This only took 3 hours to run, so there was definitely more potential for other models. </p>
<p>Most of the models used were trained in the last few days, so there was some chaos which resulted in mistakes being made. For example, the models trained with modified feature text only got the default feature text (but I only realized this after all the subs were used). I even forgot to use my v2 xl models and overwrote some others. oops.</p>
<p>No pseudolabels were used for these models.</p>
<p>Some thoughts that never got implemented:</p>
<ul>
<li>lower case the ALL CAPS TEXT</li>
<li>use very custom post-processing depending on the feature num/feature text (e.g. make sure there is the correct number for all features with ages: 45-year, 67-year, etc., if there isn't a typical prediction for feature text = "Female" (such as "f", "female", "ms", "woman" ) but there is "G2P2", use that as the prediction. )</li>
<li>CRF </li>
<li>use sentence transformers to compare predictions with labeled annotations</li>
<li>come up with an iterative process of generating pseudolabels, training on the the labels, and relabeling the problematic ones. My manual process was quite tedious so an automated way would have been much better.</li>
<li>use models other than deberta 😄</li>
</ul>
<p>Thanks to <a href="https://www.kaggle.com/kishalmandal" target="_blank">@kishalmandal</a> for being a good teammate 🤗</p>
<p>Compute was done on Kaggle, Colab, and Jarvislabs.</p>
|
NFL 1st and Future - Impact Detection
|
15th place solution: single stage object tracking
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. I haven't tried each postprocess separately because they should be optimized at the same time.</p>
<p>Thanks <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> for kind words, and congrats for finishing in gold zone! I should have teamed up with you guys, it would be definitely fun! I was actually desperate for talking and discussing about this interesting challenge with talented teammate…😂</p>
<p>Thanks. Yes, it was actually too good to discard and try a completely different approach when I figured out, a week before the deadline, that it couldn't achieve over 0.50…lol</p>
<p>Thanks, but i joined this competition at the very early stage, so I can't say it's matter of time😄</p>
<p>FIrst of all, thanks to Kaggle and host for having such a interesting competition. And also congrats to all the winners. I believe it was one of the toughest competition since there were so many talented kagglers while total number of competitor was relatively small.</p>
<p>Here I'll write up a short summary of my solution and also where I failed.</p>
<h1>TLDR;</h1>
<ul>
<li>No ensemble single stage model</li>
<li>FairMOT as baseline for helmet tracking</li>
<li>Additional impact detection branch and team feature branch</li>
<li>A bunch of postprocess with track id, team and position</li>
</ul>
<h1>Baseline</h1>
<p>At first sight this task seems like an ideal application of object tracking, because we need to aggregate time-series information due to the metric design. So I chose <a href="https://arxiv.org/abs/2004.01888" target="_blank">FairMOT</a> as my baseline because it is the SoTA on the MOT benchmark. In short, FairMOT is an object tracking framework which is CenterNet with a metric-learning-based id feature branch. In addition to helmet tracking, my model also predicts an impact probability for each impact type and a team feature to distinguish opponent helmets. So in total, my model's outputs are the following:</p>
<ul>
<li>Helmet image coords and bbox wh and confidence (N * 5)</li>
<li>Id feature (N * 128)</li>
<li>Team feature (N * 128)</li>
<li>Impact probability (N * 6)</li>
</ul>
<p>where N is number of helmets.</p>
<p>As a small trick, I assigned the same tracking label to the same player in both views, so my model should output the same id feature for the same player in both views. While tracking, I associated helmets across both views. Additionally, I divided helmets into 2 groups, or teams, within a view. I simply use k-means for the clustering, but it works very well. After tracking, my model's outputs are the following (in addition to the above):</p>
<ul>
<li>Tracking id (N,)</li>
<li>Matched tracking id in another view (N,)</li>
<li>team class (0 or 1) (N,)</li>
</ul>
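<p>For the team split mentioned above, a minimal k-means sketch could look like the following (clustering the predicted team features of the helmets in one view; the array names are my assumptions):</p>
<pre><code>from sklearn.cluster import KMeans

def assign_teams(team_features):
    # team_features: (num_helmets, feature_dim) team embeddings for the
    # helmets of a single frame/view; split them into two clusters (teams).
    return KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(team_features)
</code></pre>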
<p>Other small details of training:</p>
<ul>
<li>backbone: HRNet32</li>
<li>image size: (708, 1280)</li>
<li>num epochs: 20 for Endzone, 35 for Sideline</li>
<li>extend impact label to +- 4 frames</li>
</ul>
<h1>Post process</h1>
<p>Even though my model predicts impact probabilities for all impact types, in my best submission I used only the prediction for impacts between helmets. And my postprocess procedure relies heavily on the hypothesis that a helmet impact happens between a pair of helmets from different teams.<br>
My post process consists of the following steps:</p>
<ol>
<li>Calculate maximum iou with opponent helmet</li>
<li>Take maximum of impact probability between the pair of highest iou helmets</li>
<li>Calculate minimum pixel distance with opponent helmet and normalize it by mean box size in frame</li>
<li>filter by helmet detection confidence threshold</li>
<li>filter by distance threshold</li>
<li>filter by impact probability threshold</li>
<li>maxpooling nms in time axis for same id</li>
<li>remove solo helmets which have no paired helmet in nearby frames</li>
</ol>
<p>As you can see there are a lot of hyperparameters. I optimized these thresholds with optuna for Endzone and Sideline separately. <br>
By doing so my CV was about 0.42 for Endzone and 0.53 for Sideline, and 0.45 in total. The CV and LB correlation was not so bad; my final score was actually 0.45 on both public and private.</p>
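<p>A rough sketch of that optuna tuning, assuming a hypothetical <code>evaluate_f1</code> helper that re-runs the post-process with the given thresholds on out-of-fold predictions (the parameter names and search ranges here are illustrative, not the actual ones):</p>
<pre><code>import optuna

def evaluate_f1(conf_thr, dist_thr, imp_thr):
    # placeholder: run the post-process with these thresholds on OOF
    # predictions and return the competition F1 score
    return 0.0

def objective(trial):
    conf_thr = trial.suggest_float("conf_thr", 0.1, 0.9)
    dist_thr = trial.suggest_float("dist_thr", 0.5, 5.0)
    imp_thr = trial.suggest_float("imp_thr", 0.05, 0.95)
    return evaluate_f1(conf_thr, dist_thr, imp_thr)

# one study per view (Endzone / Sideline), thresholds optimized jointly
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
print(study.best_params)
</code></pre>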
<h1>Where I failed?</h1>
<p>Obviously I failed to aggregate time-series information, for example with a 3d model or a 2-stage model, which most other solutions have. Instead, I spent most of my time in the last month aggregating relationships between instances in the same frame and in the other view. I believe there should be some breakthrough there, but I couldn't make it work. What I tried was:</p>
<ul>
<li>Making use of the other view's predictions (e.g. if there is no prediction in the other view, filter out more aggressively).</li>
<li>Making use of the matching id, even though it had the best matching accuracy among the methods I tried.</li>
<li>Homography estimation and optimization between Endzone and Sideline, or between image and sensor tracking positions.</li>
<li>Training and matching with <a href="https://github.com/magicleap/SuperGluePretrainedNetwork" target="_blank">superglue</a>. I used the id feature as descriptor and the center points as keypoints. I added superglue on top of FairMOT and trained them simultaneously. It worked to some extent, but simple linear sum assignment was better.</li>
</ul>
<p>I'd like to release the code, but it's quite messy now. If I manage to clean it up, I'll update this post.</p>
|
NFL 1st and Future - Impact Detection
|
181th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for organizing this fun competition!</p>
<p>It didn't go well, but here's what I tried.</p>
<p>Extending the bbox<br>
With the provided bboxes it was difficult to distinguish between impact and not_impact, and I thought this would lead to false positives.</p>
<p>So I thought that extending the bbox would make it easier to distinguish between them, and I did four types of hand labels: Helmet, Shoulder, Body, and Ground.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4289810%2Ff8aae5f74fb6d393b69c6b9d4e7f91f8%2F2021-01-05%2016.02.18.png?generation=1609833984847349&alt=media" alt=""></p>
<p>In the case of Head (helmet-to-helmet) impacts, we hand-labeled the box so that the two helmets are included.<br>
As shown in the figure, the detected impacts from the two models were compared, and if there were two or more detected impacts with an IoU greater than 0.1, the top two were adopted as impacts.</p>
<p>If there was only one or no overlapping impacts, it was considered a false positive.<br>
By doing this, we thought we could prevent false detections of only one helmet in situations where there was no impact at all, or false detections of unrelated areas such as the ground, the player's body, and others.</p>
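<p>A simplified sketch of that agreement check (the box format and keeping everything that overlaps, rather than only the top two, are our own simplifications):</p>
<pre><code>def iou(b1, b2):
    """b = (left, top, width, height) in pixels; returns intersection-over-union."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2 = min(b1[0] + b1[2], b2[0] + b2[2])
    y2 = min(b1[1] + b1[3], b2[1] + b2[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union if union > 0 else 0.0

def cross_check(boxes_a, boxes_b, thr=0.1):
    """Keep model A's detections only where model B also fired at the same spot."""
    return [ba for ba in boxes_a if any(iou(ba, bb) >= thr for bb in boxes_b)]
</code></pre>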
<p>I'm sure there were a lot of things I misunderstood, but thank you for reading.</p>
|
NFL 1st and Future - Impact Detection
|
18th Solution : YoloV5 and filtering using tracking data
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, that's it.<br>
We also gave an exact match a try, by interpolating the tracking data to the train data’s frequency, but with no success in the results.</p>
<p>Good question, devil is in the details ;).</p>
<p>We followed the guidelines provided by the host here:<br>
<a href="https://www.kaggle.com/c/nfl-impact-detection/discussion/198728" target="_blank">https://www.kaggle.com/c/nfl-impact-detection/discussion/198728</a></p>
<p>But then, you have some decisions to make :</p>
<ul>
<li>manage the fact that you will not get an exact match from "train frame" to "tracking timestamp": either you take the closest (what we chose) or you spread the impact over two "tracking timestamps"</li>
<li>definition of impact: either a boolean (true or false) or the number of impacts at this timestamp. Our own experiments fit better with the first option, but I believe that a correct setup can benefit from the second case.</li>
<li>sideline/endzone inconsistency: around 66% of marked impacts on sideline/endzone point to the same "tracking timestamp". A decision needs to be taken for the remaining 33%: keep as is, remove, or merge. We experimented with the first two options; the first one was better in our experiments.</li>
</ul>
<p>My teammates (@mathurinache & <a href="https://www.kaggle.com/matthieuplante" target="_blank">@matthieuplante</a>) and I would like to thank Kaggle and the hosts for the organisation of this competition. <br>
We would also like to thank <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> for the public baseline notebook he provided in the early stage of the competition.</p>
<p>We will describe our solution in an iterative way to see how, from this public baseline, we reached a private LB of 0.42+.</p>
<p><strong>Public baseline improvement</strong><br>
We take the idea shared by <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> to extend impacts on a [-4, 4] range. But, instead of only training on the frames with impact(s), we added in our train set the frames corresponding to -5, +5 offsets. From our point of view, it helps the model to understand the boundaries between impact or no impact. Using EfficientDetB5 trained on imgsize of 512x512 with a simple 80/20 split, we get as CV/Public/private results : 0.18/0.22/0.18</p>
<p>Based on the previous model, adding the post-processing that will be described in the last section, we reached CV/Public/Private results of 0.22/0.25/0.22</p>
<p>Moving then to yolov5 as base model and using the original resolution for our 2-class detection problem, we reached CV/Public/Private results of 0.25/0.28/0.24.</p>
<p>Finally, doing some ensembling (which turned out to be bad for the private LB) with the 0.226 public notebook, we reached a public LB of 0.35+ but a private LB of 0.26+</p>
<p><strong>Using tracking data as a frame selector</strong><br>
Tracking data seemed a nice playground for detecting frames with impacts, as information such as speed and acceleration at the player level was provided.</p>
<p>To work with this file, we decided to change a bit it structure from :<br>
[time, player info]</p>
<p>to :<br>
[time, player info, closest opponent info, global aggregates, impact]</p>
<p>Global aggregates are mean/std of player speed/acceleration/… at a given timestamp.<br>
We also took care to remove any translation- or rotation-dependent features such as x, y, direction, orientation. For direction and orientation, we added the scalar product between player and opponent values.<br>
Impact is obtained by reconciliation with the train csv file.</p>
<p>Then we built a model based on a simple Transformer encoder architecture (6 attention layers, 8 heads) with a linear classification head that processes our new tracking data over 9 consecutive timesteps (9 is a parameter).</p>
<p>We performed a 5-fold training on this data, managing class imbalance by downsampling. Each fold provided roughly an F1 score of 0.4, with roughly 0.4 precision and recall. At first this seemed poor, but because we are at player level, going back to frame level we get the following results: recall of 0.7 for a precision of 0.1. Accepting an upper bound of 0.7 on our recall, we were now in a situation where 1 frame out of 10 contains at least one impact (the original situation on train data was 3 out of 100).</p>
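<p>A minimal PyTorch sketch of such a frame selector (the input feature dimension, the <code>batch_first</code> layout and reading the logit from the central timestep are assumptions):</p>
<pre><code>import torch
import torch.nn as nn

class TrackingImpactNet(nn.Module):
    """Impact classifier over 9 consecutive tracking timesteps for one player."""
    def __init__(self, n_features, d_model=128, n_heads=8, n_layers=6):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):              # x: (batch, 9, n_features)
        h = self.encoder(self.proj(x))
        return self.head(h[:, 4])      # logit for the central timestep

model = TrackingImpactNet(n_features=32)   # 32 features is a placeholder
logits = model(torch.randn(8, 9, 32))      # -> shape (8, 1)
</code></pre>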
<p>Retraining our yolo-based model using a 5-fold strategy (not using the same seed as the frame selector) and using this frame selector for filtering did not improve our result at first!<br>
Indeed, as we changed our « prediction space », we also needed to change our validation space by limiting it to the scope of the frame selector.</p>
<p>This strategy gave us CV/public/private scores of 0.3711/0.3749/0.3767</p>
<p>Finally, we enhanced our train set by adding the FPs from our frame selector (guessing there were some hard samples in it) and we reached CV/Public/Private scores of 0.4117/0.3636/0.4220<br>
Small tweaks to the post-processing gave us our best public LB of 0.4666, for a private score of 0.4244.</p>
<p><strong>Post processing</strong><br>
Our post-processing was composed of several steps:<br>
1 - basic model confidence filter<br>
2 - 5-85% filter : exclude all frames below 5% or above 85% of the video<br>
3 - endzone - sideline constraint : keep only frames where there are predictions on sideline and endzone with at most 3 frames margin<br>
4 - NMS (using torch implementation) on a 9 frames sliding windows with a 0.2 iou threshold<br>
5 - box limit : take only the 4 most confident predictions per frame</p>
|
NFL 1st and Future - Impact Detection
|
1st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>One change I made to the original TSM: for the last few blocks I shifted values by more than one frame (by 2 or 3, which was sufficient for this task), similarly to convolution dilation. It allows increasing the receptive field in the time dimension.</p>
<p>Thanks! I got a similar result: when I removed all helmet-to-helmet impacts, the F1 score dropped approximately 2x, to 0.33</p>
<p>Thanks to the organisers of this competition, it was a very interesting task to solve!</p>
<p>My approach was very similar to the other top solutions: 2d detection of all helmets, tracking of helmets, 3d classification of crops and post processing to suppress false positives.</p>
<h1>Detection</h1>
<p>I used the YoloV5-l trained on 10k images using the full resolution.<br>
I started with an EfficientDet-based detector with a modified set of anchors to better match the helmet size distribution, trained on video frames, but YoloV5 worked better.</p>
<h1>Helmets Tracking</h1>
<p>For each detected helmet, I found the optical flow between a few surrounding frames.<br>
I tried to use either OpenCV or RAFT (thanks <a href="https://www.kaggle.com/daigohirooka" target="_blank">@daigohirooka</a> for the great notebook and for referring to RAFT). I used the optical flow to track helmets between frames and to estimate the average helmet velocity on the image plane over a few surrounding frames. I think RAFT worked slightly better, but my top submission still used OpenCV for helmet tracking.</p>
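<p>The write-up doesn't name the exact OpenCV routine, so purely as an illustration, here is a Farneback-flow sketch that averages the flow inside a helmet box to estimate its image-plane velocity:</p>
<pre><code>import cv2
import numpy as np

def helmet_velocity(prev_gray, next_gray, box):
    """Average optical-flow vector inside a helmet box between two grayscale frames.

    box: (left, top, width, height) of the helmet in prev_gray.
    """
    # args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    l, t, w, h = box
    patch = flow[t:t + h, l:l + w]            # per-pixel (dx, dy) inside the box
    return patch.reshape(-1, 2).mean(axis=0)  # average (dx, dy) velocity
</code></pre>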
<h1>2.5D Classification</h1>
<p>For classification I used 16x3x128x128 crops, scaled so that the median helmet size over the current frame is mapped to the central 40 pixels. It provides a consistent scale, robust to detected bounding box variations, especially for partially visible helmets.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743064%2Fc22c2310d8795c4235fc1cc160c6eb99%2FScreenshot%20from%202021-01-07%2022-50-44.png?generation=1610023878290751&alt=media" alt=""></p>
<p>I also corrected for the linear helmet movement between frames. The current frame at the 8th slice is always centered, but all the other frames are shifted using the current box velocity estimated during the tracking stage. So when the player is running at a constant speed or the camera is panning, the helmet stays at the frame center, but during acceleration it moves towards and away from the center.</p>
<p>The intuition behind this: acceleration is important for classification, but it's harder to estimate on top of potentially fast movement due to camera motion.</p>
<p>For classification models, instead of 3d convolution I used the <a href="https://arxiv.org/pdf/1811.08383.pdf" target="_blank">Temporal Shift Module</a>:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743064%2F54a8a6bc002ffe63f1aa4ef207cacfd0%2Ftsm.png?generation=1610023740252006&alt=media" alt="Temporal Shift Module"></p>
<p>TSM has very little overhead compared to N 2d models, can reuse any ImageNet-pretrained 2d classification model, and performed reasonably well for video understanding tasks.</p>
<p>I also had promising results with the <a href="https://arxiv.org/abs/2007.09933" target="_blank">MotionSqueeze: Neural Motion Feature Learning for Video Understanding</a> approach, which adds flow estimation to a TSM-based model, but the implementation is not trivial and does not have a license specified.</p>
<p>I found that small models with TSM added at the start of residual blocks work well and are usually easy and fast to train (from an hour to a few hours per model). I used an ensemble of EfficientNet B0-B3, Resnet18 and Resnet34. When comparing the TSM approach to 3d convolution, it's easy to notice the lack of pooling in the time dimension, so instead of shifting features by one frame, I shifted by 2 and 3 frames in the last blocks of the resnets/efficientnets to mimic dilated convolution.</p>
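<p>A minimal sketch of the temporal shift operation with a configurable stride (the 1/8 fraction of shifted channels is the TSM paper's default and an assumption here):</p>
<pre><code>import torch

def temporal_shift(x, n_segments, shift_div=8, stride=1):
    """Temporal Shift Module: move a fraction of channels along the time axis.

    x: (batch * n_segments, C, H, W) frame features of a clip.
    stride: number of frames to shift by (2-3 in the last blocks here).
    """
    bt, c, h, w = x.shape
    x = x.view(-1, n_segments, c, h, w)
    fold = c // shift_div
    out = torch.zeros_like(x)
    out[:, stride:, :fold] = x[:, :-stride, :fold]                  # shift forward in time
    out[:, :-stride, fold:2 * fold] = x[:, stride:, fold:2 * fold]  # shift backward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]                       # remaining channels unchanged
    return out.view(bt, c, h, w)
</code></pre>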
<p>During training I marked 3 frames around the impact as positive and used 5 or 10% positive samples.<br>
In addition to the annotated training labels, I added the false positive predictions from a few undertrained detection models. I used 4-fold CV; for submission I averaged the predictions of multiple models from all 4 folds. To speed up prediction, I ran classification on a few fast models first, to filter out 95% of the easy-to-predict negative samples.</p>
<h1>Post-Processing</h1>
<p>The post-processing is quite simple: I select the detected/classified box with the highest impact confidence and use flow/tracking information to suppress the detections for the same player in surrounding frames, with the suppression decaying from the current confidence to 0 over 16 frames. I used the same threshold for all video files and frames.</p>
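<p>One possible reading of this suppression, sketched greedily per player (the linear decay shape and comparing neighbours against a decayed threshold are assumptions):</p>
<pre><code>import numpy as np

def suppress_same_player(frames, scores, window=16):
    """Keep a subset of one player's candidate impacts.

    frames: frame indices of the candidates, scores: impact confidences.
    A kept detection suppresses weaker ones nearby, with the suppression
    threshold decaying linearly from its confidence to 0 over `window` frames.
    """
    order = np.argsort(scores)[::-1]
    suppressed = np.zeros(len(scores), dtype=bool)
    keep = []
    for i in order:
        if suppressed[i]:
            continue
        keep.append(i)
        for j in range(len(scores)):
            dist = abs(frames[j] - frames[i])
            if j != i and dist < window and scores[j] < scores[i] * (1 - dist / window):
                suppressed[j] = True
    return keep
</code></pre>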
<p>I checked whether such post-processing would suppress positive samples using the training dataset, and found a surprisingly significant number of positive samples within 3-10 frames for the same player. Most of these sequential impacts were of different types, for example a helmet-to-helmet impact followed by helmet-to-body. To address this, I changed the classification model to predict the impact type as well, and suppressed the predicted impact type only. The score improvement was quite small, less than 0.01 estimated on local CV.</p>
<p>Thanks, unfortunately I don't have comparison with other 3d models, did not have enough time to try different approaches.</p>
<p>I found that with oversampling of positive cases to 5-10%, TSM-based models overfit very quickly even with Resnet18 or EfficientNet B0, but still worked better compared to training with 2% sampling of positive cases. I'd expect more aggressive augmentations would help, including shifts between frames. But at least it was an indication that small 2d models with the TSM module are powerful enough for the task. With more time available in the competition, I'd certainly try other 3d approaches. Time permitting, it would be interesting to train one of the other teams' models with the rest of my pipeline to compare.</p>
|
NFL 1st and Future - Impact Detection
|
2nd Place Solution: Helmet Detection + 3d Impact Detection + Post Processing
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/morizin" target="_blank">@morizin</a>, </p>
<p>Stage 1 predicts the boxes for all helmets and stage 2 decides whether each of those boxes is an impact or not. As we don't have labels for the stage 1 boxes (they are just predictions themselves!) we need to somehow determine whether they should be classified as impacts or not. If a predicted box had an overlap of >=0.35 with a GT box then we labelled it as 'impact'. If it didn't then it was labelled 'non-impact'. Hope that clarifies.</p>
<p>Cheers, and well done on your medal!</p>
<p>Ciara</p>
<p>Thanks to both the organisers of this competition and to all of the other competitors - well done to everyone who worked hard over the holiday period!</p>
<p><a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a> and <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> are now competition GMs! Well deserved - congratulations!</p>
<p>Here's the solution from the team <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>, <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>, <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>, <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>, <a href="https://www.kaggle.com/fergusoci" target="_blank">@fergusoci</a></p>
<h1>Brief Summary</h1>
<p>Our solution is a two-stage approach. Our first stage is a YOLOv5 model which detects helmets in each frame of the video. The stage 2 ensemble consists of various 3D CNN models which use cropped helmet locations and +/- 4 images from frames surrounding the cropped location. Our stage 2 models predict targets by impact type. Finally, we apply post-processing to our ensembled stage 2 predictions.</p>
<h1>Detailed Summary</h1>
<p>Below we give a detailed description of our solution:</p>
<h2>Validation strategy</h2>
<p>We used GroupKfold by GameID to create a 5Fold split of the data.</p>
<p>We trained various stage 1 models for helmet detection, both on our KFold splits and on the entire training set. We determined that the maximum possible score of a stage 2 model based on our stage 1 bounding boxes would be 0.9868 (out of 1888 impacts, the stage one model predicted matching boxes for 1863 of those).</p>
<h2>Pipeline</h2>
<p>We approached the competition collaboratively, with each team member working off one central pipeline. We used the following:</p>
<ul>
<li>Github: Versioning and code sharing</li>
<li>Neptune: logging and visualisation</li>
<li>Kaggle API: dataset upload/ download</li>
<li>AWS: data storage</li>
<li>Push kaggle dataset github action: <a href="https://github.com/marketplace/actions/push-kaggle-dataset" target="_blank">https://github.com/marketplace/actions/push-kaggle-dataset</a> For automatically pushing github commits to kaggle.</li>
</ul>
<p>We are planning to release our code soon after some clean up.</p>
<h2>Architectures</h2>
<p>Our best stage 2 ensemble consisted of 6 different efficientnets + horizontal flip tta. We replaced the first 2d conv layers in the inverted residual blocks of efficientnet with 3d conv layers as <a href="https://www.kaggle.com/davletag" target="_blank">@davletag</a> did in the <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/158158" target="_blank">Deepfake competition</a>. Hat-tip <a href="https://www.kaggle.com/davletag" target="_blank">@davletag</a>, that also seemed to work well here.</p>
<p>At the last minute we trained 3D Resnext50 & 3D Resnext101 (pretrained on the Kinetics dataset, from <a href="https://github.com/okankop/Efficient-3DCNNs" target="_blank">https://github.com/okankop/Efficient-3DCNNs</a>). Although initial results looked promising, we lacked time to properly tune hyper-parameters and did not use it in our selected submissions.</p>
<h2>Training strategy/ schedule</h2>
<h3>Stage 1: Helmet detection</h3>
<p>During the competition we trained YOLOv5 per fold using our GroupKfold splits, but also adding the extra images. Fortunately, YOLO works out-of-the-box with DDP by simply running torch.distributed.launch, so we could speed up training a lot. We used the standard config with pretrained weights and applied the following flags:</p>
<p>--batch-size 128 <br>
--img-size 704 <br>
--epochs 50 <br>
--data dataset_fold0.yaml <br>
--weights yolov5x.pt <br>
--single-cls </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F621560%2F364c7b679f147495cf18c959c61aefde%2FScreenshot%202021-01-05%20at%2019.49.09.png?generation=1609876230711342&alt=media" alt=""></p>
<p>For our final submission, we trained on the full data without a validation set. For inference we saw that using the built-in TTA flag and an img-size of 1280 gives nice improvements to the box predictions. We kept all boxes which had a confidence level greater than 0.05 or 0.1 and went on to assign labels. Since our stage 1 model only detects helmets, i.e. is a single-class detector, we needed to assign impact/non-impact labels after generating the boxes. For that we took the GT boxes and assigned an impact to each predicted box that had an overlap of at least 0.35 IOU with an impact GT box. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F621560%2F5bb26ea6292c3709e100b422f60c9a5a%2FScreenshot%202021-01-05%20at%2019.51.19.png?generation=1609876316497380&alt=media" alt=""></p>
<h3>Stage 2: Impact detection from helmets</h3>
<p>Input data for our 3D models was a stack of 9 helmet detections from Stage 1: the central crop +- 4 frames on either side. The detection was padded so that we cropped 2x width and height of the original. Some of our models included the original box size as context.</p>
<p>We used the different impact types for the centre frame as our output variable (no impact, helmet, shoulder, body, ground impact) and optimized a softmax loss with class weights split 0.8:0.2 (non-impact : impact). We had approximately 1.1M detections from Stage 1 across all training videos, so we used the following sampling scheme during training:</p>
<p>At each epoch we selected all of the positive impact samples and a random sample of negative impact samples according to a specified ratio (we used 0.99:0.01 non-impact:impact). The negative impacts were sampled without replacement so that we saw as many diverse images as possible over the course of training. </p>
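<p>A small sketch of that sampling scheme, assuming simple index lists for positives and negatives (reshuffling the pool once it is exhausted is a simplification of how the scheme starts over):</p>
<pre><code>import numpy as np

class NegativeSampler:
    """Each epoch: all positives + negatives drawn without replacement."""
    def __init__(self, neg_indices, seed=0):
        self.all_neg = list(neg_indices)
        self.rng = np.random.default_rng(seed)
        self.pool = list(self.rng.permutation(self.all_neg))

    def epoch_indices(self, pos_indices, ratio=99):
        n_neg = len(pos_indices) * ratio
        if len(self.pool) < n_neg:          # reshuffle once every negative has been seen
            self.pool = list(self.rng.permutation(self.all_neg))
        picked, self.pool = self.pool[:n_neg], self.pool[n_neg:]
        return list(pos_indices) + picked
</code></pre>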
<p>Augmentations included: HorizontalFlip, CoarseDropout, GridMask, ShiftScaleRotate, RandomBrightnessContrast, Blur.</p>
<p>We trained 3D models with efficientnet-b3 and b5 backbones (2d conv layers converted to 3d as outlined previously), and various heads. Each model was trained with a cosine annealing scheduler and Adam optimizer for 30 epochs. Pytorch’s distributed data parallel (DDP) and saving a cache of 9x3 channel boxes to disk allowed us to speed up the training process significantly.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F621560%2Fca5c1647b86bd73b3832db2927963c18%2FScreenshot%202021-01-05%20at%2019.52.25.png?generation=1609876379774000&alt=media" alt=""></p>
<h2>Post-processing</h2>
<p>As the metric was a bit special, and we did not find a good way to reflect it directly within a loss function, we had to rely on certain post-processing consisting of several steps outlined below:</p>
<ul>
<li><p>Filtering stage 1 boxes: For inference, we remove all boxes predicted by our stage 1 model that exhibit a confidence score below ~0.4. This not only speeds up our inference, but also quite significantly improves our stage 2 predictions. Note that removing those boxes also from training stage 2 does not have much impact.</p></li>
<li><p>Filtering early frames: We additionally filter out any frame earlier than 25 as there are usually no impacts detected, this has very minimal influence as our models only very, very rarely would predict impacts in those frames but further speeds up inference.</p></li>
<li><p>Video NMS: This was one of our most important aspects of PP, specifically with weaker models a bit earlier in the competition. The idea was to apply NMS with an IOU threshold of 0.5 to all frames of a single video combined (see the sketch after this list). This means that NMS filters out duplicate boxes in subsequent frames and only keeps the ones with the highest stage 2 scores. This benefits the metric quite well, as it removes quite a lot of FPs. It got a little bit less important in our latest models that on their own were stronger, but is still an integral part of our pipeline.</p></li>
<li><p>TOPK filter: For each video, we only consider the top 19 predicted boxes based on their stage 2 score. This matches the average number of impacts in training videos and appeared to be a very solid solution to balance TPs, FPs, and FNs. </p></li>
<li><p>Final threshold: Additionally after topk filtering we remove any leftover prediction below a threshold of 0.15.</p></li>
</ul>
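<p>A minimal sketch of the Video NMS step above using torchvision, with the boxes from all frames of one video pooled into a single NMS call and scored by the stage 2 predictions:</p>
<pre><code>import torch
from torchvision.ops import nms

def video_nms(boxes, scores, iou_thr=0.5):
    """boxes: (N, 4) tensor in (x1, y1, x2, y2) over all frames of one video,
    scores: (N,) stage-2 impact scores. Returns indices of the boxes to keep."""
    return nms(boxes, scores, iou_thr)

# usage per video: keep = video_nms(all_frame_boxes, stage2_scores)
</code></pre>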
<p>We evaluated the parameter of these PP steps on CV, and they turned out to be very robust on private LB and also public LB (even though it was quite tiny).</p>
<h2>Blending</h2>
<p>Our original idea was to blend stage 2 models on top of different stage 1 models, leading to different bounding boxes, which makes blending non-trivial. To that end, we experimented with WBF to blend the different boxes and assign scores to them. However, this not only extended our inference runtime due to multiple stage 1 models, but also did not lead to satisfying results. Ultimately, we settled on a simpler approach that only uses one single stage 1 model (fitted on full data) and then simply averages the scores of all stage 2 models on the same boxes. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F621560%2F67e11084163cd0980e6e504f52d33a55%2FScreenshot%202021-01-05%20at%2019.53.38.png?generation=1609876446208333&alt=media" alt=""></p>
<h1>What did not work</h1>
<ul>
<li>Determining teams’ helmets using KMeans clustering of RGB colors and adding as features.</li>
<li>Applying Video NMS to each team individually (based on the team clusters above).</li>
<li>Adding bounding boxes for all helmets as additional channels before feeding into stage 2 model.</li>
<li>Adding additional context such as: frame number, view, x/y coordinates of bbox center.</li>
<li>Adding more or less than +/-4 frames padding, or applying gaussian smoothed weights to the target.</li>
<li>Training on padded weights.</li>
<li>Dual-View siamese stage 2.</li>
<li>Syncing tracking data with the video - and then adding frame level features like player speed, direction, and relative distances, etc.</li>
<li>Different stage 1 models. None outperformed yolov5.</li>
<li>Imputing or removing stage 1 bounding boxes based on boxes in previous/post frames.</li>
<li>Attempting to track and label individual players throughout the entirety of the video.</li>
<li>Using SORT for improving stage1 boxes</li>
<li>CenterTrack as a single stage model</li>
<li>Learning coordinate transformation between views</li>
</ul>
<p>Thanks for reading. Questions welcome.</p>
<p>Hi <a href="https://www.kaggle.com/sj161199" target="_blank">@sj161199</a>,</p>
<p>Of all the helmet detections that we have from stage 1 the vast majority are non-impact, only a small number are impact samples. </p>
<p>Let's say we have 1M non-impact samples and 1500 impact samples. For each epoch we sampled a ratio of 1:99: we selected all of the impact samples (1500) and (1500 * 99) non-impact samples. We continued sampling all of the impact and (1500 *99) of the non-impact samples that we hadn't already seen until we ran out of samples, then we started over.</p>
|
NFL 1st and Future - Impact Detection
|
3rd Place Solution -- Simple 2-Stage Pipeline
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>By "10k static images" here I mean the images from the <code>images</code> folder</p>
<p>Thanks! We have a local recall of around 0.97 after ensembling 7 effdet models using threshold 0.17<br>
The number of candidates is over 200k.</p>
<p>Congrats! Waiting for reading yours!</p>
<p>Thanks! 🙌</p>
<p>Thanks and congrats to you, too!</p>
<p>Thanks to the organizers and congrats to all the winners and my wonderful teammates! <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a><br>
The dataset of this competition is much more complicated than normal ones and full of places to explore.<br>
So I'd like to call it a Multi-view Video Event Detection task, rather than a simple Object Detection task, because:</p>
<ul>
<li>This data set contains time-series information</li>
<li>This data set has multiple views of the same scene</li>
<li>Although the impact bbox is marked on the helmet, the marking must depend on the environment around the helmet<br>
In addition, understanding the metric is also extremely important in this competition.</li>
</ul>
<h1>Summary</h1>
<ul>
<li>Only use videos and images.</li>
<li>Use EfficientDet to generate candidate impact bboxes, crop them into small images.</li>
<li>Use a binary image classification model to classify helmet crops across 9 frames.</li>
<li>Adjust scores for each bbox by multi-view information (Post-processing).</li>
<li>Drop similar bbox through 9 frames (Post-processing).<br>
This is an illustration of our pipeline (except for post-processing):<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F448347%2Fb44aec006fc2f22e0b868e6f279fcf08%2Fimage.png?generation=1609816179094206&alt=media" alt=""></li>
</ul>
<h1>Generate Candidate BBoxes</h1>
<p>We train EfficientDet on 2 classes (helmet and impact) then take all bboxes with class == 'impact' and score > 0.17 as candidate bboxes.<br>
If a player has an impact, we regard all his bboxes within +-4 frames as positive bboxes, because the metric treats predictions within 4 frames as TP. Based on this rule, if a positive bbox appears in a frame, we regard it as a positive frame.<br>
While training EfficientDet, we used all positive frames and 50% of the negative frames (meaning frames with no positive bbox) from all videos, plus all the 10,000 static images.</p>
<h1>Binary Classification Model</h1>
<p>As shown in the illustration, we use binary classification models to further predict whether bboxes crops are impacts or not.<br>
For training binary models, we expand the height and width of the bbox in the videos to 3 times the original length, crop them from frames, and use this as the training data for the binary classification model.<br>
When cropping a specific bbox, we crop it through 9 frames (t-4, t-3, t-2, t-1, t, t+1, t+2, t+3, t+4), turn the crops into grayscale and stack them together to get an input of shape (h, w, 9)<br>
It should be noted that the <code>9</code> here is a variable parameter.<br>
Through this method we can get around 900k helmet crops, which is consistent with the number of bboxes in this data set.<br>
But unlike when training EfficientDet, when training the binary models we only regard the impact bbox and its corresponding bboxes within +-1 frame as positive, which is fewer than 6k.<br>
When predicting, we use the same cropping method for all the candidate bboxes and then run the binary models on these crops.<br>
With the binary model, our single-model local score jumped from 0.3x to nearly 0.6 with our post-processing methods, which is the biggest highlight of our pipeline.</p>
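<p>A sketch of how such a 9-frame grayscale stack could be cropped (padding the frame first so that crops at the image border keep a constant size is our own simplification):</p>
<pre><code>import cv2
import numpy as np

def crop_stack(frames, box, t, n_side=4, scale=3):
    """Build an (H, W, 9) grayscale stack around frame t for one candidate box.

    frames: list of BGR frames of one video, box: (left, top, width, height).
    """
    l, tp, w, h = [int(v) for v in box]
    cx, cy = l + w // 2, tp + h // 2
    W, H = w * scale, h * scale                      # expand the bbox to 3x its size
    crops = []
    for f in range(t - n_side, t + n_side + 1):
        f = min(max(f, 0), len(frames) - 1)          # clamp at the video boundaries
        gray = cv2.cvtColor(frames[f], cv2.COLOR_BGR2GRAY)
        pad = cv2.copyMakeBorder(gray, H, H, W, W, cv2.BORDER_CONSTANT, value=0)
        x1, y1 = cx - W // 2 + W, cy - H // 2 + H    # coordinates in the padded image
        crops.append(pad[y1:y1 + H, x1:x1 + W])
    return np.stack(crops, axis=-1)                  # shape (H, W, 9)
</code></pre>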
<h1>Multi-View Post-Processing</h1>
<p>We found that when one view has an impact, there is a high probability that the other view has an impact within the +-1 frame range. Therefore, we tuned separated thresholds depending on the predictions in the other view. For instance, the threshold for a certain Endzone frame depends on whether there is a predicted bbox in the Sideline view within +-1 frame. If yes, the threshold is lower (say 0.25); if not, the threshold is higher (say 0.45).</p>
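<p>A sketch of that view-dependent thresholding (the data layout and the exact threshold values are illustrative; the Sideline view is handled symmetrically):</p>
<pre><code>def filter_with_other_view(preds_this, preds_other, low=0.25, high=0.45):
    """preds_*: dict mapping frame -> list of (box, score) candidates for one view.

    A candidate is kept with a lower threshold when the other view also has a
    prediction within +-1 frame, otherwise a higher threshold is applied.
    """
    kept = []
    for frame, cands in preds_this.items():
        other_fired = any(preds_other.get(frame + d) for d in (-1, 0, 1))
        thr = low if other_fired else high
        kept += [(frame, box, score) for box, score in cands if score >= thr]
    return kept
</code></pre>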
<h1>Drop Similar BBox</h1>
<p>According to the official metric, although a predicted bbox is counted as TP within the +-4 frame range, a label can only accept one predicted bbox. Therefore, we want each impact to produce only one predicted bbox.<br>
So we designed a function to drop all but one of the "similar" bboxes within any 9 consecutive frames, keeping only the one with the largest confidence. For this purpose, we define two cross-frame bboxes to be similar when they have iou > 0.25.</p>
<h1>Ensemble</h1>
<p>Finally, we ensembled 7 EfficientDet models and 18 binary models, which achieved a local score of 0.64, public LB 0.66 and private LB 0.69</p>
<h1>Acknowledgement</h1>
<p>Special Thanks to Z by HP & NVIDIA for sponsoring me a Z8G4 Workstation with dual RTX6000 GPU and a ZBook with RTX5000 GPU.<br>
It's really fast and powerful enough for this harsh competition!</p>
|
NFL 1st and Future - Impact Detection
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/mlubbilgee" target="_blank">@mlubbilgee</a> <br>
My notebook is not publicly available, but I plan to upload the code to GitHub soon.</p>
<p>Thanks <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> <br>
I mean, I used the data from the images folder (image_labels.csv) and extracted frames from video files.</p>
<p>Thank you! Congratulations to your team for a well-deserved place!<br>
Unfortunately, I didn't have time to do the experiment without 5 channels. As you can see, I've made a large ROI around the helmet to capture more context. There may be other helmets in the ROI, so it seems to me that the additional channels should help a lot. Without these channels, it would be difficult for the neural network to figure out which helmet to make predictions for. I think that in the near future I will do an experiment without the additional channels and write the result here.</p>
<p>When I started solving this competition, a solution based on a two-class detector and post-processing was already published on the forum. At first I wanted to use this approach, but after looking at the data carefully, I realized that in order to achieve a high score, it is necessary to use a temporal context. Therefore, I chose a two-stage approach: helmet detection and ROI classification using a 3D convolutional network.</p>
<h2>Data preparation</h2>
<p>For convenience, I extracted all video frames into separate files. There weren't very many frames, so I saved them to the SSD without any problems. Then I split all the data into training and validation (20%) randomly by game_id.</p>
<h2>Helmet detector</h2>
<p>It was the easiest part. I trained Faster-RCNN with ResNet-50 FPN backbone using <a href="https://github.com/facebookresearch/detectron2" target="_blank">detectron2</a>. I made the following changes to the <a href="https://github.com/facebookresearch/detectron2/blob/master/configs/COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml" target="_blank">standard</a> config:</p>
<ul>
<li>reduced the minimum anchor size from 32 to 16, since there are many small helmets in the dataset</li>
<li>used single class (helmet only)</li>
<li>used both frames and images for training</li>
<li>used SyncBN </li>
</ul>
<p>The first trained model achieved a fairly good detection accuracy. On validation, the AP50 is 93.607 (the recall of ground truth boxes with definite impact label is 84%). I did not do more experiments with the detector, but perhaps a more accurate detector would give a better final result.</p>
<h2>ROI classifier</h2>
<p>I used FAIR's <a href="https://github.com/facebookresearch/SlowFast" target="_blank">SlowFast</a> library to train the ROI classifier. With this library, I have implemented two alternative approaches:</p>
<h3>Action detection</h3>
<p>The approach is as follows: a sequence of full frames is fed to the 3D CNN input, a feature map is calculated, and using the ROIAlign operation, features for the ROIs are extracted and classified. I used the SlowFast model with the ResNet50 backbone with the following modifications:</p>
<ul>
<li>used a sequence of 20 frames</li>
<li>removed stage5 from the backbone and added 3 upscale deconvolutions to make the feature map large enough</li>
<li>for each ground truth impact added additional ground truths to +-2 adjacent frames</li>
<li>increased the number of frames that have an impact to 25% of the total number of frames using oversampling</li>
<li>from augmentations used multiscale, random crop, random flip and color augmentation</li>
<li>to remove duplicates on inference, I used an IOU tracker. The tracker combines impact boxes from adjacent frames and then the most confident prediction is selected from each track. </li>
</ul>
<p>The result of this model is 0.3915 on the private leaderboard.</p>
<h3>Action classification</h3>
<p>The approach is as follows: the sequence of frames cropped around the target box, then the sequence of crops is fed to the 3D CNN input, and then the impact probability is calculated directly. Again I used the SlowFast model with the ResNet50 backbone with the following modifications:</p>
<ul>
<li>used a sequence of 20 frames</li>
<li>used 128 x 128 input size</li>
<li>used 5 input channels instead of 3 RGB channels. The first additional channel is the heatmap of the center of the helmet of interest. The second additional channel is the heatmap of the centers of all helmets (see the picture and the sketch below).</li>
<li>for each ground truth impact added additional ground truths to +-2 adjacent frames</li>
<li>increased the number of definite impact crops to 10% of the total number of crops using oversampling</li>
<li>from augmentations used multiscale, random crop, random flip and color augmentation</li>
<li>to remove duplicates on inference, I used an IOU tracker. The tracker combines impact boxes from adjacent frames and then the most confident prediction is selected from each track. </li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F409174%2F2cacb25e7a4a393e94a29b16565cc3a2%2Fcrop_heatmaps_example.jpg?generation=1609865045084879&alt=media" alt="input example"></p>
<p>The result of this model is 0.6037 on the private leaderboard. Then I added horizontal flips as a TTA and increased the score to 0.6341. Finally, I added the action detector described above to the ensemble and got a minor improvement: 0.6355 on the private leaderboard.</p>
<p><a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> I just did an experiment with 3 channels instead of 5. On validation, the result is slightly worse than with 5 channels (0.569 vs 0.594)</p>
|
NFL 1st and Future - Impact Detection
|
50th place solution (single model)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your kind words! Congratulations to you and your teammates too.</p>
<p>First of all, I'd like to thank Kaggle and the NFL for hosting this interesting competition. And, special thanks to <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and <a href="https://www.kaggle.com/nvnnghia" target="_blank">@nvnnghia</a>.</p>
<p>The main ideas of my solution are the model architecture and the post-processing. Let me explain them.</p>
<h1>Model Architecture</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1497263%2F6c129f8ae06a934bde53688382645c93%2F2021-01-05%2011.43.59.png?generation=1609814660849278&alt=media" alt=""></p>
<ul>
<li>Using multiple frames (t, t-5, t-10)</li>
<li>Using a 1x1 Conv2D to reduce computational cost (see the sketch after this list)</li>
<li>Training and predicting on only the first 105 frames</li>
</ul>
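<p>A minimal sketch of how the multi-frame input with a 1x1 Conv2D could look (where exactly this block sits in the full architecture follows the diagram above only loosely, so treat it as an assumption):</p>
<pre><code>import torch
import torch.nn as nn

class MultiFrameStem(nn.Module):
    """Stack 3 frames (t, t-5, t-10) channel-wise and squeeze them back to 3
    channels with a 1x1 convolution, so a standard detector backbone can follow."""
    def __init__(self):
        super().__init__()
        self.reduce = nn.Conv2d(9, 3, kernel_size=1)   # 3 frames * 3 channels -> 3

    def forward(self, frames):          # frames: (B, 3, 3, H, W) = (batch, time, rgb, H, W)
        b, t, c, h, w = frames.shape
        x = frames.reshape(b, t * c, h, w)
        return self.reduce(x)           # (B, 3, H, W): a cheap input for the detector

stem = MultiFrameStem()
out = stem(torch.randn(2, 3, 3, 720, 1280))   # -> torch.Size([2, 3, 720, 1280])
</code></pre>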
<h1>Post-processing</h1>
<ol>
<li>Remove all boxes that aren't present in both Sideline and Endzone views<br>
<a href="https://www.kaggle.com/artkulak/2class-object-detection-inference-with-filtering/data" target="_blank">https://www.kaggle.com/artkulak/2class-object-detection-inference-with-filtering/data</a></li>
<li>Remove duplicate detections within +-4 frames (applied 2 times with a changing threshold)</li>
</ol>
<pre><code>import pandas as pd
from copy import deepcopy

result_df = pd.DataFrame()
for video_name in test_df.video.unique():
    tmp = test_df.query('video == @video_name').sort_values(by='frame').reset_index(drop=True)
    tmp_copy = deepcopy(tmp)
    result_idxs = []
    while len(tmp_copy) > 0:
        # take the earliest remaining prediction and look up to 9 frames ahead
        idx = tmp_copy.index.min()
        frame = tmp_copy.loc[idx, 'frame']
        max_frame = frame + 9
        candidates = tmp_copy.query('@frame < frame & frame < @max_frame')
        if len(candidates) == 0:
            result_idxs.append(idx)
            tmp_copy.drop([idx], axis=0, inplace=True)
            continue
        # group later predictions whose box coordinates are almost identical
        frames = [idx]
        for j in candidates.index:
            s = 0
            for col in ['left', 'top', 'width', 'height']:
                s += abs(tmp_copy.loc[idx, col] - candidates.loc[j, col])
            if s < 16:
                frames.append(j)
        if len(frames) == 1:
            result_idxs.append(idx)
        elif len(frames) == 2:
            # keep the more confident of the two duplicates
            result_idxs.append(tmp_copy.loc[frames]['scores'].idxmax())
        else:
            # 3 or more duplicates: keep the middle one
            result_idxs.append(frames[len(frames) // 2])
        tmp_copy.drop(frames, axis=0, inplace=True)
    result_df = pd.concat([result_df, tmp.iloc[result_idxs]])
result_df.reset_index(drop=True, inplace=True)
</code></pre>
|
NFL 1st and Future - Impact Detection
|
5th place solution: one-stage I3D with additional 2D model for bbox centers
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers and congrats to all the winners!</p>
<p>My solution:</p>
<h2>Data preparation</h2>
<p>The videos were split into fragments of shape <strong>8x224x224x3</strong>. As positive examples I chose a frame with an impact plus the four frames before it and the three after it. Using a grid with a constant step I made patches of size 224x224. As negative examples I chose random sequences of frames which had no impacts. I also saved the bounding boxes' positions and sizes for each sequence of frames.</p>
<h2>Models</h2>
<p>From the very start I used a 3D CNN, namely <strong>I3D</strong> from <a href="https://github.com/facebookresearch/SlowFast" target="_blank">SlowFast</a>. The one change I made to it was adding a fourth input channel, which was always equal to one for the fifth frame (I don't know whether it made any sense or not). It is the fifth frame that the model predicts bboxes for. Then I appended an FPN from <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">pytorch_segmentation</a> with six output channels and without upsampling.</p>
<p>The 3d feature maps from the second to fifth blocks of the backbone net were converted to 2d feature maps using <code>F.adaptive_avg_pool3d</code> and then passed to the FPN.</p>
<p>The FPN produced a grid of size 6x56x56. The first channel of the grid was responsible for the presence of a bbox in a particular cell of the grid. The next four channels were responsible for the position of the bbox inside a grid cell and for the bbox size. The last channel predicted the presence of an impact in a cell. All these predictions were made only for the fifth frame in the sequence of eight frames. Overall this grid resembles a yolo output, except for the anchors.</p>
<h2>Training</h2>
<h3>1st stage. Training on 224x224 patches:</h3>
<p>Augmentations:</p>
<pre><code>A.Compose([
A.HorizontalFlip(p=0.5),
A.OneOf([
A.ColorJitter(p=1),
A.HueSaturationValue(40, 60, 40, p=1),
A.RGBShift(60, 60, 60, p=1),
A.RandomGamma((20, 200), p=1)
], p=0.8),
A.OneOf([
A.GaussianBlur(p=1),
A.GaussNoise((10.0, 50.0), p=1),
A.MedianBlur(blur_limit=5, p=1),
A.MotionBlur(p=1),
], p=0.6),
A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=0,
border_mode=0, p=0.4),
# A.GridDistortion(p=0.2, border_mode=0),
])
</code></pre>
<h4>Loss functions</h4>
<p>I had several configurations of losses for training.<br>
<strong>HuberLoss</strong> was used as the loss for bbox positions and sizes. <strong>FocalBinaryLoss</strong> or <strong><code>nn.BCEWithLogitsLoss</code></strong> with different pos_weight parameters were used for predicting the presence of bboxes and impacts.</p>
<h4>Schedule and optimizer</h4>
<p><strong>Adam</strong> with default parameters and a multistage schedule <strong>decreasing the lr by 10x</strong> were used. There were ten epochs.</p>
<h4>Data for training</h4>
<p>For each epoch, I chose all the positive examples and <strong>positive_count*2</strong> negative examples. Starting from the second epoch, <strong>hard negative mining</strong> was used: I got predictions for all negative examples and chose those that had an impact confidence above some threshold (0.1). If there were not enough hard examples, I added negative examples chosen randomly.</p>
<h3>2nd stage. Training on the whole image</h3>
<p><strong>I froze the backbone</strong> and trained only the FPN on whole images of size <strong>8x3x720x1280</strong> for a couple of epochs. The augmentations and optimizer were the same as at stage 1.</p>
<p><strong>The ensemble of four such models</strong> got a score of <strong>0.48</strong> on my local validation.<br>
Then I noticed that the detection of helmet centers did not work well. It often mistook a shoulder protector (I don't know what exactly this thing is called) for a helmet and gave it more confidence than it gave some actual helmets.<br>
I assumed that this was because the ground truth data for bbox centers contained only points, and those points were not always located at the centers of helmets, so it was difficult for the model to understand where it should output points. So instead of points at the centers of bboxes, I decided to predict circles with diameters equal to the average of a bbox's width and height. These circles fade towards their boundaries, as shown in the picture.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1187467%2Fa304d9baca7da6f8ead634ce5fe030ea%2Fhelmet_circles.png?generation=1609958610819611&alt=media" alt=""></p>
<p>I trained a standalone <strong>FPN with resnet34 as backbone</strong> on single 224x224x3 patches with these circles as ground truth. Also, I used <code>nn.BCEWithLogitsLoss</code> as the loss (yes, you can use it for a regression task). Using such a model for predicting bbox centers gave a significant boost to the score, up to <strong>0.58</strong> on local validation.</p>
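<p>A small sketch of how such fading-circle targets could be rendered (the linear fade is an assumption; the write-up only says the circles fade towards their boundaries):</p>
<pre><code>import numpy as np

def fading_circle_target(shape, boxes):
    """Target map with one circle per helmet: 1 at the center, 0 at the boundary.

    boxes: iterable of (left, top, width, height); circle diameter = mean(w, h).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    target = np.zeros((h, w), dtype=np.float32)
    for left, top, bw, bh in boxes:
        cx, cy = left + bw / 2.0, top + bh / 2.0
        radius = (bw + bh) / 4.0
        dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
        target = np.maximum(target, np.clip(1.0 - dist / radius, 0.0, 1.0))
    return target
</code></pre>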
<h2>Predicting</h2>
<p>There was an ensemble of <strong>four models based on I3D</strong>, which had been trained using different configurations, and an <strong>ensemble of two FPN-resnet34</strong> models for predicting bbox centers. The predictions from the models were averaged, including bbox positions and sizes.</p>
<p>For each frame in a video, I took the four previous frames and the three next ones and concatenated them into a sequence. Then I padded the images to 736x1280.</p>
<p>So the input for the I3D models had size <strong>3x8x736x1280</strong> and the output had size <strong>6x184x320</strong>. As input for the FPN-resnet34 models I used the single current frame, and the output of these models was <strong>1x736x1280</strong>, which was downsampled to 184x320. Also, for predicting helmet centers I used <strong>TTA</strong>, <strong>scaling the image to [0.75, 1.0, 1.25, 1.5]</strong> and averaging the predicted values with weights [0.3, 1.0, 0.3, 0.15] respectively.<br>
Using the predictions from the FPN-resnet34 models for bbox centers and the predictions from I3D for bbox sizes and impacts, I made a list of helmets that had scores above <strong>0.4</strong>. Then I applied <strong>non-maximum suppression</strong> with parameter <strong>0.25</strong> to that list to get the final list of bboxes for a frame. Those bboxes whose center had an impact confidence above <strong>0.45</strong> were marked as positive.</p>
<p>Then I applied suppression of positive bboxes of the same helmet on neighboring frames.</p>
<p>There were two ways. The first was to simply apply <strong>nms</strong> over <strong>+/-4</strong> frames with iou above <strong>0.3</strong>, except that I used <strong>impact confidence</strong> rather than helmet confidence as the score.</p>
<p>The second was slightly more complicated. I took the bbox with the maximum impact confidence in the whole video sequence and designated it as the current bbox. Then I looked for bboxes in neighboring frames, in both directions, that had maximum iou with this current bbox. If the iou was above some threshold, I suppressed the bbox in the neighboring frame and designated this new bbox as the current bbox. Then I looked for the bbox with maximum iou in the next/previous frame for this new bbox, and so on, up to 4 frames in both directions. If there were no bboxes with iou above the threshold, I stopped propagating the suppression.</p>
<p>Both approaches worked the same.</p>
<h2>Post-processing</h2>
<p>I used the two camera views. If there was an impact in one view, there were no impacts in the other view, and the confidence of the impact was below <strong>0.55</strong>, I removed this impact prediction from the final submission.</p>
<p>This post-processing did not improve the score much; it gave just +<strong>0.01-0.015</strong>.</p>
|
NFL 1st and Future - Impact Detection
|
7-th place solution from AlphaGoal Team [2.5D CenterNet, 3D CNN]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Intro</h1>
<p>Hi there. It was an exciting competition, thanks to the new domain of video analysis. Before I jump into technical details, a brief team intro:</p>
<ul>
<li><a href="https://www.linkedin.com/in/anastasiya-karpovich/" target="_blank">Anastasiya Karpovich</a> </li>
<li><a href="https://www.linkedin.com/in/tatigabru/" target="_blank">Tatiana Gabruseva, PhD</a> </li>
<li><a href="https://www.linkedin.com/in/cvtalks/" target="_blank">Eugen Khvedchenya</a></li>
</ul>
<p>For me, it was a great experience to solve this in a team. Since most activities happened during Christmas & NY holidays, we did our best to find the proper balance between life & Kaggling :) The help of my teammates was invaluable, and I'm grateful for being part of this team. <a href="https://www.kaggle.com/user189546" target="_blank">@user189546</a> <a href="https://www.kaggle.com/blondinka" target="_blank">@blondinka</a> great job, it seems we earned some new GMs & Master badges :)</p>
<h1>Approach</h1>
<p>There were a number of ways to approach this challenge, and here's our way to the gold zone:</p>
<p>We used three stages: helmet detection, impact classification, and post-processing.</p>
<p>The data were split into four folds, making sure that both views, Endzone and Sideline, of the same play go to the same fold to avoid a data leak. Each fold has 15 plays (30 videos), which I believe is the same as the size of the test.</p>
<h2>TL;DR:</h2>
<ul>
<li>4 models of CenterNet for detection </li>
<li>1 model of 3D CNN for classification</li>
<li>Postprocessing</li>
</ul>
<h2>Helmet detection</h2>
<p>We experimented with CenterNet and EfficientDet detectors, and our final detection pipeline is what I call 2.5D CenterNet. The model's input was a short clip of 8 consecutive frames, which were passed through the encoder individually (as you would do in siamese networks); the intermediate feature maps were then concatenated and fed through a UNet-like decoder to produce the output heatmap & impact map for the 8 frames. We found this approach requires much less memory than a regular 3D-CenterNet.</p>
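<p>A minimal sketch of the 2.5D idea, assuming a timm DenseNet encoder shared across the frames (the decoder is omitted and the concrete backbone is only consistent with, not copied from, the write-up):</p>
<pre><code>import torch
import torch.nn as nn
import timm

class Encoder25D(nn.Module):
    """Encode each of the 8 frames with a shared 2D encoder, then concatenate
    the per-frame feature maps along channels for a UNet-like decoder."""
    def __init__(self, backbone="densenet121", n_frames=8):
        super().__init__()
        self.encoder = timm.create_model(backbone, features_only=True, pretrained=True)
        self.n_frames = n_frames

    def forward(self, clip):                     # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = self.encoder(clip.reshape(b * t, c, h, w))[-1]   # deepest feature map
        _, fc, fh, fw = feats.shape
        return feats.reshape(b, t * fc, fh, fw)  # concatenated over time -> decoder input
</code></pre>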
<p>We experimented with different encoders, and DenseNet-based encoders worked best in terms of helmet-detection F1 and impact-detection F1 scores. EfficientNets worked slightly worse, and ResNets performed the worst. On average, the F1 score for helmets was around 0.93-0.94, and the impact F1 was about 0.43 per fold.</p>
<p>For augmentations, we used albumentations library: <a href="https://github.com/albumentations-team/albumentations" target="_blank">https://github.com/albumentations-team/albumentations</a> </p>
<p>Combining 4 folds with TTA for the CenterNet detector gave an F1 of 0.95 with a densenet121 backbone and medium-hard augmentations. The 2.5D CenterNet ensemble reached the gold zone by itself, but we decided to push forward.</p>
<h2>Impact Classification:</h2>
<p>We used several approaches for the classification of the impact. Most of them did not work well enough.</p>
<ol>
<li>Second-level model using 1D CNN on top of the predictions from helmet detection model -- gave around f1 = 0.32 with post-processing (1 fold, no TTA)</li>
<li>EfficientDet pre-trained on 1 class for helmet detection and tuned for 2 classes, with medium augmentations and sampling → gave f1 around 0.33 with post-processing (1 fold, no TTA)</li>
<li>Mix of the first two approaches -- f1 = 0.34 on LB (1 fold, no TTA)</li>
<li>Siamese networks in videos </li>
<li>3D CNN classifier on top of the 2D helmets predictions -- this approach gave the best results -- f1 0.57 after post-processing</li>
</ol>
<p>For 3D CNN classification, we trained only one fold of 3D ResNet50, which, stacked together with predictions of CenterNet, boosted our score by +0.05 or so on the LB. </p>
<h2>Post Processing</h2>
<p>For post-processing we needed to suppress duplicates. We tracked the helmets using IoU, and selected the frame with the maximum impact probability score.<br>
We also tried using the centers of the boxes and the Euclidean distance between them as the similarity criterion, with similar results. </p>
<p>In the beginning, we removed impacts from the first 30 frames and the last frames of each video, but as the model became more accurate, this no longer made a difference.</p>
|
NFL 1st and Future - Impact Detection
|
9th Place - 2D Detection 3D Classification
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Camaro. Congrats on your great finish too.</p>
<p>Thanks Khanh. Good luck in RIIID!</p>
<p>Same here. I really enjoyed working with all of you. Thanks <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> <a href="https://www.kaggle.com/jinssaa" target="_blank">@jinssaa</a> <a href="https://www.kaggle.com/vaghefi" target="_blank">@vaghefi</a></p>
<p>Thanks Tarun. I couldn't have done it without my great teammates </p>
<h1>Overview</h1>
<p>Our solution (like a lot of other teams) consists of a two-step pipeline (detection and classification) followed by some post-processing. We use 2D detection to find possible impact boxes, and then we use 3D classification to determine which boxes are impacts. </p>
<h1>Using 2D, Is This a Helmet Impact?</h1>
<p>Our detection model finds the yellow box as a possible helmet impact.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F941a0e534d355b76997947feae189af8%2Fquiz.png?generation=1609887772117223&alt=media" alt=""><br>
From only 2D it is impossible to know whether these helmets are about to impact. It appears probable but maybe the silver helmet is about to pass behind the white helmet.</p>
<h1>Using 3D, Is This a Helmet Impact?</h1>
<p>Our classification model uses frames before and after this frame. With this information, our classification model correctly identifies that these two helmets do not impact but rather pass by each other. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F681af4fc454c479b02a760dc82480396%2Fframes.png?generation=1609887810082337&alt=media" alt=""></p>
<h1>Impact Detection Model</h1>
<p>(written by <a href="https://www.kaggle.com/jinssaa" target="_blank">@jinssaa</a>)</p>
<p>We used DetectoRS(ResNeXt-101-32x4d) model for detecting impacts. <a href="https://github.com/open-mmlab/mmdetection" target="_blank">https://github.com/open-mmlab/mmdetection</a> This model was built in two steps:</p>
<ol>
<li>Helmet Detection Model for warm-up: We trained a Helmet Detection Model for 12 epochs using the train helmet dataset in the image folder; this model reached a 'bbox_mAP_50' validation score of 0.877.</li>
<li>2-Class Detection Model: We used the final weights of the Helmet Detection Model as pretrained weights to build a Helmet/Impact Detection model. We referenced <a href="https://www.kaggle.com/tito" target="_blank">@tito</a>'s notebook and labeled ±4 frames around each impact as the impact class.</li>
</ol>
<p>Using the Helmet Detection Model as pretrained weights makes the 2-Class Model converge much faster and detect impacts more easily.</p>
<p>This approach performed well at detecting impacts. We set the confidence threshold to 0.9, which gave a 0.39 public LB score after simple post-processing (without any classifiers). Then, we set a lower confidence threshold to catch more true positives once the classifier was plugged in. DetectoRS detected impacts well, but it took a long time to test our ideas.</p>
<h1>Impact Classification Model</h1>
<p>(written by <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>)</p>
<h3>2D Models</h3>
<p>I kept struggling to improve my EfficientDet models as I had no experience with object detection, so I figured it would be better to go back to what I can do: classification models. The main idea was that a crop around a helmet carries the same information about whether it is an impact or not as the whole image. <br>
Therefore, I extracted a 64x64 crop around all the boxes in the training data and started building models to predict whether a crop contained an impact. To tackle class imbalance, I used the [-4, +4] extended label as a lot of people did. After a bit of tweaking, I had a bunch of models with a 0.9+ AUC (5 folds stratified by gameplay): Resnet-18 & 34, EfficientNet b0 -> b3. </p>
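<p>For illustration, extracting such a crop could look roughly like this (the box format follows the competition's left/top/width/height columns; the zero-padding at image borders is an assumption):</p>
<pre><code>import numpy as np

def crop_around_helmet(frame, left, top, width, height, size=64):
    """Return a size x size crop centred on the helmet box, zero-padded
    when the crop would fall outside the image."""
    cx, cy = int(left + width / 2), int(top + height / 2)
    half = size // 2
    padded = np.pad(frame, ((half, half), (half, half), (0, 0)), mode='constant')
    cy, cx = cy + half, cx + half
    return padded[cy - half:cy + half, cx - half:cx + half]
</code></pre>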
<p>Tricks used for the 2D models include:</p>
<ul>
<li>Limiting the number of boxes sampled per player at each epoch, in order to have more control over convergence</li>
<li>Linear learning rate scheduling with warmup</li>
<li>Removing the stride of the first layer: 64x64 is a small image size and it’s better not to decrease the network resolution too fast (see the sketch after this list)</li>
<li>Stochastic Weight Averaging</li>
<li>Classical augmentations</li>
</ul>
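<p>As a sketch of the stride trick, assuming a torchvision resnet-18 (whether the stem pooling was also removed is not stated, so that line is left optional):</p>
<pre><code>import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(pretrained=True)
model.conv1.stride = (1, 1)                      # default stem stride is (2, 2)
# model.maxpool = nn.Identity()                  # optionally keep even more resolution
model.fc = nn.Linear(model.fc.in_features, 1)    # single impact logit for 64x64 crops
</code></pre>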
<p>Then, I used <a href="https://www.kaggle.com/tjungblut" target="_blank">@tjungblut</a>’s detector (<a href="https://www.kaggle.com/c/nfl-impact-detection/discussion/200995" target="_blank">https://www.kaggle.com/c/nfl-impact-detection/discussion/200995</a>) to benchmark how bad my models were on the public leaderboard. Turns out that after some post-processing, a resnet-18 + efficientnet-b1 + efficientnet-b3 blend achieved 0.33+, which at that time was in the gold zone.</p>
<h3>Merging</h3>
<p>Shortly after, I merged with the rest of the team with the goal of plugging my classification models on top of a strong detection model to leverage their potential. There were about two weeks left before the end of the competition, so we first focused on plugging <a href="https://www.kaggle.com/Uijin" target="_blank">@Uijin</a>’s detection models with my classifiers. For a while, we couldn’t beat the detector LB’s of 0.39+, but after setting up a good CV scheme and improving the detection model training strategy, we reached LB 0.41+. </p>
<h3>3D Models</h3>
<p>Around this time, I upgraded my 2D classifiers to be 3D. Models now take as input frames [-8, -6, -4, -2, 0, 2, 4, 6, 8] instead of just frame 0. The pipeline was easy to adapt: the data loader was slightly modified, architectures were changed, and augmentations were removed because I was too lazy to dig into that.</p>
<p>The first batch of 3D models is based on <a href="https://github.com/kenshohara/3D-ResNets-PyTorch" target="_blank">https://github.com/kenshohara/3D-ResNets-PyTorch</a>:</p>
<ul>
<li>resnet-18 & resnet-34, using the same (extended) target as the middle frame</li>
<li>resnet-18, using an auxiliary classifier to predict the impact type</li>
<li>resnet-18, with a target extended to frames [-6, +6]</li>
</ul>
<p>The only additional trick I used was getting rid of all the strides on the temporal axis. These models are made for videos longer than 9 frames, so once again I adapted the networks to my small input size.</p>
<p>They helped CV, didn’t really help public LB. In fact our jump to 0.49 at the time came from retraining our detection model on the whole dataset & tweaking some post-processing parameters. <br>
They did however help private LB, but this was after the leak so we didn’t know. </p>
<h3>More 3D Models</h3>
<p>After recovering from NYE, I did some state of the art browsing and implemented 3 new 3D models that all outperformed the previous ones:</p>
<ul>
<li>i3d from <a href="https://github.com/piergiaj/pytorch-i3d" target="_blank">https://github.com/piergiaj/pytorch-i3d</a></li>
<li>Slowfast from <a href="https://github.com/open-mmlab/mmaction2" target="_blank">https://github.com/open-mmlab/mmaction2</a></li>
<li>Slowonly with Omnisource pretrained weights from <a href="https://github.com/open-mmlab/mmaction2" target="_blank">https://github.com/open-mmlab/mmaction2</a></li>
</ul>
<p>This was done on the 2nd and 3rd of January, so we had 10 submissions to try these out because of my procrastinating. My first submissions using them gave a small LB boost and we reached 0.50+. </p>
<p>Fortunately the rest of the team worked hard on the detection models to compute results of our detection models on all our folds. This allowed them to find a powerful set of hyperparameters which worked in the end !</p>
<h1>Validation - "Trust Your CV"</h1>
<p>(written by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>)</p>
<p>Building a reliable validation was very important because it was easy to overfit the public LB or a hold out validation set of only a few videos. Our final model was configured using a full 120 video 5 fold validation. By reviewing our OOF predictions, we were able to tell what we needed to do to improve our model. One tool we used was to watch the ground truths and predictions together in all 3 views. More info <a href="https://www.kaggle.com/cdeotte/watch-submission-csv-as-video/" target="_blank">here</a> and <a href="https://www.kaggle.com/c/nfl-impact-detection/discussion/208782" target="_blank">here</a><br>
<a href="http://playagricola.com/Kaggle/labeled_57586_001934-pair_1.mp4" target="_blank"><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fe46b13bc1feb0e773fb476e689cf5403%2Fpreview.png?generation=1609809137273907&alt=media" alt="IMAGE ALT TEXT HERE"></a><br>
This gave us insight into choosing hyperparameters for our detection and classification models and it gave us insight into creating post process.</p>
<h1>Post-processing</h1>
<p>(written by <a href="https://www.kaggle.com/vaghefi" target="_blank">@vaghefi</a>)</p>
<h3>Thresholding</h3>
<p>We had to find good thresholds for both our detection and classification models. We observed that a single split is not enough and that the F1 score variation between folds is significant. Therefore, we used the CV calculated on the entire train set (5 folds) to optimize our thresholds. The following tricks improved our CV and LB:</p>
<ul>
<li>Different thresholds for the detection and classification models: Originally we had the same threshold for the detection and classification models (~0.5). As the classification model became better, we needed to lower the detection threshold (~0.35) to include more helmet detections and then use the classification model to classify the impacts.</li>
<li>Different thresholds for Endzone and Sideline views: Endzone and Sideline videos are different in terms of image content, box area, box size, etc. We realized that using different thresholds can improve both CV and LB. We tried different combinations, and the best CV was achieved with a threshold about 0.05 higher in Sideline than in Endzone.</li>
<li>Different thresholds over time: Both detection and classification models have no information about the elapsed time. From the training set, we know that the chance of impact decreases as the frame number increases. We tried different combinations (fixed, piecewise, and linear) and ended up using a piecewise method with different thresholds for frame >= 150 and frame < 150 (see the sketch below).</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F73d38ec1aa72b0375c3106e4ba1c4c0b%2Ftime2.png?generation=1609892194122820&alt=media" alt=""></p>
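<p>Put together, the thresholding logic can be sketched as below; the numbers are illustrative placeholders, the actual values were tuned on the 5-fold CV:</p>
<pre><code>def impact_threshold(view, frame):
    """Illustrative piecewise thresholds: Sideline ~0.05 stricter than Endzone,
    and stricter again for late frames (frame >= 150)."""
    base = 0.40 if view == 'Sideline' else 0.35
    return base + (0.05 if frame >= 150 else 0.0)

# keep a candidate box only if its classifier score clears the threshold:
# kept = [b for b in boxes if b['score'] >= impact_threshold(b['view'], b['frame'])]
</code></pre>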
<h3>Box Expansion</h3>
<p>Increasing the size of the bounding boxes helped our models, for some reason. In our last submission, we made the bounding boxes 1.22x bigger.</p>
<h3>Adjacency Post-Processing</h3>
<p>The goal of this PP was to cluster detections (through multiple frames) which belong to the same player and remove FP. We used an IOU threshold and frame-difference threshold to optimize our algorithm. We also tried temporal NMS and soft-NMS but our custom algorithm performed better. </p>
<h3>View Post-Processing</h3>
<p>Impacts that were detected with a confidence lower than <code>T</code> were removed if no impact was found in the other view. That’s the best we came up with regarding merging information from both views.</p>
<h1>Final results</h1>
<p>We selected two submissions : Best public LB and best CV.</p>
<ul>
<li>CV 0.4885 - <strong>Public 0.5408 (10th)</strong> - Private 0.4873 (13th)</li>
<li><strong>CV 0.5125</strong> - Public 0.4931 (13th) - <strong>Private 0.5153 (9th)</strong>. </li>
</ul>
<p>Our competition inference notebook is <a href="https://www.kaggle.com/cdeotte/nfl-2d-detect-3d-classify-0-515" target="_blank">here</a></p>
<h2>Takeaways</h2>
<p>We should have trusted our classification models more: since they were 3D, they were most likely better than our detection model. A better approach would have been to use the detection models to detect all helmets instead of only impacts, and then use the 3D classification models to classify the impacts. </p>
|
NFL 1st and Future - Impact Detection
|
Team tara 13th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>[Update]</p>
<p>Our codes are released <a href="https://github.com/kentaroy47/kaggle-nflimpact-13thplace" target="_blank">here</a>.</p>
<p>First of all, thank you for the hosts for organizing this great challenge! </p>
<p>Congrats to all the winners and big thanks to my wonderful teammates <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a> <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> <a href="https://www.kaggle.com/rishigami" target="_blank">@rishigami</a>, it was a great opportunity to work with such talents.</p>
<p>We really wanted to celebrate a birth of a new GrandMaster but time shall wait! </p>
<p>It was a big challenge for me too, since it was my first kaggle competition after my baby son was born..</p>
<h1>Overview</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3508221%2Fa5abcefa4a1968535ed22d82b18569ee%2Fimage%20(1).png?generation=1609822336590711&alt=media" alt=""></p>
<p>Our solution is based on a 2-stage pipeline with the first stage detecting impact candidate boxes and the second stage classifying either the candidate box is true or false positive. </p>
<ul>
<li><p>Detectors<br>
We train two separate detectors with Endzone and Sideline videos, respectively. Training separate detectors gave better CV in our experiments.</p></li>
<li><p>PP of detections<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3508221%2F6b4e699284aa5ff836225856ea0b9dc4%2Fimage%20(2).png?generation=1609822406670388&alt=media" alt=""></p></li>
</ul>
<p>One of the challenges we faced here was that impact boxes away from the actual time of impact had to be filtered out. To cope with this, we use time-nms: we group boxes with iou>0.3 within a certain duration (30 frames) and extract only the median box, which should represent the time of impact, while the other candidates are filtered out. We also filter out boxes that do not appear in both the Endzone and Sideline videos within a certain number of frames.</p>
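<p>A minimal sketch of this time-nms (the box IoU helper is included for completeness; the exact grouping logic may differ):</p>
<pre><code>def iou(a, b):
    # a, b: (x1, y1, x2, y2)
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def time_nms(dets, iou_thr=0.3, window=30):
    """dets: list of {'frame', 'box', 'score'} sorted by frame.
    Groups overlapping boxes within `window` frames and keeps the median-frame
    box of each group, which should be closest to the actual time of impact."""
    groups = []
    for d in dets:
        for g in groups:
            if d['frame'] - g[-1]['frame'] <= window and iou(d['box'], g[-1]['box']) > iou_thr:
                g.append(d)
                break
        else:
            groups.append([d])
    return [g[len(g) // 2] for g in groups]
</code></pre>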
<p>The output of the first stage scores <code>Precision 0.2, Recall 0.6, F1 0.2</code>. The parameters are tuned so the detector will achieve a high recall. </p>
<ul>
<li>Classification</li>
</ul>
<p>Like the 3rd place team's solution, we crop helmet images from ±4 frames around the candidate box and classify them using a 3D ResNet. We implemented this based on torchvision's video classifier models. We ensemble 8 classifiers with horizontal-flip TTA. We trained the classifiers using both Endzone and Sideline videos.</p>
<ul>
<li>Threshold optimization</li>
</ul>
<p>Since there were about 10~20 impacts per video, we adaptively control the prediction threshold so that the final number of predictions per video stays within 15-25. This makes the model robust to threshold tuning and was quite important, since a single video with a lot of FPs can corrupt the entire score. </p>
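<p>A rough sketch of such an adaptive threshold (the target range and default threshold here are placeholders):</p>
<pre><code>import numpy as np

def adaptive_threshold(scores, lo=15, hi=25, default_thr=0.5):
    """Pick a per-video threshold so that the number of kept predictions
    stays roughly within [lo, hi]. `scores` are impact probabilities of one video."""
    s = np.sort(np.asarray(scores))[::-1]          # descending
    if len(s) == 0:
        return default_thr
    kept = int((s >= default_thr).sum())
    if kept < lo:
        return s[min(lo, len(s)) - 1]              # loosen until at least `lo` boxes pass
    if kept > hi:
        return s[hi - 1]                           # tighten down to `hi` boxes
    return default_thr
</code></pre>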
<p>The final CV was <code>Precision: 0.56, Recall 0.4, F1: 0.47</code> which has more room for improvement!</p>
<h2>Training the detector</h2>
<p>Training the detector was quite hard for us, since there were lots of options. We use mixup+cutmix augmentations and oversample images with impact during training. We use images without impacts as well, which improved the precision of the detectors.</p>
<h2>Training the classifier</h2>
<p>We were inspired by the <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/158158" target="_blank">Deepfake 3rd place solution</a> and used <a href="https://arxiv.org/pdf/1711.11248.pdf" target="_blank">3d CNNs</a> for classifiers.</p>
<p>We trained the classifier with all helmet images cropped to 96x96 or 128x128. Since there are lots of negative images, we used focal loss as the criterion. Mixup contributed to the CV as well.</p>
<p>Since we didn't have much time to train, the detector and classifier were single-fold models.</p>
<h1>What didn't work (for us)</h1>
<p>TTA, ensembling of detectors (TTA improved the private LB but didn't help CV or the public LB). We should have dug deeper here.</p>
<p>Classifiers with 27-channel input efficient nets. The CV/LB was much lower for us.</p>
<p>Using tracking data.</p>
<p>The CV for Endzone videos was about <code>0.43</code> and for sideline videos, <code>0.5</code>. We could not close this gap and think that Endzone videos are fundamentally hard because of occlusions between players and wonder what other teams did to close this gap.</p>
<p>Congrats on your gold too!<br>
Yes, this competition was very challenging not only the task but the number of GMs participating.. But it was great playing with the great kaggle GMs :)</p>
<p>I guess we needed one more push to get in the 10th place, will try harder next time!</p>
<p>Thank you.</p>
<p>Thank you!</p>
|
NFL 1st and Future - Impact Detection
|
Zoom call/presentation to discuss solutions
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>link for the meetup <a href="https://www.meetup.com/San-Diego-Machine-Learning/events/275552917/" target="_blank">https://www.meetup.com/San-Diego-Machine-Learning/events/275552917/</a></p>
<p>In the past Lyft competition I did a presentation with my meetup and a zoom call with other competitors to discuss and exhibit various solutions to the problem. I would like to do another one for this competition. It was a much shorter period for this one, so it would be great to share what was explored and deliberate on what the valuable takeaways are. </p>
<p>I will update this post with a meetup/zoom link later once I get it scheduled. Intended date will be January 23rd at 10AM Pacific. </p>
<p><a href="https://www.meetup.com/San-Diego-Machine-Learning/events/275552917/" target="_blank">link to meetup </a></p>
<p>Here is a link to the recording of the previous zoom call to understand how it typically goes. <a href="https://www.youtube.com/watch?v=ScjnDwghHGM&t=2s" target="_blank">https://www.youtube.com/watch?v=ScjnDwghHGM&t=2s</a></p>
|
NFL 1st and Future - Impact Detection
|
[10th place solution] Extended Centernet
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> congratulations to you and the team for the 2nd place and becoming GM.</p>
<p>thanks. I'm glad it helped</p>
<p>thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> Congratulations to you and the team too.</p>
<p>Thanks. I've implemented this in my augmentation.</p>
<p>Just got lucky. Always grabbing gold at the very end. :)) </p>
<p>it contributes ~0.03 to my CV. I didn't check the LB. </p>
<p>Thanks to NFL and Kaggle for hosting this interesting competition. Congratulation to all, especially to the prize winner and gold medalists.</p>
<p><em>The following is my approach.</em></p>
<p><strong>1. Detection models</strong><br>
My detection model is derived from <a href="https://github.com/xingyizhou/CenterNet" target="_blank">Centernet</a> (MIT licenced) with some modification.</p>
<ul>
<li><p>I replace the image-based feature extraction with a video-based feature extraction. Video features are very useful for detecting impacts.</p></li>
<li><p>I stack multiple CenterNet heads on top of the feature extraction block. Each head is responsible for predicting helmets for one frame.</p></li>
<li><p>I calculate the loss independently for the 2 classes (helmet with impact and helmet without impact) and then use a weighted sum of the 2 losses. </p></li>
<li><p>I write my own augmentation for video.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2545270%2F901fbd72b20f7781801a066a2d69ba09%2Fnet.jpg?generation=1609809101222562&alt=media" alt=""></p></li>
<li><p>Input: I use a sequence of 15 consecutive frames. I also use a customized sampling method that focuses more on frames with impacts. I think using a longer sequence could improve the result further, but my GPU did not allow me to do so.</p></li>
<li><p>Video feature extraction: I use EfficientNet-B5 (the number of input channels needs to be modified). I've tested EfficientNet-B3, B5 and B6. I didn't have enough time or GPU to explore further.</p></li>
<li><p>Detection head: I've used only Centernet for this detection head, because I am more familiar with it.</p></li>
</ul>
<p><strong>2. Post-Processing</strong></p>
<ul>
<li>IOU-Tracker: I simply use <a href="https://github.com/bochinski/iou-tracker" target="_blank">IOU-Tracker</a> (MIT licensed) to link the detected impacts from nearby frames. For each tracked trajectory, I just keep the middle box and ignore the rest.</li>
<li>Dynamic confidence threshold: Since impacts are most likely to happen between frames 30 and 80, I use a low threshold within this range. The number of impacts decreases towards the end of the video, so I slowly increase the confidence threshold towards the end (see the sketch after this list). Below is the threshold I use for the whole video<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2545270%2F52d822eabfcb8c39f6e9f128a009178f%2Fthresh.png?generation=1609809219877057&alt=media" alt=""></li>
</ul>
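<p>A hypothetical sketch of such a frame-dependent threshold (the numbers only illustrate the shape; the actual curve is the one plotted above):</p>
<pre><code>def dynamic_threshold(frame, total_frames):
    """Low threshold where impacts are most frequent (roughly frames 30-80),
    then rising slowly towards the end of the video. Values are illustrative."""
    if frame < 30:
        return 0.6
    if frame <= 80:
        return 0.3
    return 0.3 + 0.4 * (frame - 80) / max(total_frames - 80, 1)
</code></pre>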
|
NFL 1st and Future - Impact Detection
|
[40th place solution] 2-Class Prediction + HFlip + NMS + Filtering
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, thanks both to Kaggle founders and the NFL competition organisers for the possibility to take part in this contest! Secondly, thanks to <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and his <a href="https://www.kaggle.com/its7171/2class-object-detection-training" target="_blank">kernel</a> for the great idea of 2-class training!</p>
<h1>Brief Summary</h1>
<p>The <a href="https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov4-tiny-3l.cfg" target="_blank">Tiny-YOLOv4</a> from <a href="https://github.com/AlexeyAB/darknet" target="_blank">AlexeyAB's darknet repository</a> was used. The model was trained on both impact and non-impact helmets, and only the impact predictions were kept. To gain more accuracy, horizontal-flip data augmentation was applied both at training and at test time, along with filtering and non-max suppression over both the flipped and original frames.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2209740%2F579d67615527f33d40be1038d26ac5c2%2FNFL%20Impact%20Detection.png?generation=1610007246515155&alt=media" alt="The complete pipeline"></p>
<h1>Training</h1>
<p>The only videos and train_labels.csv were used in training. No separate frames from image_labels.csv and tracking data were used because of the lack of time and resources.</p>
<p>The parameters are:</p>
<ul>
<li>batch_size: 64,</li>
<li>classes: 2,</li>
<li>iterations: 4000 (with 64 images per iteration),</li>
<li>step: 15,</li>
<li>augmentation: horizontal flip only,</li>
<li>total images: ~4000 (2000 * 2),</li>
<li>input size: 896x512 (with random resize).</li>
</ul>
<p>The step was used when extracting frames from the video stream to reduce training time and also to better distinguish impacted from non-impacted helmets.</p>
<h1>Inference</h1>
<p>The main idea is to use not just one but two images per real video frame: the original and a horizontally flipped copy.</p>
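<p>A small sketch of how the flipped predictions can be mapped back to the original frame coordinates (the <code>detect</code> function is a placeholder for the Tiny-YOLOv4 inference call):</p>
<pre><code>import numpy as np

def unflip_boxes(boxes, img_w):
    """boxes: (N, 4) array of [x1, y1, x2, y2] predicted on the h-flipped frame."""
    out = boxes.copy()
    out[:, 0] = img_w - boxes[:, 2]
    out[:, 2] = img_w - boxes[:, 0]
    return out

# usage sketch:
# boxes_orig, scores_orig = detect(frame)
# boxes_flip, scores_flip = detect(frame[:, ::-1])
# boxes_flip = unflip_boxes(boxes_flip, frame.shape[1])
# merge both sets, then apply the filtering and NMS described below
</code></pre>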
<p>Once both of them have been predicted, filtering is applied using the following parameters:</p>
<ul>
<li>nearest_index_distance: 5 (the farthest distance between two nearest impact frames),</li>
<li>farthest_index_distance: 23 (the distance between start and stop frames of the filtered sequence),</li>
<li>nearest_iou: 0.4 (the Intersection-Over-Union metric).</li>
</ul>
<p>After filtering, Non-Max Suppression is applied to merge the overlapping bboxes and exclude the non-overlapping ones from the submission. The NMS is performed not on a single frame only but over a few of the nearest frames in the sequence. The parameters are:</p>
<ul>
<li>nms_max_distance: 4,</li>
<li>nearest_iou: 0.4.</li>
</ul>
|
NFL 1st and Future - Impact Detection
|
[6th place solution] EfficientDet + resnet18
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL 1st and Future - Impact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to all participants and organizers!<br>
Thanks especially to <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and <a href="https://www.kaggle.com/nvnnghia" target="_blank">@nvnnghia</a> for sharing amazing notebooks!</p>
<p>My solution is made up of the following two stages.<br>
(Stage-1) Detect all helmets with EfficientDet-d5<br>
(Stage-2) Classify all helmet to impact or no impact with resnet18</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3009367%2F027d750fd39f79ccdb7e6f9df7a43b7c%2Fnfl_hirai.jpg?generation=1609831143184495&alt=media" alt=""></p>
<h3>Stage-1</h3>
<p>I trained EfficientDet-d5 on a single helmet class.<br>
For training, I used frames where collisions exist and frames where they don't. Training images were extracted from the video every 10 frames, plus ±5 frames around each frame with collisions.<br>
Images were enlarged to 1280*1280, and only horizontal flip was used for augmentation because loss converged faster.<br>
The inference was performed on two images, the original image, and the horizontally flipped image, and the results were merged using wbf.</p>
<h3>Stage-2</h3>
<p>I trained resnet18 on 2 classes (impact or not) and used it to classify whether the target helmet had collided.<br>
For the resnet18 input, I cropped the image around the helmet and prepared 9 images: the current frame and the 4 frames before and after it.<br>
The cropped image size is three times the longest side of the helmet rectangle (max(w,h) * 3).<br>
The cropped image was padded to square and resized to 112*112.</p>
<p>As in stage 1, the training images were extracted from the video every 10 frames, plus only the exact collision frames.<br>
When creating training data, the same evaluation criteria were used to label whether a collision occurred or not (impact=1, confidence>1, visibility>0).<br>
For augmentation, I used the albumentations library and applied the following:</p>
<ul>
<li>HorizontalFlip</li>
<li>RandomBrightness</li>
<li>RandomContrast</li>
<li>one of(MotionBlur,MedianBlur,GaussianBlur,GaussNoise)</li>
<li>HueSaturationValue</li>
<li>ShiftScaleRotate</li>
<li>Cutout</li>
</ul>
<p>In addition to the above, I also used the following augmentation</p>
<ul>
<li>Changed crop size randomly (max(w,h) * 2.5 ~ max(w,h) * 3.5)</li>
<li>Shifted cropping position randomly (-max(w,h) * 0.07 ~ max(w,h) * 0.07)</li>
<li>Replaced a random frame image with a black image (All 0's numpy array)</li>
</ul>
<p>When doing augmentation, the same augmentation is applied to all images in 9 frames.</p>
<p>I trained two resnet18 models (2D conv and 3D conv).<br>
For the 2D conv resnet18 input, the 9-frame images are combined along the RGB channels to form a 27-channel image.<br>
For the 3D conv resnet18 input, the 9-frame images are converted to a 9-frame movie.</p>
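<p>The two input layouts can be illustrated like this (note that the 2D model's first conv must be changed to accept 27 input channels):</p>
<pre><code>import torch

frames = [torch.randn(3, 112, 112) for _ in range(9)]  # 9 cropped RGB frames

x2d = torch.cat(frames, dim=0)     # (27, 112, 112) -> 2D conv resnet18 with 27 in-channels
x3d = torch.stack(frames, dim=1)   # (3, 9, 112, 112) -> 3D conv resnet18 (C x T x H x W)
</code></pre>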
<p>The inference process was performed on the two models, and the final output was the ensemble of the respective outputs.<br>
I also used the same flip tta as in stage-1.</p>
<h3>Post-Processing</h3>
<p>If helmets are detected in the same position within 4 frames, only the middle frame will be kept, and the others will be ignored.<br>
In addition, I ignored the first and last 10 frames of the video because collisions are unlikely there.</p>
<h3>Score</h3>
<p>cv for EfficientDet-d5 : 0.926(for all helmets)<br>
roc_auc for 2d conv resnet18 : 0.964<br>
roc_auc for 3d conv resnet18 : 0.967<br>
cv for local : 0.53<br>
public score : 0.56<br>
private score : 0.59</p>
<p>Thank you!</p>
<p>Yes. That public notebook supressed false positives too much.</p>
<p>I'm sorry.<br>
I have not read any books about kaggle, so I cannot recommend it.<br>
And I tried to find an English version of the book, but it doesn't seem to exist.</p>
<p>I'm glad you said it's a good explanation!<br>
Thank you very much!</p>
<p>Thank you!<br>
Congrats on your medal too!</p>
<p>Thank you!<br>
The following link is 3d conv resnet18.<br>
<a href="https://pytorch.org/docs/stable/torchvision/models.html#resnet-3d" target="_blank">https://pytorch.org/docs/stable/torchvision/models.html#resnet-3d</a></p>
<p>When using it for 2-class classification, the number of outputs must be changed to 2.<br>
The input shape must be (3 x T x H x W), where T is the number of frames; in my case, the input shape is (3 x 9 x 112 x 112).<br>
If a list of images exists, input images can be created as follows.<br>
<code>input_images = torch.stack(images_list, 1)</code></p>
<p>Thank you!<br>
Congrats on your first medal too!!</p>
|
NFL Big Data Bowl
|
(not really) 2nd Public LB Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I agree. We will be at a disadvantage for stage 2 having trained offline. Unfortunately we started working on the GNN with only 10 days left. We quickly realized there wouldn't be enough time to develop the model and also figure out how to make it run in the kernel. This was the best solution given the circumstances.</p>
<p>I'll leave it to <a href="/christofhenkel">@christofhenkel</a> to explain the architecture in more detail. In the end string representation of our <code>state_dict</code> was only around 2000 lines or so of text. Small enough that we could paste multiple models into a kernel.</p>
<p>First I want to say a big thanks to my teammates @peterhurford and @christofhenkel for teaming up this competition. Also thanks for the hosts and kaggle for putting on a really fun and challenging competition. When this competition first launched I thought I might have an advantage because of all the time I’ve spent watching the NFL. I also hoped that all the brilliant non-American kagglers might be turned off by American football. Turns out I was wrong on both fronts!</p>
<h2>High level summary</h2>
<ul>
<li>Even though we are 2nd on the public LB, we expect to drop significantly stage 2.</li>
<li>Our final solution used a blend of GNN and MLP models.</li>
<li>We trained the GNN offline and uploaded model weights. Our MLP is trained completely in the kernel.</li>
<li>We knew that our GNN model wouldn't get the benefit of training on the 2019 play data like models trained in the kernel would. Using @gertjac 's approach <a href="https://www.kaggle.com/gertjac/just-to-show-why-i-wonder-about-online-learning">here</a> we could pseudo label some of the public test set to at least provide our model some 2019 data to train on.</li>
<li>We knew this would overstate our placement on the stage 1 leaderboard, but also figured it would give us the best chance for success in stage 2. We waited until 2 days before the deadline to submit this overfit model to reduce our impact on the LB, and we renamed our team to signal that we knew we were overfit.</li>
<li>I'm honestly surprised that more teams didn't take this approach. It seemed like the best way to incorporate all the data we had available to us while training offline. As it turns out, most of the top teams kept their training in the kernel.</li>
</ul>
<h2>Features:</h2>
<ul>
<li>For the MLP we experimented with over 4000 features. These features were mostly based on player positions and labeled players by distance to the runner, but also position laterally and position within the defensive line. @peterhurford did some cool stuff with feature selection that he can expand upon. I think our final MLP only used a hundred or so of these features. We found removing bad features to be equally effective as adding new ones.</li>
<li>GNN setup used node and edge features and didn't require us to label players in relation to the runner.
<ul><li>Node features were specific to each player (normalized S, direction)</li>
<li>We added what we called <code>frame</code> features, which were similar to what other top teams did, where we simulated advancing the players' positions based on their direction, speed and velocity.</li>
<li>Edge features involved interactions between players. We ended up using distance, angles, offsetting S and momentum. We also created edge features that were the differences in X/Y speed
and momentum divided by distance.</li></ul></li>
</ul>
<h2>GNN setup</h2>
<ul>
<li>Our GNN model was based off of some starter code the @christofhenkel had modified from his CHAMPS solution.</li>
<li>We experimented with a lot of different features, learning rate schedulers, scaling/preprocessing.</li>
<li>One of the main things I spent time experimenting with was adjusting the design of our edge matrix. The edge matrix consists of binary (0,1) values representing the connections between nodes (player). We tried many different approaches for this: all players connected with each other, players only interacting with opponents, players only interacting with close players. Our final model's edge matrix used interactions between opposing team players AND the rusher regardless of the team.</li>
</ul>
<h2>Post processing</h2>
<ul>
<li>As others have mentioned we post processed to force 0's and 1's for yardage that would be beyond either endzones.</li>
<li>We used <code>np.maximum.accumulate</code> to ensure our predictions were monotonically increasing (see the sketch after this list).</li>
</ul>
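<p>A sketch of this post-processing for a single play (the 199-bin CDF layout over -99..99 yards is the competition's; variable names are illustrative):</p>
<pre><code>import numpy as np

def post_process(cdf, max_gain, max_loss):
    """cdf: (199,) predicted CDF over Yards in [-99, 99].
    max_gain: yards to the opponent endzone, max_loss: yards behind the own endzone."""
    yards = np.arange(-99, 100)
    out = cdf.copy()
    out[yards >= max_gain] = 1.0       # cannot gain more than the distance to the endzone
    out[yards < -max_loss] = 0.0       # cannot lose more than the distance to the own endzone
    return np.maximum.accumulate(out)  # enforce a monotonically increasing CDF
</code></pre>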
<h2>Uploading models</h2>
<p>Since we trained our models offline but were not allowed to use external datasets, we were forced to paste our model weights as plain text directly in the kernel. Initially we just copied the string representation of pytorch's <code>state_dict</code>. However, when we wanted to bag for our final solution, the kernel would not commit if it was beyond 1MB in size (which we actually found to be more like 800kb, I guess kernels round up?). @peterhurford thought of a nice idea of zipping our model into a binary format, which allowed us to squeeze in an extra few models for bagging.</p>
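<p>One possible way to turn a <code>state_dict</code> into a compressed text blob that fits in a kernel (the exact serialization used isn't shown, so treat this as a sketch):</p>
<pre><code>import base64, io, zlib
import torch

def weights_to_text(model):
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return base64.b64encode(zlib.compress(buf.getvalue())).decode()

def text_to_weights(text):
    raw = zlib.decompress(base64.b64decode(text))
    return torch.load(io.BytesIO(raw))

# paste the string returned by weights_to_text() into the kernel, then:
# model.load_state_dict(text_to_weights(WEIGHTS_TEXT))
</code></pre>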
<h2>How it unfolded</h2>
<p>@peterhurford and I teamed with a few weeks left before the merger deadline and I'm super grateful that he agreed to team. I learned a lot from him about MLP design, feature selection, and was amazed by his engineering abilities. Eventually we got stuck on the LB around 0.0133x and decided to reach out to @christofhenkel because of his experience and knowledge working with NNs. He agreed to join the team in a consulting role. Once we teamed he shook things up for us by suggesting the top teams were using GNN models. I'm really glad he agreed to join our team because he gave us the idea, motivation, and guidance for the GNN design.</p>
<p>Thanks for reading and best of luck to everyone in stage 2!</p>
|
NFL Big Data Bowl
|
10th Place Sharing
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our focus really are angle + dist + aggregations and their projections. You have a nice solution as well! Good luck in stage 2.</p>
<p>My guess is we tried it too early. This trick should be beneficial to some features, but we didn't have them yet.</p>
<p>First thanks for NFL and Kaggle for such a competition. It was a journey with lots of learning for us in terms of NFL NFL domain and trying different Data Science ideas.</p>
<p>What we did is simply this: try a lot and fail a lot, but find something useful at the end. Since none of us in our team were used to complex DL models, we only used an MLP, and we mostly focused on extracting smart features and on how to feed them into the MLP.</p>
<p>We are happy with our score because we were able to get 0.01268 with only an MLP and many features.</p>
<p>Let's pass on more detail:</p>
<p><strong>Model & Features</strong>
Our model is just a normal NN with 5 inputs.
2 of them are for categorical features,
1 is for distance/speed & distance/speed projection features,
1 is for angle & angle projection features,
1 is for aggregation statistics features.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Ff0aa8a21247e396004aca0a811bde56f%2Fdownload%20(4).png?generation=1574945297226928&alt=media" alt=""></p>
<p><strong>Validation</strong>
We used a 5-fold GroupKFold on ‘Week’, but actually the result is similar to grouping on ‘GameId’.
We validated and early-stopped our model on >2017 CV only, so in stage 2 it will early-stop based on 2018+2019 data.</p>
<p><strong>About Rusher’s Speed and Acceleration</strong>
Rusher’s acceleration is useful to predict Yards in 2017 but not 2018:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Fa4d23ca3edc0ed556c7db290a3cb3d0f%2Fimage.png?generation=1574945361392996&alt=media" alt=""></p>
<p>Rusher Speed in 2018 is super linear with the distance but not in 2017:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Fca31bd3f1d128b4418da55ff72ee9f68%2FScreenshot%20from%202019-11-01%2002-11-55.png?generation=1574945400449410&alt=media" alt=""></p>
<p>Since the host said both S and A were calculated by Dis, we started to think the root cause is the location tracking. But then the host said the location tracking has been accurate. Therefore, our next guess is about the sampling frequency. The chances are the timestamp of sensors’ readings got messed up, or a resolution problem: because the sensors in 2017 and 2018 are different, their reading may have been forced to the same frequency 10 Hz.
We don’t know the answer, but we then try the following 2 things:</p>
<p>1) Rescale 2017 A and S with the 2018 mean and std -> a little improvement on LB
2) S = Dis * 10 -> no improvement, so we dropped it.</p>
<p>Ok let's talk about the magic jumps:</p>
<p><strong>Jump 1: Yards Post Processing</strong>
Same as other teams, we corrected some impossible Yards prediction based on YardLine.</p>
<p><strong>Jump 2: Angles</strong>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Ffde1a54e0ed43d047ca0db8df4a1bc36%2FIMG_8D149B586699-1.jpeg?generation=1574945481716293&alt=media" alt=""></p>
<ul>
<li>Alpha: The relative angle between a player and the rusher</li>
<li>Theta: Reverse of Alpha</li>
<li>Beta: The difference between the Rusher's Direction and Theta; it indicates whether a player is blocking the rusher's path. A small abs(Beta) indicates the rusher is running towards the player.</li>
</ul>
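<p>A sketch of these angle features; the exact sign conventions are not spelled out above, so this version simply measures the bearing from the rusher to a player (assuming 'Dir' is degrees clockwise from the +Y axis, as in the raw data) and how far the rusher's direction deviates from it:</p>
<pre><code>import numpy as np

def angle_features(player_x, player_y, rusher_x, rusher_y, rusher_dir_deg):
    """Returns (alpha, theta, beta) as described above; conventions are assumptions."""
    alpha = np.degrees(np.arctan2(player_x - rusher_x, player_y - rusher_y)) % 360
    theta = (alpha + 180) % 360
    beta = (rusher_dir_deg - alpha + 180) % 360 - 180  # small |beta|: rusher heads at player
    return alpha, theta, beta
</code></pre>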
<p><strong>Jump 3: Future Features Estimation</strong>
Features about what is going to happen in the next 0.5, 0.75, 1, 1.25, and 1.5 seconds:
1. the distance between the rusher and the closest players.
2. the distances between defensive players and offensive players.
3. their distances to the yard line are important.
4. Speed is also projected; the projected horizontal/vertical speed of the rusher is important.</p>
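<p>A small sketch of the projection, assuming constant speed and 'Dir' measured clockwise from the +Y axis:</p>
<pre><code>import numpy as np

def project(x, y, s, dir_deg, t):
    """Project a player's position t seconds ahead at constant speed."""
    rad = np.radians(dir_deg)
    return x + s * np.sin(rad) * t, y + s * np.cos(rad) * t

# e.g. projected rusher-defender distance after 0.5 s:
# rx, ry = project(r.X, r.Y, r.S, r.Dir, 0.5)
# dx, dy = project(d.X, d.Y, d.S, d.Dir, 0.5)
# dist_05 = np.hypot(rx - dx, ry - dy)
</code></pre>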
<p><strong>Jump 4: Standardizing Rusher Direction</strong>
Consider the below 4 plays. The Black arrow indicating the rusher’s direction. We flipped all play with rusher’s direction pointing downwards. And the result is the blue arrows, after the flipping, you can see 1 & 2 are the same, 3&4 are the same. As a result, all rusher’s direction will belong to [270, 360] and [0, 90].</p>
<p>This flipping is tricky: we have to make sure the entire play is mirrored along the middle line.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Fdf3a3a717a1eabbb5ebf6a7cfa854395%2Ffullsizeoutput_19d0.jpeg?generation=1574945602992218&alt=media" alt=""></p>
<p><strong>The end</strong>
Thanks to the host for offering this nice competition. We have learned a lot here. Also after reviewing other talents' approaches, we really need to update our deep learning skills. </p>
<p>It is not augmentation. It is an operation done on the original rows. The idea is similar to what the host suggested us to do, flipping the play direction. I tried augmentation but the result is worse than standardizing for our model. </p>
<p>Yes, normally you will have the same labels when doing augmentation. </p>
|
NFL Big Data Bowl
|
1st place solution The Zoo
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It is not the best idea to calculated CV by fitting NN only once for each fold. Firstly, the CV will be quite unstable and make it difficult to judge on model improvements. Secondly, it might be inconsistent with LB if one submits average of multiple NN predictions. To mitigate that, we did 4 runs for each of the 5 folds and used averaged predictions to calculate CV (so 20 model fits overall). It would have been better to do rather 8 runs, as our submission was an average of 8 models. But we decided to stick to 4 as CV variance was already quite low.</p>
<p>The model definitely learns such stuff from player locations, but it is a fair point if adding it explicitly can improve the accuracy. We've tried distance between rusher and yardline, rusher X and yardline itself as extra variables at some moments, but it never gave an improvement. I will re-check if adding it to our final structure improves CV.</p>
<p>Yes, you are absolutely right, it is worth adding this variable to the tensor as CV is consistently a bit lower with it. It could have brought us to the very edge of 0.01200 on public LB, but we cannot check it as late submissions are disabled. I would even suggest to add distance from a defender (instead of rusher) to the line of scrimmage, as it seems to give another teeny tiny improvement of CV. But such improvements are too close to the random range of CV values to be confident.</p>
<p>Good catch!</p>
<p>Too much 😆 </p>
<p>I would say each of us spends 4+ hours per day.</p>
<p>I wonder if NFL team analytics can improve it with extra data they collect. For instance, if the raw data indicates which defender tackled the rusher, it might be used to improve this model.</p>
<p>What I meant with the tackler is to supply it as a second target variable for the model, so during the fitting the model can learn also to better identify which defenders are more likely to tackle the rusher. That might improve the prediction of Yards without the necessity to provide data from future.</p>
<p>I hope the approach we developed will bring value to the analytics team, as it seems it can work for other types of plays.</p>
<p>We are glad to publish the solution write-up of The Zoo by <a href="/dott1718">@dott1718</a> and <a href="/philippsinger">@philippsinger</a>.</p>
<p>We want to sincerely thank the hosts and Kaggle for making this competition possible. We had a lot of fun crafting our solution as it was necessary to think a bit out of the box and come up with something that really reflects the situation on the field. An extra thanks goes to Michael Lopez for actively participating in all the discussions and activities around the competition. That did add motivation to improve and believe that we can bring some value to NFL analytics. Can’t remember the last time we’ve seen such involvement of a host into the competition.</p>
<p>There was little problem with the data (2017 measurement differences were disclosed) and there was a nice correlation between CV and public LB. There was also no real chance to cheat as private LB will be on future data. We also want to thank all competitors for not exploiting the possible leak in public LB.</p>
<p>We really hope there won’t be any surprises on the private LB data and we hope our kernels will run through. In these types of kernel competitions there is always the risk of something failing, which would be devastating, of course. </p>
<p>Regardless of what happens, we are really proud of our solution and strongly believe that it can be a valuable asset to future endeavors in NFL analytics.</p>
<p><strong>TL;DR:</strong> It’s a 2d CNN based on relative location and speed features only.</p>
<h3>Solution motivation</h3>
<p>Few words about how we came up with the model structure. To simplify we assume a rushing play consists of:
- A rusher, whose aim is to run forward as far as possible
- 11 defense players who are trying to stop the rusher
- 10 remaining offense players trying to prevent defenders from blocking or tackling the rusher</p>
<p>This description already implies connections between which players are important and which might be irrelevant, later we proved it to be the case on CV and LB. Here is an example of play visualization we used (based on the modified kernel from Rob Mulla [1])</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F85156%2F3304f790cf66a201d247243af8312433%2FPlayers%201.png?generation=1574946183096843&alt=media" alt=""></p>
<p>If we focus on the rusher and remove other offense team players, it looks like a simple game where one player tries to run away and 11 others try to catch him. We assume that as soon as the rushing play starts, every defender regardless of the position, will focus on stopping the rusher asap and every defender has a chance to do it. The chances of a defender to tackle the rusher (as well as estimated location of the tackle) depend on their relative location, speed and direction of movements.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F85156%2Fb81666526aa2c9098d2156cd21071787%2FPlayers%202.png?generation=1574946200648463&alt=media" alt=""></p>
<p>Another important rule we followed was not to order the players, because that would force an arbitrary criteria into the model, which will not be optimal. Besides, the picture from above gives us the reason to believe each defender should be treated in a similar manner.</p>
<p>That points to the idea of a convolution over individual defenders using relative locations and speeds, and then applying pooling on top.</p>
<p>At first we literally ignored the data about 10 offense players and built a model around the rusher and defenders, which was already enough to get close to 0.013 on public LB. Probably with proper tuning one can even go below 0.013.</p>
<p>To include the offense team player we followed the same logic - these 10 players will try to block or tackle any of the defender if there is a risk of getting the rusher stopped. So, to assess the position of a defender we want to go through all the offense team players, use their location and speed relative to the defender, and then aggregate. To do so, we apply convolution and pooling again. So good old convolution - activation - pooling is all we needed.</p>
<h3>Model structure</h3>
<p>The logic from above brought us to the idea of reshaping the data of a play into a tensor of defense vs offense, using features as channels to apply 2d operations.</p>
<p>There are 5 vector features which were important (so 10 numeric features if you count projections on X and Y axis), we added a few more, but they have insignificant contribution. The vectors are relative locations and speeds, so to derive them we used only ‘X’, ‘Y’, ‘S’ and ‘Dir’ variables from data. Nothing else is really important, not even wind direction or birthday of a player ;-)</p>
<p>The simplified NN structure looks like this:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F85156%2F2b5c9ce8e54f58ba78dcf120ef49c278%2FNN%20structure.png?generation=1574945484839246&alt=media" alt=""></p>
<p>So the first block of convolutions learns to work with defense-offense pairs of players, using geometric features relative to the rusher. The combination of multiple layers and activations before pooling was important to capture the trends properly. The second block of convolutions learns the necessary information per defense player before the aggregation. And the third block simply consists of dense layers and the usual things around them. 3 out of 5 input vectors do not depend on the offense player, hence they are constant across the “off” dimension of the tensor.</p>
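<p>A rough sketch of this structure (layer widths, depths and the exact feature channels are guesses, not the actual architecture):</p>
<pre><code>import torch
import torch.nn as nn

class DefenseOffenseCNN(nn.Module):
    def __init__(self, in_ch=10, hidden=128, n_bins=199):
        super().__init__()
        # block 1: 1x1 convs act on each defender-offense pair independently
        self.pairs = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 1), nn.ReLU())
        # block 2: 1x1 convs act on each defender after pooling over offense
        self.defenders = nn.Sequential(
            nn.Conv1d(hidden, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 1), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU(), nn.Linear(128, n_bins))

    @staticmethod
    def pool(x, dim, w_avg=0.7):
        # weighted mix of average and max pooling
        return w_avg * x.mean(dim=dim) + (1 - w_avg) * x.max(dim=dim).values

    def forward(self, x):            # x: (batch, features, 11 defenders, 10 offense)
        x = self.pairs(x)
        x = self.pool(x, dim=3)      # aggregate over offense players
        x = self.defenders(x)
        x = self.pool(x, dim=2)      # aggregate over defenders
        return self.head(x)          # logits over yard bins (softmax + cumsum live in the loss)
</code></pre>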
<p>For pooling we use a weighted sum between both average and max pooling with average pooling being more important (roughly 0.7). In earlier stages of the model, we had different kinds of activations (such as ELU) as they don’t threshold the negative weights which can be problematic for the pooling, but after tuning we could switch to ReLU which is faster and had similar performance. We directly optimize CRPS metric including softmax and cumsum.</p>
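<p>A sketch of the CRPS objective with softmax and cumulative sum (the 199-bin yard layout is the competition's; the exact implementation details are assumptions):</p>
<pre><code>import torch
import torch.nn.functional as F

def crps_loss(logits, yards, n_bins=199, offset=99):
    """logits: (B, 199) network output, yards: (B,) true Yards values."""
    cdf_pred = torch.cumsum(F.softmax(logits, dim=-1), dim=-1)
    bins = torch.arange(n_bins, device=logits.device).unsqueeze(0)      # 0..198 == -99..99 yards
    cdf_true = (bins >= (yards + offset).long().unsqueeze(1)).float()   # Heaviside step target
    return ((cdf_pred - cdf_true) ** 2).mean()
</code></pre>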
<p>For fitting, we use Adam optimizer with a one cycle scheduler over a total of 50 epochs for each fit with lower lr being 0.0005 and upper lr being 0.001 and 64 batch size. We tried tons of other optimizers, but plain Adam is what worked best for us. </p>
<h3>CV</h3>
<p>We were quite fortunate to discover a really robust CV setup. Probably, we will never have such a nice CV again. In the end, it is quite simple. We do 5-fold GroupKFold on GameId, but in validation folds we only consider data from 2018 (similar to how Patrick Yam did it [2]). We saw very strong correlations between that CV and public LB as 2019 data is way more similar to 2018 data compared to 2017 data. Having the 2017 data in training is still quite crucial though. As we are using bagging on our final sub, we also bagged each fold 4 times for our CV, meaning our final CV is a 5-fold with each fold having 4 bags with random seeds.</p>
<p>Having such a strong CV setup meant that we did not always need to check public LB and we were quite confident in boosts on CV. We actually had quite a long period of not submitting to public LB and our improvements were all gradual. Based on given correlation, we could always estimate the rough LB score. You can see a plot of some of our CV and LB models below. The x-axis depicts the CV score, and y-axis respective LB score. Blue dots are models actually submitted to LB, and red dots are estimates. You can see that we lost the correlation only a tiny bit in the end, and our theoretical public LB score would have been below 0.01200. Our final CV for 2018 is around 0.012150.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F85156%2F09167c277a7fac56e0b9e44d9246c0bd%2FCV_LB.png?generation=1574945690175061&alt=media" alt=""></p>
<h3>Data processing</h3>
<p>As we assume most people did, we adjusted the data to always be from left to right. Additionally, for training we clip the target to -30 and 50. For X,Y and Dir there is no other adjustment necessary, however, as most have noted, there are some issues with S and A. Apparently, the time frames were slightly different between different plays.</p>
<p>For S, the best adjustment we found is to simply replace it with Dis * 10. A is a bit more tricky as there is apparently some form of leak in 2017 data (check the correlation between rusher A and target). So what we did is to adjust A by multiplying it with (Dis / S) / 0.1. That means we scale it similarly to how we scale S. After all, A only has a tiny signal after this adjustment, and one can easily drop it. As we rely on relative features in the model, we don’t apply any other standardization.</p>
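<p>In pandas terms, the adjustment described above amounts to something like this (toy rows; the zero-speed guard is an addition):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'S': [4.2, 0.0], 'A': [1.5, 2.0], 'Dis': [0.45, 0.0]})  # toy data
df['S_adj'] = df['Dis'] * 10
df['A_adj'] = df['A'] * (df['Dis'] / df['S'].replace(0, np.nan)) / 0.1
</code></pre>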
<h3>Augmentation & TTA</h3>
<p>What worked really well for us is to add augmentation and TTA for Y coordinates. We assume that in a mirrored world the runs would have had the same outcomes. For training, we apply 50% augmentation to flip the Y coordinates (and all respective relative features emerging from it). We do the same thing for TTA where we have a 50-50 blend of flipped and non-flipped inference.</p>
<h3>Code optimization</h3>
<p>We decided quite early that it is best to do all the fitting within the kernel, specifically as we also have 2019 data available in the reruns. So we also decided early to spend time on optimizing our runtime, because we also knew that when fitting NNs it is important to bag multiple runs with different seeds as that usually improves accuracy significantly and it removes some of the luck factor.</p>
<p>As mentioned above, we use Pytorch for fitting. Kaggle kernels have 2 CPUs with 4 cores, where 2 of those cores are real cores and the other 2 are virtual cores for hyperthreading. While a single run is using all 4 cores, it is not optimal in terms of runtime, because you cannot multiprocess each operation in a fit. So what we did is to disable all multithreading and multiprocessing of Python (MKL, Pytorch, etc.) and did manual multiprocessing on a bag level. That means we can fit 4 models at the same time, gaining much more runtime compared to fitting a single model on all 4 cores. </p>
<p>Our final subs fit a conservative number of 8 models each, having a total runtime of our subs at below 8500 seconds.</p>
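<p>The gist of this setup, as a sketch (the fitting function body is omitted; names are placeholders):</p>
<pre><code>import os
os.environ['OMP_NUM_THREADS'] = '1'   # keep each worker on a single thread
os.environ['MKL_NUM_THREADS'] = '1'

import torch
import torch.multiprocessing as mp

def fit_one_bag(seed):
    torch.set_num_threads(1)
    torch.manual_seed(seed)
    # ... build the model, fit it on the training data, return test predictions ...
    return seed  # placeholder

if __name__ == '__main__':
    with mp.Pool(processes=4) as pool:   # 4 bags in parallel on the 4 kernel cores
        results = pool.map(fit_one_bag, [0, 1, 2, 3])
</code></pre>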
<h3>What didn’t work</h3>
<ul>
<li>Transformers and multihead attention, which seem to approximate the dependencies we explicitly use. We mainly focused on trying out attention to include offense-offense and defence-defence dependencies.</li>
<li>LSTM instead of CNN.</li>
<li>Adding dependencies like offense-offense and defence-defence explicitly.</li>
<li>As soon as all the inputs are vectors, it seems tempting to try complex numbers based NNs. There is even a nice paper and a github repo available with the math and implementation of complex number version of all the layers we are using [3], but in keras. However, all the attempts we’ve made to limit CNNs to vector operations gave worse results.</li>
<li>Going deeper and wider.</li>
<li>CNN adjustments known from CV like Squeeze-and-Excitation layers or residual networks.</li>
<li>Voronoi features.</li>
<li>Weighing 2018 data higher than 2017 data.</li>
<li>Multi-task learning, label smoothing, etc.</li>
<li>Other optimizers, schedulers, lookahead, etc.</li>
</ul>
<h3>Final subs</h3>
<p>Our first sub is our best model fitted on an 8-fold with picking the best epochs based on CV using 2018 and 2019 data (in the rerun, only 2018 in public LB). This model currently has 0.01205 public LB. Our second sub is using full data for fitting with fixed epochs (no early stopping). It currently has public LB 0.01201.</p>
<p>In private reruns we incorporate 2019 data into training and we hope that all goes well, but you never know.</p>
<p>P.S. Don’t forget to give your upvotes to <a href="/philippsinger">@philippsinger</a> as well - this model is a great example of teamwork.</p>
<p>[1] <a href="https://www.kaggle.com/robikscube/nfl-big-data-bowl-plotting-player-position">https://www.kaggle.com/robikscube/nfl-big-data-bowl-plotting-player-position</a>
[2] <a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314</a>
[3] <a href="https://arxiv.org/abs/1705.09792">https://arxiv.org/abs/1705.09792</a></p>
<p>I take the full blame for that 😆 </p>
<p>You are correct and we didn't scale it anyhow as it was the intention for this feature. This way it shows the pace at which the defender is approaching the rusher.</p>
|
NFL Big Data Bowl
|
25th Public LB, A great learning experience
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!</p>
<p>Thank you!</p>
<p>First at all, good luck to everybody for the second stage, and thank you to all the people who with their topics, kernels and, now, solutions make this a great competition where you can learn so much. </p>
<p>On a personal note I’m very happy, not because of the position (as I think I’m going to do worse on private, or my submissions will probably crash) but because for me, as I said in the title, this has been a great learning experience. I started it knowing almost nothing about NN or python and I ended it with a python NN submission good enough to reach a good place. I still have a lot to learn but I think it is a good start, especially because it is something I really wanted to do but had been postponing, as I feel quite comfortable with R and GBM models. This competition worked as a very good push to do it. It was out of necessity because, as @dmitriyguller said in the "using R" post:</p>
<blockquote>
<p>Unfortunately my experience has been that taking part in this competition using R is like running a marathon while breathing through a straw.</p>
</blockquote>
<p>Sometimes I think it would have been better if the hosts had limited the competition to python kernels, but then maybe I wouldn’t have started, so I don’t know. </p>
-
<p>With R and a 199-class multi-class LGB I was able to get to .01305 on the LB, at that moment a gold medal position, but I couldn't get any better, so I started to learn about NN and Keras, and almost from the beginning, using the same variables, I observed that you could get better results. The problem was that keras doesn’t work on R kernels, so I had to learn python. </p>
-
<p>My final solution is not very different from some of the solutions already explained: most of the variables are positions and distances to the rusher at different moments in time using S and Dir, and then some maxes, mins, means and sds over them. I also flipped the Y and did post-processing to overwrite impossible yardages to 0 or 1. I also engineered some features trying to capture possible blockages of the non-rusher offenders over the defenders. I fed all these variables into a simple sequential NN (512, 256, 128) + dropouts (0.5) with elu activation and adam optimizer, and a softmax final layer. Before that I ran a CatBoost regressor to predict the number of yards and used the prediction as a feature for the NN.</p>
-
<p>I spent quite a lot of time trying to get a more complicated NN to work. I got a first promising try by putting the positions of the players into a grid and feeding it into a CNN as if it were an image, then using the positions at different moments in time and feeding them into a CNN + LSTM NN as if it were a video, and finally trying different combinations to get the player variables into a CNN. I had the feeling that being able to mix these was the magic needed to get a really good result. I think the Zoo solution proves my intuition was right, but I was not even close to making it work. As I said, I still have much to learn.</p>
-
<p>Because of all this I wasn’t able to make submissions for almost the last 15 days of the competition, and I got my first working python solution two days before the deadline. The worst thing about the transition from R to python and the final rush is that I lost the correlation between my CV scores and the LB that I had when I was using R and LGBM, so I’m not very confident that my final submissions will do well in a possible shake-up. Also, my CV using python keras is 0.00007 worse than my local R keras CV. I don’t know if it is because of the randomness of folds or the NN, or because I didn’t translate some variables well from R to python. I probably made some errors and it wouldn't be strange if my submissions don’t work with the new data. It wouldn’t make me very sad as I’m very happy about the things I learned. </p>
-
<p>This is why I want to thank again all the people who shared their insights and wisdom and make this a great community. Medals, leaderboard positions, and competition in general are very good things to push you to try harder, but what matters, in the end, is the knowledge you acquire. </p>
|
NFL Big Data Bowl
|
2d-CNN for sparse heatmap images and MLP (1st stage 45th, 2nd stage 42nd place)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Summary:</h2>
<p>2d-CNN (Convolutional Neural Network) for sparse heatmap images and MLP for tabular data</p>
<h3>Input tensor for CNN</h3>
<p>Generated heatmap-like field images of 30 ((YardLine - 10) <= X < (YardLine + 20)) x 54 (0 <= Y < 54) yards grid
(rounded to integer).</p>
<p>After several experiments, the following combinations of 18 (= 3 x 3 x 2) channels worked best.</p>
3 player categories:
<ul>
<li>11 defense players</li>
<li>10 offense players excluding the rusher</li>
<li>The rusher (ball carrier)</li>
</ul>
3 variables:
<ul>
<li>A (acceleration)</li>
<li>S_X (speed in X axis)</li>
<li>S_Y (speed in Y-axis)</li>
</ul>
2 frames:
<p>Computed another snapshot of 1 second later by adding the speed.
(Also tried adding acceleration, but did not improve the performance.)</p>
<h2>CNN architecture</h2>
<ul>
<li>4 layers</li>
<li>Kept the X-direction until the dense layer and compressed only in the Y-direction (stride=[1, 2]), as the X-direction is related to the outcome variable (Yards).</li>
<li>Concatenated different kernel sizes as introduced in the Inception architecture</li>
<li>CELU activation (slightly faster training than ReLU)</li>
</ul>
<p>CNN part of the architecture was configured in YAML for PyTorch as follows.</p>
<p>(Please see <a href="https://github.com/Minyus/pipelinex">PipelineX</a> for the syntax)
<code>yaml
=: torch.nn.Sequential
_:
- {=: pipelinex.TensorSlice, end: 18}
- =: pipelinex.ModuleConcat
_:
- {=: pipelinex.TensorConv2d, in_channels: 18, out_channels: 10, kernel_size: [3, 3]}
- {=: pipelinex.TensorConv2d, in_channels: 18, out_channels: 10, kernel_size: [7, 7]}
- {=: pipelinex.TensorConv2d, in_channels: 18, out_channels: 10, kernel_size: [3, 9]}
- {=: torch.nn.CELU, alpha: 1.0}
- =: pipelinex.ModuleConcat
_:
- {=: pipelinex.TensorAvgPool2d, stride: [1, 2], kernel_size: [3, 3]}
- {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 30, out_channels: 10, kernel_size: [3, 3]}
- {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 30, out_channels: 10, kernel_size: [7, 7]}
- {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 30, out_channels: 10, kernel_size: [3, 9]}
- {=: torch.nn.CELU, alpha: 1.0}
- =: pipelinex.ModuleConcat
_:
- {=: pipelinex.TensorAvgPool2d, stride: [1, 2], kernel_size: [3, 3]}
- {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 60, out_channels: 20, kernel_size: [3, 3]}
- {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 60, out_channels: 20, kernel_size: [7, 7]}
- {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 60, out_channels: 20, kernel_size: [3, 9]}
# -&gt; [N, 120, 30, 14]
- {=: torch.nn.CELU, alpha: 1.0}
- =: pipelinex.ModuleConcat
_:
- =: torch.nn.Sequential
_:
- {=: torch.nn.AvgPool2d, stride: [1, 2], kernel_size: [3, 14]}
# -&gt; [N, 120, 28, 1]
- {=: pipelinex.TensorConv2d, in_channels: 120, out_channels: 20, kernel_size: [1, 1]}
- {=: pipelinex.TensorFlatten, _: }
- {=: torch.nn.CELU, _: }
- =: torch.nn.Sequential
_:
- {=: torch.nn.MaxPool2d, stride: [1, 2], kernel_size: [3, 14]}
# -&gt; [N, 120, 28, 1]
- {=: pipelinex.TensorConv2d, in_channels: 120, out_channels: 20, kernel_size: [1, 1]}
- {=: pipelinex.TensorFlatten, _: }
- {=: torch.nn.CELU, _: }
</code></p>
<h3>MLP (Multilayer Perceptrons)</h3>
<p>Add another channel to encode tabular features.</p>
Continuous features
<p>Max, Min, Mean, Stdev for axis (X, Y) and player categories (Defense, Offense)</p>
Categorical features (One-hot encoded)
<ul>
<li>Down (1, 2, 3, 4)</li>
<li>Flag of whether offense is home</li>
<li>OffenseFormation</li>
<li>DefendersInTheBoxCode</li>
</ul>
<h3>Computing the CDF output</h3>
<ol>
<li>Concatenate outputs of CNN and MLP.</li>
<li>Add base probabilities computed by counting the Yards in the whole train dataset in 2018 to let the
neural network learn the residual.</li>
<li>Forced predicted probabilities for the yards beyond the goal line to 0</li>
<li>Pad 0 to lower (-99 <= Yards < -10 yards) and upper (90 <= Yards < 100).</li>
<li>Divide by the sum through Yards to make the sum to 1 (SoftMax without exponential)</li>
<li>Compute cumulative sum through Yards</li>
</ol>
<h3>Loss function</h3>
<p>CRPS with yards clipped to -10 to 29 yards</p>
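<p>A minimal PyTorch sketch of this clipped CRPS (the bin layout is an assumption):</p>
<pre><code>import torch

def crps_loss(pred_cdf, yards, min_yard=-10, max_yard=29):
    # pred_cdf: (batch, 40) cumulative probabilities for yards -10..29
    yards = yards.float().clamp(min_yard, max_yard)
    bins = torch.arange(min_yard, max_yard + 1,
                        device=pred_cdf.device, dtype=pred_cdf.dtype)
    # Heaviside target: 1 for every bin at or past the true yardage
    target = (bins.unsqueeze(0) >= yards.unsqueeze(1)).float()
    return ((pred_cdf - target) ** 2).mean()
</code></pre>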
<h3>Other settings</h3>
<ul>
<li>Subtle augmentation
<ul><li>random shift in X-axis: 0.1 yards stdev of normal distribution</li>
<li>random shift in Y-axis: 1.0 yards stdev of normal distribution</li>
<li>(random flip in Y-axis decreased the performance thus not used.) </li></ul></li>
<li>Discarded all 2017 data which was very different from 2018 due to sensor issues and hard to adjust</li>
<li>Batch size: 32</li>
<li>Optimizer: Adam</li>
<li>Learning rate scheduler: LinearCyclicalScheduler (slightly better than CosineAnnealingScheduler)</li>
</ul>
<h3>What did not work:</h3>
<ul>
<li>Treat players as graph
<ul><li>Use the reciprocal of distance between players as edge weights, compute
Laplacian spectrum, count the number of 0 eigenvalues which equals to number of connected subgraphs
to use additional features</li></ul></li>
<li>Scaling
<ul><li>RankGauss</li>
<li>StandardScaler</li></ul></li>
</ul>
<h2>Dependencies available in Kaggle Kernel</h2>
<ul>
<li>torch==1.1.0</li>
<li>pytorch-ignite==0.2.0</li>
<li>pandas==0.25.1</li>
<li>numpy==1.16.4</li>
</ul>
<h2>Dependencies not available in Kaggle Kernel</h2>
<ul>
<li><a href="https://github.com/Minyus/pipelinex">pipelinex</a> (developed with/for this competition and open-sourced)</li>
</ul>
<h2>Dependencies only for experimentation (not used in Kaggle Kernel)</h2>
<ul>
<li>kedro</li>
<li>mlflow </li>
</ul>
<h2>The Python code and YAML config are available at:</h2>
<p><a href="https://github.com/Minyus/kaggle_nfl">https://github.com/Minyus/kaggle_nfl</a></p>
|
NFL Big Data Bowl
|
2nd Private LB/ 3rd Public LB Solution - Transformer
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I use 1 second, tried 0.5 and 2 but no improvement</p>
<h1><strong>Data Cleaning</strong></h1>
<p>As we all know data in 2017 is different from 2018, data cleaning is very important in this competition.</p>
<ul>
<li>Orientation: 90 degree rotation in 2017</li>
<li>A: I could not find a good way to standardize A, so I replaced A in 2017 with the mean; surprisingly this improved my LB by 0.0002</li>
<li>S: if we look at 2018 data, we can see that S is linearly related to Dis<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1904629%2F08b63866e2b7f4cc9fca6ff4fcc4c30c%2FPicture1.png?generation=1574907774244324&alt=media" alt=""><br>
while the 2017 data does not fit nearly as well,<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1904629%2F10b4500caa590ff46ba64efa8abb5a23%2FPicture2.png?generation=1574907822172309&alt=media" alt=""><br>
By fitting a linear regression on the 2018 data, the coefficient is 9.92279507, which is very close to 10, so I finally replaced S with 10 * Dis for both 2017 and 2018 data. This also gave a 0.0002 improvement.</li>
</ul>
<h1><strong>Features</strong></h1>
<p>total 36 features, ['IsRusher','IsRusherTeam','X','Y','Dir_X','Dir_Y',<br>
'Orientation_X','Orientation_Y','S','DistanceToBall',<br>
'BallDistanceX','BallDistanceY','BallAngleX','BallAngleY',<br>
'related_horizontal_v','related_vertical_v',<br>
'related_horizontal_A','related_vertical_A',<br>
'TeamDistance','EnermyTeamDistance',<br>
'TeamXstd','EnermyXstd',<br>
'EnermyYstd','TeamYstd',<br>
'DistanceToBallRank','DistanceToBallRank_AttTeam','DistanceToBallRank_DefTeam',<br>
'YardLine','NextX','NextY',<br>
'NextDistanceToBall',<br>
'BallNextAngleX','BallNextAngleY',<br>
'BallNextDistanceX','BallNextDistanceY','A']</p>
<h1><strong>Cross Validation</strong></h1>
<p>Always include 2017 data for training, 3 group folds by week for 2018 data, use only 2018 data for evaluation. In this way the CV score is close to public LB.</p>
<h1><strong>Model</strong></h1>
<p>Transformer (2-layer encoder + 2-layer decoder); a large number of attention heads is the key<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1904629%2Fe16225854085897e92b85fbdd704bc84%2FUntitled%20Diagram(2).png?generation=1574922346398274&alt=media" alt=""></p>
<h1><strong>Model ensemble</strong></h1>
<p>Optimizer: RAdam + lookahead<br>
Number of epoch: 30<br>
Batch Size: 32<br>
Weight Decay: 0.1<br>
Ensemble: snapshot ensemble (pick models at epoch 11, 13,…,17,29)<br>
Learning rate scheduler: 8e-4 for epoch 0-10,12,14,…,28. 4e-4 for epoch 11,13,…,29</p>
<p>Since we are only given 4 hours of CPU training, snapshot ensembling seems to be a perfect choice, as it won’t increase training time and is significantly better than a single model.<br>
In my final submission, I repeat the training (using all the data) for 11000s and 9000s (safe mode).</p>
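<p>A rough sketch of this snapshot schedule (the training loop and the exact snapshot epochs are assumptions; <code>run_one_epoch</code> is a placeholder for one pass over the data):</p>
<pre><code>import copy

def snapshot_train(model, optimizer, run_one_epoch, n_epochs=30,
                   high_lr=8e-4, low_lr=4e-4):
    # Odd epochs from 11 on use the low learning rate; a weight snapshot is kept
    # after each of them and the snapshots are averaged at prediction time.
    snapshots = []
    for epoch in range(n_epochs):
        lr = low_lr if (epoch >= 11 and epoch % 2 == 1) else high_lr
        for group in optimizer.param_groups:
            group["lr"] = lr
        run_one_epoch(model, optimizer)
        if epoch >= 11 and epoch % 2 == 1:
            snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots
</code></pre>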
<p>Thanks, you too!</p>
<p>It works very well for me, my LB score improved from 0.01294 to 0.01271 by adjusting S, further improved to 0.01253 after adjusting A</p>
<p>GroupKFold is only applied to the 2018 data, like this:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1904629%2F8770fa21c46be254e972952d9fc23429%2FUntitled%20Diagram(3).png?generation=1574928154736007&alt=media" alt=""><br>
Note that I use cross-validation for evaluation only; for the submission I always use all data, since early stopping is not required for a snapshot ensemble.</p>
<p>Snapshot ensemble, small Batch Size, dropout and weight decay help me to avoid overfitting.</p>
<p>0.012697</p>
<ol>
<li>No, a shared linear layer ( ( 36 + 1 ) * 128 ) is applied to 22 players</li>
<li>Sorting is not needed because transformer takes <strong>set</strong> as input</li>
<li>Simple average for two losses ( 0.5 * CE + 0.5 * CRPS )</li>
</ol>
<p>red boxes: offense players, blue boxes: defenders</p>
<p>The location of Ball is equal to location of ball carrier (i.e. rusher), so DistanceToBall means distance to rusher</p>
<p>Thanks for your questions!<br>
1/2. Yes, I'm treating the 22 players as 22 words, but without position embeddings.<br>
3. There is a difference between the encoder and decoder in my model: for the decoder the query is calculated using the rusher's information, while the key and value use all 22 players' information. The output of the decoder is a sequence of 1 (the rusher) instead of 22, so I don't need an extra pooling layer to aggregate a sequence of 22.</p>
<p>The main reason is I find this performs slightly better than attention pooling over 22 players. And decoder is trying to find the interactions between rusher and other players, which make sense to me.</p>
<p>I compute "ToBaToBallRank" in this way: df.groupby('PlayId')['DistanceToBall'].rank()
Next features means location after 1 second, for example NextX = S * Dir_X + X</p>
<p>Sure! here is my winning kernel: <a href="https://www.kaggle.com/wimwim/nfl-transformer-final-run">https://www.kaggle.com/wimwim/nfl-transformer-final-run</a></p>
|
NFL Big Data Bowl
|
30th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We achieved a better score by using this bollean feature: Seasoncat2017. It tells the model if data came from 2017 season or not.</p>
<p>Thanks!</p>
<h3>1 - Features</h3>
<p>First of all, we used the 62 features below. You can understand what they mean by their names:</p>
<ul>
<li>'Yards_to_touchdown',</li>
<li>'YardLine_ref',</li>
<li>'X_defense_spread',</li>
<li>'X_defense_std',</li>
<li>'Y_defense_spread',</li>
<li>'Y_defense_std',</li>
<li>'X_offense_spread',</li>
<li>'X_offense_std',</li>
<li>'X_offense_centroid',</li>
<li>'X_defense_centroid',</li>
<li>'DistToBallCarrier_Offense_Mean',</li>
<li>'DistToBallCarrier_Defense_Mean',</li>
<li>'DistToBallCarrier_Offense_std',</li>
<li>'DistToBallCarrier_Defense_std',</li>
<li>'DistToBallCarrier0.5_Offense_Mean',</li>
<li>'DistToBallCarrier0.5_Defense_Mean',</li>
<li>'DistToBallCarrier0.5_Offense_std',</li>
<li>'DistToBallCarrier0.5_Defense_std',</li>
<li>'Y_offense_spread',</li>
<li>'Y_offense_std',</li>
<li>'Y_offense_centroid',</li>
<li>'Y_defense_centroid',</li>
<li>'Seconds_since_quarter',</li>
<li>'Distance_to_down',</li>
<li>'DefendersInTheBox',</li>
<li>'Average_tackle_time',</li>
<li>'Min_tackle_time',</li>
<li>'Season_cat_2017',</li>
<li>'Season_cat_2019',</li>
<li>'A1',</li>
<li>'S1',</li>
<li>'X_estimated1',</li>
<li>'Y_estimated1',</li>
<li>'Dir_cos1',</li>
<li>'Dir_sin1',</li>
<li>'S1_vs_S12',</li>
<li>'A12',</li>
<li>'S12',</li>
<li>'X_ref12',</li>
<li>'Y_ref12',</li>
<li>'X_estimated12',</li>
<li>'Y_estimated12',</li>
<li>'DistToBallCarrier12',</li>
<li>'DistToBallCarrier0.5_estimated12',</li>
<li>'Dir_cos12',</li>
<li>'Dir_sin12',</li>
<li>'Min_tackle_time12',</li>
<li>'S_horizontal12',</li>
<li>'S_vertical12',</li>
<li>'A13',</li>
<li>'S13',</li>
<li>'X_ref13',</li>
<li>'Y_ref13',</li>
<li>'X_estimated13',</li>
<li>'Y_estimated13',</li>
<li>'DistToBallCarrier13',</li>
<li>'DistToBallCarrier0.5_estimated13',</li>
<li>'Dir_cos13',</li>
<li>'Dir_sin13',</li>
<li>'Min_tackle_time13',</li>
<li>'S_horizontal13',</li>
<li>'S_vertical13'</li>
</ul>
<p>A key point to improve our score was to sort the dataframe by using this line of code:
<code>
train = train.sort_values(by= ["PlayId", "IsOnOffense", "IsBallCarrier", "DistToBallCarrier0.7_estimated"], ascending=[1, 0, 0, 1])
</code>
The “DistToBallCarrier0.7_estimated” feature is the estimated distance of a player to the rusher after 0.7 seconds.</p>
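<p>A hedged sketch of how such a projected distance can be computed; only IsBallCarrier and PlayId appear as-is in the write-up, while the standardized coordinate and direction columns are assumed names:</p>
<pre><code>import numpy as np

def dist_to_rusher_after(df, t=0.7):
    # Project every player t seconds ahead along his current direction and
    # measure the distance to the projected rusher position.
    x_t = df["X_std"] + t * df["S"] * np.cos(df["Dir_rad"])
    y_t = df["Y_std"] + t * df["S"] * np.sin(df["Dir_rad"])
    rusher = df["IsBallCarrier"].astype(bool)
    rx = x_t.where(rusher).groupby(df["PlayId"]).transform("max")
    ry = y_t.where(rusher).groupby(df["PlayId"]).transform("max")
    return np.sqrt((x_t - rx) ** 2 + (y_t - ry) ** 2)
</code></pre>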
<h3>2 - Cross Validation</h3>
<p>We tried RepeatedKFold and GroupKFold with “GameId”, “Week” and “NflIdRusher” separating the groups. The RepeatedKFold strategy gave a better result:
<code>
rkf = RepeatedKFold(n_splits=3, n_repeats=4, random_state=1301)
</code></p>
<h3>3 - Training</h3>
<p>In order to train our model we used Keras with a very simple neural network:</p>
<pre><code>model = keras.models.Sequential([
    keras.layers.Dense(units=256, input_shape=[X.shape[1]]),
    keras.layers.BatchNormalization(),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.4),

    keras.layers.Dense(units=256),
    keras.layers.BatchNormalization(),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.5),

    keras.layers.Dense(units=256),
    keras.layers.BatchNormalization(),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.2),

    keras.layers.Dense(units=199, activation='sigmoid')
])
</code></pre>
<h3>4 - Post Process</h3>
<p>For post processing we used these three lines of code:
<code>
y_pred[:, :50] = 0
y_pred[:, -yards_covered:] = 1
y_pred[:, :100-yards_covered] = 0
</code>
Intuition:
- The first 50 predictions are 0 because no team lost a huge amount of yards in the training set.
- The last two lines of code mean that if the yardline is, let’s say, 75, the maximum number of yards the team can gain is 25, so every prediction to the right of +25 should be 1. The same applies to losing yards: the team can lose at most 75 yards, so every prediction to the left of -75 should be 0.</p>
<h3>5 - Score</h3>
<p>CV score: 0.01213
Public LB: 0.01297
Private LB: 0.012669</p>
<p><strong>Thanks for reading 👍 and many thanks to my teammates <a href="/jayjay75">@jayjay75</a>, <a href="/rafiko1">@rafiko1</a> and <a href="/bolkonsky">@bolkonsky</a>!</strong></p>
|
NFL Big Data Bowl
|
4th Place (Public LB) Efforts
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. Partially yes, the LR model used in the last submission predicts probabilities for the 82 classes (-12~69 yards) and the other yards were filled with 0 probability. </p>
<p>Thanks again! I’m also impressed by your solution. I look forward to your next challenge! </p>
<p>Not a bit! Thanks always👍🏻</p>
<p>I'm sorry for late publishing. 😜</p>
<p>First of all, I want to say thanks to the organizers for hosting such an exciting competition. To be honest, I knew little about the NFL and the rules of American football before this competition started, but now I really enjoy the matches every week;)
I also appreciate all the kagglers sharing info and joining discussions, especially mrkmakr, kenmatsu4, CPMP and charlie_s. Their kernels and insights greatly influenced our models.</p>
<p>Our team consists of owruby, David and me. David and I mainly took on the role of feature engineering and owruby developed the NN model. Team members were fixed just before the team merge deadline. At that point owruby and I were using Python and David was using R, so the main work of the final week was translating David's features from R script to Python code. It was tough work but produced a diversity of features. </p>
<p>I’ll share a brief summary and the essence of our approach, as it is not as simple and elegant as other high rankers’ solutions.</p>
<h2>Overview</h2>
<p>We made a lot of play-level and player-level features. I think the interesting point of our features is some variations of Voronoi features.
The main model is an MLP with several kinds of attention, and the sub-model for the ensemble is a logistic regression. </p>
<h2>Validation Schemes</h2>
<p>We used two kinds of validation. The first is an ordinary Group 5-fold CV on GameId and the second is a time-split validation predicting the plays of 2018/12. Neither of them was very consistent with the LB on its own, but when both scores clearly improved, the LB score nearly always improved as well. </p>
<h2>Models</h2>
<h3>MLP</h3>
<p>The MLP model was mainly developed by owruby, so I cannot write down the details here. It takes play-level and player-level features as separate inputs, which are then concatenated using several kinds of attention. The network is trained with three losses: a CRPS loss on a softmax layer, an MAE loss on a linear layer and a CRPS loss on a Gaussian layer, with weights of 1.0 : 0.01 : 0.1. The former two losses were from mrkmakr's kernel and the latter one was inspired by kenmatsu4's kernel. The Gaussian layer didn't improve the LB score much, but we think it contributed to the robustness of training. </p>
<h3>Logistic Regression</h3>
<p>The logistic regression model was trained on play-level features only, with year-weighted samples (2017 : 2018 : 2019 = 0.1 : 1.0 : 2.0). The LR model did not have such a good score (0.00030 behind the MLP), but the weighted ensemble (MLP : LR = 1.0 : 0.3) improved the LB a little (~0.00005). </p>
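<p>A small sketch of the year weighting and the final blend (function and variable names are illustrative, not from the original code):</p>
<pre><code>import numpy as np

YEAR_WEIGHT = {2017: 0.1, 2018: 1.0, 2019: 2.0}

def year_sample_weights(seasons):
    # Per-play weights passed to the logistic regression fit, so recent seasons dominate.
    return np.array([YEAR_WEIGHT[s] for s in seasons])

def blend_cdfs(cdf_mlp, cdf_lr, w_mlp=1.0, w_lr=0.3):
    # Weighted average of the two cumulative predictions (MLP : LR = 1.0 : 0.3),
    # re-monotonized for safety.
    blend = (w_mlp * cdf_mlp + w_lr * cdf_lr) / (w_mlp + w_lr)
    return np.maximum.accumulate(blend, axis=1)
</code></pre>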
<h2>Features</h2>
<p>We made various features representing players’ absolute and relative locations, S and A. As others did, we first standardized players’ coordinates so that the offense is moving to the right side, and calculated near-future locations and speeds (0.5 sec and 1.0 sec after hand-off). We then created a variety of features like distances and angles between players and Voronoi features, from the point of view of both the offensive and defensive sides.</p>
<h3>Voronoi Features</h3>
<p>We used several kinds of features from the Voronoi calculation. The most effective features are the x and y lengths of the ball carrier's and the other players' Voronoi regions at each time point.
Our Voronoi features were originally developed in R by David. The deldir package in R can easily calculate Voronoi regions with boundaries, so the areas of the regions and the coordinates of the edges are always finite values, and it was a natural idea to use them as features.
However, the Voronoi function in Scipy cannot calculate bounded Voronoi regions, so the areas and coordinates were often infinite, and I struggled to translate the Voronoi feature calculation. But thanks to the discussion between CPMP and charlie_s in his kernel, I learned I could compute virtually bounded Voronoi regions by mirroring the points across the four sides, and I deployed that.
The whole set of Voronoi features improved CRPS by about 0.00010.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F312139%2F497c9572f92c257410ccd7744569f245%2F2019-12-02%2016.54.28.png?generation=1575276864053926&alt=media" alt=""></p>
<h2>Training Augmentation and TTA</h2>
<p>We’ve noticed that we can do training augmentation and TTA by flipping coordinates and they improve model performance. But we only used TTA because execution time was limited and using only TTA was almost as effective as doing both.</p>
<h2>Final Submission and Execution Time</h2>
<p>We selected a 4-seed-averaged MLP model and a weighted average of the 4-seed-averaged MLP and a 5-fold-averaged logistic regression as our final submissions. The execution time of the former is ~9000 sec with about 4000 sec of training, and the latter is about 10500 sec with ~5000 sec of training. I implemented “try except” everywhere in my code, and our kernel submits naive average values in case of an error at model prediction or a timeout. </p>
<p>Thanks. I expect <a href="/owruby">@owruby</a> to write up detailed explanation about NN, too. 😆</p>
|
NFL Big Data Bowl
|
7th place (public LB) solution : NN + Survival LGBM
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, a big thanks to the organizers for this very interesting competition. The task is challenging and the player tracking data is fascinating.
Here is a summary of our final solution. It is a mix of automatic feature learning and manual feature engineering.
The final solution is a blend of 1 deep learning model and 1 LGBM. The DL model and write-up are mostly my part, while Sebastien worked on the survival modelling with LGBM. </p>
<p>Thanks <a href="/sebastienconort">@sebastienconort</a> it was a pleasure to work together on this competition.</p>
<h3>Validation</h3>
<p>We used the last 2000 play_ids as validation. At the beginning it had a stable gap of 0.0001 with the LB, but in the end we started to overfit the validation. In hindsight, we may have treated this too lightly, and some other teams have shared better validation schemes.</p>
<h3>Survival modelling with LGBM for learning “Yard” distribution - by <a href="/sebastienconort">@sebastienconort</a></h3>
<p>In this part, unlike many competitors, we did not approach the problem as a multi-class problem, but as a regression problem, with a survival modelling approach on the number of Yards reached. The “time“ variable is the number of “yards run so far” and the “death” event is the rusher being stopped.
If you are interested in more details, see the note at the end of this post (Notes on Survival Modelling for Yard prediction).
Depending on time and feedback from readers of this post, we might publish a tutorial notebook describing this approach, which is very useful for churn prediction or insurance pricing 😄
Without the NN embeddings described below, and with descriptive variables on defense and offense players ordered by their distance to the rusher, this LGBM approach reached 0.0131 on the public LB. </p>
<h3>NN Architecture</h3>
<p>The LGBM takes as inputs some intermediate embedding of the NN with other variables. </p>
<p>The deep learning architecture has different parts:
* An MLP that generates an interaction embedding for each pair (player_i, player_j).
* These interaction embeddings are summed for each player_i and then concatenated with the other inputs for player_i. This is then fed into another MLP to create the player embedding.
* The sum of all player embeddings is concatenated with play-level features before being fed to the final classifier.</p>
<p>Globally, the interaction part is like a Graph NN with 1 round of message passing.
Separating player features, context features, and the field-limit features (YardLine and DistancetoYardline) and treating them separately makes it easier to regularize the network.</p>
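<p>A rough PyTorch-style sketch of one such interaction round (layer sizes, names and the exact wiring are assumptions, not the actual code):</p>
<pre><code>import torch
import torch.nn as nn

class InteractionBlock(nn.Module):
    # One round of "message passing": embed every (player_i, player_j) pair,
    # sum the messages per player_i, then mix them with player_i's own features.
    def __init__(self, n_feat, pair_feat, hidden=64):
        super().__init__()
        self.pair_mlp = nn.Sequential(nn.Linear(pair_feat, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.player_mlp = nn.Sequential(nn.Linear(n_feat + hidden, hidden), nn.ReLU())

    def forward(self, player_x, pair_x):
        # player_x: (batch, 22, n_feat); pair_x: (batch, 22, 22, pair_feat)
        msg = self.pair_mlp(pair_x).sum(dim=2)                       # sum over player_j
        return self.player_mlp(torch.cat([player_x, msg], dim=-1))   # (batch, 22, hidden)
</code></pre>
<p>The sum of the resulting player embeddings would then be concatenated with the play-level features before the final classifier, as described above.</p>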
<h3>Features</h3>
<ul>
<li>Most values come from player positions and speeds; we found that play-level variables helped a little. </li>
<li>We projected positions 0.5s, 1s and 1.5s ahead using the speed, and these were the best features for LGBM. However, we did not see an improvement when doing this with the NN.</li>
<li>We did not see an improvement when adding Voronoi features on top of distances.</li>
<li>For the NN, we found a big gain from including in the interaction features the relative position of player_j projected on the direction from player_i to the rusher (see the sketch after this list). </li>
</ul>
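<p>A minimal sketch of that projection feature (argument names are illustrative; each argument is an (x, y) array):</p>
<pre><code>import numpy as np

def projection_on_rusher_direction(p_i, p_j, rusher):
    # Signed length of (p_j - p_i) projected on the unit vector that points
    # from player_i towards the rusher.
    to_rusher = rusher - p_i
    unit = to_rusher / (np.linalg.norm(to_rusher, axis=-1, keepdims=True) + 1e-8)
    return ((p_j - p_i) * unit).sum(axis=-1)
</code></pre>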
<h3>Deployment</h3>
<p>One of the challenges of the competition is the deployment. We modified our code to be quite conservative in the end. We also made the choice to paste the weights into the kernels and fine-tune the model again from those weights.
The LGBM is always retrained since it is fast (~1 min).
Finally, for the 2 submissions, we chose 1 with NN fine-tuning and 1 without.</p>
<h3>Tricks that helped</h3>
<ul>
<li>For deep learning, we fit a blend of the 10 best parameter sets found by random search, then distill the output into 1 DL model. The weights of this final model are pasted in the solution.</li>
<li>We made a mistake at some point by doing batchnorm on the inputs, which means measures on the X and Y axes are scaled differently. This made learning very unstable. Removing this and scaling inputs manually helped.</li>
<li>Some tricks that helped a little bit each time:
<ul><li>augment data by flipping Y-axis</li>
<li>add noise to target in training</li>
<li>add a l1 penalization in the output of consecutive bins of the softmax (fused lasso)</li>
<li>Rotate Orientation by 90° in 2017</li></ul></li>
</ul>
<h3>What did not helped</h3>
<ul>
<li>We tried Transformer many times but it did not do better than our architecture. Maybe we should try again after reading other solutions :)</li>
<li>We tried adding more layers of interactions between players but it did not seem to help either.</li>
<li>We tested different activations like ELU and SELU but ReLU did much better.</li>
<li>Normalizing S and A in 2017 did not work for us. We did not think of treating these as missing values like <a href="https://www.kaggle.com/wimwim">https://www.kaggle.com/wimwim</a>. Huge regret and big kudos to him for this nice data prep work.</li>
<li>We tried to predict the real distance travelled by the rusher instead, since it is more natural in our point of view. However, to our surprise it was worse; our hypothesis is that this is because the defense is organized around the yardline more than around the runner. But we are interested to hear experts' opinions about this.</li>
</ul>
<h3>Notes on Survival Modelling for Yard prediction</h3>
<p>Survival modelling consists in learning the distribution of a “death” event while dealing with potential censoring or truncation in the observation of that event: either the Survival Function (= 1 – Cumulative Probability Distribution) is learned, or the Cumulative Hazard Function, or the Hazard Function. Knowing one of these functions lets you recover all the others, as there is a 1-to-1 mapping between them. In this challenge, there was neither truncation nor censoring, but we were eager to test this survival approach anyway.</p>
<p>A nice generic approach to learning the Hazard Function is to assume it is piecewise constant (this approach is referred to in the literature as the Piece-Wise Exponential Model, and is implemented for instance for linear models in the Lifelines python library). Here we assumed the constant step of our piecewise constant function was 1 yard. This approach enabled us to use an LGBM regressor with a Poisson objective. Indeed, in the case of the Piece-Wise Exponential Model, the negative log-likelihood of the survival problem is equal (up to a constant term that does not matter for the loss minimization) to the negative log-likelihood of a Poisson regression, the so-called “Poisson trick” explained in chapter 7.4 of these Princeton class notes (<a href="https://data.princeton.edu/wws509/notes/c7.pdf">https://data.princeton.edu/wws509/notes/c7.pdf</a>).</p>
<p>To apply this approach, the dataset first has to be extended: each line is duplicated into multiple lines from -99 to “Yard”, which is the death event. Then two new columns are added to the dataset: a descriptive variable “yards run so far” taking values from -99 to “Yard”, and a new target (let’s name it the “Stop” variable) equal to 0 everywhere except on lines where “yards run so far” == “Yard” (the death event). It is very important to include the “yards run so far” variable in the regression so the LGBM can learn the shape of the Hazard Function.</p>
<p>The disadvantage of this approach is that it increases RAM consumption because of the data duplication, but it has the advantage of having only one regression target. As the minimum observed value for “Yard” in the train set is -14, we dropped all lines with “yards run so far” between -99 and -15, and manually extrapolated our Hazard Function with a 0 value in post-processing.</p>
<p>Once our LGBM is trained, it is able to predict the Hazard Function for different “yards run so far” values, which gives the probability of the rusher being stopped given that he has not been stopped so far.
To compute the Cumulative Distribution Function, we predict the Hazard Function for values of “yards run so far” from -99 to 99, and we postprocess the resulting numpy array “HF” with the following formula: 1 – np.exp(-np.cumsum(HF)).</p>
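<p>A compact sketch of the dataset expansion and the hazard-to-CDF post-processing described above (column names are illustrative, and the iterrows loop is written for clarity rather than speed):</p>
<pre><code>import numpy as np
import pandas as pd

def expand_for_survival(df, min_yard=-14):
    # One row per (play, yards run so far) from min_yard up to the observed Yards;
    # "Stop" is 1 only on the last row (the "death" event). An LGBM regressor with
    # a Poisson objective is then fitted on "Stop" with "yards_so_far" as a feature.
    rows = []
    for _, play in df.iterrows():
        yards_so_far = np.arange(min_yard, play["Yards"] + 1)
        rows.append(pd.DataFrame({"PlayId": play["PlayId"],
                                  "yards_so_far": yards_so_far,
                                  "Stop": (yards_so_far == play["Yards"]).astype(int)}))
    return pd.concat(rows, ignore_index=True)

def hazard_to_cdf(hazard):
    # hazard: predicted piecewise-constant Hazard Function for yards -99..99
    return 1.0 - np.exp(-np.cumsum(hazard))
</code></pre>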
|
NFL Big Data Bowl
|
Graph Transformer With Minimal FE, with code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I have improved my code in several ways, CV improved by about 0.00025. The code can be seen at <a href="https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998">https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998</a></p>
<p>I have updated the main post accordingly.</p>
<p>What works well is attention. Here it is used to model how much each player influences the fate of rusher. I think this attention mechanism works well when the input is a set (ordered or not) of entities. The transformer was first used in natural language translation, with very clear entities: words. Here we have very clear entities, players. And in the Molecular competition (CHAMPS) we had atoms as entities.</p>
<p>Yes, that's how I understand it as well.</p>
<p><a href="/philippsinger">@philippsinger</a> removing A moves LB down to 0.01265. Thanks fior making me try.</p>
<p>Moving further will require significant changes in model or additional features. All what i did so far, except for A and S cleaning, was in my todo list, but anything I would do now would leverage what others, esp your team, has shared, and would be totally unfair. Without A and S cleaning I guess my LB would be around 0.01280, which is what we submitted anyway. </p>
<p>ok, will do. </p>
<p>I ran with another little improvement, i.e. use speed relative to rusher instead of absolute speed. 12 fold LB is 0.01267, i.e. around where most top teams landed. Going further requires either a more powerful model, or more feature engineering. I'll stop here...</p>
<p>It was better to use a distance attention and a generic self attention than 2 generic self attention.</p>
<p>I'm only running it with gpu. With a 1080 Ti it runs rather fast, around one hour. On a Kaggle kernel with gpu I see the time per epoch being more than twice as slow. With cpu it is roughly 30 mins per fold; 12 folds would exceed the time limit for submission. However, 12 folds only bring a 0.00003 improvement over the last 2 folds.</p>
<p>Hi, good questions. Let me try to answer them in order.</p>
<ol>
<li><p>The distance is like the attention is transformer. In the transformer, then attention is scaled via a softmax, i.e. its sum by row is 1. I applied the same to distance, i.e. scaled it so that sum by row is 1.</p></li>
<li><p>I wanted a function of inverse distance. I wanted that influence of player i over player j is of the form: inluence(i, j) = f(dist(i, j)). A larger kernel would serve a different purpose, and would make the model dependent on a player ordering. As usual with machine learning, the only way to know if it would help is to try it.</p></li>
<li><p>You are correct, there is an embedding per player after each bloc on the left column in the picture. The 1x1 kernel convolution is what the transformer does. Citing the paper:
> In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
>
> While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. </p></li>
<li><p>The WO matrix after concatenating the heads is the Dense layer after concatenating the heads in <code>multi_head_self_attention()</code> and <code>multi_head_outer_attention()</code> functions.</p></li>
<li><p>Because this is the multiplication by the WO matrix. Now that I think of it, I should put a null bias in that dense layer.</p></li>
<li><p>This is the usual skip connection trick. It was introduced in resnet CNN, and since then has proved to be useful in many NN architectures. A great advantage is that it solves the vanishing gradient problem, hence allows for the use of activation functions like sigmoid or tanh in deep networks.</p></li>
<li><p>I had the same issue that you point out. Using all the output was worse than using get_rusher. I improved the model during the competition by concatenating get_rusher with a global max pooling of all the output, but forgot to include it in the submitted version... My idea was to implement the decoder instead, and I was right as it proved to be better.</p></li>
</ol>
<p>Don't hesitate to ask more questions if you are still unclear on some parts.</p>
<p>The 2 fold commit has LB of 0.01279. At last we break our 0.01280 wall... For 12 folds I need to run it on my local machine, or maybe on a gpu enabled kernel. Let me try.</p>
<p>Will do ASAP. Now that we have the target I can also get the LB score with 12 folds.</p>
<p>Yes, you stopped Kaggling at the time I started, it is good to see you again!</p>
<blockquote>
<p>how about just sorting them?</p>
</blockquote>
<p>Interesting, is that what you did?</p>
<p>I had a hunch since the start of competition that the use of transformer inspired architectures would do well here as they did in the recent Molecular properties. Indeed, here we have a 2D version of the molecular problem in a way. Not surprisingly, many teams thought of it and use that analogy better than I did, based on public LB score. </p>
<p>Anyway, here is my version. Please indulge me, this is the first time I create a complex (well, complex for me) NN). I did what I could in 10 days. I'm sure it can be improved in many ways. </p>
<p><strong>The transformer</strong></p>
<p>I decided to implement it from scratch using Keras because I wanted to learn the transformer architecture instead of using someone else's implementation. For those not familiar with the transformer I recommend these two tutorials, they helped me a lot:</p>
<p><a href="http://jalammar.github.io/illustrated-transformer/">http://jalammar.github.io/illustrated-transformer/</a></p>
<p><a href="https://nlp.seas.harvard.edu/2018/04/03/attention.html">https://nlp.seas.harvard.edu/2018/04/03/attention.html</a></p>
<p>My thinking was heavily influenced by top team models in molecular competition, especially #6 solution for its simplicity:</p>
<p><a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106407">https://www.kaggle.com/c/champs-scalar-coupling/discussion/106407</a></p>
<p>This model had a small number of layers, which was very interesting given the limit on cpu time here. I also look at his author git repo to sort out doubts like: is layer norm performed before or after dropout?</p>
<p><a href="https://github.com/robinniesert/kaggle-champs">https://github.com/robinniesert/kaggle-champs</a></p>
<p><strong>Data cleaning</strong></p>
<p>I normalized data like in <a href="https://www.kaggle.com/cpmpml/initial-wrangling-voronoi-areas-in-python">my public notebook</a>. I wish I had used 10*Dis rather than S, and also replaced A in 2017 by a constant. Adding these after deadline improved my CV by almost 0.00015. I wish I could see the effect on LB.</p>
<p>In addition to flipping along the X axis as in my notebook, I also flipped along the Y axis if need be so that the rusher always moves towards the top right when displaying plays.</p>
<p><strong>Features</strong></p>
<p>My model uses almost no feature engineering. It uses all in all:</p>
<ul>
<li>Player features: <code>'X', 'Y', 'X_dir', 'Y_dir', 'X_S', 'Y_S', 'S', 'A', 'IsRusher', 'IsOnOffense'</code>, X and Y are relative to rusher position.</li>
<li>Distance matrix: square of inverse distance matrix. </li>
<li>Play features: rusher position and yardline.</li>
</ul>
<p>That's it.</p>
<p><strong>Architecture</strong></p>
<p>Similar to the Molecular solution I started from, I start with embedding players features into a latent vector via a dense layer. I used 64 long embeddings.</p>
<p>Then I use a distance attention bloc. To update a given player embedding I use a weighted sum of the other players embeddings. The weight depends on the distance. I tried various ways, and a normalized squared inverse was best. I was about to try other transforms when I decided to have them learnt by the model, via a 1x1 convolution bloc on the data. </p>
<p>All my convolution blocs have a skip connection and 2 convolution layers with ReLU activation. As in the transformer I used a Glorot uniform weight initialization everywhere I thought of it. </p>
<p>The distance attention is added to the skip connection, then normalized with a custom LayerNorm, followed by dropout. I use dropout 0.25 everywhere.</p>
<p>Next bloc of layers is a vanilla transformer multi head attention. Well, that's what I tried to implement, and any difference with the transformer is a mistake and unintended. If someone has the courage to read my code and provide feedback then I'd be extremely grateful! I used 4 attention heads, and length of 16 for queries, keys, and values.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F898d580b95f4591dedd56449075264f4%2Fgnn.png?generation=1574952801881228&alt=media" alt=""></p>
<p><strong>Isotonic regression</strong></p>
<p>The output of multi head attention restricted to the rusher embedding is concatenated with the embeddings of play features. Then this is fed into two output layers. The first one is a linear layer with 199 output units followed by a sigmoid activation. The output is the 199 probabilities. </p>
<p>Issue with this is that there is no reason why these probabilities are monotonically increasing. I didn't like the fix used in many public kernels, which was to replace each probability by the max of all probabilities until it. I didn't like it because if you did the transformation the other way round, starting from the right and taking the min, then you did not get the same result. </p>
<p>I tried to output a softmax and then compute cumsum, but this was slower. I ended up running an <a href="https://scikit-learn.org/stable/auto_examples/plot_isotonic_regression.html">isotonic regression</a> to make the output monotonically increasing. Isotonic regression improved CV and LB a bit over taking the max from the start.</p>
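<p>A minimal sketch of that isotonic step with scikit-learn, applied row by row to the 199 raw outputs (how it is wired into the rest of the pipeline is an assumption):</p>
<pre><code>import numpy as np
from sklearn.isotonic import IsotonicRegression

def monotonize(cdf_row):
    # Least-squares fit of the best monotonically non-decreasing sequence
    # to the raw sigmoid outputs, clipped to [0, 1].
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
    return iso.fit_transform(np.arange(len(cdf_row)), cdf_row)
</code></pre>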
<p><strong>Logistic output</strong></p>
<p>I was still unhappy with output. We are asked to output a cumulated distribution function, hence we should base it on a distribution. I looked at various distributions, but none were perfect fit. I ended up doing some EDA. Start with the cumulated histogram of all yards values in train, it looks like a sigmoid skewed on the right:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fb9fc1af4bb648c98c14767bf407d070d%2Fbaseline.png?generation=1574953256245004&alt=media" alt=""></p>
<p>This made me think of taking the logits of this cdf. It yields:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F17e09c2ba8d3ee15fa906b7082da270d%2Flogits.png?generation=1574953330643322&alt=media" alt=""></p>
<p>This plot is very interesting. We see that logits increase linearly up to near 0, then there is a smooth transition, and logits increase again linearly past 0, albeit with a much smaller slope. This can be approximated quite well with two half lines as show below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fe356bb45fd5660615a6ae0d9c9518b30%2Fapprox.png?generation=1574953954703110&alt=media" alt=""></p>
<p>All I needed were the slopes of the two lines and the x and y of where they meet. Said differently, I could recreate the output from 4 numbers. I implemented a custom layer that outputs the two straight lines' values, followed by a sigmoid activation. I later settled on 3 numbers only: the two slopes, and the x where they meet. This was as good if not better.</p>
<p>This new output was better than the simple one, but using both outputs was better. Probably because optimizing two different outputs adds some regularization.</p>
<p><strong>Training</strong></p>
<p>I used 12 fold time based (unshuffled) CV with validation folds drawn from 2018 only. I also down-weighted 2017 samples by 0.5. This made CV way more in line with LB. Down-weighting 2017 may have helped a bit given I had not standardized S and A correctly. I used my local machine with 2 1080 Ti for developing the model. I had been burned too much by Kaggle kernels being reset for no reason. For final submission, I uploaded my notebook, and only used the last 2 folds. </p>
<p>I used Adam optimizer with a learning rate decay on plateaus, and early stopping. From what I read, a predefined linear decay was best, but I hadn't time to tune it.</p>
<p><strong>Data augmentation</strong></p>
<p>On the last day of the competition, my teammate Reza made me think of using predicted future positions of players. For a given play I created 2 copies, after 0.3 and 0.6 seconds, assuming straight trajectories and constant acceleration. I'm still wondering if acceleration is always in the same direction as speed. Indeed, it could be that some players decelerate... Of course, play copies were put in the same fold to avoid overfitting.</p>
<p>For final prediction I also created 2 copies of each test play, then averaged the predictions of the 3 plays. This yields almost 0.00010 improvement on CV.</p>
<p>Data augmentation led to a LB of 0.01299 less than one hour before deadline...</p>
<p><strong>Lessons</strong></p>
<p>First, I should have followed my hunch much earlier. I guess I was a bit intimidated by the task. Second, I wish I had cleaned data more, esp S and A as shared by many top teams. Third, I should not use Kaggle kernels for model development, they are too unreliable when running times exceed one hour. They get reset even if they are attended and used interactively.</p>
<p>One thing I don't regret is to have teamed with Reza. He helped me understanding NFL Football. Also his implementation of influence and pitch control was very enlightening. I want to use a similar idea (Gaussian mixture) in a layer to preprocess distances before distance attention. Last, but not least, his models are better than mine :D</p>
<p>All in all, even if we probably will miss gold I am quite happy because I learned a lot. And now I can follow writeups of people who also used the transformer architecture!</p>
<p>The code can be seen at <a href="https://www.kaggle.com/cpmpml/keras-80?scriptVersionId=24171638">https://www.kaggle.com/cpmpml/keras-80?scriptVersionId=24171638</a></p>
<p>Edit: a much better model can be seen in the latest version of the notebook: <a href="https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998">https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998</a></p>
<p>I have improved my code in several ways, including:
- implemented encoder/decoder attention of the original transformer architecture
- added squeeze and excitation to the convolutions
- S and A cleaning</p>
<p>My CV improved by about 0.00025, not enough to make it a top model, but still interesting. Data cleaning brings about 0.00010. </p>
<p>I think the main interest is the transformer implementation done with Keras functional api. It yields a code much more compact that what we can find online. Here is the updated NN architecture:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fe50d024a978c324b67182a554a2b66bd%2Ftransformer.png?generation=1575305725966574&alt=media" alt=""></p>
<p>The decoder part is simpler than the transformer one as I didn't include convolution bloc nor a self attention bloc, because the decoder input is so simple (only 3 features). The code can be seen at <a href="https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998">https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998</a></p>
<p>It had to be inversely correlated with distance: influence of close players is higher than influence from players far away. I started with inverse distance, then played a bit. inverse squared worked best for me.</p>
<p>OK, you are using self attention on play. As I wrote, this is not useful here because play has only 3 features. The original transformer had both self attention (your code) and encoder/decoder attention (my code) on the decoder part.</p>
<p>You swapped x_inner and x_outer, right? This is not the transformer, but why not? Have you checked if it is better?</p>
<p>This logic is the logic of the transformer architecture ;) I didn't innovate here, </p>
<p>Thanks! </p>
<p>Yes, it was used in some of the top solutions. #6 used gaussians based on distance in one attention layer. Solution #2 combined distance and transformer attention. I tried their combination but it wasn't as good as what I used on NfL data.</p>
|
NFL Big Data Bowl
|
My solution was public from the beginning
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. Edited.</p>
<p>Sorry for the click bait title but it is 90% true. It is basically combination of these 2 kernels:
<a href="https://www.kaggle.com/divrikwicky/lightweight-version-of-2-65-custom-nn">https://www.kaggle.com/divrikwicky/lightweight-version-of-2-65-custom-nn</a>
<a href="https://www.kaggle.com/divrikwicky/nfl-lofo-importance">https://www.kaggle.com/divrikwicky/nfl-lofo-importance</a></p>
<p>First one is from the chemistry competition. I have used almost the same model in this competition. Conv1D with Global Pooling on player vs rusher interaction features instead of atom interaction features. Only difference is that this time I had 2 of them: rusher vs teammates sub-model and rusher vs opponents sub-model.</p>
<p>The second one is the Leave One Feature Out method with my open source implementation. I used LOFO to understand the overall importance of the raw and generated features. I also ran LOFO separately for different seasons. It made it very quick for me to decide which features have stable importance over time. I first run LOFO on target MAE with lightgbm and then remove/replace the features from my NN. This way I benefit both from the speed of lightgbm and from having double validation by two models, even though I only use the NN model for my submission at the end.</p>
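<p>To make the idea concrete, here is a hand-rolled sketch of the leave-one-feature-out loop with LightGBM. It is not the lofo-importance API itself, just the underlying idea:</p>
<pre><code>import lightgbm as lgb
from sklearn.model_selection import cross_val_score

def lofo_importance(df, features, target, cv):
    # Importance of a feature = CV score drop when that single feature is removed
    # from the full feature set (positive = removing it hurts).
    model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
    def score(cols):
        return cross_val_score(model, df[cols], df[target], cv=cv,
                               scoring="neg_mean_absolute_error").mean()
    base = score(features)
    return {f: base - score([c for c in features if c != f]) for f in features}
</code></pre>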
<p>Since I am the author of the repo ( <a href="https://github.com/aerdem4/lofo-importance">https://github.com/aerdem4/lofo-importance</a> ), I am looking for feedback from people who have tried it in this competition. Please let me know any of your positive or negative stories with lofo-importance. Any suggestion is welcome.</p>
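<p>For anyone curious, a minimal usage sketch along the lines of the repository README linked above; the file and column names here are hypothetical, and argument names may differ between lofo-importance versions:</p>
<p><code>
# Sketch based on the lofo-importance README; argument names may differ between versions.
import pandas as pd
from lofo import LOFOImportance, Dataset, plot_importance
from sklearn.model_selection import KFold

train_df = pd.read_csv("train_features.csv")           # hypothetical file with a "Yards" column
features = ["dist_to_rusher_min", "defenders_in_box"]   # hypothetical feature names

dataset = Dataset(df=train_df, target="Yards", features=features)
lofo = LOFOImportance(
    dataset,
    cv=KFold(n_splits=4, shuffle=True, random_state=0),
    scoring="neg_mean_absolute_error",  # MAE on the raw yards target, as described above
)
importance_df = lofo.get_importance()   # mean/std importance per left-out feature
plot_importance(importance_df, figsize=(12, 6))
</code></p>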
<p>Makes sense. This is also useful when there are two highly correlated but useful features. I will work on it and make it parametric.</p>
<p><a href="/robikscube">@robikscube</a> happy to hear that it was useful for you!</p>
<p>What kind of error did you get?</p>
<p>This is a good idea. This can be passed as an argument to the plotting function.</p>
<p>Nice to hear that you liked it!</p>
<p>Thanks for your valuable feedback :) When there are two highly correlated but useful features, LOFO will underestimate their importance. Therefore, I don't recommend that anyone remove batches of the least important features using LOFO. Features with negative importance can be removed in batches safely, but the ones with positive, close-to-zero importance need to be removed one by one by re-running LOFO. In addition, I plan to make LOFO more parametric by making it Leave N Features Out. This should handle such cases.</p>
|
NFL Big Data Bowl
|
Private 13th place journey (pocket side)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks PSI :) <br>
I tend to think that there has to be a story to tell, if you really committed yourself to a competition ;)</p>
<p>It was a pleasure to work with you too ;)
Nice results in the end~~~</p>
<p>This competition was one of the best competition that I can remember. <br>
Diverse approaches, scores never stagnating, real world setup, no leaks and interesting data :) <br>
So I want to thank everyone involved, including the hosts, kaggle admins, the competitors and of course, my teammates. </p>
<p>Since I got an unexpected gold medal, I am going to write about my journey in this competition. <br>
By the way, our final model was a single model, best described by TripleLift (<a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119331">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119331</a>). </p>
<p>I started this competition early, with my hopes really high. <br>
It had been a while since I gave Kaggle a serious shot, and I was prepared to commit myself to this competition. <br>
The competition metric is basically an RMSE for each yard, so I built a LightGBM model for each of the 199 yards. <br>
Of course, this makes the runtime really long, but since only yards -5 to 15+ really matter for the score, <br>
I predicted yards -5 to 15 and filled the other yards with the average. <br>
This gave me first place at that time, and I was fooled into believing this would be a winning approach (so silly looking back, haha) </p>
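<p>The write-up doesn't include the code for this, but one way to read the "one model per yard" setup is a binary LightGBM model per CDF cell in the -5 to 15 window, with the remaining cells filled with the training average. A rough sketch under that assumption (X_train and train_yards are placeholders):</p>
<p><code>
# Hedged sketch of the "one model per yard" idea, not the author's code.
import numpy as np
from lightgbm import LGBMClassifier

# X_train: feature matrix per play, train_yards: array of gained yards (placeholders)
yard_range = range(-5, 16)                     # only these cells are modelled individually
models = {}
for y in yard_range:
    target = (train_yards <= y).astype(int)    # CDF cell: P(Yards <= y)
    models[y] = LGBMClassifier(n_estimators=300).fit(X_train, target)

def predict_cdf(X):
    cdf = np.zeros((len(X), 199))
    for j in range(199):
        y = j - 99                              # column j corresponds to yard y
        if y in models:
            cdf[:, j] = models[y].predict_proba(X)[:, 1]
        else:
            cdf[:, j] = np.mean(train_yards <= y)   # fill the rest with the train average
    return np.clip(np.maximum.accumulate(cdf, axis=1), 0, 1)  # enforce a monotone CDF
</code></p>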
<p>After a while, I started dropping to the silver zone. <br>
Because making new features consistently improved my score, I decided to merge with my teammate Emin, who seemed to have a lot of good features. <br>
With his NN and features, we got close to the gold zone (but not close enough). <br>
Since we were so far away from the top, I was thinking that there must be some kind of magic. <br>
I searched really hard, noticed the 2017/2018 difference of S and A, tried to make "the magic" happen, but couldn't make it. <br>
After a while, I noticed that adjusting the S and A to 2017 improved my LightGBM model, so I stuck with that adjustment. <br>
In hindsight, this was really bad of me, because I should have tried different approaches for the NN. <br>
This was extremely important, because our post-competition experiments show that adjusting to 2018 like Patrick did <br>
(<a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314</a>) would have given us roughly the same score (2nd place). </p>
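<p>The adjustment itself isn't spelled out here; a common way to do it is a per-season mean/std rescaling, for example mapping 2018 S and A values onto the 2017 distribution. This is my assumption of what such an adjustment could look like, with placeholder column names:</p>
<p><code>
# Hypothetical mean/std matching of S and A across seasons (my assumption, not the author's code).
def match_season(df, col, src_season, ref_season):
    src = df.loc[df["Season"] == src_season, col]
    ref = df.loc[df["Season"] == ref_season, col]
    adjusted = (src - src.mean()) / src.std() * ref.std() + ref.mean()
    df.loc[df["Season"] == src_season, col] = adjusted
    return df

# train is a placeholder DataFrame with Season, S and A columns
for col in ["S", "A"]:
    train = match_season(train, col, src_season=2018, ref_season=2017)
</code></p>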
<p>Anyways, I was stuck in silver, and I made a last minute merge with TripleLift and yuhao. <br>
The merge went way better than I expected. Their model was a totally different one from ours, with basically no features and a simple embedding NN. <br>
I thought that this could have made all the difference, so I put in a lot of effort to improve the NN. <br>
The most positive change I made was adding the "T+n" feature, which is where the players are, n seconds after the start. <br>
I also removed a lot of garbage features based on my experience of GBDT in this competition which might have improved the model. <br>
The last part was adding a multi-output. We were doing a softmax classification, so I added a regression part as well. <br>
This change was good in certain folds, and especially on the LB, so we used it, but I'm not so sure how positive it actually was. </p>
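<p>A minimal Keras sketch of what adding a regression head next to the softmax head can look like; the shared-body size, layer sizes, and loss weights below are placeholders, not the team's actual values:</p>
<p><code>
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(256,))                 # placeholder: whatever the shared body outputs
x = layers.Dense(128, activation="relu")(inp)

cdf_head = layers.Dense(199, activation="softmax", name="yards_softmax")(x)  # classification head
reg_head = layers.Dense(1, name="yards_reg")(x)                              # plain regression head

model = Model(inp, [cdf_head, reg_head])
model.compile(
    optimizer="adam",
    loss={"yards_softmax": "categorical_crossentropy", "yards_reg": "mae"},
    loss_weights={"yards_softmax": 1.0, "yards_reg": 0.1},  # made-up weighting
)
</code></p>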
<p>This left us just below the gold zone on the public LB, which was really disappointing given all the high hopes I had. <br>
I didn't expect much of a shake-up of the LB because of how consistent our CV and LB were throughout the competition. <br>
However, in the end, I was proven wrong again, and reached a happy gold medal this time. </p>
<p>Thanks for reading, and good luck to everyone in the future ;) </p>
|
NFL Big Data Bowl
|
Private 21st - 1st Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I wish I had been using Kaggle Kernels. I didn't realize until a few days before the end of the contest that you can run 10 kernels at a time. I liked working locally, but then I should have been constantly copying models to the kernels and committing them.</p>
<p>Staying organized was also key. It was easy to get confused when running lots of models. I did an okay job until the end when I was rushing. Next time I'm definitely going to put some thought on the best way to stay organized.</p>
<p>After reading and following Kaggle competitions for a few years I finally decided to take the plunge. I am a huge New England Patriots fan and also love Deep Learning so this was the perfect competition for me. I had a ton of fun and learned so much. I am extremely glad I did it. To anyone reading this who hasn't done a competition yet - just go for it! The community is amazing and I promise you will have fun.</p>
<h3>Lessons Learned</h3>
<ol>
<li><strong>Data Analysis</strong> - I was pretty naive in assuming that the public notebooks had enough data analysis in them. I was amazed at how much information was publicly shared, but you still need to go through everything in detail. Next time I will make sure to spend a significant amount of time at the start on this. I missed the issues with S and A in the 2017 data.</li>
<li><strong>CV/PLB/Submit Early</strong> - I didn't fully appreciate how important this is. Without accurate metrics it's easy to go with stuff that doesn't help and throw away stuff that does help. I didn't submit to the LB until 3 days before the deadline. That was a huge mistake. My CV was way lower and I didn't have time to figure it out.</li>
<li><strong>Try Every Easy Idea</strong> - I convinced myself future time steps wouldn't help (too non-linear/random/players passing through each other) and never tried it. It would have been easy to hack up some linear next steps.</li>
<li><strong>Don't Fiddle With Params</strong> - I wasted a lot of time fiddling with NN parameters. I think I spent weeks tinkering.</li>
<li><strong>Don't expect the model to figure out everything</strong> - I was amazed though at how well the Transformer did. I think I should have tried to help it out a bit more especially with the limit of 4 hours of CPU time.</li>
<li><strong>Code Competitions - Leave Time</strong> - I ran out of time and am not thrilled with my final two submissions. This was pretty tricky though with not having the 2019 data until after the submission.</li>
</ol>
<h3>Solution</h3>
<p>I basically went with a bunch of Transformer Encoder Layers with tied weights. Spent most of the time fiddling with its configuration and inputs/outputs. I missed the issues with 2017 data which in hindsight led to a lot of my difficulties in getting it to run correctly. Because of this I also never figured out the issues with the CV. My best submission had a <strong>0.01190 5-fold random CV and a 0.01295 PLB</strong>.</p>
<p>My best discovery was definitely <strong>concatenating the raw runner inputs onto every player</strong>. Looks like this allowed the model to figure out many of the features that the other competitors discovered to be important.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1156376%2F6ec577fc64cb4ede33084d77ac27a7b7%2Fnflkaggle%20(4).png?generation=1575034228981776&alt=media" alt=""></p>
<p>oX and oY are cos(Orientation) and sin(Orientation), respectively. Position and Season are embeddings. (# values, # dims) Positions = ['RB', 'OL', 'QB', 'TE', 'WR', 'CB', 'DL', 'LB', 'SS']. Looks like I should have worked on removing this feature though based on reading about better models. I have my fingers crossed that by embedding the season the model can figure out the issues with 2017 data. 🙈 </p>
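<p>Since the "concatenate the raw runner inputs onto every player" trick is described only in words above, here is a tiny NumPy sketch of it; the feature count and the assumption that row 0 is the rusher are placeholders:</p>
<p><code>
import numpy as np

def add_rusher_context(players, rusher):
    """players: (22, F) per-player features; rusher: (F,) raw rusher features.
    Returns (22, 2*F) with the rusher's features appended to every row."""
    tiled = np.tile(rusher, (players.shape[0], 1))
    return np.concatenate([players, tiled], axis=1)

players = np.random.rand(22, 7)   # e.g. X, Y, S, A, oX, oY, defense flag (placeholder features)
rusher = players[0]               # assuming row 0 is the rusher
print(add_rusher_context(players, rusher).shape)  # (22, 14)
</code></p>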
<p>I played around a lot with parameter sharing in the transformer layers. I found that 2 sets of weights interlaced seemed to work the best:</p>
<p><code>
for _ in range(self.n_layer // 2):
    # alternate the two shared (tied-weight) encoder layers
    players = self.encoder_layer(players)
    players = self.encoder_layer2(players)
</code></p>
<p>Although the gains were minor compared to just tying all weights together as was shown in:
ALBERT - <a href="https://arxiv.org/pdf/1909.11942.pdf">https://arxiv.org/pdf/1909.11942.pdf</a></p>
<p>Still not sure why adding FC Layers at the end didn't help. Guessing it might have had to do with the issues with S and A in 2017 and overfitting to it. It was cool though to see a 10K parameter network (16 dim) do pretty well at making predictions.</p>
<p>Also, the 107 outputs were for the -7 to 99 yard range. Interestingly, I found -5 to 21 came out with the same scores. I wasted a lot of time on a "long run" predictor. I could never get it even close to predicting long runs.</p>
<p>Thanks to the hosts and all the competitors! Good luck everyone!</p>
<p>Thanks!</p>
<p>Thanks for the advice! I've already started working on the Data Science Bowl. 😃 I'm trying to stop thinking about this one though. After carefully reading over The Zoo's solution I realized I should have had 110 inputs to the transformer: cat(runner, defense, offense) - 1 x 10 x 11. I tried it out and the improvement is huge.</p>
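<p>For readers who want to see what that 1 x 10 x 11 pairing could look like as input construction, here is a rough NumPy sketch of my reading of it (one row per defender/blocker pair with the rusher's features appended); feature counts and ordering are placeholders, not The Zoo's exact scheme:</p>
<p><code>
import numpy as np

def pairwise_inputs(rusher, offense, defense):
    """rusher: (F,), offense: (10, F) teammates without the rusher, defense: (11, F).
    Returns (110, 3*F): one row per (defender, blocker) pair with rusher features appended."""
    rows = []
    for d in defense:            # 11 defenders
        for o in offense:        # 10 offensive teammates
            rows.append(np.concatenate([d, o, rusher]))
    return np.stack(rows)        # 11 * 10 = 110 tokens for the transformer

F = 5
print(pairwise_inputs(np.zeros(F), np.zeros((10, F)), np.zeros((11, F))).shape)  # (110, 15)
</code></p>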
|
NFL Big Data Bowl
|
Private 7th(Public 12th) solution&code with keras GCN library
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am very grateful to competition organizers and Kaggle for holding this great competition. There are many different approaches like CNN, Transformer, GCN, MLP, Boosted Trees, and many feature engineering, and there was much to learn :)</p>
<p>My code is available at: <a href="https://github.com/vinmuk/NFL-predict-yards">https://github.com/vinmuk/NFL-predict-yards</a>
Useful Keras GCN library: <a href="https://github.com/danielegrattarola/spektral">https://github.com/danielegrattarola/spektral</a></p>
<h1>Overview</h1>
<ul>
<li><p>Data fix</p>
<ul><li>No data modification.</li>
<li>Some features differed in distribution across seasons; however, in my case, fixing them by adjusting the mean and standard deviation made the score worse.</li></ul></li>
<li><p>Model</p>
<ul><li>2-layer GCN with multi output.</li>
<li>An attention sum pooling layer (attention is computed by dot products of trainable weights and the features of each player) follows each GCN layer; a minimal sketch of this pooling idea is given after this list.</li>
<li>The GCN layer is my original layer, which combines Graph attention networks (GAT) and the GraphConvSkip layer.</li></ul></li>
<li><p>Optimizer</p>
<ul><li>Adam(lr=1e-3)</li></ul></li>
<li><p>Loss function</p>
<ul><li>Binary cross-entropy and MAE for the last layer</li>
<li>Auxiliary binary cross-entropy losses, with the output divided into 50 sections, are placed immediately after each GCN layer. This speeds up convergence.</li></ul></li>
<li><p>Data Augment</p>
<ul><li>Flip the y axis</li>
<li>Adding data from a few seconds later didn't work</li>
<li>TTA didn't work</li></ul></li>
<li><p>Feature engineering</p>
<ul><li>115 features for each player</li>
<li>Separating player features and play features made the score worse</li>
<li>The features below, computed 0~2 seconds into the play, significantly improved the score:
<ul><li>the number of opponent players around each player.</li>
<li>whether to collide with rusher when traveling a certain distance</li>
<li>distance from rusher</li></ul></li></ul></li>
<li><p>Post process</p>
<ul><li>Padding by 0 or 1 for parts that cannot be reached from the current position.</li></ul></li>
</ul>
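<p>As mentioned in the model bullet above, the pooling scores each player by a dot product with a trainable vector, applies a softmax over players, and takes the weighted sum. Here is a plain-Keras illustration of that idea; it is not the spektral layer actually used, and the shapes are placeholders:</p>
<p><code>
import tensorflow as tf
from tensorflow.keras import layers

class AttnSumPool(layers.Layer):
    """Minimal attention-sum pooling: score each node by a dot product with a
    trainable vector, softmax over nodes, then take the weighted sum."""
    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], 1),
                                 initializer="glorot_uniform", name="attn_w")

    def call(self, x):                        # x: (batch, n_players, n_feats)
        scores = tf.matmul(x, self.w)         # (batch, n_players, 1)
        attn = tf.nn.softmax(scores, axis=1)  # attention over players
        return tf.reduce_sum(attn * x, axis=1)  # (batch, n_feats)

pooled = AttnSumPool()(tf.random.normal((4, 22, 115)))
print(pooled.shape)  # (4, 115)
</code></p>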
<p>Thank you!</p>
|
NFL Big Data Bowl
|
Private LB 37th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First off, congratulations to the winners and thanks to NFL & Kaggle for hosting the competition!</p>
<p>This was one of my first major Kaggle competitions, and I was quite surprised to end up in the top 50, so I thought I'd share a few details of my solution (I waited until the very end for fear of a disastrous code break in the final weeks). I started 52nd on the public LB and ended up 37th.</p>
<p>I focussed mainly on feature engineering with the spatial variables and ended up passing these into a fully connected 128 (dense) x 64 (dense) x 199 (softmax) neural network.</p>
<p>One of my regrets was spending so much time on an initial model which had fundamental issues with the features and expecting small tweaks to yield a breakthrough.</p>
<p>An entire rethink of my feature engineering yielded a breakthrough when I finally got a grip on the spatial and temporal features.</p>
<p>The first portion of my input feature vector was fairly standard rusher-specific info, which you would expect, but the majority of the vector was filled with spatial counts (both boxed and radial) of defenders relative to the rusher. These box and radial counts were taken at T = 0, T = 0.5 and T = 1.0, using the speed and direction fields to ‘move’ the play forward in time. </p>
<p>Pictures always make more sense:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3982690%2Fa639de3527be947752f674c1245ec9d0%2Frusher_pic.png?generation=1578518825464910&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3982690%2Fe1304d0eebd13d4cf37b27b2c6117140%2Frusher_pic_radius.PNG?generation=1578519155871188&alt=media" alt=""></p>
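<p>To make the time-shifted counts concrete, here is a rough NumPy sketch of the radial version (the box counts would follow the same pattern). This is not my original code; in particular, the convention that direction is measured clockwise from the y axis, hence sin for x and cos for y, is my assumption:</p>
<p><code>
import numpy as np

def project(xy, speed, direction_deg, t):
    """Move players t seconds forward along their current direction at constant speed."""
    rad = np.deg2rad(direction_deg)
    step = np.stack([speed * np.sin(rad), speed * np.cos(rad)], axis=-1) * t
    return xy + step

def radial_count(rusher_xy, rusher_s, rusher_dir, def_xy, def_s, def_dir, t, radius):
    r = project(np.asarray(rusher_xy), rusher_s, rusher_dir, t)
    d = project(def_xy, def_s, def_dir, t)
    return int(np.sum(np.linalg.norm(d - r, axis=1) <= radius))

# hypothetical play: 11 defenders, counts at T = 0, 0.5 and 1.0 seconds
rng = np.random.default_rng(1)
def_xy, def_s, def_dir = rng.uniform(0, 50, (11, 2)), rng.uniform(0, 8, 11), rng.uniform(0, 360, 11)
for t in (0.0, 0.5, 1.0):
    print(t, radial_count([25, 25], 6.0, 90.0, def_xy, def_s, def_dir, t, radius=5.0))
</code></p>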
<p>Adding a few aggregate stats (min, mean) on the distance of defenders relative to the rusher at each time gave me around 80 features in the final model.</p>
<p>I had wanted to incorporate the offensive players, but simple counts didn’t yield me any significant improvement. More nuanced treatment was needed, given their role as blockers.</p>
<p>Beyond this, I spent most of the time making my code as robust as possible (rather than trying any particular tweaks – data augmentation, post-processing, etc.), which seemed to turn out OK!</p>
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.