Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Jigsaw Multilingual Toxic Comment Classification
|
8th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I tried a few sampling techniques and, as said, I don't think they had any huge impact. Rather, I did some downsampling in the end to speed up training.</p>
<p>In the end, only using the public LB.</p>
<p>Thanks a lot to Kaggle and Jigsaw to host this competition. I am very happy to achieve my second solo gold medal within a week :)</p>
<p>My main goal when starting this competition was to learn how to use TPUs on Kaggle. I started with Pytorch, switched to Tensorflow, and then switched back to Pytorch. I think overall the usability is still very shaky, and the 3-hour limit is also limiting things quite a bit. There is constant progress on it though, and I hope that TPUs will be more usable in the future. I made a separate <a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/159723">topic</a> about this before.</p>
<p>The whole test set was visible in this competition, so subbing was very easy and blending many models was very important. Unfortunately, I could not select my best sub because it included a blend with a <a href="https://www.kaggle.com/shonenkov/tpu-inference-super-fast-xlmroberta">kernel</a> for which no training script was available. I am happy I followed my heart and did not select it, even though I would have ranked a few spots higher with it. I really would encourage people to stop posting inference-only kernels, because logically people will use them if they score well, but the learning potential is quite limited imho.</p>
<h3>Models</h3>
<p>There are not many differences between the models I fit and those that have been posted throughout the competition. I could not get any better results compared to using XLM-Roberta-Large. I also tried uni-language models without any improvements over XLM-Roberta.</p>
<p>I also do not believe that any advanced sampling technique, label smoothing, augmentation, or other stuff floating around gave any improvements. </p>
<p>The only real important thing was to translate the training data to the languages at hand. This is in line with the experiments reported in the original <a href="https://arxiv.org/pdf/1911.02116.pdf">XLM-Roberta paper</a>.</p>
<p>I blended models using rank averaging.</p>
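<p>For readers unfamiliar with rank averaging, here is a minimal sketch of the idea (column names and data are purely illustrative, not from the actual pipeline):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.stats import rankdata

# One column of test-set probabilities per model (illustrative names).
preds = pd.DataFrame({
    "xlmr_run_a": np.random.rand(1000),
    "xlmr_run_b": np.random.rand(1000),
    "xlmr_run_c": np.random.rand(1000),
})

# Convert each model's scores to ranks in [0, 1], then average the ranks.
# Rank averaging discards calibration differences between models and keeps
# only their ordering, which is all that matters for an AUC metric.
ranked = preds.apply(lambda col: rankdata(col) / len(col))
blend = ranked.mean(axis=1)
</code></pre>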
<h3>Data</h3>
<p>I only used provided data and translations of it.
I think the most untapped potential was to use talk pages from Wikipedia dumps to further train the models. I did not have time to explore this, but I hope someone else did and can report on it.</p>
<p>I actually think one of the main issues with the provided English data was that it included barely any bot comments, while the test data contained quite a lot of them, especially in Turkish, which is also why I think the score is better there than for the other languages.</p>
<h3>Post-processing</h3>
<p>After blending, post-processing was quite important for me to reach high scores. As the metric is a global AUC over six different languages, it quickly became obvious to me that there should be room to optimize the global score by shifting predictions of individual languages up or down. </p>
<p>I tested this on CV with the three languages and managed to get very good boosts there. I further tested how a simulated public and private dataset would behave in terms of optimization, assuming a random split between public and private. So what I did was to repeatedly select a random 30% of the CV data, optimize the thresholds on that 30%, apply them to the remaining 70%, and check how far off they were from the optimal thresholds of the 70%. Overall, I saw that there was not much difference, which is why I was quite confident that I could optimize the thresholds on the public LB and they would work well for the private LB. Optimizing on CV alone does not work because CV only has 3 languages. This is an example plot of such an analysis:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F37166%2F14ef07f17e3b1eda7b98ebac1f6cb46b%2Fdownload.png?generation=1592897946838419&alt=media" alt=""></p>
<p>The x-axis is the scaling factor for the language at hand, the y-axis is the improvement of the score over no scaling. The blue line is the improvement of the 30% sample and the orange of the 70% sample. Note that the improvement is not 0 for factor 1.0 in this case as another language has already been optimized.</p>
<p>Additionally, I optimized the range of values over which the scaling factors are applied. I found that not scaling the top predictions improved scores, so I only scaled predictions between 0.25 and 0.875 (rank-average values). You can get some further minor improvements by averaging the predictions for similar texts.</p>
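<p>A rough sketch of this kind of per-language scaling, assuming rank-averaged predictions in [0, 1] and a language column; the 0.25 to 0.875 band is the one mentioned above, while the function, data layout and example factor are illustrative:</p>
<pre><code>import pandas as pd

def scale_language(df, lang, factor, lo=0.25, hi=0.875):
    """Multiply the rank-averaged predictions of one language by a factor,
    but only inside the (lo, hi) band so the top predictions stay untouched."""
    out = df.copy()
    mask = (out["lang"] == lang) & (out["pred"] > lo) & (out["pred"] < hi)
    out.loc[mask, "pred"] *= factor
    return out

# Sweep the factor for one language at a time and keep whichever value
# maximizes the AUC on the 30% sample (or the public LB), e.g.:
# df = scale_language(df, "tr", factor=1.05)
</code></pre>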
<p>My whole PP routine boosted scores by roughly 0.004, so quite a lot.</p>
|
Jigsaw Multilingual Toxic Comment Classification
|
Second silver in one week
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks bro and congratulations for 4X Expert !</p>
<p>Congratulations all, I am happy to belong to such an amazing community and happy to see a new Expert like me, a new Master like my teammate <a href="/futureboykid">@futureboykid</a>, and a new GM.
See you soon on another leaderboard!</p>
<p>Thank you Mehdi ! Congrats to you too </p>
<p>Thanks a lot </p>
|
Jigsaw Multilingual Toxic Comment Classification
|
🏅 Best single models 🏅
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/konohayui">@konohayui</a> you can see it at the last section of the model <a href="https://www.kaggle.com/hmendonca/jigsaw20-xlm-r-lb0-9487-singel-model#Apply-language-multipliers">https://www.kaggle.com/hmendonca/jigsaw20-xlm-r-lb0-9487-singel-model#Apply-language-multipliers</a> </p>
<p>Thanks for letting us know <a href="/leecming">@leecming</a>, your <a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/160862">write-up</a> is very inspiring!
I assume from your msg that you haven't used your distilled model in your final ensemble, is that right?
Moreover, how do you prevent your use of PL from interfering with the ensemble diversification?</p>
<p>Indeed, I agree with you that the vocab embeddings seem to be the bottleneck of the current models, both in learning capacity and in number of parameters, which strikes me as a possible avenue of improvement for NLP models. However, that might not be a major issue for the new huge models like GPT-3, we'll see...
My teammate <a href="/ratthachat">@ratthachat</a> and I tried to mitigate the issue by adding more layers at the top of the model instead, but only with moderate success.</p>
<p>Thanks <a href="/drpatrickchan">@drpatrickchan</a> Congrats on your first gold medal and the Comp Master! :)</p>
<p>I wonder what were the best singles models around.</p>
<p>From the write-ups it seems like most people got XLM-R to around 0.942* and relied heavily on ensembling, post-processing and other tricks (not diminishing their amazing achievement).</p>
<p>Anyway, I'd like to share a <a href="https://www.kaggle.com/hmendonca/jigsaw20-xlm-r-lb0-9487-singel-model">single model</a> (weights and code) with pseudo labels and knowledge distillation that got to <strong>LB 0.9475</strong> (and <strong>0.9487</strong> after applying the hardcoded multipliers from @christofhenkel thx!) here <a href="https://www.kaggle.com/hmendonca/jigsaw20-xlm-r-lb0-9487-singel-model">https://www.kaggle.com/hmendonca/jigsaw20-xlm-r-lb0-9487-singel-model</a></p>
<p>Unfortunately we failed to ensemble as well as others and did not explore the language modifiers beforehand. This comp was an amazing learning experience for me doing NLP for the first time in my life :D</p>
<p>The main takeaway for me was that relying too heavily on pseudo labels seems to have made blending a much harder task (knowledge distillation seems to have worsened that issue). It took me too long to realize that, but lesson learned for the next comps. I'll extend my report when I find some more time, but please let me know your thoughts on that!
(and please share your models if you can)</p>
<p>Cheers!</p>
|
Jigsaw Rate Severity of Toxic Comments
|
118th, Comparison-based Approach
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In this competition, I used a comparison-based approach. I thought training a comparer was the easiest way to combine all existing datasets: datasets of the two previous competitions, and the Ruddit dataset.</p>
<h3>Model</h3>
<p>My approach is a little like the popular <code>MarginRankingLoss</code> approach. <code>MarginRankingLoss</code> compares two output logits to calculate the loss, while I predict a logit from the two embeddings and train the model with BCE loss. As shown in the figure below, given two texts, the model needs to predict whether Text 1 is more toxic than Text 2, and the label is 0 or 1.</p>
<p>After encoding two comments, I also made a cross-attention to get better representations. You can refer to my inference <a href="https://www.kaggle.com/wuyhbb/deberta-jigsaw-inference" target="_blank">notebook</a> for the detail of the model.</p>
<p><img src="https://imgur.com/fXHTSO0" alt="Model Structure"></p>
<h3>Data</h3>
<p>Jigsaw 1: used all the comments with the label “Toxic” and without the label “Severe Toxic”, and built random pairs with other comments<br>
Jigsaw 2: used comments whose toxicity_annotator_count is larger than 10, randomly built pairs, and determined the label with scores calculated as severe_toxicity*8+toxic<br>
Ruddit: randomly built pairs and determined the label with the scores.</p>
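<p>A sketch of how such pairs can be built from the unintended-bias data with that score formula (file and column names follow the public Kaggle dataset, but treat them as assumptions):</p>
<pre><code>import pandas as pd

df = pd.read_csv("jigsaw-unintended-bias-training.csv")
df = df[df["toxicity_annotator_count"] > 10].reset_index(drop=True)
df["score"] = df["severe_toxicity"] * 8 + df["toxicity"]

# Sample two random views of the data and pair them row by row.
a = df.sample(n=100_000, replace=True, random_state=0).reset_index(drop=True)
b = df.sample(n=100_000, replace=True, random_state=1).reset_index(drop=True)
pairs = pd.DataFrame({
    "text_1": a["comment_text"],
    "text_2": b["comment_text"],
    "label": (a["score"].values > b["score"].values).astype(int),  # 1 if text_1 is more toxic
})
pairs = pairs[a["score"].values != b["score"].values]  # drop ties
</code></pre>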
<p>I trained the model with all the data from different sources, and I did not use the validation data to train.</p>
<h3>Results</h3>
<table>
<thead>
<tr>
<th>Model</th>
<th>Validation</th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>Roberta Base</td>
<td>0.7034</td>
<td>0.79937</td>
<td>0.74746</td>
</tr>
<tr>
<td>Deberta Base</td>
<td>0.7036</td>
<td>0.80251</td>
<td>0.73669</td>
</tr>
</tbody>
</table>
<p>It is a pity that I had a severe bug, and I did not submit the right result.</p>
<p>Special Thanks to Z by HP & NVIDIA for sponsoring me a Z4 Workstation with dual A6000 GPU!</p>
<p>Training large transformer models with 48GB VRAM is awesome!</p>
|
Jigsaw Rate Severity of Toxic Comments
|
11th & 44th place solution - toxicity as percentile
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Always glad to help. :-) <br>
On Github we have an example notebooks folder with plenty of stuff.</p>
<p>Hi,</p>
<p>I am Thomas, creator of an AutoML library called e2eml (it is still in its infancy).<br>
I just wanted to test the library and see how far I could go with it.</p>
<p>I finished in 44th place, but also had an unselected submission which would have hit 11th place. I did not select it because I thought it would run too long and exceed the 9-hour limit (my fault for misjudging that).</p>
<p><strong>So what was the strategy?</strong><br>
I started very late into the competition and quickly looked around some notebooks. I found those toxicity weights everyone seemed to use. I did not train my models directly on them because these values seemed a little static. So I followed the following approach:</p>
<ul>
<li>in a few hundred permutations I sampled from the full dataset</li>
<li>based on the toxicity score I calculated the percentile within that sample (so interpreting toxicity as a relative position)</li>
<li>after running all permutations I averaged the percentiles for each comment (see the sketch after this list)</li>
</ul>
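<p>A sketch of that percentile-averaging step, assuming the per-comment toxicity weights sit in a column named <code>toxicity</code>; the function and parameters are illustrative:</p>
<pre><code>import numpy as np
import pandas as pd

def percentile_targets(df, n_rounds=300, frac=0.3, seed=42):
    """Average, over many random subsamples, the within-sample percentile of each
    comment's toxicity score, i.e. treat toxicity as a relative position."""
    rng = np.random.default_rng(seed)
    sums = np.zeros(len(df))
    counts = np.zeros(len(df))
    for _ in range(n_rounds):
        idx = rng.choice(len(df), size=int(frac * len(df)), replace=False)
        sample = df["toxicity"].to_numpy()[idx]
        pct = sample.argsort().argsort() / (len(sample) - 1)  # percentile within the sample
        sums[idx] += pct
        counts[idx] += 1
    return sums / np.maximum(counts, 1)

# df["target"] = percentile_targets(df)
</code></pre>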
<p>From a model-architecture perspective, the 11th-place solution was a refined Roberta model that had been pretrained on toxic comments (available on Hugging Face). I created five model weights on different random splits and averaged them. As it looked too slow at inference, I did not choose it for submission.</p>
<p>The 44th-place solution took only three of these refined Roberta weights and blended them with an SGD-on-TF-IDF model. </p>
<p>Some notebooks including the 44th place inference have been published as examples on our <a href="https://github.com/ThomasMeissnerDS/e2e_ml/tree/develop/Example%20notebooks/Kaggle%20competition%20examples/Jigsaw%20Rate%20Severity%20of%20Toxic%20Comments%202021" target="_blank">Github repo</a>.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
12th Solution – Trust Your CV
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h5>First of all, thank you to the organizers and the other participants. I've learned many things, including the importance of CV.</h5>
<h4>Solution</h4>
<p>My solution is summarized as</p>
<ul>
<li>Use ‘validation_data.csv’ for the CV estimation (do not use it for training)</li>
<li>Training dataset - ‘jigsaw-unintended-bias-training.csv’ and ‘jigsaw-toxic-comment-train.csv’ from the ‘<a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification" target="_blank">Jigsaw Multilingual Toxic Comment Classification</a>’ challenge</li>
<li>Model – TF-IDF with/without text cleaning, distilroberta-base, roberta-base, roberta-large</li>
<li>Ensemble of all trained models and TF-IDF</li>
</ul>
<h4>CV</h4>
<p>The most important part of my solution is trusting the CV, so I first introduce the discussions that encouraged me to focus on CV rather than LB.</p>
<ul>
<li><a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/287033" target="_blank">Carefull, LB is only 5%</a></li>
<li><a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/287147" target="_blank">Classic CV vs LB Discussion</a></li>
<li><a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/289089" target="_blank">Higher CV, lower LB</a></li>
</ul>
<p>I found that it is hard to obtain a CV score over 0.70 (using validation_data.csv), so my goal was to obtain an <strong>ensemble of models that stably gives</strong> CV 0.70.</p>
<h4>Dataset</h4>
<p>It seems most of the public kernels use ‘jigsaw-toxic-comment-train.csv’ from the ‘Toxic Comment Classification Challenge’. That data seems to be a subset of the data in the ‘<a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification" target="_blank">Jigsaw Multilingual Toxic Comment Classification</a>’ challenge, so I used the data from the Multilingual challenge.<br>
In the toxic comment and unintended bias data, there are columns including ‘toxic’, ‘severe_toxic’, etc. I gave weight 1 to all columns except ‘severe_toxic’, which has weight 2 (there seems to be no huge difference on CV). </p>
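<p>That weighting is just a weighted sum over the label columns; a small sketch (column names follow the Jigsaw toxic-comment data, the weights are the ones described above):</p>
<pre><code>import pandas as pd

df = pd.read_csv("jigsaw-toxic-comment-train.csv")
label_cols = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
weights = {c: 1 for c in label_cols}
weights["severe_toxic"] = 2  # only severe_toxic gets a larger weight
df["target"] = sum(df[c] * w for c, w in weights.items())
</code></pre>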
<h4>Model and Ensemble</h4>
<p><strong>TF-IDF</strong> – the code for TF-IDF is barely modified from the referenced code</p>
<ul>
<li><a href="https://www.kaggle.com/yamqwe/ridge-fasttext-improved-roberta#Toxic-clean-data" target="_blank">🔥 Ridge + fastText + Improved RoBERTa 🔥</a></li>
<li><a href="https://www.kaggle.com/vitaleey/tfidf-ridge" target="_blank">TFIDF+RIDGE</a></li>
</ul>
<p><strong>Bert-like models</strong> - distilroberta-base, roberta-base, and roberta-large, trained with fasthug. </p>
<ul>
<li><a href="https://www.kaggle.com/alibaba19/jigsaw-training-ulmfit-w-fastai" target="_blank">Jigsaw Training - ULMFIT w/FastAi</a></li>
</ul>
<p><strong>Submission (ensemble)</strong> – used exhaustive search and equal weights for the submissions. Both work, but equal weights are better.</p>
<ul>
<li>Sub 1 (CV 0.706, Public 0.77805, Private 0.81029) - exhaustive search to maximize CV </li>
<li>Sub 2 (CV 0.705, Public 0.77631, Private 0.81044) - similar weights except for roberta-large (actually, in the case of roberta-large, I trained only a few epochs due to the computational budget…)</li>
</ul>
<p>The inference kernel is updated <a href="https://www.kaggle.com/learnitanyway/12th-solution-jrstc-tf-idf-and-fastai" target="_blank">here</a>. Thank you again to all participants and the organizers, and congratulations to all winners.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
14th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks and congrats to you too! What would you like to know about the fine-tuning? I think I found that ruddit was less effective than jigsaw 1 or 2, but it did seem to have a positive effect in at least some cases in my experiments.</p>
<h1>Overview</h1>
<p>The public LB didn't seem very useful so my strategy was to just maximise my validation score. My final submission is a weighted mean of 6 (5-fold) transformers. It was both my highest CV and highest private LB score, so I'm glad I trusted my CV this time.</p>
<h1>Data</h1>
<p>I tried to include as many different datasets as I could. For training I used:</p>
<ul>
<li>the validation set</li>
<li>jigsaw 1: the data from <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge" target="_blank">the first jigsaw competition</a></li>
<li>jigsaw 2: the data from <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification" target="_blank">the second jigsaw competition</a></li>
<li>ruddit: the data from <a href="https://aclanthology.org/2021.acl-long.210/" target="_blank">Ruddit: Norms of Offensiveness for English Reddit Comments</a></li>
<li>offenseval2020: the data from <a href="https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission" target="_blank">OffensEval 2020: Multilingual Offensive Language Identification in Social Media</a>. This dataset was uploaded to Kaggle by <a href="https://www.kaggle.com/vaby667" target="_blank">@vaby667</a> <a href="https://www.kaggle.com/vaby667/toxictask" target="_blank">here</a></li>
</ul>
<h3>CV strategy</h3>
<p>I used the <a href="https://www.kaggle.com/columbia2131/jigsaw-cv-strategy-by-union-find" target="_blank">Union-Find</a> method by <a href="https://www.kaggle.com/columbia2131" target="_blank">@columbia2131</a> to generate folds that didn't have any texts leaked across folds. I didn't use majority voting or any other method of removing disagreements from the data, as I thought this would just make the validation set artificially easier and less similar to the test data.</p>
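<p>For readers unfamiliar with the trick, here is a rough sketch of the Union-Find grouping (not the linked notebook's code; the file and column names follow the competition's validation data):</p>
<pre><code>import pandas as pd

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Connect every (less_toxic, more_toxic) pair; any pairs sharing a text end up
# in the same group, and folds are built from whole groups so no text leaks
# across folds.
val = pd.read_csv("validation_data.csv")
uf = UnionFind()
for a, b in zip(val["less_toxic"], val["more_toxic"]):
    uf.union(a, b)
val["group"] = [uf.find(t) for t in val["less_toxic"]]
# Assign folds over val["group"], e.g. with sklearn's GroupKFold.
</code></pre>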
<h1>Training</h1>
<h3>Loss</h3>
<p>I used Margin Ranking Loss to train with the validation data, and tried both Margin Ranking and MSE with the other datasets. It was fairly easy to create large amounts of ranked paired data from the extra datasets, but I didn't find that this improved performance over just training them directly on the labels with MSE loss, and also required lower batch sizes.</p>
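<p>A minimal sketch of the two losses on a paired batch, with random tensors standing in for model outputs:</p>
<pre><code>import torch
import torch.nn as nn

margin_loss = nn.MarginRankingLoss(margin=0.5)
mse_loss = nn.MSELoss()

# Pairwise setup: score both comments; the loss only asks that the
# more-toxic score exceed the less-toxic one by the margin.
more_toxic_scores = torch.randn(8)
less_toxic_scores = torch.randn(8)
target = torch.ones(8)  # +1 means "first argument should rank higher"
loss_pairs = margin_loss(more_toxic_scores, less_toxic_scores, target)

# Direct regression on the extra datasets: one score per comment, MSE to its label.
preds = torch.randn(8)
labels = torch.rand(8)
loss_mse = mse_loss(preds, labels)
</code></pre>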
<h3>Evaluation</h3>
<p>When fine-tuning on the extra datasets, I computed average agreement with annotators on the validation set at each evaluation, and used early stopping. I trained for multiple epochs on the small datasets (jigsaw 1 and ruddit), evaluating once per epoch. For the large datasets (offenseval and jigsaw 2) I usually only trained for 1 epoch, evaluating every 10% of the epoch's steps.</p>
<p>When training on the validation data, I trained 5 models, using 4/5 folds for training and the remaining fold as validation for each one. I computed the CV at each epoch and used the model weights from the epoch that had the highest CV. </p>
<p>I had originally started out using fold-wise early stopping, but I discovered that <a href="https://twitter.com/a_erdem4/status/1483379758331269123" target="_blank">this leads to overly optimistic CV scores</a>.</p>
<h3>Multi-stage fine-tuning</h3>
<p>I found that taking a model I had already fine-tuned and fine-tuning it on another dataset improved performance. This worked across multiple fine-tuning stages. The order that worked best was to fine-tune on the larger, lower-scoring datasets first, and then on the smaller ones after. </p>
<p>For example:</p>
<ol>
<li>fine-tune a pretrained model on offenseval (validation: ~0.68)</li>
<li>use #1 to fine-tune on jigsaw 2 (validation ~0.695)</li>
<li>fine-tune #2 on jigsaw 1 (validation ~0.7)</li>
<li>fine-tune #3 on ruddit (validation ~0.705)</li>
<li>fine-tune 5 folds on the validation data, using #4 (CV: 0.71+)</li>
</ol>
<h3>Hyperparameters</h3>
<p>I started out training most of the base-sized models with 1e-5 on earlier fine-tuning stages and reduced to 1e-6 on the later ones. I used 1e-6 and then 5e-7 on the larger models. At each stage I also used warmup of 5% and linear LR decay. Even with mixed precision training, I could only fit batch size 8 on the GPU with the large models, so I used gradient accumulation to simulate batch sizes of 64.</p>
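<p>A sketch of the gradient-accumulation pattern with mixed precision (a toy linear model stands in for the transformer; assumes a GPU, and all names are illustrative):</p>
<pre><code>import torch
import torch.nn as nn

model = nn.Linear(10, 1).cuda()                    # stand-in for the transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)
loader = [(torch.randn(8, 10), torch.rand(8)) for _ in range(32)]  # micro-batches of 8

accum_steps = 8                                    # 8 micro-batches of 8 = batch size 64
scaler = torch.cuda.amp.GradScaler()               # mixed precision
loss_fn = nn.MSELoss()

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    x, y = x.cuda(), y.cuda()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(x).squeeze(-1), y) / accum_steps  # average over micro-batches
    scaler.scale(loss).backward()
    if (step + 1) % accum_steps == 0:              # update only every accum_steps batches
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
</code></pre>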
<h1>Models</h1>
<p>Here's a table of model results. </p>
<table>
<thead>
<tr>
<th>base model</th>
<th>folds</th>
<th>CV</th>
<th>final submission</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-base</td>
<td>5</td>
<td>0.715</td>
<td>yes</td>
</tr>
<tr>
<td>distilroberta-base</td>
<td>5</td>
<td>0.714</td>
<td>yes</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>5</td>
<td>0.714</td>
<td>yes</td>
</tr>
<tr>
<td>deberta-large</td>
<td>10</td>
<td>0.714</td>
<td>yes</td>
</tr>
<tr>
<td>deberta-large</td>
<td>5</td>
<td>0.713</td>
<td>yes</td>
</tr>
<tr>
<td>rembert</td>
<td>5</td>
<td>0.713</td>
<td>yes</td>
</tr>
<tr>
<td>roberta-base</td>
<td>5</td>
<td>0.711</td>
<td>no</td>
</tr>
<tr>
<td>roberta-large</td>
<td>5</td>
<td>0.708</td>
<td>no</td>
</tr>
</tbody>
</table>
<ul>
<li>I mostly stuck to using roberta and deberta variants because they always perform well. If I had had more time I would've tried some others, but I spent most of the time trying out different combinations of datasets.</li>
<li>The reason I tried rembert was because I wanted to make use of the multilingual <a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification" target="_blank">jigsaw 3</a> data. I wasn't able to get any improvement from including the extra data, but I was still able to get reasonably good performance out of rembert.</li>
<li>Deberta-large (v1) is in there twice because I did an experiment with a 10 fold model which turned out quite well. I didn't want to keep training 10 fold models though because it took too long.</li>
<li>I think all of the large models are slightly under-trained. Training on large datasets like offenseval and jigsaw 2 took over 24 hours so my colab instances timed-out. </li>
</ul>
<h1>Ensembling</h1>
<p>My final submission was a weighted mean of 6 models which were selected from a pool of models by trying to add each of them to the ensemble one at a time, tuning the weights with Optuna for each combination of models, and greedily selecting whichever model increased the OOF score the most. This was repeated until the OOF score stopped improving. My best score was 0.7196.</p>
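<p>A rough sketch of this greedy forward selection with Optuna weight tuning (the agreement metric and all names are illustrative, not the actual code):</p>
<pre><code>import numpy as np
import optuna

def oof_score(weights, less, more):
    """Fraction of OOF pairs where the weighted blend ranks the more-toxic comment higher."""
    w = np.asarray(weights)
    return np.mean(more @ w > less @ w)

def tune_weights(less, more, n_trials=200):
    def objective(trial):
        w = [trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(less.shape[1])]
        return oof_score(w, less, more)
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)
    return study.best_value

def greedy_selection(all_less, all_more):
    """all_less / all_more: (n_pairs, n_candidate_models) OOF predictions."""
    chosen, best = [], -np.inf
    while True:
        candidate, candidate_score = None, best
        for m in range(all_less.shape[1]):
            if m in chosen:
                continue
            cols = chosen + [m]
            score = tune_weights(all_less[:, cols], all_more[:, cols])
            if score > candidate_score:
                candidate, candidate_score = m, score
        if candidate is None:          # no remaining model improves the OOF score
            return chosen, best
        chosen.append(candidate)
        best = candidate_score
</code></pre>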
<p>Interestingly, the highest weighted models ended up being deberta-large and rembert, despite those having lower CV scores.</p>
<h1>Things which didn't work</h1>
<h3>The measuring hate speech dataset</h3>
<p>The <a href="https://huggingface.co/datasets/ucberkeley-dlab/measuring-hate-speech" target="_blank">Measuring Hate Speech dataset</a> by ucberkeley-dlab seemed like it was going to be useful, but the labels didn't seem to match the annotations in the validation set for this competition very well. I was unable to get more than 0.656 with this data.</p>
<h3>Binary labels</h3>
<p>I wanted to try to make use of the binary labelled data too (<a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification" target="_blank">jigsaw 3</a> and <a href="https://www.kaggle.com/ashwiniyer176/toxic-tweets-dataset" target="_blank">toxic tweets</a>). I tried fine-tuning on these datasets, and used the model's predicted probability of the positive class as an output for inference and evaluation. I was able to get 0.68 on the validation set with this method, but I found that it didn't chain together with my multi-stage fine-tuning approach as I had to modify the last layer of the model to switch between regression and classification tasks.</p>
<h3>TF-IDF with linear models</h3>
<p>I experimented with it a little bit, but wasn't able to get anything over 0.7 on the validation set so I gave up on it.</p>
<h3>Word vectors with linear models</h3>
<p>I tried encoding each comment as the average of spaCy word vectors, and using this as an input into various linear models. It did about as well as TF-IDF.</p>
<h1>Improvements</h1>
<p>As I've already mentioned, I think my large models are under-trained due to GPU limitations. I was surprised by how much rembert helped the ensemble, so I think I could've made a stronger ensemble by choosing some more diverse model architectures instead of focusing on deberta and roberta so much.</p>
<p>I guess the validation scores are not really comparable unless we used the same CV strategy and preprocessing</p>
<p>I forgot to add that I made my github repo public too in case anyone wants to see my source code: <a href="https://github.com/AMontgomerie/jigsaw-toxic-severity-competition" target="_blank">https://github.com/AMontgomerie/jigsaw-toxic-severity-competition</a></p>
<p>The second submission was just my highest scoring single model: deberta-v3-base 0.715, which got 0.803 on private LB.</p>
<p>I tried using hate_speech_score as the target first, which produced the 0.656 score I mentioned. I also tried using the sum of a subset of the labels like ["insult", "humiliate", etc.] as the target (since this worked well for jigsaw_1), but it performed worse than using the hate_speech_score. </p>
<p>I guess there might be some better ways to calculate targets since there are a lot of features in this dataset, but I didn't find a good way to use them.</p>
<p>I tried both MSE loss with the original target values and MarginRankingLoss with paired comments and found that MSE worked better. When fine-tuning on ruddit I also found that training for lots of epochs (compared to when fine-tuning on the other datasets) was beneficial.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
1st place solution with code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Apologies for the confusion. Weights were generated during validation (in the valid.py scripts of the training code) by genetic algorithms. You can generate them by re-running my training code (results may differ a little bit due to the randomness of the algorithm). For inference, I simply copied and hard-coded the weights in the script rather than uploading them as files (as we all do for transformer weights) because the number of weights is very small.</p>
<p>The public LB looked misleading, so I focused on validation performance only. Since it's unclear whether there is any distribution shift between the validation data and the private LB data, I only considered linear models built on the validation data, to maximize the robustness of the solution. To achieve this, I trained models on the Jigsaw2018 data, used the predicted probabilities (6 outputs) as input features, and fit a linear model on the validation data. Weights were optimized with genetic algorithms. I did the same for the Jigsaw2019 data (7 outputs). The Ruddit data has only a single label, so no optimization was done on the validation data. </p>
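<p>A rough sketch of that stacking idea: the six predicted probabilities become features, and a weighted sum is tuned to maximize pairwise agreement on the validation pairs. A plain random search stands in here for the genetic algorithm used in the actual training code, and all names are illustrative:</p>
<pre><code>import numpy as np

def agreement(w, feats_less, feats_more):
    """Fraction of validation pairs where the weighted sum of the per-label
    probabilities ranks the more-toxic comment higher."""
    return np.mean(feats_more @ w > feats_less @ w)

def search_weights(feats_less, feats_more, n_iter=5000, seed=0):
    best_w, best_s = None, -1.0
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        w = rng.random(feats_less.shape[1])
        s = agreement(w, feats_less, feats_more)
        if s > best_s:
            best_w, best_s = w, s
    return best_w, best_s

# feats_less / feats_more: (n_pairs, 6) probabilities from a model trained on
# the Jigsaw 2018 data, evaluated on both sides of each validation pair.
</code></pre>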
<p>The table below shows the performance of all my models. I only used roberta and deberta models as they gave the best performance. Final submission is a weighted rank average of 15 models. "-l" means I included the duplicates between Jigsaw18 and validation.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>data</th>
<th>validation</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>roberta-base</td>
<td>jigsaw18</td>
<td>0.7023</td>
<td>0.7815</td>
<td>0.8052</td>
</tr>
<tr>
<td>roberta-large</td>
<td>jigsaw18</td>
<td>0.7035</td>
<td>0.7788</td>
<td>0.8064</td>
</tr>
<tr>
<td>deberta-base</td>
<td>jigsaw18</td>
<td>0.7040</td>
<td>0.7598</td>
<td>0.8030</td>
</tr>
<tr>
<td>deberta-large</td>
<td>jigsaw18</td>
<td>0.7050</td>
<td>0.7906</td>
<td>0.8139</td>
</tr>
<tr>
<td>roberta-base-l</td>
<td>jigsaw18</td>
<td>0.7028</td>
<td>0.7690</td>
<td>0.8070</td>
</tr>
<tr>
<td>roberta-large-l</td>
<td>jigsaw18</td>
<td>0.7027</td>
<td>0.7737</td>
<td>0.8013</td>
</tr>
<tr>
<td>deberta-base-l</td>
<td>jigsaw18</td>
<td>0.7030</td>
<td>0.7474</td>
<td>0.8013</td>
</tr>
<tr>
<td>deberta-large-l</td>
<td>jigsaw18</td>
<td>0.7044</td>
<td>0.7716</td>
<td>0.8085</td>
</tr>
<tr>
<td>roberta-base</td>
<td>jigsaw19</td>
<td>0.7008</td>
<td>0.7617</td>
<td>0.8020</td>
</tr>
<tr>
<td>roberta-large</td>
<td>jigsaw19</td>
<td>0.6991</td>
<td>0.7468</td>
<td>0.7968</td>
</tr>
<tr>
<td>deberta-base</td>
<td>jigsaw19</td>
<td>0.7026</td>
<td>0.7403</td>
<td>0.7958</td>
</tr>
<tr>
<td>roberta-base</td>
<td>ruddit</td>
<td>0.6859</td>
<td>0.8108</td>
<td>0.7845</td>
</tr>
<tr>
<td>roberta-large</td>
<td>ruddit</td>
<td>0.6865</td>
<td>0.8132</td>
<td>0.7955</td>
</tr>
<tr>
<td>deberta-base</td>
<td>ruddit</td>
<td>0.6880</td>
<td>0.7903</td>
<td>0.7880</td>
</tr>
<tr>
<td>deberta-large</td>
<td>ruddit</td>
<td>0.6942</td>
<td>0.8296</td>
<td>0.7989</td>
</tr>
<tr>
<td>ensemble</td>
<td>jigsaw18</td>
<td></td>
<td>0.7763</td>
<td>0.8103</td>
</tr>
<tr>
<td>ensemble</td>
<td>jigsaw19</td>
<td></td>
<td>0.7509</td>
<td>0.8012</td>
</tr>
<tr>
<td>ensemble</td>
<td>ruddit</td>
<td></td>
<td>0.8235</td>
<td>0.7983</td>
</tr>
<tr>
<td>ensemble</td>
<td>all15</td>
<td></td>
<td>0.7879</td>
<td>0.8139</td>
</tr>
</tbody>
</table>
<p>At last I would like to thank Jigsaw and Kaggle for hosting this competition.</p>
<p>Link to inference code: <a href="https://www.kaggle.com/wowfattie/notebook9298460840" target="_blank">https://www.kaggle.com/wowfattie/notebook9298460840</a><br>
Link to training code: <a href="https://github.com/GuanshuoXu/Jigsaw-Rate-Severity-of-Toxic-Comments" target="_blank">https://github.com/GuanshuoXu/Jigsaw-Rate-Severity-of-Toxic-Comments</a></p>
|
Jigsaw Rate Severity of Toxic Comments
|
24th Place: Late Interaction + TPUs + More
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Late Interaction Mechanism</h2>
<p>The idea for this is loosely taken from the late interaction mechanism of <a href="https://arxiv.org/pdf/2004.12832.pdf" target="_blank">ColBERT</a> paper. <br>
The task at hand is to rank 14,000 comments based on their level of toxicity. This is similar to search ranking on Google, where a model must quickly sort through millions of search results based on a query. I started reading papers on search ranking in the hope of finding something useful. This is where I came across Cross-encoders, Bi-encoders, and the Late interaction mechanism. </p>
<p>I have made a post about <a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/296349" target="_blank">Cross Encoders vs Bi Encoders</a> previously. Late interaction mechanism can be seen as combining the representation power of cross-encoders and the low latency of bi-encoders. </p>
<p>We are given the training data in pair form of (less_toxic_comment, more_toxic_comment). </p>
<p>A cross-encoder would be able to better model this pairwise interaction, but it cannot be used at test time because there are a total of 14k*14k/2 = 98M possible pair interactions between the comments.<br>
On the other hand, a bi-encoder would only be able to look at each comment individually and would compress the comment into a vector.</p>
<p>Here is where I use the late interaction mechanism to approximate all the pairwise interactions in a time-efficient way. This boosted my public LB from 0.844 to 0.872. Sadly it did not have such a big impact on private LB.</p>
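<p>For intuition, here is a rough sketch of ColBERT-style late interaction scoring, where every comment is encoded once and pairwise interactions reduce to cheap operations over the cached token embeddings. This is my reading of the idea, not the author's exact implementation:</p>
<pre><code>import torch
import torch.nn.functional as F

def maxsim_score(query_tokens, doc_tokens):
    """ColBERT-style MaxSim: for each query token take the maximum similarity
    over the document tokens, then sum. Inputs are L2-normalized (L, H) tensors."""
    sim = query_tokens @ doc_tokens.T          # (Lq, Ld) token-level similarities
    return sim.max(dim=1).values.sum()

# Encode all 14k comments once, cache the token embeddings, and score any pair
# with a matrix product instead of running a cross-encoder over ~98M pairs.
q = F.normalize(torch.randn(32, 768), dim=-1)
d = F.normalize(torch.randn(48, 768), dim=-1)
print(maxsim_score(q, d))
</code></pre>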
<h2>Other Tricks</h2>
<ul>
<li><strong>Pseudo Labeling</strong>: I used the 2018 competition dataset for pseudo-labeling, which gave a big improvement in score</li>
<li><strong>Muppet Roberta Large</strong>: I got better performance from Muppet Roberta Large backbone by Facebook, probably because it was already trained on sentiment and sentence similarity tasks.</li>
<li><strong>Ensembling</strong>: Ensembling on a single backbone did not help because of the soft pseudo labels. In retrospect, I should have ensembled more.</li>
<li><strong>Loss Function:</strong> I used dice loss with alpha 0.25 from the paper <a href="https://arxiv.org/abs/1911.02855" target="_blank">Dice Loss for Data Unbalanced NLP Tasks</a> to train on the binary labels: severe_toxic, identity_hate, threat, toxic, obscene, and insult. I came across this paper while researching the Feedback Competition, but it also turned out to be useful for this competition. </li>
<li><strong>TPUs</strong>: To speed up the training I used a batch size of 512 with mixed precision. With this training a Roberta large model on 200k+ comments took <5 mins / epoch. I generally use Tensorflow+TPU for NLP because 20 hours of free 8xTPUv3 >> 30 hours of free P100 GPU. </li>
<li><strong>Hyperparameters</strong>: To make up for the large batch size I set beta_2 to 0.99 and used a cyclic learning rate with a max LR of 16e-5. </li>
<li><strong>SWA</strong>: I also implemented an SWA callback in Keras to checkpoint the weights at the end of each cycle and averaged the weights at the end (model soup); see the sketch after this list.</li>
<li><strong>Moving Average</strong>: Small lift with using EMA of 0.99</li>
</ul>
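<p>A minimal sketch of that final weight-averaging step for a Keras model (the checkpoint paths and helper are illustrative, not the actual callback):</p>
<pre><code>import numpy as np

def average_checkpoints(model, checkpoint_paths):
    """Load several checkpoints of the same architecture and average their
    weights element-wise (SWA / model-soup style), in place."""
    weight_sets = []
    for path in checkpoint_paths:
        model.load_weights(path)
        weight_sets.append([w.copy() for w in model.get_weights()])
    averaged = [np.mean(ws, axis=0) for ws in zip(*weight_sets)]
    model.set_weights(averaged)
    return model

# model = average_checkpoints(model, ["cycle1.h5", "cycle2.h5", "cycle3.h5"])
</code></pre>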
<p>I used some of the highest-scoring public notebooks for generating additional pseudo labels. I also trained a model on 2019 competition data to generate even more pseudo labels. This way, for each comment, I had 20+ labels. </p>
<p>Round 0 public LB: 0.833 | Round 1 public LB: 0.837 | Round 2 public LB: 0.844 | Round 3 public LB: 0.842 </p>
<hr>
<p>Hindsight: I really wish I had read this paper by Deepmind: <strong>Multiplicative Interactions and Where to Find Them</strong></p>
|
Jigsaw Rate Severity of Toxic Comments
|
27th Solution - What Worked and What Didn't for Me...
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all congratulations, all for your hard work!</p>
<p><strong>Overview</strong><br>
It was a weird experience: the CV-LB gap was insane and really unstable, so I stopped looking at LB scores after my first couple of submissions unless something extraordinary happened (like a really high or low score); otherwise I just concentrated on my CV scores. Then the question appeared: "Which CV to trust?" So I started by choosing a CV strategy first:</p>
<p><strong>CV Strategy</strong><br>
I tried several, but in the end I decided to use <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s CV strategy <a href="https://www.kaggle.com/its7171/jigsaw-cv-strategy" target="_blank">here</a>. It made sense to me and gave somewhat more stable results.</p>
<p><strong>Data</strong></p>
<ul>
<li>Started with validation data only before getting into other datasets.</li>
<li>Some of these were pretty popular with other competitors (like old competition data, Ruddit, etc.), but I then decided to rule some of them out to avoid possible leaks.</li>
<li><a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/302060" target="_blank">Measuring Hate Speech</a> with some preprocessing. I created less-toxic and more-toxic pairs using several permutations, keeping the closest-scored ones in the dataset, and added samples from it to the main pipeline without replacement.</li>
</ul>
<p><strong>Loss & Metric</strong></p>
<ul>
<li>Even though other loss functions like MSE etc. gave me decent CV scores (0.70+, 0.71+), in the end I decided to go with MarginRankingLoss.</li>
<li>Metric-wise, I used what is described on the competition evaluation page…</li>
</ul>
<p><strong>Model</strong></p>
<ul>
<li><p>My final ensemble consists of transformer-based models only, which include:</p>
<ul>
<li>roberta variants mostly (Including distilbert),</li>
<li>deberta v3 variants,</li>
<li>bert variants (mostly toxic, hate ones)</li>
<li>bart</li></ul></li>
<li><p>Without external data my average CV was around <code>0.71</code>; with external data included, my CV didn't change much (it was a little bit lower than the original scores), but I decided to keep high-scoring external-data models to increase diversity in the final ensemble.</p></li>
<li><p>Models were trained with MarginRankingLoss. I didn't train the actual transformer blocks much (low learning rate for the backbones); instead I trained a final custom attention layer on top of their outputs, plus the output layer (see the sketch after this list).</p></li>
<li><p>I fine-tuned these models with frequent validation checks. They usually converged around three-quarters of the way through the first epoch.</p></li>
</ul>
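<p>A sketch of the differential learning-rate setup described above (a plain MLP head stands in for the custom attention layer; names and values are illustrative):</p>
<pre><code>import torch
from transformers import AutoModel

backbone = AutoModel.from_pretrained("roberta-base")
head = torch.nn.Sequential(            # stand-in for the custom attention + output layer
    torch.nn.Linear(backbone.config.hidden_size, 256),
    torch.nn.Tanh(),
    torch.nn.Linear(256, 1),
)

# Keep the transformer blocks nearly frozen with a tiny learning rate and
# train mostly the newly added head on top of the backbone outputs.
optimizer = torch.optim.AdamW([
    {"params": backbone.parameters(), "lr": 1e-6},
    {"params": head.parameters(), "lr": 1e-4},
])
</code></pre>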
<p>Final submission was weighted sum of the rankings from these models.</p>
<p><strong>Things That Might Have Worked</strong></p>
<ul>
<li>Models with sentence-transformer embeddings. I created high-dimensional matrices by concatenating different base transformer model embeddings. (This one was my second final selection and barely made it into the silver zone; I didn't have time to include more models, it was only roberta and roberta-large.)</li>
<li>Combining different approaches at final ensemble (linear, transformer, embedding ones etc.)</li>
</ul>
<p><strong>Things That Didn't Work</strong> </p>
<ul>
<li>Linear Models with TF-IDF</li>
<li>Models with word embeddings (Glove, fasttext etc.)</li>
<li>Linear models with transformer embeddings combined with TF-IDF</li>
<li>Pretraining masked language models on toxic datasets before finetuning on actual task.</li>
<li>Some transformer approaches like Electra (I was expecting a better CV score from discriminators)</li>
</ul>
<p>In the end, some of the earlier ensembles did much better than my final selection, but that's the luck part of the competition :) As I said, this competition was an interesting experience for me; I learned a bunch of stuff along the way…</p>
|
Jigsaw Rate Severity of Toxic Comments
|
41st solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! <br>
The amount of data and the threshold for toxic vs. nontoxic may be different.</p>
<p>Thanks!<br>
In my case, the weighted average boosted CV by about 0.002~0.003 over the simple average.</p>
<p>Thank you!</p>
<p>Congratulations, all winners. <br>
I learned a lot about NLP in this competition. Thank you!</p>
<p>Once the ranking is confirmed, I will be a Competition Master! 🙂</p>
<h3>Overview</h3>
<p><img src="https://user-images.githubusercontent.com/70784979/152980279-cb5b02a9-63f3-499a-a60b-2e6e9a668114.png" alt="image"></p>
<h3>Data</h3>
<p>I created two types of extra data:</p>
<ol>
<li>binary label data (toxic or nontoxic)</li>
<li>more_toxic, less_toxic data</li>
</ol>
<h3>CV strategy</h3>
<p><a href="https://www.kaggle.com/its7171/jigsaw-cv-strategy" target="_blank">Jigsaw CV strategy</a></p>
<h3>Models</h3>
<table>
<thead>
<tr>
<th>model</th>
<th>CV</th>
<th>loss</th>
</tr>
</thead>
<tbody>
<tr>
<td>microsoft/deberta-large</td>
<td>0.7032</td>
<td>BCE</td>
</tr>
<tr>
<td>gpt2-medium</td>
<td>0.6993</td>
<td>BCE</td>
</tr>
<tr>
<td>distilroberta-base</td>
<td>0.7012</td>
<td>MarginRankingLoss</td>
</tr>
<tr>
<td>microsoft/deberta-base</td>
<td>0.7043</td>
<td>MarginRankingLoss</td>
</tr>
<tr>
<td>xlnet-base-cased</td>
<td>0.6997</td>
<td>MarginRankingLoss</td>
</tr>
<tr>
<td>roberta-large</td>
<td>0.702</td>
<td>MarginRankingLoss</td>
</tr>
<tr>
<td>facebook/bart-base</td>
<td>0.7027</td>
<td>MarginRankingLoss</td>
</tr>
<tr>
<td>Hate-speech-CNERG/dehatebert-mono-english</td>
<td>0.6769</td>
<td>MarginRankingLoss</td>
</tr>
<tr>
<td>roberta-base</td>
<td>0.7017</td>
<td>MarginRankingLoss</td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>0.7014</td>
<td>MarginRankingLoss</td>
</tr>
</tbody>
</table>
<p>The larger model had a better CV.</p>
<p>max_len : 256</p>
<p>optimizer : AdamW<br>
lr : 1e-5 <br>
weight_decay : 0.1<br>
epochs : 1~3<br>
scheduler : get_linear_schedule_with_warmup</p>
<h3>Ensemble</h3>
<p>I used Optuna, as described below:</p>
<pre><code># Blend the OOF predictions of several models and tune the blend weights with Optuna.
import numpy as np
import pandas as pd
import optuna

cols = ['less_toxic_pred', 'more_toxic_pred']
oof = []
oof.append(pd.read_csv('../input/jigsaw4-model2/bi003_deberta-large/oof.csv', usecols=cols).values)
oof.append(pd.read_csv('../input/jigsaw4-model1/exp008_roberta-base/oof.csv', usecols=cols).values)
n_models = len(oof)
n_data = oof[0].shape[0]

def calc_score(weight):
    # Weighted sum of each model's (less_toxic, more_toxic) predictions,
    # scored by the fraction of pairs ranked in the correct order.
    pred = np.zeros([n_data, 2])
    for p, w in zip(oof, weight):
        pred += p * w
    score = (pred[:, 0] < pred[:, 1]).sum() / pred.shape[0]
    return score

class Objective:
    def __init__(self, n_models):
        self.n_models = n_models

    def __call__(self, trial):
        weight = [trial.suggest_uniform('weight' + str(n), 0, 1) for n in range(self.n_models)]
        return calc_score(weight)

max_iter = 1800
SEED = 29
objective = Objective(n_models)
sampler = optuna.samplers.TPESampler(seed=SEED)
study = optuna.create_study(sampler=sampler, direction='maximize')
study.optimize(objective, n_trials=max_iter, n_jobs=-1)
</code></pre>
|
Jigsaw Rate Severity of Toxic Comments
|
4th - This is Great! - Shared Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>, </p>
<p>It's great for me to see you write here!</p>
<p>Hi <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, </p>
<p>Thanks for your greeting!</p>
<p><a href="https://www.kaggle.com/pdnartreb" target="_blank">@pdnartreb</a> and his thread <a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/304441" target="_blank">https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/304441</a> comments about <a href="https://www.kaggle.com/sorenj" target="_blank">@sorenj</a> and his notebooks <a href="https://www.kaggle.com/sorenj/perspectiveapi-tuning" target="_blank">https://www.kaggle.com/sorenj/perspectiveapi-tuning</a> and <a href="https://www.kaggle.com/sorenj/scoring-ruddit-comments" target="_blank">https://www.kaggle.com/sorenj/scoring-ruddit-comments</a></p>
<p>In the first one you can see this piece of code,</p>
<p><code>validation_annotations = validation.set_index('more_toxic').join(
comments[['comment_id', 'text']].set_index('text'), how='inner').set_index('less_toxic').join(
comments[['comment_id', 'text']].set_index('text'),
lsuffix='_more', rsuffix='_less', how='inner').reset_index(drop=True)</code></p>
<p>There you can follow the answer!</p>
<p>Hi <a href="https://www.kaggle.com/virilo" target="_blank">@virilo</a>, thanks for your congratulations, I have just shared the complete solution if it answers your question!</p>
<p>I vaguely referred to "cv" just to indicate the term validation in a generic way. Actually only "tito cv" is really cross validation in the solution. When I refer to "host cv" it is not technically a cross validation but only a selection of the host validation data. </p>
<p>You can use this link <a href="https://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection" target="_blank">https://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection</a> to read more about other validation sets beyond this competition.</p>
<p>Your thread finished opening my eyes!</p>
<p>Thanks to you <a href="https://www.kaggle.com/pdnartreb" target="_blank">@pdnartreb</a> </p>
<p>This is Great!</p>
<p>I just got the results and I'm super happy.</p>
<p>Thanks to everyone who participated in the competition, to the host who proposed the challenge and to kaggle for making it easy for me to use their GPU resources.</p>
<p>I won't have much time today until the end of the day but I'll quickly share my solution here, which was a final combo of roberta base (tito cv 0.6983) + roberta large (tito cv 0.7014) with detoxify (host cv 0.7036).</p>
<p>Thanks to everyone once again!</p>
<p>UPDATED EDIT: </p>
<h5>Shared Code</h5>
<p>I just shared the solution in the following training and inference notebooks,</p>
<p>Training:</p>
<ul>
<li><a href="https://www.kaggle.com/coreacasa/jigsaw4-luke-base-training-tito-cv-strategy/notebook" target="_blank">https://www.kaggle.com/coreacasa/jigsaw4-luke-base-training-tito-cv-strategy/notebook</a></li>
<li><a href="https://www.kaggle.com/coreacasa/jigsaw4-luke-large-training-tito-cv-strategy/notebook" target="_blank">https://www.kaggle.com/coreacasa/jigsaw4-luke-large-training-tito-cv-strategy/notebook</a></li>
</ul>
<p>Inference:</p>
<ul>
<li><a href="https://www.kaggle.com/coreacasa/jigsaw4-merging-luke-roberta-with-detoxify/notebook?scriptVersionId=87076833" target="_blank">https://www.kaggle.com/coreacasa/jigsaw4-merging-luke-roberta-with-detoxify/notebook?scriptVersionId=87076833</a></li>
</ul>
<p>Acknowledgment and Credits to the authors in the comments section that I repeat here (I'm sorry if I didn't include someone): </p>
<ul>
<li><p><a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> and his notebooks <a href="https://www.kaggle.com/yasufuminakama/jigsaw4-luke-base-starter-train" target="_blank">https://www.kaggle.com/yasufuminakama/jigsaw4-luke-base-starter-train</a> and <a href="https://www.kaggle.com/yasufuminakama/jigsaw4-luke-base-starter-sub" target="_blank">https://www.kaggle.com/yasufuminakama/jigsaw4-luke-base-starter-sub</a></p></li>
<li><p><a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> for their notebook <a href="https://www.kaggle.com/its7171/jigsaw-cv-strategy" target="_blank">https://www.kaggle.com/its7171/jigsaw-cv-strategy</a></p></li>
<li><p><a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a> and his dataset <a href="https://www.kaggle.com/steubk/detoxify-sourcemodels" target="_blank">https://www.kaggle.com/steubk/detoxify-sourcemodels</a></p></li>
<li><p><a href="https://www.kaggle.com/pdnartreb" target="_blank">@pdnartreb</a> and his thread <a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/304441" target="_blank">https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/304441</a> </p></li>
<li><p>(from previous discussion) <a href="https://www.kaggle.com/sorenj" target="_blank">@sorenj</a> and his notebooks <a href="https://www.kaggle.com/sorenj/perspectiveapi-tuning" target="_blank">https://www.kaggle.com/sorenj/perspectiveapi-tuning</a> and <a href="https://www.kaggle.com/sorenj/scoring-ruddit-comments" target="_blank">https://www.kaggle.com/sorenj/scoring-ruddit-comments</a></p></li>
</ul>
<p>UPDATED EDIT2: </p>
<h5>Overview</h5>
<p>Jigsaw Rate has continued the motivation of the 2018-2019-2020 challenges. Beyond the new metric (task) that evaluated the final performance of the solutions, I would point out several peculiarities of this new challenge with respect to the previous ones: (1) the host did not provide a specific training dataset, (2) the host did provide a validation dataset, and (3) the public leaderboard scores were only a scarce reference, representing just 5% of the final result.</p>
<p>As the competition progressed, and following from (1) and (2) above, I thought that, as important as the diversity of models, the diversity of training datasets could perhaps play an even stronger role this time. Likewise, following from (2) and (3), and given how thankless any submission to the public leaderboard was, I decided in the last week to give full priority to validation scores, regardless of whether the public board confirmed improvements in performance or not.</p>
<h6>Diversity of Datasets:</h6>
<p>-From the previous Jigsaw competitions I worked with,<br>
(2018 Jigsaw1) - Toxic Comment Classification Challenge<br>
(2019 Jigsaw2) - Jigsaw Unintended Bias in Toxicity Classification<br>
(2020 Jigsaw3) - Multilingual Toxic Comment Classification</p>
<p>-From this competition,<br>
(2021 Jigsaw4) - 'validation_data.csv'.</p>
<p>Although they are not part of the solution, I also tried external data, without success. I'm referring to the ruddit and hate_speech data, thanks <a href="https://www.kaggle.com/andre112" target="_blank">@andre112</a> and thanks <a href="https://www.kaggle.com/rajkumarl" target="_blank">@rajkumarl</a>. Specifically, I extracted samples of paired texts for each of them, given the enormous size that the combination of all the crosses implied. Perhaps I made a mistake in this, and I should have sampled the texts first so as to work with a number of resulting paired texts appropriate to my available resources.</p>
<h6>Validation Strategies:</h6>
<p>As I indicated in a brief reference at the beginning of this discussion, the final solution uses 2 validations,</p>
<p>1) tito-cv (after the original author's nickname), applied to validation_data.csv of this competition. This strategy extends a 5-fold cross-validation over non-overlapping groups of 4143 leak-free (least toxic-most toxic) text pairs (14251 different texts).</p>
<p>2) host-cv is not a cross-validation (sorry if the vagueness of 'cv' is confusing) but simply an extraction of text pairs (least toxic-most toxic) from the validation_data.csv dataset (same as above) that match the comments_to_score.csv dataset; in total 8334 pairs were found.</p>
<h6>Models and Trainings:</h6>
<p>Taking into account the datasets that are ingested, we can divide this part into two sections,</p>
<p>1) Two Luke-Roberta models were fine-tuned on the validation_data.csv of this competition, please visit <a href="https://github.com/studio-ousia/luke" target="_blank">https://github.com/studio-ousia/luke</a>.</p>
<p>I forked Y.Nakama's notebook Luke-base (Roberta-Base) by replacing the original validation pipeline (groupkfold over worker_id) with the aforementioned tito-cv. Saving the best of 15 epochs for each fold, I kept the rest of the parameters untouched (among others… max_len= 64, batch_size= 64, AdamW, lr's= 1e-5, 1/2 cycle cosine) as well as the loss function Margin-Ranking-Loss with margin= 0.5.</p>
<p>I trained a second model this time changing the base model to Roberta-Large, only 5 epochs keeping everything else the same. On these 2 models, a result of linear optimized blending was tito-cv 0.705 (I share this new notebook <a href="https://www.kaggle.com/coreacasa/jigsaw4-luke-base-large-tito-cv-merging-optimize" target="_blank">here</a>) although I did not use this blend in the end.</p>
<p>2) For the training data of the Jigsaw1 (original), Jigsaw2 (unbiased) and Jigsaw3 (multilingual) competitions, I used already-trained models (I did not train them myself) to predict toxic comments for those 3 competitions, please visit <a href="https://github.com/unitaryai/detoxify" target="_blank">https://github.com/unitaryai/detoxify</a>.</p>
<table>
<thead>
<tr>
<th>Model name</th>
<th>Transformer type</th>
<th>Data from</th>
</tr>
</thead>
<tbody>
<tr>
<td>original</td>
<td>bert-base-uncased</td>
<td>Toxic Comment Classification Challenge</td>
</tr>
<tr>
<td>unbiased</td>
<td>roberta-base</td>
<td>Unintended Bias in Toxicity Classification</td>
</tr>
<tr>
<td>multilingual</td>
<td>xlm-roberta-base</td>
<td>Multilingual Toxic Comment Classification</td>
</tr>
</tbody>
</table>
<p>With a high performance for single models (not ensembles) in the leaderboards of each challenge and evaluating their individual performance on host validation, I considered it a great match to include these predictions in the final solution.</p>
<p>As for the labels on which these models return a probability value, we would have,</p>
<table>
<thead>
<tr>
<th>Model name</th>
<th>Labels</th>
</tr>
</thead>
<tbody>
<tr>
<td>original</td>
<td>{'toxicity', 'severe_toxicity', 'obscene', 'threat', 'insult', 'identity_attack'}</td>
</tr>
<tr>
<td>unbiased</td>
<td>{'toxicity', 'severe_toxicity', 'obscene', 'identity_attack', 'insult', 'threat', 'sexual_explicit'}</td>
</tr>
<tr>
<td>multilingual</td>
<td>{'toxicity', 'severe_toxicity', 'obscene', 'identity_attack', 'insult', 'threat', 'sexual_explicit'}</td>
</tr>
</tbody>
</table>
<p>The next step was to find a method to translate these multivariable outputs into a single output that measures toxicity, beyond the 'toxicity' label itself, and that would work with the 'Average Agreement with Annotators' in this competition. </p>
<p>In the first of <a href="https://www.kaggle.com/sorenj" target="_blank">@sorenj</a>'s notebooks, the author gives an interesting answer to the above by proposing a random search of weights for each label in order to find an optimum of the metric on host-cv. That same approach, with some nuances, was the one I finally followed.</p>
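<p>As a rough illustration of that idea (not the author's notebook), a random search over label weights scored by pairwise agreement on the (less toxic, more toxic) pairs could be sketched like this; the dataframes of per-label Detoxify probabilities are assumed inputs.</p>
<pre><code>import numpy as np
import pandas as pd

labels = ["toxicity", "severe_toxicity", "obscene", "threat", "insult", "identity_attack"]
# Assumed inputs: Detoxify probabilities for the two texts of each validation pair.
less = pd.DataFrame(np.random.rand(1000, len(labels)), columns=labels)
more = pd.DataFrame(np.random.rand(1000, len(labels)), columns=labels)

rng = np.random.default_rng(0)
best_w, best_acc = None, 0.0
for _ in range(2000):                       # random search over label weights
    w = rng.random(len(labels))
    w /= w.sum()
    acc = (less.values @ w < more.values @ w).mean()   # agreement with annotators
    if acc > best_acc:
        best_w, best_acc = w, acc

print(best_acc, dict(zip(labels, np.round(best_w, 3))))
</code></pre>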
<h6>Combining All Things:</h6>
<p>Starting from the previous approach, I tried several alternatives to merge the Detoxify labels. Rather than looking for a solution close to the linear optimum (by increasing the number of iterations), I looked at the variability of the result of each iteration on host-cv. Since nothing guaranteed that the private texts belonged to the same family as the validation ones, I thought it might be a good idea to try a non-optimized submission and simply play with the results of a single iteration.</p>
<p>To later combine the 3 models I did the same. The idea was not to optimize twice on the same texts and the same workers. I also combined the Luke models this way, and therefore I did not use the training OOFs but applied the combination directly to the predictions.</p>
<p>The final ensemble of the Luke models with the Detoxify models was done on ranks, and this time I used no maths or randomness, simply the Kaggle art of balancing dataset diversity and model diversity (1/3 for the former and 2/3 for the latter). A final action was to randomize the order of any duplicate scores in the final submission.</p>
<h6>Summary of Results:</h6>
<p>I repeat the well-known validation scores and add the evaluation on the public and private leaderboard,</p>
<table>
<thead>
<tr>
<th>Model name</th>
<th>Validation</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>luke-base</td>
<td>0.6983</td>
<td>0.79993</td>
<td>0.79167</td>
</tr>
<tr>
<td>luke-large</td>
<td>0.7014</td>
<td>0.80287</td>
<td>0.79898</td>
</tr>
<tr>
<td>original</td>
<td>0.69450</td>
<td>0.74507</td>
<td>0.79883</td>
</tr>
<tr>
<td>unbiased</td>
<td>0.69750</td>
<td>0.73364</td>
<td>0.79768</td>
</tr>
<tr>
<td>multilingual</td>
<td>0.69138</td>
<td>0.73919</td>
<td>0.79579</td>
</tr>
</tbody>
</table>
<p>Next step,</p>
<table>
<thead>
<tr>
<th>Ensemble</th>
<th>Validation</th>
<th>Public</th>
<th>Private</th>
<th>Weights (norm)</th>
</tr>
</thead>
<tbody>
<tr>
<td>luke's</td>
<td>0.75150(leaked)</td>
<td>0.80439</td>
<td>0.79698</td>
<td>[0.53215757 0.46784243]</td>
</tr>
<tr>
<td>detoxify's</td>
<td>0.70362</td>
<td>0.74605</td>
<td>0.80791</td>
<td>[0.38403517 0.43512979 0.18083503]</td>
</tr>
</tbody>
</table>
<p>Last step,</p>
<table>
<thead>
<tr>
<th>Submission</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>final ensemble</td>
<td>0.77587</td>
<td>0.81282</td>
</tr>
</tbody>
</table>
<h6>Checking Reproducibility and Randomness:</h6>
<p>In case some strange phenomenon had been introduced in the solution, I checked <a href="https://www.kaggle.com/coreacasa/jigsaw4-checkup-reproducibility-4th-place-solution?scriptVersionId=87635669" target="_blank">here</a> its reproducibility.</p>
<p>To check how important randomness (seed) is in the solution, I ran <a href="Jigsaw4-Checkup-Randomness-4th-Place-Solution" target="_blank">this notebook</a> and tried various seed changes (or intermediate additions), with the results staying at gold level.</p>
<p>(In any case… yes, this time luck was on my side!)</p>
<p>That's all folks!</p>
|
Jigsaw Rate Severity of Toxic Comments
|
52nd - Silver Medal - RAPIDS Forest Inference Library!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Patrick</p>
<h1>Trust Your CV - 1800th Public to 52nd Private!</h1>
<p>Thanks for a great competition. Public leaderboard was very misleading since it only contained 5% of test data. For my final two submissions, I chose two submissions that were ranked 1500th and 1800th on public LB. They both finished at 52nd private. Each had my best leak-free CV score of 0.707, public LB 0.802, private LB 0.804!</p>
<h1>RAPIDS FIL infers 100 million rows per second!</h1>
<p>In this competition, i chose to use RAPIDS Forest Inference Library because it can infer up to 100 million rows of data per second! See blog <a href="https://medium.com/rapids-ai/rapids-forest-inference-library-prediction-at-100-million-rows-per-second-19558890bc35" target="_blank">here</a></p>
<h1>Feature Engineering - roBERTa XGB</h1>
<p>For each comment, we extract 768 features from <code>roBERTa-base</code> that was pretrained on old comp data with old comp targets. And we extract 1024 features from <code>roBERTa-large</code> that was pretrained on old comp data with old comp targets. Next we train XGB using these features and infer with RAPIDS FIL. We did not use Tfidf nor linear regression models.</p>
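<p>The published notebooks linked below contain the exact pipeline; purely as a hedged sketch of the idea (transformer feature extraction, then a tree model on pair features), something like this is meant, where <code>less_texts</code> / <code>more_texts</code> are assumed lists of paired comments and the backbone name is a placeholder:</p>
<pre><code>import numpy as np
import torch
import xgboost as xgb
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")          # placeholder; the real one was comp-pretrained
encoder = AutoModel.from_pretrained("roberta-base").eval()

@torch.no_grad()
def embed(texts, batch_size=32):
    feats = []
    for i in range(0, len(texts), batch_size):
        enc = tok(texts[i:i + batch_size], padding=True, truncation=True, return_tensors="pt")
        feats.append(encoder(**enc).last_hidden_state[:, 0].numpy())   # 768-d [CLS] features
    return np.vstack(feats)

less_emb, more_emb = embed(less_texts), embed(more_texts)    # assumed paired comment lists
# Each training row concatenates two comments' features;
# the label says whether the second comment is the more toxic one.
X = np.vstack([np.hstack([less_emb, more_emb]), np.hstack([more_emb, less_emb])])
y = np.concatenate([np.ones(len(less_emb)), np.zeros(len(more_emb))])
clf = xgb.XGBClassifier(n_estimators=500, max_depth=6, tree_method="hist")
clf.fit(X, y)
</code></pre>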
<h1>Infer Every Possible Pair - RAPIDS FIL</h1>
<p>Since inference is so fast, instead of predicting 14,000 toxic scores, we can predict the 196,000,000 combinations of every pair of toxic test comments. During inference we build a RAPIDS cuDF dataframe with <code>196,000,000</code> rows and <code>3584</code> columns, which represents every possible pair (<code>14,000 * 14,000 = 196,000,000</code>). Then we use RAPIDS FIL to predict every row and get 196,000,000 probabilities.</p>
<h1>Convert Pair Probabilities into Toxic Scores</h1>
<p>Based on the 196,000,000 comparisons of each of the 14,000 test comments with every other comment, we compute a toxic score for each of the 14,000 test comments and merge it onto <code>submission.csv</code>.</p>
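<p>One simple way to turn the pair matrix into per-comment scores (an assumption on my part, not necessarily the notebook's exact rule) is to give each comment its average probability of "winning" against every other comment:</p>
<pre><code>import numpy as np

n = 5                                # stands in for the 14,000 test comments
P = np.random.rand(n, n)             # P[i, j] = predicted prob. that comment j is more toxic than i
np.fill_diagonal(P, 0.5)

score = P.mean(axis=0)               # mean "win" probability of each comment against the rest
ranking = np.argsort(-score)         # most toxic first
</code></pre>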
<h1>Training Code and Inference Code</h1>
<p>I published train code <a href="https://www.kaggle.com/cdeotte/rapids-xgb-fil-silver-medal-train-0-804" target="_blank">here</a>. I published infer code <a href="https://www.kaggle.com/cdeotte/rapids-xgb-fil-silver-medal-infer-0-804" target="_blank">here</a></p>
<p>Thanks vad13irt</p>
<p>Thanks. Train code is <a href="https://www.kaggle.com/cdeotte/rapids-xgb-fil-silver-medal-train-0-804" target="_blank">here</a> and infer code is <a href="https://www.kaggle.com/cdeotte/rapids-xgb-fil-silver-medal-infer-0-804" target="_blank">here</a>. Training is only 11 folds on 30,000 rows of validation data. Inference is 196,000,000 rows on all possible pairs of test comments.</p>
<p>Thanks <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> . Congrats on your great finish. RAPIDS FIL is incredible. I've been using it at work and wanted to try it in a competition.</p>
<p>Whenever you have a tree model (XGB, LGBM, Sklearn Random Forest, whatever), you can load it with RAPIDS FIL and infer. If you infer XGB as FIL on GPU it is like 10x faster than inferring XGB as XGB on GPU! So it isn't just about GPU, the FIL library evaluates the tree splits more efficiently than any of the tree libraries by themselves. </p>
<p>Congrats on achieving silver medal.</p>
<p>It will take between 1-7 days. Kaggle will finalize the private LB by removing teams that violated rules. Afterward, the comp will be finalized and you get your medal.</p>
<p>Thanks. Are you asking about my journey in this competition? Or my life's data science journey?</p>
|
Jigsaw Rate Severity of Toxic Comments
|
5th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I decided to focus on maximizing only the validation score in the last month of the competition. The public LB did not seem very useful because it had only 5% of the data and there was no correlation between validation score and public LB score. <br>
I used 2 datasets for training models: the Ruddit dataset and the Toxic Comments dataset (<a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge" target="_blank">https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge</a>). I tried other datasets, but they did not improve my validation score at all.<br>
I used “validation_data.csv” for validation only.</p>
<h2>Ruddit dataset / models</h2>
<p>Ruddit dataset includes tuples (A, B, C, D) where “A” is the least toxic and “D” is the most toxic. Therefore, I used those tuples to create pairs (X, Y) where X is less toxic than Y. <br>
Then I trained 2 models with MarginRankingLoss:<br>
1) “roberta-large” with linear layer <br>
2) “unitary/unbiased-toxic-roberta” with linear layer<br>
I used layerwise learning rate decay (from 1e-5 to 1e-4). Training was done using 5-fold cross-validation. Each model was trained for 3 epochs; after each epoch I saved the model with the best accuracy.</p>
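<p>As a small sketch of the pair construction step (toy texts, not the actual Ruddit rows), every ordered 4-tuple can yield 6 (less toxic, more toxic) pairs:</p>
<pre><code>from itertools import combinations

# Each annotation tuple is ordered from least to most toxic: (A, B, C, D).
tuples = [("comment A", "comment B", "comment C", "comment D")]

pairs = []                                   # (less_toxic_text, more_toxic_text)
for t in tuples:
    for i, j in combinations(range(len(t)), 2):
        pairs.append((t[i], t[j]))           # t[i] comes earlier, so it is less toxic

print(len(pairs))                            # 6 pairs per 4-tuple
</code></pre>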
<h2>Toxic comments dataset / models</h2>
<p>I used that dataset to train regression models. Data preparation: 1) remove items that have the same text as in validation_data.csv 2) compute a toxic target (<code>y = toxic*0.32 + severe_toxic*1.82 + obscene*0.16 + threat*1.5 + insult*0.64 + identity_hate*1.5</code>) 3) create a train dataset with positive toxic targets and negative toxic targets.<br>
After data preparation I trained 3 models:<br>
1) TfIdf + Ridge. I used cleaned text and 4 extra features: length, portion of punctuation in text, portion of chars with uppercase in text, count of swear words (these features slightly improve score on validation_data). I trained models on 8 folds with grid search of hyperparameters ( “ngram_range”, “alpha”, “fit_intercept”) and chose the model with the highest score on validation_data.<br>
2) Regression model based on “unitary/unbiased-toxic-roberta” with linear custom layer and MSELoss. I trained that model by using 5-folds cross-validation with 5 epochs. The model with the minimal loss on validation fold was saved. Models were trained with layerwise learning rate. <br>
3) Regression model based on “unitary/unbiased-toxic-roberta” with attention custom layer and MSELoss. I trained that model by using 5-folds cross-validation with 5 epochs. The model with the minimal loss on validation fold was saved. Models were trained with layerwise learning rate.</p>
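<p>A minimal sketch of the TfIdf + Ridge model with the extra hand-made features described above (the swear-word list, toy texts and targets are placeholders, and the real pipeline used 8 folds and a grid search):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

SWEAR_WORDS = {"idiot", "stupid"}                       # placeholder list

def extra_features(texts):
    s = pd.Series(texts)
    length = s.str.len()
    punct = s.str.count(r"[!?.,;:]") / length.clip(lower=1)   # share of punctuation
    upper = s.str.count(r"[A-Z]") / length.clip(lower=1)      # share of uppercase chars
    swears = s.apply(lambda t: sum(w in t.lower() for w in SWEAR_WORDS))
    return np.c_[length, punct, upper, swears]

texts = ["You are an IDIOT!!!", "Thanks for the helpful answer."]   # toy data
y = np.array([1.5, 0.0])                                            # weighted toxicity targets

vec = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = hstack([vec.fit_transform(texts), csr_matrix(extra_features(texts))])
model = Ridge(alpha=1.0).fit(X, y)
</code></pre>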
<h2>Models ensemble</h2>
<p>After training the models I used optuna to weigh models in ensemble. Steps:<br>
1) normalizing logits of models using min, max scaling 2) weighing models using weights 0 or 1. <br>
After some experimentation I got maximum validation score - 0.709 with public LB score 0.819. <br>
Final ensemble – sum of the models:</p>
<table>
<thead>
<tr>
<th><strong>№</strong></th>
<th><strong>Model</strong></th>
<th><strong>Dataset</strong></th>
<th><strong>Count of models</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>roberta-large with MarginRankingLoss</td>
<td>Ruddit</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>unitary/unbiased-toxic-roberta with MarginRankingLoss</td>
<td>Ruddit</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>Tfidf + RIDGE</td>
<td>Toxic comments</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>unitary/unbiased-toxic-roberta with custom linear layer with MSELoss</td>
<td>Toxic comments</td>
<td>3</td>
</tr>
<tr>
<td>5</td>
<td>unitary/unbiased-toxic-roberta with custom attention layer with MSELoss</td>
<td>Toxic comments</td>
<td>2</td>
</tr>
</tbody>
</table>
<h2>Notebook</h2>
<p>Link to my submission - <a href="https://www.kaggle.com/alexander1980/best-cv-ens?scriptVersionId=87213915" target="_blank">https://www.kaggle.com/alexander1980/best-cv-ens?scriptVersionId=87213915</a><br>
P.S.: Sorry for duplications in my code.</p>
<p><a href="https://www.kaggle.com/asarvazyan" target="_blank">@asarvazyan</a> , thank you!</p>
|
Jigsaw Rate Severity of Toxic Comments
|
62nd Place solution and what I learnt and tried
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all I am pretty new in Kaggle Just started with Ventilator being my first real comp . Also learning along the way with great people like Chris, Abhishek and lot of great folks that share knowledge and great notebooks .</p>
<p>My approach here might not be the most unique, but I still want to share my journey…</p>
<ul>
<li><p>Started with roberta models, trying to tune them and get the most juice out of those. Got some good tips from <a href="https://www.kaggle.com/readoc" target="_blank">@readoc</a>.</p></li>
<li><p>Got a great suggestion for deberta models from <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> and started to explore that. Though I should have paid more attention to these, as they had great val scores and maybe I would have finished higher.</p></li>
<li><p>Started to train XLNet and distilbert and to ensemble. Got some good LB scores.</p></li>
<li><p>Saw the LB dropping with ensembles, so selfishly tried to ensemble with linear models.</p></li>
<li><p>As the LB was very confusing, I started to trust CV. I developed an ensemble CV strategy by taking the union of folds for CV, trusted it, and ensembled based on it by slowly adding models.</p></li>
<li><p>Realized that BERT models were really good for CV, so started adding those.</p></li>
<li><p>Tried to train on Ruddit and toxic data from old comps, along with some datasets people shared, using MSE loss. Trained distil, xlnet, deberta and roberta models on this and added them to the ensemble. Some models with leaky ReLU worked well for distil, and in some I tinkered with various archs. Ruddit data scored lower on the public LB but ensembled well. Added one linear model using FastText with Ridge from a public NB.</p>
<ul>
<li>For 1-2 of my external-data models I also tried to augment the data a bit, which helped CV for my distilbert models:<br>
distilbert old_toxic distil-tx-old mse 0.013031223 0.633618972 without Aug<br>
distilbert_aug_relu old_toxic distil-tx-old mse 0.013226555 0.648996944 with Aug<br>
distilAug_LRelu old_toxic distil-tx-old mse 0.013024052 0.6599907 with Aug<br>
All ideas for NLP augs came from this notebook <a href="https://www.kaggle.com/shonenkov/nlp-albumentations" target="_blank">https://www.kaggle.com/shonenkov/nlp-albumentations</a> </li></ul></li>
<li><p>Finally, on the last day, I thought to give deberta v3 another shot and added it to the ensemble. It gave a good CV but I forgot to select it :D.</p></li>
<li><p>Also tried a weighted ensemble with Optuna for tuning the weights, like people already shared. This would have made me land higher, but I somewhat didn't trust it, so I went with a simple average ensemble. I had used the validation data, which I split up, scored and then put back together to get the score. I think many would have used it like this for weighted ensembles.</p></li>
</ul>
<p>Final ensembles and one selected in bold </p>
<table>
<thead>
<tr>
<th>ENS</th>
<th>TYPE</th>
<th>LB</th>
<th>CV</th>
<th>PRIVATE</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Ens with relu with toxic and ruddit data</strong></td>
<td>Avg</td>
<td>87.6</td>
<td>70.46728972</td>
<td>0.80434</td>
</tr>
<tr>
<td>Ens with deberta</td>
<td>Avg</td>
<td>87.1</td>
<td>70.99</td>
<td>0.80634</td>
</tr>
<tr>
<td>weighted ens without deb</td>
<td>Weighted</td>
<td>86.785</td>
<td>72.1528</td>
<td>0.80615</td>
</tr>
</tbody>
</table>
<p><strong>Models tried</strong><br>
deberta v3<br>
distil<br>
roberta<br>
xlnet</p>
<p><strong>datasets tried</strong> <br>
ruddit<br>
clean validation data<br>
old comp data .</p>
<p>Finally went with deberta, distil and roberta, and with distil and roberta, for the final ensembles.</p>
<p>Anyway, it was good to learn to structure experiments better at the end and to have good, trusted CV code. I realized this pretty late and maybe missed a lot of great models I had tried since the start of the comp.</p>
<p>But at the end there was always new things to learn and learn to improve .. Thanks to all </p>
<p>Thanks to you... it was a great suggestion and most top rankers, I think, tried it. Though the low public LB score made me lose confidence in it initially, which was a bad decision. I should have had more confidence in it and its good CV 😊</p>
|
Jigsaw Rate Severity of Toxic Comments
|
6th Place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>Congratulations to all winners. <br>
We were really lucky: we got a gold medal. None of our members could have imagined this result. As pointed out <a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/306366" target="_blank">in the 7th place solution</a>, we had a lot of models trained mainly on Jigsaw 1st directly, and it is possible that their performance was better on the private test. This was simply luck.</p>
<h2>Data</h2>
<ul>
<li><a href="https://www.kaggle.com/julian3833/jigsaw-toxic-comment-classification-challenge" target="_blank">jigsaw-toxic-comment-classification-challenge(jigsaw_1)</a></li>
<li><a href="https://www.kaggle.com/rajkumarl/ruddit-jigsaw-dataset" target="_blank">ruddit jigsaw dataset</a></li>
<li><a href="https://www.kaggle.com/nkitgupta/jigsaw-regression-based-data" target="_blank">jigsaw_regression_data</a></li>
</ul>
<p>When we used jigsaw_1 to train regression models, we calculated target values with the following formula.</p>
<p><code>y = toxic*0.32 + severe_toxic*1.82 + obscene*0.16 + threat*1.5 + insult*0.64 + identity_hate*1.5</code></p>
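<p>In pandas this target can be computed in one line; a small hedged sketch (the file name is an assumption):</p>
<pre><code>import pandas as pd

weights = {"toxic": 0.32, "severe_toxic": 1.82, "obscene": 0.16,
           "threat": 1.5, "insult": 0.64, "identity_hate": 1.5}

df = pd.read_csv("jigsaw_1_train.csv")                       # assumed file name
df["y"] = sum(df[col] * w for col, w in weights.items())     # single toxicity target
</code></pre>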
<h2>CV strategy</h2>
<p>We used validation_data.csv to evaluate the CV score. In order to evaluate more accurately, we removed from the training data all sentences that overlapped with validation_data.</p>
<h2>Preprocessing / Engineering</h2>
<h3><i>mogmog part</i></h3>
<p>The training data were jigsaw1(applied under sampling) and Ruddit. We apply multiple patterns of text preprocessing. After preprocessing, TF-IDF (and SVD for lgb) , gensim_embedding, Basic feature(Word count, Character count), FastText embedding were applied on each of them to generate features.</p>
<h3><i>kma### part</i></h3>
<p>The training data were jigsaw_1, ruddit, jigsaw_regression_data. we applied text cleaning on jigsaw_1 and ruddit, and none on jigsaw_regression_data.<br>
we applied TF-IDF for each of them, and trained Ridge reg. (alpha=0.5, 1, 2).<br>
In inference, alpha=0.5, 1, and 2 were used as an ensemble.</p>
<h3><i>kfsky part</i></h3>
<p>The training data was the jigsaw1 data (with undersampling and text cleaning applied), which was used to generate the following features.</p>
<ul>
<li>TF-IDF(and SVD)</li>
<li>CountVectorizer(and SVD)</li>
<li>Word count</li>
<li>Character count</li>
<li>FastText(100dim)</li>
</ul>
<p>Train LightGBM with these features.</p>
<p>And we used TF-IDF and trained Ridge model.</p>
<h2>Training Model</h2>
<p>we trained following models.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Dataset</th>
<th>Text Cleaning</th>
<th>Feature</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>LightGBM_kfksy_1</td>
<td>jigsaw_1</td>
<td>Cleaned</td>
<td>TF-IDF, CountVectriser, FastText, Word count, Character count</td>
<td>0.6843</td>
</tr>
<tr>
<td>Ridge_kfksy_1</td>
<td>jigsaw_1</td>
<td>Cleaned</td>
<td>TF-IDF</td>
<td>0.6639</td>
</tr>
<tr>
<td>Ridge_kma###_1</td>
<td>jigsaw_1</td>
<td>Cleaned</td>
<td>TF-IDF</td>
<td>0.6823</td>
</tr>
<tr>
<td>Ridge_kma###_2</td>
<td>ruddit</td>
<td>Cleaned</td>
<td>TF-IDF</td>
<td>0.6312</td>
</tr>
<tr>
<td>Ridge_kma###_3</td>
<td>jigsaw_regression_data</td>
<td>Non</td>
<td>TF-IDF</td>
<td>0.6713</td>
</tr>
<tr>
<td>Ridge_mogmog_1</td>
<td>jigsaw_1</td>
<td>Cleaned</td>
<td>TF-IDF</td>
<td>0.6810</td>
</tr>
<tr>
<td>Ridge_mogmog_2</td>
<td>ruddit</td>
<td>Cleaned</td>
<td>TF-IDF</td>
<td>0.6283</td>
</tr>
<tr>
<td>lightGBM_mogmog_1</td>
<td>jigsaw_1</td>
<td>Cleaned</td>
<td>gensim_Embedding & FastTextEmbedding & Basic_Feat</td>
<td>0.6792</td>
</tr>
<tr>
<td>lightGBM_mogmog_2</td>
<td>ruddit</td>
<td>Cleaned</td>
<td>gensim_Embedding & FastTextEmbedding & Basic_Feat</td>
<td>0.6572</td>
</tr>
<tr>
<td>lightGBM_mogmog_3</td>
<td>jigsaw_1</td>
<td>Cleaned</td>
<td>TF-IDF & SVD</td>
<td>0.6711</td>
</tr>
<tr>
<td>lightGBM_mogmog_4</td>
<td>ruddit</td>
<td>Cleaned</td>
<td>TF-IDF & SVD</td>
<td>0.6130</td>
</tr>
<tr>
<td>RoBERTa_base_mogmog_1</td>
<td>jigsaw_1</td>
<td>Cleaned</td>
<td>Only text cleaning</td>
<td>0.6879</td>
</tr>
</tbody>
</table>
<h2>Weight Ensembling</h2>
<p>We ensembled these models. Before ensembling, we applied MinMaxScaler because the scales of the prediction values differed among the models. The weights of each model were determined so that the CV score was maximized (we used <code>scipy.optimize.minimize(method='powell')</code>). After ensembling, the CV score was 0.7048.</p>
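<p>A minimal sketch of that weight search (with random stand-in predictions; the real inputs were the per-model predictions for the less/more toxic texts of validation_data):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.preprocessing import MinMaxScaler

n_pairs, n_models = 1000, 12
less_raw = np.random.rand(n_pairs, n_models)      # stand-ins for per-model predictions
more_raw = np.random.rand(n_pairs, n_models)

scaler = MinMaxScaler().fit(np.vstack([less_raw, more_raw]))
less_preds, more_preds = scaler.transform(less_raw), scaler.transform(more_raw)

def neg_cv(w):
    # fraction of validation pairs where the weighted "more toxic" score wins, negated
    return -np.mean(less_preds @ w < more_preds @ w)

res = minimize(neg_cv, x0=np.ones(n_models) / n_models, method="Powell")
best_weights = res.x
</code></pre>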
<h2>Closing</h2>
<p>Thank you <a href="https://www.kaggle.com/ryo1993" target="_blank">mogmog</a>, <a href="https://www.kaggle.com/kmakmar" target="_blank">kma###</a>, <a href="https://www.kaggle.com/mkt0309" target="_blank">mkt0309</a> for teaming up with us.</p>
<h2>Clean Inference Code</h2>
<p><a href="https://www.kaggle.com/ryo1993/inference-krkm-final-001?scriptVersionId=88284870" target="_blank">https://www.kaggle.com/ryo1993/inference-krkm-final-001?scriptVersionId=88284870</a></p>
|
Jigsaw Rate Severity of Toxic Comments
|
76 place and Lesson Learned
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Private Score 0.80433<br>
Public Score 0.72950<br>
<a href="https://www.kaggle.com/crischir/ktrain-regression-transformer?scriptVersionId=87187729" target="_blank">https://www.kaggle.com/crischir/ktrain-regression-transformer?scriptVersionId=87187729</a><br>
<strong>First of all: trust your CV and your work.</strong><br>
Second, do not copy notebooks like "very simple code with score 0.885" - there was a lot of knowledge in that script, but it was not mine. From a public score of 0.87950 the final score was 0.74882.<br>
There are some questions regarding this big shake-up of the public leaderboard.<br>
1.Transformers vs Linear.</p>
<ul>
<li>Linear regression had stellar performance on the public LB: 0.9 using TF-IDF. <br>
- Transformers performed very poorly on the public LB: 0.72950 in public, 0.8 in private LB.<br>
- For similar transformer models, the public LB was inconsistent: public scores between 0.72950 and 0.78230 with similar results in private LB (around 0.801), even though the CV was the same (0.67-0.69).<br>
2.Scoring <br>
Basically if someone has the original corpus from <a href="https://www.perspectiveapi.com" target="_blank">https://www.perspectiveapi.com</a> (<a href="https://github.com/conversationai/perspectiveapi/blob/main/example_data/perspective_wikipedia_2k_score_sample_20180829.csv" target="_blank">https://github.com/conversationai/perspectiveapi/blob/main/example_data/perspective_wikipedia_2k_score_sample_20180829.csv</a>)<br>
my script does nothing :)<br>
If someone add the probabilities from the initial jigsaw corpus the results are better.<br>
My question remains:<br>
Why was the public LB so different, if we compare it with the CV and private LB scores? The private LB is correlated with CV. </li>
</ul>
|
Jigsaw Rate Severity of Toxic Comments
|
7th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our team didn't trust public LB, and worked towards improving the score on the un-processed validation_df. In the end, we improved the score to 0.7211 without leaks, which was the 7th best score in private LB. We thought this was a great result because we kept running Trust CV, but we later realized that we had ensembled models that had won the lottery.</p>
<p>As other teams have pointed out, the private LB dataset's test comments overlapped with jigsaw's first competition, the Toxic Comment Classification Challenge. Therefore, it was possible to create a model within the gold zone by training this dataset directly.</p>
<p>Our team placed in the gold zone, partly because we did a Trust CV, but essentially because we had models that won the lottery and we gave them enough weight in the weighted average. </p>
<p>Here's a brief overview of what we've been working on since.</p>
<h1>Model × Dataset</h1>
<p>The following table shows the models used for the final submission, the datasets they were trained on, and the scores. "Pseudo Label" means the dataset was pseudo-labeled by a toxic-xlm-roberta model trained on validation_df. This pseudo-labeling greatly improved the CV, while the private LB did not improve.</p>
<p>In addition, we strongly emphasized the importance of not leaking during training. Specifically, when training on validation_data, a union-find GroupKFold was used, and when training on other datasets, the texts present in validation_data were removed.</p>
<table>
<thead>
<tr>
<th>name</th>
<th>model</th>
<th>dataset</th>
<th>CV</th>
<th>PublicLB</th>
<th>PrivateLB</th>
</tr>
</thead>
<tbody>
<tr>
<td>mst029</td>
<td>toxic-xlm-roberta</td>
<td>Ruddit (Pseudo Label)</td>
<td>0.7126</td>
<td>0.77391</td>
<td>0.79796</td>
</tr>
<tr>
<td>mst030</td>
<td>toxic-xlm-roberta</td>
<td>jigsaw 1st (Pseudo Label)</td>
<td>0.7224</td>
<td>0.77587</td>
<td>0.79676</td>
</tr>
<tr>
<td>msttweet</td>
<td>toxic-xlm-roberta</td>
<td>Toxic Tweet (Pseudo Label)</td>
<td>0.7155</td>
<td>0.77587</td>
<td>0.79641</td>
</tr>
<tr>
<td>colum014</td>
<td>roberta-base</td>
<td>validation_df</td>
<td>0.70307</td>
<td>0.79590</td>
<td>0.79410</td>
</tr>
<tr>
<td>colum015</td>
<td>roberta-large</td>
<td>validation_df</td>
<td>0.70307</td>
<td>0.80189</td>
<td>0.79744</td>
</tr>
<tr>
<td>colum016</td>
<td>deberta-v3-base</td>
<td>validation_df</td>
<td>0.70646</td>
<td>0.78730</td>
<td>0.79594</td>
</tr>
<tr>
<td>colum018</td>
<td>unbiased-toxic-roberta</td>
<td>validation_df</td>
<td>0.70426</td>
<td>0.80341</td>
<td>0.79416</td>
</tr>
<tr>
<td>colum019</td>
<td>toxic-xlm-roberta</td>
<td>validation_df</td>
<td>0.70898</td>
<td>0.76749</td>
<td>0.79644</td>
</tr>
<tr>
<td>colum020</td>
<td>toxic-bert</td>
<td>validation_df</td>
<td>0.70456</td>
<td>0.77533</td>
<td>0.79422</td>
</tr>
<tr>
<td>calpis001</td>
<td>roberta-large</td>
<td>jigsaw 1st</td>
<td>0.7031</td>
<td>0.79111</td>
<td>0.81814</td>
</tr>
<tr>
<td>calpis011</td>
<td>toxic-xlm-roberta</td>
<td>jigsaw 1st (Pseudo Label)</td>
<td>0.7205</td>
<td>0.77152</td>
<td>0.79532</td>
</tr>
<tr>
<td>calpis012</td>
<td>toxic-xlm-roberta</td>
<td>Toxic Tweet (Pseudo Label)</td>
<td>0.7141</td>
<td>0.76967</td>
<td>0.79743</td>
</tr>
<tr>
<td>naoism1004</td>
<td>roberta-large</td>
<td>jigsaw 1st</td>
<td>0.7048</td>
<td>0.80581</td>
<td>0.81520</td>
</tr>
<tr>
<td>naoism4001</td>
<td>toxic-xlm-roberta</td>
<td>jigsaw 1st (Pseudo Label)</td>
<td>0.7220</td>
<td>0.76325</td>
<td>0.79824</td>
</tr>
<tr>
<td>naoism4003</td>
<td>toxic-xlm-roberta</td>
<td>Civil Comment Dataset (Pseudo Label)</td>
<td>0.7103</td>
<td>0.76249</td>
<td>0.79584</td>
</tr>
</tbody>
</table>
<h1>Ensemble</h1>
<p>We used these models to perform the ensemble of weighted averages. The weights were determined using Optuna, which was implemented by <a href="https://www.kaggle.com/calpis10000" target="_blank">@calpis10000</a> .</p>
<p>The implementation is as follows. We used the array of less and more predictions corresponding to validation_df to automatically determine the weights for each model that would score best. Also, there was a problem that the scales of the predictions for each model were different in this case. Therefore, we tried MinMaxScaler, StandardScaler, RankGauss, and so on. The best one was StandardScaler.</p>
<pre><code>import numpy as np
import optuna

def val(trial):
    # one weight in [0, 1] per model prediction column
    x_list = [trial.suggest_float(col, 0, 1) for col in pred_columns]
    p1 = np.zeros(30108)
    p2 = np.zeros(30108)
    for i, x in enumerate(x_list):
        p1 += scaler_less[:, i] * x   # scaled predictions for the less_toxic texts
        p2 += scaler_more[:, i] * x   # scaled predictions for the more_toxic texts
    score = (p1 < p2).mean()          # Average Agreement with Annotators on validation_df
    return score

optuna.logging.disable_default_handler()
study = optuna.create_study(
    direction='maximize',
    sampler=optuna.samplers.TPESampler(seed=42)
)
study.optimize(val, n_trials=1000)
display(study.best_trial.number)
display(study.best_trial.values)
display(study.best_trial.params)
</code></pre>
<p>The weights are as follows. It was good to see that much greater weight was given to calpis001 and naoism1004. The results were such that highly correlated predictions didn't give as much weight to each other, which was consistent with our goal of finding a weighted average that would produce diversity.</p>
<pre><code>{
'mst029_toxic': 0.14599159334648742,
'mst030_toxic': 0.9400258577765264,
'msttweet_toxic': 0.2296755364508963,
'colum014_toxic': 0.46435656956718,
'colum015_toxic': 0.8156998001053446,
'colum016_toxic': 0.6003326557886719,
'colum018_toxic': 0.08731982956387017,
'colum019_toxic': 0.02050220970508744,
'colum020_toxic': 0.8002815649764881,
'calpis001_toxic': 0.8907374041352457,
'calpis011_toxic': 0.46794326158461463,
'calpis012_toxic': 0.0613928819716386,
'naoism1004_toxic': 0.9998908148400331,
'naoism4001_toxic': 0.9085892865154106,
'naoism4003_toxic': 0.0312185722129308
}
</code></pre>
<p>The above is our solution. Thank you <a href="https://www.kaggle.com/mst8823" target="_blank">@mst8823</a> , <a href="https://www.kaggle.com/calpis10000" target="_blank">@calpis10000</a> , and <a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a> for teaming up with us!</p>
<p>Inference code: <a href="https://www.kaggle.com/columbia2131/jigsaw-team-ensemble-006-fix/notebook" target="_blank">[Jigsaw] team-ensemble 006 fix</a></p>
<p>Thank you for your comment!<br>
But we didn't know if Optuna's tuning was good this time until we saw the results. <br>
I wonder how much less trouble it would have caused if the public LB was reliable and the distributions of validation_df and test's dataset were close…</p>
<p>Thank you for your comment!<br>
Optuna is quite effective in this case of optimization for validation_df.<br>
It may also be useful for determining the weight of the models and be good to have it as a handy tool👍</p>
|
Jigsaw Rate Severity of Toxic Comments
|
84th Place Approach (Trust the right CV)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>TLDR</p>
<p>First, the validation data was cleaned using <a href="https://www.kaggle.com/samarthagarwal23" target="_blank">@samarthagarwal23</a>'s notebook <a href="" target="_blank">here</a>.</p>
<p>Then transformer models were trained with 5 folds on the cleaned data using a ranking loss.</p>
<table>
<thead>
<tr>
<th>Model Name</th>
<th>CV</th>
<th>Public LB Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>roberta-base</td>
<td>0.74</td>
<td>0.811</td>
</tr>
<tr>
<td>roberta-large</td>
<td>0.76</td>
<td>0.810</td>
</tr>
<tr>
<td>deberta-base</td>
<td>0.75</td>
<td>0.806</td>
</tr>
<tr>
<td>deberta-large</td>
<td>0.79</td>
<td>0.808</td>
</tr>
<tr>
<td>albert-base</td>
<td>0.73</td>
<td>0.806</td>
</tr>
<tr>
<td>sentence-transformer(mpnetv2)</td>
<td>0.74</td>
<td>-</td>
</tr>
<tr>
<td>mpnet-base</td>
<td>0.74</td>
<td>-</td>
</tr>
</tbody>
</table>
<p>I didn't pay much heed to the public LB scores as I couldn't find any correlation. I did try linear models too, but dropped them due to the lack of any correlation (for example: a TF-IDF 7-fold & FastText ensemble on JTCC data scored 0.77 CV and 0.86 LB, hence no correlation).</p>
<p>I also tried another method of training on the JTCC data: rather than combining the different category scores, I trained models on each of the categories (toxic, threat, obscene etc.) using weighted BCELoss. On running a loop to obtain the best weights for combining the different category model predictions on the untouched validation_data, I observed that almost all the weight was given to the "toxic" column, so I stuck to training with BCELoss on the "toxic" column only.</p>
<p>I tried retraining the models pretrained on the JTCC data on the cleaned validation data with the ranking loss, which gave me a small boost in CV.</p>
<p>My final ensemble included only the base transformer models, which were ensembled with the 7-fold TF-IDF and FastText models from <a href="https://www.kaggle.com/yamqwe/ridge-fasttext-improved-roberta" target="_blank">here</a>.</p>
<p>Final Notebook Link: <a href="https://www.kaggle.com/aishikai/js-final-ensemble-0-813-lb-0-839-cv" target="_blank">here</a></p>
<p>In this competition one major help to me was <a href="https://www.kaggle.com/samarthagarwal23" target="_blank">@samarthagarwal23</a> 's notebook which gave me a lot of insight into the data and finally helped me choose my sub.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
99th place solution - silver
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for a competition!</p>
<p>After joining the competition and checking public solutions i made a few decisions:</p>
<ol>
<li>I will not use public lb for validation, just because it contains only 5% of data.</li>
<li>I tried linear models on the validation data and found that they performed badly, so I focused on transformers.</li>
</ol>
<p><strong>Datasets:</strong></p>
<ol>
<li><a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data" target="_blank">https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data</a></li>
<li><a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data" target="_blank">https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data</a></li>
<li><a href="https://www.kaggle.com/rajkumarl/ruddit-jigsaw-dataset" target="_blank">https://www.kaggle.com/rajkumarl/ruddit-jigsaw-dataset</a></li>
<li><a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/data" target="_blank">https://www.kaggle.com/c/jigsaw-toxic-severity-rating/data</a></li>
</ol>
<p>From "1", "2", "3" datasets i made a big one in some different ways:</p>
<ol>
<li>Thresholding dataset "2" features and converting them to the format of dataset "1". Then converting "1"+"2" to a toxicity_score by calculating the mean score, and concatenating it with "3".</li>
<li>Calculating mean scores for dataset "1", then concatenating with "2" and "3".</li>
</ol>
<p>The implementation of both methods can be seen in the full code linked at the end of this post (I called them the "sparse" and "dense" datasets).</p>
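<p>(Purely as an illustration of the idea, not the author's implementation, a sketch of the "dense"-style combination could look like this; the file and column names are assumptions.)</p>
<pre><code>import pandas as pd

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

jig1 = pd.read_csv("jigsaw1_train.csv")     # binary label columns (assumed path/columns)
ruddit = pd.read_csv("ruddit.csv")          # offensiveness_score in [-1, 1] (assumed)

# reduce the multi-label rows to one toxicity score by averaging the label columns
jig1["score"] = jig1[LABELS].mean(axis=1)
# rescale Ruddit scores to [0, 1] so both sources share one target column
ruddit["score"] = (ruddit["offensiveness_score"] + 1) / 2

train = pd.concat([
    jig1[["comment_text", "score"]],
    ruddit.rename(columns={"txt": "comment_text"})[["comment_text", "score"]],
], ignore_index=True)
</code></pre>
<p>(The "sparse" variant would additionally binarize the soft labels of dataset "2", e.g. with a 0.5 threshold, before averaging.)</p>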
<p>Dataset "4" (validation_data.csv) i used for validation. I didnt use cross validation, because train dataset consists of 2 195 487 samples and training one model on google colab takes from 20 to 50 hours.</p>
<p>I select 3 models, trained with different datasets, loss_fuctions and sequence lengths. Two of them based on GroNLP/hateBERT, the last one cardiffnlp/twitter-roberta-base-hate. <br>
It give me 0.79784 private score (0.6988 score on validation, was my first submission, 157 place and bronze zone).</p>
<p>For the second submission i mixed models with other open solutions same way. But this two solutions are ensembles themselfs.</p>
<ol>
<li><a href="https://www.kaggle.com/yuzhoudiyishuai/robertabase5fold2-linear-256" target="_blank">https://www.kaggle.com/yuzhoudiyishuai/robertabase5fold2-linear-256</a></li>
<li><a href="https://www.kaggle.com/saurabhbagchi/pytorch-w-b-jigsaw-starter" target="_blank">https://www.kaggle.com/saurabhbagchi/pytorch-w-b-jigsaw-starter</a></li>
</ol>
<p>So the weights: model_1, model_2, model_3, 1st ensemble, 2nd ensemble - each for 20% contributions in final result.</p>
<p>Validation score of 5 models: 0.7190. It was my second submission, 0.80153 on private and 99th place.</p>
<p>Full code of my solution with model weights and description you can find here:<br>
<a href="https://github.com/vilka-lab/JigsawRate" target="_blank">https://github.com/vilka-lab/JigsawRate</a></p>
|
Jigsaw Rate Severity of Toxic Comments
|
A Detoxify solution (#3)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>congratulations to you too for your smart solution <a href="https://www.kaggle.com/coreacasa" target="_blank">@coreacasa</a> ! In any case Detoxify is a <a href="https://github.com/unitaryai/detoxify" target="_blank">github project by Unitaryai</a> that I have just assembled into a Kaggle dataset! 😊</p>
<p>Thank you to Jigsaw and Kaggle for hosting a challenging and engaging competition, and congratulations to the winners and everyone who enjoyed the competition!</p>
<p>I particularly enjoyed this competition format because it forces you to find a solution that generalizes to unseen data.</p>
<h2>My Solution</h2>
<p>For a solution that generalizes on unseen data I tried to use: </p>
<ul>
<li>datasets from previous Jigsaw competitions to extract features</li>
<li>VAL data + RUD dataset for ensembling and parameters tuning</li>
</ul>
<p>Unfortunately, I could see MHS dataset too late to be able to use it in the competition.</p>
<h3>Feature Extraction</h3>
<p>I used the Detoxify pretrained models as feature extractors [<em>toxicity</em>, <em>severe_toxicity</em>, <em>identity_attack</em>…] because <a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/303665" target="_blank">they work very well on previous Jigsaw competitions</a>.<br>
I also generated other tf-idf models on the JC/JUC datasets for each specific target [<em>toxicity</em>, <em>severe_toxicity</em>, <em>identity_attack</em> …] for a total of 46 features.</p>
<p>Some models built on the individual features extracted from the Detoxify models work fine on the "unseen data" VAL and RUD (accuracy > 0.80 on RUD and > 0.69 on VAL), while the tf-idf ones are weak models (< 0.71 on RUD and < 0.66 on VAL).</p>
<h3>How to combine features:</h3>
<p>As a first attempt I fitted models (Ridge, RF, XGB, LGB) using RUD (with RMSE as the metric) and tuned the hyperparameters using VAL: the accuracy of these models varies from 0.700 to 0.707 on VAL. </p>
<p>In this way, however, I was not sure of the performance of the model on unseen data, so I simplified a little more.</p>
<p>I built a simple model by selecting features while optimizing the ranked mean on VAL: <br>
this model contains 11 Detoxify features and 1 tf-idf feature <a href="https://www.kaggle.com/steubk/jrsotc-fe-rud-2-linear-rank-sub" target="_blank">Submission 1</a></p>
<p>I further eliminated 2 features which were lowering the accuracy on RUD, to generate another model: <br>
this model contains only 10 Detoxify features <a href="https://www.kaggle.com/steubk/jrsotc-fe-rud-2-linear-rank-sub-2" target="_blank">Submission 2</a></p>
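<p>"Selecting features optimizing the ranked mean" can be read (my assumption, not the author's exact code) as greedy forward selection, where a text's ensemble score is the mean of the per-feature percentile ranks and the selection is judged by pairwise accuracy on VAL:</p>
<pre><code>import numpy as np
import pandas as pd

# F_less / F_more: candidate feature values (Detoxify outputs, tf-idf model scores, ...)
# for the two texts of each validation pair; random stand-ins here.
n_pairs, n_feats = 1000, 46
F_less = pd.DataFrame(np.random.rand(n_pairs, n_feats))
F_more = pd.DataFrame(np.random.rand(n_pairs, n_feats))

def pair_acc(cols):
    both = pd.concat([F_less[cols], F_more[cols]])
    ranks = both.rank(pct=True)                  # percentile rank of each feature over all texts
    s = ranks.mean(axis=1).values                # ranked-mean score per text
    return (s[:n_pairs] < s[n_pairs:]).mean()    # agreement with the annotated pairs

selected, best = [], 0.0
for _ in range(n_feats):                         # greedy forward selection
    cand = max((c for c in F_less.columns if c not in selected),
               key=lambda c: pair_acc(selected + [c]))
    acc = pair_acc(selected + [cand])
    if acc <= best:
        break
    selected.append(cand)
    best = acc
print(selected, best)
</code></pre>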
<p>I preferred to give up a little accuracy on VAL hoping for better generalization (and luckily it worked!)</p>
<table>
<thead>
<tr>
<th>Submission</th>
<th>VAL</th>
<th>RUD</th>
<th>LB</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>sub 1</td>
<td>0.70858</td>
<td>0.80888</td>
<td>0.759</td>
<td>0.80901</td>
</tr>
<tr>
<td>sub 2</td>
<td>0.70652</td>
<td>0.82242</td>
<td>0.764</td>
<td>0.81299</td>
</tr>
</tbody>
</table>
<h3>Public LB/Private LB</h3>
<p>This is the performance of my models:</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>LB</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ridge</td>
<td>0.72559</td>
<td>0.78464</td>
</tr>
<tr>
<td>RF</td>
<td>0.74823</td>
<td>0.80405</td>
</tr>
<tr>
<td>LGB</td>
<td>0.75595</td>
<td>0.80466</td>
</tr>
<tr>
<td>XGB</td>
<td>0.75454</td>
<td>0.80159</td>
</tr>
<tr>
<td>SUB 1</td>
<td>0.75922</td>
<td>0.80901</td>
</tr>
<tr>
<td>SUB 2</td>
<td>0.76412</td>
<td>0.81299</td>
</tr>
</tbody>
</table>
<p>With this data someone can argue that there is some signal on LB and "trust LB" was not a bad idea ! </p>
<p>This is the link of the experiments I did <a href="https://github.com/steubk/jrsotc-3rdplace" target="_blank">https://github.com/steubk/jrsotc-3rdplace</a></p>
<p>Sure ! this is the link <a href="https://www.kaggle.com/steubk/jrsotc-fe-rud-2-linear-rank-sub-2" target="_blank">SUB 2</a> for the 0.81299 solution </p>
<p><a href="https://www.kaggle.com/steubk/jrsotc-fe-rud-2-linear-rank-sub" target="_blank">SUB 1</a> and <a href="https://www.kaggle.com/steubk/jrsotc-fe-rud-2-linear-rank-sub-2" target="_blank">SUB 2</a> are my 2 selected submission notebooks</p>
<p>Muito obrigado, Marilia !</p>
<p>Grazie Riccardo !</p>
|
Jigsaw Rate Severity of Toxic Comments
|
Creativity and CV [golden medal level solution: private 0.813]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello! One of my solutions hit the gold medal zone (unexpectedly for me) and I would like to introduce it to the community. <br>
Thanks to the organizers for an interesting task.</p>
<p><strong>CV</strong><br>
I've used 2 validation sets:<br>
1) "Refined" validation pairs: I've removed inconsistencies in validation dataset, just keeping only unique pairs where sum(text1 is less_toxic) < sum(text2 is more_toxic). <br>
2) "Refined" dataset + 5k random pairs from ruddit<br>
Both datasets correlated with private data but did not with LB.</p>
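<p>One possible way to implement this kind of de-noising (a sketch assuming the standard validation_data.csv columns, not necessarily the author's exact rule) is to count the annotator votes per ordered pair and keep only the majority direction:</p>
<pre><code>import pandas as pd

val = pd.read_csv("validation_data.csv")        # columns: worker, less_toxic, more_toxic (assumed)

# votes for each ordered direction of a pair
counts = val.groupby(["less_toxic", "more_toxic"]).size().rename("n").reset_index()
# the same counts, reversed, so they line up with the opposite direction
rev = counts.rename(columns={"less_toxic": "more_toxic", "more_toxic": "less_toxic", "n": "n_rev"})
merged = counts.merge(rev, on=["less_toxic", "more_toxic"], how="left").fillna({"n_rev": 0})

# keep a pair only when strictly more annotators agreed with this direction
refined = merged[merged["n"] > merged["n_rev"]][["less_toxic", "more_toxic"]].drop_duplicates()
</code></pre>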
<p><strong>Targets</strong><br>
The main task of this competition (I guess) was to convert different types of toxicity from Jigsaw classification and others competitions to single number. So it required some kind of creativity from competitors, which is fun!</p>
<p>I've used next scheme: <br>
toxic_types = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']<br>
1) Xr, Yr = Take targets from reference dataset (ruddit and sample from OffenceEval)<br>
2) X = Train dataset (Jigsaw classification) texts <br>
3) Y = Train dataset scores * weights<br>
4) Train Linear(X, Y)<br>
5) Yhat = Predict (Xr)<br>
6) argmin (weights) MSE(Yhat, Yr)<br>
The best weights give us the best scores<br>
I've used 1 variant from public notebook, 2 variants from Ruddit and 2 variants from offenseEval, total 5</p>
<p>example of code:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error as mse

r = pd.read_csv("data/ext_ruddit.csv")
c = pd.read_csv("data/jigsaw_clf.csv")
r = preprocess(r, 'comment_text')      # preprocess / cols are defined elsewhere in the repo
c = c.query("labeled==True")

# clf downsampling
s = c[cols].sum(1)
cd = pd.concat((c[s != 0], c[s == 0].sample(c[s != 0].shape[0], random_state=42)))
cd = preprocess(cd, 'comment_text')

# train linear model (clf texts)
x = cd[cols].values
vec = TfidfVectorizer(analyzer='char_wb', max_df=0.8, min_df=1, ngram_range=(2, 5))
X = vec.fit_transform(cd['comment_text'])
rx = vec.transform(r['comment_text'])
ry = r.score.values
w = np.array([some numbers])           # candidate label weights being evaluated (placeholder from the original)
y = x @ w
model = Ridge(al)                      # al = alpha hyperparameter (placeholder from the original)
model.fit(X, y)
rp = model.predict(rx)
print(mse(ry, rp))
</code></pre>
<p>Unfortunately, I didn't score the CV for the ensemble of this solution (just forgot, I believe) and I used a worse ensemble (with a single score) of models for the final solutions.</p>
<p><strong>Conclusions</strong></p>
<ul>
<li>Transferring knowledge from different datasets can be <em>very useful</em> (aka multitask learning), but only if it's done right</li>
<li>Trust your cv</li>
</ul>
<p><strong>Inference notebook</strong><br>
<a href="https://www.kaggle.com/alturutin/inference-private-0-8128" target="_blank">https://www.kaggle.com/alturutin/inference-private-0-8128</a></p>
<p><strong>Train</strong><br>
You can find an example of training in <strong>jigsaw22models-infer/train_regression.py</strong>.<br>
In short:</p>
<ul>
<li>Backbone: <a href="https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier" target="_blank">https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier</a> (patriotic choice 🙂)</li>
<li>Seqlen: 128</li>
<li>Loss: MSE</li>
<li>Gradually unfreezing the head, then layers 9-12, 5-8, all</li>
<li>Head = concat([CLS, GlobalPoolingMax, GlobalPoolingAvg])</li>
</ul>
<p>Good luck everyone in future competitions and happy kaggling :)</p>
|
Jigsaw Rate Severity of Toxic Comments
|
Customed loss(10th solution)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>There were many non-working things before I came to my final solution; the list of them, together with the working things, could build an unbalanced dataset 😄<br>
I always believe in simplicity. Every step should be taken for a reason, or not taken at all.</p>
<h5>Modeling</h5>
<p>In the end, I ensembled 7 model variations along these axes:<br>
<strong>3 pretrained encoders</strong> roberta-base, roberta tweet sentiment, roberta tweet offensive<br>
<strong>3 heads</strong> bi-lstm with layernorm, robertaforclassification, robertaforclassification+sigmoid<br>
<strong>4 datasets</strong> jigsaw toxic, ruddit, davidson, olid<br>
<strong>2 losses</strong> custom loss and bce loss<br>
<strong>seeds</strong> when the dataset stays the same, change the seed; when the model stays the same, change the seed; when only the loss stays the same, don't change the seed.<br>
Scores are simply summed.<br>
<em>There is one submission without the olid dataset that could have scored 5th, so olid is not good.</em></p>
<h5>Customed loss</h5>
<p>The loss is something like a margin rank loss, just adapted to be batch-wise. <br>
Sort the batch samples by label score ascendingly, sort the outputs in the same order, and calculate the rank loss between adjacent samples. When the batch has N samples, N-1 pairs are summed and averaged.<br>
As with any custom loss of this kind, it benefits from a bigger batch size. I train with a TPU with 8 cores, so the batch size is 48*8=384. I kind of assume that the best training is one without strange LR schedulers; I use a constant 1e-4.</p>
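<p>A small PyTorch sketch of that batch-wise loss as I understand it (not the author's exact code):</p>
<pre><code>import torch
import torch.nn.functional as F

def batchwise_rank_loss(scores: torch.Tensor, labels: torch.Tensor, margin: float = 0.5):
    """Sort the batch by label toxicity and apply a margin ranking loss to the
    N-1 adjacent pairs, so every sample is compared within one batch."""
    order = torch.argsort(labels)                 # ascending toxicity
    s = scores[order]
    lower, higher = s[:-1], s[1:]                 # adjacent pairs after sorting
    target = torch.ones_like(lower)               # the later element should score higher
    return F.margin_ranking_loss(higher, lower, target, margin=margin)

scores = torch.randn(8, requires_grad=True)       # model outputs for a batch
labels = torch.rand(8)                            # toxicity targets for the same batch
loss = batchwise_rank_loss(scores, labels)
loss.backward()
</code></pre>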
<h5>Trust cv</h5>
<p>As soon as I realized the public LB was a misleading one, I closed my eyes to it. I used both the validation data and its voted version, with 1 fold, and removed leaky comments from the toxic data.</p>
<p>It's actually a boosted version of the margin loss, which compares the toxicity of 2 different comments. I wanted to make the model capable of comparing more than just 2 comments in a single training batch. This loss sorts all the samples in one batch by their toxicity scores.</p>
<p>There is another fundamental point here. For a transformer-based model, it's very important to give it an intact flow of information in every single batch. That's very different from a CV model like ResNet, which you can teach about flowers in the 1st batch and birds in the 2nd batch. Language is much more abstract information than images, so one must carefully design the training pipeline to train a BERT model. So I want to show the model, in every single batch, the whole spread of information in the training set. It's kind of a normalization operation, applied here to the training samples.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
Detoxify Baseline ended up in the silver zone!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Increasing max_len to 500 slightly increases the performance, the private lb score change from 0.80857 to 0.80868</p>
<p>The reason why out-of-the-box detoxify worked so well is clearly explained by <a href="https://www.kaggle.com/readoc" target="_blank">@readoc</a> in this discussion thread <a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/306181" target="_blank">https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/306181</a> </p>
<p>Somehow the Detoxify model scored extremely well in the private LB (0.80857), which is in the silver zone. So we don't really need to do anything fancy in order to get a really high score!</p>
<p><a href="https://www.kaggle.com/bachngoh/using-detoxify-in-offline-mode-0-808-private-lb" target="_blank">https://www.kaggle.com/bachngoh/using-detoxify-in-offline-mode-0-808-private-lb</a></p>
|
Jigsaw Rate Severity of Toxic Comments
|
Forgotten Silver Medal
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone, I decided to write short story about how I got my first medal.<br>
I saw this competition right after its launch and decided to join it, but after a couple of tries I realized that this competition was fully random, and here I set a goal for myself: create a stable solution, trusting only my CV. So I just fine-tuned RoBERTa-base on the <a href="https://www.kaggle.com/rajkumarl/ruddit-jigsaw-dataset" target="_blank">Ruddit</a> and <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge" target="_blank">Jigsaw</a> datasets and then averaged the models' predictions. After this solution I stopped working on the competition. 2 months later I got an email from Kaggle about the approaching deadline, and I decided to choose the best-by-LB and best-by-CV solutions, and that's it. I didn't even expect to get any medals, but here I am. A really lazy medal.</p>
<p>P.S. Honestly, it was really difficult to remember which was my solution, because I don't even have the training scripts now.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
Learning To Rank (#26)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all!<br>
Thanks to the host for this challenge but also for the work you share.</p>
<p>I was hesitant to post this given the lack of interest in my previous post, but it is worth a try if even one person can get something out of it. </p>
<h1>Framing the problem</h1>
<p>What is the task? I understood the task as a <strong>ranking task</strong>. We have to rank/order comments by their toxicity importance.<br>
LTR (Learning To Rank) is an interesting field. <a href="https://everdark.github.io/k9/notebooks/ml/learning_to_rank/learning_to_rank.html" target="_blank">https://everdark.github.io/k9/notebooks/ml/learning_to_rank/learning_to_rank.html</a> is a great resource to begin with this new field for me. <a href="https://embracingtherandom.com/machine-learning/tensorflow/ranking/deep-learning/learning-to-rank-part-1/" target="_blank">https://embracingtherandom.com/machine-learning/tensorflow/ranking/deep-learning/learning-to-rank-part-1/</a> + <a href="https://embracingtherandom.com/machine-learning/tensorflow/ranking/deep-learning/learning-to-rank-part-2/" target="_blank">https://embracingtherandom.com/machine-learning/tensorflow/ranking/deep-learning/learning-to-rank-part-2/</a> also help. The field seems to have grown over <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf" target="_blank">https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf</a>.</p>
<h1>Modeling</h1>
<p>All the models in the ensemble I built are based on a personal implementation of <strong>TFR-BERT</strong> using TFR losses. My implementation would rather be TFR-xBERTx because it can be used with different backbones. Resources: <a href="https://arxiv.org/pdf/1812.00073.pdf" target="_blank">https://arxiv.org/pdf/1812.00073.pdf</a>, <a href="https://www.tensorflow.org/ranking" target="_blank">https://www.tensorflow.org/ranking</a>, <a href="https://github.com/tensorflow/ranking" target="_blank">https://github.com/tensorflow/ranking</a>, <a href="http://bendersky.github.io/res/TF-Ranking-ICTIR-2019.pdf" target="_blank">http://bendersky.github.io/res/TF-Ranking-ICTIR-2019.pdf</a>.<br>
<a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-2007-40.pdf" target="_blank">https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-2007-40.pdf</a> shows that pairwise approach can be improved by a <strong>listwise approach</strong> (see also <a href="https://arxiv.org/pdf/2001.01828.pdf)" target="_blank">https://arxiv.org/pdf/2001.01828.pdf)</a>. The best models I built optimize a listwise loss.<br>
The idea is simple and seems to be behind Gmail search for example. A training data example is a concatenation between a query and a resource. The targets are the annotations between for example 0 and 5 concerning the relevance (I made it with 11 levels, 0 to 10, because of the 10 annotators for each comment in the training data I built). The algorithm learn by extracting the more relevant resources from a list or by ordering it. I implemented it by removing the query assuming definition of toxicity is latent but I wish I could try by giving the exact task asked to the annotators. This could allow to use all the datasets but also to let the algorithm extract latent meanings more broadly. This could make a more generic solution.<br>
My solution is heavily inspired by <a href="https://arxiv.org/pdf/2004.08476.pdf" target="_blank">https://arxiv.org/pdf/2004.08476.pdf</a> and by the correponding LB: <a href="https://microsoft.github.io/MSMARCO-Passage-Ranking-Submissions/leaderboard/" target="_blank">https://microsoft.github.io/MSMARCO-Passage-Ranking-Submissions/leaderboard/</a>. It is a blend of 7 BERT-based models : RoBERTa, Deberta and Electra pointwise approach (optimize cross-entropy loss) + RoBERTa optimizing PairwiseLogisticLoss (<a href="https://www.tensorflow.org/ranking/api_docs/python/tfr/keras/losses/PairwiseLogisticLoss" target="_blank">https://www.tensorflow.org/ranking/api_docs/python/tfr/keras/losses/PairwiseLogisticLoss</a>) + RoBERTa, Deberta and Electra optimizing a listwise loss (<a href="https://www.tensorflow.org/ranking/api_docs/python/tfr/keras/losses/SigmoidCrossEntropyLoss" target="_blank">https://www.tensorflow.org/ranking/api_docs/python/tfr/keras/losses/SigmoidCrossEntropyLoss</a> worked best). The blending also includes 2 LightGBM models (<a href="https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRanker.html" target="_blank">https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRanker.html</a>) that optimize LambdaMART on a concatenation of pointwise and listwise embeddings from the 3 models of each.</p>
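<p>As a minimal sketch of this setup (assuming roberta-base, a list size of 8 and CLS pooling; this is illustrative, not the exact TFR-xBERTx implementation), each list of comments is scored by a shared encoder and trained with a TF-Ranking loss over the list:</p>
<pre><code>import tensorflow as tf
import tensorflow_ranking as tfr
from transformers import TFAutoModel

LIST_SIZE, MAX_LEN = 8, 128
backbone = TFAutoModel.from_pretrained("roberta-base")

ids = tf.keras.Input((LIST_SIZE, MAX_LEN), dtype=tf.int32, name="input_ids")
mask = tf.keras.Input((LIST_SIZE, MAX_LEN), dtype=tf.int32, name="attention_mask")

# flatten the list dimension, encode every comment with the shared backbone,
# then reshape back to one score per list slot
flat_ids = tf.reshape(ids, (-1, MAX_LEN))
flat_mask = tf.reshape(mask, (-1, MAX_LEN))
token_states = backbone(input_ids=flat_ids, attention_mask=flat_mask)[0]
scores = tf.keras.layers.Dense(1)(token_states[:, 0])       # CLS-token score per comment
scores = tf.reshape(scores, (-1, LIST_SIZE))                 # [batch, list_size]

model = tf.keras.Model([ids, mask], scores)
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss=tfr.keras.losses.SigmoidCrossEntropyLoss())  # labels per list in [0, 1]
</code></pre>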
<h1>Data</h1>
<p>Exploring all the datasets shared (and more: <a href="https://hatespeechdata.com/" target="_blank">https://hatespeechdata.com/</a> + <a href="https://www.turing.ac.uk/research/research-programmes/public-policy/online-hate-research-hub" target="_blank">https://www.turing.ac.uk/research/research-programmes/public-policy/online-hate-research-hub</a>) shows that the latent definition of toxicity is very important. In my experiments, the validation data responds better to the datasets from the previous Jigsaw challenges. This may be due to the instructions the annotators were given. As a former psychologist I know this is crucial and a major source of bias. Jigsaw's papers support this point of view: <a href="https://storage.googleapis.com/pub-tools-public-publication-data/pdf/66073ca7ac60ee38e93fc1d173a09cab65f2fef3.pdf" target="_blank">https://storage.googleapis.com/pub-tools-public-publication-data/pdf/66073ca7ac60ee38e93fc1d173a09cab65f2fef3.pdf</a>, <a href="https://raw.githubusercontent.com/conversationai/unintended-ml-bias-analysis/main/archive/presentations/measuring-mitigating-unintended-bias-paper.pdf" target="_blank">https://raw.githubusercontent.com/conversationai/unintended-ml-bias-analysis/main/archive/presentations/measuring-mitigating-unintended-bias-paper.pdf</a>, <a href="https://aclanthology.org/2021.hcinlp-1.3.pdf" target="_blank">https://aclanthology.org/2021.hcinlp-1.3.pdf</a>, as well as the Ruddit paper: <a href="https://arxiv.org/pdf/2106.05664v2.pdf" target="_blank">https://arxiv.org/pdf/2106.05664v2.pdf</a>.<br>
As said in the description of the competition, comments in validation_data.csv come from the Wikipedia corpus (and not only one half), more precisely from the extract used in the 1st challenge. EDA shows that the sampling was done by selecting ~40% of the comments annotated as 1 for at least one of the targets. Only ~2.5% of the "all zero" comments were sampled. This means that Jigsaw aims at ranking the more toxic ones, so there is no need to add too many non-toxic comments to the training dataset.<br>
<a href="https://arxiv.org/pdf/1610.08914.pdf" target="_blank">https://arxiv.org/pdf/1610.08914.pdf</a> underlines that optimizing cross-entropy for a toxicity ratio (and not a binary task) improves predictions. I validated that. I reconstructed the dataset from the 1st Jigsaw challenge (<a href="https://meta.wikimedia.org/wiki/Research:Detox" target="_blank">https://meta.wikimedia.org/wiki/Research:Detox</a>) with 10 annotators. This was tricky because a lot of processing was done between the initial dataset and the dataset given for the competition. Reverse engineering the data of the second challenge shows that there are at least 4 types/sets of comments (by number of annotators per comment): I kept the comments annotated by more than 4 annotators (i.e. toxic comments). The final dataset I used for training is composed of <strong>500K+ comments annotated by at least 10 annotators</strong>. The target is the ratio of toxicity (0.2 for a comment rated toxic by 2 annotators out of 10).<br>
I tried a lot of processing based on psychological concepts, for example the relation to the norm. This did not degrade the results, but did not improve them either. <a href="https://homes.cs.washington.edu/~msap/social-bias-frames/" target="_blank">https://homes.cs.washington.edu/~msap/social-bias-frames/</a> is a great entrypoint.<br>
Trying to find a good data ordering (based on <a href="https://arxiv.org/pdf/2002.06305.pdf" target="_blank">https://arxiv.org/pdf/2002.06305.pdf</a>) failed.</p>
<h1>Validation</h1>
<p>I used as CV the whole <strong>CV</strong> set (0.7092/0.7602) + verification that the trend was the same on 2 independent subsets + <strong>Ruddit</strong> pairs (0.715+, scoring method: <a href="https://www.kaggle.com/sorenj/scoring-ruddit-comments" target="_blank">https://www.kaggle.com/sorenj/scoring-ruddit-comments</a>) + <strong>Measuring Hate Speech</strong> dataset (0.740+, scoring method by reproducing majority aggregation Average Agreement with Annotators on ~160,000 random pairs, 4 pairs / comment).<br>
I used 4 benchmarks (3 Detoxify models + PerspectiveAPI) which score from 0.693 to 0.698 on the CV, 0.68 to 0.695 on Ruddit and 0.722 to 0.735 on the Measuring Hate Speech dataset.<br>
My goal was to build a solution that could be used in a distillation process.</p>
<h1>Machine learning and machine teaching</h1>
<p>Sometimes I think that I do not practice ML but machine teaching. Choosing the data to give to the algorithm for example. Some talk about data-centric vs model-centric approach (see Andrew NG). It turns out that there are resources for this paradigm: <a href="https://pages.cs.wisc.edu/~jerryzhu/machineteaching/" target="_blank">https://pages.cs.wisc.edu/~jerryzhu/machineteaching/</a> and <a href="https://arxiv.org/pdf/1707.06742.pdf" target="_blank">https://arxiv.org/pdf/1707.06742.pdf</a> for example.</p>
<h1>Mentoring</h1>
<p>I would love if Kaggle could offer a mentorship program.</p>
<p><a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>: thank you so much for your contributions to the community. I feel so much grateful! I come from very far (former psychologist) and you play(ed) an important role in my learnings for applying ML. I am sure that I am not alone. Please continue!</p>
<h1>Opening</h1>
<p>NLP is a very interesting field. As a psychologist I thought that language underpins abstraction, and this is why I learned it. There is so much to build and to learn.</p>
<p>Thanks if you read until here.</p>
<p>Thanks!<br>
Congrats <a href="https://www.kaggle.com/amontgomerie" target="_blank">@amontgomerie</a> for the solo gold! You deserve it. Your interventions were constructive and interesting.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
My First Silver Medal: Solution Approach (Simple & Single model)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Step-0</strong>: Basic Text Cleaning<br>
<strong>Step-1</strong>: Used data from <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge" target="_blank">https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge</a> to train a BERT ('bert-base-uncased') model that predicts probabilities for the different toxicity classes. <br>
<strong>Step-2</strong>: The trained model is used to predict class probabilities for this competition's comments (train/validation data)<br>
<strong>Step-3</strong>: Used Bayesian Optimization to find a weight for each class such that the class probabilities, when multiplied by the weights, maximize agreement on the validation data shared in this competition<br>
<strong>Step-4</strong>: Submission: load the trained BERT model, predict class probabilities, and multiply by the optimized weights to get the score, followed by ranking to avoid duplicate scores.</p>
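<p>A minimal sketch of Step-3, assuming Optuna as the Bayesian optimizer and that <code>val_less</code>/<code>val_more</code> hold the per-class probabilities predicted for the less/more toxic comment of each validation pair (both are assumptions, not the exact notebook code):</p>
<pre><code>import numpy as np
import optuna

# the 6 label columns of the 2018 Toxic Comment data
CLASSES = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def agreement(weights):
    w = np.asarray(weights)
    less = val_less @ w    # weighted score for the "less toxic" comment of each pair
    more = val_more @ w    # weighted score for the "more toxic" comment of each pair
    return np.mean(more &gt; less)

def objective(trial):
    return agreement([trial.suggest_float(c, 0.0, 5.0) for c in CLASSES])

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
best_weights = np.array([study.best_params[c] for c in CLASSES])
</code></pre>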
<p><strong>Scope for improvement (Simple Ones)</strong>: Multi-folds / Additional Data-Sets / Ensemble-Methods / Non-Linear weight multiplier across different classes (instead of direct sum(w*c))</p>
<p><strong>Thanks to Kaggle and the other experts for sharing their valuable insights/experiences throughout the competition. Special thanks to all those who helped me realize (mathematically) that a score above ~0.82 is not real</strong></p>
|
Jigsaw Rate Severity of Toxic Comments
|
Pretrained model benchmark + 0.769 Private LB solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As I mentioned earlier, here is the <strong>benchmark table for toxicity classification with offline datasets, taken from huggingface.co and ready to use as pre-trained models</strong>.</p>
<p>I hope it will be useful for somebody!</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>LB</th>
<th>Offline Dataset</th>
</tr>
</thead>
<tbody>
<tr>
<td>twitter-roberta-base-offensive</td>
<td>0.727</td>
<td><a href="https://www.kaggle.com/dzisandy/twitterrobertabaseoffensive" target="_blank">link</a></td>
</tr>
<tr>
<td>bert-base-uncased-hatexplain-rationale-two</td>
<td>0.784</td>
<td><a href="https://www.kaggle.com/dzisandy/bertbaseuncasedhatexplainrationaletwo" target="_blank">link</a></td>
</tr>
<tr>
<td>SkolkovoInstitute/roberta_toxicity_classifier_v1</td>
<td>0.743</td>
<td><a href="https://www.kaggle.com/dzisandy/skolkovoinstituteroberta-toxicity-classifier-v1" target="_blank">link</a></td>
</tr>
<tr>
<td>abhishek/autonlp-toxic-new-30516963</td>
<td>0.750</td>
<td><a href="https://www.kaggle.com/abhishek/autonlp-toxic-1" target="_blank">link</a></td>
</tr>
<tr>
<td>twitter-roberta-base-hate</td>
<td>0.716</td>
<td><a href="https://www.kaggle.com/dzisandy/twitterrobertabasehate" target="_blank">link</a></td>
</tr>
<tr>
<td>dehatebert-mono-english</td>
<td>0.728</td>
<td><a href="https://www.kaggle.com/tunminhhunh/hatespeechcnergdehatebertmonoenglish" target="_blank">link</a></td>
</tr>
<tr>
<td>twitter-roberta-base-sentiment</td>
<td>0.708</td>
<td><a href="https://www.kaggle.com/dzisandy/twitterrobertabasesentiment" target="_blank">link</a></td>
</tr>
<tr>
<td>SkolkovoInstitute/roberta_toxicity_classifier</td>
<td>0.654</td>
<td><a href="https://www.kaggle.com/jonathanchan/roberta-toxicity-classifier" target="_blank">link</a></td>
</tr>
<tr>
<td>twitter-xlm-roberta-base-sentiment</td>
<td>0.647</td>
<td><a href="https://www.kaggle.com/dzisandy/twitterxlmrobertabasesentiment" target="_blank">link</a></td>
</tr>
<tr>
<td>distilbert-base-uncased-finetuned-sst-2-english</td>
<td>0.625</td>
<td><a href="https://www.kaggle.com/dzisandy/distilbertbaseuncasedfinetunedsst2english" target="_blank">link</a></td>
</tr>
<tr>
<td>distilroberta-finetuned-tweets-hate-speech</td>
<td>0.608</td>
<td><a href="https://www.kaggle.com/dzisandy/distilrobertafinetunedtweetshatespeech" target="_blank">link</a></td>
</tr>
<tr>
<td>unbiased-toxic-roberta</td>
<td>0.535</td>
<td><a href="https://www.kaggle.com/dzisandy/unbiased-toxic-roberta" target="_blank">link</a></td>
</tr>
<tr>
<td>unitary/toxic-bert</td>
<td>0.467</td>
<td><a href="https://www.kaggle.com/julian3833/toxic-bert" target="_blank">link</a></td>
</tr>
</tbody>
</table>
<p>All models are taken from huggingface.co; the datasets are ready to use as:<br>
<code>model_path = '/path to dataset/ '</code><br>
<code>tokenizer = AutoTokenizer.from_pretrained(model_path)</code><br>
<code>model = AutoModelForSequenceClassification.from_pretrained(model_path)</code></p>
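<p>A minimal inference sketch under the same setup, using the <code>model_path</code>, tokenizer and model loaded above (treating column 1 of the logits as the toxic class is an assumption that depends on each checkpoint's label order):</p>
<pre><code>import torch

@torch.no_grad()
def toxicity_scores(texts, batch_size=32):
    model.eval()
    scores = []
    for i in range(0, len(texts), batch_size):
        batch = tokenizer(texts[i:i + batch_size], truncation=True, padding=True,
                          max_length=256, return_tensors="pt")
        logits = model(**batch).logits
        scores.extend(torch.softmax(logits, dim=-1)[:, 1].tolist())  # assumed toxic column
    return scores
</code></pre>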
<p>Using these models, we obtained our best result with Ridge Regression as the blending rule:</p>
<table>
<thead>
<tr>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.78959</td>
<td>0.79641</td>
</tr>
</tbody>
</table>
<p>Unfortunately, we didn't trust this model and didn't use it for a submission. Very unlucky :(<br>
<strong>But you can see this solution <a href="https://www.kaggle.com/dzisandy/easy-bert-based-models-blending-private-0-79641" target="_blank">here</a></strong></p>
<p>Congratulations on winning prizes! Happy kaggling! If you use the data provided here, please upvote!</p>
|
Jigsaw Rate Severity of Toxic Comments
|
Private score 0.81148. What could've been the 5th place solution (Single model)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winners. I wasn't able to focus on the competition due to some other commitments during the latter half. During the start, I had my eye on one of my baseline models even though it didn't perform well in public LB. It ended up scoring 0.81148 in private LB. So I thought I'll share the approach anyway. </p>
<h2>Data prep</h2>
<h3>Evaluation data</h3>
<p>I used the text pairs provided as validation data, but converted them into a regression score by fitting the pairs to a Bradley-Terry model. <a href="https://www.kaggle.com/jdoesv/wrangled-validation-data" target="_blank">Notebook</a></p>
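<p>As an illustration of that conversion (a generic minorization-maximization Bradley-Terry fit, not necessarily the linked notebook's exact method), assuming a DataFrame with <code>less_toxic</code>/<code>more_toxic</code> columns:</p>
<pre><code>import numpy as np
import pandas as pd

def bradley_terry(pairs, n_iter=100):
    # pairs has 'less_toxic' and 'more_toxic' columns holding comment identifiers
    items = pd.unique(pairs[['less_toxic', 'more_toxic']].values.ravel())
    idx = {c: i for i, c in enumerate(items)}
    winner = pairs['more_toxic'].map(idx).to_numpy()
    loser = pairs['less_toxic'].map(idx).to_numpy()

    strength = np.ones(len(items))
    wins = np.bincount(winner, minlength=len(items)).astype(float)
    for _ in range(n_iter):
        inv = 1.0 / (strength[winner] + strength[loser])
        denom = np.zeros(len(items))
        np.add.at(denom, winner, inv)    # every pair contributes to both of its items
        np.add.at(denom, loser, inv)
        strength = wins / np.maximum(denom, 1e-12)
        strength /= strength.sum()
    return pd.Series(strength, index=items)  # usable as a per-comment regression target
</code></pre>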
<h3>Training data</h3>
<p>Due to the severe overlap between this competition's data and Jigsaw 1's data, I used that original competition's dataset here. All samples that were toxic in any way (along with arbitrarily sampled non-toxic texts) were used. The weight for each of the six columns in the original dataset is the inverse ratio of its count, so that all types of toxic data are normalized to the same scale. I also standardized the scores to lie between 0 and 1. <br>
<a href="https://www.kaggle.com/jdoesv/baseline-data-prep/notebook#Train-data" target="_blank">Notebook available here</a></p>
<h3>Data preprocessing for both train and validation</h3>
<p><a href="https://www.kaggle.com/jdoesv/wt-text-processing-utils/notebook" target="_blank">Notebook</a><br>
Preprocessing was fairly simple and done mostly using regex</p>
<ol>
<li>Removed IP Addresses</li>
<li>De-emojisized all emojis.</li>
<li>Unidecoding.</li>
<li>All repetitions were reduced to a maximum of 3 repetitions. E.g. : <code>hellllllllllllllllo</code> -> <code>helllo</code>, and <code><swearword> <swearword> <swearword> <swearword> <swearword>.....</code> -> <code><swearword> <swearword> <swearword></code></li>
</ol>
<h2>Model Selection:</h2>
<p>One intuition I had was to use a model <strong><em>pretrained</em></strong> on conversation-style corpora (two-way communication) instead of the more standard BERT-like models, which are usually pretrained on Wikipedia, BookCorpus and similar one-way, repeatedly polished texts, unlike comments made on the fly (like this competition's dataset). The only model that came close to this requirement was <a href="https://huggingface.co/vinai/bertweet-large" target="_blank">Bertweet Large</a>. I finetuned it for <strong><em>pointwise ranking</em></strong> regression. <br>
<a href="https://www.kaggle.com/jdoesv/baseline-model-bertweet-regression" target="_blank">Training</a> and <a href="https://www.kaggle.com/jdoesv/baseline-predictions/notebook?scriptVersionId=80303997" target="_blank">inference</a></p>
<p>Note to self: Bookmark competition end dates in calendars so I can allocate some time selecting the final submissions. </p>
<blockquote>
<p>Did you also use jigsaw1 in your selected models?</p>
</blockquote>
<p>I used both Jigsaw 1 and 3 (Unintended bias) datasets. </p>
<blockquote>
<p>why didn't you select it?</p>
</blockquote>
<p>Terrible oversight. I thought the competition was ending today and hence didn't select my two candidate models.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
Single Bert [1 epoch] to get silver medal
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ul>
<li><strong>pretrained model</strong>: <a href="https://huggingface.co/GroNLP/hateBERT" target="_blank">HateBert</a></li>
<li><strong>task type</strong>: regression</li>
<li><strong>score</strong>: 0.80077(1 epoch), 0.80345(5 epochs)</li>
<li><strong>code</strong>: <a href="https://www.kaggle.com/leolu1998/jigsaw-hatebert-reg-infer" target="_blank">Jigsaw - HateBert Reg Infer</a></li>
</ul>
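<p>A minimal sketch of this setup (a single regression head on HateBERT trained with MSE); the tokenization settings and learning rate are assumptions rather than values from the linked notebook:</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("GroNLP/hateBERT")
model = AutoModelForSequenceClassification.from_pretrained("GroNLP/hateBERT", num_labels=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = torch.nn.MSELoss()

def train_step(texts, targets):
    batch = tok(texts, truncation=True, padding=True, max_length=256, return_tensors="pt")
    preds = model(**batch).logits.squeeze(-1)        # one real-valued toxicity score per text
    loss = loss_fn(preds, torch.tensor(targets, dtype=torch.float))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
</code></pre>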
|
Jigsaw Rate Severity of Toxic Comments
|
Toxic Solution and Review (2nd Place)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for your praise; maybe a green hand is more straightforward and puts the knowledge directly into practice LOL. </p>
<p>Thanks! I suppose the notebook has been made public. The loss function I used is MSE, via <code>torch.nn.MSELoss()</code></p>
<p>First of all, congrats to the winners and everyone who enjoyed the competition!! Although this is my first time participating in a competition, I really like this competition on Kaggle, especially the competitive atmosphere here.</p>
<h1>Overview</h1>
<p>To be honest I never thought of finishing in 2nd place, which is largely due to luck, but I will say "Thank you!" to luck for rewarding my persistence with the <strong>BERT</strong> method. We can always marvel at and trust the performance of BERT in the NLP world. <strong>RoBERTa</strong> helped me attain this success, even though its public score was lower than the linear model's; I find that the key point of this competition is perhaps that <strong>the public leaderboard misled us into overfitting</strong>.</p>
<h1>Retrospective</h1>
<p>What I remember most clearly is the first ten days of January, when many people started climbing the public leaderboard at a crazy pace, which made me feel that the overall trend was not normal, so I made two decisions: </p>
<ol>
<li>Running programs locally in order to stop paying so much attention to the public leaderboard scores.</li>
<li>I chose to part ways with a partner I was supposed to team up with. (The story here is that we communicated and did a lot of groundwork and exploration together at the beginning, but during that crazy period in January I disagreed with his obsession with improving the public score and trusting the linear model instead of BERT. Perhaps because the RoBERTa submission scored only 0.79416, which is below 0.8, not to mention the linear model (0.893+); but I realized the linear approach carried overfitting risks as I learned more from papers and from my professor.) <br>
Obviously, that's the reason why we saw such a remarkable shake-up from the public leaderboard to the private leaderboard. Fortunately, luck rewarded my persistence with the BERT method. </li>
</ol>
<h1>About Dataset</h1>
<p>I used data from the last toxic classification competition. Since the goal of this competition is to predict the toxicity of comments, which is a regression task, I took a weighted average of several label columns of the original data to build the target.</p>
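<p>A hedged illustration of that target construction; the exact weights used are not stated in this post, so the values below are placeholders:</p>
<pre><code>import pandas as pd

# placeholder weights: the post does not state the exact values that were used
weights = {"toxic": 0.3, "severe_toxic": 1.0, "obscene": 0.3,
           "threat": 1.0, "insult": 0.5, "identity_hate": 1.0}

df = pd.read_csv("jigsaw-toxic-comment-train.csv")   # assumed filename for the 2018 data
df["target"] = sum(w * df[c] for c, w in weights.items()) / sum(weights.values())
</code></pre>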
<h1>About Data Cleaning</h1>
<p>BERT is pretrained on raw data, which means the pretrained model is based on an uncleaned corpus. I find that data cleaning changes the distribution and loses information. <br>
Usually, data cleaning can improve the performance of linear models, but it can harm the performance of BERT; similarly, if the weight decay or dropout value is too large, model performance degrades. Digging deeper, linear models need cleaned text to reduce the feature size and limit overfitting, while BERT's capacity is large enough that cleaning is not really necessary.<br>
So in the end I made a big change to the code, in particular simplifying the data cleaning procedure.</p>
<p>Comparison:<br>
Apply Data cleaning: Private Score:0.80826, Public Score:0.77609<br>
Delete some Data cleaning: Private Score:0.81363, Public Score:0.79416</p>
<h1>About Feature Engineering</h1>
<ol>
<li>For text preprocessing, only simple tokenization methods are used. I didn't select the features manually because the model can learn the importance of each token.</li>
<li>The BERT model acts like a feature transformer, converting text data into a vector representation of length 768.</li>
</ol>
<h1>About Train Method</h1>
<p>I used BERT (Bidirectional Encoder Representations from Transformers) for this competition. BERT is a state-of-the-art language model that makes use of the Transformer to learn contextual relations between words (or sub-words) in a text. The model I used in this competition consists of two parts: a RoBERTa base and a multi-layer perceptron built on top of it for regression. That is the essence of why I chose this method and stuck with it until the competition ended.</p>
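<p>A minimal sketch of the described architecture (RoBERTa base plus an MLP regression head, trained with <code>torch.nn.MSELoss()</code> as mentioned above); the hidden sizes and dropout are assumptions:</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoModel

class ToxicRegressor(nn.Module):
    def __init__(self, name="roberta-base"):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(name)
        hidden = self.backbone.config.hidden_size          # 768 for roberta-base
        self.head = nn.Sequential(
            nn.Linear(hidden, 256), nn.ReLU(), nn.Dropout(0.2), nn.Linear(256, 1)
        )

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                   # [CLS]-position representation
        return self.head(cls).squeeze(-1)                   # one toxicity score per comment
</code></pre>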
<h1>About Ensemble Weights</h1>
<p>I randomly sampled 5 folds of data, each containing a proportion of the data, trained a separate model on each fold, and then averaged the predictions from the 5 models.</p>
<h1>Optimization</h1>
<p>I used the <strong>Adam optimizer</strong> as it is computationally efficient and works well with large datasets and many parameters. Besides, I also used a learning rate scheduler to decrease the learning rate gradually over epochs. This has the effect of quickly learning good weights early and fine-tuning them later.</p>
<h1>Avoid Overfitting</h1>
<p>I focused on using <strong>weight decay, dropout layers and early stopping</strong> to prevent the models from overfitting. All of these keep the model's generalization relatively good.<br>
To be more specific, for <strong>early stopping</strong> the stopping criterion is based on the validation loss in each epoch.</p>
<p>Thanks everyone, I am happy that everyone helps newcomers; I learned a lot from this competition! </p>
<p>I'm happy that it could encourage you! From my perspective, reducing overfitting plays an essential role in competitions, so I suggest reviewing that topic and keeping your model's generalization strong. Besides, you can also directly take part in beginner-friendly competitions such as "Titanic", which provide specific guidance for you. And just do it!!!! You would never believe what can happen between the public leaderboard and the private leaderboard.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
Trust your CV?
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>What's your CV scheme? It seems important to choose the right CV here.<br>
Mine is a stratified k-fold over the connected components of the validation data aggregated by majority voting.</p>
<pre><code>import networkx as nx
from sklearn.model_selection import StratifiedKFold

# build a graph where each majority-vote pair (less_toxic_id, more_toxic_id) is an edge
g = nx.Graph()
for _, row in df_majority.iterrows():
    less_toxic = row['less_toxic_id']
    more_toxic = row['more_toxic_id']
    g.add_edge(less_toxic, more_toxic)

# each connected component groups comments that are linked through shared pairs
sizes = []
groups = []
for c in nx.connected_components(g):
    sizes.append(len(c))
    groups.append(c)

# stratify the folds by component size so every fold gets a similar pair structure
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (_, val_idx) in enumerate(skf.split(groups, y=sizes)):
    # set folds to df_majority pairs whose components fall into val_idx
    ...
</code></pre>
<p>Also, I used <strong>Ruddit pairs for validation.</strong> My final solution is MLP stacking with 0.818 public LB, 0.809 private LB, 0.760+ on the 10-fold connected-components validation and 0.710+ on the Ruddit validation. Feel free to share your CV scheme or your ideas for choosing a final solution ;)</p>
|
Jigsaw Rate Severity of Toxic Comments
|
Unselected top scored simple Kmeans solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>A simple K-means clustering model, scored by the Euclidean distance from the centroid of the non-toxic cluster, reached 0.818 on the private LB. </p>
<p><img src="https://i.postimg.cc/hjkKhC0K/Capture.jpg" alt=""></p>
<p>K-means clustering was performed on the Jigsaw Multilingual Toxic Comment Classification dataset, leaving the validation data from this competition untouched. It scored 0.718 on this competition's validation data.</p>
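<p>A hedged reconstruction of the idea (the vectorizer, number of clusters and variable names are assumptions, since the notebook details are not shown here):</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import euclidean_distances

# train_texts / train_labels come from the Jigsaw Multilingual data (assumed variables)
vec = TfidfVectorizer(max_features=50_000, sublinear_tf=True)
X = vec.fit_transform(train_texts)
km = KMeans(n_clusters=2, random_state=0).fit(X)

# call the cluster that holds most comments labelled non-toxic the "non-toxic" cluster
nontoxic = np.bincount(km.labels_[np.asarray(train_labels) == 0]).argmax()
centroid = km.cluster_centers_[nontoxic].reshape(1, -1)

# larger distance from the non-toxic centroid = higher predicted severity
scores = euclidean_distances(vec.transform(comments_to_score), centroid).ravel()
</code></pre>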
|
Jigsaw Rate Severity of Toxic Comments
|
[45th solution]Lucky Beginner's only 1 submission
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>First</h1>
<p>A Kaggle beginner got a silver medal with only 1 submission thanks to "great luck & a few ideas", so here is the solution.<br>
It is not about advanced strategies or difficult techniques, and there may be some technical errors and poor English.</p>
<h1>Overview</h1>
<p>The public LB is unreliable, so I used validation_data.csv, the dataset of this competition, as the evaluation data.<br>
However, validation_data also contains noise from pairs the annotators may have judged more or less at random.<br>
I thought the annotators might judge pairs randomly in the following cases, so I deleted those patterns from validation_data and used the rest as the evaluation dataset. The details are posted <a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/303429" target="_blank">here</a>.<br>
(1) data with similar toxicity scores, <br>
(2) data with lower toxicity scores, <br>
(3) data with similar meanings (By sentence transformers. Code is <a href="https://www.kaggle.com/mujrush/reliable-validation-data-by-sentence-transformers" target="_blank">here</a>)</p>
<h1>Procedure</h1>
<ul>
<li><p>Step 1. Model training and prediction<br>
Using the first jigsaw competition data, train some models to predict the toxicity score of validation_data.</p></li>
<li><p>Step 2. Pre-processing of validation_data<br>
Remove the data corresponding to steps 1-3 from the prediction data of each top model in step 1. As a result, the validation_data has been reduced to half the size.<br>
(1) Remove data with a close toxicity score (+0.01 to -0.01)<br>
(2) Remove data with low toxicity score (0.15 or less) <br>
(3) Remove data with close meaning (0.65 or higher by sentence transformers; see the sketch after this list)</p></li>
<li><p>Step3. Model evaluation by pre-processed validation_data<br>
Predict with some models again with pre-processed validation_data from step2 and submit the best score in the ensemble.</p></li>
<li><p>(Step4. Pray to get a medal. This procedure is not reproducible)</p></li>
</ul>
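<p>A hedged sketch of filter (3) from Step 2 (the encoder checkpoint is an assumption; <code>val</code> is validation_data.csv loaded as a DataFrame):</p>
<pre><code>from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed checkpoint
emb_less = encoder.encode(val["less_toxic"].tolist(), convert_to_tensor=True)
emb_more = encoder.encode(val["more_toxic"].tolist(), convert_to_tensor=True)

# cosine similarity of each (less_toxic, more_toxic) pair, then drop near-duplicates
cos = util.cos_sim(emb_less, emb_more).diagonal()
val_kept = val[(cos &lt; 0.65).cpu().numpy()]
</code></pre>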
<h1>Dataset</h1>
<p>The data from <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge" target="_blank">the first jigsaw competition</a></p>
<h1>Model</h1>
<p>1. BERT-jigsaw-severetoxic<br>
2. roberta<br>
3. hatebert<br>
4. BERT<br>
5. twitter-roberta-base-offensive<br>
6. TFIDF+ridge</p>
<h1>Training</h1>
<ul>
<li>fold :5</li>
<li>epoch: 10</li>
<li>Learning rate change: No</li>
<li>Scheduler: No</li>
</ul>
<h1>Ensemble</h1>
<p>roberta + TFIDF/Ridge</p>
<p>Here's a table of model results.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>valid_data score</th>
<th>Preprocess valid_data</th>
<th>Preprocessed valid_data score</th>
<th>ensemble</th>
</tr>
</thead>
<tbody>
<tr>
<td>BERT-jigsaw-severetoxic</td>
<td>0.691078</td>
<td>yes</td>
<td>0.78536680</td>
<td></td>
</tr>
<tr>
<td>roberta</td>
<td>0.687491</td>
<td>yes</td>
<td>0.78744243</td>
<td>yes(weight:0.892)</td>
</tr>
<tr>
<td>hatebert</td>
<td>0.686561</td>
<td>yes</td>
<td>0.78348576</td>
<td></td>
</tr>
<tr>
<td>BERT</td>
<td>0.685366</td>
<td>yes</td>
<td>0.78478303</td>
<td></td>
</tr>
<tr>
<td>twitter-roberta-base-offensive</td>
<td>0.685000</td>
<td>yes</td>
<td>0.78536680</td>
<td></td>
</tr>
<tr>
<td>TFIDF/Ridge</td>
<td>0.681513</td>
<td></td>
<td>0.78108581</td>
<td>yes(weight:0.108)</td>
</tr>
</tbody>
</table>
<h1>Improvements</h1>
<ul>
<li>Change the training dataset/model/folds/epochs in step 1.</li>
<li>Add a learning-rate schedule/scheduler to training in step 1.</li>
<li>Tuning of each threshold in step2.</li>
<li>Ensemble selection in step3. etc.</li>
</ul>
<h1>Finally</h1>
<p>In this competition, I was happy to get upvotes and reactions on my first submissions to Kaggle kernels and discussions. I think it's great that the Kaggle community is so warm and friendly, even for a beginner like me.</p>
<p>I hope this post will be useful to you, dear Kagglers.</p>
|
Jigsaw Rate Severity of Toxic Comments
|
[9th solution] Ensemble of 5 external dataset transformers
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all huge thanks to the organizers of the competition and to all the participants. I've learned a lot and strengthened my faith in the "Trust your CV" principle 🙂</p>
<h4>High level description</h4>
<p>List of models used in the ensemble:</p>
<ol>
<li>CV: 0.7356, LB: 0.786 - roberta-base trained on CCC 2017 dataset using custom attention head, cross-entropy and margin ranking losses, a multi-staged prediction (first predict toxicity without knowing the 6 labels, then aggregate that prediction and 6 labels into the final prediction).</li>
<li>CV: 0.7450, LB: 0.747 - roberta-base trained on CCC 2017 dataset using a weighted average of margin ranking and binary cross-entropy loss.</li>
<li>CV: 0.7403, LB: 0.790 - roberta-base regression model trained on Ruddit dataset.</li>
<li>CV: 0.7448, LB: 0.751 - unitary/unbiased-toxic-roberta fine-tuned on Ruddit dataset with 2-layer DNN as a regressor.</li>
<li>CV: 0.7429, LB: never submitted - roberta-base regression trained on Wiki Talk Labels dataset.</li>
</ol>
<p>The ensembling has CV 0.7605, LB 0.779. All those models were ensembled using rankdata + mean.</p>
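<p>A minimal sketch of that rank-average blending (variable names are assumptions):</p>
<pre><code>import numpy as np
from scipy.stats import rankdata

def rank_blend(model_preds):
    # model_preds: list of 1-D score arrays over the same comments
    ranks = [rankdata(p) / len(p) for p in model_preds]   # put every model on a common scale
    return np.mean(ranks, axis=0)
</code></pre>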
<h4>Validation strategy</h4>
<p>All of the models were validated on <code>validation_data.csv</code> aggregated using majority voting.<br>
At first I defined the final CV score as <code>CV * 0.65 + LB * 0.35</code>; those weights represented my personal confidence in each score. At some point I noticed a huge flakiness in the public LB score: a small model change (0.001 score difference on CV) led to a 0.05-0.1 change in the LB score. I hypothesized that only a small number of unique comments are used in either the <code>more_toxic</code> or the <code>less_toxic</code> role. More about it here: <a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/305301" target="_blank">https://www.kaggle.com/c/jigsaw-toxic-severity-rating/discussion/305301</a>. So I gave up on LB and stuck to the local CV score only.</p>
<h4>Ensembling strategy</h4>
<p>At first, I wanted to ensemble 14 models (trained on UBTC, Offenseval, C3, Hate Speech Measurement) instead of 5. But in a ranking task more models do not necessarily give a better score: the score distribution differs a lot across the models, and the toxicity definition differs a lot across the datasets (Ruddit and Offenseval, for example). To get the maximum out of ensembling I generated all subsets of the 14 models and ran the validation on results blended using rankdata + mean. This way I identified that the highest score is reached by the subset of the 5 aforementioned models.</p>
<p>I also experimented a lot with various ensembling strategies: taking the mean across the closest scores only, writing a custom comparator function and sorting the comments based on voting, ranking + artificial distribution generation (Poisson, Normal), etc. Unfortunately, everything worked slightly worse than rankdata + mean on CV.</p>
<h4>General transformer tricks</h4>
<p>The following tricks were used for training all the models:</p>
<ol>
<li>Per-layer LR, lower for backbone layers (2e-5), higher for DNN or final attention head (up to 1e-3).</li>
<li>High and dynamic validation frequency: typically the model is trained for 1-2 epochs with validation at least every 10% of batches. The higher the accuracy, the higher the validation frequency.</li>
<li>AdamW optimizer and cosine annealing with warmup scheduler, warmup typically takes 1 validation cycle.</li>
</ol>
<h4>Things which didn't work</h4>
<ul>
<li>MLM pretraining</li>
<li>Augmentations / TTA. Although I've only explored some basic ones, they seem to change the sense or the toxicity of the comment significantly, and thus worsen the score. Moreover, I've started playing with them at the very end of the competition so probably that's just a lack of time.</li>
<li>unitary/toxic-bert backbone. It gives superior performance, but it has been trained on the whole CCC 2017 dataset and that's why some portion of validation_data.csv comments have been exposed to it before. I've considered it a leaky one and switched to roberta-base and unitary/unbiased-toxic-roberta.</li>
</ul>
|
Jigsaw Rate Severity of Toxic Comments
|
[Simplest Gold medal ?]11th solution (Ridge + Detoxify + LightGBM)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Rate Severity of Toxic Comments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>WARNING</h1>
<p>First of all, congratulations to all winners. <br>
Though, really fortunately, I got the gold medal, <br>
<strong>90% (or more?) of this result may be just luck.</strong></p>
<p>Therefore, my solution is maybe not so interesting, special, or beneficial to you.</p>
<p>What I did are just </p>
<ul>
<li>search additional data on Kaggle datasets</li>
<li>Use some libraries to get toxicity of text</li>
<li>make features and train simple LightGBM</li>
</ul>
<p>If it's ok, then please read my solution (or my poem).</p>
<h2>4 lines Overview</h2>
<p>step1 : train ridge regressor by tf-idf of a lot of datasets <br>
step2 : Toxicity prediction library(Detoxify)<br>
step3 : convert validation_data.csv to rate of more toxic for train<br>
step4 : Train LightGBM with these features and additional naive features</p>
<h2>Overview_image</h2>
<p><img src="https://pbs.twimg.com/media/FLFjKScUcAQRG02?format=jpg&name=large" alt="画像"></p>
<h1>step1 train ridge regressor by tf-idf of a lot of datasets</h1>
<h2>1-1 : reading ridge regression notebook</h2>
<p>I started from <a href="https://www.kaggle.com/samarthagarwal23/mega-b-ridge-to-the-top-lb-0-85?scriptVersionId=80534222" target="_blank">this ridge regression notebook</a>.</p>
<p>The notebook is</p>
<ul>
<li>To vectorize, it uses tf-idf</li>
<li>ridge regression with a lot of external data </li>
<li>brute-force weighting of the ridge results <br>
(it takes a lot of memory, so I couldn't weight very many models)</li>
</ul>
<p>After reading and executing this notebook, I thought</p>
<ul>
<li>The more external data there is, the more diverse predictions I can make<ul>
<li>I thought diverse predictors were needed because the score is averaged over many workers. </li></ul></li>
<li>Weighting the ridge results by another method<ul>
<li>Finally, I chose LightGBM for optimally weighting the ridge results.</li></ul></li>
</ul>
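<p>Before the data details, here is a minimal sketch of the tf-idf + ridge step (the vectorizer settings and variable names are assumptions, not the linked notebook's exact code):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# texts / toxicity_targets stand for the concatenated external datasets listed below
ridge = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), max_features=200_000),
    Ridge(alpha=1.0),
)
ridge.fit(texts, toxicity_targets)
ridge_feature = ridge.predict(comments_to_score["text"])  # later fed to LightGBM in step 4
</code></pre>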
<h2>1-2 : search additional data</h2>
<p>The notebook uses these datasets to train the ridge regression:</p>
<ul>
<li>ruddit data<ul>
<li><a href="https://www.kaggle.com/rajkumarl/ruddit-jigsaw-dataset" target="_blank">https://www.kaggle.com/rajkumarl/ruddit-jigsaw-dataset</a></li></ul></li>
<li>previous jigsaw data<ul>
<li><a href="https://www.kaggle.com/julian3833/jigsaw-multilingual-toxic-comment-classification" target="_blank">https://www.kaggle.com/julian3833/jigsaw-multilingual-toxic-comment-classification</a></li>
<li><a href="https://www.kaggle.com/c/jigsaw-toxic-severity-rating" target="_blank">https://www.kaggle.com/c/jigsaw-toxic-severity-rating</a></li></ul></li>
</ul>
<p>And I added </p>
<ul>
<li>wikipedia data<ul>
<li><a href="https://www.kaggle.com/manishguptads/wikipedia-toxicity" target="_blank">https://www.kaggle.com/manishguptads/wikipedia-toxicity</a></li></ul></li>
<li>Dynamically Generated Hate Speech Dataset<ul>
<li><a href="https://www.kaggle.com/usharengaraju/dynamically-generated-hate-speech-dataset" target="_blank">https://www.kaggle.com/usharengaraju/dynamically-generated-hate-speech-dataset</a></li></ul></li>
<li>Malignant Comment Classification<ul>
<li><a href="https://www.kaggle.com/surekharamireddy/malignant-comment-classification" target="_blank">https://www.kaggle.com/surekharamireddy/malignant-comment-classification</a></li></ul></li>
</ul>
<p>I used them to train the ridge regressor and then trained LightGBM with the regressor's outputs.<br>
(continued on step 4)</p>
<h1>step2 : Call toxicity prediction library</h1>
<p>I use <a href="https://github.com/unitaryai/detoxify" target="_blank">Detoxify</a> . It’s toxicity scoring library by Bert like models trained with past zigsaw competitions.<br>
And I found <a href="https://www.kaggle.com/steubk/detoxify-quick-prediction-without-internet" target="_blank">offline execution notebook</a></p>
<p>This library have three kinds of pretrained model (original,unbiased,multilingual)<br>
I use all of these models.</p>
<p>I also use these models to convert train text to features .<br>
(continued on step 4)</p>
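<p>A hedged sketch of turning the three Detoxify checkpoints into tabular features (the column naming below is just one possible convention):</p>
<pre><code>import pandas as pd
from detoxify import Detoxify

models = {name: Detoxify(name) for name in ("original", "unbiased", "multilingual")}

def detoxify_features(texts):
    feats = {}
    for name, model in models.items():
        preds = model.predict(list(texts))      # dict of score lists ('toxicity', 'insult', ...)
        for key, values in preds.items():
            feats[f"{name}_{key}"] = values
    return pd.DataFrame(feats)
</code></pre>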
<h1>step3 : convert validation_data.csv to rate of more toxic for train</h1>
<p>For training data, I used the provided validation_data.csv.<br>
One problem is how to construct the target value.</p>
<p>As I read the competition overview, the test-data score is calculated like this↓</p>
<p><img src="https://pbs.twimg.com/media/FLFk9DnUcAEtejb?format=png&name=large" alt="図2"><br>
In other words, the score is the rate of being the more toxic comment. </p>
<p>Then, for training, I applied the same conversion to the training data and trained my model to predict this score directly.</p>
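<p>A minimal sketch of that conversion, assuming validation_data.csv with <code>less_toxic</code>/<code>more_toxic</code> text columns:</p>
<pre><code>import pandas as pd

val = pd.read_csv("validation_data.csv")              # columns: worker, less_toxic, more_toxic
wins = val["more_toxic"].value_counts()
losses = val["less_toxic"].value_counts()
counts = pd.concat([wins, losses], axis=1).fillna(0)
counts.columns = ["wins", "losses"]

# target = how often a comment was judged the more toxic one of its pairs
counts["target"] = counts["wins"] / (counts["wins"] + counts["losses"])
counts.index.name = "text"
train_df = counts.reset_index()
</code></pre>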
<h1>step4: Train LightGBM with these features and additional naive features</h1>
<p>Finally, I concatenated all of these features, trained, and predicted the score.<br>
I also added some naive features (text length, word count, and so on).<br>
Please see the image attached at the beginning of this post.</p>
<h1>result</h1>
<p>public LB 0.79242<br>
private LB 0.81062</p>
<p><strong>Thank you for reading !</strong><br>
If you have any question, I'll answer (if I can …)</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
106th place solution (21 models blend)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! Yeah, same story: PyTorch helps you dive deep into your models, but sometimes you pay for this by forgetting the syntax :D</p>
<p>Thank you very much, hope you get Expert in Competitions soon, as well as Master and Grandmaster ;)</p>
<p>Thank you, my friend!</p>
<p>Good day everyone! I was waiting eagerly for the results of this competition to write a short report about the solution of our team and what it meant to us😊</p>
<p>You can find our final submitted notebook here: <a href="https://www.kaggle.com/blackitten13/lstm-cp-clip-nb-tw-tfidf-bert4x-gpt-test-best?scriptVersionId=16367978">https://www.kaggle.com/blackitten13/lstm-cp-clip-nb-tw-tfidf-bert4x-gpt-test-best?scriptVersionId=16367978</a></p>
<h1>Our team</h1>
<p>I started solving this competition about 2-3 weeks before the final deadline. I was spending all 5 of my submissions per day; however, at some point I understood that model blending was very powerful in this competition, but I didn't have much time/resources to make many strong diverse models. So, I decided to blend with my friend <a href="https://www.kaggle.com/artkulak">Artem Kulakov</a>, a BSc CS student at HSE. I also wanted to find another person to merge with and saw that <a href="https://www.kaggle.com/lopuhin">Konstantin Lopuhin</a> had started to make some progress towards solving the problem. He had a slightly worse score than me, though. So, I proposed a merge and he accepted it, and I was very happy about it - it is my first serious Kaggle competition, but I still had the honour to work with such a well-known Grandmaster!</p>
<h1>Our approach</h1>
<p>As I already said, we believed in blending, so we all tried to prepare strong diverse models and mix them properly. Artem and I were working on LSTM-based models, while Konstantin, as a master of transformers, designed BERT- and GPT-2-based solutions. I also added some TF-IDF models and came up with a way to make a good blend.</p>
<p>Next, I want to give more details about our methods. In this report, I tried to focus on what made our solutions unique in some ways - so I skip things that were well known from public kernels, e.g. preprocessing for LSTM models, which is exactly the same as in the baseline kernel I refer to below. Also, we didn't invent anything for the loss function.</p>
<h3>Validation</h3>
<p>As Konstantin correctly noticed, it was pointed out that in the 2nd stage the data would consist of 50% annotated identities and 50% not annotated identities. Therefore, we decided to use a holdout set of the same size as the test set, stratified by the <code>identity_annotator_count</code> column (you can see the code <a href="https://www.kaggle.com/blackitten13/my-validation">here</a> - where the train set was split into <code>train_df</code> and <code>val_df</code>). We decided not to use OOF predictions for several reasons: it could take too much time, we still had many models, the holdout set was relatively small (so it wouldn't hurt the model quality much), and the validation score correlated well with the public score. It is also worth noting that our models barely fit in the private dataset quota 😄 </p>
<h3>LSTMs</h3>
<p>As a baseline, we used <a href="https://www.kaggle.com/chechir/bert-lstm-rank-blender">this kernel</a>. Then, throughout the whole time of solving the competition, with 5 submissions per day, we tested many ideas. Finally, all our LSTM-based models had similar architecture:</p>
<ul>
<li>Embedding (non-trainable)</li>
<li>SpatialDropout (0.3)</li>
<li>LSTM #1 (128/256 units, bidirectional)</li>
<li>LSTM #2 (128/256 units, bidirectional) - it took the result of LSTM #1 as an input</li>
<li>Concatenation of 3 types of pooling: Global Average Pooling, Global Max Pooling and Softmax Pooling (e.g. see <a href="https://www.cs.tut.fi/sgn/arg/dcase2017/documents/challenge_technical_reports/DCASE2017_Salamon_192.pdf">[1]</a>)</li>
<li>Concatenation of two layers - for output and auxiliary output, as in the baseline kernel. However, instead of being just a dense layer, we made it to be Dropout (0.5) -> BatchNorm -> Dense.</li>
</ul>
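<p>A hedged Keras sketch of this architecture (the sequence length, 6 auxiliary targets and the softmax-pooling implementation are assumptions; the real models concatenated the main and auxiliary outputs as in the baseline kernel, which is approximated here with two heads):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

def softmax_pool(x):
    # softmax-weighted average over timesteps (the "softmax pooling" listed above)
    w = layers.Softmax(axis=1)(layers.Dense(1)(x))
    return layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([w, x])

def build_lstm(embedding_matrix, max_len=220, n_aux=6, units=128):
    inp = layers.Input((max_len,), dtype="int32")
    x = layers.Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                         weights=[embedding_matrix], trainable=False)(inp)
    x = layers.SpatialDropout1D(0.3)(x)
    x = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(x)
    x = layers.concatenate([layers.GlobalAveragePooling1D()(x),
                            layers.GlobalMaxPooling1D()(x),
                            softmax_pool(x)])
    x = layers.Dropout(0.5)(x)
    x = layers.BatchNormalization()(x)
    out = layers.Dense(1, activation="sigmoid", name="target")(x)
    aux = layers.Dense(n_aux, activation="sigmoid", name="aux")(x)
    return tf.keras.Model(inp, [out, aux])
</code></pre>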
<p>In order to make our model diverse, we came up with some changes that we made in them.</p>
<p><em><strong>Final models</strong></em></p>
<ol>
<li>[CP] (<a href="https://www.kaggle.com/blackitten13/bilstm-crawl-paragram-6-ep-0-train">https://www.kaggle.com/blackitten13/bilstm-crawl-paragram-6-ep-0-train</a>) - this model took concatenated Crawl and Paragram vectors as embeddings, which were provided <a href="https://www.kaggle.com/iezepov/gensim-embeddings-dataset">in this dataset</a>. It had 128 hidden LSTM units.</li>
<li>[CP with target clip 0.05] (<a href="https://www.kaggle.com/blackitten13/bilstm-c-p-target-clip-0-05-5-ep-0-train-val?scriptVersionId=16115529">https://www.kaggle.com/blackitten13/bilstm-c-p-target-clip-0-05-5-ep-0-train-val?scriptVersionId=16115529</a>) - the same model, but with a trick - train target values from [0, 0.05] were clipped to 0.05, and the ones from [0.95, 1] were clipped to 0.95, in order to prevent too big model certainty and the impact of noise. And also here we changed number of hidden LSTM units to 256.</li>
<li>[Nb] (<a href="https://www.kaggle.com/blackitten13/bilstm-numberbatch-0-train-val">https://www.kaggle.com/blackitten13/bilstm-numberbatch-0-train-val</a>) - LSTM model with Numberbatch embeddings (from the same dataset) with 256 hidden LSTM units.</li>
<li>[Tw] (<a href="https://www.kaggle.com/blackitten13/bilstm-twitter-1-train-val">https://www.kaggle.com/blackitten13/bilstm-twitter-1-train-val</a>) - LSTM model with Twitter embeddings with 256 hidden LSTM units.</li>
</ol>
<p>Each model was trained with 3 different seeds, so for each model we obtained three variants instead of one. For the blend, we took the best epoch on validation for each model/seed. Thus we got 3 * 4 = 12 different LSTM models for the blend.</p>
<p><em><strong>What was tried but didn't work</strong></em></p>
<ul>
<li>Adding convolutional layers in many different ways</li>
<li>Attention</li>
<li>Adding GRU cells</li>
<li>Focal loss</li>
</ul>
<p><em><strong>A tragedy</strong></em></p>
<p>As I wrote <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/97387">here</a>, we had a tragedy - we didn't manage in time to fully include the models made by Artem (so for the final solution we took only the models prepared by me and Konstantin 😔 ). He had some strong candidates - we calculated validation scores for them and even included them in a blend, but, unfortunately, we were too late to put it all together because there were ~10 of them, all with different preprocessings 😲 (however, as we calculated later, including them would have given us a boost of just a few places). Still, I want to include his interesting thoughts:</p>
<ul>
<li>using regular-expression replacements with the patterns below gave a slight boost to both CV and LB scores:
<code>replacement_patterns = [
(r'won\'t', 'will not'),
(r'can\'t', 'cannot'),
(r'(I|i)\'m', 'I am'),
(r'ain\'t', 'is not'),
(r'(\w+)\'ll', '\\g&lt;1&gt; will'),
(r'(\w+)n\'t', '\\g&lt;1&gt; not'),
(r'(\w+)\'ve', '\\g&lt;1&gt; have'),
(r'(\w+)\'s been', '\\g&lt;1&gt; has been'),
(r'(\w+)\'s', '\\g&lt;1&gt; is'),
(r'(\w+)\'re', '\\g&lt;1&gt; are'),
(r'(\w+)\'d', '\\g&lt;1&gt; would')
]</code></li>
<li>blending predictions from each model's epochs with weights growing exponentially (as was shared in many kernels) gave a big boost</li>
<li>adding statistical features collected for each text sample also increased the score (although many people wrote that it did not work)</li>
<li>it was possible to dramatically reduce the train size by excluding easy training samples, in the following way: train a simple LSTM network for 1 epoch and then exclude all samples scored very close to zero (these samples were easy to classify even when no training was done on them)</li>
</ul>
<p><em>didn't work</em>:</p>
<ul>
<li>pseudo labeling (probably more time should have been invested in it to make it work)</li>
<li>Crawl + Glove embeddings - worked worse than Crawl + Paragram</li>
<li>in some recent competitions there was advice to set the embedding layer to trainable for the last epoch. It slightly overfits, but adds more diversity to the final blend. Since this technique wasn't explored fully, though, it did not work for us either.</li>
</ul>
<h3>Transformers</h3>
<p>For transformers, we took <a href="https://www.kaggle.com/yuval6967/toxic-bert-plain-vanila">Yuval Reina’s kernel</a> as a starting point, converted it to a script, added a custom batcher (more random and more efficient than in kernels), slightly increased train sequence length, increased test sequence length, which gave around 0.399 LB from a single model on one epoch and took around 4-5 h to train on 2080ti. Unfortunately, scaling it up was not successful - we didn’t get a better score from 2 epochs, or from bert-large. Still we added bert-base-cased and bert-large as they provided diversity in the ensemble.
We also trained a gpt-2 model, which needed quite a bit of tweaks to get to a reasonable score. It was still only around 0.396 for a single model, but added to the ensemble nonetheless.
Our code for transformers is available at <a href="https://github.com/lopuhin/kaggle-jigsaw-2019">https://github.com/lopuhin/kaggle-jigsaw-2019</a>.
Overall we found <a href="https://github.com/huggingface/pytorch-transformers">https://github.com/huggingface/pytorch-transformers</a> really nice to use. It’s clear that transformers are now a must in any NLP competition, and we’re still waiting for more tricks to be discovered on how to use them.</p>
<h3>TF-IDF</h3>
<p>To make models even more diverse, we trained several models on two different TF-IDF matrices. To do this, we used two kernels from the previous competition: [<a href="https://www.kaggle.com/chernyshov/logistic-regression-with-preprocessing">2</a>, <a href="https://www.kaggle.com/coolcoder22/simple-logisticregression">3</a>]. For boosting, we also tuned some parameters on the validation set. Finally, for this part we got 4 models:</p>
<ul>
<li><a href="https://www.kaggle.com/blackitten13/fork-of-train-preprocessing-for-tf-idf">2</a> <a href="https://www.kaggle.com/blackitten13/simple-tf-idf-models-train-val">LightGBM</a> models</li>
<li><a href="https://www.kaggle.com/blackitten13/simple-tf-idf-models-train-val">Logistic Regression</a> model</li>
<li><a href="https://www.kaggle.com/blackitten13/simple-tf-idf-models-train-val">NB-SVM</a> model</li>
</ul>
<h3>Blending</h3>
<p>And this is where the magic begins. At some point we noticed that we got better quality on the validation/public set if we changed the coefficients of the models in the blend - even making them look like very strange constants, and even negative. In order not to do this by hand, we wrote a script for tuning them, which you can see in the validation part of our solution. The idea is to initialize the model weights and then change them iteratively, one by one in each iteration - specifically, each weight is represented as a fraction, and each time we try to change its numerator up or down by some integer values. The algorithm finishes when an iteration produces no changes in the weights.</p>
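<p>A hedged re-implementation of that search as coordinate ascent over integer numerators (the denominator, step sizes and stopping rule below are assumptions; <code>preds</code> is an [n_samples, n_models] matrix of holdout predictions and <code>score_fn</code> the competition metric):</p>
<pre><code>import numpy as np

def tune_weights(preds, score_fn, denom=207, steps=(1, 2, 5), max_rounds=100):
    num = np.full(preds.shape[1], denom // preds.shape[1])   # start close to uniform
    best = score_fn(preds @ (num / denom))
    for _ in range(max_rounds):
        changed = False
        for i in range(preds.shape[1]):
            for step in steps:
                for delta in (step, -step):
                    cand = num.copy()
                    cand[i] += delta                          # negative weights are allowed
                    s = score_fn(preds @ (cand / denom))
                    if s &gt; best:
                        best, num, changed = s, cand, True
        if not changed:
            break                                             # no change in a full pass: stop
    return num / denom, best
</code></pre>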
<p><em><strong>What was tried but didn't work</strong></em></p>
<ul>
<li>scipy optimization of coefficients (it gave trivial, irrelevant results and took a long time, though we could have made a mistake somewhere)</li>
<li>stacking (this was a surprise - neither LR nor LightGBM could beat the blending approach)</li>
<li>other types of blending (geometric, harmonic, rank)</li>
<li>a mad idea ~5 hours before the deadline - including 4 extracted simple features such as <code>clean_word_len</code>, <code>clean_char_len</code>, <code>clean_chars</code>, <code>clean_chars_ratio</code> as models in the blend. Obviously, they had AUC-ROC ~0.5; however, they gave some boost on validation 😁 There was no boost on public, though, so we decided to throw them out of harm's way.</li>
</ul>
<h1>The results</h1>
<p>The general results are available in the following notebooks:</p>
<p><a href="https://www.kaggle.com/blackitten13/lstm-cp-clip-nb-tw-tfidf-berts-gpt-val2-best-r?scriptVersionId=16349926">Validation notebook (version 2)</a>
<a href="https://www.kaggle.com/blackitten13/lstm-cp-clip-nb-tw-tfidf-bert4x-gpt-test-best?scriptVersionId=16367978">Final notebook with submission (version 3)</a></p>
<p>In the final solution, we had 21 models. On the table below you can find the results of each model.</p>
<p>| Model | Validation score | Blend coefficient
| --- | --- | ---|
| <a href="https://www.kaggle.com/blackitten13/bilstm-crawl-paragram-6-ep-0-train">LSTM CP (seed 1, epoch 3)</a> | 0.93335312 | 0.0316 = 50/130 * 17/207
| LSTM CP (seed 2, epoch 3) | 0.93439617 | 0.0253 = 40/130 * 17/207
| LSTM CP (seed 3, epoch 3) | 0.93436607 | 0.0253 = 40/130 * 17/207
| <a href="https://www.kaggle.com/blackitten13/bilstm-c-p-target-clip-0-05-5-ep-0-train-val?scriptVersionId=16115529">LSTM clip (seed 1, epoch 2)</a> | 0.93341418 | 0.0514 = 40/124 * 33/207
| LSTM clip (seed 2, epoch 3) | 0.93333687 | 0.0514 = 40/124 * 33/207
| LSTM clip (seed 3, epoch 3) | 0.93372217 | 0.0566 = 44/124 * 33/207
| <a href="https://www.kaggle.com/blackitten13/bilstm-numberbatch-0-train-val">LSTM nb (seed 1, epoch 4)</a> | 0.92830396 | -0.0204 = /4040/123 * -13/207
| LSTM nb (seed 2, epoch 4) | 0.92741954 | -0.0199 = 39/123 * -13/207
| LSTM nb (seed 3, epoch 4) | 0.92771196 | -0.0225 = 44/123 * -13/207
| LSTM tw (seed 1, epoch 3) | 0.93013805 | -0.0016 = 40/119 * -1/207
| <a href="https://www.kaggle.com/blackitten13/bilstm-twitter-1-train-val">LSTM tw (seed 2, epoch 3)</a> | 0.92989958 | -0.0016 = 39/119 * -1/207
| LSTM tw (seed 3, epoch 3) | 0.93067808 | -0.0016 = 40/119 * -1/207
| <a href="https://www.kaggle.com/blackitten13/fork-of-train-preprocessing-for-tf-idf">TF-IDF LightGBM #1</a> | 0.88133766 | -0.0037 = 42/164 * -3/207
| <a href="https://www.kaggle.com/blackitten13/simple-tf-idf-models-train-val">TF-IDF LightGBM #2</a> | 0.88202771 | -0.0036 = 41/164 * -3/207
| <a href="https://www.kaggle.com/blackitten13/simple-tf-idf-models-train-val">TF-IDF LR</a> | 0.88507331 | -0.0039 = 44/164 * -3/207
| <a href="https://www.kaggle.com/blackitten13/simple-tf-idf-models-train-val">TF-IDF NB-SVM</a> | 0.88679439 | -0.0033 = 37/164 * -3/207
| BERT Uncased | <strong>0.94027252</strong> | 0.2986 = 59/126 * 132/207
| BERT Cased | 0.93841706 | 0.1518 = 30/126 * 132/207
| BERT Uncased Fresh | 0.93919279 | -0.1417 = -28/126 * 132/207
| BERT Large | 0.94019660 | <strong>0.3290 = 65/126 * 132/207</strong>
| GPT-2 | 0.93866539 | 0.2029 = 42/207
| <strong><em>Megablend</em></strong> | <strong><em>0.94379378</em></strong></p>
<p>To conclude: personally, this was a very useful and interesting competition for me. I learned a lot, tried many things, worked in a great team, and, last but not least, this is my first ever medal at Kaggle! (and it is not even bronze - it is silver!) Also it is a second medal for Artem, meaning he is an Expert now!🎉 🎉 </p>
<h1>Bonus: an interesting bug</h1>
<p>A few days before the deadline I noticed a very strange behaviour of some LSTM models - they gave different validation scores depending on which kernel I put them in. It was even like this: say there are models m1, m2, m3. If I put m1 in one kernel, m2 in a second kernel and m3 in a third, they give some results. If instead I put all of them in one kernel in the order m1, m2, m3, then m1 gives the same result, but m2 and m3 - different ones. Wtf? I debugged it the whole day, but then gave up and asked the teammates - what can be wrong? As Konstantin noticed, I forgot to switch my models to inference mode with <code>model.eval()</code> and <code>with torch.no_grad():</code>. As a result Dropout and BatchNorm were in training mode, which produced this strange behaviour: as I understood it, the active Dropout in m1 advanced the random state somehow, causing different results for m2 and m3. Anyway, I think that from now on I will never forget to put <code>model.eval()</code> and <code>with torch.no_grad():</code> in my code😃 </p>
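<p>For reference, a minimal sketch of the inference pattern that avoids this bug (the model, loader and sigmoid output are generic placeholders, not tied to any particular model above):</p>
<pre><code>import torch

def predict(model, loader, device):
    model.eval()            # Dropout/BatchNorm switch to inference mode; Dropout stops consuming the RNG
    preds = []
    with torch.no_grad():   # no autograd bookkeeping: less memory, faster
        for x_batch in loader:
            preds.append(torch.sigmoid(model(x_batch.to(device))).cpu())
    return torch.cat(preds).numpy()
</code></pre>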
|
Jigsaw Unintended Bias in Toxicity Classification
|
10th place late writeup[code available now]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The improvement from pretraining is significant in the beginning, and it was about 0.002. However, after we found more tricks, such as better loss function, the improvement was getting more and more insignificant.</p>
<p>Actually, we included numeric toxic label in our auxiliary loss, and it improved our score too.</p>
<p>First, many thanks to my great teammate <a href="https://www.kaggle.com/yangddd">yangDDD</a>. We entered this competition quite late, but we managed to harness the beasts.</p>
<p>github url: <a href="https://github.com/thuwyh/Jigsaw-Unintended-Bias-in-Toxicity-Classification">https://github.com/thuwyh/Jigsaw-Unintended-Bias-in-Toxicity-Classification</a></p>
<h2>1 Models</h2>
<h3>BERT</h3>
<p>In this competition, we used six BERT models. They are
1. BERT-large-uncased (further pretrained with competition data),
2. BERT-large-cased (further pretrained with competition data),
3. BERT-large-cased (trained with uncased data),
4. BERT-large-wwm-uncased (further pretrained with competition data),
5. BERT-large-wwm-cased,
6. BERT-base-uncased (further pretrained with competition data).</p>
<p>To add diversity to our models, we trained BERT-large-cased model with uncased data and count it as one of our models.</p>
<h3>GPT-2</h3>
<p>This model didn’t achieve a high score as we expected, but it still improved the score of our ensemble model.</p>
<h3>RNN</h3>
<ul>
<li>Glove 300d embedding</li>
<li>Crawl 300d embedding</li>
<li>Wordvec 100d embedding</li>
<li>char-level count vector and keyword-based features</li>
<li>3 RNN layers + attention pooling + classifier</li>
</ul>
<h2>2 Blending</h2>
<p>Blend the above 6 BERT models with weights: 0.25, 0.15, 0.2, 0.2, 0.12, 0.08.</p>
<p>Blend 13 RNN models with average.</p>
<p>Blend the Blended BERT, RNN and GPT-2 with weights: 0.88, 0.06, 0.06.</p>
<h2>3 Training method</h2>
<h3>Nvidia Apex and bucket sampler</h3>
<p>The typical training time of one epoch BERT-base model is about 2 hours with a single V100 GPU.</p>
<h3>MTL custom loss</h3>
<p>We used auxiliary labels including severe_toxicity, obscene, identity_attack, insult and threat. </p>
<p>For the original toxic label, we used 0.5 as threshold to get the binary label, and then calculated the binary cross entropy with our prediction as loss1. For auxiliary labels, we directly calculated the binary cross entropy with our predictions as loss2 (both labels and logits were float numbers in the range of [0,1]). The final loss = loss1 * 0.02 + loss2. We tried a lot of other loss functions to fit the metric, and this one worked best.</p>
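<p>A small sketch of this loss under the column layout described above (the tensor shapes and the column ordering are assumptions):</p>
<pre><code>import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def mtl_loss(logits, targets):
    # logits[:, 0]  : main toxic head; targets[:, 0] is the raw toxic score in [0, 1]
    # logits[:, 1:] : auxiliary heads; targets[:, 1:] are soft labels in [0, 1]
    loss1 = bce(logits[:, 0], (targets[:, 0] &gt;= 0.5).float())  # binarize the main label at 0.5
    loss2 = bce(logits[:, 1:], targets[:, 1:])
    return loss1 * 0.02 + loss2
</code></pre>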
<h3>Sentence clipping</h3>
<p>If a sentence is longer than Lmax, we concatenated the 0.25Lmax tokens in the beginning and 0.75Lmax tokens in the end, for the most important information often exists in the tail and the head. The max length Lmax was set to 220.</p>
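<p>A minimal sketch of this head/tail clipping on a token list (special tokens and tokenizer details are omitted):</p>
<pre><code>def clip_head_tail(tokens, max_len=220, head_ratio=0.25):
    """Keep the first 25% and the last 75% of the token budget when a sequence is too long."""
    if len(tokens) &lt;= max_len:
        return tokens
    n_head = int(max_len * head_ratio)
    n_tail = max_len - n_head
    return tokens[:n_head] + tokens[-n_tail:]
</code></pre>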
<h3>Different learning rate among the layers of BERT</h3>
<p>We set the learning rate of last layer of BERT encoder and the task-specific classifier to a base learning rate, such as 5e-5; and we reduced the learning rate by a ratio of 0.98 for the preceding BERT layers. As a result, the learning rate of the first layer of a BERT base model is 5e-5 * 0.98^12 . </p>
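<p>A sketch of how such layer-wise decay can be wired up with optimizer parameter groups, assuming a huggingface-style <code>BertForSequenceClassification</code> module (the exact exponent bookkeeping, e.g. whether the embeddings count as an extra layer, is an assumption):</p>
<pre><code>from torch.optim import Adam

def build_param_groups(model, base_lr=5e-5, decay=0.98):
    """One parameter group per encoder layer; the lr shrinks multiplicatively towards the bottom."""
    layers = list(model.bert.encoder.layer)           # 12 layers for BERT base, 24 for BERT large
    groups = [{'params': model.classifier.parameters(), 'lr': base_lr}]
    for depth, layer in enumerate(reversed(layers)):  # depth 0 = last encoder layer
        groups.append({'params': layer.parameters(), 'lr': base_lr * decay ** depth})
    groups.append({'params': model.bert.embeddings.parameters(),
                   'lr': base_lr * decay ** len(layers)})
    return groups

# optimizer = Adam(build_param_groups(model))  # model: a BertForSequenceClassification-style module
</code></pre>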
<h3>Gradient accumulation</h3>
<p>For BERT-base and GPT-2, we set batch size to 48 and accumulating step to 3; and for BERT-large the batch size and accumulating step were 12 and 20 respectively. </p>
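<p>A generic gradient-accumulation loop along these lines (model, loader, optimizer and loss are placeholders passed in by the caller):</p>
<pre><code>def train_with_accumulation(model, train_loader, optimizer, criterion, accumulation_steps=3):
    """Effective batch size = loader batch size * accumulation_steps (e.g. 48 * 3 as described above)."""
    model.train()
    optimizer.zero_grad()
    for step, (x_batch, y_batch) in enumerate(train_loader):
        loss = criterion(model(x_batch), y_batch) / accumulation_steps
        loss.backward()                              # gradients accumulate across micro-batches
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
</code></pre>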
<p>what is correction?</p>
<p>--
sorry, did not see the photo.
correction is a spell checking function</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
11th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Can't remember the exact score. For bert uncased the lb score from 0.936+ to 0.943.</p>
<ol>
<li>yes, pregenerate the data first then finetuning.</li>
<li>just jigsaw data.</li>
<li>In my experiments, the loss dropped fast at first. And I finetuned all models (base and large) for 2 epochs.</li>
</ol>
<p>I use the default parameters. 😑 Actually I didn't notice the loss... Just try the finetuned lm directly.</p>
<p>Thanks to kaggle and Jigsaw for this great competition, and I'm almost winning my Titan RTX card😔 . After teaming up with @tides, we learned a lot from each other and the LB score increased a lot by simply averaging our submissions. We figured out that the diversity of our methods led to the increase. I focused on language model finetuning while tides focused on a more reasonable loss function. </p>
<h1>Summary</h1>
<h1>toxu:</h1>
<p>I didn't spend much time on custom loss functions in this competition. I mainly focused on training more models with different kinds of language models and on making inference faster. </p>
<h2>Model</h2>
<p>Several RNN-based models with different embeddings (glove, fasttext, word2vec, para and their concatenation). Achieved LB 0.938+ by averaging all results.</p>
<p>Bert models:</p>
<ul>
<li>uncased bert base</li>
<li>cased bert base</li>
<li>uncased bert large</li>
<li>cased bert large</li>
</ul>
<p>All of the above BERT models used LM finetuning. I found LM finetuning helps a lot but is time consuming.</p>
<h2>things work for me</h2>
<ul>
<li>preprocessing (from <a href="https://www.kaggle.com/christofhenkel/how-to-preprocessing-for-glove-part1-eda">public kernel</a>)</li>
<li>lm finetune.</li>
<li>pad the sentences to different lengths and put sentences with similar lengths in the same batch (a sketch of this bucketing idea follows this list). It also helps us fit more models into the inference time.</li>
</ul>
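<p>A minimal, generic version of that bucketing idea - sort by length, batch neighbours together, and pad only up to the batch maximum (an illustrative sketch, not the exact sampler used):</p>
<pre><code>import numpy as np

def length_bucketed_batches(sequences, batch_size):
    """Yield batches of similar-length sequences, each padded only to its own batch max length."""
    order = np.argsort([len(s) for s in sequences])
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        batch = [sequences[i] for i in idx]
        max_len = max(len(s) for s in batch)
        padded = np.zeros((len(batch), max_len), dtype=np.int64)
        for row, seq in enumerate(batch):
            padded[row, :len(seq)] = seq
        yield idx, padded   # idx lets you restore the original order of the predictions
</code></pre>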
<h2>things not work for me</h2>
<ul>
<li>lexvec embedding</li>
<li>NumberBatch embedding</li>
<li>stacking based on previous jigsaw competition</li>
<li>statistical feature for sentences</li>
</ul>
<h1>tides:</h1>
<h2>Model summary</h2>
<ul>
<li>bilstm + cnn: 12 with different embeddings and losses (0.94163) + 6 with different tokenizers</li>
<li>3 bert uncased base (best 0.94346)</li>
<li>1 bert cased base (0.94184)</li>
<li>1 bert uncased large</li>
<li>1 gpt2</li>
</ul>
<p>And for bert, when using the bucketing technique I actually made a mistake which caused '[SEP]' to be trimmed. After I fixed the error and added another trick, the score improved locally from 0.9390 -> 0.9410 (95w train, 20w validation). But I didn't have time to update the models. </p>
<h2>Loss:</h2>
<p>I focused more on the loss function. After some experiments, I plotted the distribution of predictions for the 4 sample groups (sp, sn, bp, bn) and designed the loss to adjust these distributions. The designed losses are below. Validation for each part is not so solid, so be careful.</p>
<p>```python
class MyFocalLoss(nn.Module):
    def __init__(self, alpha=1, gamma=1, logits=True, reduce=True):
        super(MyFocalLoss, self).__init__()
        self.alpha = alpha
        self.gamma = gamma
        self.logits = logits
        self.reduce = reduce

    def forward(self, inputs, targets, weight, focal_weights, penalty_weights, focal_alpha, penalty_alpha):
        if self.logits:
            BCE_loss = nn.BCEWithLogitsLoss(weight=weight, reduce=False)(inputs, targets)
        else:
            BCE_loss = nn.BCELoss(weight=weight, reduce=False)(inputs, targets)
        pt = torch.exp(-BCE_loss)
        # The designed part
        bias_weights = (1.0 + targets.detach() - torch.sigmoid(inputs).detach())
        F_loss = (1 - pt) ** focal_weights * BCE_loss * (bias_weights ** penalty_weights) * focal_alpha * penalty_alpha
        if self.reduce:
            return torch.mean(F_loss)
        else:
            return F_loss
```</p>
<p>```python
IDENTITY_COLUMNS_WEIGHTS = {
    'black': 4.5,
    'white': 3.5,
    'homosexual_gay_or_lesbian': 5,
    'muslim': 2.5,
    'jewish': 0.2,
    'female': 1,
    'christian': -0.1,
    'male': 0.5,
    'psychiatric_or_mental_illness': 4.5
}

def get_sample_weight_and_scale(train):
    sample_weights = np.ones((len(train),))
    # sn (subgroup, negative) samples get more weight; the weights are adjusted to balance 'bnsp_auc' and 'bpsn_auc'
    for i, ic in enumerate(IDENTITY_COLUMNS_WEIGHTS):
        sample_weights += ((train['target'].values < 0.5) & (train[ic].fillna(0) >= 0.5)).astype(int) * IDENTITY_COLUMNS_WEIGHTS[ic]
    # The sample receiving more annotators is more important
    coeff1 = np.log10(train['toxicity_annotator_count'])
    # The sample receiving more identity-involved annotations is more important
    coeff2 = np.log(train['identity_annotator_count'] + 1) / np.log(4)
    sample_weights = sample_weights * coeff1 * ((coeff2 + 1) ** 0.3)
    sample_weights /= sample_weights.mean()
    return sample_weights.values
```</p>
<h2>Multiple targets:</h2>
<ul>
<li>MAIN_COLUMNS = ['target'], designed loss</li>
<li>AUX_COLUMNS = ['target', 'insult', 'threat', 'sexual_explicit', 'obscene'], original BCE loss</li>
<li>Another trick, not used in the final version:
AUX_COLUMNS2 = ['target1', 'target2', 'target3', 'target4', 'target5', 'target6'], original BCE loss
<code>python
train['target6'] = np.where(train['target'] >= 0.6, 1, 0)
train['target5'] = np.where(train['target'] >= 0.5, 1, 0)
train['target4'] = np.where(train['target'] >= 0.4, 1, 0)
train['target3'] = np.where(train['target'] >= 0.3, 1, 0)
train['target2'] = np.where(train['target'] >= 0.2, 1, 0)
train['target1'] = np.where(train['target'] >= 0.1, 1, 0)
</code></li>
</ul>
<h2>About blending:</h2>
<p>The score decreased when we added more bert models. I noticed the fraction of lstm-based models should not be too low, so I kept it at about 0.25.</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
14th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That's right.</p>
<p><a href="/httpwwwfszyc">@httpwwwfszyc</a>
Sorry for late reply. We have trained GBDT models individually.
For example, it can be handled with the following minor modifications.</p>
<p>```python
class CustomBertClassifier(nn.Module):
    ...
    def forward(self, x):
        ...
        return pooled_output, out

embeddings = np.zeros((len(X), 768))

for i, (x_batch, ids) in enumerate(loader):
    x_batch = trim_tensors([x_batch])[0]
    with torch.no_grad():
        emb, pred = model(x_batch.to(device), attention_mask=(x_batch &gt; 0).to(device), labels=None)
        ...
        embeddings[i * batch_size: (i + 1) * batch_size] = emb.detach().cpu()

# train/predict GBDT models with `embeddings` as feature
...
```</p>
<ol>
<li>Simply remove last linear layer on each BERT model.</li>
<li>Improve score about 0.0003 ~ 0.0005 on private LB.</li>
</ol>
<blockquote>
<p>What was the LB score for the GBDT models?</p>
</blockquote>
<p>About 0.9437~0.9439 on public LB. They are close to our BERT models, but the rank correlation is not very high.</p>
<blockquote>
<p>Were rank averages better than simple averages?</p>
</blockquote>
<p>We tried only rank averaging. Because we checked good LB score using rank averaging at an early stage.</p>
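<p>Rank averaging replaces each model's raw scores with their ranks before averaging, which makes models with very different output scales comparable. A small sketch of a weighted rank average (the weights here are illustrative):</p>
<pre><code>import numpy as np
from scipy.stats import rankdata

def weighted_rank_average(pred_matrix, weights):
    """pred_matrix: shape (n_samples, n_models); returns a blended score in (0, 1]."""
    ranks = np.column_stack([rankdata(pred_matrix[:, j]) for j in range(pred_matrix.shape[1])])
    ranks /= len(pred_matrix)                   # normalise ranks to (0, 1]
    weights = np.asarray(weights, dtype=float)
    return ranks.dot(weights) / weights.sum()
</code></pre>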
<p>First of all, We would like to thank the competition hosts for such a great competition. Also, congratulate all winers and participants.</p>
<h3>Models</h3>
<p>Our solution is the weighted rank averaging of the following models. We didn't try XLNet because there was not enough time for us...</p>
<ul>
<li>RNN / lstm-gru-Attention</li>
<li>BERT-Base uncased</li>
<li>BERT-Large uncased</li>
<li>GPT-2</li>
<li>LightGBM w/ BERT-Base uncased 768d feature</li>
<li>LightGBM w/ BERT-Large uncased 1024d feature</li>
<li>CatBoost w/ BERT-Base uncased 768d feature</li>
<li>CatBoost w/ BERT-Large uncased 1024d feature</li>
</ul>
<h3>Preprocessing</h3>
<ul>
<li><p>RNN</p>
<ul><li>spell correct to match more words to embedding</li>
<li>do not apply contraction mapping</li></ul></li>
<li><p>BERT/GPT-2</p>
<ul><li>apply spell correction and contraction mapping</li></ul></li>
</ul>
<h3>Embedding</h3>
<ul>
<li>RNN
<ul><li>fastText & trained fastText (300d, trained on the competition dataset)</li>
<li>Glove & trained fastText</li></ul></li>
</ul>
<h3>Loss function</h3>
<p>We used weighted FocalLoss and <a href="https://www.kaggle.com/matsuik/subgroup-negative-weighting">subgroup negative weighting</a> for all NN models.</p>
<h3>What didn't work for us</h3>
<ul>
<li>data augmentation (translation, past competition data)</li>
<li>more spell correction for toxic words, about 2,000 patterns (we didn't use most of them)</li>
<li>oof for indicator feature (completion, residual)</li>
<li>trained RNN with part of sentences (first/last 50 characters)</li>
<li>use statistical features for BERT, GPT-2 and GBDT</li>
</ul>
<p>That's correct.</p>
<p>Just return <code>pooled_output</code> and use them to train and predict. There are not any special processing.</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
16th @ Jigsaw Unintended Bias
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1 epoch takes about 7 hours of kaggle kernel.</p>
<p>The accuracies of pretraining were like this with 90% dev and 10% val.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2117657%2Fe3ab4083ca7ed1d156f267ae4f37271f%2Fdev%20acc%20and%20val%20acc.svg?generation=1563863816997270&alt=media" alt=""></p>
<p>Here is the plot of the Jigsaw AUCs of finetuned models based on the pretrained ones.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2117657%2F1b9a46fbd7357282af80f4c897187b01%2Fchart.svg?generation=1563864010378403&alt=media" alt=""></p>
<p>I used the model of epoch 18 for the sake of diversity to get better ensemble. 👍 </p>
<p>Thank you for Jigsaw and Kaggle to host such an awesome clean NLP competition. I really enjoyed this new type of kernel competition although it was a little controversial at first.</p>
<p>I have fully committed for 3 months, and I'm so happy to barely manage to get a gold medal. </p>
<p><a href="https://www.kaggle.com/matsuik/fork-of-pred-lstm-gpt2-bu12c45-lu0c1-ftpe18-7c1c82">This is my kernel for final private LB</a></p>
<ul>
<li>Ensemble</li>
<li>Loss</li>
<li>Sentence level features</li>
<li>Preprocessing</li>
<li>Dynamic minibatch size</li>
<li>Models
<ul><li>BERT models</li>
<li>FastText-LSTM-GRU</li>
<li>GPT2</li>
<li>Further pretraining of BERT base uncased</li></ul></li>
<li>Things didn't work</li>
</ul>
<h2>Ensemble</h2>
<p>My model is an ensemble of
- fasttext-lstm-gru
- GPT2
- BERT base uncased with further pretraining on competition data
- two BERT base uncased
- two BERT base cased
- BERT large uncased
- BERT large cased</p>
<h2>Loss</h2>
<p>I only weighted subgroup negative samples.</p>
<p>```python
TOXICITY_COLUMN = 'target'
identity_columns = [
    'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
    'muslim', 'black', 'white', 'psychiatric_or_mental_illness']

subgroup_bool_train = train[identity_columns].fillna(0)>=0.5
toxic_bool_train = train[TOXICITY_COLUMN].fillna(0)>=0.5
subgroup_negative_mask = subgroup_bool_train.values.sum(axis=1).astype(bool) & ~toxic_bool_train

sample_weight = np.ones((y_train.shape[0],))
sample_weight += 1.5 * subgroup_negative_mask
```</p>
<p>I jointly optimized this loss for the other labels as well, like
<code>python
['target', 'severe_toxicity', 'obscene', 'identity_attack', 'insult', 'threat']
</code></p>
<h2>Sentence level features</h2>
<p>I tried a lot of sentence-level features with simple RNN cross validation.
In particular, word_count, num_unique and num_unique_ratio performed well. I concatenated these features before the last layer of each model.</p>
<h2>Preprocessing</h2>
<p>I removed punctuations and digits for some models. No preprocess for other models.</p>
<h2>Dynamic minibatch size</h2>
<p>This is so important to
- speed up training and inference
- ensure all batches with various sequence lengths fit in memory</p>
<p>For example, the batch size ranges from 16 to about 250 for BERT base. </p>
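<p>One way to realise such a dynamic batch size is to fix a token budget per batch and let the number of samples vary with sequence length; a sketch under that assumption (the budget value is illustrative, and this is not necessarily the exact scheme used):</p>
<pre><code>def token_budget_batches(sorted_sequences, max_tokens=4000, max_bs=250):
    """Group length-sorted sequences so that batch_size * batch_max_len stays under a token budget."""
    batches, current = [], []
    for seq in sorted_sequences:
        candidate = current + [seq]
        if current and (len(candidate) * max(len(s) for s in candidate) &gt; max_tokens
                        or len(candidate) &gt; max_bs):
            batches.append(current)
            current = [seq]
        else:
            current = candidate
    if current:
        batches.append(current)
    return batches
</code></pre>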
<h2>Models</h2>
<h3>BERT models</h3>
<p>BERT models are like concatenation of sentence-level features and BERT output, then residual dense layer, then final output.</p>
<ul>
<li>max length: 220</li>
<li>0.3 dropout for the added residual dense layer, 0.1 for others</li>
<li>devide train data 90% to dev, 10% to val </li>
<li>Adam</li>
<li>linearly increase learning rate from 1e-6 to 1e-5 in the first 10% of dev data.</li>
<li>2 epochs</li>
<li>gradient accumulation over 8 batches for BERT Large. </li>
</ul>
<h3>FastText-LSTM-GRU</h3>
<ul>
<li><p>reduce the number of OOV of FastText embedding in training and test times <a href="https://www.kaggle.com/matsuik/fork-of-pred-lstm-gpt2-bu12c45-lu0c1-ftpe18-7c1c82#LSTM-GRU-bs-256-crawl-ROOV-TEST">like this.</a></p></li>
<li><p>predict missing identities with LSTM-GRU and used predicted ones to weight samples.</p></li>
<li><p>ensemble models from 10 folds and 15 epochs.</p></li>
</ul>
<h3>GPT2</h3>
<p>I took max pooling of the final layer of GPT2.</p>
<h3>Further pretraining of BERT base uncased</h3>
<p>This further pretraining means BERT's masked LM training on this Jigsaw competition dataset.</p>
<p>I pretrained BERT for 18 epochs and finetuned it.</p>
<h2>Things didn't work</h2>
<ul>
<li>training on data of former toxic competition</li>
<li>translate back with textblob</li>
<li>XLNet (I just didn't have time to tune it)</li>
<li>cosine annealing</li>
<li>SGD, Adam with proper weight decay</li>
<li>tf-idf</li>
<li>NLTK sentiment features</li>
<li>subsampling, focal loss</li>
</ul>
<h2>Some feelings</h2>
<p>I am so impressed to see that other top rankers' solutions are really simple. My solution is unnecessarily complex...</p>
<p>I am proud of my first solo gold. At the same time, honestly soloing kaggle was a bad experience because I was just lonely and couldn't learn anything from my teammates.</p>
<p>I have only one GTX 1080 Ti, so it was hard to train huge models like BERT large. It was exciting to tweak my code and params to make it work.</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
1ST PLACE SOLUTION
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We presented our solution on ML Training Conference which will be available <strong>on 10 August in English.</strong> You can see slides with <a href="https://docs.google.com/presentation/d/1Km1C8jKpofoRRnWRMka-SGrfRNd8jlp6TuasqVKY91E/edit">solution here</a></p>
<p><strong>Also you can read quick solution here</strong></p>
<p><strong>1.</strong> Custom head for BERT, XLNet and GPT2
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1192776%2F1d4e0a1e3b47c88b02aae5b43c1bc360%2Fhead.PNG?generation=1565469589665468&alt=media" alt=""></p>
<p><strong>2.</strong> Bucket Sequencing Collator</p>
<p><strong>3.</strong> Auxiliary tasks for models
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1192776%2F957bfdf462369a6eca31bf4ef4e7942b%2Fbert_aux.PNG?generation=1565469511607218&alt=media" alt=""></p>
<p><strong>4.</strong> Custom mimic loss</p>
<p>```python
def compute(predictions, labels, subgroups, power=5.0, score_function=F.binary_cross_entropy_with_logits):
    subgroup_positive_mask = subgroups & (labels.unsqueeze(-1) >= 0.5)
    subgroup_negative_mask = subgroups & ~(labels.unsqueeze(-1) >= 0.5)
    background_positive_mask = ~subgroups & (labels.unsqueeze(-1) >= 0.5)
    background_negative_mask = ~subgroups & ~(labels.unsqueeze(-1) >= 0.5)

    bpsn_mask = (background_positive_mask | subgroup_negative_mask).float()
    bnsp_mask = (background_negative_mask | subgroup_positive_mask).float()
    subgroups = subgroups.float()
    bce = score_function(predictions, labels, reduction="none")
    sb = (bce.unsqueeze(-1) * subgroups).sum(0).div(subgroups.sum(0).clamp(1.)).pow(power).mean().pow(1/power)
    bpsn = (bce.unsqueeze(-1) * bpsn_mask).sum(0).div(bpsn_mask.sum(0).clamp(1.)).pow(power).mean().pow(1/power)
    bnsp = (bce.unsqueeze(-1) * bnsp_mask).sum(0).div(bnsp_mask.sum(0).clamp(1.)).pow(power).mean().pow(1/power)
    loss = (bce.mean() + sb + bpsn + bnsp) / 4
    return loss
```</p>
<p><strong>5.</strong> SWA and checkpoint ensemble
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1192776%2Fe87d734f257318f1586ec4bcd035ad4f%2Fswa.PNG?generation=1565469689119703&alt=media" alt=""></p>
<p><strong>6.</strong> Rank average ensemble of 2x XLNet, 2x BERT and GPT2 medium</p>
<p><strong>Many thanks to my teammates <a href="/sh73ch">@sh73ch</a> and <a href="/nprokoptsev">@nprokoptsev</a>, Kaggle for this interesting competition and community for new ideas and knowledge sharing</strong></p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
27th Public Solution (kernels only)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ol>
<li>step: 5 models x 5 folds + 4 models x4 folds = 41 kernels</li>
<li>use 5 resp. 4 models to pseudolabel and train a new model per fold
-> 1 model x 5 folds + 1 model x 4 folds = 9 models (in 9 kernels). The method of pseudo labelling for "compressing the knowledge" of several models into one is called knowledge distillation, after the name of the respective paper. It is explained here <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/97135#latest-561117">https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/97135#latest-561117</a> </li>
<li>upload 9 models as dataset and do 1 kernel which does the inference on the test set.</li>
</ol>
<p>where I defined the weights iteratively via CV loss</p>
<p>Also, since I only trained 1 epoch, averaging 5 folds with different orders of the training data gave a good boost</p>
<p>Depends on the outcome of the private LB. My kernels involve a lot of tricks for speeding up training. I might need them for the next NLP competition... Although it might save some trees if everybody could train 3x faster with the tricks I used ;)</p>
<blockquote>
<p>compressed model for each fold</p>
</blockquote>
<p>yes, I needed some tricks to get 9x BERT large inference in final kernel</p>
<blockquote>
<p>If you had more time would you have done more models per compression? </p>
</blockquote>
<p>Sure. The problem was I was solo and only had 4 kaggle kernels (running 24/7). Hyperparameter and loss tuning took ages. Another problem with my setup was that folds and logits need to match. You are then e.g. not able to change from a 7-logit to a 9-logit loss. Or you have massive problems when the kaggle team decides to change the backend one week before competition end and the folds do not match anymore, because the underlying seeds are suddenly different :/</p>
<p>Since I think LB is not reliable because it only has 90k of data , I wanted oof predictions to measure performance. As I implemented the knowledge distillation it goes as following:</p>
<p>for each fold:
1. train a bunch of models to predict y. y in my case are 7 logits on which I measure the loss with BCE
2. predict your training data <code>shape = (n_models,len(train),7)</code>
3. blend/stack your training predictions to get pseudolabels <code>shape = (len(train),7)</code> = <code>y.shape</code>
4. exchange original y with your new pseudolabeled y
5. retrain a single model</p>
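<p>In code, the distillation step for one fold boils down to something like this sketch (the blending weights, the 7-logit layout and the training function are placeholders following the description above):</p>
<pre><code>import numpy as np

def distill_fold(train_preds, blend_weights, train_texts, train_single_model):
    """train_preds: (n_models, len(train), 7) predictions of the teacher ensemble on this fold's training data."""
    weights = np.asarray(blend_weights, dtype=float).reshape(-1, 1, 1)
    pseudo_y = (train_preds * weights).sum(axis=0) / weights.sum()  # (len(train), 7) soft targets
    # retrain one student model on the soft targets instead of the original labels
    return train_single_model(train_texts, pseudo_y)
</code></pre>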
<p>PS.
I first used oof predictions as new y, but realized that this creates leakage</p>
<p>PPS.
I know performance would have been way better if I had used the penultimate activations of the models (e.g. 768 in BERT base), but then those activations need to match for all models in order to ensemble, and it would have been messy to save and load 100x larger data than if you only use 7 logits. I also made positive experiments using external data (texts from the first toxic comments challenge) as additional data to pseudolabel, as you give the compressed model more info on how it should behave, but had no time to implement it.</p>
<p>I used my own loss function.</p>
<pre><code>def custom_loss(data, targets):
    bce_loss_1 = nn.BCEWithLogitsLoss(weight=targets[:, 1:2])(data[:, :1], targets[:, :1])
    bce_loss_2 = nn.BCEWithLogitsLoss()(data[:, 1:2], targets[:, 2:3])
    bce_loss_3 = nn.BCEWithLogitsLoss()(data[:, 5:6], targets[:, 6:7])
    bce_loss_4 = nn.BCEWithLogitsLoss()(data[:, 4:5], targets[:, 5:6])
    bce_loss_5 = nn.BCEWithLogitsLoss()(data[:, 6:7], targets[:, 7:8])
    return (bce_loss_1 * 0.15*loss_weight) + bce_loss_2 + bce_loss_3 + 0.4*(bce_loss_4 + bce_loss_5)
</code></pre>
<p>Have you read the original <a href="https://arxiv.org/abs/1503.02531">paper</a> ? Its from this Hinton guy.</p>
<blockquote>
<p>so basically it is pseudo labeling on training data and then using a simpler model on top, right?</p>
</blockquote>
<p>right</p>
<blockquote>
<p>Did you also add test data with pseudo labels for the simpler model?</p>
</blockquote>
<p>yes, I tried. It helped a bit, but at the end I did not implement. </p>
<p>BCE_loss with logits. I also tried rmse, but BCE was slightly better</p>
<p>no augmentation. For training the models I used a loss that takes 7 logits. So for knowledge distillation I let the new model predict those 7 logits. I lost about 0.001 via compression, so if the ensemble scored 0.946, it scored 0.945 after compression. I am pretty sure there are more efficient ways to compress, but all models and folds need to match and you need to plan carefully beforehand. Unfortunately the competition turned out to be more dynamic than I thought, and I was not able to optimize the compression</p>
<p>I trained the following in kaggle kernels:</p>
<p>MODEL 1
5fold BERT large cased
5fold BERT large uncased
5fold XLNET
5fold GPT2
5fold LSTM</p>
<p>-> compressed into 5fold BERT large cased using knowledge distillation ~LB 0.9458</p>
<p>MODEL 2
4fold BERT large cased
4fold BERT large uncased
4fold XLNET
4fold GPT2</p>
<p>-> compressed into 4fold BERT large uncased using knowledge distillation ~LB 0.9453</p>
<p>ran 9 times (5fold + 4fold) inference in final kernel, average over folds and blended results -> ~LB 0.946</p>
<p>benefit is just timing. I compressed from 41 models into 9</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
29th LB And LB 0.94376 for single Bert-Base
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for my teammates <a href="/zongbingwang">@zongbingwang</a> <a href="/wxzcyy">@wxzcyy</a> </p>
<p>0.1 * mean(6 LSTM) + 0.1 * mean(3 GPT-2) + 0.8 * mean(2 BERT-base-uncased + 1 BERT-large-uncased) = 0.94599</p>
<p>bert-base code <a href="https://www.kaggle.com/hanyaopeng/single-bert-base-with-0-94376">https://www.kaggle.com/hanyaopeng/single-bert-base-with-0-94376</a></p>
<p>Some tricks used with BERT:</p>
<h3>1. 2 epochs with BERT base</h3>
<p>lr in epoch 1 is 2e-5 </p>
<p>lr in epoch 2 is 1e-5</p>
<h3>2. Weights</h3>
<p>```python
coll = ['black', 'white', 'homosexual_gay_or_lesbian', 'muslim']

identity_columns = [
    'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
    'muslim', 'black', 'white', 'psychiatric_or_mental_illness']
weights = np.ones((len(X),)) / 4

# Subgroup identity_columns
weights += (train_df[identity_columns].fillna(0).values>=0.5).sum(axis=1).astype(bool).astype(np.int) / 4

# Background Positive, Subgroup Negative
weights += (( (train_df['target'].values>=0.5).astype(bool).astype(np.int) +
    (train_df[identity_columns].fillna(0).values<0.5).sum(axis=1).astype(bool).astype(np.int) ) > 1 ).astype(bool).astype(np.int) / 4

# Background Negative, Subgroup Positive
weights += (( (train_df['target'].values<0.5).astype(bool).astype(np.int) +
    (train_df[identity_columns].fillna(0).values>=0.5).sum(axis=1).astype(bool).astype(np.int) ) > 1 ).astype(bool).astype(np.int) / 4

weights += (( (train_df['target'].values>=0.5).astype(bool).astype(np.int) +
    (train_df[coll].fillna(0).values<0.5).sum(axis=1).astype(bool).astype(np.int) ) > 1 ).astype(bool).astype(np.int) / 8

weights += (( (train_df['target'].values<0.5).astype(bool).astype(np.int) +
    (train_df[coll].fillna(0).values>=0.5).sum(axis=1).astype(bool).astype(np.int) ) > 1 ).astype(bool).astype(np.int) / 8

loss_weight = 1.0 / weights.mean()
weights = weights.reshape(-1, 1)

def custom_loss(data, targets):
    ''' Define custom loss function for weighted BCE on 'target' column '''
    bce_loss_1 = nn.BCEWithLogitsLoss(weight=targets[:, 0:1])(data[:, 0:1], targets[:, 1:2])
    bce_loss_2 = nn.BCEWithLogitsLoss()(data[:, 1:], targets[:, 2:])
    return (bce_loss_1 * loss_weight) + bce_loss_2
```</p>
<h3>3. When the sentence is too long, take tokens from its head and tail</h3>
<p><code>python
def convert_lines(example, max_seq_length, tokenizer):
max_seq_length -= 2
all_tokens = []
longer = 0
for text in tqdm_notebook(example):
tokens_a = tokenizer.tokenize(text)
if len(tokens_a) &gt; max_seq_length:
tokens_a = tokens_a[:max_seq_length]
longer += 1
one_token = tokenizer.convert_tokens_to_ids(["[CLS]"] + tokens_a + ["[SEP]"]) + [0] * \
(max_seq_length - len(tokens_a))
all_tokens.append(one_token)
print(longer)
return np.array(all_tokens)
</code></p>
<h3>4. LenMatchBatchSampler / trim_tensors</h3>
<p>code <a href="https://www.kaggle.com/hanyaopeng/single-bert-base-with-0-94376">https://www.kaggle.com/hanyaopeng/single-bert-base-with-0-94376</a></p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
2nd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/liyushihaoren2">@liyushihaoren2</a> I'm sorry to reply late.
It improves about 0.001 (0.942 -> 0.9432 in my local CV). :D</p>
<p><a href="/yimingcaiquantum">@yimingcaiquantum</a> I'm sorry to reply late. </p>
<p>ident_loss was used in the identity subgroup classifier. This is not a toxic predictor.</p>
<p>The subgroup's label is multiple labels.
That is, it is a multi-label classification problem.
(It is different from a typical multi-class classification with one label.)</p>
<p>So I used a <code>binary_cross_entropy.</code></p>
<p>Please reference to below link.
<a href="https://stats.stackexchange.com/questions/207794/what-loss-function-for-multi-class-multi-label-classification-tasks-in-neural-n">https://stats.stackexchange.com/questions/207794/what-loss-function-for-multi-class-multi-label-classification-tasks-in-neural-n</a></p>
<p>The purpose of the subgroup classifier is to classify which subgroup the sentence belongs to.
I believe that is essentially similar to the auxiliary task of the 1st place solution. </p>
<p>I think it is a regularization to prevent overfitting to toxic prediction.
Because they must satisfy both toxic_loss and ident_loss.</p>
<p>I'm sorry to reply late. I don't know why it actually works... <br>
I think it just a hyper-parameter I found.</p>
<p>Could you explain more about your question? :p
<code>Then next sentence prediction is closed in this process ?</code></p>
<p>I was inspired by the winner of the Kaggle Taxi competition
<a href="http://blog.kaggle.com/2015/07/27/taxi-trajectory-winners-interview-1st-place-team-%F0%9F%9A%95/">http://blog.kaggle.com/2015/07/27/taxi-trajectory-winners-interview-1st-place-team-%F0%9F%9A%95/</a></p>
<p><code>
We initially tried to predict the output position x, y directly, but we actually obtain significantly better results with another approach that includes a bit of pre-processing. More precisely, we first used a mean-shift clustering algorithm on the destinations of all the training trajectories to obtain around 3,392 popular destination points. The penultimate layer of our MLP is a softmax that predicts the probabilities of each of those 3,392 points to be the destination of the taxi. As the task requires to predict a single destination point, we then calculate the mean of all our 3,392 targets, each weighted by the probability returned by the softmax layer.
</code></p>
<p>I think the maximum of y_pred is 1.0. So I did not normalize the prediction value. You can simply check it as shown below
<code>
import torch
cls_prob = torch.softmax(torch.FloatTensor([10000.0, 1000, 10, 0, 0, 0, 0, 0, 0, 0, 0]), 0)
cls_vals = torch.arange(1.0, -0.1, -0.1)
print(sum([p*v for p, v in zip(cls_prob, cls_vals) ]))
</code>
tensor(1.)</p>
<ul>
<li>cls_prob also has a probability of 0.0.</li>
</ul>
<p>It boosted my local CV ~0.0005. (BERT-base, BERT-large)
I used 3 epochs and 0.2 dropout in the LM finetuning stage.
Sometimes failed to converge on the BERT-large model. Then I changed the seed and tried again.</p>
<p>Of course. :D</p>
<p>bert-base : 310 secs
bert-large : 1030 secs
XLNet about : 1405 secs
Opengpt: 348 secs
Opengpt2: 1067 secs
total : 6294 secs</p>
<p>I used this trick.</p>
<p><code>
test_df['length'] = test_df['comment_text'].str.len()
test_df = test_df.sort_values('length')
</code></p>
<p><code>
max_len = length.max() # the maximum sequence length within the batch
input_ids = input_ids[:, :max_len]
segment_ids = segment_ids[:, :max_len]
input_mask = input_mask[:, :max_len]
</code></p>
<p>Thank you to Jigsaw and Kaggle for hosting an interesting and enjoyable competition.
And congratulations to the winners and everyone who reached the finish line!</p>
<h2>Summary of my approach</h2>
<ul>
<li>No text preprocessing</li>
<li>BERT language model finetuning</li>
<li>Models</li>
<li>Loss</li>
<li>Predicting toxic score</li>
<li>Blending strategy</li>
</ul>
<h2>BERT language model finetuning</h2>
<p>I tried BERT language model finetuning on data from this competition.
<a href="https://github.com/huggingface/pytorch-transformers/tree/xlnet/examples/lm_finetuning">https://github.com/huggingface/pytorch-transformers/tree/xlnet/examples/lm_finetuning</a></p>
<p>Then I changed the values of segment_ids from [0] to [1] in the BERT finetuning stage.
<code>segment_ids = [1]*len(tokens) + padding</code></p>
<h2>Models</h2>
<p>Here's the list of my models.
- BERT-base-uncased, BERT-base-cased, BERT-large-uncased, BERT-large-cased
- OpenGPT, OpenGPT2-small, OpenGPT2-medium
- XLNet-large-cased
- Max length: 256</p>
<h2>Loss</h2>
<p>```python
identity_columns = [
    'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
    'muslim', 'black', 'white', 'psychiatric_or_mental_illness']

subgroup_bool = (train_df[identity_columns].fillna(0) >= 0.5).sum(axis=1) > 0
positive_bool = train_df['target'] >= 0.5

# Overall
weights = np.ones(len(train_df)) * 0.25

# Background Positive and Subgroup Negative
weights[((~subgroup_bool) & (positive_bool)) | ((subgroup_bool) & (~positive_bool))] += 0.25

# Subgroup
weights[(subgroup_bool)] += 0.25
```</p>
<p>```python
# toxic_logits size: (batch_size x 11)
# ident_logits size: (batch_size x 9)
toxic_logits, ident_logits = model(input_ids, segment_ids, input_mask)
toxic_loss = (F.cross_entropy(toxic_logits, toxic_target, reduction='none') * weight).mean()
ident_loss = F.binary_cross_entropy_with_logits(ident_logits, ident_target)
loss = 0.75 * toxic_loss + 0.25 * ident_loss
```</p>
<h2>Predicting toxic score</h2>
<p>I split the toxic score into 11 classes as below (with a <code>break</code> so that each score keeps the first matching class).
<code>
cls_vals = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
for cls, v in enumerate(cls_vals):
    if target &gt;= v:
        target = cls
        break
</code>
So the shape of toxic_logits is (batch x 11). And the score was calculated as follows.
<code>
toxic_logits = toxic_logits[0] # the probability distribution of 11 classes
y_pred = sum([p*v for p, v in zip(toxic_logits, cls_vals)])
</code></p>
<h2>Blending strategy</h2>
<p>Power 3.5 weighted sum
<code>
p = 3.5
bert-base-cased**p * 0.25 +
bert-base-uncased**p * 0.25 +
bert-large-cased**p * 1.0 +
bert-large-uncased**p * 1.0 +
opengpt2**p * 1.0 +
opengpt2-medium**p * 2.0 +
openai-gpt**p *0.5 +
xlnet-large**p *1.0
</code></p>
<p>I referenced to the link below.
<a href="https://medium.com/data-design/reaching-the-depths-of-power-geometric-ensembling-when-targeting-the-auc-metric-2f356ea3250e">https://medium.com/data-design/reaching-the-depths-of-power-geometric-ensembling-when-targeting-the-auc-metric-2f356ea3250e</a></p>
<h2>XLNet</h2>
<p><code>tokens = tokens + ['&lt;sep&gt;'] + ['&lt;cls&gt;']</code>
And I take the last time step hidden_state.</p>
<p>I added answers to your questions to the above solution text.</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
32nd Place Solution Summary
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution is the weighted averaging of the following 6 models. The weights were calculated by Scipy Nelder-Mead for the best cv score. </p>
<h2>Models and Weights</h2>
<ol>
<li>bert base uncased</li>
<li>rnn</li>
<li>rnn (data augmented by translation)</li>
<li>rnn (data augmented by old toxic competition)</li>
<li>bert base cased</li>
<li>bert base multilingual cased</li>
</ol>
<p>w = [0.39193463, 0.14464678, 0.03892303, 0.06871146, 0.317778, 0.038006]</p>
<h2>Missing Value Complementation</h2>
<p>The distinctive point of our solution compared with other teams is the missing-value complementation of identity_columns. </p>
<p>A missing value in identity_columns doesn't mean the comment_text lacks the identity attributes; it means there were no annotators for those attributes. So missing values shouldn't be zero-filled, because a comment_text with missing values may still have some identity attributes.</p>
<p>We filled missing values with values estimated by an RNN (input: comment_text, output: identity_columns), not with zeros. The RNN model was trained on data which doesn't have any missing values.</p>
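<p>A sketch of that complementation step, assuming an already-trained identity model with a <code>predict</code> method returning one column per identity (names and shapes are illustrative):</p>
<pre><code>identity_columns = [
    'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
    'muslim', 'black', 'white', 'psychiatric_or_mental_illness']

def fill_missing_identities(train_df, identity_model):
    """Replace NaN identity annotations with the RNN's predictions instead of zeros."""
    filled = train_df.copy()
    preds = identity_model.predict(filled['comment_text'])  # shape (len(train), 9)
    for j, col in enumerate(identity_columns):
        mask = filled[col].isna()
        filled.loc[mask, col] = preds[mask.values, j]
    return filled
</code></pre>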
|
Jigsaw Unintended Bias in Toxicity Classification
|
3rd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We tried data augmentation mentioned <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038">here</a>, but this just worsened our score.</p>
<p>Detail:
- via es, fr, and de
- only <code>target_value &gt; 0</code> samples because of time restriction
- avoid 429 error by <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/95150#549656">this technique</a></p>
<p>First of all, I would like to express my sincere gratitude to everyone, especially to my great teammates <a href="https://www.kaggle.com/flowlight" target="_blank">flowlight</a>, <a href="https://www.kaggle.com/hiromoon166" target="_blank">hiromu</a>, <a href="https://www.kaggle.com/ryunosukeishizaki" target="_blank">Draconda</a>, and <a href="https://www.kaggle.com/dafchen" target="_blank">从心</a>!!</p>
<p>The following is a brief summary of our solution.</p>
<ul>
<li>LSTM models</li>
<li>BERT & GPT2 models</li>
<li>Blending strategy</li>
<li>What didn't work for us</li>
</ul>
<h2>LSTM models (LB 0.94061)</h2>
<ul>
<li>Blending of 4 models: 2 LSTM + GRU models, LSTM + Capsule + Attention model, LSTM + Conv model.</li>
<li>5 fold, 6 epoch training.</li>
<li>EMA & checkpoint ensemble.</li>
<li>Pseudo labeling.</li>
<li>Projection + sum of Glove and Fasttext embeddings. <a href="https://arxiv.org/abs/1804.07983" target="_blank">https://arxiv.org/abs/1804.07983</a></li>
<li>Target weight multiplied by log(toxicity_annotator_count + 2).</li>
</ul>
<p>You can check my LSTM code <a href="https://www.kaggle.com/sakami/single-lstm-pseudo-label" target="_blank">here</a>. </p>
<p><strong>Target weight</strong></p>
<p>In LSTM part, we use target weight as follows:</p>
<pre><code>weights = np.ones((len(train),))
weights += train[identity_columns].fillna(0).values.sum(axis=1) * 3
weights += train['target'].values * 8
weights /= weights.max()
</code></pre>
<ol>
<li>samples with <code>identity column</code> >= 0.5 are evaluated twice </li>
<li><code>target</code> value has inverse correlation with their frequency and performed as <code>class_weight</code>.</li>
</ol>
<h2>BERT & GPT2 models</h2>
<ul>
<li>Head + tail tokens. <a href="https://arxiv.org/abs/1905.05583" target="_blank">https://arxiv.org/abs/1905.05583</a></li>
<li>Dynamic learning rate decay. <a href="https://arxiv.org/abs/1905.05583" target="_blank">https://arxiv.org/abs/1905.05583</a></li>
<li>Further fine-tuning with <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge#description" target="_blank">old toxic competition</a> data.</li>
</ul>
<p>Blending of the following:</p>
<ul>
<li>BERT base uncased, cased (LB 0.94398, 0.94370)</li>
<li>BERT Large uncased, cased (LB 0.94443, 0.94520)</li>
<li>BERT WWM uncased, cased (LB 0.94377, 0.94346)</li>
<li>GPT2 (LB 0.94136)</li>
</ul>
<p><strong>Negative down sampling</strong></p>
<p>We use negative down sampling, which boosted our score. <br>
On the 1st epoch, we cut off 50% of the negative samples whose target value, aux_target values, and subgroup values are all zero. On the 2nd epoch, we use the dropped half of the negative samples plus the positive samples. <br>
This improved our score from 0.94239 to 0.94353!!! (BERT base uncased)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133818%2F056a0ded2d250df39604847a500d339b%2Fnegative_down_sampling.png?generation=1661764423281002&alt=media" alt=""></p>
<h2>Blending strategy</h2>
<p>We use <a href="https://github.com/pfnet/optuna" target="_blank">optuna</a> to determine our blending weight because it is difficult to optimize metrics for this competition directly.</p>
<p>We made 200k validation predictions, optimized the weights on about half of them and checked the score on the rest. After repeating this step 10 times, we keep only the weight sets with a relatively small deviation between the score on the tuning half and the score on the held-out half, and then take their average.</p>
<p>This works well. Our public score: 0.94705 -> private score: 0.94707.</p>
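<p>A minimal sketch of the optuna part (the metric function stands in for the competition's bias-weighted AUC, and the uniform search space is an assumption):</p>
<pre><code>import numpy as np
import optuna

def optimize_blend(preds, y_true, metric, n_trials=200):
    """preds: (n_samples, n_models) validation predictions; metric(y_true, blend) returns the score to maximize."""
    def objective(trial):
        w = np.array([trial.suggest_uniform(f'w{i}', 0.0, 1.0) for i in range(preds.shape[1])])
        blend = preds.dot(w) / (w.sum() + 1e-9)
        return metric(y_true, blend)

    study = optuna.create_study(direction='maximize')
    study.optimize(objective, n_trials=n_trials)
    return study.best_params
</code></pre>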
<h2>What didn't work for us</h2>
<ul>
<li>Simple pre-training. <a href="https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/lm_finetuning/simple_lm_finetuning.py" target="_blank">https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/lm_finetuning/simple_lm_finetuning.py</a></li>
<li>Dataset augmentation.</li>
<li>Pseudo labeling for BERT.</li>
<li>Prediction value shift of each subgroup to optimize BPSN & BNSP AUC.</li>
</ul>
<p>Thank you for taking time for reading.</p>
<p>Our code is now available on <a href="https://github.com/sakami0000/kaggle_jigsaw">here</a>.</p>
<p>lol. Thanks.</p>
<p>Thank you for your comment.
In my understanding, pseudo labeling doesn't depend on unlabeled data itself, because it just promotes training by fitting a model to predictions of a trained model. Actually, I got a similar result when I used old toxic competition data as unlabeled data.
Please let me know if I'm wrong. :)</p>
<p>EDIT:
However, predictions for augmented data become close to predictions of a trained model, and LB score changes. I shouldn't have used test data as you said. Thanks.</p>
<p>You can check my LSTM code <a href="https://www.kaggle.com/sakami/single-lstm-pseudo-label/notebook">here</a>. :)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F2fcae863cb2c06e96fd27e70f203b09d%2Fimage.png?generation=1561800677998726&alt=media" alt=""></p>
<p>The BNSP and BPSN AUCs can be regarded as metrics measuring the gap between predictions on a subgroup and those on the background. We tried to maximize the BNSP and BPSN AUCs by shifting the predictions of each subgroup, but it didn't work well because of samples belonging to multiple subgroups.
Could you tell me how you realized this idea?</p>
<p>If you use the default parameters, BERT is trained 3 epochs with learning rate 5e-5, but this causes overfitting. You should use smaller learning rate, fewer epochs.
<a href="https://www.kaggle.com/yuval6967/toxic-train-bert-base-pytorch">Yuval's kernel</a> is really helpful for fine-tuning BERT.</p>
<p><strong>BERT Base</strong>
- epoch: 2
- batch size: 32
- accumulation step: 1
- learning rate: 2.5e-5
- lr decay coefficient: 0.95</p>
<p><strong>BERT Large</strong>
- epoch: 2
- batch size: 16
- accumulation step: 4
- learning rate: 1.5e-5
- lr decay coefficient: 1.0</p>
<p>The max sequence lengths vary. For example,
head 128 + tail 90
head 128 + tail 111
head 160 + tail 79
head 241
and so on.</p>
<p><a href="/ryches">@ryches</a> You are right. I used "further pre-training" as a meaning like "further fine-tuning".
I edited the draft, thank you.</p>
<p><a href="/dafchen">@dafchen</a> You made it realized, thank you!! </p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
52th Public Solution – Newbie Lessons learned
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I want to thank all contributors in this competition. I learned a lot from you all!</p>
<p>Best Model:
- 3 BERT
- 1 XLNet</p>
<p>My solution was a very simple BERT model based on Yuval’s great kernel with:
- No text preprocessing
- No loss weights</p>
<p>The key point was selecting a small subset of training data (only ~475.000):
- All records with valid identities (not na)
- All positive records</p>
<h2>Lessons learned</h2>
<h3>Quality of data is key</h3>
<p>My assumption was that most of the data was labeled using ML, and only records with valid identities were better supervised by humans. Training with only valid-identity records produced better performance than using random records.</p>
<p>In a world of non-toxic comments, toxic comments are gold, so including all toxic comments also improved model performance. This helps solve the unbalanced problem with this dataset, instead of using loss weight.</p>
<p>I ended up with ~475.000 records that could be trained in a Kernel with 2 epochs with a LB 0.94299. Am I overfitting LB? Will see.</p>
<h3>Be agile</h3>
<p>Machine Learning is filled with trial and error, so being agile is also key. BERT and XLNet were very time consuming, so creating a small subset of data (say only 100.000 selected records) can help in trying new things.</p>
<h3>Stay tuned with new technologies</h3>
<p>I heard about XLNet only three days before the deadline. I finally got to train an XLNet model, but with many bugs. If I had had a couple more days I could probably have fine-tuned the model and increased the LB score since, even with those bugs, XLNet improved my blending score.</p>
<p>Good luck for the final score!</p>
<p>Hola Carlos :)</p>
<p>1 BERT at a time for training a model.
3 BERT + 1 XLNET for predictions in the same kernel.</p>
<p>BERT 475.000 recs 2 epochs -> 0.94271 CV, 0.94278 LB
I always used the same set of data as a base line.
I tried once with another set (different random seed) and got 0.94052 CV, 0.94233 LB; that's why I'm afraid I was overfitting the LB.</p>
<ol>
<li>Shuffle and split dataset for training and validation (~1.700.000/100.000)</li>
<li>Remove null identity records from training dataset: ~450.000 records.</li>
<li>Select all positive (toxic comments) records: ~45.000</li>
<li>Select random 45.000 negative records.
This way I have a balanced 50% positive/negative dataset with only 90.000 records for training and 100.000 for validation.</li>
</ol>
<p>NOTE: ~45.000 is the number of positive records with identities </p>
<p>0.94299 is with 475.000 rows.
I will release the code but I need a little time to do some code cleanup.
In the meantime, here is the key part where records are selected:</p>
<p>```python
train_DF = pd.read_csv(F'../input/train.csv')
train_df = train_DF[:-100_000]</p>
<p>iden_df = train_df[~train_df['male'].isna()]
true_df = train_df[train_df['male'].isna()]
true_df = true_df[true_df[TARGET_COLUMN] >= 0.5]</p>
<p>train_df = pd.concat([iden_df, true_df], axis=0)
```</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
5th place
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Exhaustive search using 5% validation set and tuning on public LB.</p>
<h3>Thanks and Congrats</h3>
<p>Thanks to Kaggle and Jigsaw for this great competition, flawless setup, and rich dataset. <br>
Special thanks to Kaggle Kernels Team. Kernels platform is very stable and efficient compute environment. During this competition I trained many large models including several BERT Large models in Kernels and it was very comfortable. And by the way it's very cool to have P100 inside! <br>
Many thanks to the community for enormous amount of sharing. I want to emphasize several contributions. Of course there was a lot more. Please check out and upvote:
* <a href="https://www.kaggle.com/thousandvoices">Thousandvoices</a> showed <a href="https://www.kaggle.com/thousandvoices/simple-lstm">really compact LSTM</a>
* <a href="https://www.kaggle.com/yuval6967">Yuval</a> did a lot of <a href="https://www.kaggle.com/yuval6967/toxic-bert-plain-vanila">BERT related sharing and mentoring</a>
* <a href="https://www.kaggle.com/bminixhofer">Benjamin</a> demonstrated <a href="https://www.kaggle.com/bminixhofer/speed-up-your-rnn-with-sequence-bucketing">sequence bucketing</a>
* <a href="https://www.kaggle.com/christofhenkel">Dieter</a> shared <a href="https://www.kaggle.com/christofhenkel/how-to-preprocessing-for-glove-part2-usage">nice preprocessing</a>
* <a href="https://www.kaggle.com/pronkinnikita">Nikita</a> contributed <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/95220">PyTorch GPT-2</a>
* <a href="https://www.kaggle.com/strideradu">Strideradu</a> contributed <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/96773">PyTorch XLNet</a></p>
<h3>Models</h3>
<p>My final solution was weighted average of 9 models:
* BERT Base Uncased
* BERT Base Cased
* BERT Large Uncased
* BERT Large Cased ========> Best single. Public LB: 0.94482
* BERT Large Uncased WWM
* BERT Large Cased WWM
* BiLSTM with FastText and Glove
* GPT-2 Large
* XLNet Large</p>
<h3>Data</h3>
<p>I trained all final models except LSTM on 95% of data, leaving 5% as cold holdout for validation. Final LSTM was trained on full dataset. For my LSTM I used standard cleaning and preprocessing. All other models were trained on RAW data: no cleaning, no preprocessing, no feature engineering. Well, I guess future is now…</p>
<h3>Loss</h3>
<p>I used weighted sigmoid cross-entropy loss (<code>torch.nn.BCEWithLogitsLoss</code>) with different sets of weights and target variants (including 6 additional toxicity subtypes).</p>
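<p>The post doesn't include the exact code, so here is only a minimal sketch of a weighted sigmoid cross-entropy with auxiliary subtype targets, assuming PyTorch, a head that emits one logit for 'target' followed by six subtype logits, and an externally supplied per-sample weight vector (all of these layout choices are my assumptions, not the author's code):</p>
<p>```python
import torch.nn as nn

bce = nn.BCEWithLogitsLoss(reduction='none')

def weighted_loss(logits, targets, sample_weights):
    # logits, targets: (batch, 7); sample_weights: (batch,)
    main = (bce(logits[:, 0], targets[:, 0]) * sample_weights).mean()  # weighted 'target' term
    aux = bce(logits[:, 1:], targets[:, 1:]).mean()                    # six toxicity subtypes
    return main + aux
```</p>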
<h3>Tools</h3>
<p>Code: PyTorch plus a bit of FastAI <br>
Hardware: P100 (Kernels), V100 (GCP) </p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
7th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all I would like to thank the organizers and Kaggle for this awesome NLP competition!
The best part was: there were no leaks or golden features. It turned out to be a pure NLP competition.</p>
<h3>Loss Functions</h3>
<p>We used two types of loss functions for BERT and similar models:</p>
<p>```python
y_columns = ['target']

y_aux_train = train_df[['target', 'severe_toxicity', 'obscene',
                        'identity_attack', 'insult',
                        'threat',
                        'sexual_explicit']]
y_aux_train = y_aux_train.fillna(0)

identity_columns = [
    'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
    'muslim', 'black', 'white', 'psychiatric_or_mental_illness']

# Overall
weights = np.ones((len(train_df),)) / 4

# Subgroup
weights += (train_df[identity_columns].fillna(0).values >= 0.5).sum(
    axis=1).astype(bool).astype(np.int) / 4

# Background Positive, Subgroup Negative
weights += (((train_df['target'].values >= 0.5).astype(bool).astype(np.int) +
             (train_df[identity_columns].fillna(0).values < 0.5).sum(
                 axis=1).astype(bool).astype(np.int)) > 1).astype(
    bool).astype(np.int) / 4

# Background Negative, Subgroup Positive
weights += (((train_df['target'].values < 0.5).astype(bool).astype(np.int) +
             (train_df[identity_columns].fillna(0).values >= 0.5).sum(
                 axis=1).astype(bool).astype(np.int)) > 1).astype(
    bool).astype(np.int) / 4

y_train = np.vstack(
    [(train_df['target'].values >= 0.5).astype(np.int), weights]).T
y_train = np.hstack([y_train, y_aux_train])

def custom_loss(data, targets):
    ''' Define custom loss function for weighted BCE on 'target' column '''
    bce_loss_1 = nn.BCEWithLogitsLoss(
        weight=targets[:, 1:2])(data[:, :1], targets[:, :1])
    bce_loss_2 = nn.BCEWithLogitsLoss()(data[:, 1:2], targets[:, 2:3])
    bce_loss_3 = nn.BCEWithLogitsLoss()(data[:, 2:3], targets[:, 3:4])
    bce_loss_4 = nn.BCEWithLogitsLoss()(data[:, 3:4], targets[:, 4:5])
    bce_loss_5 = nn.BCEWithLogitsLoss()(data[:, 4:5], targets[:, 5:6])
    bce_loss_6 = nn.BCEWithLogitsLoss()(data[:, 5:6], targets[:, 6:7])
    bce_loss_7 = nn.BCEWithLogitsLoss()(data[:, 6:7], targets[:, 7:8])
    bce_loss_8 = nn.BCEWithLogitsLoss()(data[:, 7:8], targets[:, 8:9])
    return bce_loss_1 + bce_loss_2 + bce_loss_3 + bce_loss_4 \
        + bce_loss_5 + bce_loss_6 + bce_loss_7 + bce_loss_8
```</p>
<p>And,</p>
<p>```python
def weights_and_targets(df):
    y_aux_train = df[['target', 'severe_toxicity', 'obscene',
                      'identity_attack', 'insult',
                      'threat',
                      'sexual_explicit']]
    y_aux_train = y_aux_train.fillna(0)
    identity_columns = [
        'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
        'muslim', 'black', 'white', 'psychiatric_or_mental_illness']
    # Overall
    weights = np.ones((len(df),)) / 4
    # Subgroup
    weights += (df[identity_columns].fillna(0).values >= 0.5).sum(
        axis=1).astype(bool).astype(np.int) / 4
    # Background Positive, Subgroup Negative
    weights += (((df['target'].values >= 0.5).astype(bool).astype(np.int) +
                 (df[identity_columns].fillna(0).values < 0.5).sum(
                     axis=1).astype(bool).astype(np.int)) > 1).astype(
        bool).astype(np.int) / 4
    # Background Negative, Subgroup Positive
    weights += (((df['target'].values < 0.5).astype(bool).astype(np.int) +
                 (df[identity_columns].fillna(0).values >= 0.5).sum(
                     axis=1).astype(bool).astype(np.int)) > 1).astype(
        bool).astype(np.int) / 4
    loss_weight = 1.0 / weights.mean()
    y = np.vstack(
        [(df['target'].values >= 0.5).astype(np.int), weights]).T
    y_aux = np.hstack([y, y_aux_train])
    return weights, loss_weight, y, y_aux, identity_columns

def custom_loss(data, targets, loss_weight):
    bce_loss_1 = nn.BCEWithLogitsLoss(weight=targets[:, -1])(data[:, 0], targets[:, 0])
    bce_loss_2 = nn.BCEWithLogitsLoss()(data[:, 1:-1], targets[:, 1:-1])
    return (bce_loss_1 * loss_weight) + bce_loss_2
```</p>
<h3>Preprocessing</h3>
<p>We used contraction mapping and some other small preprocessing for all models.</p>
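<p>The exact mapping isn't shown; a minimal sketch of contraction mapping with a deliberately tiny dictionary (the real one would be much larger) could look like this:</p>
<p>```python
# Tiny illustrative mapping; order matters so that "can't" is handled before "n't".
CONTRACTIONS = {"won't": "will not", "can't": "cannot", "n't": " not",
                "'re": " are", "'ll": " will", "'ve": " have", "'m": " am"}

def expand_contractions(text, mapping=CONTRACTIONS):
    for pattern, replacement in mapping.items():
        text = text.replace(pattern, replacement)
    return text

print(expand_contractions("We'll see, but I can't say we're sure"))
# -> "We will see, but I cannot say we are sure"
```</p>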
<h3>Models</h3>
<p>For LSTMs we worked with a mix of Keras and PyTorch. In the end, we selected the PyTorch model for the final submission.</p>
<p>During the whole course of the competition we trained over 100 models which included, LSTMs, BERT Base, BERT Large, GPT-2, XLNet, ElMo and ULMFit.</p>
<p>Final submission was:</p>
<p><code>
LSTM * 0.15 + GPT2 * 0.05 + BERT_BASE_UNCASED * 0.2 + BERT_BASE_CASED * 0.2 + BERT_LARGE_WWM_UNCASED * 0.2 + BERT_LARGE_WWM_CASED * 0.1 + BERT_LARGE_CASED * 0.1
</code></p>
<p>How we came up with the weights? Well, don’t ask :D</p>
<p>As you can see, our model did not have XLNet in it.</p>
<h3>What didn't work for us:</h3>
<p>Model
- ElMo, ULMFit, XLNet, BERT multi_cased</p>
<p>LSTM
- embeddings (lexvec, Probabilistic FastText, GN-GloVe, numberbatch)
- statistical features
- Excessive preprocessing</p>
<p>BERT
- LM pretraining(use Old toxic, Quora)
- Excessive preprocessing</p>
<h3>Hardware</h3>
<p>We used 1080ti, Titan X, multiple 2080ti and V100 GPUs for training the models. The final model took around 5000 seconds for inference.</p>
<p>My teammates <a href="/pvduy23">@pvduy23</a>, <a href="/atsunorifujita">@atsunorifujita</a> and <a href="/frankrosenblatt">@frankrosenblatt</a> were some of the best teammates I have had, and I learnt a lot from them. Special congratulations to <a href="/atsunorifujita">@atsunorifujita</a> for becoming a Kaggle Master :)</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
8th Place Solution (4 models simple avg)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I trained XLNet on a local machine, with a V100...
But maybe NVIDIA Apex can help you, if you're worried about the GPU kernel runtime limitation.</p>
<p>It means there are 5 different dropout layers.
The code is like this:</p>
<p>```python
class BertForJigsawV2(BertPreTrainedModel):

    def __init__(self, config, out_dim=8):
        super(BertForJigsawV2, self).__init__(config)
        self.out_dim = out_dim
        self.bert = BertModel(config)
        self.dropouts = nn.ModuleList([
            nn.Dropout(0.5) for _ in range(5)
        ])
        self.linear = nn.Linear(config.hidden_size, out_dim)
        self.apply(self.init_bert_weights)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):
        _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask,
                                     output_all_encoded_layers=False)
        for i, dropout in enumerate(self.dropouts):
            if i == 0:
                h = self.linear(dropout(pooled_output))
            else:
                h += self.linear(dropout(pooled_output))
        return h / len(self.dropouts)
```</p>
<p>I have made my final kernel and dataset public now.</p>
<p><a href="https://www.kaggle.com/haqishen/jigsaw-predict">https://www.kaggle.com/haqishen/jigsaw-predict</a></p>
<p>Thank Jigsaw and Kaggle for hosting such an interesting and challenge competition!
Congrats to all winners and participants!</p>
<h1>Summary</h1>
<ul>
<li>I ensambled BERT-base-uncased, BERT-large-uncased, GPT2, XLNET models (simple average)</li>
<li>Though I tried text preprocessing, it seemed to have no obvious improvement on the LB score in this competition.</li>
<li>Label preprocessing -- Most effective trick for me.</li>
<li>Multi-Sample Dropout from here <a href="https://arxiv.org/abs/1905.09788">https://arxiv.org/abs/1905.09788</a> -- increase LB a little and the training gets stabler (loss decreases stabler).</li>
<li>Training model using toxicity score as additional label (Thanks to Dieter for this kernel: <a href="https://www.kaggle.com/christofhenkel/how-to-preprocessing-for-glove-part2-usage">https://www.kaggle.com/christofhenkel/how-to-preprocessing-for-glove-part2-usage</a>
) -- Improved my BERT-base and GPT2</li>
<li>Training model using BCELoss with sample weight (Thanks to Dieter for the same kernel). I applied this trick to all my models.</li>
</ul>
<h1>Label Preprocessing</h1>
<p>I do something like this:</p>
<p><code>labels = [(df['target'].values&gt;=rate/(args.bin+1)).astype(np.int) for rate in range(1, args.bin+1)]</code></p>
<p>Here <code>args.bin</code> is a hyper-parameter that defines how many classes to generate.
For example, if we have the original label <code>[0.33, 0.05, 1.0, 0.58, 0.0]</code> and <code>args.bin = 9</code>, we will get output like this:
<code>
array([[1, 1, 1, 0, 0, 0, 0, 0, 0], # 0.33
[0, 0, 0, 0, 0, 0, 0, 0, 0], # 0.05
[1, 1, 1, 1, 1, 1, 1, 1, 1], # 1.0
[1, 1, 1, 1, 1, 0, 0, 0, 0], # 0.58
[0, 0, 0, 0, 0, 0, 0, 0, 0]]) # 0.0
</code>
Then I treat each column as a binary class label and train it using BCELoss. When predicting, I simply take the mean value of all columns as the final prediction.</p>
<p>I used <code>args.bin=29</code> for BERT-base and GPT2, <code>args.bin=99</code> for BERT-large, and <code>args.bin=9</code> for XLNET for the final submission. These were decided by only a few experiments due to limited compute resources, so they may not be the best choices.</p>
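<p>As a minimal, runnable sketch of the binning (function and variable names are mine), using the same example values; averaging the columns at prediction time approximately recovers a soft score:</p>
<p>```python
import numpy as np

def make_bin_labels(target, n_bins=9):
    # Column k is 1 if target >= (k+1)/(n_bins+1)
    return np.stack([(target >= r / (n_bins + 1)).astype(int)
                     for r in range(1, n_bins + 1)], axis=1)

target = np.array([0.33, 0.05, 1.0, 0.58, 0.0])
bins = make_bin_labels(target)      # shape (5, 9), matches the table above
print(bins.mean(axis=1))            # [0.333, 0.0, 1.0, 0.556, 0.0] -- roughly the original scores
```</p>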
<h1>Multi-Sample Dropout</h1>
<p><a href="https://arxiv.org/abs/1905.09788">https://arxiv.org/abs/1905.09788</a></p>
<p>One advantage of models like BERT, GPT2, and XLNET is that they have been pretrained on large-scale text data, which means we can fine-tune them on our own dataset in a relatively short time and get good results, compared to training from scratch. However, there is also a drawback: we cannot modify the architecture of these models.</p>
<p>In this situation, I came up with Multi-Sample Dropout, which can easily be applied to the FC layers at the end of these models. Experiments showed that it makes the loss decrease more stably during training and makes the model generalize better.</p>
<p>I applied this trick to all of my models. Five dropout layers with different masks and p=0.5 are used in parallel after the base BERT model; the output of each dropout layer is passed to a shared-weight FC layer, and I take the average of the FC outputs as the final output. This process is shown in the following figure.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F448347%2Fe7da45582176dd2a4dbdc7e7bd37ca8b%2F2019-07-22%2019.11.42.png?generation=1563790324431533&alt=media" alt=""></p>
<p>Thanks for sharing the result! It seems that your method is better in this case.
I'm eager to see your <code>30th place solution</code> now :)</p>
<p>Though treating the label as a binary class gives a better score than regression, it does lose label information. So I looked for a way to treat the label as classification while avoiding that loss of information, and finally came up with this.</p>
<p>This trick turned out to boost my LB score by around 0.002 for BERT-base and GPT2, and around 0.001 for BERT-large. I didn't test it on XLNET since it was close to the end of the competition.</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
9th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry for the super late reply...</p>
<ol>
<li>2 epochs.</li>
<li>lr warm up from 0 to 2e-4, warm-up ratio is 0.1.</li>
<li>lr linear decay from 2e-4 to 1e-4.</li>
<li>BERT:
<ul><li>Architecture: Large Bert.</li>
<li>Pretrained Weights: Uncased, WWM (Whole Word Masking).</li></ul></li>
<li>Sequence max length: 220.</li>
<li>Optimizer: Bert Adam, no decay for bias.</li>
</ol>
<p>Thanks~ Nice to see you around and doing great too!</p>
<p>Congrats to the top teams! Thanks to Kaggle/Jigsaw for hosting this competition, thanks to my teammate @kazanova for his efforts, thanks to @yuval6967 for sharing a lot about how to use PyTorch with APEX to accelerate training and how to fine-tune BERT, and thanks to @strideradu for sharing the PyTorch version of XLNet... In order to speed up experiments, I tried to use AMP with TensorFlow, but I was not able to solve the setup issues in a short time... so this is also the competition where I switched back from Keras/TensorFlow to PyTorch two weeks before the competition ended...</p>
<p>Here is the solution:</p>
<p><strong>Loss Function:</strong></p>
<p>Adding <code>toxicity_annotator_count</code> as well as <code>severe_toxicity, obscene, identity_attack, insult, threat</code> to the loss function shared in the public kernels, i.e. the NN also predicts <code>toxicity_annotator_count.min()/toxicity_annotator_count</code>. This gave a +0.0003 improvement in 5-fold CV. I added this as one of the targets to be predicted because I found the <code>toxicity</code> value is influenced by the number of annotators, so training the NN to predict the annotator count might help it predict <code>toxicity</code> as well. For example: for toxicity=0.1666666 (1/6), the annotator count happens to be 6, which means the toxicity annotator count determines which values are possible for <code>toxicity</code> (in this case: 0/6, 1/6, 2/6, ..., 6/6 are the possible values). I think our intuition here could be further improved.</p>
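<p>The exact implementation isn't given; a minimal sketch of building that extra target (column names follow the description above, everything else is an assumption) might be:</p>
<p>```python
import numpy as np

def build_targets(df):
    aux_cols = ['severe_toxicity', 'obscene', 'identity_attack', 'insult', 'threat']
    count = df['toxicity_annotator_count'].values.astype(float)
    count_target = count.min() / count          # the extra target: min_count / count per row
    y_main = (df['target'].values >= 0.5).astype(float)
    return np.column_stack([y_main, df[aux_cols].fillna(0).values, count_target])
```</p>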
<p><strong>Models have tried</strong></p>
<p>Bi-LSTM, Char-CNN, BERT-Small-Uncased, BERT-Large-Uncased with WWM, GPT2, XLNET, Logistic Regression/LGB with tf-idf.
Amongst the models tested on the public LB, their CV vs. public LB scores are:</p>
<ul>
<li>Bi-LSTM: .941 (5-fold CV) vs. .939</li>
<li>BERT-Small-Uncased: .943 (1-fold CV) vs. .942</li>
<li>BERT-Large-Uncased with WWM: .9442 (1-fold CV) vs. .9440</li>
<li>XLNET: .9437 (1-fold CV) vs. .9431</li>
</ul>
<p><strong>Stacking</strong></p>
<p>Considering the time, we used only the three most diverse and strongest models for stacking: Bi-LSTM, BERT-Large-Uncased with WWM, and XLNET, with ExtraTreesClassifier as the meta model, which scores .9463 in CV and .9461 on the public LB. Since our NN predicts not only <code>toxicity</code>, it is possible to incorporate the other predictions (threat\insult...) into the stack. Interestingly, besides <code>toxicity</code>, I found that using <code>obscene</code> from Bi-LSTM\Bert-Large-Uncased-WWM\XLNET, <code>insult</code> from XLNET, and <code>threat</code> from Bi-LSTM (5 extra features in total) helps a lot in the stacking, which scores .9468 in CV and .94671 on the LB; we jumped from 47th place to 12th place on the last day with stacking... I was not expecting we could reach the Gold zone on the last day, but as @cpmpml shared in another competition: never give up till the last moment, do our best and hope for the best!</p>
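<p>The stacking code itself isn't shown; a minimal sketch of the meta-model step (the hyper-parameters and the feature layout are my assumptions) could be:</p>
<p>```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def stack_predictions(oof_features, y_train, test_features):
    """oof_features / test_features: arrays of shape (n_rows, n_meta_features) built by
    column-stacking the base models' out-of-fold / test predictions for 'toxicity'
    plus the selected auxiliary outputs (obscene, insult, threat)."""
    meta = ExtraTreesClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    meta.fit(oof_features, (np.asarray(y_train) >= 0.5).astype(int))
    return meta.predict_proba(test_features)[:, 1]
```</p>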
<p><strong>What I've learned</strong></p>
<p>Thanks to Kaggle community, I've learned a lot in this competition:
1. Deploying Bert\XLNET
2. Using AMP with pytorch
3. Using Bucket Sequencing to accelerate the learning.</p>
<p><strong>What could be further improved</strong></p>
<ol>
<li>Get back to read papers more thoroughly!</li>
<li>Better loss function shared by other solutions!</li>
<li>Trimming tensors while predicting, to shorten the training cycle and allow more models to be put in the stack!</li>
</ol>
|
Jigsaw Unintended Bias in Toxicity Classification
|
FULL CODE 4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We've cleaned up the code and opened it
– there is a simple structure –
inside<code>combat-wombat-bias-in-toxicity/code/</code>
- the input folder
- the models folder
- the training scripts</p>
<p><a href="https://github.com/iezepov/combat-wombat-bias-in-toxicity">https://github.com/iezepov/combat-wombat-bias-in-toxicity</a></p>
<p>Run the scripts one by one (or select only those you want), then wait for all eternity (i'm kidding, just a few (100) hours) and that's it!</p>
<p>It's up to you to do your dream blend from the trained models – <strong>fast</strong> (lstm + one bert/gpt) or with <strong>the best performance</strong> (full stack).</p>
<p>Thank you my teammate <a href="/iezepov">@iezepov</a> – you're super professional making our code great.
Thank Kaggle for the competition and for such an amazing community.</p>
<p>Rate the repo with a star and feel free to ask questions.</p>
<p><a href="/yimingcaiquantum">@yimingcaiquantum</a> thank you for the question. There is such an intuition: by predicting several targets, you propagate through your network using not only the signal from one target, but also using other targets. Informally, signal – information that make sense for the task, e.g presence of identities. Hence, by learning network using several sources of useful information we make things easier for optimization process (for finding an optimum) and for the regularization (in is really more difficult to overfit 18 targets then 1) – so, it’s really common situation, when adding targets improve performance of the original task.</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
Fake 13.5th Solution (Real 77th) NO PytorchBERT only keras
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Before Competition End (77th solution)</h1>
<p>We finally got 77th place in the end, which is not as good as we expected. The reasons might be:</p>
<ol>
<li>We lost our learning curve because we used a raw Adam optimizer with 0.01 lr decay rather than what the paper suggested.</li>
<li>My messy arrangement of feature testing: I always wasted time figuring out what I was currently doing and kept too few logs of what had been done. </li>
<li>We spent too much time on the Keras solution with its easy TPU usage, while the PyTorch solution has trimming and length-matched inference to cut down per-model inference time (maybe Keras can do this as well, but I could not find any information via googling). </li>
<li>Not enough time since mid-May.</li>
</ol>
<p>Other methods we used before the competition ended:
- 1. massive preprocessing
- 2. using default weights, but downsampling the weights of samples which have target=0.5 and
belong to the black, white, muslim, and homosexual subgroups.
- 3. loss function like below:
<code>python
def custom_loss(y_true, y_pred):
return binary_crossentropy(K.reshape(y_true[:,0],(-1,1)),y_pred) * y_true[:,1]
</code>
and
<code>python
model3.compile(loss=[custom_loss,'binary_crossentropy'],loss_weights=[loss_weight, aux_loss],optimizer=adamwarm)
</code>
- 4. Layerwise decay 0.995 (used in BERT base only)
- 5. ensembling:
0.85*(4 bertuncasedbase+ 1 bertuncasedlarge)+0.15*avg(simpleLSTMmodels)
We didn't have time to analyze the LSTMs, so we just forked the best-scoring one from the public kernels
with our massive preprocessing and public EDA features (<a href="https://www.kaggle.com/cristinasierra/pretext-lstm-tuning-v3">https://www.kaggle.com/cristinasierra/pretext-lstm-tuning-v3</a>)
- 6. results:
Bertuncasedbase: CV0.94339 - private LB 0.94168 (when can I see my public LB back...)
Bertuncasedlarge: CV0.94221 - private LB 0.94212
LSTMs no time to look CV - private LB 0.92984
ensemble no CV - private LB 0.94407 (77th)</p>
<h1>After Competition Ends (13.5th Solution)</h1>
<p>I just wanted to reuse the OFFICIAL optimizer to see how well I could do without PyTorch. At first I just copied the optimizer to my training script, but the models always diverged. At last I forked my kernel (messy arrangement...) and made it public on Kaggle, so I could start further research on this task. The features I used for the fake 13.5th solution are:</p>
<ol>
<li><p>Sample weights: we used them the way <a href="/hanyaopeng">@hanyaopeng</a> did in his post: (<a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/97443#latest-584490">https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/97443#latest-584490</a>). I think this will be the definitive way to deal with the low AUC on the 4 subgroups. This is the only method I took from others.</p></li>
<li><p>Instead of using LSTMs I replaced them with a single GPT2 model (also in Keras) with the same structure as <a href="/pronkinnikita">@pronkinnikita</a> posted in his discussion. </p></li>
<li><p>Our ensemble changed to (1 GPT2 + 1 wwm-uncased-bert-large (Un_B_La) + 2 uncased-bert-base (Un_B_Ba) + 2 cased-bert-base (Ca_B_Ba))/6</p></li>
<li><p>Layerwise learning rate decay 0.998.
I only used this in the BERT-large model; its own CV score did not improve, but it contributed to the ensemble CV and the private score.</p></li>
<li><p>Here are some hyperparameters
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2721247%2Fc0463fd182acf0d289f686e3939a8a24%2Fpara.bmp?generation=1564251525199274&alt=media" alt=""></p></li>
</ol>
<p>PP short for preprocessing or not; LC short for cased or not. BSZ is batchsize. LRep1 is learning rate in first epoch. TSrate is totalstep multiple rate. </p>
<h1>Result (No pytorch already, only chosen by CV)</h1>
<p>wwm-uncased-bert-large: best CV0.95385 - private LB 0.94353
uncased-bert-base: best CV0.95262 - private LB 0.94442
cased-bert-base: best CV0.95109 - private LB 0.94265
GPT2-117: best CV0.95217 - private LB 0.94131
ensemble: just average, CV0.95468 - private LB 0.94631 (between 13 and 14 score)
(kernel address: <a href="https://www.kaggle.com/httpwwwfszyc/keras-models-inference">https://www.kaggle.com/httpwwwfszyc/keras-models-inference</a>)</p>
<h1>Conclusion:</h1>
<ol>
<li>I should have a scientific route for adding features (controlled experiments), check whether the CV score increases (sometimes I was swayed by the public LB...), and note the results down.</li>
<li>I really learned a lot from the other solutions. Many of them contain brilliant thoughts about the data distribution. Maybe I will try them in the future.</li>
<li>Keras can also perform well, but does anyone have the technology, or at least hints, on how to cut down inference time in Keras? (I wonder what would happen if the private dataset were all long sequences :-)) Is trimming as easy to use in Keras as what has been shown with PyTorch?</li>
<li>Thanks to my teammate <a href="/cristianozhuo">@cristianozhuo</a>, as he was enthusiastic about doing massive preprocessing, helped me a lot when I felt tense, and kept working on this project when I had to do my day job.</li>
</ol>
|
Jigsaw Unintended Bias in Toxicity Classification
|
Kernel For Training Bert (The real stuff)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/yuval6967/toxic-train-bert-base-pytorch">Toxic train BERT Base pytorch</a> is similar to [my plain vanilla kernel] (<a href="https://www.kaggle.com/yuval6967/toxic-bert-plain-vanila">https://www.kaggle.com/yuval6967/toxic-bert-plain-vanila</a>) but this time with a more appropriate loos function and training.</p>
<p>It is similar to the notebooks I used for training Bert-base, Bert-large and GPT2. A few thing to note:</p>
<ol>
<li><p>Perprocessing - I hardly do any, and what I do is really not needed (for GPT2, it must be removed)</p></li>
<li><p>This kernel takes 2.5H to train 0.5 epoch. changing <code>partial_train</code> to 1.0 would train on full epoch</p></li>
<li><p>I normally trained on 2 epochs using <code>WarmupCosineWithWarmupRestartsSchedule</code> with a slightly higher lr.</p></li>
</ol>
<p>My final solution is an ensemble of 3 models of bert-uncased-large, 4 models of bert-uncased-base, 5 models of GPT2 models.</p>
<p>I use it just to see loss on live runs</p>
<p>0.5 Epoch takes 2.5H</p>
<ol>
<li>Every batch is trimmed to the length of the longest member of this batch</li>
<li>I make sure that in every batch all the lines are of similar length (a short sketch follows this list). I do it like this:
<ul><li>First I sort the input lines according to the text length</li>
<li>Now I group the input into batch_size groups</li>
<li>Shuffle the groups.</li>
<li>Train without shuffling the dataset </li></ul></li>
</ol>
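<p>A minimal sketch of that scheme (assuming tokenized inputs as lists of ids and 0 as the padding id, which are my assumptions):</p>
<p>```python
import numpy as np

def bucketed_batches(token_ids, batch_size, seed=0):
    order = np.argsort([len(t) for t in token_ids])       # sort by length
    groups = [order[i:i + batch_size]
              for i in range(0, len(order), batch_size)]  # batch_size-sized groups
    np.random.RandomState(seed).shuffle(groups)           # shuffle the groups, not the rows
    for g in groups:
        batch = [token_ids[i] for i in g]
        max_len = max(len(t) for t in batch)              # trim/pad to the longest member
        yield [t + [0] * (max_len - len(t)) for t in batch]
```</p>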
<p>simple average. same weights within the same model. different weights for different models.</p>
<p>I used special token for zero-padding</p>
<p>The classification layer I used is similar to the Bert classification layer. </p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
Loss function and a short report (LB: 5)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I created a notebook to show how I computed the loss function. I used the same loss function with LSTM based models, Bert-Base, and GPT2.</p>
<p><a href="https://www.kaggle.com/mashhoori/a-loss-function-for-the-jigsaw-competition">https://www.kaggle.com/mashhoori/a-loss-function-for-the-jigsaw-competition</a></p>
<p>Ensemble of 10 LSTM based models: ~0.9415
Ensemble of 5 Bert-Base based models: ~0.9462
Ensemble of 3 GPT2 based models: ~0.9425</p>
<p>Overall score: ~ 0.94725</p>
<p>=======================</p>
<p>So this was my line of thinking for the loss function:</p>
<p>The objective function has 4 terms, each representing one type of AUC. The formula below is a simplified version of the score, without any power of root involved:</p>
<p>OverAll_AUC + 1/9 Sum(Subgroup(i) AUC) + 1/9 Sum(BPSN(i) AUC) + 1/9 Sum(BNSP(i) AUC)</p>
<p>(1) Each sample participates in some of these terms -> a sample that participates in 3 terms is more important than a sample that participates in two terms since giving a bad score to that sample affects the overall score more.
(2) AUC itself represents the probability that a model assigns a higher score to a randomly chosen positive sample compared to a randomly chosen negative sample. If we wanted to estimate this probability based on a number of positive and negative examples and their scores, we could form the set of all ( p , n ) pairs and compute the fraction of pairs in which score(p) > score(n). Here, each positive example participates in 1/num-positive-samples of the pairs. The same goes for negative examples. This number, thus, can represent the influence of each positive and negative sample on the AUC. If we have only 2 positive examples in a set, the effect of assigning a bad score to each one of these samples is higher compared to the situation in which we have 200 positive samples. </p>
<p>Another effect of assigning weights in this manner is that it balances the effect of the number of positive and negative samples. If the number of negative examples is much higher than that of positive examples, after assigning weights we will have:
sum(weight-of-positive-samples) = sum(weight-of-negative-samples)</p>
<p>This can remove one source of bias in our model. If the model sees a higher fraction of positive examples for one subgroup compared to the overall rate of positive examples in the dataset, the model might get biased towards that subgroup. But after assigning weights in the way explained above, the model sees a balanced set of positive and negative examples for each subgroup (one issue is that, in this dataset, each sample could belong to more than one subgroup, so the final weights that I computed were not 100% balanced).</p>
<p>(1) and (2) gave me the loss function that I used, in which the weight of each sample is: the sum of the influences of that sample on each term in the overall score. </p>
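<p>The linked notebook has the exact computation; purely as a rough sketch of the idea (omitting, for simplicity, the 1/9 term weights and the powers/roots in the real metric), the per-sample weight could be accumulated like this:</p>
<p>```python
import numpy as np

IDENTITY_COLS = ['male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
                 'muslim', 'black', 'white', 'psychiatric_or_mental_illness']

def auc_term_weights(df, target_col='target'):
    y = df[target_col].values >= 0.5
    weights = np.zeros(len(df))

    def add_term(mask):
        # Within one AUC term, a positive sample's influence is 1/#positives,
        # a negative sample's influence is 1/#negatives.
        pos, neg = mask & y, mask & ~y
        if pos.sum() and neg.sum():
            weights[pos] += 1.0 / pos.sum()
            weights[neg] += 1.0 / neg.sum()

    add_term(np.ones(len(df), dtype=bool))        # overall AUC
    for col in IDENTITY_COLS:
        sub = df[col].fillna(0).values >= 0.5
        add_term(sub)                             # Subgroup AUC
        add_term((sub & ~y) | (~sub & y))         # BPSN AUC
        add_term((sub & y) | (~sub & ~y))         # BNSP AUC
    return weights / weights.mean()               # normalize to mean 1
```</p>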
<p><a href="/httpwwwfszyc">@httpwwwfszyc</a> I added an explanation to the post.</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
Nuances of bias in machine learning (In Dimension C-132)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><em>I wrote this a couple of months ago and then never posted it for some reason. Figured I'd just finally dump it here.</em></p>
<p>Recently on Twitter, I’ve witnessed lots of snarky little back-and-forths on the subject of machine learning bias, and I thought I’d throw my two cents into the ring. I’ll have a second part to this that takes the matter a bit more seriously. Here I explore the complexities of this important issue using a fictional situation. </p>
<h1>Why you should care</h1>
<h3>The Scenario</h3>
<p>The first thing I want to cover is why you should care about bias in Machine Learning. I’ll start with a fictional situation to display the complexity of the issue. </p>
<p>Let’s say you make this new algorithm that utilizes z-ray images in order to detect discombobulated plumbii. Your algorithm is able to do this with state of the art accuracy: .92 F1 score whereas human judgment is only able to achieve F1 of .85. </p>
<p><img src="https://i.ytimg.com/vi/eMJk4y9NGvE/maxresdefault.jpg" alt=""></p>
<p>Your algorithm gets released to the medical world and everyone is amped on this life-saving application of machine learning. Five years down the road a follow-up medical study compares the survival rates of those inspected with the new method vs the old method and determined that the fatality rate is 20% higher with the new method. Why did this happen? It turns out that the training data that was used contained images only from a single, specific hospital region that served 50% Cromulon, 20% Gazorpian, 20% Traflorkian and 10% other races. In this small set all of the groups were relatively well represented. There was relatively little data on the other races’ groups but this didn’t have any serious negative effect at model build time, however, out in the wild it actually had huge implications.</p>
<h3>The Problem</h3>
<p>Several key issues come to light. The thick hard shell of the Grunglokians blocks most of the z-rays from producing clear images. The rays make it through, but the brightness and clarity are greatly reduced. For this species the algorithm always guesses negative. Conversely, the Bluubosians' squishy, gelatinous bodies let the z-rays straight through and offer no signal back. For whatever reason, the algorithm always flagged this group as positive. These are only two examples of the algorithm having unintended negative consequences, but there are many more.</p>
<p>| Grunglokians | Bluubosians |
| --- | --- |
| <img src="https://vignette.wikia.nocookie.net/rickandmorty/images/d/da/Grunglokian_Wedding.png/revision/latest?cb=20170607201458" alt=""> | <img src="https://vignette.wikia.nocookie.net/rickandmorty/images/4/4a/S1E1_Surprised_Bluubosian.png/revision/latest?cb=20170607190714" alt=""> |</p>
<h3>The implications</h3>
<p>So now the Grunglokians are being plagued by undetected discombobulated plumbii, whereas a typical doctor has very accurate diagnosis with this group. On the flip side, all of the Bluubosians are being subjected to a painful treatment for a condition they don’t have and also costing the intergalactic federation billions for unnecessary treatment. Some are still happy with the improved algorithm, but there is a small minority calling for changes and some the outright banning of z-ray based detection algorithms. </p>
<h1>So how do we solve this?</h1>
<h3>Objective</h3>
<p>This is a very nuanced issue. Let’s start by discussing our objectives here. First off what are the responsibilities and priorities on the machine learning end? Is the objective to purely maximize F1 score? That didn’t turn out too well in the first 5-year trial. Is it to create an algorithm that is able to handle as many races with reasonable accuracy as possible? Is it to develop a set of conditions under which the algorithm will not be used? Is it to minimize false positives and save the intergalactic federation money or is it to catch as many cases as possible?</p>
<p>All of these angles should be considered in a normal data science workflow. There is no definite answer without further information and that is the key here. When this algorithm was first rolled out none of these issues nor their implications had been considered and many of them were completely unknown. Pushing ML out into the real world is always a risky endeavor, but it’s important to stress the potential costs of failure and ethics of the situation. </p>
<h1>Solution Candidates</h1>
<h3>Simplify the algorithm to work in more cases</h3>
<p>One possibility is to change from that fancy CNN model down to a simpler linear regression setup. This model is invariant to the shifts in image quality between the various races and has been shown to do well in almost all cases, but on average it is worse than doctor review in terms of performance so it would likely yield minimal benefit. </p>
<h3>Gather more data</h3>
<p>One would think that now that the model has been rolled out and the lesson has been learned about the various edge cases we could adequately address this by utilizing this knowledge and data to improve in these key areas, but there are a few major struggles. The data we learned from was based on fatalities and not necessarily discombobulated plumbii detection so we have a proxy for labels but not necessarily labels themselves. </p>
<p>We could always go and target the various groups that are underrepresented in our data now that we have figured a few of them out, but it would be extremely expensive to gather adequate data for all possible groups and doing this would cause us to sample from a different distribution. Originally we were gathering data from people who had come in because they were feeling ill, but actively gathering data for these subgroups would likely skew towards the negative class since they were already symptom-free. This could be a viable solution in other scenarios, but not this one. </p>
<h3>Augment the data</h3>
<p>One of the key things that was noticed in the retrospective study on the algorithm was that the biggest differentiator between species seemed to be in how the z-rays penetrated the patients. This caused fluctuations in the brightness and clarity of images. We could theoretically do some augmentation to simulate these different cases that were seen but this still doesn’t resolve many cases where all information was lost. This seems semi-viable, but likely won’t please everyone and we can’t guarantee augmentation will improve the various edge cases and not in some way decrease the performance of the original algorithm.</p>
<h3>Throw away the algorithm</h3>
<p>Many people did not take very kindly to the experimental method causing them harm. One option to consider is just canning the whole thing. The problem with this option is that the new z-ray technology has actually shown to be highly effective under the right conditions. Dropping the project entirely could potentially lead to many undiagnosed cases that are only possible to be detected via the new algorithm. Is it really fair to throw away something that has such significant upsides for certain people?</p>
<h3>Create condition-set to switch to doctors</h3>
<p><img src="https://vignette.wikia.nocookie.net/rickandmorty/images/4/42/S2e8_doctor.png/revision/latest?cb=20170813201413" alt=""></p>
<p>This is an in-between solution. We can’t make our algorithm robust to every different race, but it doesn’t quite seem right to totally ditch something that can be effective in the correct conditions so one option would be to look back at that retrospective study and our previous internal metrics and determine within some confidence level which races were likely negatively affected by this algorithm and create a protocol in which doctors review certain races, but the algorithm reviews others. This can likely be optimized to yield great results, but some races may also view this as discriminatory since the technology is being withheld from them off of seemingly little empirical evidence and others will question the safety of it if it is not a widely adopted technique. </p>
<h1>Final Verdict</h1>
<p>It’s honestly hard to say what the best option is. There are going to be trade-offs that need to be made and we can’t please everyone, but now that these paths have been looked at and more data has been gathered we can likely come to a better-informed solution than simply throwing our model out there and hoping we do more good than harm. </p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
Public LB Top 10, Private LB Top 26 Solution (Single Bert - 0.94560)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I also do not know the reason..</p>
<p>I split the aux output into two parts: one is the aux target, the other is the aux sub-target.</p>
<p>Me too. It is really fun! Good luck to you and to us :)</p>
<p>Thanks</p>
<p>I hope too. Good luck to us!</p>
<p>I have a idea from your kernel ! Thanks.</p>
<p>Our team has GPUs. We made the baseline in a kernel and trained it on a local machine.</p>
<p>I referenced <a href="https://medium.com/data-design/reaching-the-depths-of-power-geometric-ensembling-when-targeting-the-auc-metric-2f356ea3250e">here</a>. 0.94567 (linear) -> 0.94581(power 3) based on public lb!</p>
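<p>For reference, a minimal sketch of the power-average blend described in that article (the per-model weights and the power are illustrative, not the ones actually used):</p>
<p>```python
import numpy as np

def power_blend(pred_list, weights=None, power=3):
    preds = np.stack(pred_list)                    # (n_models, n_samples), each in [0, 1]
    if weights is None:
        weights = np.ones(len(pred_list)) / len(pred_list)
    return np.average(preds ** power, axis=0, weights=weights)
```</p>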
<p>Thanks! I edited title.</p>
<h1>Before posting</h1>
<p>Before posting, Thanks for hosting this great competition. And also, Thanks to our teammates:</p>
<ul>
<li>Young Soo Lee <a href="/yslee04">@yslee04</a> </li>
<li>Tae Jin Kim <a href="/fulrose">@fulrose</a> </li>
<li>Soon Hwan Kwon <a href="/soonhwankwon">@soonhwankwon</a> </li>
<li>You Han Lee <a href="/youhanlee">@youhanlee</a> </li>
</ul>
<p>This was my first experience with NLP. From count vectorization to BERT, I learned a lot from Kagglers.</p>
<h1>Solution</h1>
<h2>Summary</h2>
<ul>
<li>1) Text Preprocessing: Based on public kernel</li>
<li>2) <strong>Loss Fuction</strong>: We focused on here. A lot of EDA.</li>
<li>3) Model Architecture: 2 LSTM, 4 BERT, 1 GPT2</li>
<li>4) Ensemble: Pow Ensemble</li>
</ul>
<h2>Text Preprocessing</h2>
<p>Thanks to <a href="/christofhenkel">@christofhenkel</a> (<a href="https://www.kaggle.com/christofhenkel/how-to-preprocessing-for-glove-part1-eda">How To: Preprocessing for GloVe Part1: EDA</a>)</p>
<p>Based on Dieter's kernel, we handled contractions more carefully (We'll -> We will ...).</p>
<h2>Loss Function</h2>
<p>Thanks to <a href="/tanreinama">@tanreinama</a> (<a href="https://www.kaggle.com/tanreinama/simple-lstm-using-identity-parameters-solution">Simple LSTM using Identity Parameters Solution</a>)</p>
<p>We focused here. When you look at the <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation">Overview - Evaluation tab</a>, the metric is quite different from the metrics we already know.</p>
<p>There are two types of metric: the first is the Overall AUC (we call it g_auc), the second is the Generalized Mean of Bias AUCs (we call it b_auc). We can also see that the ratio of g_auc to b_auc is 1:3, so we have to focus on b_auc.</p>
<p>b_auc consists of three parts: 1) Subgroup AUC, 2) BPSN, 3) BNSP. After some EDA, we can see that the Subgroup AUC and BPSN scores are lower than the others.</p>
<p>```python
# new identity columns
NEW_IDENTITY_COLUMNS = [
    'homosexual_gay_or_lesbian',
    'jewish',
    'muslim',
    'black',
    'white',
    'psychiatric_or_mental_illness'
]

# weight
weights = np.ones((len(X_train), )) / 4

# bpsn
weights += (((train_df['target'].values < 0.5).astype(bool).astype(np.int) +
             (train_df[NEW_IDENTITY_COLUMNS].fillna(0).values >= 0.5).sum(axis=1).astype(bool).astype(np.int)) > 1).astype(bool).astype(np.int) * 3 / 4

# loss function
bce_loss_1 = nn.BCEWithLogitsLoss(weight=targets[:, 1:2])(preds[:, :1], targets[:, :1])
bce_loss_2 = nn.BCEWithLogitsLoss()(preds[:, 2:], targets[:, 3:])
bce_loss_3 = nn.BCEWithLogitsLoss()(preds[:, 1:2], targets[:, 2:3])
final_loss = bce_loss_1 * 1.2 + bce_loss_2 + bce_loss_3
```</p>
<p>With this change, Our LSTM scored 0.939x, BERT scored 0.944x and GPT2 scored 0.940x.</p>
<h2>Model Architecture</h2>
<p>Thanks to <a href="/bminixhofer">@bminixhofer</a> (<a href="https://www.kaggle.com/bminixhofer/simple-lstm-pytorch-version">Simple LSTM - PyTorch version</a>), <a href="/yuval6967">@yuval6967</a> (<a href="https://www.kaggle.com/yuval6967/toxic-bert-plain-vanila">Toxic BERT plain vanila</a>)</p>
<h3>LSTM</h3>
<ul>
<li>Embeddings: Glove + Fasttext (600d)</li>
<li>Architecture: 2 Layer Bi-lstm cell, Max pooling, Average pooling</li>
<li>Hyper Parameters: Max length 220, Hidden state size 128</li>
<li>We used batch level padding.</li>
</ul>
<h3>Bert (Fine tuning)</h3>
<ul>
<li>Framework: Pytorch, Keras</li>
<li>Model: Bert base uncased</li>
<li>Hyper Parameters: Max length 300, 2 epoch, 0.01 lr warm up, batch 64.</li>
</ul>
<p>Not working:</p>
<ul>
<li>More pretrained BERT</li>
<li>Dynamic layer-wise decay</li>
</ul>
<p>After some hyperparameter tuning, BERT scored 0.94560.</p>
<h3>GPT2 (Fine tuning)</h3>
<ul>
<li>Framework: Pytorch</li>
<li>Hyper Parameters: Max length 260, 2 epoch, 8e-5 lr, batch 32.</li>
</ul>
<h2>Ensemble</h2>
<ul>
<li>Pow Ensemble (square).</li>
</ul>
<p>We selected 4 BERT, 1 LSTM, and 1 GPT2 model based on our validation scores, correlations, and LB scores.
It was a really difficult process.</p>
<h1>What we wanted to do</h1>
<ul>
<li>Stacking</li>
<li>Develop GPT2 forward layers.</li>
<li>Xlnet</li>
<li>Systematic Ensembles.</li>
</ul>
<p>I learned so much from this competition! EDA is always fun.
Thank you for reading. Good luck!</p>
<p>Thanks</p>
<p>Thanks. Good luck, too :)</p>
<p>It's just a public lb, but thanks</p>
<p>Thanks!</p>
<p>1) There was a mistake in making a discussion. Let's take a look. Thanks:)</p>
<p>2) It's a heuristic. I ran a lot of experiments and decided on 1.2 based on the validation score and LB score!</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
Sklearn Logistic Regression Solution!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, </p>
<p>In this kernel, I used the simple sklearn model Logistic Regression and cleaned the data with the sklearn text feature-extraction tool called TfidfVectorizer.</p>
<p>Our goal in this competition: we are asked to build a model that recognizes toxicity and minimizes unintended bias.</p>
<p>Check my Kernel here : <a href="https://www.kaggle.com/elcaiseri/toxicity-bias-logistic-regression-tfidfvectorizer">Toxicity Bias Logistic Regression, TfidfVectorizer
</a></p>
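<p>A minimal sketch of this kind of baseline (the hyper-parameters here are illustrative, not necessarily the kernel's):</p>
<p>```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def fit_tfidf_logreg(train_texts, train_target, test_texts):
    model = make_pipeline(
        TfidfVectorizer(sublinear_tf=True, strip_accents='unicode',
                        analyzer='word', ngram_range=(1, 2), max_features=100000),
        LogisticRegression(C=1.0, solver='lbfgs', max_iter=1000),
    )
    model.fit(train_texts, (train_target >= 0.5).astype(int))
    return model.predict_proba(test_texts)[:, 1]     # toxicity probability
```</p>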
<p>Upvote it if you find it interesting, and feel free to leave comments.</p>
|
Jigsaw Unintended Bias in Toxicity Classification
|
UET: 18th Private LB (with code)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Unintended Bias in Toxicity Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No. It is not the same because of my implementation. I reset the counter to zero when I do <code>optimizer.step()</code>. Please refer to <a href="https://github.com/ngxbac/Kaggle-Jigsaw/blob/master/callbacks.py#L293">this</a>. To make it easy to understand, let's say <code>accumulation_steps=8</code>. </p>
<p>Correct: [0, 8, 16, 24, ....]
Wrong: [7, 14, 21, ...]</p>
<p>As the iteration count increases, the difference gets larger. That is the reason why the learning rate is reduced to 0 at 1.6 epochs in the wrong way. I checked the weights carefully: <code>LR = 0</code> made the weights at epoch 2 equal to the weights at epoch 1.6.</p>
<p>In our experiments with linear warmup, I expect the learning rate to be reduced to 0 after 2 epochs in the “correct” way. However, the learning rate reaches 0 at around 1.6 epochs if I use the wrong way. Is that normal?</p>
<p>.</p>
<p>Thanks to Kaggle and organizer for hosting an interesting competition. </p>
<p>Thanks to my teammates <a href="/a11rand0m">@a11rand0m</a> and <a href="/phongkhacdo">@phongkhacdo</a> for the hard working during this competition.</p>
<p>I am the person in charge of BERT, GPT2 and XLNET, so for the LSTM, sorry if I could not provide information clearly. </p>
<h1>TL;DR</h1>
<p>Our solution is an ensemble of: 8xLSTM + 4xBert base + 2xBert large + 1xGPT2 + 1xXLNet.
Each type of model is trained with different seeds, sequence length, etc,...
The code of BERT/GPT2/XLNET is published in <a href="https://github.com/ngxbac/Kaggle-Jigsaw">here</a></p>
<h1>Preprocessing</h1>
<ul>
<li>LSTM
We use the preprocessing steps at the public kernel. </li>
<li>BERT/GPT2/XLNET
No preprocessing </li>
</ul>
<h1>Features</h1>
<p>We add 11 statistical features and use them for both LSTM and BERT/GPT2/XLNET:</p>
<p>```python
def add_features(df):
    df['total_length'] = df['comment_text'].apply(len)
    df['capitals'] = df['comment_text'].apply(lambda comment: sum(1 for c in comment if c.isupper()))
    df['caps_vs_length'] = df.apply(lambda row: float(row['capitals']) / float(row['total_length']), axis=1)
    df['num_exclamation_marks'] = df['comment_text'].apply(lambda comment: comment.count('!'))
    df['num_question_marks'] = df['comment_text'].apply(lambda comment: comment.count('?'))
    df['num_punctuation'] = df['comment_text'].apply(lambda comment: sum(comment.count(w) for w in '.,;:'))
    df['num_symbols'] = df['comment_text'].apply(lambda comment: sum(comment.count(w) for w in '*&$%'))
    df['num_words'] = df['comment_text'].apply(lambda comment: len(comment.split()))
    df['num_unique_words'] = df['comment_text'].apply(lambda comment: len(set(w for w in comment.split())))
    df['words_vs_unique'] = df['num_unique_words'] / df['num_words']
    df['num_smilies'] = df['comment_text'].apply(lambda comment: sum(comment.count(w) for w in good_emoji))
    return df
```</p>
<h1>Loss function</h1>
<p>This loss function worked better than the weighted one.</p>
<p>```python
class CustomLoss(nn.Module):
    def __init__(self, loss_weight):
        super(CustomLoss, self).__init__()
        self.loss_weight = loss_weight
        self.bin_loss = nn.BCEWithLogitsLoss
        self.aux_loss = nn.BCEWithLogitsLoss

    def forward(self, output_bin, target_bin, weight_bin, output_aux, target_aux):
        bin_loss = self.bin_loss(weight=weight_bin)(output_bin, target_bin)
        aux_loss = self.aux_loss()(output_aux, target_aux)
        return 0.5 * bin_loss + 0.5 * aux_loss
```</p>
<p>Sorry that I haven't cleaned up the unnecessary stuff. </p>
<h1>Models</h1>
<p>For both LSTM and BERT/GPT2/XLNET, we concatenate pooled features with 11 additional features before passing to final FCs. <br>
<a href="https://github.com/ngxbac/Kaggle-Jigsaw/blob/master/models/bert_model.py#L30">Code of BERT model</a> </p>
<h1>Training</h1>
<ul>
<li><p>LSTM
It is done with fastai after 4 epochs. We use bagging with different seeds for ensembling. The single-model score is around 0.936-0.938. Ensembling 8xLSTM gives 0.940 LB. </p></li>
<li><p>BERT </p></li>
</ul>
<p>We noticed that taking the output from the 11th layer of BERT base brings better performance than the 12th layer. So, we trained many models with different seeds and output layers for more diversity. </p>
<p><code>
Optimizer: BertAdam
Warmup: 0.01
n_epochs: 2
</code>
With a 95/5 train/valid split, we can get around <code>0.9424x</code> for BERT base. We tried to train for more epochs, but the result was worse; it seems that 2 epochs are enough. From these observations and the correlation between CV and LB, we decided to train on all data with <code>no validation</code>.
This change gives us <code>0.94285</code> LB. We also tested with previous lower-scoring versions, and there was a correlation when I trained with full data too. </p>
<ul>
<li><code>Bug</code> or magic features 😄
It was the funniest moment of this competition 😄. By mistake, I introduced a bug when implementing <code>Gradient accumulation</code>.
Correct:
<code>
if step % accumulation_steps == 0:
# Do your code here ....
</code></li>
</ul>
<p>Wrong:
<code>
if (step + 1) % accumulation_steps == 0:
# Do your code here ....
</code></p>
<p>However, it surprised me that this bug gave us an improvement from <code>0.94285</code> to <code>0.94314</code> for BERT base, and <code>0.94375</code> for BERT large.
It seems that BERT converges at 1.5 epochs, not at the end of 2 epochs. How lucky I was to get this bug. Haha.</p>
<ul>
<li><p>GPT2
Similar settings to BERT above. We got only <code>0.939xx</code> LB. However, this model does help the ensemble (+0.001). </p></li>
<li><p>XLNET
We did not have the time and resources for this model. It's truly a monster :(
<code>
Data: 70%
BS: 16
Accumulation: 8
Epoch: 0.5
</code>
LB: <code>0.936</code>.
I was disappointed the first time I submitted it. However, it helps: ensembling improves from <code>0.94630</code> to <code>0.94646</code> ⭐️ </p></li>
</ul>
<h1>Inference</h1>
<p>Our solution takes ~5500s for inference. The secret is <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/94779#latest-553355"><code>trimming</code></a> . Many thanks to <a href="/hooong">@hooong</a> . </p>
<p>We apply trimming to the BERT models only since there is no performance decrease (a short sketch follows the timing list below). The inference time of each model is as follows:
* GPT2, XLNET: 15mins/model
* Bert large: 5mins/model
* Bert base: 1-2mins/model
* LSTM: 1min/model</p>
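<p>A minimal sketch of per-batch trimming at inference time (assuming right-padding and 0 as the pad id, which are my assumptions):</p>
<p>```python
import torch

def trim_batch(input_ids, pad_id=0):
    # Keep only the columns up to the longest non-padded sequence in the batch.
    lengths = (input_ids != pad_id).sum(dim=1)
    return input_ids[:, :int(lengths.max().item())]
```</p>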
<h1>What didn't work</h1>
<ul>
<li>I spent a lot of time playing with different loss functions, tricks, etc. that I often use in CV competitions to solve this NLP problem. However, I totally failed.</li>
<li>K-fold and stacking. 5-fold BERT base gives 0.9440 LB. However, we were not successful with stacking. </li>
</ul>
<h1>We can fall down at the second stage</h1>
<p>Since we use all data for training the BERT/GPT2/XLNET models, there is a risk that we overfitted the public LB. We hope the shakedown is acceptable :). </p>
<p>Finally, Good luck to everyone. I can't wait to read the solutions from other top teams. I learnt a lot from this competition. </p>
<p>Thanks for reading.</p>
<p>Yes. It does help for all models (bert, lstm, ...)</p>
<p>Yes. We just use the weighted average. </p>
<p>Trimming during training hurts performance for us too. Trimming for inference gives slightly different results, but the LB score is not changed. I guess the difference is only around 1e-5.</p>
|
KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
|
4th place code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is the repository for our code:</p>
<p><a href="https://github.com/bensolucky/Authors">https://github.com/bensolucky/Authors</a></p>
|
KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
|
Solutions
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Nice solution Hustmonk, congratulations. Only one thing is unclear to me: how do you recognize a nickname? Is it a name that is a subsequence of another name?</p>
<p>Steve, thanks for posting your solution. I had thought about something like fuzzy comparison, but I wanted to use Levenshtein distance. The idea of taking out all spaces is really clever, nice one!</p>
<p>I just checked whether names of one author are subset of names of other authors. In my solution <span>Sing Bing Kang == Bing Kang. Because {Bing} \in {Sing, Bing}</span></p>
<p><span>Potential disadvantage of such solution is that there is no closure, example:</span></p>
<p><span>Author1: James R. Dawson</span></p>
<p><span>Author2: R. Dawson</span></p>
<p><span>Author3: Michael R. Dawson</span></p>
<p><span>Author2 is duplicate of Author1 and Author3, but Author1 is not duplicate of Author3.</span></p>
<p><span>As a result, some authors with just one short name become duplicates of a lot of authors; I excluded an author if he had more than 12 duplicates.</span></p>
<p>I hope that this is not against the rules to talk about the solutions, but I'm really interested in the methodology of winners.</p>
<p>I'm a bit disappointed by my solution because it was very simple; the main idea was as follows:</p>
<p>- Discard all authors without any papers written (that was THE KEY!)</p>
<p>- Normalize the names (convert the 'strange' letters)</p>
<p>- Split the names by spaces and:</p>
<p> o Last element is surname</p>
<p> o All other elements are given names; those of length one are treated as short names and the others as long names</p>
<p>- Choose as duplicates authors with similar names, names are similar if:</p>
<p> o surname is identical</p>
<p> o names of one are 'subset' of the other</p>
<p> o Long names are compared with short names using just first letter</p>
<p></p>
<p>This is the core of the solution, and it was almost sufficient to reach the TOP 10.</p>
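<p>A minimal sketch of this duplicate rule (illustrated in Python here; names are assumed to be already normalized):</p>
<pre><code># Two authors are considered duplicates when the surname is identical and the given
# names of one are a "subset" of the other's, matching initials on the first letter.
def split_name(full_name):
    parts = full_name.lower().replace(".", " ").split()
    return parts[-1], parts[:-1]          # surname, given names

def names_match(a, b):
    # a long name and a short name (initial) are compared on the first letter only
    if len(a) == 1 or len(b) == 1:
        return a[0] == b[0]
    return a == b

def is_duplicate(author1, author2):
    surname1, given1 = split_name(author1)
    surname2, given2 = split_name(author2)
    if surname1 != surname2:
        return False
    small, big = (given1, given2) if len(given1) <= len(given2) else (given2, given1)
    return all(any(names_match(g, h) for h in big) for g in small)

assert is_duplicate("Sing Bing Kang", "Bing Kang")
assert is_duplicate("James R. Dawson", "R. Dawson")
assert not is_duplicate("James R. Dawson", "Michael R. Dawson")
</code></pre>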
<p>I spent a lot of time on analyzing the keywords and clustering them. I thought it was working pretty well when I looked at the clusters, but the best result I achieved was when I took all potential duplicates chosen by my algorithm, regardless of keyword similarity.</p>
<p>What about Your solutions? </p>
<p></p>
<p>By normalization I meant removing diacritical marks. It is a language-dependent thing; in R the easiest way is to use the iconv function.</p>
|
KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
|
Source Code of SmallData
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Dear all,</p>
<p></p>
<p>We have made the code repository on GitHub public now that track 1 has finished. Here is the link: <a href="https://github.com/remenberl/KDDCup2013">https://github.com/remenberl/KDDCup2013</a>.</p>
<p>As we kept using this repository throughout the competition period, one can see all the commit-related information. We will submit a paper introducing our method later.</p>
<p></p>
<p>Best,</p>
<p>SmallData Team</p>
<p></p>
|
KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
|
Team Algoithm's papers and slides
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Dear all,</p>
<p>Our papers and slides are available here:</p>
<p>http://www.csie.ntu.edu.tw/~cjlin/papers/kddcup2013/</p>
<p>Any comment is very welcome. Thanks.</p>
<p>Team Algorithm from National Taiwan University</p>
|
KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
|
my source code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: KDD Cup 2013 - Author Disambiguation Challenge (Track 2)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>https://github.com/hustmonk/kddcup</p>
<p>it costs about 30 minutes.</p>
|
KDD Cup 2013 - Author-Paper Identification Challenge (Track 1)
|
Team Algorithm's papers and slides
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: KDD Cup 2013 - Author-Paper Identification Challenge (Track 1)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Dear all,</p>
<p>Our papers and slides are available here:</p>
<p>http://www.csie.ntu.edu.tw/~cjlin/papers/kddcup2013/</p>
<p>Any comment is very welcome. Thanks.</p>
<p>Team Algorithm from National Taiwan University</p>
|
KDD Cup 2014 - Predicting Excitement at DonorsChoose.org
|
Congrats to STRAYA, NCCU and adamaconguli!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: KDD Cup 2014 - Predicting Excitement at DonorsChoose.org
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>Hey dkay, just to clarify, when you say time decay do you mean some sort of feature/weights in your models or something like adjusting the output of the model (probabilities) with a decay factor? We have worked with the latter.</p>
</blockquote>
<p>We tried both.</p>
<p>1. We trained our GBMs on responses censored with time (0 if the project has not been funded yet at a given date and 1 if it was fully funded and exciting).</p>
<p>2. Applied linear decay</p>
<p>We chose to trust option 1 more, as it performed almost as well on the public leaderboard, so we discarded all our submissions with linear decays. Unfortunately! </p>
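<p>For illustration, the linear decay on the predictions could be sketched like this (the window and the numbers are purely illustrative, not our exact values):</p>
<pre><code>import numpy as np

# Down-weight predicted probabilities of projects posted close to the test cut-off,
# since they have had less time to become fully funded and "exciting".
days_before_cutoff = np.array([5, 30, 90, 120])    # per project, hypothetical
raw_preds = np.array([0.30, 0.25, 0.20, 0.15])     # model probabilities, hypothetical
decay = np.clip(days_before_cutoff / 60.0, 0.0, 1.0)   # 60-day window is illustrative
adjusted_preds = raw_preds * decay
</code></pre>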
<p>Congrats to STRAYA, NCCU and adamaconguli! It was a fun but challenging competition. <br>Thanks to the competition host DonorsChoose.org for the dataset.</p>
<p>Our best submission is a mixture of GBMs that took into account the time component and had a lot of historical features to describe what teachers did in the past. <br>It looks like we should have been more aggressive with the time decay by adding additional decay, but we didn't choose the right submission.</p>
<p>One very good feature was the teachers' donation history. Not what they received but what they gave. Teachers who donated to exciting projects have a higher likelihood of posting exciting projects. Has anybody else observed this?</p>
<p>Xavier</p>
<p>We made our code and the writeup to our approach available here</p>
<p>http://www.datarobot.com/blog/datarobot-the-2014-kdd-cup/</p>
<p>Like Yoon, Peng and Black Magic, we:</p>
<p>- used GBMs (R and Python), <br>- improved our models thanks to history features and text mining, <br>- and came up with a model strategy to account for the time bias due to the Mid May cut off in the test set.</p>
|
Kaggle - LLM Science Exam
|
0.894 without training any model (+ 56th solution)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Update: I got 0.896 without training by replacing Xwin with Platypus instruct 70B model in my 0.892 submission! I wonder if one can get to 0.9 without training.</p>
<p>I managed to run a deberta model alongside a 70B model a few weeks ago, but now it times out.</p>
<p>Yes, I think the RAG is the reason why it works well. It is the same as the top team's BTW, except I use paragraphs of length at least 150 whereas they keep all of them.</p>
<p>I used fine-tuned debertas with the same RAG (actually 3 RAGs, then ensemble) and got 0.902 private. Maybe your deberta is better than mine though.</p>
<p>I can make my RAG data sources public so that you can replace my 70B model with yours and experiment. I could do the experiment myself, but I must say that I am less motivated to try new stuff here after they validated the use of non-commercial models despite their rules forbidding it (see my rant post).</p>
<p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> I shared the code and data sources if you want to try your models with my RAG. The RAG is in the prepared_data file created early in the notebook.</p>
<p>Thank you! Maybe it was a bit too effective and discouraged me from finetuning models.</p>
<p>Thanks! Congrats again on your solution, there is so much things you did differently from others that we will all learn quite a bit from it.</p>
<p>I quickly had this running on a local machine with a stellar score on the train 200 (I didn't know that this score was not a good proxy for the LB score). But it took me some time to streamline it so that it runs on Kaggle hardware. I was afraid I'd hit a space limit, but now I see that the top team used a similar retrieval, at a much larger scale!</p>
<p>Thank you Chris. I agree, more time would have been great. My bad to not have joined earlier.</p>
<p>Very interesting. I should try and compare with what I have to see how much is lost (if any) by this quantization.</p>
<p>I had a significant degradation when using 16 bits versions of the embeddings hence didn't pursue quantization further.</p>
<p>Thanks.</p>
<blockquote>
<p>It couldn't fit in the ensemble</p>
</blockquote>
<p>Indeed. I was hoping to speed it up a bit and allow for an ensemble. But it seems that each time I sped it up, Kaggle got slower. The code of my best solution would have run under 8 hours a few weeks ago, which would have been enough to add a few Deberta models.</p>
<p>Thanks Giba!</p>
<p>I tried index quantization when I was using a two stage method, but I don't see how I can use it with 35 million chunks on Kaggle notebooks. Did your team do that?</p>
<p>Lots of people discuss LLM fine tuning, but prompt fine tuning is another way to improve LLMs. A great example was provided early by <a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a> in <a href="https://www.kaggle.com/code/steubk/the-art-of-prompt-engineering" target="_blank">The Art of Prompt Engineering</a>. This and the 70B model inference code by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> made me spend most of my competition time on how to use pretrained models as effectively as possible. To my surprise I got as far as 0.89 on the public LB and 0.892 private only using pretrained models. I shared one submitted notebook with 0.89 public / 0.892 private <a href="https://www.kaggle.com/code/cpmpml/llm-008-orig" target="_blank">here</a>. Another one that has a better private LB at 0.894 is <a href="https://www.kaggle.com/cpmpml/llm-008" target="_blank">here</a>.</p>
<p>Not surprisingly my solution is a RAG method with these steps:</p>
<ul>
<li>Indexing wikipedia chunks</li>
<li>Retrieving relevant chunks from each sample</li>
<li>Predicting the top 3 answers</li>
</ul>
<p>However it differs from publicly shared ones in how each individual step is performed. Let's review these one by one.</p>
<h1>Indexing Wikipedia</h1>
<p>When I joined, RAG was already a hot topic, with a public notebook disclosing how to retrieve sentences in a two-step way: retrieve relevant articles using their titles and first sentences, then retrieve relevant sentences within these articles. I quickly found that retrieving full paragraphs (separated by <code>'\n'</code>) was much better than retrieving sentences. I then wondered: why do we need to do it in two steps? Why not directly retrieve relevant paragraphs? Indeed, the article title plus first sentence may not correctly represent all the content of the article. </p>
<p>The issue was that the number of paragraphs is huge, even when discarding short ones that are not really useful. I settled for all paragraphs with at least 150 characters, which makes about 35 million of them. Using an embedding model of size 1024 (such as bge-large-en-v1.5 or e5-large-v2), this represents about 140 GB, way too large for a Faiss index we could load in memory.</p>
<h1>Retrieval</h1>
<p>I went on nevertheless and coded a cosine similarity retrieval between a query embedding and these 140 GB that runs in about 25 minutes on Kaggle notebooks. Code is in the notebook I shared. I previously computed offline the paragraph embeddings and saved them in separate files, one file per million paragraphs. One file uses 4GB for the largest embedding models. Files are loaded in memory one by one and released afterwards.</p>
<p>For the query embedding for each sample I first compute the embedding for the prompt. I also compute one embedding per answer, then take their average. This yields an answers embedding which does not depend on the order in which answers are listed as options. I then average the prompt embedding and the answers embedding. This worked better than other ways I tried.</p>
<p>I then use faiss for retrieval but I don't create an index. I directly used the <code>faiss.knn_gpu(res, key_embeddings, value_embeddings, k)</code> method.</p>
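<p>A minimal sketch of that chunked search (simplified; the file names are assumptions, and random query vectors stand in for the real ones):</p>
<pre><code>import numpy as np
import faiss

# In the real pipeline each .npy file holds ~1M paragraph embeddings saved offline,
# and query_embeddings holds one vector per test question.
embedding_files = ["emb_000.npy", "emb_001.npy"]           # assumed file names
query_embeddings = np.random.rand(200, 1024).astype(np.float32)
faiss.normalize_L2(query_embeddings)

res = faiss.StandardGpuResources()
k = 8
all_dists, all_ids = [], []
offset = 0
for path in embedding_files:
    chunk = np.load(path).astype(np.float32)               # load one chunk, then release it
    faiss.normalize_L2(chunk)                              # cosine ranking == L2 ranking on unit vectors
    D, I = faiss.knn_gpu(res, query_embeddings, chunk, k)  # brute-force search, no index built
    all_dists.append(D)
    all_ids.append(I + offset)                             # shift chunk-local ids to global ids
    offset += chunk.shape[0]
    del chunk
# merge and keep the global top-k per query (smaller L2 distance = higher cosine similarity)
dists = np.concatenate(all_dists, axis=1)
ids = np.concatenate(all_ids, axis=1)
order = np.argsort(dists, axis=1)[:, :k]
top_ids = np.take_along_axis(ids, order, axis=1)
</code></pre>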
<p>In the notebook shared above I retrieve the top 8 paragraphs then concatenate them with a newline, so that they appear as a set of paragraphs.</p>
<h1>Answer prediction</h1>
<p>I use the Xwin model I shared <a href="https://www.kaggle.com/datasets/cpmpml/xwin-lm-70b-v0-1-part1" target="_blank">here</a> and <a href="https://www.kaggle.com/datasets/cpmpml/xwin-lm-70b-v0-1-part2" target="_blank">here</a> with the following prompt. I had better results with the Platypus model shared by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>, but that model has a non-commercial license and hence is not compliant with the competition rules IMHO. I switched for safety.</p>
<p>The prompt I used is a fine tuned version of the one <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> used in his code. This fine tuning improves the public LB significantly. </p>
<pre><code>Find below an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Your task is to evaluate the correctness of a proposed answer to a question. Respond yes if the answer is correct, respond no if it is not correct. Some background Wikipedia articles are available, even if they might not always be relevant.
Background:
CONTEXT
Question: PROMPT
Proposed answer: ANSWER
</code></pre>
<p>I then use the logit for the 'yes' token as in <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>'s code.</p>
<p>I tried other logits and also tried differences, like the logit for yes minus the logit for no, but nothing worked better than the yes logit on both CV and LB. I just found out that yes minus no scores a bit better on the private LB at 0.894; see the second notebook I shared above.</p>
<p>I see I made a typo in the code, and the result will probably be a bit better with the typo corrected to mimic the above prompt.</p>
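<p>For illustration, the yes-token scoring can be sketched as follows (simplified, not the exact notebook code; <code>model</code>, <code>tokenizer</code> and <code>prompt</code> are assumed to be the causal LM, its tokenizer and the filled-in template above):</p>
<pre><code>import torch

yes_id = tokenizer(" yes", add_special_tokens=False).input_ids[-1]
no_id = tokenizer(" no", add_special_tokens=False).input_ids[-1]

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]    # logits of the token after the prompt
score_yes = next_token_logits[yes_id].item()             # score used to rank the 5 answers
score_yes_minus_no = (next_token_logits[yes_id] - next_token_logits[no_id]).item()
</code></pre>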
<h1>Some training to 0.908</h1>
<p>Picking one pretrained logit may not be the best text classifier. During the last day of the competition I trained a logistic regression that takes as input the last hidden state of the model. It scores 0.894 on the public LB and 0.908 on the private LB. This is my best submission. </p>
<p>The improvement seems small, but I started from a lower point as my 0.89 submission consistently failed when I tried it in the last week of the competition. I had to use a lower MAX_LENGTH of 1024 to make it run. And it took 8h58m to run… I therefore think that training the head yields close to a 0.02 improvement. </p>
<p>I share the inference code <a href="https://www.kaggle.com/code/cpmpml/llm-021" target="_blank">here</a>.</p>
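<p>A rough sketch of that head, assuming the last hidden states have already been extracted for each (question, answer) pair (random features stand in for the real ones):</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

hidden = 8192                                        # hidden size of the 70B model (illustrative)
train_feats = np.random.rand(500 * 5, hidden)        # one vector per (question, answer) pair
train_labels = np.random.randint(0, 2, size=500 * 5) # 1 if that answer is the correct one
test_feats = np.random.rand(200 * 5, hidden)

clf = LogisticRegression(max_iter=1000)
clf.fit(train_feats, train_labels)
pair_scores = clf.predict_proba(test_feats)[:, 1]                # P(answer is correct)
top3 = np.argsort(-pair_scores.reshape(-1, 5), axis=1)[:, :3]    # MAP@3 order per question
</code></pre>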
<h1>Misc</h1>
<p>I worked most of the time with embeddings computed with the <code>gte-small</code> model as it was the best for me on CV. I switched to larger ones like <code>bge-large-en-v1.5</code> and <code>e5-large-v2</code> during the last few days. My best sub uses <code>e5-large-v2</code>.</p>
<p>I wanted to look at TF-IDF and BM25 retrieval but did not find time for it.</p>
<p>The 9 hour limit and the memory were a real pain, and I was always at the border. My best submission ran for 8h58m. And to stay within memory I had to trim the max token length down to 1024. What is weird is that versions with 3000 tokens ran fine two weeks earlier; they would have moved me up by something like 0.005.</p>
<p>I also worked on a Deberta solution where I replaced the 70B model above with Deberta. I tried several variants and the best private score is 0.902. It used 3 different embedding models: <code>gte-small</code>, <code>bge-large-en-v1.5</code> and <code>e5-large-v2</code>. For each of these I ran the above pipeline with Deberta replacing the 70B model, then averaged the Deberta logits over the three runs. This yields 0.902 on the private LB (and 0.895 public). This notebook has nearly 400 GB of data sources attached to it! I used FP16 and Pytorch DDP to speed up inference 4x on T4.</p>
<p><strong>Edit.</strong> I also tried a middle ground, i.e. 7B and 13B models. I fine-tuned some, but each time my fine-tuned one was a bit worse than what I started from. Maybe I didn't use the right training data? Looking forward to the top teams' writeups to see how they fine-tuned their models.</p>
|
Kaggle - LLM Science Exam
|
10th(11th) Place Solution: This is my first solo gold medal !!!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Changing it in all RAGs improved public LB score by 0.003, but it seems to have worsened private LB score by 0.001.<br>
I finally decided to combine RAGs with and without clean_prompt_answer.</p>
<p>When I was using 4 different RAGs, average performed better.<br>
In the final submission, max performed better locally when I was using 10 different RAGs; the validation data was not very reliable, so maybe averaging might perform better for the private LB score (I haven't tried it).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2522289%2F2b8138bddf09cb3168d9e32079b02e84%2Fkaggle-llm.drawio.png?generation=1697103353661989&alt=media" alt=""></p>
<p>It was the help of the Kaggle community that enabled me to get this far on my own. I would like to thank all the participants who made their data, code and ideas available to the public and the competition hosts.</p>
<p><a href="https://www.kaggle.com/code/kami634/11th-solution/notebook" target="_blank">final submission code</a></p>
<h2>Overview</h2>
<ul>
<li>multiple context retrieval from 2+2 different wikipedia data</li>
<li>multiple choice by single DeBERTa model</li>
<li>max probability ensemble</li>
<li>tf-idf post process</li>
</ul>
<h2>Retrieval</h2>
<p>There are four main types of Wiki data</p>
<ul>
<li>Wikipedia database backup dump (extracted with <a href="https://github.com/attardi/wikiextractor" target="_blank">wikiextractor</a>)<ul>
<li>Cons<ul>
<li>All articles are there, but bullet lists and some sections are missing.</li>
<li>Some formulas are not extracted, e.g. "formula_1", "formula_13".</li></ul></li></ul></li>
<li>Wikipedia Cirrus dumps (extracted with wikiextractor)<ul>
<li>Con<ul>
<li>Cannot split into sections</li></ul></li></ul></li>
<li><a href="https://www.kaggle.com/datasets/mbanaei/stem-wiki-cohere-no-emb" target="_blank">270k Data Cohere</a></li>
<li><a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270k Data parsed</a></li>
</ul>
<h3>About dumps data</h3>
<ul>
<li>Split and embed articles 200~250 characters at a time, quantise the index with ivfpq in faiss to get it into memory and search for articles</li>
<li>After retrieving 10 articles, split the text with sliding window, vectorise it and search with faiss's indexflat to create a context<ul>
<li>The answer should be contained within a few sentences</li>
<li>A single sentence is too short.</li>
<li>However, if you split the text into several sentences, it may be split at the centre of the sentence that is the answer → split with sliding window</li></ul></li>
</ul>
<h3>About 270K data</h3>
<p>The following two types of retrieval are used</p>
<ul>
<li>tf-idf</li>
<li>Search with faiss indexflat as embedding</li>
</ul>
<p>Since only the question embeddings need to be computed during inference, even relatively large models (e.g. BAAI/bge-large-en-v1.5) can be used, but this did not make much difference.</p>
<h2>Model</h2>
<ul>
<li>Same DeBERTa large single model used for all inference except post-processing</li>
<li>Inference used a context twice as long as the training one</li>
</ul>
<h2>Training data (especially Exam data)</h2>
<ul>
<li>60K data (data generated by GPT-3.5 based on wikipedia + α)</li>
</ul>
<h3>Validation Strategy</h3>
<ul>
<li>Hold out</li>
<li>200 train.csv + 1000 additional data (science exam from wikipedia made by GPT-3.5)</li>
</ul>
<p>No high correlation, but informative. However, the last one was not very helpful as it was a mixture of questions with wrong answers or more than one correct answer.</p>
<h2>Ensemble</h2>
<ul>
<li>Take the max of each prediction probability, and predict from largest to smallest (a minimal sketch follows this list)<ul>
<li>If there is an answer in the context, the prediction should have high confidence</li>
<li>If there is no answer in the context, it should have low confidence</li></ul></li>
<li>Robust and less likely to overfit, yet better performance than simple blending.</li>
</ul>
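<p>A minimal sketch of this max ensemble (illustrative; random numbers stand in for the per-context predictions of the model):</p>
<pre><code>import numpy as np

# probs stacks the per-RAG/context predictions of the same model: (n_predictions, n_questions, 5)
probs = np.random.rand(10, 200, 5)
best = probs.max(axis=0)                  # keep the most confident prediction per option
top3 = np.argsort(-best, axis=1)[:, :3]   # MAP@3 answer order per question
</code></pre>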
<h2>Post Process</h2>
<ul>
<li>Modification of predictions by tf-idf with (4,7)-gram for specific conditions.<ul>
<li>Post-processing to correct what is obviously wrong</li></ul></li>
</ul>
|
Kaggle - LLM Science Exam
|
12th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It takes 1 hour for all 4 contexts. The key is to use the GPU to retrieve the contexts.</p>
<pre><code>from cuml.feature_extraction.text import TfidfVectorizer
</code></pre>
<p>and</p>
<pre><code> = read_index()
= faiss.StandardGpuResources()
= faiss.index_cpu_to_gpu(res, , sentence_index)
</code></pre>
<p>First of all, we are grateful to the organizers of this amazing competition, which was a great learning opportunity for us. We also appreciate all the kagglers who shared their ideas and code during the competition. Thanks to my team members <a href="https://www.kaggle.com/zzy990106" target="_blank">@zzy990106</a>, <a href="https://www.kaggle.com/berserker408" target="_blank">@berserker408</a>, <a href="https://www.kaggle.com/fankaixie" target="_blank">@fankaixie</a> for their hard work.</p>
<p>In short, our solution consists of diverse contexts and models.</p>
<h1>RAG</h1>
<h2>Context #1:</h2>
<p>Built an English Wikipedia faiss index and combined the top 5 results of three recall paths as the context for each question:</p>
<ul>
<li>bge prompt + answer</li>
<li>gte prompt + answer</li>
<li>gte prompt </li>
</ul>
<h2>Context #2:</h2>
<p>Sentence based openbook context shared by JJ ( <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> )</p>
<h2>Context #3:</h2>
<p>270k dataset shared by MB (@mbanaei), 270k data cohere and parsed.</p>
<h2>Context #4:</h2>
<p>Built a 480k dataset using the notebook MB used to build the 270k dataset, in order to cover more articles.<br>
The 270k dataset did not cover all of the 154 articles, so we adjusted the clustering parameters to cover all the articles with 480k.</p>
<h1>Models</h1>
<ul>
<li>Deberta V3 large multiple-choice classification<br>
Shared by <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a>.</li>
<li>Deberta V3 large one-shot: <br>
Concatenate all answers together, so there is only one inference pass for each question.<br>
Model architecture reference <a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347433" target="_blank">here</a></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F1b6b3d2fbb7551da6de6fe45f3f00f94%2FScreenshot%202022-08-24%20at%2010.40.29%20AM.png?generation=1661308851093353&alt=media" alt=""></p>
<ul>
<li>Llama2 7B<br>
We trained Llama2 7B with LoRA but were not able to optimize it very well; it runs inference 5 times for a single question and takes 3 hours with 512 sequence length. The naive implementation gave us a 0.003 boost in the ensemble.</li>
</ul>
<p>We trained multiple checkpoints of Deberta; in our final solution we used 8 checkpoints that take different contexts. We also observed that training with better context did not improve the model, so we selected the checkpoints trained with the sentence-based context.</p>
|
Kaggle - LLM Science Exam
|
13th place solution: Ensemble context ensemble model
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/corochann" target="_blank">@corochann</a> I tested "bge-reranker-large" compared with TF-IDF for reranking, but I found that TF-IDF was a lot better. I don't remember the exact numbers 😅.</p>
<p><a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> of course. Its like if model1 have confidence 99.5% on contextA@1 and have confidence 99.4% on contextB@1<br>
its will select answer from contextA.<br>
.@1 mean most confident choice</p>
<p><a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> i tries OPQ64,IVF65536_HNSW32,PQ64 but accuracy is low.</p>
<p>First of all, thank you to Kaggle for organizing this competition and to my teammates.<a href="https://www.kaggle.com/natnitarach" target="_blank">@natnitarach</a> <a href="https://www.kaggle.com/pongtsu" target="_blank">@pongtsu</a> <a href="https://www.kaggle.com/kunato" target="_blank">@kunato</a> <a href="https://www.kaggle.com/yoyoismee" target="_blank">@yoyoismee</a> 🔥🔥🔥🔥🔥</p>
<p>We have been participating in this competition since its first week. We dropped from the top 20 down to nowhere about 3 times in this competition but always managed to get back up. Each time we saw our ranking fall from the top 20 to nowhere it was frightening (it's like riding a roller coaster LOL 😅), but at the end of the competition we were able to get the gold medal 🥇🎉. <br>
Our solution is an ensemble of 2 Wikipedia versions + the 270k dataset, and an ensemble of DebertaV3 large models by max probability.<br>
I will split this into 2 sections, retrieval and modeling. Retrieval had the biggest impact on our score.</p>
<p><strong>Retrieval</strong><br>
In the Retrieval phase, we use two sources of Wikipedia embeddings, utilizing "bge-small-en" to create a Faiss IndexFlatL2 index.</p>
<ol>
<li>Wikipedia cohere 35M -> 35M vector</li>
<li><a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">Wikipedia 2023</a> -> split by character chunk = 1000 -> 21M vector</li>
</ol>
<p>and the <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270k dataset</a>.<br>
In total, we had 35M + 21M = 56M vectors to search over, which is huge! We use a technique that splits the big index into small indexes and merges the results at the end; we call it "Faiss Batch" (each Wikipedia source has around 6 indexes, and each index is 10GB in size). So the retrieval part alone totals 100GB+.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5283167%2F5dc1440b9f7f92e5da231ea771ca60fb%2FRetrivalPart.png?generation=1696997560116021&alt=media" alt=""><br>
For inference, as you can see in the image above, the steps were:</p>
<ol>
<li>For the Wikipedia datasets, use a faiss query search to get the top ~100 articles (6 indexes * 15 neighbors)</li>
<li>Use TF-IDF for reranking; from experiments we found that TF-IDF reranking scores better than bge-reranker (see the sketch after this list)</li>
</ol>
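<p>A minimal sketch of the TF-IDF reranking in step 2 (illustrative; the texts, n-gram range and top-5 cut are placeholders):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# candidates: the ~100 article texts returned by the faiss search for one question;
# query: the question text concatenated with its five answer options.
candidates = ["article text one ...", "article text two ...", "article text three ..."]
query = "question text plus the five answer options"

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
doc_matrix = vectorizer.fit_transform(candidates)
query_vec = vectorizer.transform([query])
scores = cosine_similarity(query_vec, doc_matrix).ravel()
reranked = [candidates[i] for i in scores.argsort()[::-1][:5]]   # keep the top passages as context
</code></pre>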
<p>Example code of what we call Faiss Batch</p>
<pre><code>import ctypes
import gc
import os

import faiss
import numpy as np
import torch
from datasets import Dataset
from faiss import read_index

# NOTE: some names and values below were lost when this post was extracted and are
# reasonable reconstructions (marked "assumed").
libc = ctypes.CDLL("libc.so.6")                     # assumed: needed for malloc_trim below
test = Dataset.from_pandas(test)                    # assumed: test was a pandas DataFrame
dir_path = "/kaggle/input/wiki-faiss-indexes/"      # assumed path to the index shards
index_list = sorted(os.listdir(dir_path))
k = 15
total = 0
distance_list = []
indices_list = []
res = faiss.StandardGpuResources()
for indexBatch in index_list:
    index_Batch = dir_path + indexBatch
    print(f"loading {index_Batch}")
    index1 = read_index(index_Batch)
    index1 = faiss.index_cpu_to_gpu(res, 0, index1)
    # query_vector: embeddings of the test questions, computed earlier
    distances1, indices1 = index1.search(query_vector, k)
    # shift the shard-local ids by the number of vectors seen so far
    updated_indices2 = [[i + total for i in inner_list] for inner_list in indices1]
    total += index1.ntotal
    print(total)
    distance_list.append(distances1)
    indices_list.append(updated_indices2)
    del index1
    _ = gc.collect()
    libc.malloc_trim(0)                             # release freed memory back to the OS
    torch.cuda.empty_cache()
concatenated_indices = np.concatenate(indices_list, axis=1)
concatenated_distances = np.concatenate(distance_list, axis=1)
</code></pre>
<p>what a messy code😂<br>
For the 270k dataset we follow the <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595" target="_blank">discussion</a> that uses TF-IDF. </p>
<p>At the end we have 3 contexts, and each model predicts 3 times.</p>
<p><strong>Model</strong><br>
In the Modeling phase, we use DebertaV3 large and ensemble it.<br>
We use the same pipeline as <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">Chris</a>, unfreeze the embeddings and use various token lengths from 512-768 with the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k dataset</a> + a dataset of around 5k questions that we generated with ChatGPT-3.5.</p>
<p><strong>What Didn't Work</strong></p>
<ul>
<li>Attempts to compress the index (IVF, PQ, etc.) resulted in low accuracy.</li>
<li>We tried combining Platypus2-70B-instruct with our retrieval method, but it proved impractical due to the extended runtime, exceeding 9 hours during submission.</li>
<li>The application of PEFT on DeBERTaV3 did not yield the desired results.</li>
</ul>
<p><strong>To improve</strong></p>
<ul>
<li>Obtain better Wikipedia text sources.</li>
<li>Enhance the performance of our models.</li>
</ul>
<p>Github Code<br>
<a href="https://github.com/nat-nischw/kaggel-llm-science-exam-2023" target="_blank">https://github.com/nat-nischw/kaggel-llm-science-exam-2023</a></p>
<p>Thank you <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a>, your dataset saved a lot of my time too. I already tried something like IVF, PQ using <a href="https://github.com/criteo/autofaiss" target="_blank">https://github.com/criteo/autofaiss</a>, which can tune the hyperparameters itself. So I don't know what nprobe I used; this library optimized the hyperparameters for me 😀.</p>
|
Kaggle - LLM Science Exam
|
13th(14th) place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for sharing your knowledge, I couldn't have got solo gold without your help, and I always learn from you in other competitions. 😃</p>
<blockquote>
<p>How much did your TTA improve CV and LB score?</p>
</blockquote>
<p>This approach improved CV by +0.010, Public by +0.007, Private by +0.005.</p>
<blockquote>
<p>That is interesting that you train with 256 and infer 768. I wonder why this helps. Do you have any ideas?</p>
</blockquote>
<p>I honestly don't know myself, but I have seen an improvement in my score in both CV and LB. <br>
In hindsight, maybe the reduced noise in the context had some positive effect. With max_length 256 for the context, we trained with 2 wiki chunk contexts. The 2 most similar chunks were more likely to be correct (the 3rd~5th contexts were often wrong, i.e. noise), so the model was perhaps trained to give more importance to the context. <br>
(Sorry for poor English)</p>
<p>Yes. I use only <code>AutoModelForMultipleChoice</code>. I had the same thought as you, and my results match yours: <code>OpenAssistant/reward-model-deberta-v3-large-v2</code> is the best (0.875), <code>20231003104616_gtesmall_l90s3_maxlen256_bs2_microsoft</code> is second best (0.873), and <code>deepset/deberta-v3-large-squad2</code> is the worst (0.866).</p>
<blockquote>
<p>OpenAssistant/reward-model-deberta-v3-large-v2 - this model was really good, even without finetuning it gets about 0.84 on a validation set.</p>
</blockquote>
<p>I didn't try it without finetuning, but that's a surprising result 😮</p>
<p>I didn't try gradient checkpointing because I didn't know about it 😭 I tried LoRA, but got a lower or equal score and the training time was so long that I finally used max_length = 256. Thank you for sharing your knowledge! I'll try it in the next competition.</p>
<p>First of all, thanks to competition organizers for hosting this competition. <br>
I also like to appreciate <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>, <a href="https://www.kaggle.com/leonidkulyk" target="_blank">@leonidkulyk</a>, <a href="https://www.kaggle.com/mozattt" target="_blank">@mozattt</a>, <a href="https://www.kaggle.com/nlztrk" target="_blank">@nlztrk</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a> for sharing useful dataset, <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> for sharing retrieval approach, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for summary these dataset and approach in <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">this discussion</a>. Without your work, we could not have achieved these results. I learned a lot from you!</p>
<p>I started from <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a>'s <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">notebook</a>. I made four changes to it: <b>dataset, retrieval, models, ensemble</b>.</p>
<h1>1. dataset</h1>
<p>Because some of the data (mainly numbers?) were missing as discussed <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913" target="_blank">here</a>, I changed wiki dump from <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">https://www.kaggle.com/datasets/jjinho/wikipedia-20230701</a> to cirrussearch wiki dump.</p>
<h1>2. retrieval</h1>
<p>I tried to improve search accuracy.</p>
<h2>2-1. dataset for evaluation</h2>
<p>I created 250 questions with ChatGPT to evaluate retrieval. To run a lot of experiments, only data starting with "a" was used in the evaluation, not the entire Wikipedia data. Finally, I achieved <code>0.94@recall1</code>, <code>1.0@recall30</code> on this dataset (questions from the whole wiki dataset: about <code>0.85@recall1</code>).</p>
<h2>2-2. text processing</h2>
<p>For more accurate retrieval, I split the Wikipedia text. Specifically, the text is read from the beginning, and once a chunk reaches 90 words or more it is saved, with the last three sentences kept as the start of the next chunk. </p>
<p>The pseudo code is as follows:</p>
<pre><code>def count_words(text_processed):
    return len(text_processed.split())

def leave_last_sentence(text, n):
    # keep the last n sentences as the start of the next chunk
    ret = text.split(".")[-n:]
    return ".".join(ret)

texts = load_wiki()
texts_processed = []
text_processed = ""
length = 90   # word threshold; (length, window) = (90, 3) per the description above
window = 3
for text in texts:
    text_split_period = text.split(".")
    for sentence in text_split_period:
        text_processed += sentence
        if count_words(text_processed) > length:
            texts_processed.append(text_processed)
            text_processed = leave_last_sentence(text_processed, window)
</code></pre>
<p>I tried <code>(length, window) = (60, 2), (75, 2), (90, 2), (90, 3), (90, 4), (120, 4), (150, 6)</code> and choose the best LB score.</p>
<h2>2-3. faiss</h2>
<p>I use these parameters: <code>{"nlists": 1, "M": 64, "nbits": 8}</code></p>
<h2>2-4. retrieval model</h2>
<p>I use <code>gte-base</code> and <code>e5-base</code>. I tried {<code>gte</code>, <code>bge</code>, <code>e5</code>}_{<code>small</code>, <code>base</code>, <code>large</code>} and chose the best. (In the beginning, I selected the model by looking at the search accuracy on the only-"a" subset. However, I realized that this retrieval accuracy was not correlated with LB, so in the end I checked by looking at LB 😭.)</p>
<h1>3. model</h1>
<p>Almost the same as the baseline notebook. <br>
I set <code>max_length</code> to 256 for training and 768 for inference. I wanted to train with <code>max_length</code> 768, but I did not have enough memory. 256 was the best when I trained models with 256, 384, and 512 <code>max_length</code> and then predicted with 768 <code>max_length</code>.</p>
<p>training parameter is below:</p>
<pre><code># NOTE: the concrete hyperparameter values were stripped when this post was extracted;
# only the argument names the author listed are kept (boolean flags assumed enabled).
training_args = TrainingArguments(
    warmup_ratio=...,
    learning_rate=...,
    per_device_train_batch_size=...,
    per_device_eval_batch_size=...,
    num_train_epochs=config.epochs,
    report_to=...,
    optim=...,
    overwrite_output_dir=True,
    fp16=True,
    gradient_accumulation_steps=...,
    load_best_model_at_end=True,
    metric_for_best_model=...,
    lr_scheduler_type=...,
    weight_decay=...,
    save_total_limit=...,
)
</code></pre>
<p>I ensembled 4 models, combinations of the following elements:</p>
<ul>
<li><p>models</p>
<ul>
<li><code>OpenAssistant/reward-model-deberta-v3-large-v2</code></li>
<li><code>deepset/deberta-v3-large-squad2</code></li>
<li><code>microsoft/deberta-v3-large</code></li></ul></li>
<li><p>retrieval models:</p>
<ul>
<li><code>gte-base</code></li>
<li><code>e5-base</code></li></ul></li>
<li><p>dataset:</p>
<ul>
<li>all</li>
<li>without 3, 7, 8, 9 : worse score compared to the other datasets.</li>
<li>without 10 : better score compared to the other datasets.</li></ul>
<p>The numbers refer to the sources in <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">this dataset</a>.</p>
</ul>
<h1>4. ensemble</h1>
<h2>4-1. TTA</h2>
<p>I use the lower-ranked retrieval results for inference.<br>
I have the 4 TTA variants below:</p>
<ul>
<li><code>[ 0, 1, 2, 3, 4, 5]</code></li>
<li><code>[ 0, 6, 7, 8, 9, 10]</code></li>
<li><code>[ 0, 11, 12, 13, 14, 15]</code></li>
<li><code>[ 0, 16, 17, 18, 19, 20]</code></li>
</ul>
<p>The numbers in each array are the retrieval ranks of the contexts used for that question.</p>
<h2>4-2. ensemble</h2>
<p>After TTA, instead of simply averaging the scores, I took the sum of the maximum and the average, like below:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.read_csv("predictions.csv")   # file and column names assumed (lost in extraction)
df["order"] = np.arange(len(df))
df = ensemble(df)                     # the TTA ensemble described above
df = df.groupby("id").mean() + df.groupby("id").max()   # sum of per-question mean and max
</code></pre>
<h1>5. What didn't work / what I couldn't get results from</h1>
<ul>
<li>Creating questions with ChatGPT.<ul>
<li>I tried many prompts, but the score decreased.</li></ul></li>
<li>Creating a CV correlated with LB.<ul>
<li>I tried 300 rows from <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a>'s dataset, but finally used <a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a>'s 300 + the test 200. But CV was not correlated with LB.</li></ul></li>
<li>More complex TTA.</li>
<li>Improving model accuracy.<ul>
<li>It was almost the same accuracy as <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a>'s notebook.</li>
<li>I thought that improving the model was not the key point of this competition, so I focused on the retrieval.</li></ul></li>
</ul>
|
Kaggle - LLM Science Exam
|
14th place solution: An attempt to reverse-engineer the dataset
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! I have uploaded the GPT-generated dataset and included the link to it in the write-up.<br>
As for the other datasets, I will just leave the page parsing part here:</p>
<pre><code># NOTE: this function was heavily garbled during extraction. The structure below is a
# reconstruction, and the specific tag/class selectors are illustrative placeholders for
# typical MediaWiki markup rather than the author's exact ones.
import re

import requests
from bs4 import BeautifulSoup


def fetch_and_parse(url, lock=None, pbar=None):
    assert (lock is None) == (pbar is None)
    response = requests.get(url, timeout=10)
    if response.status_code == 404:
        # the page does not exist
        return None, None
    elif response.status_code != 200:
        raise RuntimeError(f"Unexpected status {response.status_code} for {url}")
    soup = BeautifulSoup(response.content, "html.parser")

    title = soup.find("h1").text
    title_cleaned = title.replace("_", " ")        # replacement characters assumed

    # The original removed MathML tags, sidebars/infoboxes, reference superscripts,
    # edit links, navigation divs and figures; the class names here are placeholders.
    for semantics in soup.find_all("semantics"):
        semantics.decompose()
    for math_tag in soup.find_all("math"):
        math_tag.decompose()
    for sidebar in soup.find_all("table"):
        sidebar.decompose()
    for sidebar in soup.find_all("div", {"class": ["sidebar", "infobox", "navbox", "hatnote"]}):
        sidebar.decompose()
    for sup in soup.find_all("sup", {"class": ["reference", "noprint"]}):
        sup.decompose()
    for edit_button in soup.find_all("span", {"class": ["mw-editsection"]}):
        edit_button.decompose()
    for figure in soup.find_all("figure"):
        figure.decompose()

    main_content = soup.find("div", {"class": "mw-parser-output"})   # selector assumed

    # Turn section headings into plain-text markers.
    for level in (2, 3):
        for heading in soup.find_all(f"h{level}"):
            heading.string = "\n" + heading.get_text().strip() + "\n"

    # Concatenate the remaining text nodes; the original also avoided doubling
    # whitespace between adjacent strings (simplified here).
    result = ""
    for el in main_content.strings:
        if result and el and result[-1].isspace() and el[0].isspace():
            result += el.lstrip()
        else:
            result += el
    result = re.sub(r"\n{3,}", "\n\n", result)

    if lock is not None:
        with lock:
            pbar.update(1)
    return title_cleaned, result
</code></pre>
<p>Usage:</p>
<pre><code>example_url = "https://en.wikipedia.org/wiki/Spontaneous_symmetry_breaking"  # illustrative URL; the original was lost
print(fetch_and_parse(example_url))
</code></pre>
<p>Thank you! I did finetune them. The pretrained e5-large had about 71% accuracy. After finetuning its accuracy increased to 85%. And after adding tfidf similarity it went further to 91%. All the metrics listed are passage-wise.<br>
After finetuning the bge model and applying the tfidf modifiers its accuracy became 89%, even though the pretrained model performed much worse than pretrained e5 - about 60% accuracy.</p>
<p>Unlike many other competitions, I believe in this contest emphasis was placed on the quality and preprocessing of the data rather than complexity of the model architecture. Since we only had 200 rows of initial data, data gathering became a pivotal part of a solution. Here are some key points of my approach:</p>
<h2>Manual labeling of train.csv</h2>
<p>The starting point was the train.csv. I manually found the origin page of each question. Although I don't believe it was 100% accurate, it served its purpose.<br>
<strong>Key Discovery:</strong> A significant observation during this phase was that many questions were generated from the same categories. Some origin pages were from non-scientific articles like "Droste effect" and "Triskelion" from "Category:Symmetry". Yet, the majority of other articles from this category, like "Spontaneous symmetry breaking", were related to science. This led to my hypothesis that <strong>the competition dataset was possibly generated from science categories</strong> rather than specific science pages. Moreover, this seemed like an automated extraction rather than a manual selection. </p>
<h2>Constructing a category-centric dataset</h2>
<p>With the newfound understanding of categories playing a significant role, I focused on constructing a category-centric dataset:</p>
<ul>
<li><strong>Adjacent by Category (AC) Dataset:</strong> I extracted all categories from the manually labeled csv and fetched every page within these categories. The pages were then split into segments, approximately 1500 characters each. </li>
<li><strong>Training Dataset Creation:</strong> Using gpt3.5, I created 5 multiple-choice questions, each having 5 options, for every 1500-character segment. This generated a massive dataset of about 750k segment-question-answer triplets, which became the one and only dataset of my model training. <a href="https://www.kaggle.com/datasets/chubasik/kaggle-llm-science-exam-gpt-ac-dataset/" target="_blank">Link</a></li>
</ul>
<h3>Dataset Variations</h3>
<p>To expand the available data, I introduced several versions of the AC dataset. These datasets were only used during model inference/validation, not the training part:</p>
<ul>
<li><strong>Grandparent (GP) Dataset:</strong> Starting from the grandparent categories (i.e parent categories of parent categories) of pages in the labeled train.csv, I fetched every child and grandchild page within these categories. </li>
<li><strong>Natural Sciences (NS) Dataset:</strong> A graph search on "Category:Natural_sciences" with a 5-level cutoff covered 193 out of 200 pages from train.csv. To include the remaining 7 pages I added pages from other categories with varying cutoff levels ("Category:Mathematics", "Category:Measuring_instruments", "Category:Crystals", "Category:History_of_science_by_discipline"). </li>
<li><strong>GP Plus 1 (GPP1) Dataset:</strong> An enhanced version of the GP dataset, this included child categories of the leaf categories. It was also heavily modified by manually removing hundreds of unrelated categories, e.g. "Category:Soundtrack_albums_by_year". </li>
<li><strong>AC-750 Dataset:</strong> Similar to the AC dataset but with segments of 750 characters. </li>
</ul>
<p>(The dataset nomenclature was improvised and it later stuck) </p>
<h2>Article search using vector and TFIDF search</h2>
<p>With the datasets ready, I utilized a hybrid search strategy. A combination of vector (with e5 and bge models finetuned on AC) and tfidf similarities were used to identify the most relevant excerpts corresponding to the questions.</p>
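<p>A minimal sketch of this hybrid scoring (illustrative; random numbers stand in for the two similarity arrays, and the blend weight is a placeholder):</p>
<pre><code>import numpy as np

# Cosine similarities from the finetuned embedding model and TF-IDF similarities,
# aligned over the same candidate excerpts.
vector_scores = np.random.rand(1000)
tfidf_scores = np.random.rand(1000)

def minmax(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

hybrid = 0.7 * minmax(vector_scores) + 0.3 * minmax(tfidf_scores)   # weights are illustrative
top_excerpts = np.argsort(-hybrid)[:8]
</code></pre>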
<h2>Final Model Ensemble</h2>
<p>My final solution was an ensemble of three Debertas(ForMultipleChoice) with some insignificant differences, like (1) the number of output layers, (2) the order of context-question-option in their inputs, (3) the model awareness of other options. Each of these models was enhanced with 4 distinct context augmentations - GP, NS, GPP1, AC-750. <br>
In conclusion, this competition was a great learning experience. By observing patterns, recognizing the significance of categories, and implementing diverse strategies, I managed to build a solution that was robust enough to survive the transition to private leaderboard.</p>
<p>Thanks! Actually, there is no clear hierarchy. Categories are nodes in a graph, not a tree. However, there are local hierarchical subgraphs.</p>
<p>The similarity models were finetuned on the 750k <a href="https://www.kaggle.com/datasets/chubasik/kaggle-llm-science-exam-gpt-ac-dataset/" target="_blank">dataset</a> mentioned in the write-up.</p>
<p>Thanks! To build a fine-tuning dataset for similarity, I generated hard negative samples. The most challenging negatives were passages from the same page as the positive sample. This approach yielded higher accuracy compared to random sampling from all passages. I used <a href="https://www.sbert.net/docs/package_reference/losses.html#onlinecontrastiveloss" target="_blank">OnlineContrastiveLoss</a> as a loss function.</p>
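<p>A rough sketch of that finetuning setup with sentence-transformers (the model name, example pairs and hyperparameters are illustrative, not my exact configuration):</p>
<pre><code>from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("intfloat/e5-large")

question = "What causes spontaneous symmetry breaking?"                   # hypothetical
positive_passage = "Passage of the page the question was generated from."
hard_negative_passage = "A different passage from the same page."

train_examples = [
    InputExample(texts=[question, positive_passage], label=1),
    InputExample(texts=[question, hard_negative_passage], label=0),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=32)
loss = losses.OnlineContrastiveLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
</code></pre>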
|
Kaggle - LLM Science Exam
|
161th place solution(without training a model)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Agree! Thanks to all those who contributed in the discussions/code! That's why I like Kaggle. The open-source spirit is very admirable.</p>
<p>I entered this competition only two weeks before deadline. I did not train a model or use any tricks. Thanks to <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB' notebook</a>, I only do some modification on it and I am able to get a bronze medel(very close to sliver).</p>
<h1>Method</h1>
<ol>
<li>Based on the open-source 270k notebook, I replaced the Longformer model with the finetuned <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">60k</a> deberta-V3-large model that I found <a href="https://www.kaggle.com/code/bobfromjapan/my-best-submission-with-openbook-context-v2" target="_blank">here</a>. With this simple replacement, I got an LB score of 0.898.</li>
<li>I found that the input of deberta is composed of several parts. I think the retrieval results should be sorted in descending order of similarity, so I put the most similar retrieval result first instead of last.</li>
<li>Increasing the length of the deberta input can improve performance, but be aware of OOM. I limited the maximum input length to 4500.</li>
<li>Drop <code>util_openbook</code>. MB's original notebook used <code>util_openbook.py</code> to generate a <code>backup_submission.csv</code> for the ensemble, but it is harmful to the final result.</li>
<li>A little probability ensembling. I used the Longformer and set its probability weight to 0.1, while the weights of the two deberta outputs are 0.5 and 0.4.</li>
<li>Trust CV! Thanks to <a href="https://www.kaggle.com/wuwenmin" target="_blank">wu</a> and <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/445053" target="_blank">500val dataset</a>. The CV is closely related to LB. I got 0.897 on CV while the private board is 0.902.</li>
</ol>
<h1>Ablation experiment</h1>
<ol>
<li>Replacing the Tfidf with <code>bge-small</code> or <code>allminiv2</code> did not bring a performance improvement and cost too much time.</li>
<li>Adding Wikipedia data did not bring a performance improvement. The retrieval results from the wiki data are completely different from the 270k dataset.</li>
</ol>
<p>Thanks to <a href="https://www.kaggle.com/mbanaei" target="_blank">MB</a> and <a href="https://www.kaggle.com/cdeotte" target="_blank">Chris</a> again!</p>
|
Kaggle - LLM Science Exam
|
171st Place Solution - New Kaggle Expert! 🎉
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was a fantastic competition! Thank you so much to Kaggle for running it, and to everyone who shared such interesting notebooks and discussions. (Plus it's always great to get some more medals under your belt! 😄)</p>
<h1>Data Collection</h1>
<p>It was clear early on that this was essentially a <em>data</em> competition, rather than focussing on modelling. Very little data was available in the beginning, so in the early stages my main focus was on generating high quality training data that could be representative of the train data provided, and hopefully also the hidden test data. </p>
<p>With some manual work, I was able to identify the <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/425242" target="_blank">wikipedia pages used to generate the training data</a>. I then focussed on identifying which <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/425246" target="_blank">wikipedia categories would be most effective to train on</a>, using some <a href="https://www.kaggle.com/code/judehunt23/llm-science-exam-wikipedia-graph-analysis" target="_blank">graph analysis</a> to look at the pages and categories with the most connections. I was able to find that the <code>Concepts in physics</code> category was able to cover 72% of the training dataset with just 900 wikipedia pages. </p>
<p>I also decided to include the categories <code>Chemistry</code>, <code>Biology</code>, and <code>Mathematics</code> to cover the remaining training data, and any drift between this data and the test dataset. I used these wikipedia pages to generate additional questions to train on using gpt-3.5-turbo.</p>
<h1>Retrieval Augmented Generation (RAG)</h1>
<p>After initially experimenting with finetuning deberta for multiple-choice question answering, later on in the competition it became clear that the real improvements in performance would come from adding <em>context</em> to the inputs of the model. This is a common technique used in real LLM applications, known as <em>Retrieval Augmented Generation (RAG)</em>. I used two retrieval pipelines for my final solution:</p>
<ol>
<li>FAISS + Sentence Transformers, finding the top 6 out of 6M Wikipedia articles using the similarity to the question and possible answers, and extracting the top 22 sentences (thank you <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> for sharing your wonderful notebook!)</li>
<li>TF-IDF on 270k wikipedia articles using cohere embeddings (thank you <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> for your amazing notebook!)</li>
</ol>
<p>In hindsight, I should have experimented with other techniques further and combined more of them together, for example searching only using questions, using questions + answers, questions + one answer at a time, using answers only, etc. </p>
<h1>Modelling</h1>
<p>I experimented and ensembled multiple different modelling approaches for my final solution, inspired by <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/440908" target="_blank">this amazing post</a> from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. My final submission included:</p>
<ul>
<li>Deberta <code>AutoModelForMultipleChoice</code> with no context</li>
<li>Deberta <code>AutoModelForMultipleChoice</code> with context 1 </li>
<li>Deberta <code>AutoModelForSequenceClassification</code> (multi-class, num_labels=5) with context 1</li>
<li>Longformer <code>LongformerForMultipleChoice</code> with context 2</li>
<li>Deberta <code>AutoModelForMultipleChoice</code> with context 2 </li>
</ul>
<p>I then took averages of the model outputs to determine the top 3 results.</p>
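<p>As a rough illustration of how one of the Deberta <code>AutoModelForMultipleChoice</code> heads scores a single question (the checkpoint, context, question and options below are placeholders; in practice the checkpoint is a finetuned one):</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForMultipleChoice.from_pretrained("microsoft/deberta-v3-large").eval()

context = "Retrieved wikipedia sentences go here."
question = "Which of the following statements is correct?"
options = ["option A", "option B", "option C", "option D", "option E"]

# Pair the same "context + question" string with each of the five options.
firsts = [f"{context} {question}"] * len(options)
enc = tokenizer(firsts, options, truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits   # shape (1, 5)
top3 = logits[0].argsort(descending=True)[:3]                              # indices of best options
</code></pre>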
<p>I also experimented with <code>AutoModelForSequenceClassification</code> (multi-label, num_labels=1), however this didn't give me a CV/LB boost, and I tried adding AutoModelForCausalLM using Platypus 70B and Mistral 7B, however these notebooks unfortunately timed out on submission. I only finetuned Deberta using context 1, and in hindsight I should have experimented with finetuning with other contexts as well, and also should have tried running e.g. AutoModelForSequenceClassification using context 2.</p>
<h1>Things I missed</h1>
<ul>
<li>Ensembling multiple different retrieval techniques together - e.g. retrieving just with questions, with questions + answers, answers only etc</li>
<li>Answer permutation - I thought about using this, but decided against it since my training runs were already taking incredibly long, but in hindsight I could have used this for retrieval as well!</li>
<li>Dropping out incorrect answers - a technique that would have been interesting to try would have been removing answers that were clearly incorrect and then reranking the remaining answers</li>
<li>Stacking CV - with the small amount of training data provided and by using open source models, I never managed to get a holdout dataset that I was happy with to run some stacking on, and instead ended up running trial and error with different model combinations and averages in the last few days. Having a strong local CV is massively important, but I lost track of this in the scramble at the end</li>
<li>Causal LMs - this is the one I really wish I'd played around with more. Deberta worked incredibly well in this competition, but I wish I'd explored different language models like Mistral 7B and Llama 2 further. I have a feeling that, especially with context, they could have performed really well and added more diverse predictions to ensemble with at the end.</li>
</ul>
<p>Ultimately a lot of these points came down to time management and compute/submission constraints on Kaggle. I still learned absolutely loads during this competition, and can't wait to take this forward into the next one!</p>
<p>Huge congratulations to everybody who made it to the end of this competition, enjoy those precious points and medals. See you again on the leaderboard soon!</p>
|
Kaggle - LLM Science Exam
|
17th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I only have properly parsed versions of the STEM-related articles. But I will think about it!</p>
<p>Yeah, I also ran Kiwix, but setting up a local MediaWiki server gave the best results (and handling its errors was the most painful part).</p>
<p>Hi, it's just harsh to near-miss my first (solo) gold by a big score gap, but it was an intense competition! I don't know if I'll have the courage to try so hard for a solo gold again. 😅 I had shared a summary of my solution yesterday. Now, I've tried to explain it in as much detail and clarity as possible.</p>
<hr>
<p>When dealing with Wikipedia dump data, you realize that you come across a large number of articles. Processing all of them and getting them into a suitable format for a competition is quite an intense process. I also generated all the articles by processing the common dump files we all know. However, I noticed issues with the mathematical formulas and certain Wikipedia LUA templates in the outputs of publicly available open-source dump extractors.</p>
<h3>STEM Filtering</h3>
<p>So, I decided to design a separate pipeline to extract the articles related to STEM in their most original form. First and foremost, I needed to locate the articles related to STEM.</p>
<ul>
<li>I downloaded category dumps from wikipedia and <strong>reconstructed the category-links graph</strong> from the sql dumps.</li>
<li>Ran breadth-first search on the STEM main categories for 4 levels and got ~1.1M STEM articles in total from that category tree (a minimal sketch of this step is shown right after this list).</li>
</ul>
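<p>A minimal sketch of that category walk, assuming the SQL dumps have already been parsed into <code>subcats</code> (category -> child categories) and <code>cat_articles</code> (category -> article titles); the function and seed category names are illustrative, not the exact pipeline.</p>
<pre><code># Breadth-first search over the reconstructed category-links graph, limited to
# four levels below the seed STEM categories.
from collections import deque

def stem_articles(seed_categories, subcats, cat_articles, max_depth=4):
    seen, articles = set(seed_categories), set()
    queue = deque((c, 0) for c in seed_categories)
    while queue:
        cat, depth = queue.popleft()
        articles.update(cat_articles.get(cat, ()))       # collect articles in this category
        if depth < max_depth:
            for child in subcats.get(cat, ()):
                if child not in seen:
                    seen.add(child)
                    queue.append((child, depth + 1))
    return articles
</code></pre>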
<hr>
<h3>Data</h3>
<p>In my final solution, I used three different original data sources.</p>
<ul>
<li><strong>Wikipedia JSON Dump (F1)</strong>: It includes all articles, but open-source extractors hit LUA template compilation errors during extraction as I said above, which results in parsed texts with missing numbers or garbage in them</li>
<li><strong>Local MediaWiki Server TextExtract API (F2)</strong>: Using the filtering technique I described above, I collected 1.1 million STEM article titles. I sent API requests to the MediaWiki server I set up on my own computer to ensure that these articles were parsed in formats closest to their originals. This way, LaTeX formulas and all LUA templates were parsed without any issues.</li>
<li><strong>Cohere WikiPedia corpus (F3)</strong></li>
</ul>
<p>I passed all the data I listed above through a Unicode character decoding process. Then, I created corpus chunks by shifting 3-sentence windows with a stride of one sentence.</p>
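<p>A minimal sketch of that chunking scheme, with a deliberately simplified sentence splitter (any proper sentence tokenizer could be substituted):</p>
<pre><code># 3-sentence windows shifted by one sentence; the ". " split is only a stand-in
# for a real sentence tokenizer.
def make_chunks(text, window=3, stride=1):
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    chunks = []
    for start in range(0, max(len(sentences) - window + 1, 1), stride):
        chunks.append(". ".join(sentences[start:start + window]))
    return chunks

print(make_chunks("A one. B two. C three. D four. E five."))
# ['A one. B two. C three', 'B two. C three. D four', 'C three. D four. E five.']
</code></pre>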
<hr>
<h3>Retrieval</h3>
<p>I noticed that different retrieval methods improved predictive performance. Therefore, I preferred to use three main approaches:</p>
<ul>
<li><strong>FAISS (BAAI/bge-small-en)</strong>: I processed the <strong>F1</strong> data using sentence-embedding models, resulting in roughly 150GB of embedding data. I used these without compression during the search phase; I noticed that my retrieval performance decreased when I compressed them.</li>
<li><strong>TFIDF</strong>: Since the competition's question-answer texts are paraphrased versions of the original Wikipedia texts, TFIDF was quite effective in capturing exact word clusters. I used TFIDF with 1-2 grams and no stop-word removal on <strong>F2</strong> and <strong>F3</strong> (a minimal sketch of this retrieval appears right after this list).</li>
<li><strong>FAISS + TFIDF</strong>: I also reordered F1 embedding retrieval results with TFIDF (called <strong>F4</strong>)</li>
</ul>
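<p>Here is a rough sketch of that TF-IDF retrieval, assuming <code>chunks</code> holds the corpus chunks built above; the vectorizer settings mirror the description (1-2 grams, no stop-word removal), while everything else is illustrative.</p>
<pre><code># TF-IDF retrieval over corpus chunks; in practice the vectorizer would be fit
# on the corpus once and reused across all questions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

def tfidf_retrieve(question_with_answers, chunks, top_k=5):
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words=None)
    chunk_matrix = vectorizer.fit_transform(chunks)
    query_vec = vectorizer.transform([question_with_answers])
    scores = linear_kernel(query_vec, chunk_matrix).ravel()
    best = np.argsort(-scores)[:top_k]
    return [chunks[i] for i in best]
</code></pre>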
<hr>
<h3>Ensemble</h3>
<p>As I mentioned above, I created four different sources of context. I just ran a single debertav3-large model on all contexts and did a max-ensemble on probabilities. Since I used a single model, I noticed that max-ensemble was quite effective. If it found relevant text in any of the four contexts related to the question, it significantly boosted the probability of the corresponding answer. If it couldn't find the relevant context, the probabilities for all options were similar and relatively low.</p>
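<p>The max-ensemble itself is a small operation; the sketch below assumes <code>probs_per_context</code> holds one (n_questions, 5) probability array per context, all coming from the same deberta checkpoint.</p>
<pre><code># Elementwise max over the per-context probabilities: if any context surfaces
# the relevant passage, the corresponding option keeps its high probability;
# otherwise all options stay similar and relatively low.
import numpy as np

def max_ensemble(probs_per_context):
    stacked = np.stack(probs_per_context)   # (n_contexts, n_questions, 5)
    return stacked.max(axis=0)
</code></pre>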
<hr>
<h2>Summary</h2>
<p>I'm also including a summary diagram of my solution below. It may provide a clearer understanding:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2514988%2F93b60fa0e3d06e32d8cb106b5a848617%2FScreenshot%20from%202023-10-11%2014-07-33.png?generation=1697022470900193&alt=media" alt=""></p>
|
Kaggle - LLM Science Exam
|
191st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for hosting this fun competition, and to all those who shared throughout! I learned a ton about LLMs, RAG, and inference optimization. </p>
<h2>RAG part</h2>
<p>The Wikipedia dataset shared by <a href="https://www.kaggle.com/jjinho" target="_blank">JJ</a> was great, but it was missing some articles and sometimes key information needed to answer questions. (discussion <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913" target="_blank">here</a>). This can also be confirmed by looking for the answer to the <code>Modified Newtonian Dynamics</code> question provided by the hosts, as the article containing the answer (4th paragraph <a href="https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics" target="_blank">here</a>) is not found in the dataset.</p>
<p>To find more article IDs, I combined JJ's and <a href="https://huggingface.co/datasets/Cohere/wikipedia-22-12" target="_blank">Cohere’s Wikipedia data</a>. Then, I used the MediaWiki API to collect the text for all the articles as suggested by Steubk <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913#2411374" target="_blank">here</a>. Finally, I split each article into subsections by article headings. After some light filtering, this gave me roughly 22 million texts. In hindsight, even the MediaWiki API missed pieces of the text, and I should have combined multiple wiki data sources.</p>
<p>For retrieval, I used the <a href="https://huggingface.co/intfloat/e5-base-v2" target="_blank">intfloat/e5-base-v2</a> model and the Faiss library. I split the Faiss index into 16 shards to be searchable with <13 GB of RAM. Finally, I split the selected texts based on <code>\n</code> characters and performed a final search to get context for each question.</p>
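<p>A minimal sketch of that sharded search pattern, assuming each shard was written with <code>faiss.write_index</code> and stores global ids (e.g. via an ID map), and that an inner-product index is used so larger scores are better; paths and shapes are placeholders.</p>
<pre><code># Load one FAISS shard at a time (so RAM stays bounded), take its local top-k,
# and merge into a running global top-k across shards.
import faiss
import numpy as np

def search_sharded(query_emb, shard_paths, k=5):
    all_scores, all_ids = [], []
    for path in shard_paths:
        index = faiss.read_index(path)            # only this shard is in RAM
        scores, ids = index.search(query_emb, k)  # local top-k for this shard
        all_scores.append(scores)
        all_ids.append(ids)                       # assumed to already be global ids
        del index
    scores = np.concatenate(all_scores, axis=1)
    ids = np.concatenate(all_ids, axis=1)
    keep = np.argsort(-scores, axis=1)[:, :k]     # global top-k across shards
    return np.take_along_axis(ids, keep, axis=1)
</code></pre>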
<h2>Inference Part</h2>
<p>The model I used for inference was an <code>AutoModelForMultipleChoice</code> with a <code>deepset/deberta-v3-large-squad2</code> backbone. I enhanced the class to include other answer options as part of the context. This allowed me to perform TTA by shuffling answer options. See an example of this below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5570735%2Fdf6897f36282af45f7a18cf841c6cd38%2Fwp2.png?generation=1696982588123448&alt=media" alt="TTA image"></p>
<p>All my experiments were done with the <code>microsoft/deberta-v3-base</code> model, and I scaled up to the deberta-v3-large model variant for my final submission.</p>
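<p>The TTA idea can be sketched roughly as below, where <code>score_options(question, options)</code> stands in for a wrapper around the multiple-choice model that returns one probability per option in input order; the number of permutations is arbitrary here.</p>
<pre><code># Score several shuffles of the answer options and average the probabilities
# after mapping them back to the original option slots.
import numpy as np

def tta_predict(question, options, score_options, n_perms=4, seed=0):
    rng = np.random.default_rng(seed)
    perms = [np.arange(len(options))]
    perms += [rng.permutation(len(options)) for _ in range(n_perms - 1)]

    accum = np.zeros(len(options))
    for perm in perms:
        shuffled = [options[i] for i in perm]
        probs = np.asarray(score_options(question, shuffled))
        accum[perm] += probs          # probs[j] belongs to options[perm[j]]
    return accum / len(perms)
</code></pre>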
<h2>Training Data</h2>
<p>I used similar question-answer sources to <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">Chris’s 60k Dataset</a>, but added <a href="https://www.kaggle.com/erijoel" target="_blank">erijoel’s</a> llmse-few-shot-prompting-1k data. (dataset <a href="https://www.kaggle.com/datasets/erijoel/llmse-few-shot-prompting-1k" target="_blank">here</a>). I also re-generated all question contexts using the e5-base-v2 retrieval model.</p>
<h2>Improvements</h2>
<table>
<thead>
<tr>
<th>Feature</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>RAG</td>
<td>+0.1</td>
</tr>
<tr>
<td>Add more article IDs</td>
<td>+0.03</td>
</tr>
<tr>
<td>Bigger retriever (intfloat/e5-base-v2)</td>
<td>+0.01</td>
</tr>
<tr>
<td>Other answers in context</td>
<td>+0.01</td>
</tr>
<tr>
<td>TTA</td>
<td>+0.005</td>
</tr>
<tr>
<td>deberta-v3-base -> deberta-v3-large</td>
<td>+0.01</td>
</tr>
<tr>
<td>1024 max tokens (training)</td>
<td>+0.005</td>
</tr>
<tr>
<td>1536 max tokens (inference)</td>
<td>+0.005</td>
</tr>
</tbody>
</table>
<h2>Frameworks</h2>
<p>Models: <a href="https://huggingface.co/models" target="_blank">Huggingface</a><br>
Retrieval: <a href="https://github.com/facebookresearch/faiss" target="_blank">Faiss</a>, <a href="https://www.sbert.net/" target="_blank">SentenceTransformers</a><br>
Logging: <a href="https://wandb.ai/" target="_blank">WandB</a></p>
|
Kaggle - LLM Science Exam
|
19th-place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> !<br>
Regarding the merge, it worked fine because the correlation between the predictions of the two models is relatively low (0.943)</p>
<p>no <a href="https://www.kaggle.com/rusg77" target="_blank">@rusg77</a> it is the base version shared here </p>
<ul>
<li><a href="https://www.kaggle.com/datasets/cpmpml/orca-mini-v3-70b-part1" target="_blank">https://www.kaggle.com/datasets/cpmpml/orca-mini-v3-70b-part1</a> </li>
<li><a href="https://www.kaggle.com/datasets/cpmpml/orca-mini-v3-70b-part2" target="_blank">https://www.kaggle.com/datasets/cpmpml/orca-mini-v3-70b-part2</a> <br>
by <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> </li>
</ul>
<p>Thank you Kaggle for hosting this challenging competition on LLM . <br>
A special thanks to Kagglers who shared their work, making the competition even more stimulating. <br>
As usual, congratulations to the winners and everyone who enjoyed the competition!</p>
<p>Here is a write-up of my solution</p>
<h2>Context Retrieval</h2>
<p>It's a two-step process:</p>
<ul>
<li>In the first step, I generate candidates through an indexed search across all Wikipedia paragraphs.</li>
<li>In the second step, I rerank the candidates using a TF-IDF model to fully leverage the fact that prompts/answers use the same words as the Wikipedia sentences: this second step will prove to be very important for the final score.</li>
</ul>
<p>For index generation, I used the Hugging Face dataset <a href="https://huggingface.co/datasets/wikipedia" target="_blank">wikipedia_20220301.en</a>: I removed all sections such as 'See also,' 'References,' 'External links', … and split the text by <code>\n\n</code>. <br>
Additionally, I eliminated all paragraphs with fewer than 10 words.</p>
<p>I generated the embedding for each paragraph using 'all-MiniLM-L6-v2' with 'MAX_LENGTH = 512' and created 10 Faiss indexes, each containing 4.2 million rows</p>
<p>The sequential search of these indexes on the GPU T4 x2 instances is relatively fast (less than 10 minutes for 200 prompts)</p>
<p>You can find the dataset and an example of a search here: <a href="https://www.kaggle.com/code/steubk/semantic-search-of-wikipedia-en-with-faiss" target="_blank">Semantic search of Wikipedia EN with Faiss</a></p>
<h2>LLAMA2 70B Inference</h2>
<p>I used 'orca-mini-v3-70b' and the amazing job done by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> to run LLaMA-2 70B on GPU T4 x2.</p>
<p>This is the prompt:</p>
<pre><code>
You are an AI assistant that follows instruction extremely well. Help as much as you can.
Your task is to analyze the question and the proposed answer below. If the answer is correct, respond yes; if it is not correct, respond no. As a potential aid, context sentences from Wikipedia pages are provided below.
Context: {row[]}
Question: {row[]}
Proposed answer: {row[]}
</code></pre>
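<p>One hedged way to turn such a yes/no prompt into a ranking (a simplified stand-in, not the shared layer-by-layer 70B loader) is to compare the model's next-token logits for "yes" and "no" for each proposed answer and sort the five options by that score; the smaller model name below is only an illustrative assumption.</p>
<pre><code># Score one proposed answer with a causal LLM by comparing "yes" vs "no" logits
# at the next-token position; the five options are then ranked by this score.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "psmathur/orca_mini_v3_7b"   # illustrative stand-in for the 70B model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")

yes_id = tokenizer(" yes", add_special_tokens=False).input_ids[-1]
no_id = tokenizer(" no", add_special_tokens=False).input_ids[-1]

def answer_score(prompt_text):
    inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]            # next-token logits
    return (logits[yes_id] - logits[no_id]).item()        # higher = more likely "yes"
</code></pre>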
<h2>ModelForMultipleChoice</h2>
<p>I trained Deberta V3 (max_length = 450) with <code>AutoModelForMultipleChoice</code> on a dataset of 24k prompts extracted from the dataset <a href="https://www.kaggle.com/datasets/mgoksu/llm-science-exam-dataset-w-context" target="_blank">llm-science-exam-dataset-w-context</a> by <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, taking only the prompts that start with 'What,' 'Which,' 'How,' 'Who,' and removing duplicate prompts from the training set. <br>
I then validated it on the 200 rows training set.</p>
<h2>Submission</h2>
<p>Final submission is merge of softmax of the 2 models.</p>
<p>To minimize memory leaks, each step was executed in a separate script.</p>
<p>The submission time of the pipeline varies from 7 hours and 20 minutes to the timeout (in the last days!!) </p>
<p>Below is the contribution of each step to the final score:</p>
<p><strong>Single ModelForMultipleChoice Model</strong>: <br>
LB: 0.822, Private: 0.805</p>
<p><strong>Single ModelForMultipleChoice Model with first step context retrieval</strong>:<br>
LB: 0.889, Private: 0.884</p>
<p><strong>Single orca-mini-v3-70b Model with first step context retrieval</strong>:<br>
LB: 0.897, Private: 0.889</p>
<p><strong>Merge ModelForMultipleChoice + orca-mini-v3-70b with first step context retrieval</strong>:<br>
LB: 0.906, Private: 0.901</p>
<p><strong>Single orca-mini-v3-70b Model with two steps context retrieval</strong>:<br>
LB: 0.904, Private: 0.906</p>
<p><strong>Merge ModelForMultipleChoice + orca-mini-v3-70b with two steps context retrieval</strong>:<br>
LB: 0.921, Private: 0.917</p>
<p>Link to Inference notebook: <a href="https://www.kaggle.com/code/steubk/llm-mc-qa-openbook-v3-inf-val-tfidf-db450" target="_blank">https://www.kaggle.com/code/steubk/llm-mc-qa-openbook-v3-inf-val-tfidf-db450</a></p>
<p>In the last few days, it was impossible to run the 70B model and the DeBERTa model together.</p>
<p>Thank you <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> ! </p>
<p>As <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> said, orca-mini-v3-70b was one of the 70B models that performed well on LB and on training set, with a Llama-2 type license.</p>
<p>Anyway the score of the same pipeline with Xwin-LM-70B-V0-1 is: LB 0.918, Private: 0.915.<br>
So it seems that there isn't much of a difference</p>
|
Kaggle - LLM Science Exam
|
1st Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As mentioned above, we are tunning topk retrieval file by file (around 1 GB embedding arrays + their corresponding context text) while keeping track of topk inds, vals and context texts (small memory footprint).</p>
<p>Within the topk retrieval, we further chunk cos_sim calculation and run it on GPU.<br>
This means running <code>cos_sim = torch.mm(embeddings_from, embeddings_to.transpose(0, 1))</code> with <code>embeddings_from</code> being the chunked query_embeddings and <code>embeddings_to</code> the corpus embeddings (as loaded file by file).</p>
<p>We were using late fusion in our final approach, so we never ran into the issue of duplicates in the retrieval. Using TOP_K = number_of_context is already sufficient for that. </p>
<p>Retrieval is about 30 minutes per embedding model and corpus, but of course that number largely depends on embedding size and corpus size.</p>
<p>In the final subs, it was roughly around 1/3 for the retrieval and 2/3 for the model runtime. </p>
<blockquote>
<p>I was waiting for the write-up, thank you for sharing! <br>
Surprized to see 2.46TB of input is used to obtain various context retrieval…</p>
<p>I have 2 questions.</p>
<ol>
1.
</ol>
<blockquote>
<p>Our final ensemble is a blend of five 7B models and one 13B model. Each model uses a different context</p>
</blockquote>
<p>How did you ensemble these 6 model's output? Did you simply take the average of the predictions?<br>
In my experiment, a simple average of many predictions did not work. If there are many predictions with different contexts, some of the output is confident on the wrong answer. I guess this is because there may be a "confusing" context which leads to the wrong answer. How about your case?</p>
</blockquote>
<p>Average blend is what we went for in the end. We also tried max blending or a combination of mean (across models) and max blend (across embeddings), but it was only sometimes better on LB and usually clearly worse on our local CV. </p>
<blockquote>
<p>2.</p>
<p>For the 2.46 TB of input mainly comes from the embedding of all Wikipedia chunk in different embedding models <strong>without any quantization</strong> ?<br>
You did not use <code>faiss</code> library but executed a similarity search with own code. <br>
Is this because you can run similarity calculation by chunk, so that you can run NN search for such a big dataset? I guess if we use <code>faiss</code> library, index (embeddings) must fit on kaggle kernel RAM, 13GB.</p>
</blockquote>
<p>We are running topk retrieval file by file (around 1 GB embedding arrays + their corresponding context text) and keeping track of topk inds, vals and context texts. <br>
Within the topk retrieval, we further chunk cos_sim calculation and run it on GPU.</p>
<blockquote>
<p>Have you tried inverting during training as well, so that the position of the chunk with highest retrieval score is closer to the question & answer option?</p>
</blockquote>
<p>Yes, we did exactly this after we saw this behaviour.</p>
<blockquote>
<p>I'm curious to know what are your thoughts and intuition on that?</p>
</blockquote>
<p>I believe it helps the regularization and to learn that context may not always include the correct answers.</p>
<p>We also tried to train multiclass heads, but noticed that the order of samples has a huge impact on the score and even with a 120 TTA, the scores were not really better than binary classification (5 forward passes, and much cheaper in terms of runtime).</p>
<p>The same issue holds for binary classification with the addition of the other 4 possible answers. If other answers are provided, the order has a way too large impact, thus we went with the idea of using the mean (impact of order is removed) of their last layer embeddings.</p>
<p>Thanks for this well executed competition that attracted a lot of interested from the community. Thanks a lot as well to my great teammates <a href="https://www.kaggle.com/ybabakhin" target="_blank">@ybabakhin</a> and <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>. </p>
<p>Check out our tldr summary <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446240" target="_blank">here</a>!</p>
<h2>Context Retrieval</h2>
<p>We started working on this competition very early and quickly faced a performance cap with the simple classification approach for both deberta and LLM models.</p>
<p>So, the next step was to explore the context retrieval from Wikipedia articles. Our first submission was with the <code>all-MiniLM-L6-v2</code> model. And to our surprise, this first attempt was much better than all our previous non-context models. From that point we were mostly focusing on improving the context retrieval part, while keeping the modeling part almost the same. We tried the majority of models from the top-20 list on the MTEB Leaderboard: <a href="https://huggingface.co/spaces/mteb/leaderboard" target="_blank">https://huggingface.co/spaces/mteb/leaderboard</a>. In the end, we had a local choice of about 300 combinations of Retrieval + LLM models + different Wikipedia dumps. Our final submission has contexts from <code>e5-base-v2</code>, <code>e5-large-v2</code>, <code>gte-base</code>, <code>gte-large</code> and <code>bge-large</code> models.</p>
<p>Apart from exploring various models, we also tried different ways to encode both Wikipedia chunks and Prompts+Answers from the competition data. For Wikipedia we were mostly encoding <code>title</code> of the article + article chunk, and for the question the final model has 2 variations:</p>
<ol>
<li>Just a simple concatenation: “{prompt} {A}, {B}, {C}, {D}, {E}”</li>
<li>Search contexts individually for each “{prompt} {A}”, “{prompt} {B}”, etc. option, and sort them by similarity in the end.</li>
</ol>
<p>Our Wikipedia dumps had from 30M to 60M text chunks. However, we’ve implemented a pretty quick and scalable similarity search on GPUs. Firstly, we split the whole 60M database into smaller parts, and load them one after another. Then we calculate a similarity matrix between each part and whole test data (using simple matrix multiplication for torch tensors on GPU). All we have to do now is to store the global Top-k list among all parts. And to make it twice faster we’re doing it on 2 GPUs in parallel.</p>
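<p>A minimal sketch of this chunked GPU top-k search (not the team's exact code): corpus embedding parts are loaded one at a time, cosine similarity reduces to a matrix multiplication of normalized vectors, and a running global top-k is kept across parts. File paths, the loading format and shapes are placeholders.</p>
<pre><code># Chunked similarity search on GPU with a running global top-k across parts.
import torch
import torch.nn.functional as F

def global_topk(query_emb, part_paths, k=5, device="cuda"):
    q = F.normalize(query_emb.float().to(device), dim=1)   # (n_queries, dim)
    best_vals = torch.full((q.shape[0], k), -1.0, device=device)
    best_ids = torch.full((q.shape[0], k), -1, dtype=torch.long, device=device)
    offset = 0
    for path in part_paths:
        part = F.normalize(torch.load(path).float().to(device), dim=1)
        sims = torch.mm(q, part.t())                        # cosine similarity matrix
        vals, ids = sims.topk(min(k, part.shape[0]), dim=1)
        cat_vals = torch.cat([best_vals, vals], dim=1)      # merge with running top-k
        cat_ids = torch.cat([best_ids, ids + offset], dim=1)
        best_vals, pos = cat_vals.topk(k, dim=1)
        best_ids = torch.gather(cat_ids, 1, pos)
        offset += part.shape[0]
    return best_vals, best_ids
</code></pre>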
<h2>Different Wikipedia dumps</h2>
<p>As most teams, we started out using the <a href="https://www.kaggle.com/datasets/narek1110/wikipedia-22-12-en-embeddings-all-minilm-l6-v2" target="_blank">public wiki</a> as our source of context and embeddings. We invested quite some time to create our own wiki corpus. First, we recreated the above wiki taking a snapshot from a newer date closer to the start of the competition. We added a few extra post processing steps, such as joining very small paragraphs (usually subtitles) with the subsequent paragraph. When investigating errors in the context matches, we noticed that all public wikipedia dump parsers are not properly expanding lua code which is often used on wikipedia to render values in scientific articles. To include these into our context, we decided to use the <a href="https://dumps.wikimedia.org/other/cirrussearch/" target="_blank">cirrussearch dump</a> which is a dump of all (almost) fully rendered wikipedia pages. Unfortunately, this dump doesn’t include newline characters, so we merged the sentences together to different target lengths of either 256, 512 or 1024 characters while not breaking any sentences. In the end we considered all of these wikis (and multiple embeddings for each) for the blend and it helped to increase diversity. As a standalone wikipedia corpus, the cirrussearch wiki with a target length of 512 characters was best for us.</p>
<p>We also tried to filter the wikis for science articles, but never achieved a significant improvement on public leaderboard over using the whole wiki corpus. Apparently, the embedding models are strong enough to not get distracted by irrelevant articles and we decided to play safe and included the full wikis in our final submissions. </p>
<h2>Longer contexts</h2>
<p>We started by using a single most similar wiki chunk for both train and inference. However, we quickly noticed that increasing the context length during the inference might improve the results. The optimal strategy for us was to train models with 3-chunk contexts and run inference with 5 chunks. Increasing it further for training was too slow, and increasing it more for inference was probably adding too much noise to the context. One interesting test that we did was to invert the order of contexts during inference, which dropped the score quite a bit. So, it might mean that models learnt the position of the “best” context.</p>
<h2>Data and validation strategy</h2>
<p>For training we mostly used only the publicly shared datasets. And adding more data wasn’t helpful for our models. For the validation, we used the original 200 samples provided as our validation data since the very beginning. However, after introducing the context retrieval part, we quickly got to 0.99 performance there. For the rest of the competition we were using 6k STEM questions from one of the publicly shared datasets.</p>
<h2>Models</h2>
<p>Our final ensemble is a combination of fine-tuned 7B & 13B LLMs. All models are trained in binary fashion, meaning a single sample for each of the five options and final predicted probabilities are used for ranking. This makes the models independent of the ordering of answers. For training data, we found the following steps to be useful:</p>
<ul>
<li>Pick synthetically generated multi-choice samples. There are many of such samples shared on Kaggle. Interestingly, data quantity was not too important here to improve performance.</li>
<li>For each sample, we generate the context using our RAG embedding approaches. This allows us to generate multiple contexts like in inference and adding noise here is helpful for training. Training on the true context the question was generated on, was a bit worse than training on the generated context.</li>
</ul>
<p>We train all models with LORA on all linear layers. We use a binary classification head after the final next-token logits. We also use a lower learning rate for the head vs. the LORA layers. We did tuning on LORA settings, learning rate, and other minor things, but they were not that impactful as for some other deep learning use cases. All models are trained over a single epoch with typical cosine lr decay and BCE loss, picking the last checkpoint.</p>
<p>We had most success with the following LLM models:</p>
<ul>
<li>Llama-2-7b</li>
<li>Mistral-7B-v0.1</li>
<li>xgen-7b-8k-base</li>
<li>Llama-2-13b</li>
</ul>
<p>The following diagram explains our architecture and inference setup on a high level:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2675447%2Fe304b1553bae71f790a313f92ea3bc1b%2Fimage.png?generation=1697040115030343&alt=media" alt=""></p>
<ol>
<li>First, we feed the context and the question through the backbone and save the past_key_values. For each question, we only need to do this once as each answer has the same context and question. This is a big benefit of decoder only models for such use-cases and saves a lot of runtime (a hedged sketch of this caching pattern is shown right after this list).</li>
<li>We then have another backbone forward on each of the five answers where each one gets the past_key_values of the previous forward as cache input. We can run this in a batch of five now.</li>
<li>We take the final next token prediction of each output which are the logits over the full vocabulary and feed this through a final classification head trained from scratch. This head now learns to map the output of the LLM across the full vocabulary to a single binary prediction whether the answer is correct or not.</li>
<li>The final ordering is then simply determined by sorting the predicted probabilities.</li>
</ol>
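<p>A hedged sketch of that KV-cache reuse (not the exact H2O LLM Studio code): <code>head</code> stands in for the trained binary classification head over the next-token logits, the answers are scored one by one rather than in a batch of five, and cache handling details vary between transformers versions, so treat this as an illustration only.</p>
<pre><code># Encode "context + question" once, then score each answer on top of that cached
# prefix; the next-token logits over the full vocabulary feed a binary head.
import torch

@torch.no_grad()
def score_answers(model, tokenizer, head, context_question, answers):
    prefix = tokenizer(context_question, return_tensors="pt").to(model.device)
    prefix_len = prefix.input_ids.shape[1]
    cache = model(**prefix, use_cache=True).past_key_values   # computed once per question

    scores = []
    for answer in answers:
        ans = tokenizer(answer, return_tensors="pt",
                        add_special_tokens=False).to(model.device)
        mask = torch.ones(1, prefix_len + ans.input_ids.shape[1],
                          dtype=torch.long, device=model.device)
        out = model(input_ids=ans.input_ids, attention_mask=mask,
                    past_key_values=cache, use_cache=False)
        last_logits = out.logits[:, -1, :]          # logits over the full vocabulary
        scores.append(head(last_logits).squeeze())  # "is this answer correct?" score
    return torch.stack(scores)                      # answers are ranked by these scores
</code></pre>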
<p>One downside of this binary approach is that each sample lacks the cross-information from all the other options which might hold individual signal. We experimented a lot with multi-class model or simply adding the other answers as context to the binary model, but there can be a huge positional bias added by doing so. Hence, we tried to solve this with TTA approaches, using various ordered combinations of the answers, which definitely helped, but was more shaky and could not outcompete the binary model.</p>
<p>We still came up with what we think is a pretty clever idea: another way to help the models learn cross-information across samples without adding a positional bias. Here, the idea is to add the average next-token logits of all other options as an additional input to the final classification head. So the final head gets logits for all possible tokens for the answer at hand, and the average of all logits for all possible tokens of all the other four answers. This does not add positional bias and gives extra information that boosted CV and LB and worked well when blended with the simpler approach.</p>
<p>As you may have guessed already from our team name, we used a forked <a href="https://github.com/h2oai/h2o-llmstudio" target="_blank">H2O LLM Studio</a> to experiment and to train our models. We have added a fully flexible way of training causal classification models. The <a href="https://github.com/h2oai/h2o-llmstudio/pull/449" target="_blank">merged PR</a> allows to train binary and multiclass classification LLM models using the core techniques used in our solution. You can just add a csv with a binary target and input text and train models.</p>
<h2>Inference solution</h2>
<p>Our final ensemble is a blend of five 7B models and one 13B model. Each model uses a different context approach using different wikis, embeddings and topk. The pipeline fits precisely into 9-hour runtime and utilizes 2.5TB of input data.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2675447%2F043168b3c4ac871415b03d8cfcd117be%2FBildschirmfoto%20vom%202023-10-11%2021-22-51.png?generation=1697052212889820&alt=media" alt=""></p>
<p>We have shared our inference code now:</p>
<p>A winning kernel with 0.933 Private: <a href="https://www.kaggle.com/code/ybabakhin/1st-place-team-h2o-llm-studio" target="_blank">https://www.kaggle.com/code/ybabakhin/1st-place-team-h2o-llm-studio</a><br>
A single model with 0.932 Private: <a href="https://www.kaggle.com/code/ybabakhin/1st-place-single-model-inference" target="_blank">https://www.kaggle.com/code/ybabakhin/1st-place-single-model-inference</a></p>
|
Kaggle - LLM Science Exam
|
20th Place (Top 1%, Solo Silver) Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yep, the <code>max_length</code> is much more important than unfreezing layers in my experiments</p>
<p>Yep, it's trained on the 60k dataset using 200 as the validation set. The parameters are as follows:</p>
<pre><code>FREEZE_LAYERS =
FREEZE_EMBEDDINGS =
MAX_LENGHT =
MODEL =
LEARNING_RATE =
EPOCHS =
training_args = TrainingArguments(
warmup_ratio=,
learning_rate=LEARNING_RATE,
per_device_train_batch_size=,
per_device_eval_batch_size=,
num_train_epochs=EPOCHS,
report_to=,
run_name=RUN_NAME,
output_dir=,
overwrite_output_dir=,
fp16=,
fp16_full_eval=,
gradient_accumulation_steps=,
logging_steps=,
evaluation_strategy=,
eval_steps=,
save_strategy=,
save_steps=,
load_best_model_at_end=,
metric_for_best_model=,
lr_scheduler_type=,
weight_decay=,
save_total_limit=
)
</code></pre>
<p>I trained the model on 8 x RTX4090 using <code>nohup deepspeed --num_gpus 8 train_deberta_v3l.py --deepspeed ds_config.json > train.log 2>&1 &</code><br>
so, the effective <code>batch_size</code> is 32.</p>
<p>about 4 hours, the details of the training config are as below ↓</p>
<p>First and foremost, I would like to express our gratitude to the hosts and the Kaggle team for arranging this wonderful competition. Also, a huge shout-out to all the participants for fighting through this intense competition. It was an extremely educational and exciting competition.</p>
<p>As my solution is quite similar to the top teams' and I'm a little busy these days, I didn't work on the writeup until I was mentioned by <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> in this <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/447396#2486398" target="_blank">discussion</a>. It's time to spend some time to finish this writeup. Thanks <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> for remembering me in this competition :)</p>
<p>My final solution is quite simple, 4 RAG + 1 deberta-v3-large ensembling:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3563032%2Fedcd41b7d3935ee3aa69148e88ea9cf2%2Fllm-sci-writeup.jpg?generation=1697638240812008&alt=media" alt="llm-solution"></p>
<p>The score can be improved to 0.917 by changing mean ensembling to max + mean ensembling. Refer to my submission <a href="https://www.kaggle.com/code/wuwenmin/llm-science-ensemble-exp-max/notebook" target="_blank">code</a></p>
<p><strong>Things that didn't work for me</strong></p>
<ul>
<li>Pre-finetune on RACE<br>
near the end of the competition, I tried to pre-finetune the model on RACE and then finetune on 60k with much smaller LR, but it didn't improve the CV & LB</li>
</ul>
<p><strong>Special thanks to</strong> </p>
<ul>
<li><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>, and <a href="https://www.kaggle.com/yalickj" target="_blank">@yalickj</a> for providing us the valuable datasets.</li>
<li><a href="https://www.kaggle.com/ksmcg90" target="_blank">@ksmcg90</a> for providing a faster data loading method using pyarrow (refer to his <a href="https://www.kaggle.com/code/ksmcg90/faster-context-extraction-pyarrow-fixed" target="_blank">notebook</a>)</li>
<li><a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a> for providing a solution to training using deepspeed (refer to his <a href="https://www.kaggle.com/code/itsuki9180/using-deepspeed-with-hf-trainer" target="_blank">notebook</a>). I trained the deberta-v3-large on 8xRTX4090, the speedup was amazing:)</li>
<li><a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for providing a way to finetune LLMs for sequence classification using RewardTrainer (refer to this <a href="https://www.kaggle.com/code/datafan07/single-model-rewardtrainer-lora-llm/notebook" target="_blank">notebook</a>)</li>
<li>and all the teams for sharing their valuable solution for this task. I need spend one more weekend to learn all the techniques:)</li>
</ul>
|
Kaggle - LLM Science Exam
|
21st place private LB solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi fellow kagglers,</p>
<p>We are the “Contextual Kagglers” team (<a href="https://www.kaggle.com/alexryzhkov" target="_blank">Alex</a>, <a href="https://www.kaggle.com/titericz" target="_blank">Giba</a>, <a href="https://www.kaggle.com/olyatsimboy" target="_blank">Olya</a> and <a href="https://www.kaggle.com/ey4read1mk4realtv" target="_blank">Dima</a>) with the 0.916 private LB score (21st place) and 0.928 public LB score (11th place). Yep, we were in gold medals on public and gone out of it on private - sad news 🙁</p>
<p>Here we want to disclose our solution and share some interesting insights on the data and competition itself. One more thing - I want to mention here that the <strong>majority of ideas and work were done by <a href="https://www.kaggle.com/olyatsimboy" target="_blank">Olya</a> and <a href="https://www.kaggle.com/ey4read1mk4realtv" target="_blank">Dima</a></strong>, so they had a huge impact on the final solution score. I also would like to congratulate <a href="https://www.kaggle.com/olyatsimboy" target="_blank">Olya</a> on <strong>her new Kaggle Competitions Expert tier</strong>.</p>
<h2>Final structure</h2>
<p>The <strong>structure of our solution</strong> is like the scheme below:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F19099%2Fea90eb23380f0e712a493076d6f96e3c%2F2023-10-16%20%2016.21.43.png?generation=1697462519468216&alt=media" alt=""></p>
<h2>Retrieval strategies</h2>
<p>It contains <strong>2 different context retrieval strategies</strong>:</p>
<ul>
<li>the first is based on the <strong>full set of Wikipedia article sentences (around 117 mln elements)</strong> and <code>all-MiniLM-L6-V2</code> and <code>GTE-base</code> based FAISS quantized indices (we use an IVF, PQ64 index; a minimal sketch of building such an index appears right after this list). Here we used <strong>both variants (only prompt and prompt + all answers)</strong> to find the best context - so the final count of the contexts from this part is 4:<ul>
<li><code>all-MiniLM-L6-V2</code> with prompt,</li>
<li><code>all-MiniLM-L6-V2</code> with prompt and all answers, </li>
<li><code>GTE-base</code> with prompt, </li>
<li><code>GTE-base</code> with prompt and all answers</li></ul></li>
<li>the second one is the TF-IDF retrieval part from the great kernel created by <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> with the only change that we added <code>sublinear_tf=True</code> for the 200k parsed paragraphs dataset; according to our validation this was the best option. So here we receive 2 more context variants - for the 200k and 270k datasets. </li>
</ul>
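<p>For reference, a minimal sketch of building such a quantized FAISS index (IVF with 64 product-quantization sub-vectors) over sentence embeddings might look like the following; <code>embeddings</code> is an assumed float32 array, and <code>nlist</code> and the <code>nprobe</code> value are illustrative rather than the team's exact settings.</p>
<pre><code># IVF + PQ64 index over normalized sentence embeddings (cosine similarity via
# inner product). The number of PQ sub-vectors (64) must divide the embedding dim.
import faiss

def build_ivf_pq(embeddings, nlist=4096, m=64, nbits=8):
    dim = embeddings.shape[1]
    quantizer = faiss.IndexFlatIP(dim)
    index = faiss.IndexIVFPQ(quantizer, dim, nlist, m, nbits)
    faiss.normalize_L2(embeddings)     # in-place, expects float32
    index.train(embeddings)            # a representative training sample is enough
    index.add(embeddings)
    index.nprobe = 32                  # the parameter the team found most important
    return index
</code></pre>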
<h2>Single models</h2>
<p><strong>As for single models</strong>, we have used 2 common architectures as a base for our fine tuned checkpoints:</p>
<ul>
<li>DeBERTaV3 Large</li>
<li>LongFormer Large<br>
We finetuned our models using the 60k <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> dataset and our own generated 39k STEM dataset. To enrich the variety of contexts for the 39k custom dataset, we generated refined contexts using the GTE-base index, both with the prompt only and with the prompt plus all answers. For finetuning we trained <code>ckpt-3850</code>, <code>ckpt-7200</code> and <code>ckpt-8750</code> for the DeBERTa architecture; at each step we relabeled the least confident samples, i.e. those where the finetuned model put the correct answer in last place. For LongFormer we took only the last checkpoint due to the inference time limitations. During inference we used a context limited to 2300 characters for the DeBERTa models and no limit for LongFormer (we grid-searched over 2000, 2100, 2200, 2300, 2400 and 2500 locally on a 500-sample dataset, and the 2300 limit appeared optimal in terms of inference time and the final metric MAP@3).</li>
</ul>
<h2>What we have tried and it didn’t work:</h2>
<ul>
<li>CutOut, CutMix augmentations</li>
<li>separate models for each contexts</li>
<li>reward training with and without hard negatives</li>
<li>multiple negatives loss</li>
<li>RoBERTa, ELECTRA, BERT finetuning</li>
<li>averaging over answer order during retrieval: <code>mean(prompt + A + B + C + D + E, prompt + E + A + B + C + D , prompt D + E + A + B + C … )</code> gives exactly the same score on 500 samples, while <code>mean(prompt + A, prompt + B, prompt + C, prompt + D, prompt + E)</code> scored lower on MAP@3</li>
<li>t5, e5, GTE and BGE, but for <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> datasets, TF-IDF was the best option</li>
<li>different index versions, the only meaningful parameter was nprobs</li>
<li>TF-IDF ranker + BM25 as reranker.</li>
<li>Training a single question+single answer pair classifier.</li>
</ul>
<p><strong>We do regret not probing other embedder options for the full Wikipedia index creation and using only sentences rather than chunks.</strong></p>
<h2>Compositions</h2>
<p><strong>To create compositions</strong> we have checked a lot of crazy variants including blending, stacking and backup strategy - all of them were tested and compared on the 500 rows publicly available (200 + 300 rows). The best variant we have found here is to blend 4 predictions from the same DeBERTaV3 checkpoint based on contexts from the first retrieval strategy to receive so called <code>blend_7200</code> and <code>blend_8500</code> along with average backup for predictions <code>DeBERTa_3850</code> using the LongFormer predictions. The final formula looks like: 0.58 * blend_7200 + 0.42 * blend_8500 + TF-IDF_backups<br>
Important notes we found about blending:</p>
<ul>
<li>Blending predictions for more than 4 models at a time on the 500 rows validation dataset decrease the MAP@3 score on both public and private LB</li>
<li>Blending predictions compositions from different retrieval strategies also made the things worse in comparison with simple averaging</li>
<li>The backup strategy for TF-IDF retrieval works better if we replace <code>DeBERTa_3850</code> predictions with <code>max_probability < 0.514</code> using the blend <code>0.5 * DeBERTa_3850 + 0.5 * LongFormer</code> instead of the LongFormer predictions themselves.</li>
</ul>
<h2>Submissions timing</h2>
<p>As our notebooks can run only for 9 hours max, we <strong>needed to monitor how long each submission took</strong>. This can help us find out how many more models we can add to our solution, and also helps to find out in which part an OOM/exception occurred. To check the submission time we created a specific script which was run in a tmux session with a <code>while True</code> loop inside to check the submission status every minute. So we know that <strong>our best submission took 08:52:43 out of the 9 hour timeout</strong> - pretty awesome usage of the time, right? 🙂 </p>
<p>As this is not the last kernel competition, <strong>I would like to share my script <a href="https://www.kaggle.com/code/alexryzhkov/submissions-timer-script/notebook" target="_blank">here</a></strong>, which you can use in the further competitions on your side. You can also change the <code>GMT_OFFSET</code> variable to make it more suitable with your timezone.</p>
<h2>Another tricks</h2>
<p>We also tried to use <strong>pseudolabeling techniques</strong> to make our predictions look more similar to the test data distribution. Basically, adding the 4k test samples on top of the 500 OOF dataset and running ML models didn't score better than the simple weighted average approach.</p>
<h2>Conclusion</h2>
<p>Thanks Kaggle and other participants for such a great competition. For me even after 12 years of kaggling this is still full of learning and fun.</p>
<p><a href="https://www.kaggle.com/asif00" target="_blank">@asif00</a> It is already released - the link is under word <strong>here</strong> in the post :)</p>
|
Kaggle - LLM Science Exam
|
28th-place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to competition organizers for hosting this competition. I would also like to congratulate everyone who made it through this tough competition to the end.</p>
<p>Unfortunately my solution overfits the public lb and I missed the gold medal, but I would like to share a summary of my solution here.</p>
<h2>Wikipedia data for retrieval</h2>
<p>As reported in <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595" target="_blank">this discussion</a>, there're cases in which some numbers or even paragraphs are missing from the wikipedia data, so I created whole wikipedia plain text from <a href="https://dumps.wikimedia.org/other/cirrussearch/" target="_blank">CirrusSearch dumped data</a>.</p>
<h2>Context retrieval</h2>
<p>I combined two types of retrieval methods.</p>
<h3>1. bi-gram tfidf inverted index</h3>
<p>For each wikipedia article, I created a 2-gram tfidf based inverted index. I referred to <a href="https://github.com/GINK03/minimal-search-engine" target="_blank">this repository</a> to create the inverted index. Indexing all articles resulted in slow search speeds, so I indexed about half of the articles based on their creation date and popularity.</p>
<h4>1st step</h4>
<p>Using prompt and answer as query, extract some wikipedia articles with high similarity from the inverted index database. The example of retrieval notebook is <a href="https://www.kaggle.com/code/daisuketakahashi/wikipedia-bi-gram-inverted-index/notebook?scriptVersionId=146229927" target="_blank">here</a>.</p>
<h4>2nd step</h4>
<p>The extracted wikipedia articles are split into sentences, and the similarity with the query is calculated by tfidf to extract some sentences. The three sentences before and after the extracted sentence were added as the context.</p>
<h3>2. faiss</h3>
<p>The full text of wikipedia was split into sentences so that each chunk is about 25 words. Each chunk was vectorized by <a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2" target="_blank">sentence-transformers/all-MiniLM-L6-v2</a> and a faiss index was created. The faiss index was quantized and compressed to about 7 GB.</p>
<h4>1st step</h4>
<p>Using prompt and answer as query, extract some chunks with high similarity from the faiss index.</p>
<h4>2nd step</h4>
<p>The extracted chunks and the chunks before and after were used as context.</p>
<h2>Model</h2>
<p>Almost the same as a baseline notebook.<br>
I trained two deberta-v3-large models with different training datasets.<br>
Inference was run with the following 7 contexts, and the results were ensembled by simply averaging each output.</p>
<ul>
<li>two contexts from bi-gram tfidf inverted index</li>
<li>five contexts from faiss index</li>
</ul>
|
Kaggle - LLM Science Exam
|
2nd Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! I split a wiki page into the sentences using <code>spacy</code> package. I then group sentences into the chunk with at least 1000 characters long. You can try to retrieve the chunk from the index <a href="https://www.kaggle.com/datasets/sorokin/graelo-wikipedia-20230601-en" target="_blank">graelo-wikipedia-20230601-en</a>. I kept the newline character as a separator between sentences.</p>
<p>Thank you! I actually tried probing the test data and my estimation was around 5%. But I wasn't sure about it.</p>
<p>Thanks to Kaggle for organizing this competition. I do similar things in my work, and it was interesting to see what an open community could achieve here. My final solution is mostly based on Deberta type models and only half of the test questions are handled by Mistral LLM.</p>
<h2>System components</h2>
<ol>
<li><p>Knowledge database<br>
The Hugging Face dataset <a href="https://huggingface.co/datasets/graelo/wikipedia/viewer/20230601.en" target="_blank">graelo/wikipedia/20230601.en</a> was used as a single source of knowledge. Each document was divided into sentences and grouped into overlapping chunks with an average length of 1000 characters. The chunks were indexed using the <code>pyserini.index.lucene</code> package. The resulting index takes up 23 GB, and retrieving 200*5 queries takes about 5 minutes.</p></li>
<li><p>Reranker model<br>
Each retrieved chunk was rescored using a custom Reranker model. This Reranker <code>deberta-v3-base</code> model is built like the standard <code>DebertaV2ForMultipleChoice</code> model, but instead of predicting the correct answer given a context, this model predicts the best context for a given question/answer pair. To create the training dataset for this, a teacher model was used to label pseudo-ranks. Specifically, for each correct question/answer pair, a few contexts were retrieved and evaluated using the teacher model. The best context was considered the correct option for the Reranker model. The teacher model was taken from early experiments and is based on the <code>deberta-v3-large</code> model.</p></li>
<li><p>Multiple Choice model<br>
Nothing special here, the standard <code>DebertaV2ForMultipleChoice</code> approach was used for the Deberta and Mistral based models. Just one note: to train on datasets with a different number of options, the loss function was scaled like this: <code>loss * (self.num_labels / num_choices_in_sample)</code>.</p></li>
<li><p>Masked LM<br>
Error analysis showed that the MC models performed very poorly on questions where the options differ in only a few words. For these specific cases, when the options are very similar and quite long, a special Masked LM was used. It is very easy to train this model, but it is quite difficult to apply it to answering a 5-option question. Ideally, we should find a multiple sequence alignment for the 5 options and compare the differing words in a single pass. But I couldn't find a python package for this and decided to use <code>sed.standard_sed_backtrace</code> between each pair of options. </p></li>
<li><p>Cross reference options<br>
To be honest, I thought the shakeup would be stronger. I assumed that the test data included more questions with options like <code>None of the above</code>. For example, when I replaced all correct answers in my validation sets with <code>None of the above</code>, the quality drastically dropped from 0.9 to 0.3 MAP@3. Keeping that in mind, I built a second stage ranker with more training examples like this.</p></li>
<li><p>XGBRanker<br>
XGBRanker was used as the fusion mechanism. This ranker is trained on validation sets and is based on a few features: scores from retrieval models, logits from Deberta/Mistral MC models and one feature flag for <code>None of the above</code> options (a minimal sketch of this fusion stage is shown right after this list).</p></li>
</ol>
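<p>A minimal sketch of such an XGBRanker fusion stage: each row is one (question, option) pair, the features combine retrieval scores and model logits plus the <code>None of the above</code> flag, and groups of size 5 tell the ranker which rows belong to the same question. The feature values below are random placeholders, not real data.</p>
<pre><code># Listwise fusion of per-option features with XGBRanker; groups of 5 rows mark
# the options of one question, and predictions are sorted per group for MAP@3.
import numpy as np
from xgboost import XGBRanker

n_questions, n_options = 200, 5
X = np.random.rand(n_questions * n_options, 6)   # stand-in feature matrix
y = np.zeros(n_questions * n_options, dtype=int)
y[::n_options] = 1                               # one "correct" row per question (dummy)
groups = np.full(n_questions, n_options)         # 5 options per question

ranker = XGBRanker(objective="rank:pairwise", n_estimators=200, learning_rate=0.1)
ranker.fit(X, y, group=groups)

scores = ranker.predict(X).reshape(n_questions, n_options)
top3 = np.argsort(-scores, axis=1)[:, :3]        # per-question ranking of options
</code></pre>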
<h2>Failed experiments</h2>
<ul>
<li>Focal loss for <code>DebertaV2ForMultipleChoice</code></li>
<li>Removing causal constraints for Llama attentions</li>
</ul>
<h2>Datasets</h2>
<table>
<thead>
<tr>
<th>Author</th>
<th>Name</th>
<th>Size</th>
<th>Licence</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a></td>
<td>all</td>
<td>39249</td>
<td></td>
</tr>
<tr>
<td><a href="https://www.kaggle.com/leonidkulyk" target="_blank">@leonidkulyk</a></td>
<td>stem_1k_v1</td>
<td>928</td>
<td></td>
</tr>
<tr>
<td><a href="https://www.kaggle.com/nlztrk" target="_blank">@nlztrk</a></td>
<td>eduqg_llm_formatted</td>
<td>3297</td>
<td></td>
</tr>
<tr>
<td><a href="https://www.kaggle.com/mozattt" target="_blank">@mozattt</a></td>
<td>test</td>
<td>3084</td>
<td></td>
</tr>
<tr>
<td><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a></td>
<td>MMLU</td>
<td>17433</td>
<td>NC</td>
</tr>
<tr>
<td>AI2</td>
<td><a href="https://huggingface.co/datasets/openbookqa" target="_blank">openbookqa</a></td>
<td>5957</td>
<td></td>
</tr>
<tr>
<td>AI2</td>
<td><a href="https://huggingface.co/datasets/ai2_arc" target="_blank">ai2_arc</a></td>
<td>7787</td>
<td></td>
</tr>
<tr>
<td>AI2</td>
<td><a href="https://huggingface.co/datasets/qasc" target="_blank">qasc</a></td>
<td>9060</td>
<td></td>
</tr>
<tr>
<td>AI2</td>
<td><a href="https://huggingface.co/datasets/sciq" target="_blank">sciq</a></td>
<td>13679</td>
<td>NC</td>
</tr>
</tbody>
</table>
<h2>Ablation analysis</h2>
<table>
<thead>
<tr>
<th>System</th>
<th>Private Score</th>
<th>Public Score</th>
<th>Kernel</th>
</tr>
</thead>
<tbody>
<tr>
<td>k=1, 2xDebertaMC</td>
<td>0.909763</td>
<td>0.907823</td>
<td></td>
</tr>
<tr>
<td>k=8, 2xDebertaMC, Reranker</td>
<td>0.916119</td>
<td>0.907823</td>
<td></td>
</tr>
<tr>
<td>k=8, 2xDebertaMC, Reranker, XGBRanker</td>
<td>0.915963</td>
<td>0.909904</td>
<td></td>
</tr>
<tr>
<td>k=8, 2xDebertaMC, Reranker, XGBRanker, MLM</td>
<td>0.915546</td>
<td>0.911568</td>
<td></td>
</tr>
<tr>
<td>k=8, 2xDebertaMC, Reranker, XGBRanker, MLM, MistralMC</td>
<td>0.931489</td>
<td>0.931543</td>
<td><a href="https://www.kaggle.com/code/sorokin/llm-xgboost-abc?scriptVersionId=145958932" target="_blank">llm-xgboost-abc</a></td>
</tr>
</tbody>
</table>
<p>*k - number of documents retrieved from the index.</p>
<h2>Conclusion</h2>
<p>I think the key components of this solution are the full Wikipedia index and XGBRanker on top of different models: BM-25/Reranker scores, Multiple Choice Deberta and Mistral logits.</p>
|
Kaggle - LLM Science Exam
|
317th Place Solution for the LLM Science Exam
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank Kaggle for hosting <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam" target="_blank">LLM Scinece competition</a> that answer multiple-choice questions written by an [LLM]. Despite joining my first Kaggle competition just a month before the deadline and not achieving good results😅, I learned a lot about LLMs including how to train them on small GPUs and make effective inferences. The experience was still awesome, and I'm excited to share the lessons I learned from my experiments😁. </p>
<h1>Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li>
</ul>
<h1>Overview of the Approach</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16611574%2F84814798a66c9a4555d6521bf647cc8a%2FPicture1.jpg?generation=1697668918085576&alt=media" alt="Workflow"></p>
<p>The figure illustrates the LLM-based question answering system. For each question, we first build up a prompt that consists of: (1) Instruction, (2) Background context relevant to the question, (3) Question and (4) Answer options. Second, we pass the prompt to the LLM, which is run one layer after another to save memory space. Finally, the LLM returns the scores of each answer option, and we select the top three options as the final result. The final solution is the Platypus2-70B-Instruct model with Retrieval-Augmented Generation (RAG) and a weight loader, reaching an accuracy of 0.888. The notebook is published publicly <a href="https://www.kaggle.com/code/minhsienweng/patypus2-70b-wiki-retrieval-weight-loader" target="_blank">here</a>. </p>
<p><strong>Step 1.</strong> Retrieval-Augmented Generation (RAG) retrieves Wikipedia texts as background context for each question to help the LLM generate accurate predictions.<br>
(a) Generate the question embeddings by concatenating the question and answer options, then converting them using the sentence transformer model <code>all-MiniLM-L6-v2</code>.<br>
(b) Search the Wikipedia embeddings to find the top 5 most similar Wikipedia texts to the question embedding, based on cosine similarity.<br>
(c) Store the retrieved Wikipedia texts to be used as the background context of the question.</p>
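<p>A minimal sketch of step 1, assuming the Wikipedia embeddings were precomputed with the same sentence transformer (the small corpus and question below are placeholders):</p>
<pre><code>from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# stand-ins for the precomputed Wikipedia corpus and its embeddings
wiki_texts = ["Wikipedia passage one ...", "Wikipedia passage two ..."]
wiki_embeddings = model.encode(wiki_texts, convert_to_tensor=True, normalize_embeddings=True)

question = "Which force binds protons and neutrons in a nucleus?"
answers = {"A": "Gravity", "B": "The strong nuclear force", "C": "Friction",
           "D": "Magnetism", "E": "Surface tension"}

# (a) one embedding for the concatenated question + answer options
query = question + " " + " ".join(answers.values())
query_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)

# (b) top-k most similar passages by cosine similarity
hits = util.semantic_search(query_emb, wiki_embeddings, top_k=2)[0]

# (c) keep the retrieved texts as background context for the prompt
context = "\n".join(wiki_texts[h["corpus_id"]] for h in hits)
</code></pre>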
<p><strong>Step 2.</strong> Construct the prompt for each question<br>
For each question, create a prompt using <code>question text</code>, <code>answer options</code> and <code>background context</code> (relevant Wikipedia texts). Here is an example of question prompt.<br>
<code>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Instruction: Your task is to analyze the question and answer below. If the answer is correct, respond yes; if it is not correct, respond no. As a potential aid to your answer, background context from Wikipedia articles is at your disposal, even if it might not always be relevant.
Input: Background context: {context}
Question: {question}
Proposed answer: {answers['A']} {answers['B']} {answers['C']} {answers['D']} {answers['E']}</code></p>
<p><strong>Step 3.</strong> Predict the answer to a question by passing the prompt to LLM <br>
(a) Run the LLM model layer by layer. The <code>Platypus2-70B</code> model is too large (140 GB) for Kaggle's T4 GPUs, so we split it into layers and run one layer at a time to save memory. Each layer is roughly 1.7 GB, which is small enough to fit on most GPUs.</p>
<p>(b) The question prompt is passed to the LLM. The LLM returns a probability distribution normalised over all answer options. We sort the answer options by their probabilities and select the top three for the question as submission result. </p>
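<p>Turning the per-option probabilities into a MAP@3 submission row is then a simple argsort (a small illustrative sketch, not the notebook's exact code):</p>
<pre><code>import numpy as np

option_letters = np.array(list("ABCDE"))
probs = np.array([0.10, 0.05, 0.60, 0.20, 0.05])   # hypothetical normalised option scores

top3 = option_letters[np.argsort(probs)[::-1][:3]]  # highest-scoring options first
prediction = " ".join(top3)                         # e.g. "C D A"
</code></pre>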
<p>The prediction process is slow and memory hungry, so we use multithreading to parallelize prediction tasks, caching to reuse results, and memory clearing to reduce memory usage after each layer is finished.</p>
<h1>Details of the submission</h1>
<p>I tried the below solutions apart from the final solution.</p>
<h2>Fine-tuned LLMs</h2>
<p>I first tried the <code>flan-t5-base</code> and <code>flan-t5-large</code> models without fine-tuning, but their accuracy was poor (0.46 and 0.33). So I decided to fine-tune the models. Since fine-tuning a large model takes a lot of time, I switched to the smaller <code>bert-base-uncased</code> model (110M parameters) as an experiment and fine-tuned it on the given training dataset, which improved the accuracy to 0.55. As the given training dataset is small (200 questions), another attempt was to add 6,500 extra examples to the training dataset, but this only resulted in a small improvement (0.56). This is likely because the fine-tuned LLMs alone could not handle such a complex task and they need more background text to make predictions.</p>
<h2>Short Sentence texts as background context</h2>
<p>On the discussion forum, Retrieval-Augmented Generation (RAG) has been shown to improve LLM accuracy and efficiency by finding relevant Wikipedia data as context, supplementing LLMs with reliable information, and reducing model retraining costs.</p>
<p>The RAG approach was used to retrieve Wikipedia sentences relevant to each question, based on cosine similarity. The retrieved sentences were then combined with the question to form a prompt for the model. Our solution used the <code>deberta-v3-large</code> model (304M), which resulted in a significant improvement in accuracy (to 0.75). A voting ensemble of pretrained models was also tried, but the improvement in accuracy was slight (to 0.78). RAG can improve accuracy, but its benefit is limited because the retrieved sentences may come from different articles and may not be well-connected enough to provide useful information. Therefore, we should provide the models with longer and more coherent paragraph texts rather than individual sentences. This allows LLMs to infer the answer from the context of the longer text and improves their accuracy.</p>
<h2>Long Paragraph texts as background context</h2>
<p>The next solution used TF-IDF to retrieve paragraph texts from Wikipedia articles as background context for a question. As these paragraphs are longer than the maximum input length of the BERT transformer (512 tokens), we used the LongFormer model to process them (4096 tokens or longer). Additionally, to increase diversity, the solution used two different datasets and combined the model predictions from both as the final result. Our solution used the Longformer-RACE model, a smaller language model (2GB), to achieve an accuracy of 0.85 with a faster execution time (approximately 40 minutes on two T4 GPUs). Because this model is comparatively small, in our next (final) solution we experimented with a larger language model to improve performance.</p>
<h3>Very Large LLM</h3>
<p>In our final solution, we used the very "large" <code>Platypus2-70B-Instruct</code> model, which contains 70 billion parameters and amounts to 140 GB. The accuracy improved to 0.888, even when the model was given short sentence texts as background context. However, the downsides of this solution are the long execution time (200 questions take 3 hours on two T4 GPUs) and the high memory demands.</p>
<p>Because I have run out of my weekly quota and also have another competition close to the deadline, I have not been able to improve the final solution. My next attempt would be to pass the long paragraph texts to this "large" LLM and see if we can obtain better accuracy.</p>
<h1>Sources</h1>
<p>The final solution was inspired by the notebook by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> and the LongFormer solution by <a href="https://www.kaggle.com/MB" target="_blank">@MB</a></p>
<ul>
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/440620" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/440620</a></li>
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595</a></li>
</ul>
|
Kaggle - LLM Science Exam
|
31st Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No, I haven't made any changes from the public notebook. It finishes in about 4-5 hours without any modifications.</p>
<p>Yes, the whole submission!!</p>
<p>I've published the notebook, so please take a look. This notebook takes about 6 hours.<br>
<a href="https://www.kaggle.com/code/sugupoko/llm-31th-place-solution/notebook" target="_blank">https://www.kaggle.com/code/sugupoko/llm-31th-place-solution/notebook</a></p>
<h1><a href="https://www.kaggle.com/code/sugupoko/llm-31th-place-solution/notebook" target="_blank">Our Inference code</a></h1>
<h1><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/445438" target="_blank">70k Dataset</a></h1>
<p>Thank you to the host and the competitors.<br>
It was an interesting competition for me.</p>
<p>My first NLP competition was a great learning experience for me.<br>
I'm looking forward to the solutions from the top.</p>
<h2>Team Member</h2>
<p><a href="https://www.kaggle.com/shuichiurabe" target="_blank">@shuichiurabe</a><br>
co-workers!!</p>
<h2>Solution overview</h2>
<h3>inference flow</h3>
<p>We decided on a strategy of fewer models and more variation in the data. After the 270k dataset was released and LB scores increased significantly, we took this as validation of the approach.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2F56096167685f983ea2d2a27a17a6798e%2F2023-10-11%20090525.png?generation=1696982746546638&alt=media" alt=""></p>
<h3>model</h3>
<ul>
<li>we trained with the 60k dataset from <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">cdeotte's discussion</a></li>
<li>we only changed the max length to 640, froze the first 18 layers, and froze the embeddings (a minimal sketch of this freezing follows this list)</li>
</ul>
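<p>A minimal sketch (assumed, not the exact training script) of what freezing the embeddings and the first 18 encoder layers looks like for DeBERTa-v3-large:</p>
<pre><code>from transformers import AutoModelForMultipleChoice

model = AutoModelForMultipleChoice.from_pretrained("microsoft/deberta-v3-large")

FREEZE_LAYERS = 18
# freeze the embedding matrix
for param in model.deberta.embeddings.parameters():
    param.requires_grad = False
# freeze the lowest 18 transformer layers
for layer in model.deberta.encoder.layer[:FREEZE_LAYERS]:
    for param in layer.parameters():
        param.requires_grad = False
</code></pre>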
<h2>Things that didn't go well:</h2>
<ul>
<li>Searching for similar documents. <ul>
<li>Training the Sentence Transformer.</li>
<li>Replacing the Sentence Transformer.</li>
<li>BM25 method</li></ul></li>
<li>Regarding answer prediction:<ul>
<li>Increasing the data.</li>
<li>Spent $70 on custom data, but it was ineffective.</li></ul></li>
</ul>
<h2>Things I couldn't do:</h2>
<ul>
<li><p>Searching for similar documents.</p>
<ul>
<li>Expanding data sources.</li>
<li>Couldn't due to PC specs (Needed more than 200GB RAM).</li></ul></li>
<li><p>Regarding answer prediction (continued):</p>
<ul>
<li>Making labels soft.<ul>
<li>Wanted to give partial scores like 0.5 when the answer contained some correct content.</li></ul></li></ul></li>
</ul>
<h2>Reference previous competition</h2>
<p>Many of our experiments drew on previous competitions:</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/leaderboard" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/leaderboard</a></li>
<li><a href="https://www.kaggle.com/competitions/AI4Code/leaderboard" target="_blank">https://www.kaggle.com/competitions/AI4Code/leaderboard</a></li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/leaderboard" target="_blank">https://www.kaggle.com/competitions/feedback-prize-effectiveness/leaderboard</a></li>
<li><a href="https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/leaderboard" target="_blank">https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/leaderboard</a></li>
<li><a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/leaderboard" target="_blank">https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/leaderboard</a></li>
</ul>
|
Kaggle - LLM Science Exam
|
37th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>37th Place Solution</h1>
<p>First off, I would like to thank my teammate, Simon (@simonveitner) for his collaboration. He is definitely responsible for most of the work. Second, I would like to thank all the people who contributed datasets and public notebooks. It seemed like any idea we had was published publicly within the next week (looking at you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, <a href="https://www.kaggle.com/mbanei" target="_blank">@mbanei</a>, and <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>). This was my first foray into NLP and LLMs, and it has been quite instructive.</p>
<p>Our solution is based on high quality RAG and an ensemble of models. We pulled results from <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>, <a href="https://www.kaggle.com/jasonzhang2022" target="_blank">@jasonzhang2022</a>, <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>, and <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a> for modelling. We would like to thank <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> for his datasets, especially when he included the retrieval context, and <a href="https://www.kaggle.com/yalickj" target="_blank">@yalickj</a> for his 300 high quality samples. We also used <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a> for his optimization of ensemble weights.</p>
<h2>Context Retrieval</h2>
<p>Early on, it was clear from the way the train and test datasets were generated we needed some context to do the reverse and then would rely on the language model to do reasoning over the context. Given that it was an easier task to pick from 5 answers than to generate a question in the first place, we had hope that the smaller models would be able to match the work of the larger LLMs used for generating questions. As is pointed out by <a href="https://www.kaggle.com/phillippsinger" target="_blank">@phillippsinger</a> <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446240" target="_blank">here</a>, there is a limit to the best you can do because GPT-3.5 occasionally gets it wrong, as for example the <em>triskeles</em> question pointed out in discussions early in the competition (sorry, I could not find the original discussion now).</p>
<p>Initial work focused on the 60k dataset from <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a>, but we soon realized it was missing many key articles. We then moved to the larger full wikipedia set, which would allow us to train with non-science related questions. Further research showed the full set was missing articles, so we parsed and uploaded our own full set. After some initial small successes with a custom reranking model and some tests with section sized inputs, <a href="https://www.kaggle.com/mbanei" target="_blank">@mbanei</a> published the now famous longformer notebook and 270K article set, and we abandoned this work to focus on model building. </p>
<p>Simon did the hard work of testing different embedding models, and we settled on <code>bge-base-en-v1-5</code>, uploaded by <a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a>. Incidentally this did better than the large model. We then used <code>LuceneSearcher</code> in the pyserini package for the lookup. </p>
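<p>A minimal sketch of a keyword lookup with pyserini's <code>LuceneSearcher</code>; the index directory is hypothetical, and the import path may differ across pyserini versions:</p>
<pre><code>from pyserini.search.lucene import LuceneSearcher

# path to a local Lucene index built over Wikipedia passages (name is illustrative)
searcher = LuceneSearcher("wikipedia_passages_index")

hits = searcher.search("strong nuclear force binding energy", k=10)
for hit in hits[:3]:
    print(hit.docid, round(hit.score, 2))   # document id and BM25 score
</code></pre>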
<h2>Models and Ensembling</h2>
<p>There is not much to add here. We did look into some multiple choice models besides the DebertaV2ForMultipleChoice, but did not get anywhere. We settled on an ensemble of 3 different public notebooks using context lookup with the DebertaV2ForMultipleChoice. We also used one model without any context.</p>
<p>Using the 200 training + 300 high quality samples as our training set for ensembling, we used the ensemble weight optimization introduced by <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>. </p>
<p>With the probability table results from the ensemble, we calculated how much time we had left in the 9 hour window, and calculated the max number of samples we could redo (basically 800 samples per 2 hours left). We used a Platypus 70B model introduced by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>, after retraining the <code>lm_head</code> on a limited number of samples. <em>This turned out to be the game changer, as this model performed better on the private board than the public leading to a huge jump in rankings from public leaderboard to private leaderboard!</em> </p>
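<p>A small illustrative sketch of the "redo the hardest questions" step: rank questions by the margin between the top-two ensemble probabilities and hand the least confident ones to the 70B model (shapes and counts are placeholders):</p>
<pre><code>import numpy as np

probs = np.random.dirichlet(np.ones(5), size=4000)   # hypothetical ensemble probabilities
sorted_probs = np.sort(probs, axis=1)
margin = sorted_probs[:, -1] - sorted_probs[:, -2]   # small margin = unsure prediction

n_redo = 800                                         # how many the remaining time budget allows
redo_idx = np.argsort(margin)[:n_redo]               # question indices passed to the 70B model
</code></pre>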
<h2>Breakdown</h2>
<table>
<thead>
<tr>
<th>Addition</th>
<th>Score Change</th>
</tr>
</thead>
<tbody>
<tr>
<td>BM25 context change</td>
<td>+0.03</td>
</tr>
<tr>
<td>BM25 base model</td>
<td>+ 0.02</td>
</tr>
<tr>
<td>DebertaV3 ensemble</td>
<td>+ 0.02</td>
</tr>
<tr>
<td>Platypus 70B on unsure</td>
<td>+ 0.02</td>
</tr>
</tbody>
</table>
<h2>What didn't work</h2>
<ol>
<li><a href="https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=8618&context=sis_research" target="_blank">Evidence Filter models</a>: We altered the DebertaV2 architecture to include these, but they did not improve the result. Perhaps it was poor training methodology; we had hoped that allowing some contrast between the answers would improve scores. </li>
<li>Sparse attention longer context models such as <code>google/bigbird-roberta-large</code>. These also did not train appropriately in our experiments, although we did not spend much time on them.</li>
</ol>
|
Kaggle - LLM Science Exam
|
3rd place solution [Update + Code links]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!<br>
Yes, fine-tuning the reranker gave a huge boost, but I cannot say it was the main reason for the high score. I think without the 70b LLM in the ensemble, the gain from the reranker would not be that big. But it needs to be checked; I have not done it yet.</p>
<p>Thanks!<br>
Actually, while DeBERTa was a clear winner among BERT-like models, ELECTRA was not so far behind on local CV in the end. In early experiments - in the closed-book setting - DeBERTa was completely crushing all other models with an advantage of 0.1 (like 0.75 vs 0.65). But with added context the difference immediately reduced to ~0.05, and later, with more data, it became even smaller - ~0.005 in the case of ELECTRA and ~0.015 in the case of RoBERTa.</p>
<p>Thanks!<br>
Yes, this competition was all about the trade-off between quality, speed, and memory. I was also curious about what size of reranker to use. I went with the base-sized <code>ibm/re2g-reranker-nq</code>, which immediately improved my scores even without fine-tuning. But I tested fine-tuning only with it and didn't have time to check other options. It would have been very interesting to check much faster options like <code>cross-encoder/ms-marco-MiniLM-L-6-v2</code>, because we could rerank many more passages with them.</p>
<p>Thanks!<br>
Well, BERT-like models are relatively fast, so I ran them over all questions. 70b models are very slow, but the main reason is loading weights from slow Kaggle storage. To avoid loading layers multiple times, one can run all questions through a layer, keep the intermediate results in memory, and then pass them to the next layer. But this will not work for the whole test dataset of 4k questions, as we would run out of GPU memory storing intermediate activations. So I ran only the 500 most challenging questions, which allowed me to do the trick of loading all layers just once. I hope I answered your question. Let me know if I didn't.</p>
<p>Thanks!<br>
With index there are two main options:</p>
<ol>
<li>Quantize it. Raw embeddings for all passages were 80GB in my case, but a quantized index can be much smaller; there is a trade-off between its size and quality. I found the following settings to give reasonably decent results:</li>
</ol>
<pre><code>import faiss

# the numeric values below are illustrative; the originals were lost in formatting
nlist = 1024                      # number of IVF clusters
m = 32                            # number of PQ sub-quantizers
nbits = 8                         # bits per PQ code
d = vectors.shape[1]              # vectors = matrix of all passage embeddings
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)
</code></pre>
<p>Then trained it with random subset of 1M embeddings</p>
<pre><code>index.train(embeddings[train_indices])
</code></pre>
<p>and added all vectors chunk by chunk</p>
<pre><code>index.add(chunk)
</code></pre>
<p>The resultant index was ~8GB, and it was possible to carry out the search on a 16GB GPU.</p>
<ol start="2">
<li>Do search in several iterations. Split all embeddings into N files and create N faiss indexes.</li>
</ol>
<pre><code>
d = embs.shape[1]
index = faiss.IndexFlatL2(d)
res = faiss.StandardGpuResources()
co = faiss.GpuMultipleClonerOptions()
co.useFloat16 = True
co.useFloat16LookupTables = True
index = faiss.index_cpu_to_gpu(res, gpu_num, index, co)
index.add(embs)
D, I = index.search(encoded_questions.astype(np.float32), N_PASSAGES)
I += first_passage_in_chunk_index
</code></pre>
<p>Do this N times and save both <code>D</code> and <code>I</code>. Then load results from these chunks, concatenate them, and sort distances again to obtain indices of best passages.</p>
<pre><code>
import pickle
import numpy as np

indices, distances = [], []
for chunk in chunks:
    # the file names are illustrative; the originals were lost in formatting
    with open(f"indices_chunk_{chunk}.pkl", "rb") as f:
        indices.append(pickle.load(f))
    with open(f"distances_chunk_{chunk}.pkl", "rb") as f:
        distances.append(pickle.load(f))
distances = np.concatenate(distances, axis=1)   # concatenate per-chunk results along the passage axis
indices = np.concatenate(indices, axis=1)
indices_of_indices = np.argsort(distances, axis=1)
final_indices = np.empty(indices.shape, dtype=np.uint32)
for i in range(indices.shape[0]):
    for j in range(indices.shape[1]):
        final_indices[i, j] = indices[i, indices_of_indices[i, j]]
final_indices = final_indices[:, :N_PASSAGES]
</code></pre>
<p>Eventually I went with a slightly modified second approach. I did not use faiss in the end (plain PyTorch dot product instead) because I needed distances to all passages in the index for ensembling two retrieval models, and faiss restricts the maximum number of searched passages to 2048.</p>
<p>First of all, I want to thank the organizers for an amazing competition which provided a nice chance to dive deep into LLMs, and thanks to co-kagglers who shared their datasets and approaches. Specifically, thanks to <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> and <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for high-quality datasets, which helped massively in training good question answering models. Thanks to <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> for showing how to do the impossible and to <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> for developing these ideas and adding Xwin model weights. Special thanks to <a href="https://www.kaggle.com/sugupoko" target="_blank">@sugupoko</a> for sharing the 70k dataset, which helped me tune the re-ranker model (the dataset contributed to the QA models as well), which in turn gave a huge boost to my scores. To contribute to knowledge sharing, I will publish my code, but it is going to take a few days.</p>
<p>Important note before going into details about the solution: I used Platypus2, but still not sure if it is allowed or not due to its license restrictions. But the main reason why I kept it in final submissions is that organizers stated that winning models are not used for any business purpose, which, I guess, implies that commercial usage restrictions in license should not be a problem.</p>
<p><strong>[Update]</strong> Turns out it was fine to use Platypus2, but surprisingly additional ablation studies showed that slightly better scores could have been achieved without it! More details are in a table below.</p>
<h1>Diagram of the solution</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1210465%2F8f8a9a7095f0f75c61c687a8ff02eb62%2FSolution5.png?generation=1697013081108751&alt=media" alt=""></p>
<h1>Some notes about the solution</h1>
<h2>Wikipedia processing</h2>
<p>It starts with Wikipedia dump processing because, as many have noted, the available processed Wikipedia datasets have numerous flaws, from missing numbers to missing articles. I used <a href="https://github.com/attardi/wikiextractor" target="_blank">https://github.com/attardi/wikiextractor</a> and modified it to solve the problem of deleted numerical values. Most likely some problems still remained, but some were solved.</p>
<p>The search for context is performed in one stage at the passage level, as opposed to searching for articles first and then for sentences. The reason is that in some cases a two-stage search simply has no chance of finding the necessary context. For example, in the training data I saw a question about the actions of a character from some play, but the title and beginning of the article about this play do not mention this character, so the encoding of the article does not contain the necessary information. I am not sure how frequent such cases are, but probably not negligible.</p>
<h2>Reranker training</h2>
<p>I saw some comments in other discussions that people tried to train reranker model, but without success. From what I understood in the process of tuning my reranker, there are two key ingredients here:</p>
<ol>
<li>Having (question, candidate) pairs, both positive and negative, drawn from the same distribution as they appear during inference. In my case it was important to use the Wikipedia passages I got from my own Wikipedia processing, and not the context from the original 70k dataset.</li>
<li>Training with hard negatives. There is a caveat here: a raw model that was not pretrained for the reranking task will fail miserably on hard negatives, so it is important to start from a pretrained one (ibm/re2g-reranker-nq in my case) or to train in two stages. (A small illustrative sketch follows this list.)</li>
</ol>
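<p>A hedged sketch of what such reranker fine-tuning might look like with sentence-transformers' <code>CrossEncoder</code>; the example pairs are invented and the ms-marco checkpoint merely stands in for the actual starting model:</p>
<pre><code>from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

train_examples = [
    # positive: a passage that actually supports the question
    InputExample(texts=["What is the Higgs boson?",
                        "The Higgs boson is an elementary particle produced by excitation of the Higgs field."],
                 label=1.0),
    # hard negative: retrieved by the first-stage retriever but not answering the question
    InputExample(texts=["What is the Higgs boson?",
                        "Peter Higgs was a British theoretical physicist born in Newcastle upon Tyne."],
                 label=0.0),
]

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", num_labels=1)
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
model.fit(train_dataloader=loader, epochs=1, warmup_steps=10)
</code></pre>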
<h2>Ablation studies</h2>
<p>It is interesting to see how each part of the solution contributes to the overall score, but I have not done thorough ablation studies because I assembled all the parts together very late. However, there are some hints from earlier submissions:</p>
<table>
<thead>
<tr>
<th>Pipeline configuration</th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>Inference only with 5 debertas (not final models), not tuned reranker</td>
<td>0.894</td>
<td>0.893</td>
</tr>
<tr>
<td>+answering 500 hard questions with Platypus</td>
<td>0.91</td>
<td>0.909</td>
</tr>
<tr>
<td>+replacing debertas with new ones and adding 3 electras and 5 robertas</td>
<td>0.912</td>
<td>0.914</td>
</tr>
<tr>
<td>+replacing reranker with tuned one</td>
<td>0.927</td>
<td>0.922</td>
</tr>
<tr>
<td>+adding Xwin to ensemble</td>
<td>0.928</td>
<td>0.926</td>
</tr>
</tbody>
</table>
<p>I think these results show that the question answering part does its job very well when provided with the correct context (hence the boost from tuning the reranker), and that the weakest part of the solution is the initial retrieval. Which makes sense, since it is the part that is not tuned on competition data, and it also consists of two very small models with very limited capacity that are tasked with indexing a huge dataset. A much higher score could probably have been achieved with tuned retrieval models.</p>
<p><strong>[Update]</strong><br>
Table with measurements of each part's contribution to the final score</p>
<table>
<thead>
<tr>
<th>Pipeline configuration</th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>Final submit</td>
<td>0.9284</td>
<td>0.9286</td>
</tr>
<tr>
<td>Without Xwin</td>
<td>0.9272</td>
<td>0.9257</td>
</tr>
<tr>
<td>Without Platypus2</td>
<td>0.9288</td>
<td>0.9309</td>
</tr>
<tr>
<td>Without both LLMs</td>
<td>0.9165</td>
<td>0.9201</td>
</tr>
<tr>
<td>Without bge-small in retrieval</td>
<td>0.9272</td>
<td>0.9261</td>
</tr>
<tr>
<td>Without MiniLM-L6-v2 in retrieval</td>
<td>0.9258</td>
<td>0.9259</td>
</tr>
<tr>
<td>Without Electra and Roberta in MLMs ensemble</td>
<td>0.9246</td>
<td>0.9268</td>
</tr>
<tr>
<td>Without reranker</td>
<td>0.9113</td>
<td>0.9130</td>
</tr>
</tbody>
</table>
<p><br><br>
Code:<br>
<a href="https://www.kaggle.com/code/podpall/3rd-place-reranker-training" target="_blank">Reranker training</a><br>
<a href="https://www.kaggle.com/code/podpall/3rd-place-full-inference" target="_blank">Full pipeline inference</a></p>
|
Kaggle - LLM Science Exam
|
41st Place Solution for the Kaggle - LLM Science Exam
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>41st Place Solution for the Kaggle - LLM Science Exam</h3>
<p>First of all thank you to the competition office for organizing such a great competition! Also, thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@MB</a> and all the competition participants who shared useful ideas and codes. It was a very exciting and interesting competition! My result was 58th for Public LB, 41st for private LB and I got a silver medal.</p>
<h3>Context</h3>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li>
</ul>
<h3>Overview of the approach</h3>
<p>Our final model was an average ensemble of seven microsoft/deberta-v3-large models (each with a different seed, dataset, and pause-token setting) combined with TF-IDF-based retrieval. The final result was Public LB = 0.916 / Private LB = 0.909.</p>
<h3>Details of the submission</h3>
<p>The datasets used for training were <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s 60k and 99k datasets. We also built a model that takes pause tokens into account, following "Think before you speak: <a href="https://arxiv.org/abs/2310.02226" target="_blank">Training Language Models With Pause Tokens</a>". At inference time, a TF-IDF-based context retrieval method based on <a href="https://www.kaggle.com/mbanaei" target="_blank">@MB</a>'s approach was used.</p>
<p>Below is a summary of CV, Public LB, and Private LB for each model (all use the TF-IDF-based context retrieval method at inference). In all cases the CV validation data consisted of 200 samples from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s 60k dataset.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>microsoft/deberta-v3-large (seed=42, dataset=60k, without pause token)</td>
<td>0.915</td>
<td>0.907</td>
<td>0.905</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=52, dataset=60k, without pause tokens)</td>
<td>0.893</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=62, dataset=60k, without pause tokens)</td>
<td>0.910</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=72, dataset=60k, without pause tokens)</td>
<td>0.895</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=82, dataset=60k, without pause tokens)</td>
<td>0.905</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=42, dataset=60k, with pause tokens)</td>
<td>0.906</td>
<td>0.902</td>
<td>0.902</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=42, dataset=60k+99k, without pause tokens)</td>
<td>0.904</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
<h3>Freeze layers</h3>
<p>Some layers were frozen because microsoft/deberta-v3-large did not learn well otherwise (without freezing, MAP@3 was poor, with CV ~0.37 at the first epoch).</p>
<h3>Max length</h3>
<p>We tried max_length values of 256, 512, and 1024 for training; 512 gave the best CV, so 512 was used.</p>
<h3>TF-IDF-based context retrieving method</h3>
<p>I tried updating <a href="https://www.kaggle.com/mbanaei" target="_blank">@MB</a>'s method, combining two different contexts, setting a threshold for frequency of occurrence, etc., but the most effective changes were applying logarithmic scaling of the TF-IDF frequencies and updating the stopwords. I combined the original stopwords with ENGLISH_STOP_WORDS from sklearn.feature_extraction.text.</p>
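<p>A minimal sketch of those two tweaks (the base stopword list here is a stand-in for the original one):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS

base_stopwords = {"the", "a", "an"}                      # stand-in for the original stopword list
stop_words = list(ENGLISH_STOP_WORDS.union(base_stopwords))

vectorizer = TfidfVectorizer(sublinear_tf=True,          # logarithmic scaling of term frequencies
                             stop_words=stop_words,
                             ngram_range=(1, 2))
corpus = ["wikipedia passage one ...", "wikipedia passage two ..."]
tfidf_matrix = vectorizer.fit_transform(corpus)
</code></pre>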
<h3>Pause tokens</h3>
<p>The following was done to take pause tokens into account during training. (According to the <a href="https://arxiv.org/abs/2310.02226" target="_blank">paper</a>, the accuracy of language models improves when pause tokens are added during training, just as humans do better when they think carefully, so I hoped the CV would improve a little more… I tried the following, but with a more creative method the CV might increase further.)</p>
<ul>
<li>Add pause tokens to tokenizer</li>
</ul>
<pre><code># register a pause token (the token string is illustrative) and resize the embedding matrix
tokenizer.add_special_tokens({"additional_special_tokens": ["<pause>"]})
model.resize_token_embeddings(len(tokenizer))
</code></pre>
<ul>
<li>Add pause tokens in random position on input</li>
</ul>
<pre><code>import random

PAUSE_TOKEN_COUNT = 10   # number of pause tokens to insert (original value was elided; illustrative)

def insert_pause_tokens(sentence, count):
    tokens = tokenizer.tokenize(sentence)
    for _ in range(count):
        pos = random.randint(0, len(tokens) - 1)
        tokens.insert(pos, "<pause>")
    return tokenizer.convert_tokens_to_string(tokens)

def preprocess(example):
    # separator strings follow the referenced open-book training notebook
    context_with_pause = insert_pause_tokens(example["context"], PAUSE_TOKEN_COUNT)
    first_sentence = ["[CLS] " + context_with_pause] * 5
    second_sentences = [
        " #### " + example["prompt"] + " [SEP] " + example[option] + " [SEP]"
        for option in "ABCDE"
    ]
    tokenized_example = tokenizer(
        first_sentence,
        second_sentences,
        truncation="only_first",
        max_length=MAX_INPUT,
        add_special_tokens=False,
    )
    tokenized_example["label"] = option_to_index[example["answer"]]
    return tokenized_example
</code></pre>
<ul>
<li>Custom Loss Settings</li>
</ul>
<pre><code>import torch
import torch.nn.functional as F

def custom_loss(logits, labels, input_ids):
    # per-example cross entropy over the answer choices
    loss = F.cross_entropy(logits, labels, reduction="none")
    # the masking logic below reconstructs a garbled snippet: down-weight examples
    # in proportion to how many pause tokens their inputs contain (approximate)
    pause_id = tokenizer.convert_tokens_to_ids("<pause>")
    pause_mask = (input_ids == pause_id).to(torch.float32).mean(dim=(1, 2))
    masked_loss = loss * (1 - pause_mask)
    return masked_loss.mean()
</code></pre>
<ul>
<li>Setup and run custom trainers</li>
</ul>
<pre><code>class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss = custom_loss(logits, labels, inputs["input_ids"])
        return (loss, outputs) if return_outputs else loss

trainer = CustomTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
    train_dataset=tokenized_dataset,
    eval_dataset=tokenized_dataset_valid,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=1)],
)
trainer.train()
</code></pre>
<h3>Sources</h3>
<ul>
<li><a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2</a></li>
<li><a href="https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2" target="_blank">https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2</a></li>
<li><a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1</a></li>
<li><a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles</a></li>
</ul>
|
Kaggle - LLM Science Exam
|
45th Place solutions , learnings , trials and what could have been | 0.919/0.917 submitted too late :D
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>So openbook context didn't work great for our best 3 models; they were based on context 1 and context 2 generated from the Cohere and 270k datasets via tfidf, some were interleaved with both ctx1 and ctx2, and one was trained just on ctx1. Also, our inference retrieval was better with our custom wiki dataset retrieved via tfidf. I think those were different from your approach, but the CV correlation is a bit confusing on your side; a bit of overfitting is the only thing I suspect.</p>
<p>Maybe do some submissions to test your models again, though.</p>
<p>I don't know, maybe some overfitting, as one of our models did reach 0.9+ CV but was not as good on the private and public LB as a single model; you will see we added it, but it also didn't get a good weight when we ensembled. Or it could be that our retrieval worked well on the private/public data.</p>
<p>Interesting. Our retrieval was similar, though we didn't use MB's 270k dataset for inference retrieval but our custom-made wiki data and MB's Cohere dataset. Were your models also trained on similar data with context generated via tfidf or openbook? Or something else?</p>
<p>Thanks <a href="https://www.kaggle.com/rusg77" target="_blank">@rusg77</a>, and I also think the validation dataset was important, as before that we were not at all confident in our submissions; thanks to <a href="https://www.kaggle.com/yeoyunsianggeremie" target="_blank">@yeoyunsianggeremie</a>. We figured it out pretty late, which luckily still helped us, though we could not submit our 0.917/0.919 ones on time. It is good, though, that some of our approaches worked.</p>
<p>First of all, thanks to Kaggle for hosting this competition, which opened the door to trying LLMs and solving the kinds of problems today's LLMs are expected to solve. Thanks to the whole community for generously sharing ideas and code, without which most of us, at least, would not have progressed this far.<br>
Also thanks to the whole team <a href="https://www.kaggle.com/urvishp80" target="_blank">@urvishp80</a> <a href="https://www.kaggle.com/ayaanjang" target="_blank">@ayaanjang</a> <a href="https://www.kaggle.com/tonymarkchris" target="_blank">@tonymarkchris</a> for joining and helping along the way. We were lucky to fix our CV on time, but not in time to submit the 0.917/0.919 private LB scores; still, it was good that we figured out the former in time :) </p>
<h1>Datasets</h1>
<h2>Train datasets</h2>
<p>Below are the datasets which worked best for us</p>
<ol>
<li>Chris 60k with context</li>
<li>Generating context1 and context2 with <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> tfidf with 270k data and cohere.</li>
</ol>
<h2>Validation</h2>
<p>We are thankful for this dataset <a href="https://www.kaggle.com/datasets/yeoyunsianggeremie/validation-500" target="_blank">https://www.kaggle.com/datasets/yeoyunsianggeremie/validation-500</a> from <a href="https://www.kaggle.com/yeoyunsianggeremie" target="_blank">@yeoyunsianggeremie</a>; it helped us a lot in getting a correlation between CV and LB.</p>
<h2>Retrieval /Infer</h2>
<ol>
<li>Openbook with MiniLM; we also tried mpnet-base-v2, which worked well but was not part of our final submission [0.834 LB with openbook on that]. </li>
<li>MB's tfidf technique on wiki data created with <a href="https://www.kaggle.com/code/nbroad/create-science-wikipedia-dataset" target="_blank">https://www.kaggle.com/code/nbroad/create-science-wikipedia-dataset</a> - worked well</li>
<li>MB's tfidf technique on the Cohere data - worked well</li>
<li>We didn't use MB's 270k dataset for inference retrieval.</li>
<li>We found the sorting order in MB's notebook was reversed (ascending by score); a simple correction (sketched below) helped:</li>
</ol>
<pre><code>context1 =
</code></pre>
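<p>A hedged sketch of the kind of correction meant above, i.e. joining retrieved passages in descending score order (variable names are illustrative, not the actual notebook's):</p>
<pre><code>import numpy as np

scores = np.array([0.12, 0.55, 0.31])            # similarity scores of candidate passages
passages = ["passage a", "passage b", "passage c"]

order = np.argsort(scores)[::-1]                 # descending instead of ascending
context1 = " ".join(passages[i] for i in order)  # most relevant passage first
</code></pre>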
<h1>Training</h1>
<ul>
<li>CV 0.893 (on the 500-sample dataset): DeBERTa on 60k with context</li>
<li>CV 0.8963 (on the 500-sample dataset): DeBERTa on context1 and context2 tfidf context data, trained <a href="https://huggingface.co/docs/datasets/stream#interleave" target="_blank">interleaved</a> with the Hugging Face datasets API .. <a href="https://www.kaggle.com/code/gauravbrills/customdeberta-w-context-seq-mix/notebook" target="_blank">https://www.kaggle.com/code/gauravbrills/customdeberta-w-context-seq-mix/notebook</a> - our best model; interleaving seems to have helped a lot with generalization (a small sketch of interleaving follows this list)</li>
<li>CV 0.896 (on the 500-sample dataset): DeBERTa trained only on context 1 generated via tfidf on 60k</li>
<li>CV 0.836 (on the 500-sample dataset): DeBERTa with 15 frozen layers on the 60k dataset</li>
<li>CV 0.92: LLaMA, but on the 100-sample dataset and not as good on LB; trained with PEFT for multiple-choice sequence classification.</li>
</ul>
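<p>A minimal sketch of the dataset interleaving referenced in the list above (toy rows; the real datasets held the context1/context2 training examples):</p>
<pre><code>from datasets import Dataset, interleave_datasets

ds_ctx1 = Dataset.from_dict({"text": ["q1 with context1", "q2 with context1"]})
ds_ctx2 = Dataset.from_dict({"text": ["q1 with context2", "q2 with context2"]})

# alternate between the two context variants during training
mixed = interleave_datasets([ds_ctx1, ds_ctx2], probabilities=[0.5, 0.5], seed=42)
</code></pre>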
<p><strong>Training graph of our best model with the interleaved dataset (0.8963 CV; 0.8917 CV at train time) below</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F494b9dd2fe5620dfcb63eb0de5e6d166%2Ftrain_0.8917.png?generation=1697333461425893&alt=media" alt=""></p>
<p>Params </p>
<pre><code>warmup_ratio=,
learning_rate=,
per_device_train_batch_size=,
per_device_eval_batch_size=,
num_train_epochs=,
FREEZE_LAYERS =
</code></pre>
<h1>Final ensemble and infer</h1>
<p>We struggled with ensembling until we had the 500-sample dataset; we were just mixing context and non-context models and sometimes LLaMA, but were never confident in our submissions.<br>
Once we got the 500-sample dataset, we picked our top 3 models and ensembled them with <code>average</code> and <code>optuna</code>-weighted schemes.<br>
<a href="https://www.kaggle.com/code/gauravbrills/t2-0-901-openbook-tfidf?scriptVersionId=146318816" target="_blank">https://www.kaggle.com/code/gauravbrills/t2-0-901-openbook-tfidf?scriptVersionId=146318816</a> </p>
<p>We were lucky to get some of these in for the final submission, but missed submitting one last one which would have scored 0.917/0.919 on the LB, as we didn't have time.</p>
<table>
<thead>
<tr>
<th>cv</th>
<th>Private LB</th>
<th>Public LB</th>
<th>description</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.9036</td>
<td><strong>0.909</strong></td>
<td>0.903</td>
<td>before deadline weighted slightly better</td>
</tr>
<tr>
<td>0.9017</td>
<td><strong>0.909</strong></td>
<td>0.905</td>
<td>before deadline SELECTED</td>
</tr>
<tr>
<td>0.91</td>
<td><strong>0.919297</strong></td>
<td>0.917</td>
<td>Missed submitting too late 😃</td>
</tr>
<tr>
<td>0.908</td>
<td><strong>0.917</strong></td>
<td>0.912</td>
<td>Missed submitting too late 😃</td>
</tr>
</tbody>
</table>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F9a42967e5a51c866e27cc0bc18cbc553%2Fllm_sol.png?generation=1697337180495104&alt=media" alt=""></p>
<h1>What tried, did not work or could have worked</h1>
<ul>
<li><strong>Rerankers</strong>: We tried a bunch of rerankers using the ST marco ones, but they did not help on the public LB, though they seem to have been good on private. We also researched ColBERT and <a href="https://huggingface.co/ibm/re2g-reranker-nq" target="_blank">https://huggingface.co/ibm/re2g-reranker-nq</a>, which other teams tried, but somehow lost confidence.</li>
<li>Training on different contexts (openbook vs. tfidf): we were also trying to generate context with openbook to check how well that works, but we could not finish on time.</li>
<li>Dataset cleaning: we fiddled around with mwparserfromhell, LatexNodes2Text, and Beautiful Soup, but lost patience and didn't go ahead with cleaning the wiki data from scratch. It seems some top teams did explore this and implemented it better.</li>
</ul>
<pre><code>from pylatexenc.latex2text import LatexNodes2Text
from bs4 import BeautifulSoup   # imported in the original snippet, though unused below
import mwparserfromhell

def clean_wiki_text(text):      # function name reconstructed; the original was lost
    l2t = LatexNodes2Text()
    code = mwparserfromhell.parse(text)
    print(code.filter_templates())
    for template in code.filter_templates():
        print(template)
        code.replace(template, l2t.latex_to_text(str(template)))
    text = str(code)
    text = text.replace("{{", "").replace("}}", "")   # the replaced strings were elided; illustrative
    return text
</code></pre>
<ul>
<li>Using faiss with different embeddings and simple cosine similarity via ST: we found the faiss indexes over different embeddings weren't giving scores as good as tfidf, so we dropped trying other embeddings even though they were way faster. Plain cosine similarity was a bit GPU-intensive, so we dropped it too, though we think its retrieval was good. All of these were tried with MiniLM, and we could not finish trying BGE or e5 in time :( .</li>
<li><strong>LLMs</strong>: Urvish on our team did crack how to use LLaMA for multiple-choice sequence classification; unfortunately our inference was taking around 4-5 hours, so in the end we could not add it to our ensemble. Plus, we were not able to improve the LLaMA CV much. <br>
Custom LLaMA by <a href="https://www.kaggle.com/urvishp80" target="_blank">@urvishp80</a>: </li>
</ul>
<pre><code># reconstruction of a garbled snippet; elided values (dropout p, classifier size) are illustrative
import torch.nn as nn
from transformers.modeling_outputs import SequenceClassifierOutput

class CustomLlamaModel(nn.Module):
    def __init__(self, backbone, num_labels=5, use_gradient_checkpointing=False):
        super(CustomLlamaModel, self).__init__()
        self.model = backbone
        self.model.config.use_cache = False
        self.config = self.model.config
        self.num_labels = num_labels
        if use_gradient_checkpointing:
            self.model.gradient_checkpointing_enable()
        self.pooler = MeanPooling()          # defined elsewhere in the training code
        self.dense = nn.Linear(self.config.hidden_size, self.config.hidden_size)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.config.hidden_size, 1, bias=False)

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None,
                position_ids=None, inputs_embeds=None, labels=None):
        num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
        batch_size = input_ids.shape[0] if input_ids is not None else inputs_embeds.shape[0]
        flat_input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None
        flat_position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None
        flat_token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None
        flat_attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None
        flat_inputs_embeds = (
            inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))
            if inputs_embeds is not None
            else None
        )
        outputs = self.model.model(input_ids=flat_input_ids, attention_mask=flat_attention_mask,
                                   output_attentions=False, output_hidden_states=True)
        last_hidden_state = outputs.hidden_states[-1]   # last layer's hidden states
        x = self.pooler(last_hidden_state, flat_attention_mask)
        x = self.dropout(x)
        logits = self.classifier(x)                     # one score per (question, option) row
        reshaped_logits = logits.view(batch_size, num_choices)
        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(reshaped_logits, labels)
        if self.model.config.output_attentions:
            attentions = outputs.attentions
        else:
            attentions = None
        if self.model.config.output_hidden_states:
            hidden_states = outputs.hidden_states
        else:
            hidden_states = None
        return SequenceClassifierOutput(
            loss=loss,
            logits=reshaped_logits,
            hidden_states=hidden_states,
            attentions=attentions,
        )
</code></pre>
<p>At the end was a great learning experience for all of us and still learning a lot still from this comp to use in the real world. Will update this as recall other things we tried or shall write a blog .</p>
|
Kaggle - LLM Science Exam
|
47th place solution -- Simple Method Based on Open Codes
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Thanks</h1>
<p>First and foremost, we would like to express our sincere gratitude to the hosts and the Kaggle team for organizing this amazing competition. Thanks to my team member <a href="https://www.kaggle.com/yuanji1239" target="_blank">@yuanji1239</a> for the hard work. Thank you to Chris Deotte, MGoksu, MB, and Radek for sharing the OpenBook technique! </p>
<h1>RAG</h1>
<h2>Dataset</h2>
<ol>
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">Chris Deotte's 60k datasets</a> by <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a> </li>
</ol>
<ul>
<li>this dataset with its original contexts is split into train, eval, and test_offline sets for training the deberta-v3-large model. </li>
</ul>
<ol start="2">
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/425941" target="_blank">Radek's 6.5k dataset</a> </li>
</ol>
<ul>
<li><p>using the <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">tfidf recall code</a> on this dataset. </p></li>
<li><p>this dataset with tfidf recall contexts is divided into train and eval sets for training deberta-v3-large. </p></li>
</ul>
<h2>Retrieval methods</h2>
<ol>
<li><p><a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">all-MiniLM-L6-v2 and tfidf methods</a> based on <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> sharing. </p></li>
<li><p><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/443334" target="_blank">bge-small-faiss by </a><a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> and tfidf methods. </p></li>
</ol>
<h1>Training</h1>
<ol>
<li><p>Using the <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">model training code by </a><a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a> on Dataset 1 (the 60k dataset), we selected 6000--24000 samples, trained with many seeds, and generated models. Then we selected the best one as model 1. </p></li>
<li><p>Based on the code in 1, we added noise using the code below, retrained model 1, and generated model 2. </p></li>
</ol>
<pre><code>import numpy as np

def get_len_q85(df, col):                      # function/variable names reconstructed
    a = df[col].str.split().apply(len)
    len_q85 = int(np.quantile(a, q=0.85))
    return len_q85

def add_mask_noise(s):
    s = s.split()
    s = np.array(s, dtype=object)
    mask = tokenizer.mask_token
    if len(s) > len_q85:
        # the replacement-ratio bounds were elided in the original; values are illustrative
        num_replacements = np.random.uniform(low=0.05, high=0.15)
        num_replacements = int(len(s) * num_replacements)
        maskidx = np.random.choice(len(s), size=num_replacements, replace=False)
        s[maskidx] = mask
    s = " ".join(s.tolist())
    return s
</code></pre>
<ol start="3">
<li>We repeated trainings 1 and 2 on Dataset 2 (the 6.5k dataset), adding revised prompts to the options using the code below. We trained the models and generated models 3 and 4. </li>
</ol>
<pre><code>def add_prompt_to_options(df):       # function name reconstructed; the original was lost
    # illustrative condition: the original test on the prompt text was elided
    cond1 = lambda x: x["prompt"].endswith("?")
    cond1 = df.apply(cond1, axis=1)
    for col in list("ABCDE"):
        # the joining strings between prompt and option were elided; " " is a placeholder
        df.loc[cond1, col] = df.loc[cond1, "prompt"].apply(lambda x: x[:-1]) + " " + df.loc[cond1, col]
    return df
</code></pre>
<h1>Inference</h1>
<p>We used an ensemble of the 4 deberta models and the <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">longformer model</a>, put together by <a href="https://www.kaggle.com/yuanji1239" target="_blank">@yuanji1239</a>, to infer on the online test dataset. </p>
<h1>Thanks for Dr.JI Yuan, all teams and Kaggle 😄</h1>
|
Kaggle - LLM Science Exam
|
49th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I had not had much experience using language models before, so I entered this competition to learn about them. As a result, I was able to learn a lot about working with language models, and in addition, I was able to win my first silver medal!</p>
<p>However, this result wasn't achieved through my own abilities, but through the collective knowledge of many Kagglers, especially the following people.</p>
<ul>
<li><a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> for pointing me in the initial direction of Multiple Choice using Deberta.</li>
<li><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for publishing the RAG approach and providing a large amount of well-maintained training data.</li>
<li><a href="https://www.kaggle.com/yalickj" target="_blank">@yalickj</a> for providing a validation dataset that correlates to LB and help model training.</li>
<li><a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> for providing an improved context retrieving method and pushing the competition to the next stage.</li>
</ul>
<p>What I have done is merely a combination of these things. I did not have the time or ability to make the fundamental and cool improvements that were the major challenges in this competition, such as an efficient and accurate context retrieval method, or the advanced engineering needed to run huge models on limited compute resources (as <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> showed).</p>
<p>Anyway, the approach I took was the following points.</p>
<ul>
<li>A DeBERTa-v3-large multiple-choice model and a Mistral-7B sequence-classification model were trained on the 60k+40k+99k datasets with tfidf-based 270k Wiki contexts attached.<ul>
<li>For DeBERTa-v3-large, I froze the weights of the upper layers to improve the training time.</li>
<li>I originally used LLaMa2-7B as the LLM, but Mistral-7B, which was just released, performed wonderfully. I trained Mistral-7B with 4-bit quantization and a LoRA sequence-classification head and obtained a clear improvement in accuracy (a small sketch follows this list).</li>
<li>Actually, a submission done with a model using only the 60k dataset as training data achieved Public LB 0.9, Private LB 0.911, which was my best, although I could not choose it😅. In my case, adding the 40k+99k training data improved the CV and Public score (0.910) but didn't improve the Private score.</li></ul></li>
<li>Using the DeBERTa-v3-large (multiple choice) and Mistral-7B (sequence classification) trained with the above methods, I performed inference separately for the two contexts (parsed and un-parsed tfidf-based 270k Wiki context) and then used simple soft voting for ensembling.<ul>
<li>In using multiple models, I took care to avoid OOM by writing all code in <code>.py</code> files and having it run as <code>!python xxx.py</code>.</li></ul></li>
</ul>
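<p>A hedged sketch of how such a 4-bit + LoRA sequence-classification setup can be assembled with peft and bitsandbytes; the hyperparameters and target modules are illustrative, not necessarily the ones used here:</p>
<pre><code>import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, TaskType

bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.float16)

# one score per (prompt, option) pair: treat the five options as separate rows
model = AutoModelForSequenceClassification.from_pretrained(
    "mistralai/Mistral-7B-v0.1", num_labels=1,
    quantization_config=bnb_config, device_map="auto")

lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=16, lora_alpha=32,
                         lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
</code></pre>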
<p>Here is the notebook: <a href="https://www.kaggle.com/code/bobfromjapan/49th-solution-deberta-and-mistral-7b-w-270k-cont" target="_blank">https://www.kaggle.com/code/bobfromjapan/49th-solution-deberta-and-mistral-7b-w-270k-cont</a></p>
<p>Once again, I would like to thank all the Kagglers and staff (and my RTX 4090, which kept computing for almost a whole month😀).</p>
|
Kaggle - LLM Science Exam
|
4th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We think ES was essential to our solution in terms of retrieval accuracy and speed. <br>
Of course, you may find another way to perform efficient retrieval.</p>
<p>Thanks, Chris! Your notebooks and discussions are also valuable as usual. See you next competition !</p>
<p><a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> Thank you ! Yes, efficient retrievers are essential for our solution. <br>
It was lucky for us to be able to find this strategy. </p>
<p><a href="https://www.kaggle.com/corochann" target="_blank">@corochann</a> Thanks, and as a colleague I'm proud of your team. What an impressive experience!</p>
<p>First and foremost, we would like to express our gratitude to the hosts and the Kaggle team for arranging this wonderful competition. Also, a huge shout-out to all the participants for fighting through this intense competition. It was an extremely educational and exciting competition.<br>
Below, we present our solutions.</p>
<p>Our solution is based on the high-quality combination of retrieval + deberta v3 models. The implementation of high-quality retrieval and how it can be incorporated as context was critically important.</p>
<h1>Retrieval Part</h1>
<p>Example code: <a href="https://www.kaggle.com/code/linshokaku/4th-elasticsearch-retrieval-example" target="_blank">https://www.kaggle.com/code/linshokaku/4th-elasticsearch-retrieval-example</a></p>
<h2>Retrieval Workflow</h2>
<ul>
<li>Stage 1 <ul>
<li>Sentence-wise keyword retrieval</li>
<li>Using Elasticsearch </li></ul></li>
<li>Stage 2<ul>
<li>Version 3: Elasticsearch score</li>
<li>Version 5: Edit distance score</li>
<li>Version 7: Semantic search score </li></ul></li>
</ul>
<h2>Validation (Retrieval)</h2>
<p>We implemented multiple retrieval rules and evaluated scores in a zero-shot manner, without any model training in between. Initially, evaluations were carried out with llama2 7b and train.csv. After the performance of the context-using pre-trained model improved significantly, that model was used to evaluate a validation set of 2000 instances drawn from the 60k dataset at <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383</a>. Because no model training was involved, improvement cycles could be run relatively quickly.</p>
<h2>Details</h2>
<p>We believed the following elements to be necessary for high-quality retrieval:</p>
<ul>
<li>As much as possible, clean Wikipedia data</li>
<li>Search relating to all elements of the prompt, options</li>
<li>Comprehensive search </li>
</ul>
<p>Acquiring clean Wikipedia data is possible through the cirrussearch dump, which can be referenced from the Wikiextractor document at <a href="https://github.com/attardi/wikiextractor#wikipedia-cirrus-extractor" target="_blank">https://github.com/attardi/wikiextractor#wikipedia-cirrus-extractor</a>. We built an Elasticsearch server with the texts contained in the cirrussearch dump divided into sentences and ran it on the Kaggle kernel.</p>
<p>The searches are always performed sentence by sentence. For each prompt or option divided into sentences, we extract all the words, eliminate duplicates and stop words, access the Elasticsearch server, and extract similar sentences. Based on the Wikipedia sentences fetched in this way, we generate three types of contexts: v3, v5, and v7. Each context involves basic processing such as concatenating surrounding sentences of the target sentence and concatenating consecutive sentences. The differences are as follows:</p>
<ul>
<li>v3: Sorts context based on the score at the time of Elasticsearch search.</li>
<li>v5: Sorts context based on the edit distance with the question sentence.</li>
<li>v7: Sorts context using semantic search, implemented using sentence-transformers.<br>
For the embedding between the prompt and Wikipedia, we used msmarco-bert-base-dot-v5. For the embedding between the option and Wikipedia, we used all-mpnet-base-v2.</li>
</ul>
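<p>As a rough illustration of the sentence-wise keyword search described above, the sketch below queries a sentence-level Elasticsearch index with the de-duplicated keywords of one prompt/option sentence. The index name, field name, stop-word list, and the 8.x-style Python client call are assumptions, not the team's exact configuration.</p>
<pre><code>from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
STOP_WORDS = {"the", "a", "an", "of", "in", "is", "are", "what", "which"}  # illustrative list

def retrieve_sentences(sentence: str, top_k: int = 50):
    # Extract unique, non-stop-word keywords from one prompt/option sentence.
    keywords = {w.lower() for w in sentence.split() if w.lower() not in STOP_WORDS}
    resp = es.search(
        index="wiki_sentences",  # assumed index of sentence-split cirrussearch text
        query={"match": {"text": " ".join(sorted(keywords))}},
        size=top_k,
    )
    # Each hit carries the Elasticsearch relevance score used for the v3 context ordering.
    return [(hit["_score"], hit["_source"]["text"]) for hit in resp["hits"]["hits"]]

hits = retrieve_sentences("Which particle mediates the electromagnetic force?")
</code></pre>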
<h3>Tips:</h3>
<p>Elasticsearch requires write access at startup, so it needs to be copied to and run from /kaggle/temp. As discussed at <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/444742" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/444742</a>, the I/O speed of /kaggle/temp became unstable, so we worked around this by symlinking the read-only files of the Elasticsearch database, letting their I/O happen directly from the file system under /kaggle/input.</p>
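<p>A minimal sketch of that symlink trick (directory names are illustrative, not the team's exact layout): the writable parts of the Elasticsearch installation are copied to /kaggle/temp, while the large read-only index files stay under /kaggle/input and are only linked.</p>
<pre><code>import os
import shutil

SRC = "/kaggle/input/my-es-index/elasticsearch"  # read-only dataset copy (illustrative path)
DST = "/kaggle/temp/elasticsearch"               # writable runtime copy

os.makedirs(DST, exist_ok=True)
for name in os.listdir(SRC):
    src_path, dst_path = os.path.join(SRC, name), os.path.join(DST, name)
    if name == "data":
        # Index files are only read at query time -> symlink instead of copying them.
        os.symlink(src_path, dst_path)
    elif os.path.isdir(src_path):
        # Config, logs, etc. must be writable at startup -> real copies.
        shutil.copytree(src_path, dst_path)
    else:
        shutil.copy2(src_path, dst_path)
</code></pre>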
<h1>Modeling Part</h1>
<h2>Validation</h2>
<p>Since it was important to train a model that could effectively reference context irrespective of the domain, we randomly extracted 2,000 instances as validation data from the open dataset without limiting it to a specific domain. The raw train.csv was too easy for validation and was hardly useful; therefore, we did not use it much in the later stages. We were able to avoid overfitting by trusting the observed correlation between validation scores (2,000 samples) and the LB.<br>
Here is an illustration of the correlation between validation scores and LB: </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F520183%2Fef62d0c56d959b1d341c72ac5d9a266f%2F2023-10-11%2014.37.50.png?generation=1697002694325672&alt=media"></p>
<h2>Training</h2>
<p>Model: Deberta v3 Large <br>
Datasets: Our main training datasets were 60k and 40k. <br>
Improvement in accuracy was noted by extending the number of tokens from 512 to 768 and 1280.</p>
<h3>Method to include context:</h3>
<p>We realized that the performance improved as more tokens were included as context. Thus, we implemented a method where we first positioned the prompt & choices and then packed the remaining portions with as much context as possible.</p>
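<p>A minimal sketch of this packing idea, assuming a Hugging Face tokenizer; the model name and the way special tokens are budgeted are assumptions:</p>
<pre><code>from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
MAX_LEN = 1280  # the write-up reports gains when extending 512 -> 768 -> 1280

def pack_example(context: str, prompt: str, option: str):
    # Tokenize the question + option first, then fill the remaining budget with context.
    qa_ids = tokenizer(prompt + " " + option, add_special_tokens=False)["input_ids"]
    budget = MAX_LEN - len(qa_ids) - 3  # room for [CLS]/[SEP] special tokens
    ctx_ids = tokenizer(context, add_special_tokens=False)["input_ids"][:budget]
    return tokenizer.prepare_for_model(ctx_ids, qa_ids)
</code></pre>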
<h2>Context Ensemble</h2>
<p>There were questions where the retrieval matched well and some where it did not. Ensembling the results inferred with each of the three types of contexts for a single model improved the score (even though training on v7 alone was good, ensembling multiple contexts during inference improved performance). This likely indicates that the diversity of the retrievers makes the search results more robust.</p>
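<p>In practice the context ensemble amounts to running the same model with each retrieved context and averaging the option probabilities; a small sketch, with equal weights assumed:</p>
<pre><code>import numpy as np

def context_ensemble(probs_v3, probs_v5, probs_v7):
    """Each argument has shape (n_questions, 5): softmax probabilities per option."""
    avg = np.mean([probs_v3, probs_v5, probs_v7], axis=0)
    # MAP@3 submission: take the three most probable options per question.
    top3 = np.argsort(-avg, axis=1)[:, :3]
    return avg, top3
</code></pre>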
|
Kaggle - LLM Science Exam
|
50th Place Solution for the Kaggle - LLM Science Exam Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That was a long road for all of us! Without great insights and sharings by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>, <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> and others we would not have been able to achieve this result. Kaggle is a great place with great people indeed.</p>
<p><strong>Context</strong></p>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li>
</ul>
<p><strong>Overview of the Approach</strong><br>
Before the wonderful retrieval algorithm by <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> was published our main strategy was to find the best sentence transformer. We tried a lot of options and the best boost was given by <strong>gte-small</strong> (+0.02 average LB boost in comparsion with the other ST, in particular with all-minilm-v6 and all-minilm-v12 that were used publicly). Also we fine-tuned few debertav3large (different lrs, epochs, datasets, etc) and used them in a weighted ensemble.</p>
<p>After the new way of retrieval was published, we had to reconsider our strategy: the new ST no longer gave us an advantage, and the LB got very tight. We blended 2 ways of retrieval and used 8 different models in total:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13977249%2F883edae001ad95e0ff795a2d83794505%2Fsolution.jpg?generation=1696984676432568&alt=media" alt=""></p>
<p><strong>Details of the submission</strong><br>
As always it is very hard to pick the best private solution. Turns out that our best submission with <code>additional stop words</code> and another models scores 0.912 — +20 places on private LB from ours final position!</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13977249%2F0ab2e1bf43fb90c245f5e8f0edbac4a9%2Fsol2.jpg?generation=1696984882419435&alt=media" alt=""></p>
<p><strong>Conclusion</strong><br>
As we can see debertav3large with 304M backbone parameters (and we freeze first 18 layers!) used in ensemble and with great retrieval algorithms can score 0.91+ easily on exam by chatgpt3.5 with 175B parameters! Thats insane!</p>
<p><strong>Sources</strong></p>
<ul>
<li><a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1</a> — great starter notebook to train your own LLM model</li>
<li><a href="https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag" target="_blank">https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag</a> — who could have possibly known we can run 70B model on Kaggle…</li>
<li><a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles</a> — game changing retrieval approach</li>
<li><a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">https://www.kaggle.com/datasets/jjinho/wikipedia-20230701</a> — we used this dataset for training</li>
</ul>
<p>Thanks Kaggle for giving us the opportunity to compete and learn! Thanks all the participants who share their thoughts and findings!</p>
|
Kaggle - LLM Science Exam
|
53rd place solution: Ensemble model
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you to Kaggle for organizing this competition and to my teammates. <a href="https://www.kaggle.com/aditya2718" target="_blank">@aditya2718</a> <a href="https://www.kaggle.com/namanbansal4122" target="_blank">@namanbansal4122</a> </p>
<p>We participated in this competition to learn about LLMs. At that time we were noobs in this field (we still are 😅), so our initial scores were a mess:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13602048%2F7be717605e918798f5aff66cb5e475d0%2FKaggle%20-%20LLM%20Science%20Exam%20_%20Kaggle%20-%20Google%20Chrome%2011-10-2023%2019_13_02.png?generation=1697031905209471&alt=media" alt=""><br>
But somehow, by the end, we managed to finish with a score of:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13602048%2F53e21a7e4efcb0d2b35aaffd9b0b0c4a%2FKaggle%20-%20LLM%20Science%20Exam%20_%20Kaggle%20-%20Google%20Chrome%2011-10-2023%2019_16_57.png?generation=1697032143163355&alt=media" alt=""></p>
<p>We used TF-IDF search to retrieve context from <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded</a>.</p>
<p>During the initial days, we tried a few different models and got the following results (note: these results were obtained with no context given to the model, and the models were trained on the 200 questions provided in the competition):</p>
<table>
<thead>
<tr>
<th>model_name</th>
<th>public_score</th>
<th>private_score</th>
</tr>
</thead>
<tbody>
<tr>
<td>NystromformerForMultipleChoice</td>
<td>0.485</td>
<td>0.480</td>
</tr>
<tr>
<td>Bert Base cased</td>
<td>0.493</td>
<td>0.505</td>
</tr>
<tr>
<td>Flan-t5</td>
<td>0.469</td>
<td>0.456</td>
</tr>
<tr>
<td>Deberta base</td>
<td>0.679</td>
<td>0.681</td>
</tr>
<tr>
<td>Distil Bert</td>
<td>0.538</td>
<td>0.525</td>
</tr>
</tbody>
</table>
<p>We decided to go with Deberta and created an ensemble of 6 models (this was the maximum number of models that did not cause a time-out).</p>
<p>These models were trained on different-sized samples of the 60k data <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383</a>, as we did not have the resources to train on the entire dataset. Some samples were of size 5k and 6k, and two were of 20k.</p>
<p>The weights of the models were decided by the following formula:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13602048%2Fd73e08a44e663e99281920dff2fd3367%2FDocument1%20-%20Word%2011-10-2023%2021_25_31.png?generation=1697039794626917&alt=media" alt=""></p>
<p>Some prompts which we tried that improved our results :</p>
<ul>
<li>"Answer the following questions by eliminating wrong options:\n" </li>
<li>"Read the following passage carefully and answer the questions :\n"</li>
<li>"Please re-evaluate your answer before giving final answer.\n"</li>
</ul>
<p><strong>Things we tried that failed</strong>:</p>
<ul>
<li>Concept of negative weights: we prompted the model to tell us the wrong answer and then subtracted the weighted probability from the ensemble. Got a public score of 0.903. It was not exactly a failure, but it was below our expectations.</li>
<li>Implementing a Shepherd model: we tried to implement the model given in the paper <a href="https://arxiv.org/abs/2308.04592" target="_blank">https://arxiv.org/abs/2308.04592</a>, which was meant to act as a critic for our ensemble. Because we did not have sufficient training resources, we failed to extract any meaningful result.</li>
</ul>
<p>We would also like to thank <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, <a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> and <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> for providing datasets and various insights into the competition.</p>
|
Kaggle - LLM Science Exam
|
55th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the organization for this amazing competition and all the participants that shared their data and code. Especially <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for the his datasets <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a> and <a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">40k-data</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> for the <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270Wikipedia STEM articles</a> and the idea of using TF-IDF. Without this help, it would have been impossible to achieve the silver medal in this competition.</p>
<h1>Overview of the final solution</h1>
<ul>
<li>Retrieval: Single TF-IDF on <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270Wikipedia STEM articles</a> extended with 159K more articles using Cohere embeddings to filter the articles and wikiextractor to get the paragraphs.</li>
<li>Models: Ensemble of 3 DeBERTa-v3-large models trained and validated with different data.</li>
</ul>
<h1>Training</h1>
<p>We spent most of the time using the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a> and <a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">40k-data</a> to finetune deberta, which was a poor approach because we should have focused on improving the retrieval (we were using the first open-book retrieval shared by <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> in this <a href="https://www.kaggle.com/code/jjinho/open-book-llm-science-exam/notebook" target="_blank">notebook</a>).<br>
However, when changing the retrieval to <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270Wikipedia STEM articles</a> + TF-IDF, having good models yields small gains that make a difference compared to other DeBERTa models that are slightly less well fine-tuned.</p>
<p>Specifically, we used 3 models fine-tuned with different data from the 60k dataset.</p>
<ul>
<li>One model using the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a> for training and 200 data provided in the competition for validation.</li>
<li>Two models using a 20k subset from <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a> + <a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">40k-data</a>. This subset was built trying to have questions with a similar distribution to the 200 data provided in the competition (we noticed that the questions from the 200 were longer, so we looked for a subset that resembled this distribution more closely). These two models were validated with two different subsets of 2,000 scientific questions also obtained from the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a>, as using this validation generated models with a higher LB-CV correlation and also higher LB scores than using the original 200 questions for validation.</li>
</ul>
<p>In addition, when selecting the 3 models, we chose the 3 that were least correlated with each other among the sufficiently good candidates. When choosing the weights, we tried to optimize them using the 200 original questions and other datasets, but there was no correlation with the LB. So we finally settled on the solution that worked best among 3 or 4 simple weight combinations (equally weighted, one weighted twice as much as the other two, …).</p>
<h2>Training parameters / experiments</h2>
<ul>
<li>Modifying the number of frozen layers. Ultimately, we chose 2 models with 18 frozen layers and one model with no frozen layers.</li>
<li>We didn't see many differences when varying the maximum number of tokens during training. The selected models are trained with max_tokens values ranging from 350 to 500. As for the max_tokens in validation, we found that 650 allowed us to obtain models that correlated well with what later happened on the LB.</li>
<li>Altering the learning rate (both the learning rate and the warmup ratio). The models that generally performed best required around 25,000 steps with batch_size=2. At step 25,000, the learning rate was 1.5e-6, and the warmup had not yet completed, but generally beyond that point, the model stopped improving.</li>
<li>Step strategy for evaluation instead of epoch strategy.</li>
<li>Things that didn't work: gradual unfreezing and discriminative fine-tuning.</li>
</ul>
<h1>Retrieval</h1>
<p>When <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> shared <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270Wikipedia STEM articles</a> we started to use TF-IDF on it as in his <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">notebook</a>.<br>
We improved the retrieval in two distinct ways.</p>
<ul>
<li><p>On the one hand, we modified certain parameters of the TfidfVectorizer that led to improvements in CV, public LB, and private LB:</p>
<ul>
<li>A substantial improvement was adding the parameter sublinear_tf=True to the TfidfVectorizer, which gave us an improvement of 0.005 on both the public and private LB. This makes sense because sublinear scaling replaces the raw term frequency with 1 + log(tf), so terms repeated very many times contribute less, while rarer, more informative terms carry relatively more weight (a minimal scikit-learn sketch follows this list).</li>
<li>Another small improvement was switching from the regular expression r"(?u)\b[\w/.-]+\b|!|/|?|"|'" to r"(?u)\b[\w/.-]+\b" for token generation. The latter discards punctuation signs like ,'- etc., while the former keeps them as tokens (this gave us a 0.001 improvement on occasion).</li></ul></li>
<li><p>On the other hand, we tried to increase the number of articles in the dataset. This provided another boost of around 0.006 on both the public and private LB. To select the new articles, we used Cohere/wikipedia-22-12-en-embeddings and the mean or max cosine similarity between each candidate article and the 270,000 articles in use. Our final submission uses 159K articles in addition to the original 270K.</p></li>
</ul>
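<p>For reference, here is roughly what the modified vectorizer looks like in scikit-learn. Only sublinear_tf and the simplified token_pattern reflect the changes described above; the corpus, n-gram range and stop words are illustrative placeholders.</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

wiki_paragraphs = ["Paragraph text ...", "Another paragraph ..."]          # placeholder corpus
question_plus_options = ["Question text concatenated with its options"]   # placeholder queries

vectorizer = TfidfVectorizer(
    ngram_range=(1, 2),                  # illustrative
    token_pattern=r"(?u)\b[\w/.-]+\b",   # simplified pattern that drops punctuation tokens
    sublinear_tf=True,                   # the 1 + log(tf) damping described above
    stop_words="english",                # illustrative
)

doc_matrix = vectorizer.fit_transform(wiki_paragraphs)
query_matrix = vectorizer.transform(question_plus_options)
scores = cosine_similarity(query_matrix, doc_matrix)   # rank paragraphs per question
top_ids = scores.argsort(axis=1)[:, ::-1][:, :3]
</code></pre>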
|
Kaggle - LLM Science Exam
|
5th place solution: Llama 2 70B meets Sparse & Dense Retrievals from Own parsed wikipedia dataset
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We didn't parse the LaTeX code; we thought the LLM is smart enough to understand the contents as long as the information exists.<br>
So we tried not to skip number or math-expression contents, but we did not try to clean the expressions any further.</p>
<p>Concretely, I used this kind of code:</p>
<pre><code>import mwparserfromhell
import wikitextparser as wtp
from logging import getLogger
from wcwidth import wcswidth

logger = getLogger(__name__)


def _custom_replace_templates(template):
    # Keep {{val}} and {{math}} templates as plain text instead of dropping them.
    if template.name.strip().lower() in ("val", "math"):
        # Each argument's plain_text() starts with "|"; the `[1:]` drops it.
        # An alternative is `str(template)`, which keeps the original markup,
        # but that retains raw wikitext -> not used.
        return " ".join(arg.plain_text()[1:] for arg in template.arguments)
    else:
        # Returning None drops all other templates.
        return None


def _custom_table_to_text(t: wtp.Table) -> str:
    data = [
        [(cell if cell is not None else "") for cell in row]
        for row in t.data()
    ]
    if len(data) == 0:
        # Some tables cannot be parsed by the original `_table_to_text` of wikitextparser...
        # This workaround uses mwparserfromhell in that case.
        logger.warning("falling back to mwparserfromhell for this table")
        text = mwparserfromhell.parse(t.plain_text()).strip_code().strip()
        logger.warning(text)
        return text
    widths = [0] * len(data[0])
    for row in data:
        for i, d in enumerate(row[:-1]):
            widths[i] = max(widths[i], wcswidth(d))
    caption = t.caption
    return (
        (caption if caption is not None else "")
        + "\n"
        + "\n".join(
            "\t".join(d.ljust(w) for (w, d) in zip(widths, r)) for r in data
        )
        + "\n"
    )


def _custom_table_to_text_v2(t: wtp.Table) -> str:
    # Just drop the tables!
    return ""


def _parse_and_clean_wikicode_wtp(raw_content):
    # v1:
    # return wtp.parse(raw_content).plain_text(
    #     replace_templates=_custom_replace_templates,
    #     replace_tables=_custom_table_to_text_v2,
    # )
    # v2: additionally strip <ref> tags first (remove_ref_tags is defined elsewhere).
    return wtp.parse(remove_ref_tags(raw_content)).plain_text(
        replace_templates=_custom_replace_templates,
        replace_tables=_custom_table_to_text_v2,
    )
</code></pre>
<p>But I feel it is better to use CirrusSearch dump data (I didn't know that during the competition), as many high rankers mentioned.</p>
<p>We just made csv files which contains</p>
<ul>
<li>1. BM25 retrieved context</li>
<li>2. instructor-xl retrieved context<br>
separately. </li>
</ul>
<p>And used these csv files to train the model. Does it answer your question?</p>
<p>We generated the training dataset with both the BM25-retrieved context and the instructor-xl-retrieved context, so the dataset size doubles (same prompt, but different context).</p>
<p>Only one model is trained, and both datasets are used as training data.</p>
<p>We did not use the huggingface training library, but it is closer to concatenating datasets. In this case I guess interleaving the datasets would behave the same, since the number of examples is the same for both retrievals.<br>
<a href="https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.concatenate_datasets" target="_blank">https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.concatenate_datasets</a></p>
<p>Thank you! I always learn from you <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> too!</p>
<p>Thank you Chris.<br>
We simply managed to train the GPT-style model with LoRA and got good performance at an early stage, so we didn't go deeper with the BERT-style models.<br>
Looking at other teams' solutions, we were surprised that many high-rankers only use DeBERTa models, so I guess the most important part of this competition was "retrieval", while the base model's performance does matter a bit. I guess models with many parameters actually "memorize" more Wikipedia information during pre-training, and they can answer more questions even without the correct retrieval.</p>
<p>Thank you to the organizers and congratulations to all the participants.</p>
<p>There are really lots of things that can be done from dataset preparation, model training to post-processing (how to utilize trained LLM model), and it was a very tough competition for me. But I enjoyed the competition overall!<br>
I would like to thank my team members <a href="https://www.kaggle.com/zaburo" target="_blank">@zaburo</a>, <a href="https://www.kaggle.com/qhapaq49" target="_blank">@qhapaq49</a>, <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a>, <a href="https://www.kaggle.com/flowlight" target="_blank">@flowlight</a> for their hard work too. </p>
<p>The figure shows our retrieval to the model overall pipeline.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F518134%2F8f34b4a6918523b94b9ff6fec9b10da1%2Fkaggle-llm-5th-overall.jpg?generation=1697001871726670&alt=media" alt=""></p>
<p> --> Write-up Updated!</p>
<h2>Short summary</h2>
<ul>
<li>We successfully run Llama 70B model on kaggle notebook</li>
<li>We used BM-25 based sparse retrieval (using <a href="https://github.com/castorini/pyserini" target="_blank">pyserini</a> library whose backend is <a href="https://lucene.apache.org/" target="_blank">Apache Lucene</a>) to search all the wikipedia pages.</li>
<li>We parsed wikipedia dataset by ourselves to cover most of the pages with number information.</li>
</ul>
<h2>Dataset creation</h2>
<h3>Wikipedia dataset</h3>
<p>In the early stage of the competition, when I checked the given train.csv and tried to answer the questions manually, I felt this was a "retrieval" competition rather than a model-training competition. We also noticed that some pages or numbers were missing in the publicly available datasets, so we decided to parse the Wikipedia dump ourselves.</p>
<p>Below are comparison tables of datasets:</p>
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Parser</th>
<th>Page</th>
<th>Text info</th>
</tr>
</thead>
<tbody>
<tr>
<td>Kaggle published dataset by <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> <br> <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">jjinho/wikipedia-20230701</a></td>
<td><code>wikitextparser</code></td>
<td>Some pages are dropped</td>
<td>All templates are dropped.<br>All the “\n” are dropped (difficult to read by human).</td>
</tr>
<tr>
<td>Huggingface dataset<br> - <a href="https://huggingface.co/datasets/wikipedia" target="_blank">wikipedia</a> <br> - <a href="https://huggingface.co/datasets/graelo/wikipedia" target="_blank">graelo/wikipedia</a></td>
<td><code>mwparserfromhell</code></td>
<td>Better coverage</td>
<td>All templates are dropped.<br> <code><math></code>, <code><ref></code>, <code><table></code> tags are dropped.</td>
</tr>
<tr>
<td>Our dataset</td>
<td><code>wikitextparser</code> + custom template processing</td>
<td>Better coverage</td>
<td><code>{val}</code> and <code>{math}</code> templates text are kept (Other templates are dropped).<br> Keep <code><math></code> tag, but <code><ref></code>, <code><table></code> tags are dropped.</td>
</tr>
</tbody>
</table>
<p>Huggingface's Wikipedia dataset covers most of the pages, but numbers are missing, which seems to be due to a <code>mwparserfromhell</code> parser issue. So I replaced the parser with <code>wikitextparser</code>, starting from the <a href="https://huggingface.co/datasets/graelo/wikipedia" target="_blank">graelo/wikipedia</a> code base (there was a helpful instruction on how to run the code at the bottom of the dataset description!).<br>
I believe our dataset covers most of the pages with numbers or math expression information.</p>
<h3>QA dataset creation</h3>
<p>Many datasets were shared publically during the competition, but we also tried to make it by ourselves as well. Below is the prompt for ChatGPT 3.5 that I used to generate the QA dataset.</p>
<pre><code>system_content = "..."  # system prompt (text not fully reproduced here)
user_content_template_qa = Template("""\
... write each question on a new line, e.g. "What is ...", "Which of the following statements ...",
"What did `person` do ...", "What was ..." ...
Example (on the "missing baryonic mass" discrepancy):
A) MOND is a theory that reduces the observed missing baryonic mass in galaxy clusters by postulating the existence of a new form of matter called "...".
B) MOND is a theory that increases the discrepancy between the observed missing baryonic mass in galaxy clusters and the measured velocity dispersions from a factor of ... to a factor of ...
C) MOND is a theory that explains the missing baryonic mass in galaxy clusters that was previously considered dark matter by demonstrating that the mass is in the form of neutrinos and axions.
D) MOND is a theory that reduces the discrepancy between the observed missing baryonic mass in galaxy clusters and the measured velocity dispersions from a factor of ... to a factor of ...
E) MOND is a theory that eliminates the observed missing baryonic mass in galaxy clusters by imposing a new mathematical formulation of gravity that does not require the existence of dark matter.
A)
Let's start. Here is the TEXT: $title\n$text
""")
</code></pre>
<p>I only gave ChatGPT 3.5 an abstract and one paragraph from a specific page as <code>$text</code>.</p>
<p>Since we could achieve very high scores like 0.995 on train.csv by the end of the competition, we used our own generated dataset of 1,000 questions for local validation.</p>
<h2>Retrieval</h2>
<h3>Sparse retrieval</h3>
<p>I'm not a professional in the NLP field. I surveyed existing research, especially related to retrieval-based LLMs, and the <a href="https://aclanthology.org/P19-1612/" target="_blank">ORQA</a> paper shaped our approach: it reported that “On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient.” Since ChatGPT 3.5 was given the sentence used to generate each QA, we thought the same would apply in this competition too.<br>
So we read <a href="https://aclanthology.org/N19-4013/" target="_blank">BERTserini</a> paper and decided to use <a href="https://github.com/castorini/pyserini" target="_blank">pyserini</a> library, which is Python wrapper of <a href="https://github.com/castorini/anserini" target="_blank">anserini</a> that uses Apache Lucene in the backend. </p>
<p><a href="https://www.kaggle.com/code/strifonov/pyserini-scibert" target="_blank">This notebook</a> by <a href="https://www.kaggle.com/strifonov" target="_blank">@strifonov</a> was really helpful to run pyserini on kaggle notebook (thank you!).</p>
<p>Apache Lucene was very efficient and convenient. It was easy to index the entire Wikipedia dataset and search via the BM25 score.<br>
We split each Wikipedia page into paragraphs (by “\n\n”). Indexing a total of 74M records only took several hours. At inference time, searching is accelerated by multi-threading, and 200 searches finish in about 2 minutes.</p>
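<p>A rough sketch of BM25 retrieval with pyserini over a pre-built paragraph-level index; the index path, k and thread count are placeholders, and batch_search is what enables the multi-threaded querying mentioned above.</p>
<pre><code>from pyserini.search.lucene import LuceneSearcher

# The index is built beforehand from paragraph-level JSONL, e.g.:
#   python -m pyserini.index.lucene --collection JsonCollection \
#       --input wiki_paragraphs_jsonl/ --index wiki_bm25_index/ --threads 8
searcher = LuceneSearcher("wiki_bm25_index")  # placeholder path

queries = ["question text concatenated with its options"]  # one query per question
qids = [str(i) for i in range(len(queries))]
results = searcher.batch_search(queries, qids, k=2, threads=8)  # BM25 top-2 contexts

contexts = {
    qid: [searcher.doc(hit.docid).raw() for hit in hits]
    for qid, hits in results.items()
}
</code></pre>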
<h3>Dense retrieval</h3>
<p>We created 2 types of dense retrieval.</p>
<p><strong>1. All the Wikipedia contents split by sentence wise block</strong></p>
<p>We used <a href="https://huggingface.co/hkunlp/instructor-xl" target="_blank">instructor-xl</a> for the embedding calculation and used <a href="https://github.com/facebookresearch/faiss" target="_blank">faiss</a> to create index.<br>
The original index is too big (300GB) to run on memory, so we quantized the index (300GB → 10GB) for this approach.</p>
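<p>The 300GB to 10GB reduction comes from compressing the embeddings with product quantization; a minimal faiss sketch under assumed parameters (the team's exact index type is not stated):</p>
<pre><code>import faiss
import numpy as np

d = 768  # instructor-xl embedding size
embeddings = np.random.rand(100_000, d).astype("float32")  # stand-in for the real sentence vectors

# IVF + product quantization stores ~64 bytes per vector instead of 4 * d bytes.
index = faiss.index_factory(d, "IVF1024,PQ64")
index.train(embeddings)
index.add(embeddings)

index.nprobe = 32  # number of inverted lists visited per query
query = np.random.rand(1, d).astype("float32")
distances, ids = index.search(query, 5)
</code></pre>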
<p><strong>2. STEM 270k contents split by paragraph wise block</strong></p>
<p>We used <a href="https://huggingface.co/BAAI/bge-large-en" target="_blank">bge-large-en</a> for the embedding calculation.<br>
And only used same pages published in <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">STEM270k</a> dataset by <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>.</p>
<p>We added “title” information in each block, so that we can obtain correct search result even when “it/he/she” is used to refer to the title.</p>
<p>In total, we used 3 types of retrievals, and combined them as a context to input to the LLM.</p>
<h2>Model (written by <a href="https://www.kaggle.com/zaburo" target="_blank">@zaburo</a>)</h2>
<p>Our team used Llama-2 7B, 13B, 70B and <a href="https://huggingface.co/mistralai/Mistral-7B-v0.1" target="_blank">Mistral 7B</a>. Since Mistral 7B was achieving better scores compared to Llama-2 of the same size, we used Mistral 7B and Llama-70B in our final submission. Based on the scores from MMLU and the results of our preliminary experiments, we used models without Instruction Tuning, such as Llama-2-70b-hf and Mistral-7B-v0.1.</p>
<p>For each model, we input the problem in the following format and had the model output the probability that the token corresponding to "▁A", "▁B", "▁C", "▁D", "▁E" comes immediately after this sentence. This yielded decent performance even with models without any fine-tuning.</p>
<pre><code></code></pre>
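<p>The exact prompt template is not reproduced above, so the following is only a generic sketch of the scoring step: run the causal LM once on the formatted question and read the next-token probabilities of the five answer-letter tokens. The model name, prompt skeleton and the leading-space letter encoding are assumptions that depend on the tokenizer.</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder; the same idea applies to Llama-2
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Token ids corresponding to "▁A" .. "▁E" (SentencePiece marks a leading space with "▁").
letter_ids = [tokenizer.encode(" " + c, add_special_tokens=False)[-1] for c in "ABCDE"]

prompt = "{context}\n\nQuestion: {q}\nA) ...\nB) ...\nC) ...\nD) ...\nE) ...\nAnswer:"  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]
probs = torch.softmax(next_token_logits[letter_ids], dim=-1)  # P(A), ..., P(E)
top3 = [chr(ord("A") + int(i)) for i in probs.argsort(descending=True)[:3]]
</code></pre>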
<p>Fine-tuning was done using QLoRA. We incorporated bitsandbytes' 4bit quantization and PEFT's LoRAModel into our training pipeline. The hyperparameters of QLoRA were almost exactly the ones in QLoRA's paper, except for the batch size which had to be lowered due to long input and the GPU memory constraint.</p>
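<p>A minimal QLoRA setup along these lines, with bitsandbytes 4-bit quantization and a PEFT LoRA adapter; the rank, alpha and target modules below follow common QLoRA-paper-style defaults and are assumptions rather than the team's exact values.</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf", quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1,                    # assumed values
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
</code></pre>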
<p>We expected that running Llama 70B on Kaggle Notebook would be challenging, so we started working on it from the early stages of the competition. We registered quantized weights (less than 40GB) for each layer as a dataset, and performed inference layer-by-layer. By applying xformers' <code>memory_efficient_attention</code> to the attention layers, we were able to keep memory consumption linear even with long contexts, leaving plenty of room in the GPU memory. We used only approximately 6GB of GPU memory for inference.</p>
<p>LLMs can change their inference results considerably depending on the order of the choices, so we used a test-time augmentation strategy in which we rotated the order of the choices 5 times and averaged out this effect. This TTA was most effective with pretrained models without any retrieval, and although the effect decreased drastically with retrievals or QLoRA, it remained effective to the end.</p>
<p>Naively carrying out TTA inference would increase the execution time 5 times, but rather than inferring the entire set 5 times, we concatenated all patterns of choice order like <code>{context} {Q} {A B C D E} {B C D E A} … {E A B C D}</code> and set the <code>attention_mask</code> appropriately to reduce the inference of the long common part to only once. By applying this technique, a 70B model with 5 times TTA worked within time for the full test set, and using a single 70B + BM25 Top2 + 5 times TTA resulted in a Private 0.925 score.</p>
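<p>A simplified version of the choice-order TTA (the shared-prefix attention-mask trick is omitted here): rotate the options, score each rotation, map the probabilities back to the original order, and average.</p>
<pre><code>import numpy as np

def rotate_tta(score_fn, question, options):
    """score_fn(question, options) -> np.ndarray of 5 probabilities in the given order."""
    n = len(options)
    acc = np.zeros(n)
    for shift in range(n):
        rotated = options[shift:] + options[:shift]  # A B C D E, B C D E A, ...
        probs = score_fn(question, rotated)
        for pos, p in enumerate(probs):
            acc[(pos + shift) % n] += p              # map back to the original option index
    return acc / n
</code></pre>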
<h2>Validation</h2>
<p>We noticed that MAP@3 is a somewhat unstable metric and can fluctuate a bit. Below is our strategy for not overfitting to the public LB.</p>
<ul>
<li>We used 3 datasets, train.csv (200), <a href="https://www.kaggle.com/yalickj" target="_blank">@yalickj</a> <a href="https://www.kaggle.com/datasets/yalickj/dataset-wiki-new-1" target="_blank">dataset</a> (300), and our own dataset (1000, described above), for the local validation.<ul>
<li>We did not use these data in the training, even in the final submission.</li></ul></li>
<li>We tried to run various experiments with local dataset validation. We did not submit methods which do not work well in our local dataset. I think it prevents us from adopting various methods that overfit the public LB.<ul>
<li>For example, a custom query like <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595#2454966" target="_blank">this</a> did not work well in our experiment, so we did not adopt this.</li>
<li>Also, we tried embedding model fine tuning, but it did not work well in our local dataset, so we did not submit this too.</li></ul></li>
</ul>
<h2>Final Pipeline - Multi Stage Inference (written by <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a>)</h2>
<p>We noticed that putting multiple contexts in a prompt at the same time improves the performance of the model. However, because simultaneous input of contexts increases inference time, we used a multi-stage inference pipeline in which easy problems are answered by small models (mistral-7B) and difficult problems are answered by large models (llama2-70B). The final submission consists of three stages as follows:</p>
<p><strong>1st stage: ensemble of mistral-7B models</strong><br>
mistral-7B with BM25 top2 contexts + mistral-7B with instructor and bge contexts<br>
Inference on all data</p>
<p><strong>2nd stage: llama2-70B</strong><br>
llama2-70B with BM25 top2, instructor and bge contexts<br>
Inference on bottom 40% data with low confidence in the 1st stage</p>
<p><strong>3rd stage: llama2-70B</strong><br>
llama2-70B with bge contexts (2~3 times longer than bge contexts used in the 1st and 2nd stage)<br>
Inference on bottom 5% data with low confidence in the 1st and 2nd stage</p>
<p>This multi-stage pipeline achieved private: 0.926, public: 0.928.</p>
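<p>Schematically, the cascade routes each question by the confidence of the previous stage; the fractions below are the ones reported above, everything else is a simplified stand-in.</p>
<pre><code>import numpy as np

def cascade(probs_stage1, score_stage2, score_stage3):
    """probs_stage1: (n, 5) mistral-7B ensemble probabilities; score_* re-score a subset of rows."""
    probs = probs_stage1.copy()

    # 2nd stage: llama2-70B on the 40% least confident questions.
    conf1 = probs.max(axis=1)
    hard = np.argsort(conf1)[: int(0.4 * len(conf1))]
    probs[hard] = score_stage2(hard)

    # 3rd stage: llama2-70B with longer contexts on the 5% least confident questions overall.
    conf2 = probs.max(axis=1)
    hardest = np.argsort(conf2)[: int(0.05 * len(conf2))]
    probs[hardest] = score_stage3(hardest)
    return probs
</code></pre>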
|
Kaggle - LLM Science Exam
|
64th Place - Elasticsearch to find them all
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>And then the Dark AI lord etched into one of the hyperdimensional manifolds</strong></p>
<p><em>One RAG to rule them all</em><br>
<em>One RAG to find them</em><br>
<em>One RAG to bring them all</em><br>
<em>And in the <strong>context</strong> bind them</em></p>
<h1>Solution highlight - Elasticsearch to find them all</h1>
<p>Having started with just 3 weeks left in the competition, I was convinced that the most important elements would be <strong>search</strong> and <strong>retrieval</strong>. After a few days of trying out some TF-IDF and BM25 ideas, it struck me that a search engine is what's needed.</p>
<p>If you take the correct answer field for many of the questions on the train set, Google straight up highlights the sentence it came from.</p>
<p>(<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3468635%2F44a044ca559c7bb1346f4adbbb2eee14%2FScreenshot%202023-10-11%20at%205.51.58%20AM-1.png?generation=1696992434300492&alt=media" alt="Google search highlight">)</p>
<p>So can we get a search engine running locally? Yes!!</p>
<p>Don't reinvent the wheel - <a href="https://lucene.apache.org/" target="_blank">Lucene</a> is the best search engine for this. <a href="https://www.elastic.co/elasticsearch/" target="_blank">Elasticsearch</a> is an amazing software built on top of it, and it comes integrated in Huggingface Datasets!! (Well, sort of … Usage is a pain on Kaggle 🥹)</p>
<p><a href="https://www.kaggle.com/code/dipamc77/66th-place-elasticsearch-deberta/notebook" target="_blank">Here's my solution notebook</a>, the best submission just uses 5 Deberta models trained on the shared datasets and some data I generated privately. Plus lots of engineering to collect context and run elasticsearch.</p>
<h2>Document Recall</h2>
<p>I'm fairly certain the competition dataset was generated with chunks of 3-5 consecutive sentences from the same article. I hand labelled each of the 200 questions to see the recall rate of finding the correct article based on the text. To test this further, I generated some data with 5 sentence excerpts so that I have the exact section and page for each text. <strong>Elasticsearch beat all other approaches at finding the correct articles by a big margin.</strong></p>
<h1>Context generation</h1>
<p>I opted to pair up the question with each single answer for sentence semantic search. Intuitively this seemed more meaningful, and it boosted CV scores on the Kaggle set as well as other shared datasets.</p>
<p>It also helped to add up to 2 sentences immediately above and below the retrieved sentences in the original article. Since a single sentence often starts with "It" or a similar word, the sentences above and below give context to the context. 😉</p>
<p>Generating contexts based on (question + single answer) also aligns well with how AutoModelForMultipleChoice models are trained. I also used the same single answer pairing approach to train a Mistral LoRA and initial experiments showed it did better than predicting the option directly. Details of context selection can be found in my <a href="https://www.kaggle.com/code/dipamc77/66th-place-elasticsearch-deberta/notebook" target="_blank">solution notebook</a> and <a href="https://www.kaggle.com/code/dipamc77/elasticsearch-mistral-lora-deberta/notebook" target="_blank">Mistral notebook</a>.</p>
<h1>Which wikipedia to use?</h1>
<p>Since the dataset is generated using Wikipedia, it makes sense to use the Wikipedia dumps. The catch is that Wikipedia keeps getting edited: <a href="https://en.wikipedia.org/w/index.php?title=Dark_matter&oldid=1158114319#:~:text=These%20models%20attempt%20to%20account%20for%20all%20observations%20without%20invoking%20supplemental%20non%2Dbaryonic%20matter" target="_blank">here's a line from the June Wikipedia that matches Question ID 47</a>, and it is not present in the <a href="https://en.wikipedia.org/w/index.php?title=Dark_matter#:~:text=These%20include%20modified,still%20be%20required." target="_blank">latest article</a>. On top of that, searching all of Wikipedia is really difficult to pull off on Kaggle, so filtering the data for science articles seemed like a reasonable idea.</p>
<p>Initially, I decided to filter the dataset by following the <a href="https://en.wikipedia.org/wiki/Category:Main_topic_classifications" target="_blank">Wikipedia Category DAG</a>, but it quickly started including non-science articles after some depth. One approach that seemed to work was taking different starting points and selecting the overlaps between them based on depth. But I realized that this was a quick way to overfit to the test set and that using the full Wikipedia is better. Some other science wiki datasets that were shared had as little as 40% coverage of the articles I hand-labelled from the Kaggle training set.</p>
<p>I started using <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/424821" target="_blank">JJ's wiki dataset</a>, but later found it had a number of missing articles, so I decided to take the June 2023 dump of Wikipedia instead; the dataset is shared <a href="https://www.kaggle.com/datasets/dipamc77/wikipedia-full-june-2023" target="_blank">here</a>.</p>
<h1>So many errors !!</h1>
<p>I picked the correct submissions, selected based on <em>eval_loss</em> instead of <em>map@3</em>, which seemed to correlate better. And yes I lost 3 submissions on the last day due to slow network drives!! 🥳</p>
<p>(<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3468635%2F0efd5be4b56fd7b31cbfa373b6a9e4f4%2FScreenshot%202023-10-11%20at%206.03.27%20AM-1.png?generation=1696992526193175&alt=media" alt="Errors and final picks">)</p>
<p>I found out in the most painful way that 2x T4 nodes have much slower disk read times than P100 nodes, enough to slow Elasticsearch to a crawl and hit search timeouts. <a href="https://www.kaggle.com/code/dipamc77/elasticsearch-mistral-lora-deberta/notebook" target="_blank">In this notebook</a>, I added a Mistral 7B LoRA that I trained on the last day, but I had to switch to the T4 x2 accelerator to make it run.</p>
<p>I was later able to fix it by increasing Elasticsearch's query timeout, but the search would still take 7 hours for 4000 queries (as opposed to 1 hour on P100 nodes, and 3 minutes on my local machine with a regular SSD). So the Elasticsearch approach wouldn't always work on Kaggle for millions of articles, but it's amazing anywhere else. I hope some of the engineering hacks are useful for someone in the future.</p>
<h1>Ensembling</h1>
<p>The models I used had 3 different contexts and the competition had a MAP@3 metric, so some custom ensembling ideas worked well.</p>
<p>If the top probability is above 0.7, the models seem to be well calibrated, meaning the model has a high chance of being correct. Hence, above this threshold I pick the best model's prediction first, but only its top prediction. If the next-best model disagrees with high probability, its prediction gets placed in prediction slot 2, and so on for slot 3. This method boosted both CV and LB scores by a good margin.</p>
<p>I think this method worked well because the top model has fewer sentences from different articles but more surrounding context. While the other models have single sentences from more articles which has a higher chance of containing a sentence from the correct article.</p>
<p>Below the max probability of 0.7 and above 0.4 - I take a weighted average of the probabilities.</p>
<p>Below 0.4 the predictions seem close to random, I just pick the best model's predictions.</p>
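<p>A sketch of this threshold-based blending for a single question; the thresholds come from the text above, while the handling of "disagrees with high probability" is simplified here to just taking each model's distinct top pick in order.</p>
<pre><code>import numpy as np

def blend_question(model_probs, weights):
    """model_probs: list of 5-element probability arrays, best model first."""
    best = model_probs[0]
    if best.max() > 0.7:
        # Confident regime: fill the three slots with each model's top option, best model first.
        picks = []
        for probs in model_probs:
            top = int(probs.argmax())
            if top not in picks:
                picks.append(top)
            if len(picks) == 3:
                return picks
        # Pad from the best model's ranking if the models mostly agreed.
        for idx in np.argsort(-best):
            if int(idx) not in picks:
                picks.append(int(idx))
            if len(picks) == 3:
                return picks
    if best.max() > 0.4:
        # Middle regime: weighted average of the model probabilities.
        avg = np.average(model_probs, axis=0, weights=weights)
        return list(np.argsort(-avg)[:3])
    # Low-confidence regime: fall back to the best model alone.
    return list(np.argsort(-best)[:3])
</code></pre>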
<p><em>Extra note</em> - Using an LLM or a Long context for the low probability questions might be a good idea which also saves time. My experiment with Mistral 7B suggested the same, I'll post an update with that score if I'm able to fix the engineering issues.</p>
<h1>Ideas left unexplored</h1>
<p>It's now clear I didn't focus on the right things, or at least I shouldn't have worked on this competition solo. Training models was just as important, and sadly I ignored it until there were just 5 days left.</p>
<p>I trained 4 Deberta models in parallel on the last day, on different contexts generated for the same questions, and a LoRA on Mistral 7B that failed to submit.</p>
<p>These are some of the ideas that I probably should have explored more:</p>
<ol>
<li>Training LLMs instead of just Deberta</li>
<li>Long context models - For the long tail of articles that don't have matches in the top sentences.</li>
<li>Different sentence embedding models</li>
</ol>
<h1>Personal notes</h1>
<p>This is perhaps the only serious amount of NLP work I've ever done, and it was a rollercoaster of a learning experience. Huge thanks to everyone who shared their code, notebooks, datasets and insights, I'll not add a list because that list will never end. Kaggle truly lets you stand on the shoulders of giants. Practically my entire solution is based on what others shared.</p>
<p>The compute management experience was a nightmare but also a great exercise in hacky software engineering. I look forward to all the tricks people use in their solutions. I do believe that the Kaggle hardware is not suitable for such competitions and needs a major update.</p>
<p>The last few days were extremely intense and I barely had any sleep. I'm glad I was able to get into the Silver zone on the last day after being below 300 rank for the entire competition, and more so given that this was the most competitive challenge in recent Kaggle history.</p>
<p>Lessons learnt:</p>
<ol>
<li>Start early, and team up sooner.</li>
<li>Don't trust Kaggle's compute, avoid major engineering changes at the end.</li>
<li>Don't get stuck on one idea even if it seems the most important and delivers results, explore more.</li>
</ol>
<p>In the end, I knew the one RAG to rule them all was not mine. Congrats to <a href="https://www.kaggle.com/philippsinger" target="_blank">Psi</a>, <a href="https://www.kaggle.com/ybabakhin" target="_blank">ybabakhin</a> and <a href="https://www.kaggle.com/ilu000" target="_blank">ilu000</a>, looking forward to your solution.</p>
<p>(<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3468635%2F99aec1716b14465e0ff1712c9fa64833%2Ffrodo-baggins-its-over.gif?generation=1696992407253017&alt=media" alt="Frodo meme">)</p>
<p>Yeah, I should have tried going closer to the source with Lucene instead of ES, I guess. Being new to NLP, I picked ES first because it was already integrated in HF datasets. Had I hit the error earlier I might have switched, but that's the hindsight of starting late.</p>
|
Kaggle - LLM Science Exam
|
67th Place Solution for the Kaggle - LLM Science Exam Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot for this great challenge on such a hot topic. We (thanks my great teammate <a href="https://www.kaggle.com/fejust" target="_blank">@fejust</a>) learnt a lot in particular through all the opened kernels and lively discussions. </p>
<h2>Approach</h2>
<p>We used the open book approach <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">How To Train Open Book Model - Part 1</a> to train our multiple choice models, and the disruptive retrieval option <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">[86.2] with only 270K articles!</a> based on the cleaned datasets and the TF-IDF retriever. Our best submission used this retrieval and was an ensemble of two fine-tuned DeBERTa large models, one trained for 2 epochs on all 60k records of <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data-with-context-v2</a> with 256 max length, and another trained on 30k with max length 512.</p>
<h2>Key Points</h2>
<ul>
<li>Rather than computing the logits of all 5 options at once, we computed the logit for each option separately in the inference notebook. This allowed us to compute logits with much larger contexts, since during the open-book approach several improvements were reported from adding more context (a minimal sketch follows this list).</li>
<li>We managed to fine-tune a TensorFlow DeBERTa large model, with only the embeddings frozen and a max length of 512, on the Kaggle TPU on 30k records from the open-book approach for 2 epochs, in a similar spirit to <a href="https://www.kaggle.com/code/defdet/tpu-accelerated-training" target="_blank">TPU - Accelerated training</a>. This took around 8.5 hours. The weights were finally converted to PyTorch, so that we didn't have to worry about two frameworks in the submission notebook.</li>
<li>We also slightly changed the inference prompt to <code>f"[CLS] {context} #### {question} [SEP] {option} [SEP]"</code>, because this was something that we found more natural to the training. This showed a minor improvement in our first tests and we kept this till the end.</li>
</ul>
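<p>A rough sketch of the per-option scoring from the first point, using the prompt layout from the last point; the model path is a placeholder, and feeding the multiple-choice head one (context #### question, option) pair at a time is the assumed mechanism that lets each pair use the full maximum length.</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_path = "my-finetuned-deberta-v3-large"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForMultipleChoice.from_pretrained(model_path).eval()

def score_options(context, question, options, max_len=1024):
    logits = []
    for option in options:
        first = f"{context} #### {question}"
        enc = tokenizer(first, option, truncation="only_first",
                        max_length=max_len, return_tensors="pt")
        # The multiple-choice head expects (batch, num_choices, seq_len); here num_choices = 1.
        inputs = {k: v.unsqueeze(1) for k, v in enc.items()}
        with torch.no_grad():
            logits.append(model(**inputs).logits[0, 0].item())
    # Compare the five independently computed logits.
    return torch.tensor(logits).softmax(dim=-1)
</code></pre>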
<h2>Other</h2>
<p>We spent a considerable amount of time experimenting with dense retrieval options based on normalized sentence-transformer embeddings. However, in all our experiments (using them directly on the cleaned datasets, for a second-stage selection of smaller chunks, or in combination with the TF-IDF retriever), we never reached performance similar to using TF-IDF alone. We are very interested to see what we might have missed there.</p>
<h2>Key Learning</h2>
<ul>
<li>Multiple choice models and how to train them.</li>
<li>Be careful when extracting data and check if the relevant information is not destroyed by special tags.</li>
<li>Reverse ordering the context helps.</li>
<li>TFIDF is impossible to beat.</li>
<li>Context preparation can be different between training and inference.</li>
</ul>
<p>Thanks all!</p>
|
Kaggle - LLM Science Exam
|
69th Place Solution for the LLM Science Exam Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank Kaggle for hosting such a great competition and all the participants who generously shared useful insights/models/datasets. Thank my teammate <a href="https://www.kaggle.com/jiesong" target="_blank">@jiesong</a> for making this ranking possible.</p>
<p><strong>Context</strong></p>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li>
</ul>
<p><strong>Overview of the approach</strong><br>
We followed <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s "270K Wiki STEM articles + TF-IDF retrieval" method and used four ensemble models that three are DeBERTa and one is Longformer.</p>
<p>Due to our computational limits, although we fine-tuned a lot of DeBERTa models ourselves, most of them performed worse than these two open-sourced models - <a href="https://www.kaggle.com/bobfromjapan" target="_blank">@bobfromjapan</a>'s <a href="https://www.kaggle.com/datasets/bobfromjapan/llm-science-exam-context-v2-models" target="_blank">model</a> and <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>'s <a href="https://www.kaggle.com/datasets/mgoksu/llm-science-run-context-2" target="_blank">model</a>. One of our own fine-tuned models, which used a different sentence transformer and dataset, was also included. We followed <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">method</a> for the fine-tuning.</p>
<p>For Longformer, we just used it zero-shot, as our fine-tuning wasn't a success.</p>
<p><strong>Details of the submission</strong><br>
<em>Our fine-tuned model</em></p>
<ul>
<li>used all-mpnet-base-v2 to embed <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">the 6+ million wiki articles</a> and obtained a new index file</li>
<li>selected 27k "similar" questions from the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k samples</a>, as those questions IMO have a style closer to the 200 Kaggle sample questions in terms of content and length.<ul>
<li><a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> <a href="https://www.kaggle.com/datasets/radek1/sci-or-not-sci-hypthesis-testing-pack" target="_blank">6k + 6k</a> + <a href="https://www.kaggle.com/datasets/radek1/15k-high-quality-examples" target="_blank">15k</a></li></ul></li>
<li>used the new index to find relevant context for the 27k questions for training</li>
<li>used the new index to find relevant context for the 200 kaggle questions for validation</li>
</ul>
<p><em>Ensemble weights</em><br>
We followed <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s method to optimize the ensemble weights. Originally we applied this method to more models, including more of our fine-tuned models and Llama and Galactica models. It showed higher CV score. However they (about five in the last 36 hours) were all timeout at the end of the submission. </p>
<p>Then it came to the last 12 hours in the competition… We don't have enough time to re-run the optimization on our at-the-moment best score which used 0.25 : 0.25 : 0.25 : 0.25 ratios on the models mentioned above. Hence we just used our educated guess to set the ensemble weights as following:<br>
- <a href="https://www.kaggle.com/bobfromjapan" target="_blank">@bobfromjapan</a>'s <a href="https://www.kaggle.com/datasets/bobfromjapan/llm-science-exam-context-v2-models" target="_blank">model</a> : 0.5<br>
- <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>'s <a href="https://www.kaggle.com/datasets/mgoksu/llm-science-run-context-2" target="_blank">model</a>: 0.25<br>
- our model: 0.15<br>
- longformer (zero-shot): 0.1</p>
<p><strong>Acknowledge</strong><br>
Last but not least, we learned a lot from this competition and will keep learning from the solution write-ups. I made a 67 page notes by just copying and pasting the valuable insights shared by the participants. We want to give a big shout-out to the people who inspired us the most and made our ranking possible: <a href="https://www.kaggle.com/leonidkulyk" target="_blank">@leonidkulyk</a>, <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>, <a href="https://www.kaggle.com/nlztrk" target="_blank">@nlztrk</a>, <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>, <a href="https://www.kaggle.com/bobfromjapan" target="_blank">@bobfromjapan</a>, <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>. We also want to thank all the participants who provides valuable insights in the discussions/comments.</p>
|
Kaggle - LLM Science Exam
|
6th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/corochann" target="_blank">@corochann</a> and sorry for the late reply. We did the benchmarking at the start of the competition, where we observed a decent boost, e.g. for our <code>span model + all-MiniLM-L6-v2 retriever + top k = 2</code> setup the score improved from <code>0.829</code> to <code>0.841</code> with fine-tuning.</p>
<p>Thanks to Kaggle for hosting the LLM competition - it was both challenging and well-structured. A big shout-out to our community for the insightful discussions and for demonstrating what's possible under limited GPU resources. Congratulations to the winning teams, we learned a lot from the solution write-ups. Many thanks to my super talented teammates <a href="https://www.kaggle.com/ubamba98" target="_blank">@ubamba98</a>, <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a>, <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> and <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> for such an amazing collaboration and perfect teamwork!</p>
<h2>Code Links</h2>
<ul>
<li>Github repo: <a href="https://github.com/rbiswasfc/llm-science-exam" target="_blank">https://github.com/rbiswasfc/llm-science-exam</a> </li>
<li>Inference notebook: <a href="https://www.kaggle.com/code/ubamba98/a08-exam-solver?scriptVersionId=145861577" target="_blank">https://www.kaggle.com/code/ubamba98/a08-exam-solver?scriptVersionId=145861577</a></li>
</ul>
<h2>Summary</h2>
<p>From the very beginning, we followed the retriever-reader framework for answering the STEM MCQs.</p>
<ul>
<li>Retrievers: Get context data<ul>
<li>Corpus: we created a custom STEM corpus by filtering wiki articles based on their category metadata</li>
<li>Generated synthetic MCQs using GPT-3.5, GPT4, LLaMA2 70b & Falcon 180b, which were used to train both retrievers and readers (downstream MCQ models)</li>
<li>Reranked top 10 retrieved chunks using a cross-encoder and provided top 2 to 4 re-ranked chunks to readers to solve the MCQs.</li></ul></li>
<li>Readers: Solve MCQs<ul>
<li>We used an ensemble of DeBERTas and LLMs.</li>
<li>DeBERTa: we explored different training strategies, such as<ul>
<li>Span approach: to make use of cross-option information</li>
<li>DebertaV2ForMultipleChoice: for diversity</li>
<li><a href="https://aclanthology.org/2021.eacl-main.20/" target="_blank">PET</a>: for diversity</li></ul></li>
<li>LLM: we explored both fine-tuning and zero-shot strategies<ul>
<li>Finetuned Open-Orca/Mistral-7B-OpenOrca using LoRA</li>
<li>Top 5% of MCQs (as ranked by prediction entropy, a proxy for MCQ difficulty) were handled by two 70b LLMs:<ul>
<li>platypus2-70b-instruct</li>
<li>sheep-duck-llama-2</li></ul></li></ul></li></ul></li>
</ul>
<p><strong>NOTE</strong>: We used <code>platypus2-70b-instruct</code> in our pipeline - We're uncertain about the compatibility of this model with the competition rules.</p>
<h2>1. Retrievers: Get Context Data</h2>
<p>We adopted the standard retrieve & re-rank pipeline to find MCQ-specific relevant text chunks from a custom STEM wikipedia corpus.</p>
<h3>1.1 STEM Wiki Corpus</h3>
<p>To address frequent rendering issues (numbers, equations & symbols) and to filter out irrelevant articles from the existing wiki corpora, we created a custom STEM wiki corpus as follows (a collection sketch is given after this list):</p>
<ul>
<li>Define a set of seed wikipedia categories related to STEM topics such as Category:Concepts in physics, Category:Physical quantities, etc.</li>
<li>For each category, recursively collect the member pages and subcategories up to a certain depth.</li>
<li>Extract the page contents of the collected wiki URLs using the Wikipedia-API (~500k pages).</li>
</ul>
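<p>A hedged sketch of the recursive category collection using the <code>wikipedia-api</code> package; the seed list, depth limit and constructor arguments are illustrative (the constructor signature differs slightly across package versions), not our exact configuration:</p>
<pre><code class="language-python">
import wikipediaapi

wiki = wikipediaapi.Wikipedia(user_agent="stem-corpus-builder", language="en")

def collect_pages(category_page, depth, max_depth, pages):
    """Recursively walk member pages and subcategories of a category."""
    for member in category_page.categorymembers.values():
        if member.ns == wikipediaapi.Namespace.CATEGORY and max_depth > depth:
            collect_pages(member, depth + 1, max_depth, pages)
        elif member.ns == wikipediaapi.Namespace.MAIN:
            pages[member.title] = member

seed_categories = ["Category:Concepts in physics", "Category:Physical quantities"]
pages = {}
for cat in seed_categories:
    collect_pages(wiki.page(cat), depth=0, max_depth=2, pages=pages)

# page.text gives the rendered article text that is later chunked
corpus = {title: page.text for title, page in pages.items()}
</code></pre>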
<p><strong>Chunking:</strong> We first split the full text of each article into its sections. Longer sections were further broken down into smaller chunks of approximately 300 tokens. We maintained two representations for each chunk (see the sketch after this list):</p>
<ul>
<li>A short representation without overlap - used for embedding and search</li>
<li>A longer representation with overlap from the previous and next chunks - used as contexts in the downstream MCQ models.</li>
</ul>
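<p>A simplified sketch of the two-representation chunking; the tokenizer choice, chunk size and overlap below are assumptions, not our exact preprocessing code:</p>
<pre><code class="language-python">
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in tokenizer

def chunk_section(section_text, chunk_tokens=300, overlap_tokens=64):
    ids = tokenizer(section_text, add_special_tokens=False)["input_ids"]
    short_chunks, long_chunks = [], []
    for start in range(0, len(ids), chunk_tokens):
        # short representation: no overlap, used for embedding and search
        short_ids = ids[start:start + chunk_tokens]
        # long representation: adds context from the neighbouring chunks, fed to the readers
        lo = max(0, start - overlap_tokens)
        hi = min(len(ids), start + chunk_tokens + overlap_tokens)
        short_chunks.append(tokenizer.decode(short_ids))
        long_chunks.append(tokenizer.decode(ids[lo:hi]))
    return short_chunks, long_chunks
</code></pre>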
<h3>1.2 Retriever</h3>
<p>Text chunks retrieved with pre-trained embedding models such as thenlper/gte-base and BAAI/bge-base-en-v1.5 already provided a strong performance boost for the downstream MCQ models. In our pipeline, we observed a further performance improvement by fine-tuning the embedding models. We re-purposed the synthetically generated MCQs for the retrieval task, as illustrated in the figure below:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F51fc3bccc92c4d90ce6d5358ed93fe4d%2FScreenshot%202023-10-17%20at%2012.09.15%20AM.png?generation=1697472582799041&alt=media" alt=""></p>
<p>Retrievers were fine-tuned with the NCE loss + dynamically computed in-batch hard negatives. Specifically, we used the batching strategy from this <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/395110" target="_blank">solution post</a>. In our pipeline, we used a union of top 10 text chunks retrieved from fine-tuned <code>thenlper/gte-base</code> & <code>BAAI/bge-base-en-v1.5</code> models.</p>
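<p>As a simplified stand-in for this fine-tuning step, the snippet below uses sentence-transformers' MultipleNegativesRankingLoss, which implements an InfoNCE-style contrastive loss with in-batch negatives; our hard-negative batching strategy from the linked solution post and the real (query, context) pairs are not reproduced here:</p>
<pre><code class="language-python">
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("thenlper/gte-base")

# (question, positive context) pairs built from the synthetic MCQs and their source chunks
train_examples = [
    InputExample(texts=["placeholder question about a physics concept",
                        "placeholder wiki chunk that answers the question"]),
    InputExample(texts=["another placeholder question",
                        "another placeholder positive chunk"]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=32)
loss = losses.MultipleNegativesRankingLoss(model)  # other pairs in the batch act as negatives

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
model.save("gte-base-finetuned")  # hypothetical output directory
</code></pre>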
<h3>1.3 Reranker</h3>
<p>To reduce noise in the contexts provided to the downstream MCQ models, we re-ranked top retrieved candidates using a cross-encoder (deberta-v3-base). We selected top 2 to 4 re-ranked contexts as per a dynamic threshold on computed relevancy score from the re-ranker.</p>
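<p>A hedged sketch of the re-ranking step; the checkpoint below is a public stand-in (we fine-tuned a deberta-v3-base cross-encoder) and the fixed threshold replaces our dynamic one:</p>
<pre><code class="language-python">
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # stand-in checkpoint

def rerank(question, chunks, min_keep=2, max_keep=4, threshold=0.5):
    scores = reranker.predict([(question, chunk) for chunk in chunks])
    order = sorted(range(len(chunks)), key=lambda i: -scores[i])
    kept = [chunks[i] for i in order[:max_keep] if scores[i] >= threshold]
    # always pass at least min_keep contexts to the readers
    return kept if len(kept) >= min_keep else [chunks[i] for i in order[:min_keep]]
</code></pre>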
<h2>2. Readers: Solve MCQs</h2>
<p>We explored several strategies to solve the MCQs using the retrieved contexts.</p>
<h3>2.1 Span Classification</h3>
<ul>
<li>Mean pooling of tokens from each option to extract option-wise features</li>
<li>Classification with cross-option information</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2Fe0bc561a6930b8247d3f0142db008b2e%2FScreenshot%202023-10-17%20at%2012.12.08%20AM.png?generation=1697472753776815&alt=media" alt=""></p>
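<p>A simplified sketch of how such a span head can be wired up: the question and all five options are encoded in one sequence (so attention lets the options interact), each option's tokens are mean-pooled into a feature vector, and a shared head scores the five options jointly. The pooling and head details are assumptions, not our exact training code:</p>
<pre><code class="language-python">
import torch
import torch.nn as nn
from transformers import AutoModel

class SpanMCQModel(nn.Module):
    def __init__(self, backbone="microsoft/deberta-v3-large"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, option_masks):
        # option_masks: (batch, 5, seq_len) binary masks marking each option's tokens
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        masks = option_masks.float()
        # mean-pool each option's token span into one vector per option: (batch, 5, hidden)
        option_feats = torch.einsum("bos,bsh->boh", masks, hidden)
        option_feats = option_feats / masks.sum(-1, keepdim=True).clamp(min=1)
        logits = self.classifier(option_feats).squeeze(-1)  # (batch, 5)
        return logits  # cross-entropy over the five options during training
</code></pre>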
<h3>2.2 LLM Fine-tuning using LoRA</h3>
<p>Towards the end of the competition, it was evident that we were hitting a performance ceiling with DeBERTa models. Hence, we decided to explore fine-tuning of LLMs. While we experimented with several models (e.g. flan-t5-xl (3b), flan-t5-xxl (11b), llama-7b, mistral-7b, llama-30b and llama-70b), our final pipeline includes one LoRA fine-tuned mistral-7b model. The LoRA settings were as follows (a configuration sketch is given after this list):</p>
<ul>
<li>lr 3.e-4 (slight variations)</li>
<li>rank 16</li>
<li>alpha 16</li>
<li>lora modules on all linear components (qkv, out_proj, ffn, gate_proj, etc.) and embeddings</li>
<li>dropout 0.1</li>
<li>no sequence packing</li>
<li>bf16</li>
</ul>
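<p>A hedged sketch of this configuration with peft; the target module names follow Mistral's layer naming, and exactly how the embeddings were adapted is an assumption (shown here as a LoRA target). The learning rate of ~3e-4 belongs to the optimizer, not to the config:</p>
<pre><code class="language-python">
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "Open-Orca/Mistral-7B-OpenOrca", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.1,
    # all linear projections in the attention and MLP blocks, plus the embedding matrix
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj", "embed_tokens"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
</code></pre>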
<p>We followed the cross-option setting during LLM fine-tuning, i.e. predicting the right answer given all the options. We did not explore training the model in a binary classification setup.</p>
<h3>2.3 Zero Shot using 70b models</h3>
<p>A huge shout out to <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> for demonstrating the potential of 70b LLM using Kaggle GPU!!</p>
<p>About 5 days before the end of the competition, we started experimenting with 70b models. We realized we wouldn’t be able to run every example through a model this big, but we also knew that our deberta models were doing a good job for 90-95% of the questions. Thus, we took the least confident predictions and passed them to the 70b model, using <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>'s code. We did some light prompt engineering in the last few days and found that taking the difference between two tokens (yes/no or true/false) gave slightly better results. In the end we used both platypus 70b and sheep-duck-llama-70b. We adjusted the prompt from the public notebook to match the way each model was trained. Our last few submissions were experiments passing 5-10% of the samples, blending with other models, or just having the 70b predictions be the final predictions. We did get lucky because our best scores came on the final day.</p>
<p>CV on our custom dataset showed slightly better performance passing all the options at once, but we couldn’t figure out how to do that on Kaggle.</p>
<p>Local validation results with <code>Sheep-duck-llama</code>:</p>
<ul>
<li>(all options at once) ABCDE tokens: 0.91</li>
<li>(one option at a time) only yes token: 0.886</li>
<li>(one option at a time) only no token: 0.888</li>
<li>(one option at a time) yes token - no token: 0.891</li>
</ul>
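<p>A hedged sketch of the two ideas above: routing only the least confident reader predictions (ranked by prediction entropy) to the 70b model, and scoring each option one at a time by the difference between the "yes" and "no" next-token logits. The prompt template, percentile and placeholder probabilities are illustrative, not the exact ones we used:</p>
<pre><code class="language-python">
import numpy as np
import torch

def entropy(probs, eps=1e-9):
    return -(probs * np.log(probs + eps)).sum(axis=1)

# probs: (n_questions, 5) probabilities from the DeBERTa ensemble (random placeholders here)
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=4000)
hard_idx = np.argsort(-entropy(probs))[: int(0.05 * len(probs))]  # roughly the hardest 5%

@torch.no_grad()
def yes_minus_no(model, tokenizer, context, question, option):
    prompt = (f"Context: {context}\nQuestion: {question}\n"
              f"Proposed answer: {option}\nIs the proposed answer correct? Answer yes or no: ")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    next_token_logits = model(**inputs).logits[0, -1]
    yes_id = tokenizer(" yes", add_special_tokens=False).input_ids[-1]
    no_id = tokenizer(" no", add_special_tokens=False).input_ids[-1]
    return (next_token_logits[yes_id] - next_token_logits[no_id]).item()

# For each question in hard_idx, score all five options with yes_minus_no(...) and
# overwrite the reader's top-3 with the options ranked by this score.
</code></pre>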
<p>We did try fine-tuning the 70b model using QLoRA, but we ran into issues we could not debug before time ran out.</p>
<h2>Team Members</h2>
<ul>
<li>Abhishek Thakur (@abhishek)</li>
<li>Trushant Kalyanpur (@trushk)</li>
<li>Udbhav Bamba (@ubamba98)</li>
<li>Nicholas Broad (@nbroad)</li>
<li>Raja Biswas (@conjuring92)</li>
</ul>
<p>yes, this is our github repo: <a href="https://github.com/rbiswasfc/llm-science-exam" target="_blank">https://github.com/rbiswasfc/llm-science-exam</a></p>
<p>thanks <a href="https://www.kaggle.com/laineyzheng" target="_blank">@laineyzheng</a>! Here is the notebook used to create the <code>retriever-data-v4</code> dataset: <a href="https://www.kaggle.com/code/conjuring92/retriever-data-prep-v4/notebook" target="_blank">https://www.kaggle.com/code/conjuring92/retriever-data-prep-v4/notebook</a></p>
<p>Thanks <a href="https://www.kaggle.com/sunmokoo" target="_blank">@sunmokoo</a> for your question and sorry for the later reply. We didn't encounter such issues. I'd recommend to check if there is a large bias for 'A' in your training MCQ answer keys.</p>
|