Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Feedback Prize - English Language Learning | 18th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>18th place Solution</h1>
<p>First of all, I would like to thank Kaggle and the hosting staff for hosting such an interesting competition. And I would like to say congrats to all the winners. I could not get the solo gold, but I learned a lot from this competition.</p>
<h1>1. Overview (Almost the same as others)</h1>
<p>My solution is a weighted-average ensemble of 21 models, with the weights found by Nelder-Mead, plus pseudo labeling. This is almost the same as other solutions. The original part of my approach is replacing \n with [Chapter], which improved the CV by 0.0015. In this discussion, I also show my CV and Private LB relationship in detail.</p>
<h1>1.1 Configuration</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Fa2ac96703257cc1d4353d8fc871d2028%2FClipboard01.jpg?generation=1670408759841540&alt=media" alt=""></p>
<h1>1.2 Relationship of cv and lb</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2F5ea21d5c1e0e384bd0e25799427bd7ae%2FClipboard02.jpg?generation=1670408854083583&alt=media" alt=""></p>
<h1>1.3 cv and score in each model</h1>
<p>※ pseudo1 and 2 are explained in the next chapter<br>
※ Rows are ordered by best private LB score</p>
<table>
<thead>
<tr>
<th>EXP</th>
<th>model</th>
<th>pseudo1</th>
<th>pseudo2</th>
<th>pseudo<br> dataset</th>
<th>kfold</th>
<th>cv</th>
<th>private lb</th>
<th>public lb</th>
</tr>
</thead>
<tbody>
<tr>
<td>214</td>
<td>deberta v3 large</td>
<td>〇</td>
<td>-</td>
<td>FB1 only</td>
<td>4</td>
<td>0.4467</td>
<td>0.4378</td>
<td>0.4404</td>
</tr>
<tr>
<td>123</td>
<td>deberta v3 large</td>
<td>〇</td>
<td>-</td>
<td>FB1 <br>+ Commonlit</td>
<td>4</td>
<td>0.4467</td>
<td>0.4379</td>
<td>0.4402</td>
</tr>
<tr>
<td>200</td>
<td>deberta v3 large</td>
<td>-</td>
<td>〇</td>
<td>FB1 only</td>
<td>4</td>
<td>0.4471</td>
<td>0.4381</td>
<td>0.4396</td>
</tr>
<tr>
<td>250</td>
<td>deberta v3 large</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>15</td>
<td>0.4470</td>
<td>0.4382</td>
<td>0.4410</td>
</tr>
<tr>
<td>161</td>
<td>deberta v3 large</td>
<td>-</td>
<td>〇</td>
<td>FB1 <br>+ Commonlit</td>
<td>4</td>
<td>0.4472</td>
<td>0.4383</td>
<td>0.4401</td>
</tr>
<tr>
<td>102</td>
<td>deberta base</td>
<td>〇</td>
<td>-</td>
<td>FB1 <br>+ Commonlit</td>
<td>4</td>
<td>0.4483</td>
<td>0.4387</td>
<td>0.4386</td>
</tr>
<tr>
<td>201</td>
<td>deberta base</td>
<td>-</td>
<td>〇</td>
<td>FB1 only</td>
<td>4</td>
<td>0.4503</td>
<td>0.4390</td>
<td>0.4399</td>
</tr>
<tr>
<td>163</td>
<td>deberta base</td>
<td>-</td>
<td>〇</td>
<td>FB1 <br>+ Commonlit</td>
<td>4</td>
<td>0.4505</td>
<td>0.4390</td>
<td>0.4403</td>
</tr>
<tr>
<td>215</td>
<td>deberta base</td>
<td>〇</td>
<td>-</td>
<td>FB1 only</td>
<td>4</td>
<td>0.4486</td>
<td>0.4391</td>
<td>0.4389</td>
</tr>
<tr>
<td>190</td>
<td>deberta base</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>10</td>
<td>0.4508</td>
<td>0.4393</td>
<td>0.4393</td>
</tr>
<tr>
<td>777</td>
<td>deberta v3 large</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>10</td>
<td>0.4479</td>
<td>0.4398</td>
<td>0.4413</td>
</tr>
<tr>
<td>1</td>
<td>deberta v3 large</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>4</td>
<td>0.4486</td>
<td>0.4400</td>
<td>0.4421</td>
</tr>
<tr>
<td>170</td>
<td>deberta base</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>4</td>
<td>0.4517</td>
<td>0.4402</td>
<td>0.4401</td>
</tr>
<tr>
<td>20</td>
<td>deberta v3 large</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>15</td>
<td>0.4485</td>
<td>0.4403</td>
<td>0.4413</td>
</tr>
<tr>
<td>18</td>
<td>deberta base</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>4</td>
<td>0.4516</td>
<td>0.4405</td>
<td>0.4393</td>
</tr>
<tr>
<td>131</td>
<td>deberta v1 large</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>4</td>
<td>0.4515</td>
<td>0.4413</td>
<td>0.4431</td>
</tr>
<tr>
<td>164</td>
<td>electra-large</td>
<td>-</td>
<td>〇</td>
<td>FB1 <br>+ Commonlit</td>
<td>4</td>
<td>0.4544</td>
<td>0.4415</td>
<td>0.4455</td>
</tr>
<tr>
<td>216</td>
<td>electra-large</td>
<td>〇</td>
<td>-</td>
<td>FB1 only</td>
<td>4</td>
<td>0.4507</td>
<td>0.4416</td>
<td>0.4454</td>
</tr>
<tr>
<td>202</td>
<td>electra-large</td>
<td>-</td>
<td>〇</td>
<td>FB1 only</td>
<td>4</td>
<td>0.4550</td>
<td>0.4418</td>
<td>0.4457</td>
</tr>
<tr>
<td>114</td>
<td>electra-large</td>
<td>〇</td>
<td>-</td>
<td>FB1 <br>+ Commonlit</td>
<td>4</td>
<td>0.4516</td>
<td>0.4419</td>
<td>0.4458</td>
</tr>
<tr>
<td>17</td>
<td>deberta v3 large</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>4</td>
<td>0.4502</td>
<td>0.4419</td>
<td>0.4430</td>
</tr>
<tr>
<td>30</td>
<td>deberta v1 xlarge</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>4</td>
<td>0.4522</td>
<td>0.4420</td>
<td>0.4423</td>
</tr>
<tr>
<td>33</td>
<td>deberta v2 xlarge</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>4</td>
<td>0.4540</td>
<td>0.4425</td>
<td>0.4432</td>
</tr>
<tr>
<td>29</td>
<td>electra-large</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>4</td>
<td>0.4546</td>
<td>0.4432</td>
<td>0.4494</td>
</tr>
</tbody>
</table>
<h1>2. What worked</h1>
<h2>2.1 High Impact</h2>
<ul>
<li>Not lowercasing the text (CV: 0.002 improvement)</li>
</ul>
<p>At first, I converted the sentences to lowercase, but this hurt performance. In this competition we need to predict grammar, conventions, etc., so capitalization may be important.</p>
<ul>
<li>Preprocess (cv 0.002 improvement)</li>
</ul>
<p>As shown in other solutions, the target names are concatenated with space separators and followed by [SEP] (CV 0.0005 improvement). After that, I replaced line breaks with [Chapter] (CV 0.0015 improvement). Here is a concrete example, followed by a short preprocessing sketch.</p>
<pre><code>Cohesion Syntax Vocabulary Phraseology Grammar Conventions [SEP] I think that students would benefit from learning at home,because they wont have to change and get up early in the morning to shower and do there hair. taking only classes helps them because at there house they'll be pay more attention. they will be comfortable at home. [Chapter] The hardest part of school is getting ready. you wake up go brush your teeth and go to your closet and look at your cloths. …
</code></pre>
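<p>The sketch below uses only plain string operations; the prefix string and the literal "[SEP]" marker follow the example above, while the function name and the idea of registering [Chapter] as a special token are illustrative additions, not the author's exact code.</p>
<pre><code># Minimal preprocessing sketch (illustrative, not the author's exact pipeline)
TARGET_PREFIX = "Cohesion Syntax Vocabulary Phraseology Grammar Conventions"

def preprocess(full_text: str) -> str:
    # keep paragraph structure visible to the model by marking line breaks
    text = full_text.replace("\n", " [Chapter] ")
    # prepend the six target names followed by [SEP]
    return f"{TARGET_PREFIX} [SEP] {text}"

# [Chapter] could additionally be registered as a special token, e.g.
# tokenizer.add_special_tokens({"additional_special_tokens": ["[Chapter]"]})
</code></pre>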
<ul>
<li>AWP (cv 0.001 improvement)</li>
<li>Large k-fold</li>
<li>Nelder-Mead ensemble</li>
</ul>
<p>I used a Nelder-Mead ensemble for each target. Switching to it from a simple mean ensemble jumped my public LB from around 600th to about 130th at that time.</p>
<ul>
<li>Pseudo labeling 1 (1 epoch pretrain + train.csv training)</li>
</ul>
<p>I had already run into the leakage problem of pseudo labeling in the NBME competition, so I knew how to avoid it. <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/323156" target="_blank">my solution on NBME</a></p>
<p>As shown in other discussions, I first trained the 5 fold models using train.csv and ran inference on the Feedback 1 (and CommonLit) datasets with each of them. By ensembling these predictions, the pseudo-label data was made. Second, I pretrained on the pseudo-label data. Finally, I re-trained on train.csv starting from the pretrained models. For example, this achieved a CV of 0.447060 with the deberta-v3-large model. A high-level outline is sketched below.</p>
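<p>In this outline, <code>train_fn</code> and <code>predict_fn</code> are hypothetical stand-ins for the author's training and inference pipeline; they are not real library functions.</p>
<pre><code>import numpy as np

def pseudo_label_round(train_df, fb1_texts, train_fn, predict_fn, n_folds=5):
    # 1) train one model per fold on train.csv
    fold_models = [train_fn(train_df, fold=f) for f in range(n_folds)]
    # 2) average their predictions on the external data -> pseudo labels
    pseudo = np.mean([predict_fn(m, fb1_texts) for m in fold_models], axis=0)
    # 3) pretrain a fresh backbone for one epoch on the pseudo-labelled data
    pretrained = train_fn((fb1_texts, pseudo), epochs=1)
    # 4) fine-tune from the pretrained weights on train.csv again
    return train_fn(train_df, init_from=pretrained)
</code></pre>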
<ul>
<li>Pseudo labeling 2 (adding pseudo-labeled data to train.csv)</li>
</ul>
<p>This is simple pseudo labeling. I added the pseudo-labeled data (as explained in pseudo labeling 1) to train.csv and trained on the combination. This achieved a CV of 0.446743 with the deberta-v3-large model.</p>
<h2>2.2 Small Impact</h2>
<ul>
<li>Using CommonLit data for pseudo labeling</li>
</ul>
<p>We were allowed to use the CommonLit data <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/348973#2047629" target="_blank">reference</a>. Adding it improved the ensemble CV, but only slightly.</p>
<ul>
<li>Changing seed</li>
</ul>
<p>Changing the seed and ensembling the results improved the CV a bit, but I think it was overfitting.</p>
<h1>3. What didn't work for me</h1>
<ul>
<li>GBDT post-processing with tf-idf etc.</li>
<li>Replacing words with [MASK] in the sentence</li>
<li>Cutmix</li>
<li>Adding an LSTM head to the model</li>
<li>Embedding SVR with fine-tuned models</li>
</ul>
<h1>4. Important Citations</h1>
<ul>
<li>FB3 / Deberta-v3-base baseline [train] <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">link</a></li>
<li>FB3 / Deberta-v3-base baseline [inference] <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference/notebook" target="_blank">link</a></li>
<li>(fgm)FB3 Deberta v3 base[train] <a href="https://www.kaggle.com/code/mozattt/fgm-fb3-deberta-v3-base-train" target="_blank">link</a></li>
</ul>
<h1>5. Thanks and Acknowledgements:</h1>
<p>For various reasons, I started 11 days before the deadline. At this point, the various public notebooks were out and I was able to incorporate them into my pipeline and improve my score. <a href="https://www.kaggle.com/nakama" target="_blank">@nakama</a> and <a href="https://www.kaggle.com/Mozattt" target="_blank">@Mozattt</a> deserve my deepest thanks.</p>
<p>Also, I learned a lot from this competition, especially the Efficiency Prize Evaluation, which was very practical. In real work, it is often costly and time-consuming to prepare a GPU, and this initiative helped my work a lot. Finally, I would like to say thank you again to my host, kaggle, for organizing such a competition.</p>
<h1>6. Team Members</h1>
<p>chumajin</p>
<p><a href="https://www.kaggle.com/zvr842" target="_blank">@zvr842</a> Thank you very much !! I needed another week for gold medal… 😂</p>
<p><a href="https://www.kaggle.com/mpwolke" target="_blank">@mpwolke</a> Hi, my friend. Always thank you for comment. It always motivates me👍</p>
<p><a href="https://www.kaggle.com/julienmouchnino" target="_blank">@julienmouchnino</a> This one (past feedback competition data). </p>
<p><a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/data" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/data</a></p>
<p><a href="https://www.kaggle.com/julienmouchnino" target="_blank">@julienmouchnino</a> Yes, common knowledge. <br>
You can learn from here.</p>
<p><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a></p>
<p><a href="https://www.kaggle.com/code/mozattt/fgm-fb3-deberta-v3-base-train" target="_blank">https://www.kaggle.com/code/mozattt/fgm-fb3-deberta-v3-base-train</a></p>
<p><a href="https://www.kaggle.com/tonymarkchris" target="_blank">@tonymarkchris</a> Thank you for comment ! In this competition, one of the target is grammar. So I looked for ways to clarify things like paragraph breaks. As a result, I arrived at the word [Chapter]. The 8th team seems to have improved their score by replacing it with [PARAGRAPH] <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369524" target="_blank">link here</a></p>
<p><a href="https://www.kaggle.com/julienmouchnino" target="_blank">@julienmouchnino</a> how about this one (my notebook on the past competition)?</p>
<p><a href="https://www.kaggle.com/code/chumajin/pytorch-bert-beginner-s-room" target="_blank">https://www.kaggle.com/code/chumajin/pytorch-bert-beginner-s-room</a></p>
<p>If you want to learn more, you can search other Kaggle notebooks or Google, or study from a book/YouTube, etc. Most people, including me, do this.</p> |
Feedback Prize - English Language Learning | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Since we used 2-3 public models and embedding models [they were surely trained on different splits], we were aware that this might cause leakage. That's why we only added models to the ensemble that gave a boost in both ensemble CV and LB.</p>
<p>yes, and since we had 3 final subs to choose. we took this risk with one sub.</p>
<p>Thanks and congratulations to you and <a href="https://www.kaggle.com/rashmibanthia" target="_blank">@rashmibanthia</a> for becoming competition Master as well.</p>
<p>No, it was too late to make this change, as most of the models were already trained and we didn't want to retrain them.</p>
<p>5 for training all models and 25 for training embeddings using SVR </p>
<p>yes we used only one seed which was 42</p>
<p>In ensemble we used 18 models in total, out of which 15 were trained using 5 folds and remaining 3 models which we used for Embedding extraction and SVR are trained using 25 folds. </p>
<p>We used more folds only to train the embedding models, and their CV was already weak compared to the others. But you are right, ensembling using different folds will lead to leakage and we need to be very careful while doing that. We can't blindly rely on the CV score; we need some validation from the LB score as well.</p>
<p>Thanks a lot to the hosts and Kaggle for hosting this interesting competition, we had great fun working on this one. Also congratulations to all other competitors for the great solutions and results. Special thanks to my teammates <a href="https://www.kaggle.com/evgeniimaslov2" target="_blank">@evgeniimaslov2</a> and <a href="https://www.kaggle.com/poteman" target="_blank">@poteman</a> for such a perfect teamwork. </p>
<h2>Summary:</h2>
<p>Our solution is based on training lots of different models with different pooling techniques and max lengths and finally ensembling. We also used 3 models to extract embeddings and add them to ensemble. </p>
<h2>Cross validation</h2>
<p>Throughout this competition we had a near perfect correlation between ensemble CV and LB. Whenever we saw some improvement on CV, we saw it reflected in a similar manner on the LB with a very small random range. For splitting the folds, we just used a MultilabelStratifiedKFold strategy.</p>
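<p>As a reference for newcomers, a typical fold split with MultilabelStratifiedKFold (from the iterative-stratification package) looks like the sketch below; the target column names come from this competition, and the rest of the wiring is an assumption.</p>
<pre><code>import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

target_cols = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]
train = pd.read_csv("train.csv")

mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
train["fold"] = -1
for fold, (_, val_idx) in enumerate(mskf.split(train, train[target_cols])):
    train.loc[val_idx, "fold"] = fold
</code></pre>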
<h2>Modeling:</h2>
<p>Our final solution is a combination of different modeling approaches. </p>
<p>Train/PL data + Different Pooling techniques + model + max_len(768/1462 for deberta models, 512 for others) + freezing top n layers + re_init/no re_init top layer + differential LR + AWP. </p>
<h3>Poolings Used:</h3>
<ol>
<li>MeanPooling (a minimal implementation is sketched after this list)</li>
<li>ConcatPooling</li>
<li>WeightedLayerPooling</li>
<li>GemPooling</li>
<li>LSTMPooling</li>
</ol>
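<p>For completeness, here is the minimal MeanPooling implementation referenced in the list above: the standard Kaggle version of a masked mean over the last hidden state, shown for illustration.</p>
<pre><code>import torch
import torch.nn as nn

class MeanPooling(nn.Module):
    def forward(self, last_hidden_state, attention_mask):
        # average token embeddings while ignoring padding positions
        mask = attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
        summed = torch.sum(last_hidden_state * mask, dim=1)
        counts = mask.sum(dim=1).clamp(min=1e-9)
        return summed / counts
</code></pre>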
<h3>Models Used:</h3>
<ol>
<li>microsoft-deberta-v3-base</li>
<li>deberta-v3-large</li>
<li>deberta-v2-xlarge</li>
<li>roberta-large</li>
<li>distilbert-base-uncased</li>
</ol>
<p>Model Used for Embeddings and SVR</p>
<ol>
<li>"facebook/bart-large"</li>
<li>"flax-sentence-embeddings/all_datasets_v3_roberta-large"</li>
<li>"facebook/bart-large-mnli"</li>
</ol>
<p>Public Models: </p>
<h3>Weight Tuning:</h3>
<ul>
<li>We used optuna to tune the model weights; since we used different folds for training, we used the final OOFs to tune the weights. The weights are tuned target-wise.</li>
<li>We only add a model to our ensemble if it improves both our ensemble CV and LB.</li>
<li>The best CV we got using this approach was 0.44073 (we didn't select this sub fearing overfitting, but it had the best private score). Our second best CV was 0.44096, with good CV/LB correlation:
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
<th><br></th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-base with PL (Roh)</td>
<td>0.4464</td>
<td>0.437219</td>
<td>0.437982<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-large (Yev)</td>
<td>0.4460</td>
<td>0.436758</td>
<td>0.434625<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-large (pub)</td>
<td>0.4548</td>
<td>0.439502</td>
<td>0.437965<br></td>
<td></td>
</tr>
<tr>
<td>nischay (pub)</td>
<td>0.4588</td>
<td>0.442645</td>
<td>0.439982<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v2-xlarge (pub)</td>
<td>0.4675</td>
<td>0.442497</td>
<td>0.443604<br></td>
<td></td>
</tr>
<tr>
<td>roberta-large (Roh)</td>
<td>0.4596</td>
<td>0.443616</td>
<td>0.444081<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-large (pub)</td>
<td>0.4552</td>
<td>0.440045</td>
<td>0.437234<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-large (Roh)</td>
<td>0.4556</td>
<td>0.444469</td>
<td>0.440331<br></td>
<td></td>
</tr>
<tr>
<td>distilbert-base-uncased (Roh)</td>
<td>0.4794</td>
<td>0.458219</td>
<td>0.459658<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-large (yev)</td>
<td>0.4440</td>
<td>0.435268</td>
<td>0.434334<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-base with AVG(trainPl + train) (Roh)</td>
<td>0.4482</td>
<td>0.43856</td>
<td>0.441147<br></td>
<td></td>
</tr>
<tr>
<td>roberta-large with AVG(trainPl + train) finetune train (Roh)</td>
<td>0.4649</td>
<td>0.449538</td>
<td>0.447643<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-large (yev)</td>
<td>0.4538</td>
<td>0.441582</td>
<td>0.439262<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-large (yev)</td>
<td>0.4617</td>
<td>0.448342</td>
<td>0.442106<br></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-large (yev)</td>
<td>0.4447</td>
<td>0.436268</td>
<td>0.435978<br></td>
<td></td>
</tr>
<tr>
<td>fb_bart_large_svr (Roh)</td>
<td>0.4635</td>
<td>0.455645</td>
<td>0.453132<br></td>
<td></td>
</tr>
<tr>
<td>ad_v3_roberta_large_svr (Roh)</td>
<td>0.5113</td>
<td>0.501901</td>
<td>0.500084<br></td>
<td></td>
</tr>
<tr>
<td>fb_bart_large_mnli_svr (Roh)</td>
<td>0.4708</td>
<td>0.456652</td>
<td>0.459615<br></td>
<td></td>
</tr>
<tr>
<td>Final Ensemble CV</td>
<td>0.44096</td>
<td>0.433821</td>
<td>0.433356</td></tr></tbody></table></li>
</ul>
<p>We also noticed that generating soft/pseudo labels on the train data leads to huge overfitting, with a CV around 0.437.</p>
<h2>Pseudo labels:</h2>
<ol>
<li>FB1 data</li>
<li>Train data</li>
<li>Mean of Train Pl / Actual Labels [This gives nice boost in both CV/LB]</li>
</ol>
<h2>What Worked:</h2>
<ol>
<li>Different Pooling techniques</li>
<li>Different max_len</li>
<li>freezing top n layers</li>
<li>re_init top n layers.</li>
<li>Training with Differential learning rate</li>
<li>Pseudo labels</li>
</ol>
<h2>What didn't work</h2>
<ol>
<li>Augmentations</li>
<li>Post processing</li>
</ol>
<h2>Important Citations:</h2>
<p><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a><br>
<a href="https://www.kaggle.com/code/nischaydnk/fb3-pytorch-lightning-training-baseline" target="_blank">https://www.kaggle.com/code/nischaydnk/fb3-pytorch-lightning-training-baseline</a><br>
<a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a><br>
<a href="https://www.kaggle.com/code/kojimar/fb3-deberta-family-inference-9-28-updated" target="_blank">https://www.kaggle.com/code/kojimar/fb3-deberta-family-inference-9-28-updated</a><br>
<a href="https://github.com/ybabakhin/kaggle-feedback-effectiveness-1st-place-solution" target="_blank">https://github.com/ybabakhin/kaggle-feedback-effectiveness-1st-place-solution</a></p>
<h2>Extra Links:</h2>
<p>Link to our best submission <a href="https://www.kaggle.com/code/rohitsingh9990/merged-submission-01/notebook?scriptVersionId=111953356" target="_blank">https://www.kaggle.com/code/rohitsingh9990/merged-submission-01/notebook?scriptVersionId=111953356</a></p>
<h2>Thanks and Acknowledgements:</h2>
<p>Special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>, <a href="https://www.kaggle.com/nischay" target="_blank">@nischay</a>, <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a>, <a href="https://www.kaggle.com/e0xextazy" target="_blank">@e0xextazy</a> and <a href="https://www.kaggle.com/kojimar" target="_blank">@kojimar</a> for sharing their notebooks/datasets/ideas publicly. </p>
<h2>Team Members</h2>
<p><a href="https://www.kaggle.com/rohitsingh9990" target="_blank">@rohitsingh9990</a><br>
<a href="https://www.kaggle.com/evgeniimaslov2" target="_blank">@evgeniimaslov2</a><br>
<a href="https://www.kaggle.com/poteman" target="_blank">@poteman</a></p>
<p>Training code can be accessed from <a href="https://github.com/rohitsingh02/kaggle-feedback-english-language-learning-1st-place-solution" target="_blank">https://github.com/rohitsingh02/kaggle-feedback-english-language-learning-1st-place-solution</a></p>
<p>Congratulations <a href="https://www.kaggle.com/evgeniimaslov2" target="_blank">@evgeniimaslov2</a> for becoming Competitions Master.</p>
<p><a href="https://www.kaggle.com/mhilmiasyrofi" target="_blank">@mhilmiasyrofi</a> Thanks, the idea to use AWP came from past NLP competitions, most of our models we trained for 4 epochs and start epoch of AWP was 2, and the learning rate we used is 2e-5 for all layers except head for which it was 5e-4.</p>
<p>We generated soft labels / pseudo labels for the train data using our earlier best ensemble. Then I took the average of the newly generated train labels and the actual train labels. That's what <code>Mean of Train Pl / Actual Labels</code> means; a small illustration follows.</p>
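<p>Concretely, the blending could look like this (the pseudo-label file name is an assumption; the rest follows the description above):</p>
<pre><code>import pandas as pd

target_cols = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]
train = pd.read_csv("train.csv")                   # actual labels
train_pl = pd.read_csv("train_pseudo_labels.csv")  # assumed file of ensemble predictions

blended = train.copy()
blended[target_cols] = (train[target_cols].values + train_pl[target_cols].values) / 2.0
</code></pre>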
<p>Here is a code snippet.</p>
<pre><code>if 'deberta-v3-base' in cfg.architecture.model_name or 'mdeberta-v3-base' in cfg.architecture.model_name:
    self.model.embeddings.requires_grad_(False)
    self.model.encoder.layer[:9].requires_grad_(False)
</code></pre>
<p>Full training code will be released soon.</p>
<p>What we tried first was generating soft labels (PL) for the train data; training on this overfitted hugely, as we were getting a CV around 0.437. Then I thought I should try the average of the train PL and the actual train labels in an effort to reduce overfitting, and this experiment worked, as it gave some boost in both ensemble CV and LB.</p>
<p>First we trained a base model and tested its LB score. After that, for every model we trained we checked its ensemble CV/LB; if it improved both, we added that model to our ensemble, else we moved on to the next experiment.</p>
<p><a href="https://www.kaggle.com/rohitsingh9990/weight-tuning-optuna" target="_blank">https://www.kaggle.com/rohitsingh9990/weight-tuning-optuna</a> here i shared code to optimize weights using optuna.</p> |
Feedback Prize - English Language Learning | 20th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to organizers for this competition and all the competitors!<br>
Here is a brief introduction of my solution with the highest Private LB score (equivalent to 20th place).</p>
<h1>Overview</h1>
<p>My solution consists of a weighted average of several BERT models using the Nelder-Mead method and pseudo-labeling with text data from feedbacks 1 and 2.</p>
<p><a href="https://www.kaggle.com/code/columbia2131/fb3-sub-exp16202122303132333638464748" target="_blank"><strong>my inference code</strong></a></p>
<h1>What Worked</h1>
<h3>High Impact Idea</h3>
<p><strong>Pseudo-Labeling</strong></p>
<p>I trained deberta-v3-large on this competition data and then pseudo-labeled on the Feedback 1 and 2 data, excluding data that overlapped with this competition. <br>
deberta-v3-base, deberta-v3-large and deberta-v2-xlarge were then trained on the pseudo-labeled data, following <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/315707#:~:text=org/abs/2003.10580-,Meta%20Pseudo%20Labels,-notebook%20link%20on" target="_blank">a meta pseudo labels method</a> (CV: +0.005~0.01).</p>
<p><strong>Nelder-Mead method</strong></p>
<p>The Nelder-Mead method is one of the algorithms for optimization problems and is implemented in scipy's minimize function. <br>
The optimal weighted-average method using this technique is based on the <a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">2nd place solution in the CommonLit competition</a> by <a href="https://www.kaggle.com/takoihiraokazu" target="_blank">@takoihiraokazu</a>.</p>
<p>The implementation is in <a href="https://www.kaggle.com/code/columbia2131/fb3-sub-exp16202122303132333638464748" target="_blank">my solution</a>, which is as follows:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import mean_squared_error

# oof_preds / sub_preds: arrays of shape (n_models, n_samples, n_targets)
# train / test: the competition dataframes
target_cols = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]

def optimize_func(weights, idx, col):
    opt_preds = np.zeros(len(train))
    for i in range(len(weights)):
        opt_preds += oof_preds[i, :, idx] * weights[i]
    score = mean_squared_error(train[col], opt_preds, squared=False)
    return score

opt_oof_preds = np.zeros((len(train), len(target_cols)))
opt_sub_preds = np.zeros((len(test), len(target_cols)))
weight_df = []
for idx, col in enumerate(target_cols):
    func = lambda x: optimize_func(x, idx, col)
    weights = [1 / len(oof_preds)] * len(oof_preds)
    result = minimize(func, weights, method="Nelder-Mead")
    weight_df.append(result.x)
    for i, weight in enumerate(result.x):
        opt_oof_preds[:, idx] += oof_preds[i, :, idx] * weight
        opt_sub_preds[:, idx] += sub_preds[i, :, idx] * weight
</code></pre>
<h3>Medium Impact Idea</h3>
<p><strong>Ensemble of multiple BERT models</strong></p>
<p>The following model was used:</p>
<ul>
<li>deberta-v3-base</li>
<li>deberta-v3-large</li>
<li>deberta-large</li>
<li>muppet-roberta-large</li>
<li>roberta-base</li>
<li>roberta-large</li>
<li>funnel-transformer-large</li>
<li>funnel-transformer-xlarge</li>
<li>deberta-v2-xlarge</li>
</ul>
<p>The learning parameters for all models are as follows (a minimal setup sketch is shown after the list):</p>
<ul>
<li>max_length = 512</li>
<li>criterion = nn.SmoothL1Loss(beta=1., reduction='mean')</li>
<li>learning_rate = 1e-5</li>
<li>optimizer = AdamW(betas=(0.9, 0.98), weight_decay=2e-5)</li>
<li>scheduler = get_linear_schedule_with_warmup</li>
<li>num_warmup_steps_rate = 0.01</li>
<li>clip_grad_norm = 1000</li>
<li>all dropout = 0</li>
</ul>
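<p>Here is the setup sketch; the backbone choice, the placeholder step count and the training-loop details are assumptions, while the hyperparameters follow the list above.</p>
<pre><code>import torch
import torch.nn as nn
from torch.optim import AdamW
from transformers import AutoModel, get_linear_schedule_with_warmup

model = AutoModel.from_pretrained("microsoft/deberta-v3-large")  # backbone only, head omitted
num_train_steps = 1000            # e.g. len(train_loader) * num_epochs

criterion = nn.SmoothL1Loss(beta=1.0, reduction="mean")
optimizer = AdamW(model.parameters(), lr=1e-5, betas=(0.9, 0.98), weight_decay=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(num_train_steps * 0.01),   # num_warmup_steps_rate = 0.01
    num_training_steps=num_train_steps,
)
# inside the training loop, gradients are clipped with clip_grad_norm = 1000:
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1000)
</code></pre>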
<h1>What Didn't Work</h1>
<ul>
<li>Stacking with GBDT</li>
</ul>
<h3>Methods that might have improved accuracy if I had worked on them</h3>
<ul>
<li>Adversarial Weight Perturbation(AWP)</li>
<li>Layerwise Learning Rate(LLR)</li>
<li>Support Vector Regression(SVR) trained using embedding vectors of pre-trained models</li>
</ul>
<p>etc.</p>
<h1>Important Citations:</h1>
<ul>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">FB3 / Deberta-v3-base baseline [train]</a> by <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a></li>
<li><a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">2nd place solution</a> in CommonLit competition by <a href="https://www.kaggle.com/takoihiraokazu" target="_blank">@takoihiraokazu</a></li>
</ul>
<p>Thank you for your comment!<br>
Empirically, the Nelder-Mead method can optimize faster and more accurately than optuna for weighted averaging of predictions in regression tasks.</p> |
Feedback Prize - English Language Learning | 21th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Overview</h1>
<ul>
<li>Weighted average ensemble of 21 models.</li>
<li>Some model used a weighted average of the original prediction and the prediction trained by SVR or Ridge regression of the embedding. (Whether to use SVR or Ridge or neither was based on the pattern that would give the best CV when incorporated into the ensemble)</li>
<li>cv=0.4407/PublicLB=0.434031/PrivateLB=0.434983<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F4c3b2f830c5f99822cfe532cf5cf81b7%2Foverview.png?generation=1669903792363382&alt=media" alt=""></li>
</ul>
<h1>What worked</h1>
<ul>
<li>Special tokens for each indicator (Multi Token)<ul>
<li>A token for each indicator ("[REG_COH]", "[REG_SYN]", etc.) was added at the beginning of the text and used instead of the CLS token (a tokenizer-side sketch is shown after this list).</li>
<li>Depending on the backbone, DeBERTa-v3-large performed best this way (CV=0.4478)</li></ul></li>
<li>Weighted average with SVR or Ridge regression on a per single model<ul>
<li>Depends on the model, but can easily be improved by about 0.025 if effective</li></ul></li>
<li>AWP<ul>
<li>Accuracy is about 0.01 to 0.02 better on its own when used</li>
<li>However, when incorporated into ensembles, models without AWP often improved performance, so it was ultimately applied to only some models</li></ul></li>
<li>Classification Task Model<ul>
<li>The stand-alone performance was not that good (cv=0.4562), but it was effective for the ensemble.</li></ul></li>
<li>Pseudo Labeling<ul>
<li>Basically all models improved performance (the most effective model improved by about 0.005, cv=0.445)</li>
<li>However, PublicLB lost a lot of accuracy and only one pseudo-label model was included in the final sub</li>
<li>But this was a huge mistake, and the biggest factor in my shakedown. (See <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369401" target="_blank">this discussion</a> for more details.)</li>
<li>[Reference] Performance of the ensemble model with Pseudo-Label: cv=0.4410/PublicLB=0.436303/PrivateLB=0.433781</li></ul></li>
<li>LLRD</li>
</ul>
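<p>Here is the tokenizer-side sketch of the Multi Token idea. Only "[REG_COH]" and "[REG_SYN]" are named in the write-up; the other four token strings and all remaining details are guesses for illustration.</p>
<pre><code>from transformers import AutoTokenizer

REG_TOKENS = ["[REG_COH]", "[REG_SYN]", "[REG_VOC]", "[REG_PHR]", "[REG_GRA]", "[REG_CON]"]

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
tokenizer.add_special_tokens({"additional_special_tokens": REG_TOKENS})
# model.resize_token_embeddings(len(tokenizer))  # required after adding tokens

def encode(text, max_len=512):
    # prepend one regression token per target; note that many tokenizers still
    # insert [CLS] first, so the head must read whichever positions these land in
    return tokenizer(" ".join(REG_TOKENS) + " " + text,
                     truncation=True, max_length=max_len)
</code></pre>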
<h1>What Didn't Work</h1>
<ul>
<li>Last Layer Re-initialization</li>
<li>Post Process<ul>
<li>I tried to correct the final predictions by multiplying them by coefficients obtained with Nelder-Mead for each score range, but the LB worsened, so I did not adopt it.</li></ul></li>
</ul>
<h1>Important Citations:</h1>
<ul>
<li><a href="https://www.kaggle.com/code/abhishek/tez-fb3-beating-the-benchmark-inference" target="_blank">tez: fb3 beating the benchmark inference</a></li>
</ul>
<h1>Thanks and Acknowledgements:</h1>
<p>Thanks to competition organizers for hosting this competition.<br>
And a big thank you to all the participants who shared their wonderful notebooks and discussions.<br>
I was a beginner who first started doing NLP with FB2 about 6 months ago, but thanks to the easy to understand competition design and many useful discussions, I now know a little more about NLP.<br>
I sincerely hope that you will hold such a competition again. Thank you again.</p>
<p>For each token, we defined a fully-connected layer with an output size of 1 and concatenated them to compute the loss.<br>
I think it is difficult to understand in words, so a simplified pseudo code is shown below.</p>
<pre><code># Simplified pseudo code; class/argument names are illustrative.
class MultiTokenModel(nn.Module):
    def __init__(self, model_name, config):
        super().__init__()
        self.transformer = AutoModel.from_pretrained(model_name, config=config)
        self.output_cohesion = nn.Linear(config.hidden_size, 1)
        self.output_syntax = nn.Linear(config.hidden_size, 1)
        self.output_vocabulary = nn.Linear(config.hidden_size, 1)
        self.output_phraseology = nn.Linear(config.hidden_size, 1)
        self.output_grammar = nn.Linear(config.hidden_size, 1)
        self.output_conventions = nn.Linear(config.hidden_size, 1)
        self.loss = nn.SmoothL1Loss()  # assumed; the write-up does not name the criterion

    def forward(self, ids, mask, targets):
        transformer_out = self.transformer(input_ids=ids, attention_mask=mask)
        sequence_output = transformer_out.last_hidden_state
        # the six [REG_*] tokens are assumed to occupy the first six positions
        logits_cohesion = self.output_cohesion(sequence_output[:, 0, :])
        logits_syntax = self.output_syntax(sequence_output[:, 1, :])
        logits_vocabulary = self.output_vocabulary(sequence_output[:, 2, :])
        logits_phraseology = self.output_phraseology(sequence_output[:, 3, :])
        logits_grammar = self.output_grammar(sequence_output[:, 4, :])
        logits_conventions = self.output_conventions(sequence_output[:, 5, :])
        logits = torch.cat([logits_cohesion, logits_syntax, logits_vocabulary,
                            logits_phraseology, logits_grammar, logits_conventions], axis=1)
        loss = self.loss(logits, targets)
        return logits, loss
</code></pre> |
Feedback Prize - English Language Learning | 221st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all competitors! Here is a summary of my solution.</p>
<h1>Overview</h1>
<ul>
<li><p>My model is ensemble of the two published notebooks and my own model:</p>
<ul>
<li>model1: <a href="https://www.kaggle.com/code/jingwora1/fb3-deberta-family-inference-weight-tune" target="_blank">https://www.kaggle.com/code/jingwora1/fb3-deberta-family-inference-weight-tune</a> (by <a href="https://www.kaggle.com/jingwora1" target="_blank">@jingwora1</a>)</li>
<li>model2: <a href="https://www.kaggle.com/code/quangphm/lb-0-43-simple-ensemble-deberta-base-svr" target="_blank">https://www.kaggle.com/code/quangphm/lb-0-43-simple-ensemble-deberta-base-svr</a> (by <a href="https://www.kaggle.com/quangphm" target="_blank">@quangphm</a>)<ul>
<li>Note: The Fine-tuned DeBERTa models are excluded. The SVR side only uses deberta_large_mnli, roberta_base, roberta_large, xlnet_base and xlnet_large.</li></ul></li>
<li>model3: DeBERTa-v3-base/large (4 fold, seed=2024, Layer-wise Decreasing Layer Rate)</li></ul></li>
<li><p>Code: <a href="https://www.kaggle.com/code/fx6300/ensemble-deberta-and-svr/notebook?scriptVersionId=112096434" target="_blank">https://www.kaggle.com/code/fx6300/ensemble-deberta-and-svr/notebook?scriptVersionId=112096434</a></p></li>
</ul>
<h1>Ensemble Weighting Strategy</h1>
<ul>
<li><p>I didn't employ any optimization algorithms (e.g., Nelder-Mead) since I didn't have a good validation set, and these kinds of algorithms work well on CV but don't always work on the private LB.</p></li>
<li><p>Ensemble weights are:</p>
<ul>
<li>model1: 0.8</li>
<li>model2: 0.1</li>
<li>model3: 0.1</li></ul></li>
</ul>
<h1>What Worked</h1>
<ul>
<li>Layer-wise Decreasing Layer Rate<ul>
<li>During this competition, I was highly influenced by the paper titled "How to fine-tune BERT for text classification?" [0]. This paper also reports the effectiveness of Layer-wise Decreasing Layer Rate (see Section 5.3.4); a small sketch is shown after this list.</li></ul></li>
</ul>
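<p>A common way to implement a layer-wise decreasing learning rate is to build per-layer parameter groups, as in the sketch below; the base LR and decay factor are illustrative, since the write-up does not give exact values.</p>
<pre><code>import torch

def llrd_param_groups(backbone, base_lr=2e-5, decay=0.9):
    layers = list(backbone.encoder.layer)
    groups = [{"params": backbone.embeddings.parameters(),
               "lr": base_lr * decay ** len(layers)}]
    for i, layer in enumerate(layers):
        # the top layer keeps base_lr; each layer below is multiplied by `decay`
        groups.append({"params": layer.parameters(),
                       "lr": base_lr * decay ** (len(layers) - 1 - i)})
    return groups

# optimizer = torch.optim.AdamW(llrd_param_groups(backbone), lr=2e-5)
</code></pre>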
<h1>What Didn't Work</h1>
<ul>
<li>Metric Learning (Triplet Loss)<ul>
<li>I couldn't find a proper algorithm for applying metric learning on regression tasks</li></ul></li>
<li>Use flan-t5 as an encoder</li>
<li>Use t5 as an encoder</li>
<li>Further Pre-training<ul>
<li>The above mentioned paper [0] also reports that Further Pre-training is effective when doing it with in-domain corpus.</li>
<li>However, it didn't work well in my case.</li>
<li>If you are interested in further pre-training, check it: <a href="https://www.kaggle.com/code/fx6300/further-pre-train-deberta" target="_blank">https://www.kaggle.com/code/fx6300/further-pre-train-deberta</a></li></ul></li>
<li>Re-assign score when given target (e.g., coherence) has score x.5<ul>
<li>I thought that score x.5 targets are unreliable.</li>
<li>For example, If given coherence score was 4.5, this means annotator A scored this as 5.0 and annotator B scored this as 4.0. </li>
<li>I thought uncontroversial instances don't cause this kind of discrepancy.</li></ul></li>
<li>Use sentence generation probability (i.e. LM score of GPT-2) as a feature of a grammar score estimator</li>
<li>Use Random Forest as a vocabulary score estimator</li>
</ul>
<h1>References</h1>
<ul>
<li>[0] Sun, Chi, et al. "How to fine-tune bert for text classification?." China national conference on Chinese computational linguistics. Springer, Cham, 2019.<ul>
<li>paper: <a href="https://arxiv.org/pdf/1905.05583.pdf" target="_blank">https://arxiv.org/pdf/1905.05583.pdf</a></li></ul></li>
</ul> |
Feedback Prize - English Language Learning | 22nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you very much. It was a fun competition. Good luck to each other!</p>
<p>First, I would like to thank the competition hosts for organizing this competition and also the participants.<br>
As a novice regarding NLP competitions and Kaggle, everyone's codes and discussions have been very helpful to me.<br>
Luckily, I was able to get my first medal.</p>
<h3>Overview</h3>
<p>I did a weighted average ensemble of "models trained on Feedback3 data" + "models trained on a mixture of Feedback1 pseudo-labeled data and Feedback3 data" + "models trained on Rapids SVR using my finetuned model embedding" + "models trained on Rapids SVR using huggingface pre-trained model embedding."</p>
<p>The CV strategy used MultiLabelStratifiedKFold with K = 5.<br>
Incidentally, my use of this MultiLabelStratifiedKFold was buggy according to Discussion (see: <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/368437)" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/368437)</a>, but the correlation between CV and LB was good.</p>
<ul>
<li><a href="https://www.kaggle.com/code/moritake04/private22nd-final-sub" target="_blank">My inference code</a></li>
</ul>
<h3>What Worked</h3>
<p>Several steps were taken to improve the model and will be explained.</p>
<h5>Step 1. Train a good single model [Medium Impact]</h5>
<p>First I tried to develop a good single model. After tuning by hand, the following configuration worked well.</p>
<ul>
<li>Epoch:<ul>
<li>base model: 4</li>
<li>large, xlarge model: 2</li></ul></li>
<li>Learning Rate:<ul>
<li>base model: encoder -> 1e-5, decoder -> 1e-4</li>
<li>large, xlarge model: encoder -> 1e-5, decoder -> 1e-5</li>
<li>Using Layerwise LR Decay</li></ul></li>
<li>Batch Size: 8</li>
<li>Initialize the last layer</li>
<li>Optimizer: AdamW (weight_decay = 1e-2)</li>
<li>Scheduler: cosine_schedule_with_warmup (warmup_ratio = 0.1, num_cycles = 0.5)</li>
<li>Criterion: SmoothL1Loss</li>
<li>Head: Mean Pooling</li>
<li>When using the xlarge model, freeze half of the layers</li>
<li>gradient_clip_val = 1.0</li>
<li>token length = 512 (and 4096 for ensemble, 512 was better)</li>
</ul>
<p>Many people have achieved Public 0.43 with a single model, and that was my goal, but I just couldn't achieve it, so I decided to rely on an ensemble.</p>
<p>The highest Local CV at this step was 0.4502 for Deberta-xlarge (it was not Deberta-v3 series)</p>
<h5>Step 2. Pseudo Labeling [High Impact]</h5>
<p>I trained the following models in the above configuration and Pseudo Labeled the train and test data for feedback prize1 with a weighted average ensemble of these models.</p>
<ul>
<li>deberta-v3-base (token_length = 512)</li>
<li>deberta-v3-base (token_length = 4096)</li>
<li>deberta-v3-large (token_length = 512)</li>
<li>deberta-v3-large (token_length = 4096)</li>
<li>deberta-large (token_length = 512)</li>
<li>deberta-xlarge (token_length = 512)</li>
<li>muppet-large (token_length = 512)</li>
</ul>
<p>I was careful to avoid leaks and labeled each of the 5 folds to generate a pseudo label. I also removed duplicates in feedback 1 data and feedback 3 data. It seems that preventing this leak was quite important in this competition.</p>
<p>Using the pretrained-model developed in step 1, I mixed the data from feedback 1 with the data from feedback 3 and fine-tuned it just for one epoch.</p>
<p>The highest Local CV at this step was 0.4472 for Deberta-xlarge. Here, the Public score for the single model reached 0.43, and furthermore, the Public Score for the ensemble model entered the bronze medal zone.</p>
<h5>Step 3. Rapids SVR [Medium Impact]</h5>
<p>First, I applied Rapids SVR to the embeddings of the model developed in step 2, and although the CV was not good, both CV and LB increased when ensembled with the previous models, so I adopted it.</p>
<p>Next, I applied the same approach to pretrained models available on huggingface. These models were also adopted since both CV and LB increased when ensembled with the previous models. At this stage, the ensemble model was able to enter the Public silver medal zone. A small sketch of the SVR-on-embeddings step is shown below.</p>
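<p>The SVR step itself is simple once embeddings are extracted: fit one regressor per target on the frozen embeddings. The sketch uses sklearn's SVR (cuml.svm.SVR from RAPIDS is the GPU drop-in used by the cited notebook); embedding extraction and hyperparameters are assumptions.</p>
<pre><code>import numpy as np
from sklearn.svm import SVR

def fit_svr_heads(train_emb, train_scores, C=1.0):
    # train_emb: (n_samples, hidden_size); train_scores: (n_samples, 6)
    return [SVR(C=C).fit(train_emb, train_scores[:, t])
            for t in range(train_scores.shape[1])]

def predict_svr_heads(svr_models, emb):
    return np.stack([m.predict(emb) for m in svr_models], axis=1)
</code></pre>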
<h5>Step 4. Ensemble [High Impact]</h5>
<p>The ensemble is a weighted average. The weights are optimized using the Nelder-Mead method. Since there were six targets in this case, I adjusted the weights for each target.</p>
<p>I also used the same weights for all 5 folds to prevent overfitting to the local CV.</p>
<p>The final ensemble was the following model.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Local CV Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta_v3_base_4096</td>
<td>0.4536</td>
</tr>
<tr>
<td>deberta_large_512</td>
<td>0.4513</td>
</tr>
<tr>
<td>muppet_large_512</td>
<td>0.4571</td>
</tr>
<tr>
<td>deberta_xlarge_512</td>
<td>0.4502</td>
</tr>
<tr>
<td>deberta_v3_base_512_pseudo</td>
<td>0.4489</td>
</tr>
<tr>
<td>deberta_v3_large_4096_pseudo</td>
<td>0.448</td>
</tr>
<tr>
<td>deberta_xlarge_512_pseudo</td>
<td>0.4472</td>
</tr>
<tr>
<td>deberta_v3_base_512_pseudo_svr</td>
<td>0.4529</td>
</tr>
<tr>
<td>deberta_v3_large_4096_pseudo_svr</td>
<td>0.4531</td>
</tr>
<tr>
<td>deberta_xlarge_512_pseudo_svr</td>
<td>0.4538</td>
</tr>
<tr>
<td>deberta_v3_large_4096_svr</td>
<td>0.4519</td>
</tr>
<tr>
<td>deberta_large_512_svr</td>
<td>0.4553</td>
</tr>
<tr>
<td>deberta_v3_base_4096_svr</td>
<td>0.4542</td>
</tr>
<tr>
<td>ensemble</td>
<td>0.4436</td>
</tr>
</tbody>
</table>
<p>In fact, I trained many more models, but these models were chosen as a result of manual and automatic selections.</p>
<p>Finally, I did an all data train using the procedure introduced here and took the average of the predictions of the 5 fold model and the all data train model as the final sub. I was able to choose the last sub without hesitation because Local CV and LB were quite correlated and Best Local CV = Best Public LB.</p>
<h3>What Didn't Work</h3>
<ul>
<li>MLM</li>
<li>AWP<ul>
<li>I think it is my problem because some people are doing well.</li></ul></li>
<li>Initialization of two or more layers</li>
<li>Stacking using predictions or embeddings + text metadata (number of misspellings, number of words, number of punctuation marks, etc.)</li>
<li>Add newline tokens</li>
<li>Heads such as Weighted Layer Pooling, Attention Pooling, 1dcnn, lstm</li>
</ul>
<h3>Important Citations</h3>
<ul>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference</a></li>
<li><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a></li>
</ul>
<h3>Thanks and Acknowledgements</h3>
<p>Again, thank you to the organizers and participants of this competition, and I will continue to work hard to become a kaggle expert and a master.</p> |
Feedback Prize - English Language Learning | 22nd Solution Summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I was expecting a large shakeup but I did not expect to jump that way!</p>
<p>Here is my part of the solution. I teamed late with Dieter with the hope that adding diversity would help. I also knew he would not be overfitting to public LB which made teaming risk free. Overfitting is always the danger when we have small data like this. The final shakeup proves it was a real danger unfortunately.</p>
<p>It turns out that Dieter model are quite diverse from mine, mostly because he used SVR on top of embeddings as part of his solution while I didn't.</p>
<p>I did not have much time for this comp given I was busy with other competitions outside Kaggle (MICCAI then NeurIPS), so I went for well-known techniques. My solution is mostly pseudo labeling + knowledge distillation. It was developed rather quickly, with a first 2-week phase 2 months ago where I ensembled models based on public LB score (don't do that!), then the last week of the comp where I used CV to guide my work. CV-based ensembling only beat ad hoc ensembling on the last day of the competition for us! It made the sub selection choice easy: our best CV was the best public LB and turned out to be the best private LB too.</p>
<p>Here is what I did on top of a standard baseline. </p>
<ul>
<li>I used variants of deberta model only.</li>
<li>In order to fix the label distribution imbalance between the FB1 data and the FB3 data, I rounded a reasonable model's predictions on FB1 to the FB3 label values. Then I weighted FB1 samples using the frequency ratio of their rounded predictions. With this sample weighting, pseudo labeling improved the CV score (a sketch of the idea follows this list).</li>
<li>I used the pseudo labels (predictions on FB1 data) for knowledge distillation. I ran two rounds of knowledge distillation. At each round I used the previous models (round 0 being models trained on FB3 data only) to predict on FB1 data and added this to the FB3 data with its original labels. Keeping the FB3 labels helps avoid confirmation bias.</li>
<li>models for first and second round of knowledge distillation have deberta architecture + a simple dense regression head</li>
<li>for first level models (trained on fb3 data only) I also used some models with ordinal regression (see my petfinder writeup for a definition)</li>
<li>Final ensemble is a blend of 4 models on my side plus one model from Dieter. I used Huber Regressor for an unknown reason to find the weights. I usually use Ridge Regression for that, not sure what I thought. I probably didn't sleep enough last night of the competition, LOL.</li>
</ul>
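<p>The FB1 sample-weighting trick reads, in code, roughly like the sketch below. This is my interpretation of "weight FB1 samples by the frequency ratio of their rounded predictions", not the author's exact formula.</p>
<pre><code>import numpy as np
import pandas as pd

def fb1_sample_weights(fb1_pred, fb3_labels):
    # snap FB1 predictions to the FB3 label grid (1.0, 1.5, ..., 5.0)
    rounded = np.clip(np.round(np.asarray(fb1_pred) * 2) / 2, 1.0, 5.0)
    fb3_freq = pd.Series(fb3_labels).value_counts(normalize=True)
    fb1_freq = pd.Series(rounded).value_counts(normalize=True)
    # up-weight scores that are rarer in FB1 than in FB3
    ratio = (fb3_freq / fb1_freq).fillna(1.0)
    return pd.Series(rounded).map(ratio).fillna(1.0).to_numpy()
</code></pre>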
<p>That is all I could do in the time I had. Dieter spent even less time than me on this competition. We are happy with the surprisingly good end result given we were around 700th on the public LB. Teaming was good too, as I would have been 36th alone. </p> |
Feedback Prize - English Language Learning | 26th Place solution - Record the first kaggle competition ,failed to gold (unlucky) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the host for an interesting competition and congratulations to all the winners!<br>
Since I am Chinese and not very good at English, there may be some inappropriate words, please bear with me</p>
<h1><strong>Overview</strong></h1>
<p>The experiment is divided into two parts: one is the debugging of the basic models (adding various tricks), and the other is to distill the knowledge of the better ensemble model and then continue training and fine-tuning on the original data.</p>
<h1><strong>Base model</strong></h1>
<table>
<thead>
<tr>
<th>Experiment Number</th>
<th>model type</th>
<th>max_length</th>
<th>fold</th>
<th>cv score</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>deberta-v3-base</td>
<td>768</td>
<td>10</td>
<td>0.4521</td>
</tr>
<tr>
<td>2</td>
<td>deberta-v3-base</td>
<td>768</td>
<td>5</td>
<td>0.4519</td>
</tr>
<tr>
<td>5</td>
<td>deberta-large</td>
<td>512</td>
<td>5</td>
<td>0.4516</td>
</tr>
<tr>
<td>6</td>
<td>deberta-xlarge</td>
<td>512</td>
<td>5</td>
<td>0.4536</td>
</tr>
<tr>
<td>10</td>
<td>deberta-v3-base</td>
<td>512</td>
<td>5</td>
<td>0.4502</td>
</tr>
<tr>
<td>13</td>
<td>deberta-v3-large</td>
<td>512</td>
<td>5</td>
<td>0.4512</td>
</tr>
<tr>
<td>15</td>
<td>deberta-v3-large</td>
<td>1024</td>
<td>5</td>
<td>0.4506</td>
</tr>
<tr>
<td>17</td>
<td>deberta-v2-xlarge</td>
<td>1024</td>
<td>5</td>
<td>0.4546</td>
</tr>
</tbody>
</table>
<h2><strong>What worked</strong></h2>
<p><strong>● [Worked] FGM</strong></p>
<p>○ Result: cv +0.001-0.002</p>
<p>○ Reasoning and context: I start the adversarial attack from epoch 0 with epsilon=0.25; too low or too high does not work well. (Adding FGM adversarial training means that your training time will roughly double.) A minimal FGM sketch is shown below.</p>
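<p>This is the widely shared embedding-perturbation recipe, with the epsilon value taken from the description above and everything else illustrative rather than this team's exact code.</p>
<pre><code>import torch

class FGM:
    """Fast Gradient Method: perturb the word-embedding weights along the gradient."""
    def __init__(self, model, eps=0.25):
        self.model, self.eps, self.backup = model, eps, {}

    def attack(self, emb_name="word_embeddings"):
        for name, param in self.model.named_parameters():
            if param.requires_grad and emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.eps * param.grad / norm)

    def restore(self, emb_name="word_embeddings"):
        for name, param in self.model.named_parameters():
            if emb_name in name and name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# usage per batch: loss.backward(); fgm.attack(); adv_loss = criterion(model(x), y)
# adv_loss.backward(); fgm.restore(); optimizer.step(); optimizer.zero_grad()
</code></pre>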
<p><strong>● [Worked] layernorm</strong></p>
<p>○ Result: cv +0.0005-0.001</p>
<p>○ Reasoning and context: Adding a LayerNorm layer before the FC head also improves the results.</p>
<p><strong>● [Worked] Attention Head</strong></p>
<p>○ Result: cv +0.0005</p>
<p>○ Reasoning and context: Comparing [CLS], mean pooling and an attention head, there is basically no difference between the former two (mean pooling may be slightly better), and the attention head works best. A typical attention-pooling head is sketched below.</p>
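<p>An illustrative implementation (not the author's exact code):</p>
<pre><code>import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, last_hidden_state, attention_mask):
        scores = self.attention(last_hidden_state).squeeze(-1)      # (B, L)
        scores = scores.masked_fill(attention_mask == 0, -1e4)      # ignore padding
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)       # (B, L, 1)
        return (weights * last_hidden_state).sum(dim=1)             # (B, H)
</code></pre>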
<p><strong>● [Worked] Multi-sample Dropout</strong></p>
<p>○ Result: cv+0.0001</p>
<p>○ Reasoning and context: The effect of Multi-sample Dropout on the base is not very good, causing the training loss to oscillate, but the effect on the v3-large model is good, and it can prevent the large model from overfitting.</p>
<p><strong>● [Worked] LLRD</strong></p>
<p>○ Result: cv+0.001</p>
<p>○ Reasoning and context: I compared a per-layer decay rate of 0.8-0.9 with setting the learning rate by dividing the layers into blocks (grouping 3 or 4 layers works better); the latter brought me a greater CV improvement.</p>
<h2><strong>What Didn't Work</strong></h2>
<p><strong>● [Didn't Work] AWP</strong></p>
<p><strong>● [Didn't Work] other heads</strong> (max-mean pool, weighted pool, concatenating different layers, etc.)</p>
<p><strong>● [Didn't Work] MLM</strong> (pre-training does not work well on fb3 and fb2)</p>
<p><strong>The ensemble of the basic models uses the simplest optuna method to adjust the weights of the different models (at the time, I also thought about using a second-stage ensemble model after knowledge distillation, such as SVR, lasso, linear, GBDT, etc.)</strong></p>
<h1><strong>Knowledge distillation and continued fine-tuning:</strong></h1>
<p>● dataset: FB1 data after removing overlaps with FB3</p>
<p><strong>● stage 1</strong></p>
<p>○ Label the FB1 data with predictions from the previous ensemble model.</p>
<p>○ Use the training pipeline of the previous better models</p>
<p>○ Increase the number of epochs (I set 10)</p>
<p>○ Use FB3 as the validation set to save the best model</p>
<p><strong>● stage 2</strong></p>
<p>○ Continue to fine-tune each trained single model on the FB3 data (5-fold)</p>
<p>○ Low learning rate (5e-6)</p>
<table>
<thead>
<tr>
<th>Experiment Number</th>
<th>model type</th>
<th>max_length</th>
<th>cv score</th>
<th>pr score</th>
<th>lb score</th>
<th>head</th>
</tr>
</thead>
<tbody>
<tr>
<td>exp0050</td>
<td>deberta-large</td>
<td>512</td>
<td>0.4396</td>
<td>0.436596</td>
<td>0.440218</td>
<td>mean</td>
</tr>
<tr>
<td>exp0051</td>
<td>deberta-xlarge</td>
<td>512</td>
<td>0.439</td>
<td>0.436895</td>
<td>0.437808</td>
<td>mean</td>
</tr>
<tr>
<td>exp0058</td>
<td>deberta-xlarge</td>
<td>512</td>
<td>0.4384</td>
<td>0.436683</td>
<td>0.436752</td>
<td>attention</td>
</tr>
<tr>
<td>exp0065</td>
<td>deberta-v3-large</td>
<td>1024</td>
<td>0.4387</td>
<td>0.436483</td>
<td>0.43784</td>
<td>mean</td>
</tr>
<tr>
<td>exp0068</td>
<td>deberta-v3-large</td>
<td>1024</td>
<td>0.4385</td>
<td>0.436954</td>
<td>0.437924</td>
<td>attention</td>
</tr>
<tr>
<td>exp0071</td>
<td>deberta-v3-large-squad2</td>
<td>1024</td>
<td>0.4395</td>
<td>0.436845</td>
<td>0.439193</td>
<td>mean</td>
</tr>
<tr>
<td>exp0074</td>
<td>deberta-xlarge-mnli</td>
<td>512</td>
<td>0.4394</td>
<td>0.436168</td>
<td>0.438751</td>
<td>mean</td>
</tr>
<tr>
<td>exp0072</td>
<td>deberta-xlarge-mnli</td>
<td>1024</td>
<td>0.4398</td>
<td>0.435758</td>
<td>0.438707</td>
<td>attention</td>
</tr>
</tbody>
</table>
<p>It's a pity that I didn't choose the models with the best private performance in the end. I only chose (58, 51, 65, 68) and was deceived by the LB; because the method itself has leaks, I didn't fully trust the CV, and instead chose cases where the CV was as low as possible and the LB performance was also good.<br>
For the final model ensemble I tried different stacking techniques (SVR, lasso, linear, GBDT) and finally chose optuna, which worked best, only adjusting the weights of the different models (I also find this strange; the conventional stacking techniques didn't work).</p>
<h1><strong>Important Citations:</strong></h1>
<p><a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/257844" target="_blank">CommonLiit Readability Prize 1st place solution</a><br>
<a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">FB3 / Deberta-v3-base baseline [train]</a></p>
<h1><strong>Team Members:</strong></h1>
<p><a href="https://www.kaggle.com/djchenggosolo" target="_blank">@DJcheng_gosolo</a><br>
<a href="https://www.kaggle.com/koukinn" target="_blank">@paradoxg</a><br>
<a href="https://www.kaggle.com/tiandaye" target="_blank">@Tian</a></p>
<p>thank you !</p> |
Feedback Prize - English Language Learning | 26th place solution [lucky shake up 1117th → 26th] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to all the participants and hosts<br>
I was lucky to win a silver medal with big shake up.</p>
<h1>Overview</h1>
<p>Final CV is 0.44582 (public/private 0.439025/0.435032) (deberta-v3-large (x2) ensemble)<br>
MLM on FB2 data (mask prob = 0.4)<br>
CV improved slightly but LB worsened. <br>
However, adding it to the ensemble seems to have boosted the score.</p>
<p>single model cv</p>
<ul>
<li>0.4490 (w/o MLM) (public/private 0.439145/0.436422)</li>
<li>0.4488 (w MLM) (public/private 0.442191/0.437079)</li>
</ul>
<p>In my case, LB did not correlate well with the CV improvement.<br>
LB did not change much when I changed from a single model to an ensemble model<br>
(This doesn't seem to be the case with the top solutions, I don't know why)<br>
I stopped working on the competition because LB did not improve. <br>
However, TrustCV seems to have been right.</p>
<h2>What Worked</h2>
<ul>
<li>[High Impact] Idea<ul>
<li>AWP</li>
<li>attention pooling (target column-wise); see the sketch after this list</li></ul></li>
<li>[Medium Impact] Idea<ul>
<li>layer wise learning decay</li></ul></li>
</ul>
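<p>A minimal sketch of what "attention pooling (target column-wise)" could look like, assuming one attention head and one linear head per target. The hidden size and other details are assumptions, not necessarily the exact head used here.</p>
<pre><code>import torch
import torch.nn as nn

class PerTargetAttentionPooling(nn.Module):
    def __init__(self, hidden_size=1024, n_targets=6):
        super().__init__()
        self.attn = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh(),
                          nn.Linear(hidden_size, 1))
            for _ in range(n_targets)
        ])
        self.heads = nn.ModuleList([nn.Linear(hidden_size, 1) for _ in range(n_targets)])

    def forward(self, last_hidden_state, attention_mask):
        mask = attention_mask.unsqueeze(-1)                     # (B, L, 1)
        outs = []
        for attn, head in zip(self.attn, self.heads):
            scores = attn(last_hidden_state)                    # (B, L, 1)
            scores = scores.masked_fill(mask == 0, -1e9)        # ignore padding tokens
            weights = torch.softmax(scores, dim=1)
            pooled = (weights * last_hidden_state).sum(dim=1)   # (B, H) per-target pooling
            outs.append(head(pooled))
        return torch.cat(outs, dim=1)                           # (B, 6)

# dummy usage
x = torch.randn(2, 128, 1024)
m = torch.ones(2, 128, dtype=torch.long)
print(PerTargetAttentionPooling()(x, m).shape)   # torch.Size([2, 6])
</code></pre>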
<h2>What Didn’t Work</h2>
<ul>
<li>smooth-l1</li>
<li>mask augmentation</li>
<li>LSTM (GRU) head</li>
</ul>
<p>Small tip:<br>
I repeated experiments with different seeds and only kept changes whose CV improvement held up.<br>
(This scheme is described in the FB2 1st place solution.)</p>
<p>Looking forward to fb4. Thank you.</p> |
Feedback Prize - English Language Learning | 28th Place Efficiency Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to Kaggle and the competition hosts for organizing this competition. </p>
<p>This was my first competition on Kaggle. I joined a month ago and couldn't try many things due to time and hardware limitations. I focused only on the efficiency track and selected my submissions for evaluation accordingly. In any case, I thoroughly enjoyed being part of this competition and learned a lot from the discussions and notebooks. Congratulations to all the winners. <br>
<br></p>
<h2>Overview</h2>
<p>My solution is an ensemble of two models (simple averaging of electra-small-discriminator and LGBM regressor models) with knowledge distillation, pseudo labeling (FB1 dataset) and extraction of new features (ngrams and other statistical features) for the LGBM regressor. The most significant improvement was achieved by adding predictions from the electra-small models as additional features to the LGBM regressor models. Since there was no time left to train a single model, I used only the 'syntax' and 'phraseology' models of electra-small for faster inference.</p>
<p>I would be very grateful if you try this solution on your own models and share the results here. Notebooks for <a href="https://www.kaggle.com/code/bediiaydogan/fb3-efficiency-28th-place-training" target="_blank">training</a> and <a href="https://www.kaggle.com/code/bediiaydogan/fb3-efficiency-28th-place-inference" target="_blank">inference</a> are publicly available.<br>
<br></p>
<h2>Models and Performances</h2>
<table>
<thead>
<tr>
<th>Version</th>
<th>Model</th>
<th>Training Data</th>
<th>Features</th>
<th>Private Scores</th>
</tr>
</thead>
<tbody>
<tr>
<td>v1</td>
<td>Electra-base-discriminator</td>
<td>FB3</td>
<td>text</td>
<td>0.449456</td>
</tr>
<tr>
<td>v2</td>
<td>Electra-small-discriminator</td>
<td>FB3</td>
<td>text</td>
<td>0.461457</td>
</tr>
<tr>
<td>v3</td>
<td>Electra-small-discriminator<br>(distilled from Model-v1)</td>
<td>FB3</td>
<td>text</td>
<td>0.456003</td>
</tr>
<tr>
<td>v4</td>
<td>LGBM Regressor</td>
<td>FB3</td>
<td>text-feats</td>
<td>0.494600</td>
</tr>
<tr>
<td>v5</td>
<td>LGBM Regressor</td>
<td>FB3 + FB1</td>
<td>text-feats</td>
<td>0.491990</td>
</tr>
<tr>
<td>v6</td>
<td>LGBM Regressor + Model-v1</td>
<td>FB3 + FB1</td>
<td>text-feats +<br>model-v1 predictions</td>
<td><strong>0.444570</strong></td>
</tr>
<tr>
<td>v7</td>
<td>LGBM Regressor + Model-v3</td>
<td>FB3 + FB1</td>
<td>text-feats +<br>model-v3 predictions</td>
<td>0.448876</td>
</tr>
<tr>
<td>v8 </td>
<td>LGBM Regressor + Model-v3<br>(syntax and phraseology models)</td>
<td>FB3 + FB1</td>
<td>text-feats +<br>model-v3 predictions</td>
<td><strong>0.452429</strong></td>
</tr>
</tbody>
</table>
<p><code>* text-feats: ngrams and statistical features extracted from full-text</code><br>
<br></p>
<h2>What Worked</h2>
<ul>
<li><p><strong>[High Impact] Predictions as Features and Ensemble</strong><br>
As mentioned above, adding transformer predictions as additional features to the LGBM regressor models and ensembling resulted in a 1% - 1.5% improvement in scores (see the sketch after this list). </p></li>
<li><p><strong>[High Impact] Knowledge Distillation</strong></p></li>
<li><p><strong>[Medium Impact] Pseudo Labeling</strong><br>
<br></p></li>
</ul>
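<p>A minimal sketch of the "predictions as features" idea: out-of-fold transformer predictions are appended to the hand-crafted text features before fitting one LightGBM regressor per target. The array names, shapes, and LightGBM parameters are illustrative assumptions, not the exact pipeline in the linked notebooks.</p>
<pre><code>import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
n = 3911
text_feats   = rng.normal(size=(n, 50))           # e.g. n-gram / statistical features
nn_oof_preds = rng.normal(3.0, 0.5, size=(n, 6))  # OOF predictions of the electra models
labels       = rng.normal(3.0, 0.5, size=(n, 6))

X = np.hstack([text_feats, nn_oof_preds])         # transformer preds become extra columns
models = []
for t in range(6):
    reg = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.03,
                            num_leaves=31, random_state=42)
    reg.fit(X, labels[:, t])                      # one regressor per target column
    models.append(reg)

preds = np.column_stack([m.predict(X) for m in models])
</code></pre>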
<h2>What Didn’t Work</h2>
<p>I would like to write a long list of "what didn't work", but I haven't had the opportunity to try many things in a one month period.<br>
<br></p>
<h2>Team Members</h2>
<p>Bedii A. Aydoğan <a href="https://www.kaggle.com/bediiaydogan" target="_blank">@bediiaydogan</a> </p> |
Feedback Prize - English Language Learning | 29th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the Feedback competition hosts and all the participants.<br>
I was able to test much of my knowledge from past competitions here.<br>
My solution is shown below. In hindsight, I could have tuned each method in more detail, and I hope to make the most of that lesson in the next competition.</p>
<h1>Overview</h1>
<ol>
<li>I trained <strong>six pre-trained models</strong> on the given train data. <strong>AWP</strong> was used in all of my training runs.<br>
<strong>six pre-trained models</strong>: deberta-v3-large, deberta-v3-base, deberta-v2-xlarge, deberta-large, bigbird-roberta-large, bigbird-roberta-base</li>
<li>The six trained models were used to <strong>Pseudo-Label</strong> the FB1 essay texts.</li>
<li>The Pseudo Labels were added to the given train data and the six models were trained again.</li>
<li>The predictions of the six models plus SVR were ensembled by weighted average.</li>
</ol>
<h1>What Worked</h1>
<ul>
<li><p><strong>AWP</strong> (a simplified sketch appears after this list)</p>
<ul>
<li>I have tried AWP several times in past competitions, but it always failed. This is the first time I have succeeded in improving my score with AWP.</li>
<li>The parameters worked well with adv_lr=2e-4, adv_eps=1e-3, following the discussion.<br>
cf. <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/349594#1932041" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/349594#1932041</a></li>
<li>CV improvement (single model): 0.0007</li></ul></li>
<li><p><strong>Pseudo Labeling</strong></p>
<ul>
<li>No leaks: I knew empirically from past competitions that if I am not careful with the folds, they will leak.</li>
<li>CV improvement (single model): 0.0003 ~ 0.0026</li></ul></li>
<li><p><strong>Diversity by training pattern</strong></p>
<ul>
<li>I created diversity by splitting the training into the following two patterns; ensembling them improved the score.<br>
1: train one epoch with Pseudo Labels, then train several epochs with given train data<br>
2: train several epochs with given train data and Pseudo Labels at the same time</li></ul></li>
<li><p><strong>Weighted average Ensemble</strong></p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Folds</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-large</td>
<td>5</td>
<td>0.4473~0.4495</td>
<td></td>
<td></td>
</tr>
<tr>
<td>deberta-v3-base</td>
<td>5</td>
<td>0.4512~0.4522</td>
<td></td>
<td></td>
</tr>
<tr>
<td>deberta-v2-xlarge</td>
<td>5</td>
<td>0.4475~0.4522</td>
<td></td>
<td></td>
</tr>
<tr>
<td>deberta-large</td>
<td>5</td>
<td>0.4510~0.4538</td>
<td></td>
<td></td>
</tr>
<tr>
<td>bigbird-roberta-large</td>
<td>5</td>
<td>0.4577</td>
<td></td>
<td></td>
</tr>
<tr>
<td>bigbird-roberta-base</td>
<td>5</td>
<td>0.4643</td>
<td></td>
<td></td>
</tr>
<tr>
<td>svr</td>
<td>5</td>
<td>0.4526</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Ensemble</td>
<td></td>
<td>0.4435</td>
<td>0.437183</td>
<td>0.435225</td>
</tr>
</tbody>
</table></li>
</ul>
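<p>For reference, a simplified sketch of AWP in the spirit of the commonly shared Kaggle implementation. The <code>adv_lr</code>/<code>adv_eps</code> defaults echo the values above; everything else is an illustrative assumption rather than the exact code used here.</p>
<pre><code>import torch

class AWP:
    """Perturb selected weights in the gradient direction, run a second
    forward/backward pass on the perturbed model, then restore the weights."""
    def __init__(self, model, adv_param="weight", adv_lr=2e-4, adv_eps=1e-3):
        self.model = model
        self.adv_param = adv_param
        self.adv_lr = adv_lr
        self.adv_eps = adv_eps
        self.backup = {}

    def perturb(self):
        for name, p in self.model.named_parameters():
            if p.requires_grad and p.grad is not None and self.adv_param in name:
                self.backup[name] = p.data.clone()
                grad_norm = torch.norm(p.grad)
                data_norm = torch.norm(p.data)
                if grad_norm != 0 and not torch.isnan(grad_norm):
                    r = self.adv_lr * p.grad / (grad_norm + 1e-12) * (data_norm + 1e-12)
                    bound = self.adv_eps * p.data.abs()   # keep the perturbation small
                    p.data = torch.min(torch.max(p.data + r, self.backup[name] - bound),
                                       self.backup[name] + bound)

    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

# typical use inside a training step, after the normal loss.backward():
#   awp.perturb()
#   adv_loss = criterion(model(batch), targets)
#   adv_loss.backward()
#   awp.restore()
#   optimizer.step(); optimizer.zero_grad()
</code></pre>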
<h1>What Didn’t Work</h1>
<ul>
<li><strong>Down sampling of Pseudo Labels</strong><ul>
<li>I considered some of the Pseudo Labels to be unreliable and removed those with large std (>0.44/3) of predictions for each model.<br>
However, the CV did not improve.</li>
<li>CV (Weighted average Ensemble): 0.4435 -> 0.4437</li></ul></li>
<li><strong>Efficiency Prize Measures</strong><ul>
<li>For the Efficiency Prize, I trained deberta-v3-small on all the given train data (single fold) and all the Pseudo Labels.</li>
<li>The inference time for the test data was 40 minutes, which was too long.<br>
I should have set <code>batch_size=1</code> and tried deberta-v3-xsmall. (Thanks to <a href="https://www.kaggle.com/tmhrkt" target="_blank">@tmhrkt</a> for the advice)</li>
<li><strong>Private Score: 0.438587</strong> (Not bad!)</li></ul></li>
</ul>
<p>This competition was a lot of fun, with the Efficiency Prize as before, and a commemorative gift for the top 50. Thank you so much, hosts!<br>
I have already started warming up for FB4😉</p>
<p>Here are the results we have tried so far.<br>
<code>batch_size=8,16</code> were 40 minutes.<br>
<code>batch_size=32</code> was 41 minutes.<br>
<code>batch_size=64</code> was 46 minutes.</p>
<p>And here are the result of the new trial.<br>
<code>batch_size=1</code> was 30 minutes.</p>
<p>I didn't realize that batch_size=1 was the fastest. (but still slow…)<br>
Thanks for letting me know!</p>
<p>Thank you! It was a fun competition. Let's both do our best in the next competition.</p> |
Feedback Prize - English Language Learning | 2nd solution (back-translation & rank-loss) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>There is a small number of overlapping doc ids, but it does not matter much; basically, for a safe CV you could remove them.<br>
I did not remove them for the FB2 pretrain, but for pseudo labeling with FB1 I removed the duplicate doc ids.</p>
<p>For deberta-v1 I tried 512 but did not use it in the end.</p>
<p>You can refer to my code <a href="https://www.kaggle.com/datasets/goldenlock/feedbacken" target="_blank">https://www.kaggle.com/datasets/goldenlock/feedbacken</a> , torch/loss.py FLAGS.rank_loss_rate</p>
<p>This was a fun competition; thanks to the organizer and all the people who shared here. </p>
<ul>
<li>CV <br>
I used Abhishek's CV strategy with 5 folds and seed 42.<br>
My chosen submission with the best PB is with <strong>pseudo training and Optuna tuning, CV 44494</strong>, PB 43363<br>
(but this <strong>CV is not accurate</strong> and over-optimized, since Optuna was tuned on the full train OOF; I should have used a held-out OOF split for the Optuna-based CV. Unfortunately I trusted LB here, and it happened to improve my LB but hurt PB a little, or it was just chance). <br>
My best PB submission is without pseudo training and without Optuna tuning (weights hand-tuned per model and per target based on CV), <strong>CV 44543</strong>, PB 433541.<br>
But after the competition I tested pseudo + hand-tuned weights and the result was 43380, so hand tuning is also not very stable and can overfit when you tune weights on the local OOF CV.<br>
TODO: try Ridge, lasso, hill climbing, Nelder-Mead, and negative model weights to improve the ensemble results.<br>
<strong>The best single model is CV around 449</strong> without pseudo training, using back-translation pretraining or feedback2 pretraining. <br>
<strong>The best single model with pseudo is CV 4469</strong>; for a late submission I found a better one with CV <strong>4456</strong> and PB <strong>434726</strong>, see the table below. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F42245%2Fd0f805045cca2060a9f0d146a97de7ee%2Fscore.png?generation=1669801234725714&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F42245%2F989409d2e8fd2082cce8c99dc9a49a07%2Fcv.jpg?generation=1669801251545911&alt=media" alt=""><br>
Note that in the image above, epoch n means the mean of the first (n + 1) models. </li>
</ul>
<p>Single model performance (5folds + 1 full train):</p>
<table>
<thead>
<tr>
<th>model</th>
<th>cv</th>
<th>lb</th>
<th>pb</th>
</tr>
</thead>
<tbody>
<tr>
<td>base(dev3-large maxlen1280)</td>
<td>4514</td>
<td>442013</td>
<td>438818</td>
</tr>
<tr>
<td>base+rank_loss</td>
<td>4505</td>
<td>438912</td>
<td>437175</td>
</tr>
<tr>
<td>base+rank_loss+trans-nl pretrain</td>
<td><strong>4498</strong></td>
<td>440225</td>
<td><strong>435711</strong></td>
</tr>
<tr>
<td>base+rank_loss+feedback2 pretrain</td>
<td><strong>4488</strong></td>
<td><strong>438583</strong></td>
<td><strong>435814</strong></td>
</tr>
<tr>
<td>base+feedback1 pseudo train only</td>
<td><strong>4469</strong></td>
<td><strong>438601</strong></td>
<td><strong>436144</strong></td>
</tr>
<tr>
<td>base+rank_loss+feedback1 pseudo pretrain</td>
<td><strong>4497</strong></td>
<td>440576</td>
<td><strong>435144</strong></td>
</tr>
<tr>
<td>base+feedback1 pseudo&feedback3 train</td>
<td><strong>4468</strong></td>
<td><strong>438063</strong></td>
<td><strong>434809</strong></td>
</tr>
<tr>
<td>base+rank_loss+feedback1 pseudo&feedback3 train</td>
<td><strong>4456</strong></td>
<td><strong>438084</strong></td>
<td><strong>434726</strong></td>
</tr>
</tbody>
</table>
<p>Models with pretraining, pseudo labels, or back-translation (<strong>models with more background knowledge</strong>) work better on both CV/PB, and some of them on CV/LB/PB. These models also tend to have better CV performance on vocabulary.<br>
Rank loss always helps.</p>
<ul>
<li><p>Backbones<br>
deberta-v3-large (lr 2e-5)<br>
deberta-v3-base (lr 5e-5)<br>
Only these two worked; all others were much worse than deberta-v3-large.<br>
TODO: I should have added an <strong>SVR ensemble member</strong>.<br>
A single SVR model reaches a CV around 450, so it is good enough to ensemble, and it could introduce more knowledge from non-deberta backbones.<br>
I did not add it because I returned to this competition late and could not make it safe to add at inference time (there were some OOM issues). Local CV seems to improve from about 445146 to 44496 without tuning the weight for each target, using this one <a href="https://www.kaggle.com/code/quangphm/lb-0-43-simple-ensemble-deberta-base-svr" target="_blank">https://www.kaggle.com/code/quangphm/lb-0-43-simple-ensemble-deberta-base-svr</a>.</p></li>
<li><p>Train<br>
Text max_len 1280, batch size 8, 4 epochs, linear LR decay, AWP training for the last 2 epochs. <br>
For the back-translation pretrain, only 2 epochs of training without AWP. </p></li>
<li><p>Models<br>
I submitted 23 models x 2: 23 full-train models and 23 sets of fold (0-4) models. Inference cost 2h20min using 2 x T4. </p></li>
<li><p>What works</p></li>
</ul>
<ol>
<li><strong>Let bert deal with the target relation</strong><br>
"[CLS] cohesion syntax vocabulary phraseology grammar conventions text [SEP]“ <br>
Each target output emb with 1 unique fc added to predict target value. </li>
<li><strong>Add pearson loss</strong><br>
We could see that BERT tends to <strong>predict all targets with similar values</strong>. In order to predict the <strong>hard targets</strong> well, I found that adding a pearson loss helps a lot (a sketch follows this list). But it hurts performance on the easy targets, and I did not find a good way to deal with that, so I ensembled models trained with pearson loss rate 0.1 and 0. <br>
Also, rank_loss_rate = 0.1 improves single model CV from 4514 to 4505. The plot below shows that adding the rank loss (names with -crank) helps a lot for hard targets (score/max is calculated for the target index np.argmax(label - label.mean())). There is still room for improvement here; maybe we can ensemble better using some specific features and doing regression on the OOF. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F42245%2Fd83e3ae6f0a84eaaecff6bf1db7d6ccc%2Fminmax.png?generation=1669801062373059&alt=media" alt=""></li>
<li><strong>Use back translation for pretrain</strong><br>
I used a lot of back translation for ensembling. They are [nl, fr, de, pt, af, cn, ru, fi, sv, ja, ko, el, hr, cy] <br>
Note that the back-translation data is only used for pretraining, so training has 2 steps; the second step is the same as for the other models.<br>
No back translation models: <br>
('score', 0.44672287515754533),<br>
('score/vocabulary', 0.4109792609030708),<br>
Add back translation models, with weight=np.array([1,1,10,1,0.25,0.25], float) * 0.8:<br>
('score', 0.4453322618856002),<br>
('score/vocabulary', 0.4086610628919421),<br>
Back translation models only:<br>
('score', 0.44640808445875324),<br>
('score/vocabulary', 0.408889105739513),<br>
Vocabulary improves a lot! Actually, vocabulary improves for all back-translation models, so back-translation helps the model learn more vocabulary information.<br>
Single model (see the image above, name .trans-nl) CV improves a lot, from 4505 to 4497, with a PB gain as well.</li>
<li><strong>Use feedback2 data for pretrain</strong><br>
Just train a feedback2 listwise model (removing any discourse type info) as the pretrained model. This makes the model more stable and helps especially for some targets like conventions and vocabulary. <br>
TODO: I did not use an ensemble or pseudo labels for the feedback2 pretrain, simply a single full-train model on feedback2 data only, so there might be room for improvement here. <br>
Single model (see the image above, name .ft2) CV improves a lot, from 4505 to 4488, and LB/PB also improve.<br>
Why use feedback2? I did not investigate, but I think discourse effectiveness is related to our 6 targets.</li>
<li><strong>Pseudo</strong><br>
Pseudo labeling works: pseudo + Optuna is much better than no pseudo + Optuna on both LB and CV, and PB 433811->43363. <br>
Pseudo labeling also helped produce the best PB (435144) single model. <br>
TODO: I did not try the (mean of train PL / actual labels) trick mentioned in the <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369457" target="_blank">1st solution</a>; it might give a boost.</li>
<li><strong>Tune for each target</strong><br>
For example, back translation helps a lot for vocabulary but hurts conventions, as shown in the CV image above. </li>
</ol>
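<p>A minimal sketch of adding a Pearson-correlation (rank) term to the regression loss, computed per essay across the six target columns so the model learns their relative ordering. The exact formulation and the SmoothL1 base loss are assumptions; see the linked training code (<code>torch/loss.py</code>, <code>FLAGS.rank_loss_rate</code>) for the actual implementation.</p>
<pre><code>import torch
import torch.nn as nn

def pearson_loss(preds, labels, eps=1e-8):
    # correlation across the 6 target columns, computed per sample (row-wise)
    pc = preds - preds.mean(dim=1, keepdim=True)
    lc = labels - labels.mean(dim=1, keepdim=True)
    corr = (pc * lc).sum(dim=1) / (pc.norm(dim=1) * lc.norm(dim=1) + eps)
    return (1.0 - corr).mean()

def total_loss(preds, labels, rank_loss_rate=0.1):
    # base regression loss plus the weighted correlation term
    return nn.SmoothL1Loss()(preds, labels) + rank_loss_rate * pearson_loss(preds, labels)

# dummy check with a batch of 4 essays and 6 targets
preds = torch.randn(4, 6, requires_grad=True)
labels = torch.rand(4, 6) * 4 + 1
total_loss(preds, labels).backward()
</code></pre>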
<ul>
<li>Code <br>
<a href="https://www.kaggle.com/code/goldenlock/best-pb-online-1127?scriptVersionId=112354741" target="_blank">https://www.kaggle.com/code/goldenlock/best-pb-online-1127?scriptVersionId=112354741</a> (ensemble infer, best personal PB)<br>
<a href="https://www.kaggle.com/code/goldenlock/pb-433630-pseudo-optuna-tune" target="_blank">https://www.kaggle.com/code/goldenlock/pb-433630-pseudo-optuna-tune</a> (ensemble infer, selected PB)<br>
<a href="https://www.kaggle.com/goldenlock/single-model-infer" target="_blank">https://www.kaggle.com/goldenlock/single-model-infer</a> (single model infer, 5 fold + 1 full train=6 models, 20min)<br>
<a href="https://www.kaggle.com/datasets/goldenlock/feedbacken" target="_blank">https://www.kaggle.com/datasets/goldenlock/feedbacken</a> (train)<br>
<a href="https://github.com/chenghuige/Feedback-Prize---English-Language-Learning" target="_blank">https://github.com/chenghuige/Feedback-Prize---English-Language-Learning</a> (full code)</li>
</ul>
<p>Use googletrans to translate the train text to, e.g., Chinese and translate it back to English, then use the new text as pretraining data:</p>
<pre><code>from googletrans import Translator
trans = Translator()
# round trip, e.g. trans.translate(text, dest='zh-cn').text, then back with dest='en'
</code></pre>
<p>Thanks! <a href="https://www.kaggle.com/xianghuang666" target="_blank">@xianghuang666</a> </p>
<p>Best choosen one with pseudo and best score not, I explain more in my post, I think pseudo should work but not improve so much.</p>
<p>Thanks <a href="https://www.kaggle.com/clyde7470" target="_blank">@clyde7470</a> , updated my post, 4467 to 44533.</p>
<p>I used the googletrans Python API.</p>
<p>I chose by the best CV (over-optimized by Optuna tuning on the whole OOF), and they were also the best on LB. </p>
<p>Thanks <a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a>, congratulations to you as well! Interesting competition; if I had added SVR to my ensemble maybe I could have been seen on the LB😃, just wondering.</p>
<p>For this competition, as the LB dataset is small, you should trust your CV. You can see from the single-model results table in my post that CV and PB are more aligned than CV and LB. <br>
One important thing is that you have to make sure your CV is correct; first, no leaks, e.g. when using pseudo labels.<br>
Another thing is that you should take care not to overfit your CV. For example, I got CV 4494 using Optuna to tune model weights on the whole OOF, so my CV was over-optimized; a better way is to run Optuna on OOF folds 1-3 and test on fold 4, which avoids overfitting to the CV.<br>
Also, if you manually tune weights on the whole OOF you will face the same problem. </p>
<p>For 1), yes, I used all these languages, actually more, but some other languages did not improve my ensemble results so they were not included in the end. I translate and translate back for each language to get a new train dataset: for nl, for example, I get the nl-dataset, train on the nl-dataset only to get the nl-model, and then train on the original train dataset starting from the pretrained nl-model. <br>
I tried mixing different language datasets in the first step, but it did not seem to work well.<br>
For 2), the nl-model itself does not perform well, but it works well as a pretrained model. I used wandb to log the CV score for each target and found that with back-translation some targets improve a lot (vocabulary) and some others get much worse (conventions).<br>
And this is a consistent pattern, not chance: I found that all languages lead to the same conclusion, helping vocabulary and hurting conventions.<br>
So I manually increased the model weights for vocabulary and decreased them for conventions, and it showed an increase on both CV and LB.</p>
Feedback Prize - English Language Learning | 37th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the host for an interesting competition and congratulations to all the winners!</p>
<p><strong>Overview</strong></p>
<ul>
<li>Our final submission is an ensemble of 13 models. The model weights are determined using Nelder-Mead.</li>
<li>CV 0.44384 / Public LB 0.435426 / Private LB 0.435417</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7393510%2F297a629de174ed1245d6fb524837a9ab%2F324.png?generation=1670742662019516&alt=media" alt=""></p>
<p><strong>What Worked</strong></p>
<ul>
<li><p>Ensemble including roberta and funnel<br>
- The best Private LB for the deberta-only ensemble was 0.435807.<br>
- The best Private LB across all our submissions was 0.435103.</p></li>
<li><p>Optimization by Nelder-Mead (a sketch follows this list)<br>
- Our best CV (0.44287) and best Private (0.435103) were ensembles with weights optimized by Nelder-Mead.<br>
- In the final submission, the best model is the ensemble with CV 0.44384 / Private 0.435417, optimized by Nelder-Mead.</p></li>
</ul>
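<p>A minimal sketch of the Nelder-Mead weight search on OOF predictions, one weight per model; the array names, the dummy data, and the optimizer options are illustrative assumptions, not the exact code used here.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
oof_preds = rng.normal(3.0, 0.5, size=(13, 3911, 6))   # 13 models x samples x 6 targets
labels    = rng.normal(3.0, 0.5, size=(3911, 6))

def mcrmse(weights):
    w = np.asarray(weights)
    blend = np.tensordot(w, oof_preds, axes=1)          # (n_samples, 6)
    return np.sqrt(((labels - blend) ** 2).mean(axis=0)).mean()  # competition metric

x0 = np.full(oof_preds.shape[0], 1.0 / oof_preds.shape[0])       # start from equal weights
res = minimize(mcrmse, x0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
best_weights = res.x
</code></pre>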
<p><strong>What Didn’t Work</strong></p>
<ul>
<li>adding LSTM to model</li>
<li>Embedding SVR with fine-tuned models</li>
<li>Different loss functions</li>
</ul>
<p><strong>Important ciations</strong></p>
<ul>
<li><a href="https://www.kaggle.com/code/kojimar/fb3-deberta-family-inference-9-28-updated" target="_blank">FB3 Deberta Family Inference</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">FB3 / Deberta-v3-base baseline</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference/notebook" target="_blank">FB3 / Deberta-v3-base baseline [inference]</a></li>
</ul>
<p><strong>Thanks and Acknowledgements:</strong><br>
Thanks to our hosts for hosting the competition. And thanks to the Kaggler's for sharing their helpful notebooks and discussions.Special thanks to my teammates for such a perfect teamwork</p>
<p><strong>Team Members:</strong></p>
<ul>
<li><a href="https://www.kaggle.com/rachisotaro" target="_blank">@rachisotaro</a> </li>
<li><a href="https://www.kaggle.com/shinuk" target="_blank">@shinuk</a> </li>
</ul> |
Feedback Prize - English Language Learning | 38th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the Kaggle team and the host for hosting this competition, and to my teammates. Thanks also to the Kagglers for sharing their helpful code and discussions. This was my second time participating in an NLP competition, after the previous FB2 competition, and I learned a lot.</p>
<h2>Overview</h2>
<p>My final submission is an ensemble of 15 models with weights optimized by Optuna. Each model is trained with different hyperparameters such as seed, num_folds, max_length, and so on.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5015548%2F2ef9b2f7ebf77e1e92394218afcf78a7%2Fsolution.png?generation=1670233325624196&alt=media" alt=""></p>
<h2>What Worked</h2>
<ul>
<li>SmoothL1Loss was the best in CV.</li>
<li>Using optuna to determine ensemble weights for each target.</li>
<li>Using various custom heads.<ul>
<li>mean pooling, LSTM, concat last 4 layers</li></ul></li>
<li>Adversarial Weight Perturbation</li>
<li>Using various models (not only Hugging Face models but also LightGBM).<ul>
<li>The single model is not strong, but it improved the ensemble score a little.</li>
<li>refs (lightgbm model): <a href="https://www.kaggle.com/code/dlaststark/fpe-no-fancy-stuff" target="_blank">https://www.kaggle.com/code/dlaststark/fpe-no-fancy-stuff</a></li></ul></li>
</ul>
<h2>What Didn’t Work</h2>
<ul>
<li>Different loss functions</li>
<li>mixout</li>
<li>Using text features as meta features when stacking models.</li>
</ul>
<p>These methods did not improve CV scores, but a few of them were still useful in the ensemble.</p>
<h2>Important Citations</h2>
<ul>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a><br>
<a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>. His notebook has taught me a lot.</li>
<li><a href="https://www.kaggle.com/code/electro/deberta-layerwiselr-lastlayerreinit-tensorflow" target="_blank">https://www.kaggle.com/code/electro/deberta-layerwiselr-lastlayerreinit-tensorflow</a></li>
<li><a href="https://www.kaggle.com/code/kojimar/fb3-single-pytorch-model-train/notebook" target="_blank">https://www.kaggle.com/code/kojimar/fb3-single-pytorch-model-train/notebook</a></li>
</ul>
<h2>Thanks and Acknowledgements</h2>
<p>Finally, I'd like to thank Kaggle and the host for hosting such an interesting competition, and everyone who participated in the discussions.<br>
Thank you and Congratulations to my team mates - <a href="https://www.kaggle.com/mitsuruueki" target="_blank">@mitsuruueki</a></p> |
Feedback Prize - English Language Learning | 3rd Place Solution - Congratulations New Competition Grandmaster Amed! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Marília!</p>
<p>Models like RoBERTa can only accept input with <code>max_len = 512</code> tokens. So to train with sequences of len 2048, we need to train and infer using a sliding window.</p>
<p>Yes basically. However with sliding window, we also pick a stride length. So if <code>max_len=512</code> and <code>doc_stride=256</code>, then you break it into 7 pieces, namely 0:512, 256:768, 512:1024, 768:1280, 1024:1536, 1280:1792, 1536:2048. The pieces overlap by <code>max_len minus doc_stride</code>.</p>
<h1>Congratulations Grandmaster Amed !</h1>
<p>Congratulations Amed ( <a href="https://www.kaggle.com/amedprof" target="_blank">@amedprof</a> ) for becoming Kaggle's newest Competition Grandmaster! Amed was our teams' superstar who designed most of our models. He truly earns the title Kaggle Competition Grandmaster!</p>
<h1>3rd Place Solution Summary</h1>
<p>Our solution is an ensemble of 24 models without using pseudo labels. Most models use Deberta-v3-large backbone and a variety of tricks. The ensemble was chosen using hill climbing. Teaming with Amed ( <a href="https://www.kaggle.com/amedprof" target="_blank">@amedprof</a> ) and CroDoc ( <a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a> ) was a pleasure. Thank you guys!</p>
<h1>Hill Climbing - CV 0.4420</h1>
<p>I love hill climbing because it can take lots of models and pick the best small subset of models (i.e. it's like Lasso regression). And it computes the ensemble model weights. </p>
<p>We start with the single model with the best CV score (for us that was 0.4470). Then we iterate through all other models and pick a second model which helps the most. Then we pick a third, etc., until new models do not help. Our ensemble without pseudo achieves CV 0.4420. We allowed hill climbing to use <strong>negative weights</strong>, which boosted CV <code>+0.0010</code> and private LB <code>+0.0060</code> versus using positive weights only. </p>
<pre><code>while not STOP:
    potential_new_best_cv_score = GET_BEST()
    for k in range( len(MODELS) ):
        for wgt in np.arange(-0.5, 0.51, 0.01):  # float steps need np.arange, not range
            potential_ensemble = (1-wgt) * current_best_ensemble + wgt * MODELS[k]
            cv_score = compute_metric( potential_ensemble )
            if cv_score < potential_new_best_cv_score:
                potential_new_best_cv_score = cv_score
                REMEMBER_THIS_MODEL(k,wgt)
    current_best_ensemble, STOP = UPDATE_BEST()
</code></pre>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Nov-2022/hill_climb.png" alt=""></p>
<h1>Models - CV 0.4470</h1>
<p>Each day, we train new diverse models. Then we run hill climbing to see if the new model gets chosen. We don't need to make a new model with a great CV score; we just need to make new models that are diverse. Below are the models, in the order in which hill climbing picked them from our 50 models. We observe that the best CV score models are not chosen first. Instead, hill climbing chooses <strong>diverse models</strong>.</p>
<p>The improvement in ensemble CV score is shown in the plot above. The table below displays single model CV scores. Our best single model without pseudo achieves CV 0.4470</p>
<table>
<thead>
<tr>
<th>order selected</th>
<th>backbone</th>
<th>cv score</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>deberta-v3-large</td>
<td>0.447</td>
<td>0.190</td>
</tr>
<tr>
<td>2</td>
<td>deberta-v3-large-squad2</td>
<td>0.4524</td>
<td>0.142</td>
</tr>
<tr>
<td>3</td>
<td>deberta-v3-large</td>
<td>0.4498</td>
<td>0.124</td>
</tr>
<tr>
<td>4</td>
<td>deberta-large-mnli</td>
<td>0.4548</td>
<td>0.068</td>
</tr>
<tr>
<td>5</td>
<td>deberta-v3-large</td>
<td>0.4492</td>
<td>0.133</td>
</tr>
<tr>
<td>6</td>
<td>xlm-roberta-large</td>
<td>0.4575</td>
<td>0.092</td>
</tr>
<tr>
<td>7</td>
<td>deberta-v3-large-squad2</td>
<td>0.457</td>
<td>-0.160</td>
</tr>
<tr>
<td>8</td>
<td>deberta-v3-large-squad2</td>
<td>0.4525</td>
<td>0.101</td>
</tr>
<tr>
<td>9</td>
<td>deberta-v3-large</td>
<td>0.4489</td>
<td>0.125</td>
</tr>
<tr>
<td>10</td>
<td>deberta-v3-large-squad2</td>
<td>0.4565</td>
<td>-0.124</td>
</tr>
<tr>
<td>11</td>
<td><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">RAPIDS-SVR</a></td>
<td>0.4526</td>
<td>0.083</td>
</tr>
<tr>
<td>12</td>
<td>deberta-v3-large</td>
<td>0.4495</td>
<td>0.102</td>
</tr>
<tr>
<td>13</td>
<td>deberta-v3-large</td>
<td>0.4502</td>
<td>0.081</td>
</tr>
<tr>
<td>14</td>
<td><a href="https://www.kaggle.com/code/electro/deberta-layerwiselr-lastlayerreinit-tensorflow" target="_blank">TF-deberta-v3-base</a></td>
<td>0.4554</td>
<td>0.061</td>
</tr>
<tr>
<td>15</td>
<td>deberta-v3-large</td>
<td>0.4527</td>
<td>-0.069</td>
</tr>
<tr>
<td>16</td>
<td>deberta-v3-large</td>
<td>0.4522</td>
<td>-0.058</td>
</tr>
<tr>
<td>17</td>
<td>deberta-v3-large</td>
<td>0.45</td>
<td>0.061</td>
</tr>
<tr>
<td>18</td>
<td>deberta-v3-large</td>
<td>0.4516</td>
<td>-0.058</td>
</tr>
<tr>
<td>19</td>
<td>deberta-v3-large</td>
<td>0.4509</td>
<td>0.071</td>
</tr>
<tr>
<td>20</td>
<td>deberta-v3-base</td>
<td>0.4575</td>
<td>-0.039</td>
</tr>
<tr>
<td>21</td>
<td>deberta-v3-large-squad2</td>
<td>0.4501</td>
<td>0.051</td>
</tr>
<tr>
<td>22</td>
<td>deberta-v3-large</td>
<td>0.4512</td>
<td>-0.049</td>
</tr>
<tr>
<td>23</td>
<td>roberta-large</td>
<td>0.4571</td>
<td>0.030</td>
</tr>
<tr>
<td>24</td>
<td>deberta-v3-large</td>
<td>0.45</td>
<td>0.040</td>
</tr>
</tbody>
</table>
<h1>What Worked - Tricks</h1>
<p>Our models use a diversity of regression heads, pooling techniques, learning schedules, and backbones. Some models used the following tricks:</p>
<ul>
<li>Different loss rates per target <code>{'cohesion':0.21, 'syntax':0.16, 'vocabulary':0.10, 'phraseology':0.16, 'grammar':0.21, 'conventions':0.16}</code> [High Impact] (see the sketch after this list)</li>
<li>Clip grad norm with <code>max_norm = 10</code> [High Impact]</li>
<li><code>hidden_dropout_prob = 0.0</code> and <code>attention_probs_dropout_prob = 0.0</code> [High Impact]</li>
<li>Stride window for non deberta models [Medium Impact] </li>
<li>Since Deberta tokenizer ignores "\n" we replaced "\n\n" with "|" [Medium Impact] </li>
<li>Last layer reinitialisation [Medium Impact] </li>
<li>2 stage pooling. First pool either words, sentences, or paragraphs. Then pool that result. [Medium Impact] </li>
<li>Train with <code>max_len=2048</code>, infer <code>max_len = 640</code> [High Impact] </li>
<li>Train with <code>batch_size = 1</code> [High Impact] </li>
<li>RAPIDS SVR using embeddings without train on comp data [Medium Impact] </li>
</ul>
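<p>One plausible implementation of the per-target loss rates above is to weight the column-wise losses before summing; the choice of SmoothL1 as the base criterion here is an assumption, not necessarily the criterion we used.</p>
<pre><code>import torch
import torch.nn as nn

# order: cohesion, syntax, vocabulary, phraseology, grammar, conventions
TARGET_WEIGHTS = torch.tensor([0.21, 0.16, 0.10, 0.16, 0.21, 0.16])

def weighted_loss(preds, labels):
    per_target = nn.SmoothL1Loss(reduction="none")(preds, labels).mean(dim=0)  # (6,)
    return (per_target * TARGET_WEIGHTS.to(preds.device)).sum()

# dummy check
preds  = torch.randn(8, 6, requires_grad=True)
labels = torch.rand(8, 6) * 4 + 1
weighted_loss(preds, labels).backward()
</code></pre>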
<h1>What Didn't Work - Pseudo Labels</h1>
<p>Our final submission <strong>does not</strong> use pseudo labels. We tried using pseudo labels during the competition. We were careful to avoid leaks, but nonetheless using pseudo labels from the Feedback Prize 1 competition boosted our best single model CV score from 0.4470 to 0.4370 and did not improve our LB score. There must be a leak somewhere, but we still haven't found it.</p>
<p>We noticed that the distribution of targets from FP1 is different than FP3. So care must to be taken to account for this difference in distribution. Below are histogram of pseudo labels on FP1 vs. pseudo labels on FP3. We observe that the old Feedback Prize 1 comp has greater target values than current Feedback Prize 3 comp:</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Nov-2022/shift.png" alt=""></p>
<h1>Team Members</h1>
<ul>
<li><a href="https://www.kaggle.com/amedprof" target="_blank">@amedprof</a></li>
<li><a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a></li>
<li><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> </li>
</ul>
<h1>Solution Code Published</h1>
<p>We published our training code on GitHub <a href="https://github.com/Amed1710/Feedback-Prize--English-Language-Learning" target="_blank">here</a> and we published our inference code on Kaggle <a href="https://www.kaggle.com/code/cdeotte/3rd-place-solution-lb-0-4337-cv-0-4420" target="_blank">here</a></p>
<p>IMO Models want the most train data possible. Training with <code>max_len=2048</code> allows the model to learn from all the provided train text. Once we have a smart model, it does not need to see the entire test essay to predict the targets. (And just like humans if you show it too much maybe it gets confused or overwhelmed). Hence during inference, it can use less. (i.e. Humans read the first paragraph and already have a good sense of the essay's final target scores).</p>
<p>We confirmed this on CV and LB. First train a model with 2048. Then create OOF (using the same model again and again) with different inference lengths like 512, 640, 768, 1024 etc. Compute CV score and submit to LB. Using 640 achieved best CV and LB.</p>
<p>We did not try to train one SVR from all our NLP transformer embeddings. Instead we trained a separate SVR and included it in our ensemble and it boosted CV LB. The 4th place solution <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369621" target="_blank">here</a> best single model was SVR</p>
<p><a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> Also note that Deberta uses "relative embeddings". Therefore if WORD_A in position 1500 relates to the word in position 1501, then the model learns that WORD_A needs to pay attention to the word in position +1. Therefore when we infer with length 640 and the model sees WORD_A in position 550, it will look at position 551. In other words, the learnings from training with max_len=2048 will generalize to inference at max_len=640. (I think this is how it works)</p>
<p><a href="https://www.kaggle.com/amed" target="_blank">@amed</a> can share the details. These are two of his tricks.</p>
<p>I think the way "2 x pooling" works is as follows. Amed, please correct me if I'm wrong. A transformer model outputs embeddings for each token, which are pieces of words. Most popular notebooks use mean pooling over all these token outputs. Therefore each token has equal importance.</p>
<p>When doing paragraph pooling, we first take the mean of all tokens in each paragraph separately. We now have a single embedding for each paragraph. Next we take the mean of these embeddings. This gives each paragraph equal weight. </p>
<p>This is different than normal mean pooling. With normal mean pooling, if one paragraph has twice as many tokens as another paragraph, then the first paragraph would be twice as important when taking average of tokens.</p>
<p>Amed, can you share how you discovered the different learning rates per target?</p>
<p>When the train data is small, more folds will achieve a better CV and LB. This is because 4 folds uses 75% of the train data to train a fold model, whereas 5 folds, 10 folds, and 25 folds use 80%, 90%, and 96% respectively.</p>
<p>When the training data is very large, like hundreds of thousands of samples, then even 75% is plenty. But when the train data only has a few thousand samples, every extra sample to train with makes a noticeable difference.</p>
<p>Validation is key. "Trust Your CV" is king. We picked all our final submissions based on CV score and ignored LB score because LB is the size of 1 fold, and train data CV is the size of 5 folds. A single fold is not as reliable as the full 5 folds CV score to estimate private LB score. </p>
<p>All our models were trained using the same 5-fold MultilabelStratifiedKFold split (seed 42) as seen in public notebooks. For the most part, improvements in CV resulted in improvements in LB. But the LB data was only the size of 1 of the 5 folds, and it behaved a little differently from the CV score at times.</p>
<p>Our final solution does not use MLM and I did not try MLM. Perhaps <a href="https://www.kaggle.com/amedprof" target="_blank">@amedprof</a> or <a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a> tried MLM.</p> |
Feedback Prize - English Language Learning | 42th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you for hosting this competition.<br>
I learned a lot about the NLP of this competition.<br>
Congrats to the winners!</p>
<h1>Overview</h1>
<ul>
<li>I use 11 models (4-fold models x 4 and 10-fold models x 7).<ul>
<li>First, I use LGBM (with Optuna) stacking on the 4-fold models.<br>
※ I didn't use the 10-fold models here because the experiment was not completed in time for the competition deadline.</li>
<li>Second, I did a weighted ensemble. </li></ul></li>
<li>cv=0.4426811/PublicLB=0.435475/PrivateLB=0.435906<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8119383%2Ff0b0cf8c5f4020e199bdfad6edd33ed8%2Fmy_submission.png?generation=1670509564450962&alt=media" alt=""></li>
<li>The 10-fold models are public models (<a href="https://www.kaggle.com/code/jingwora1/fb3-deberta-family-inference-weight-tune" target="_blank">link1</a>, <a href="https://www.kaggle.com/code/kojimar/fb3-single-pytorch-model-inference" target="_blank">link2</a>), so I share CV details only for the 4-fold models. </li>
</ul>
<table>
<thead>
<tr>
<th>model</th>
<th>CV score</th>
</tr>
</thead>
<tbody>
<tr>
<td>(*1,4,5)deberta-v3-base (pseudo label fb1,2)</td>
<td>0.4528044</td>
</tr>
<tr>
<td>(*4,5)deberta-v3-large</td>
<td>0.4568590</td>
</tr>
<tr>
<td>(*2,4,5)deberta-v3-base (topics)</td>
<td>0.4537212</td>
</tr>
<tr>
<td>(*3,4,5)deberta-v3-base (original loss)</td>
<td>0.4564600</td>
</tr>
</tbody>
</table>
<p>※ (*1,4,5) use 「Pseudo Labeling」, 「Concat last Four hidden」 and 「Dropout」.</p>
<h1>What Worked</h1>
<ul>
<li>Pseudo Labeling(*1)<ul>
<li>I use all data of fb1 and fb2.</li>
<li>I use MultilabelStratifiedKFold to fold datasets.</li></ul></li>
<li>Add topic(*2)<ul>
<li>I referred to <a href="https://www.kaggle.com/code/jdoesv/take2-feedback-essays-to-prompts" target="_blank">this code</a> and predicted the topic.</li>
<li>I add the predicted topic to the text (e.g. [1_students_online_school_classes]I think that~).</li></ul></li>
<li>Original loss (soft)(*3)<ul>
<li>Much of the data in this competition is concentrated around 3, so I changed the loss to be more robust to outlier values.</li>
<li>A sample is below.</li></ul></li>
</ul>
<pre><code>self.mse = nn.MSELoss(reduction='none')
l = self.mse(y_pred, y_true) + eps # eps=1e-9
loss = torch.mul(torch.pow(l,alpha),torch.sqrt(l))
# soft→alpha=1, hard→alpha=2
</code></pre>
<ul>
<li>Concat last four hidden(*4)<ul>
<li>The NLP model includes important information in its last four hidden layers (a sketch appears below, after the dropout sample).</li>
<li>I referred to <a href="https://www.kaggle.com/code/rhtsingh/utilizing-transformer-representations-efficiently" target="_blank">this code</a> </li></ul></li>
<li>Dropout(*5)<ul>
<li>Dropout can reduce overfitting.</li>
<li>A sample is below.</li></ul></li>
</ul>
<pre><code># multi-sample dropout: average the head output over five different dropout masks
self.dropouts = nn.ModuleList([nn.Dropout(0.2) for _ in range(5)])
output = sum([self.fc(dropout(feature)) for dropout in self.dropouts])/5
</code></pre>
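<p>A minimal sketch of the "concat last four hidden" pooling mentioned above; the hidden size and the masked mean pooling are illustrative assumptions, not the exact head used here.</p>
<pre><code>import torch
import torch.nn as nn

hidden_size, n_targets = 768, 6
head = nn.Linear(hidden_size * 4, n_targets)

def concat_last4_pool(hidden_states, attention_mask):
    # hidden_states: tuple/list of per-layer tensors, each (B, L, H)
    cat = torch.cat(list(hidden_states)[-4:], dim=-1)                 # (B, L, 4H)
    mask = attention_mask.unsqueeze(-1).float()
    pooled = (cat * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)  # masked mean pool
    return head(pooled)                                               # (B, 6)

# dummy layers standing in for backbone(..., output_hidden_states=True).hidden_states
dummy_layers = [torch.randn(2, 128, hidden_size) for _ in range(13)]
dummy_mask = torch.ones(2, 128)
print(concat_last4_pool(dummy_layers, dummy_mask).shape)   # torch.Size([2, 6])
</code></pre>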
<ul>
<li>LGBM<ul>
<li>I built 6 LGBM models to predict 「cohesion」, 「syntax」, 「vocabulary」, 「phraseology」, 「grammar」, and 「conventions」.</li>
<li>I used 2-stage stacking.</li></ul></li>
</ul>
<h1>What Didn’t Work</h1>
<ul>
<li>MLM<ul>
<li>I couldn't get good LB and CV scores.</li>
<li>I think MLM is useful when the dataset is small, but this competition could also use fb1 and fb2 data in addition to fb3, so I think MLM was not needed.</li></ul></li>
<li>Focal loss (hard)<ul>
<li>The LB score dropped, so I didn't use it.</li></ul></li>
</ul>
<h1>Important ciations</h1>
<ul>
<li>FB3 / Deberta-v3-base baseline [train] <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">link</a></li>
<li>FB3 / Deberta-v3-base baseline [inference] <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference/notebook" target="_blank">link</a></li>
<li>FB3 Deberta Family Inference [weight tune] <a href="https://www.kaggle.com/code/jingwora1/fb3-deberta-family-inference-weight-tune" target="_blank">link</a></li>
<li>FB3 single pytorch model [inference] <a href="https://www.kaggle.com/code/kojimar/fb3-single-pytorch-model-inference" target="_blank">link</a></li>
</ul>
<h1>Thanks and Acknowledgements</h1>
<p>This was more or less my first time participating in an NLP competition.<br>
I think this competition was perfect for someone with no NLP experience, because I learned a lot from the discussions. <br>
We were also able to reaffirm the importance of trusting CV. <br>
Again, thank you very much for a very enjoyable competition! </p>
<p><a href="https://www.kaggle.com/imvision12" target="_blank">@imvision12</a> Thank you for your comment! I referred to this <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/352001" target="_blank">discussion</a>. Please also note the leaks in this competition, as there is some overlap between the past data (fb1) and the fb3 data. I got the scores below with deberta-v3-base.<br>
[without pseud label] PublicLB=0.446387/PrivateLB=0.445974<br>
[with pseud label] PublicLB=0.440058/PrivateLB=0.440377</p> |
Feedback Prize - English Language Learning | 44th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>Our solution is an ensemble of ridge regression and support vector regression, trained on predictions of 8 fine-tuned deberta models and 2 classical ml algorithms using 10-fold cross-validation and optimized with <em>Optuna</em> (see Fig. 1).<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F303188%2F6a3d10f8713e526a45932203882ddc1e%2Ffb3%20(1).png?generation=1670625664430131&alt=media" alt="Figure 1: Our solution scheme"><br>
The scores are listed in the table below:</p>
<table>
<thead>
<tr>
<th><strong>sample</strong></th>
<th><strong>score</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>cross-validation</td>
<td>0.44355</td>
</tr>
<tr>
<td>public test</td>
<td>0.43845</td>
</tr>
<tr>
<td>private test</td>
<td>0.435586</td>
</tr>
</tbody>
</table>
<h2><a href="https://www.kaggle.com/code/andreyustinov/inference-notebook-stacking?scriptVersionId=112415397" target="_blank">Our code</a></h2>
<h2>What worked:</h2>
<ul>
<li>Very careful cross-validation on 10 folds and separate coefficients for each target field (see the sketch after this list)</li>
<li>Mix of debertas</li>
<li>Adding some classical ml (even with features based on deberta)</li>
<li>Automatic weight tune</li>
</ul>
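<p>A minimal stacking sketch in the spirit of Fig. 1: one Ridge meta-model per target fitted on the out-of-fold predictions of the base models. The array names, shapes, and the in-sample fit below are illustrative simplifications, not the exact pipeline (see the linked notebook for that).</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_models, n_samples = 10, 3911
oof_preds = rng.normal(3.0, 0.5, size=(n_models, n_samples, 6))  # base-model OOF preds
labels    = rng.normal(3.0, 0.5, size=(n_samples, 6))

meta_models = []
for t in range(6):
    X = oof_preds[:, :, t].T                      # (n_samples, n_models) for this target
    reg = Ridge(alpha=1.0).fit(X, labels[:, t])   # own coefficients per target field
    meta_models.append(reg)

stacked = np.column_stack(
    [m.predict(oof_preds[:, :, t].T) for t, m in enumerate(meta_models)])
</code></pre>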
<h2>What didn't work:</h2>
<ul>
<li>Handcrafted and semi-automatic features (although some of them were not so bad)</li>
<li>Sentence embeddings (especially for a <code>vocabulary</code> mark)</li>
</ul>
<h2>Important Citations:</h2>
<ul>
<li><a href="https://www.kaggle.com/code/jingwora1/fb3-deberta-family-inference-weight-tune" target="_blank">Ten debertas notebook</a></li>
<li><a href="https://optuna.org" target="_blank">Optuna</a></li>
</ul>
<h2>Thanks and Acknowledgements:</h2>
<ul>
<li>My teammate Andrey Ustinov</li>
<li>All those great and honorable persons who publish high-score notebooks (even if it blows the leaderboard up)</li>
</ul>
<h2>Team members:</h2>
<ul>
<li><a href="https://www.kaggle.com/andreyustinov" target="_blank">Andrey Ustinov</a> </li>
<li><a href="https://www.kaggle.com/user303188" target="_blank">Innokentiy Humonen</a></li>
</ul> |
Feedback Prize - English Language Learning | 46th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>Our solution was a large ensemble of around 30 models trained with various heads, optimizers, backbones, hyperparameters, MLM pretraining, and pseudo-label configurations. We also included a few SVR/Ridge models similar to <a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">Chris's SVR solution</a>.</p>
<p>We tuned six groups of ensemble weights, one per column. Doing this allowed us to train single-label models or models trained on a subset of the labels (e.g., we included a model trained to predict only <code>phraseology</code> and <code>vocab</code>, another only <code>syntax</code> and <code>conventions</code>), which helped with diversity.</p>
<p>We trained many additional models but kept only those that improved both CV and LB in the ensemble. However, in the final days of the competition, I worked on an ensemble that we optimized only for CV, which turned out to be our strongest submission.</p>
<h2>What Worked</h2>
<h3>Pseudo-labeling</h3>
<p>I used a subset of my submission ensemble to generate pseudo labels. The pseudo-labels generated had quite a different distribution from the training data labels. So, we selected only a subset of 2021 data that more closely matched the train data distribution. We included an extra 2x additional data with this approach.</p>
<p>We were careful to ensure the pseudo labels did not contain leaks by generating a set of pseudo labels per fold. We also excluded any text_ids that crossed over with this comp's data.</p>
<h3>Synthetic data</h3>
<p>We trained a <code>t5-base</code> model to complete essays based on a prompt, including the first N words and the labels. We then used the model to create many synthetic examples, which we pseudo-labeled. We selected only the instances where the pseudo-labels closely matched the source label.</p>
<p>I doubled the original training dataset with these synthetic examples and added them in with the extra 2021 data.</p>
<h3>Many different pooling layers</h3>
<ul>
<li>Mean Pool</li>
<li>Mean Pool of concatenated hidden layers</li>
<li>Concat Max/Mean/Min Pool</li>
<li>Conv1d Pool</li>
<li>Attention Pool</li>
<li>Weighted Layer Pool (see the sketch after this list)</li>
<li>Concat CLS token from multiple hidden layers.</li>
</ul>
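<p>As an illustration of one of the pooling variants above, here is a hedged sketch of a weighted layer pool: a learnable, softmax-normalised weight per hidden layer. The layer counts and the CLS-token readout are assumptions, not necessarily our exact head.</p>
<pre><code>import torch
import torch.nn as nn

class WeightedLayerPooling(nn.Module):
    def __init__(self, num_hidden_layers=12, layer_start=8):
        super().__init__()
        self.layer_start = layer_start
        n = num_hidden_layers + 1 - layer_start          # number of layers being mixed
        self.layer_weights = nn.Parameter(torch.ones(n))

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (num_hidden_layers + 1) tensors, each (B, L, H)
        stacked = torch.stack(list(all_hidden_states)[self.layer_start:], dim=0)
        w = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        weighted = (w * stacked).sum(dim=0)              # (B, L, H)
        return weighted[:, 0]                            # CLS-token embedding

# dummy usage
layers = [torch.randn(2, 128, 768) for _ in range(13)]
print(WeightedLayerPooling()(layers).shape)   # torch.Size([2, 768])
</code></pre>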
<h3>Many different backbones</h3>
<p>deberta-v3-base was the best backbone, but the final solution included a deberta-v3-large, deberta-large, a deberta-xlarge, and a roberta-large.</p>
<p>I tried Bart and Funnel, but they didn't appear to work well on my CV.</p>
<h3>Single-label and subset-label models</h3>
<p>For any model that worked using all six labels at once, we also tried training a model with a single label. For a few examples, we tried 2 or 3 labels. Most of these models improved the ensemble CV, but only some improved the LB.</p>
<h3>Many different hyperparameters</h3>
<ul>
<li>Different max length settings (512, 1024, 1256, and 1408)</li>
<li>Various optimizers (Adam and Adafactor worked best)</li>
<li>MLM pretraining (although this only worked for a few single-label models)</li>
<li>Different epoch settings per label subset (e.g., more epochs for vocabulary single label model and fewer for syntax)</li>
</ul>
<h2>What Didn't Work</h2>
<ul>
<li><p>Vadim worked on many alternative pseudo-labeling approaches, including scraping essays and letters from the internet. I will get him to create a post describing that. Unfortunately, at this stage, Vadim is still without reliable power.</p></li>
<li><p>Focusing on a strong single model, as is usually the best idea, seemed impossible. Training a model with the same configuration with only a differing seed gave wildly different LB and CV results. Any hyperparameter tuning appeared to affect the results randomly. Only pseudo-labeling consistently improved single model CV, and only ensembling reliably improved our LB score.</p></li>
<li><p>We tried to use different Adversarial Training methods, such as the Fast Gradient Sign Method, Adversarial Weight Perturbation, etc. We did have some models that saw improved CV with AWP, but the increase in train time didn't seem worth it, especially with so many pseudo labels.</p></li>
<li><p>Model stacking: We tried a few approaches but they didn't seem to give very strong CV results.</p></li>
</ul>
<h2>Citations</h2>
<ul>
<li>Huge thanks to this masterpiece of a kernel: <a href="https://www.kaggle.com/code/rhtsingh/utilizing-transformer-representations-efficiently" target="_blank">Utilising Transformer Representations Effictively</a> by <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a></li>
<li>This kernel by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>: <a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">RAPIDS SVR - CV 0.450 - LB 0.44x</a>, which added good diversity to our ensemble.</li>
<li>Many of the previous Feedback and other NLP competition solutions.</li>
</ul>
<h2>Team</h2>
<ul>
<li><a href="https://www.kaggle.com/lexandstuff" target="_blank">@lexandstuff</a> (me) from Brisbane, Australia</li>
<li><a href="https://www.kaggle.com/e0xextazy" target="_blank">@e0xextazy</a> from Moscow, Russia</li>
<li><a href="https://www.kaggle.com/vad13irt" target="_blank">@vad13irt</a> from Dnipro, Ukraine</li>
<li><a href="https://www.kaggle.com/obatek" target="_blank">@obatek</a> from Tashkent, Uzbekistan</li>
</ul>
<p>Unfortunately for us, in October, Vadim was left without reliable internet, power, and water after the Russian military began attacking civilian infrastructure in Dnipro and across Ukraine, leaving him unable to continue working and his family and their neighbors without basic human needs. Vadim's many important contributions in the first half of the competition gave us a solid foundation to continue in the 2nd half. </p> |
Feedback Prize - English Language Learning | 47th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks for hosting this competition and congratulations to all the participants. This is my first time getting a medal on Kaggle, and I've learned a lot from the experience.</p>
<h1>Overview</h1>
<p>My solution is training a lots of models with different parameters(pooling techniques, backbones, max_len, seed) and ensemble them by weighted average. My best CV score is 0.4484.</p>
<h1>Best Model Configuration</h1>
<ul>
<li>Backbone: deberta-v3-base</li>
<li>Pooling: mean</li>
<li>Max_len: 512</li>
<li>Learning rate:<ul>
<li>Encoder: 2e-4 (layer-wise lr decay: 0.25)</li>
<li>Decoder: 1e-3</li></ul></li>
<li>Epoch: 4 (1 for warm up)</li>
<li>Batch size: 8</li>
<li>Fold: 5</li>
<li>Seed: 42</li>
</ul>
<h1>Parameters</h1>
<ul>
<li>Backbone: deberta-v3-base, deberta-v3-large, deberta-v3-small</li>
<li>Pooling: mean, cls, max, weighted layer(use last 4 or 5 layers)</li>
<li>Max_len: 512, 768, 1024, 1536</li>
<li>Seed: 13, 42</li>
<li>Epoch: 3, 4, 5</li>
</ul>
<h1>Ensemble</h1>
<p>I find weights with <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/363773" target="_blank">this method</a> over my model pool, filter out the models whose weight is < 1e-3, and re-find the weights.</p>
<h1>What worked</h1>
<ul>
<li>[High Impact] Layer-wise Learning Rate Decay</li>
<li>[Low Impact] Add special token: "\n\n", "\r\n\r\n"</li>
<li>[Low Impact] First epoch warm up</li>
<li>[Low Impact] Freeze layers</li>
</ul>
<h1>What Didn’t Work</h1>
<ul>
<li>AWP</li>
<li>Pseudo Labels</li>
<li>Last Layer Re-initialization</li>
</ul>
<h1>Important Citation</h1>
<ul>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">FB3 / Deberta-v3-base baseline [train]</a></li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/363773" target="_blank">Strategies For Ensembling Models Using OOF Predictions</a></li>
<li><a href="https://www.kaggle.com/code/rhtsingh/utilizing-transformer-representations-efficiently" target="_blank">Utilizing Transformer Representations Efficiently</a></li>
</ul>
<h1>Thanks and Acknowledgements</h1>
<p>Thanks to competition organizers for hosting this competition and everyone who shared their knowledge during the competition. I am so happy to learn a lot from it!</p> |
Feedback Prize - English Language Learning | 49th place (Weighted Loss etc.) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'm new to NLP and this is my first NLP competition.<br>
So I learned a lot of things from the competition. <br>
Thanks for hosting the competition.</p>
<h1><strong>Overview</strong></h1>
<p>I ensembled 15 models (each with 4 folds).<br>
I used orthogonal initialization, re-initialization of the last layer, and layer-wise learning rate decay for all my models.<br>
These methods worked well for me.</p>
<p>Weighted Loss slightly improved CV and LB for some models.<br>
The Weighted Loss is maybe my unique point.<br>
I shared <a href="https://www.kaggle.com/code/taruto1215/49th-place-deberta-v3-base-weightedloss/notebook" target="_blank">my code</a> for Weighted Loss.<br>
The Weighted Loss tends to improve the score of the targets that are easier to predict.<br>
For model diversity, I wanted to train two versions of each model (one with Weighted Loss, one without), but there was no time to train them all.</p>
<p>Then I tried applying several methods (AWP, SiFT, Mixout, and so on) to the Deberta-v3-base model, but none of them worked.<br>
However, almost at the end of this competition, I found that SiFT and Mixout worked well for Deberta-v3-small (without Weighted Loss).<br>
I wanted to apply these methods to other models (large, xsmall, RoBERTa and so on), but there was no time to train…<br>
So these methods were applied only to the Deberta-v3-small model.</p>
<p>The models used for my best submission are below.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3830852%2F1f4c9ecc4ae99c559a1b43610f382a52%2F0004-12-01%2010.48.23.png?generation=1669902532219143&alt=media" alt=""></p>
<p>For the deberta-v3-XXX models the max_len was set to 1462; for the others, max_len=512.</p>
<h1><strong>What Worked</strong></h1>
<ul>
<li><p>Orthogonal initialization [High impact]</p></li>
<li><p>Mean pooling [High impact]</p></li>
<li><p>Reinitialize the last layers [High impact]</p></li>
<li><p>Layer-wise learning rate decay [High impact] <br>
for deberta-v3-base lr_mult=0.9, for the small model lr_mult=0.8</p></li>
<li><p>SiFT [Impact depends on the model (probably because it is sensitive to hyper-parameters)]<br>
It works only for deberta-v3-small / xsmall (start at epoch 2, learning_rate=1e-3, init_perturbation=1e-2)<br>
<a href="https://arxiv.org/pdf/1911.03437.pdf" target="_blank">https://arxiv.org/pdf/1911.03437.pdf</a><br>
<a href="https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/sift/sift.py" target="_blank">https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/sift/sift.py</a></p></li>
<li><p>MIXout [Medium impact for some models]<br>
Only for the deberta-v3-small/xsmall, it works (mixout_prop=0.075)<br>
<a href="https://arxiv.org/abs/1909.11299" target="_blank">https://arxiv.org/abs/1909.11299</a></p></li>
<li><p>Weighted Loss [Medium impact]<br>
Please reference below and <a href="https://www.kaggle.com/code/taruto1215/49th-place-deberta-v3-base-weightedloss/notebook" target="_blank">my code</a><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3830852%2F972fe868dfb919663297d2d0a4daa05d%2FWeightedLoss.png?generation=1670244987116720&alt=media" alt=""><br>
<a href="https://openaccess.thecvf.com/content_cvpr_2018/papers/Kendall_Multi-Task_Learning_Using_CVPR_2018_paper.pdf" target="_blank">https://openaccess.thecvf.com/content_cvpr_2018/papers/Kendall_Multi-Task_Learning_Using_CVPR_2018_paper.pdf</a><br>
I guess the weighted loss improves the score of the targets that are easier to predict.<br>
Considering the characteristics of the weighted loss, these results are reasonable (a short sketch of this uncertainty-based weighting follows this list).<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3830852%2F4557bd4fd2607c1f6ad308cd9730a243%2F0004-12-01%2011.21.02.png?generation=1669904489190367&alt=media" alt=""></p></li>
<li><p>For the ensemble, the methods below worked (they did not improve CV/LB on their own, but they helped in the ensemble)<br>
-- Multi-sampled dropout (For the deberta-base / large, dropout_rate=0.05)<br>
<a href="https://arxiv.org/abs/1905.09788" target="_blank">https://arxiv.org/abs/1905.09788</a><br>
-- AWP (Adversarial weight perturbation) (But in ensemble it work as small weight)<br>
<a href="https://arxiv.org/abs/2004.05884" target="_blank">https://arxiv.org/abs/2004.05884</a><br>
-- Weighted CLS pooling for the last 4 layers.<br>
-- Add a newline ([BR]) token (CV improved by 0.005 but LB got worse; still, it helps in the ensemble)</p></li>
</ul>
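<p>For reference, here is a minimal sketch of the uncertainty-based multi-task weighting from the Kendall et al. paper linked above, with one learnable log-variance per target; the initialisation and the exact loss form are assumptions of this sketch, not the author's implementation (see the shared notebook for that).</p>
<pre><code># Sketch: uncertainty-weighted multi-task MSE (Kendall et al., CVPR 2018), simplified form.
# Easier (lower-variance) targets end up with a larger effective weight.
import torch
import torch.nn as nn


class UncertaintyWeightedMSE(nn.Module):
    def __init__(self, n_targets=6):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_targets))   # log(sigma^2) per target

    def forward(self, preds, labels):
        per_target_mse = ((preds - labels) ** 2).mean(dim=0)   # (n_targets,)
        precision = torch.exp(-self.log_vars)
        return (precision * per_target_mse + self.log_vars).mean()
</code></pre>
<p>The criterion's <code>log_vars</code> are extra learnable parameters, so they need to be passed to the optimizer together with the model parameters.</p>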
<h1><strong>What Didn't Work:</strong></h1>
<ul>
<li><p>Pseudo labeling using FB2+3 (CV was much improved, but the LB rank was very low, so I didn't use it)</p></li>
<li><p>Concatenated/weighted mean pooling of different layers</p></li>
<li><p>Attention pooling</p></li>
<li><p>Masked language modeling (Pretrain)</p></li>
<li><p>SWA (Stochastic Weight Averaging)<br>
<a href="https://arxiv.org/abs/1803.05407" target="_blank">https://arxiv.org/abs/1803.05407</a></p></li>
<li><p>FGM (Fast gradient method)<br>
<a href="https://arxiv.org/pdf/1605.07725.pdf" target="_blank">https://arxiv.org/pdf/1806.01477.pdf</a></p></li>
<li><p>Random token dropout</p>
<p>AND SO ON….</p></li>
</ul>
<h1><strong>Ensemble strategy:</strong></h1>
<ol>
<li>Optimize ensemble weights (target-wise) using optuna. (Trust CV)<br>
When I ran optuna on my OOF predictions, it set some Deberta-v3-base weights to ZERO. <br>
<strong>I was afraid to submit, but I trusted my CV, and this turned out to be the best on the private LB!</strong></li>
<li>Manually optimize weights considering LB rank. (Trust LB)</li>
<li>Just averaging 1. + 2. (Trust CV and LB))</li>
</ol>
<h1><strong>CV strategy</strong></h1>
<ul>
<li>Just using MultilabelStratifiedKFold, Fold4, seed42</li>
</ul>
<p>Thanks.</p>
<p>Sorry for my delay.</p>
<p>There are no big changes from the <a href="https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/sift/sift.py" target="_blank">source code</a>.<br>
I changed the KL loss to the MSE loss at line 161.</p>
<p>I changed the class as shown below.<br>
Thanks.</p>
<pre><code>class AdversarialLearner:
    def __init__(self, model, adv_modules=None):
        if adv_modules is None:
            self.adv_modules = [m for m in model.modules() if isinstance(m, PerturbationLayer)]
        else:
            self.adv_modules = adv_modules
        self.parameters = [p for p in model.parameters()]
        self.model = model
        self.rmse = RMSELoss(reduction="mean")  # reduction value assumed; it was stripped from the post

    def loss(self, inputs, target, loss_fn="mse", scaler=None):
        self.prepare()
        if isinstance(loss_fn, str):
            loss_fn = perturbation_loss_fns[loss_fn]
        # forward/backward pass used to estimate the adversarial perturbation direction
        pert_logits = self.model(inputs)
        pert_loss = loss_fn(pert_logits, target.detach())
        if scaler is not None:
            scaler.scale(pert_loss).backward()
        else:
            pert_loss.backward()
        for m in self.adv_modules:
            ok = m.update_delta(True)
        for r, p in zip(self.prev, self.parameters):
            p.requires_grad_(r)
        # recompute the loss with the updated perturbation, now using RMSE instead of KL
        pert_logits = self.model(inputs)
        pert_loss = self.rmse(pert_logits, target)
        self.cleanup()
        return pert_loss

    def prepare(self):
        self.prev = [p.requires_grad for p in self.parameters]
        for p in self.parameters:
            p.requires_grad_(False)
        for m in self.adv_modules:
            m.adversarial_(True)

    def cleanup(self):
        for r, p in zip(self.prev, self.parameters):
            p.requires_grad_(r)
        for m in self.adv_modules:
            m.adversarial_(False)
</code></pre>
Feedback Prize - English Language Learning | 4th place solutioin | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>My solution is stacking a total of 22 models. All the models trained in this competition use the same folds (4 folds).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1403936%2F9a491f9c96cbbe035ed12ee656ba812b%2F2.jpeg?generation=1669829854528737&alt=media" alt=""></p>
<h2>What Worked</h2>
<ul>
<li>[High Impact] Train SVR / Ridge using pre-trained model embeddings<ul>
<li>Extracted features from the last 4 layers of the 38 pre-trained models and used forward feature selection to find the best SVR</li>
<li>Trained ridge model with optimal embeddings combination for SVR</li>
<li>SVR is my best single model (CV: 0.4467); a rough sketch of the embeddings + SVR idea follows this list</li></ul></li>
<li>[High Impact] Pseudo Labeling<ul>
<li>Two patterns are used for each models<ul>
<li>Pre-train with pseudo labels and fine-tune with only the given train data afterwards</li>
<li>Concatenate pseudo labels with the given train data, and train all this data</li></ul></li>
<li>The amount of FB1 data used as pseudo label<ul>
<li>use only FB1 data similar to FB3</li>
<li>use all FB1 data (but only few model)</li></ul></li>
<li>Repeat over and over again</li></ul></li>
<li>[High Impact] Ridge and LGB stacking<ul>
<li>CV: 0.4425(Ridge), 0.4443(LGB), 0.4423(Weighted Average of Ridge and LGB)</li>
<li>Train Ridge using the predictions of fine-tuned models as input values</li>
<li>Train LGB using the predictions and meta-features created by <a href="https://pypi.org/project/readability/" target="_blank">readability</a></li></ul></li>
<li>[Middle Impact] Add special token ('\n')<ul>
<li>CV increase (0.001)</li></ul></li>
<li>[Low Impact] Shorten sequence length<ul>
<li>Almost use 512 length</li>
<li>Inference time can be reduced</li>
<li>For model diversity, train few models with 1500 length</li></ul></li>
<li>[Low Impact] Tips for stable training<ul>
<li>Full precision training</li>
<li>Layer wise learning rate decay</li>
<li>Cosine learning rate scheduler</li>
<li>Hyperparameter tuning</li></ul></li>
</ul>
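<p>A rough sketch of the embeddings + SVR idea, using mean-pooled hidden states from a frozen backbone and scikit-learn's SVR; the actual solution used the last 4 layers of 38 backbones with forward feature selection (and RAPIDS SVR), so treat this only as a starting point. <code>texts</code> and <code>y</code> are assumed to hold the essays and the 6 targets.</p>
<pre><code># Sketch: frozen-transformer embeddings + SVR (illustrative; backbone name is an example)
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
backbone = AutoModel.from_pretrained("microsoft/deberta-v3-base").to(device).eval()


@torch.no_grad()
def embed(texts, batch_size=8, max_len=512):
    feats = []
    for i in range(0, len(texts), batch_size):
        enc = tokenizer(texts[i:i + batch_size], padding=True, truncation=True,
                        max_length=max_len, return_tensors="pt").to(device)
        hidden = backbone(**enc).last_hidden_state                          # (B, L, H)
        mask = enc["attention_mask"].unsqueeze(-1)
        feats.append(((hidden * mask).sum(1) / mask.sum(1)).cpu().numpy())  # masked mean pooling
    return np.concatenate(feats)


X = embed(texts)
svr = MultiOutputRegressor(SVR(C=1.0)).fit(X, y)                            # one SVR per target
</code></pre>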
<h2>What Didn’t Work</h2>
<ul>
<li>AWP, FGM</li>
<li>Train SVR using fine-tuned model embeddings</li>
<li>Last Layer Re-initialization</li>
<li>Regular MLM</li>
<li>Predict punctuation errors and use their statistics as meta-features in the stacking model<ul>
<li><a href="https://www.kaggle.com/code/nulldata/deep-learning-powered-punctuation-corrector/notebook" target="_blank">https://www.kaggle.com/code/nulldata/deep-learning-powered-punctuation-corrector/notebook</a></li></ul></li>
<li>Predict the quality rating of discourse elements (FB2 task targets) and use their statistics as meta-features in the stacking model<ul>
<li>CV score and public score increase, but private score decrease (0.0002)</li></ul></li>
<li>Fine-tune spelling correction model as convention-specific model<ul>
<li><a href="https://huggingface.co/oliverguhr/spelling-correction-english-base?text=lets+do+a+comparsion" target="_blank">https://huggingface.co/oliverguhr/spelling-correction-english-base?text=lets+do+a+comparsion</a></li></ul></li>
<li>Fine-tune pre-trained weights from top solution in the past competition<ul>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313424" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313424</a></li></ul></li>
</ul>
<h2>Important Citations:</h2>
<ul>
<li><a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/300938" target="_blank">Petfinder.my - Pawpularity Contest 1st place solution</a></li>
<li><a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/258328" target="_blank">CommonLiit Readability Prize 2nd place solution</a></li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347536" target="_blank">Feedback Prize - Predicting Effective Arguments 1st place solution</a></li>
<li><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">RAPIDS SVR - CV 0.450 - LB 0.44x</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">FB3 / Deberta-v3-base baseline [train]</a></li>
</ul>
<h2>Thanks and Acknowledgements:</h2>
<p>Thanks to the organizers and Kaggle for a very exciting competition!<br>
I will use what I learned here to do my best in the next competition.</p>
<h2>Team Members:</h2>
<ul>
<li><a href="https://www.kaggle.com/shuheigoda" target="_blank">@shuheigoda</a></li>
</ul>
<p>Thanks for your great sharing!</p>
<p>In step 3, the predictions for the FB1 data of models trained in step1 and step2 were blended and used as pseudo labels. Step 4 is also the same.</p> |
Feedback Prize - English Language Learning | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No, just old data</p>
<p>I always 100% make sure to have same folds.</p>
<p>My final CV was btw ~0.445</p>
<blockquote>
<p>I wanna ask, why do you choose GeM Pooling? what's the rationale behind?</p>
</blockquote>
<p>It is my favorite type of pooling, I always use it :)<br>
Attention pooling does not make much sense to me personally, so I never try it.</p>
<p>I see, I am checking the ensemble score of the average of the three seeds.</p>
<p>Yes absolutely, which is why I tried to use the bounds.</p>
<p>We actually used negative weights successfully in FB2, I should have trusted it :)</p>
<p>As I use dynamic padding it is just to go up to the largest one.</p>
<blockquote>
<p>Is there any reason that you take the average instead of the score of the ensemble?</p>
</blockquote>
<p>I am always just averaging seeds of same model</p>
<blockquote>
<p>When you start training for the training data, did you resume cosine decay as a continuous one from the pseudo label or start another one again?</p>
</blockquote>
<p>Start another one</p>
<p>What GPU do you have? Also 5 epochs is probably too much, I was fine with 2-3 epochs.</p>
<p>You also need to utilize all the runtime tricks, mixed precision, dynamic padding, etc.</p>
<p>Exact same paras.</p>
<p>Deberta-v3-large is very fast. XL and XXL can take a while, but nothing I couldn't run overnight. The data is very small. I did not use XL and XXL for pseudos to save time. </p>
<p>Thanks to Kaggle and the hosts for hosting the 3rd edition of the Feedback competition.</p>
<h1>Overview</h1>
<p>My solution is based on an ensemble of multiple finetuned NLP transformer models. Additionally, I employ two rounds of pseudo tagging on old Feedback data.</p>
<p>I follow a lot of our advice from the second Feedback competition described <a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347536" target="_blank">here</a> and worked for around two weeks on this competition. I split my time working around 50% on accuracy, and 50% on efficiency solution. </p>
<h2>Cross validation</h2>
<p>In general, I observed very good correlation between local CV and public LB. As the data is very small and the metric is RMSE, the local scores can be quite shaky. To that end, for each experiment I was running, I trained three unique seeds and always only compared the average of these three seeds. So for example, if I would want to compare LR=1e-5 vs. LR=2e-5 I would run for each of those two experiments three separate seeds for a single fold, and only if the average of the three seeds improves, I would run on all my 5-folds, and then again compare 3-seed blends to make sure.</p>
<p>This brings more trust to my experiments, and as the data is really small, it was in general feasible for me to do.</p>
<h2>Modeling</h2>
<p>The problem at hand is very much straight forward, feed in the text to a transformer model, apply some pooling, add a linear head, and predict regression targets. I used combinations of the following variations of the training routine for my final ensemble:</p>
<p>Token length:</p>
<ul>
<li>512</li>
<li>1024</li>
<li>2048</li>
</ul>
<p>All my models are trained and predicted with dynamic padding.</p>
<p>Pooling:</p>
<ul>
<li>CLS Token</li>
<li>GeM Pooling</li>
</ul>
<p>Backbones:</p>
<ul>
<li>Deberta-V3-Base</li>
<li>Deberta-V3-Large</li>
<li>Deberta-V2-XL</li>
<li>Deberta-V2-XXL</li>
<li>Longformer Large</li>
<li>Roberta-Large</li>
</ul>
<p>I usually run 3 epochs for most of my models, all with cosine decay learning rate always picking the last epoch. I use differential learning rate for backbone and head of the model. I do not use any other techniques suggested in forums like differential lr across layers of backbone or reinitialization.</p>
<p>As always, I retrained my models on full data for final subs, but also blended some fold models in as I had lots of runtime left.</p>
<h2>Pseudo labels</h2>
<p>I followed our routine from 2nd FB competition and employed two stages of pseudo labeling following these steps:</p>
<ol>
<li>Train an ensemble of models only on the given train data</li>
<li>Run predictions on the previous Feedback competition data excluding this competition's data</li>
<li>Use the pseudo labels from this extra dataset by pre-training models on the pseudo labels and fine-tuning them only on the given train data afterwards. </li>
<li>Repeat steps 1-3 three times using an ensemble of models trained on pseudo labels now</li>
</ol>
<p>By doing pre-training and finetuning, I did not need to adjust the distribution of the pseudo labels, because I am doing the final adjustment on this competition's data. This allowed me to use all prior data without issues.</p>
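<p>Schematically, one round of this routine looks like the sketch below; the function names and the epoch counts are placeholders around whatever training loop is already in place, not the actual code.</p>
<pre><code># Schematic of one pseudo-label round: pre-train on pseudo labels, then finetune on train data.
def pseudo_label_round(build_model, train_one_cycle, predict_ensemble, train_ds, fb_old_texts):
    # 1) soft labels for the old Feedback data from the current ensemble
    pseudo_targets = predict_ensemble(fb_old_texts)                 # (n_old, 6)

    # 2) pre-train a fresh model on the pseudo-labelled data
    model = build_model()
    train_one_cycle(model, texts=fb_old_texts, targets=pseudo_targets, epochs=2)

    # 3) finetune only on the competition train data; this step re-calibrates the model
    #    to the competition's label distribution, so the pseudo labels need no adjustment
    train_one_cycle(model, texts=train_ds.texts, targets=train_ds.targets, epochs=3)
    return model
</code></pre>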
<h2>Ensembling</h2>
<p>For most of my subs I just did usual average across seeds and models. My final best sub is a Nelder-Mead optimized ensemble of models, where I optimize the ensemble weights separately per target column. To not overfit too much on CV, I added weight bounds between 1 and 3 on the weights.</p>
<p>Actually, I could have trusted the local optimization even more, I have an unselected sub from couple of days ago with best local CV that would score #2 on private, which has unrestricted weights for the ensemble, also with negative weights, but it felt a bit too risky and as I only did one sub for best CV, I chose a bit more of a conservative one.</p>
<h2>What did not work</h2>
<ul>
<li>Augmentations (as always, specifically tricky with regression)</li>
<li>Different losses</li>
<li>TFIDF</li>
<li>Other backbones (such as T5, GPT, etc) - Deberta is so strong</li>
<li>2nd stage models / stacker models</li>
</ul>
<h2>Final subs</h2>
<p>I did one sub based on my conservative best CV score, which also was my best selected private LB sub (although I had a better one unselected). I did one sub based on best public LB which was clearly worse on private LB. And I spent one sub on efficiency.</p>
<p>yes exactly</p> |
Feedback Prize - English Language Learning | 67th place solution(Silver medal) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to Kaggle and the competition hosts for providing this opportunity.</p>
<p>To be honest, I was almost heartbroken because my LB score did not go up at all until the end, but in my case trusting CV paid off. Here is a brief description of the solution.</p>
<h3>Multi task learning</h3>
<p>Since there were 6 targets in this competition, I added the following new targets and performed multi-task learning, which improved CV/LB (a minimal sketch of how these auxiliary targets can be built follows the list below).<br>
In the end, only the targets with the best CV, target_min and target_max, were added to the competition targets for multi-task learning.</p>
<ul>
<li>target sum</li>
<li>target mean</li>
<li>target median</li>
<li>target min</li>
<li>target max</li>
</ul>
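<p>A minimal sketch of building such auxiliary targets from the six original columns (column names follow the competition data; the rest is illustrative):</p>
<pre><code># Sketch: derive auxiliary targets for multi-task learning from the six original scores.
import pandas as pd

TARGETS = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]

train = pd.read_csv("train.csv")
train["target_min"] = train[TARGETS].min(axis=1)
train["target_max"] = train[TARGETS].max(axis=1)

# the model head then predicts 8 values; only the original 6 are used for the metric/submission
y = train[TARGETS + ["target_min", "target_max"]].values
</code></pre>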
<h3>Layer-wise Learning Rate Decay(LLRD)</h3>
<p>I divided the layers into 4 groups, set the following learning rate for each group, and trained the earlier groups with smaller learning rates (see the sketch after this list).</p>
<ul>
<li>Learning rate: 2.5e-5</li>
<li>learning_rate per group<br>
group1: learning_rate/5<br>
group2: learning_rate/4<br>
group3: learning_rate/2<br>
group4: learning_rate</li>
</ul>
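<p>A minimal sketch of this grouped layer-wise setup for a DeBERTa-style backbone; the attribute names (<code>model.backbone</code>, <code>model.head</code>) and the exact layer split are assumptions of the sketch.</p>
<pre><code># Sketch: split encoder layers into 4 groups and give earlier groups smaller learning rates.
import torch


def grouped_llrd_params(model, lr=2.5e-5):
    layers = list(model.backbone.encoder.layer)            # e.g. 12 layers for deberta-v3-base
    divisors = [5, 4, 2, 1]                                 # group1 ... group4, as described above
    group_size = (len(layers) + 3) // 4
    params = [{"params": model.backbone.embeddings.parameters(), "lr": lr / divisors[0]}]
    for g in range(4):
        group = layers[g * group_size:(g + 1) * group_size]
        params.append({"params": [p for l in group for p in l.parameters()], "lr": lr / divisors[g]})
    params.append({"params": model.head.parameters(), "lr": lr})   # regression head at full lr
    return params


optimizer = torch.optim.AdamW(grouped_llrd_params(model), weight_decay=0.01)
</code></pre>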
<h3>Ensemble</h3>
<p>Ensemble with blending was performed on 8 models.<br>
Nelder-Mead was used for blending.</p>
<p>The 8 models:</p>
<ul>
<li>microsoft/deberta-v3-base</li>
<li>microsoft/deberta-v3-large</li>
<li>roberta large</li>
<li>funnel-transformer/large</li>
<li>microsoft/deberta-large</li>
<li>microsoft/deberta-v2-xlarge</li>
<li>facebook/muppet-roberta-large</li>
<li>google/bigbird-roberta-large</li>
</ul>
<p>Blending</p>
<ul>
<li>Nelder-Mead</li>
</ul>
<h3>Other</h3>
<ul>
<li>Mean pooling</li>
<li>MultilabelStratifiedKFold (5 folds)</li>
<li>No initialization of final layer</li>
</ul> |
Feedback Prize - English Language Learning | 6th Place Efficiency Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Overview</h1>
<ul>
<li><strong>Model:</strong> 1 fold deberta-v3-small (same params 4 fold cv: 0.448930774, 1 fold public LB: 0.436674, private LB: 0.438798)</li>
<li><strong>Training:</strong> Pseudo Labelling + Distillation Learning (CustomDistillLoss)</li>
<li><strong>Inference:</strong> ONNX + ONNX Optimisation</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3563032%2F5c44ab0fb0cc21a53301ac3fc26b1238%2Ffb3-6th-efficiency-solution.drawio.png?generation=1669819431207768&alt=media" alt="fb3-6th-efficiency-price-overview"></p>
<h1>Code</h1>
<p>Teacher Models Ensemble Submission: <a href="https://www.kaggle.com/code/wuwenmin/fb3-inference-ensemble-colwise/notebook" target="_blank">https://www.kaggle.com/code/wuwenmin/fb3-inference-ensemble-colwise/notebook</a><br>
Efficiency LB Submission: <a href="https://www.kaggle.com/code/wuwenmin/fb3-onnx-cpu-inference-small-fast-truncate/notebook" target="_blank">https://www.kaggle.com/code/wuwenmin/fb3-onnx-cpu-inference-small-fast-truncate/notebook</a><br>
ONNX Optimization and Load Test: <a href="https://www.kaggle.com/wuwenmin/fb3-debertav3-small-export-onnx-and-test" target="_blank">https://www.kaggle.com/wuwenmin/fb3-debertav3-small-export-onnx-and-test</a></p>
<h1>What Worked for Both LB and Efficiency LB</h1>
<ul>
<li>[High Impact] AWP (a compact sketch follows this list)<ul>
<li>add AWP at the 3rd epoch</li>
<li>Result: 4-fold deberta-v3-base CV changed from 0.450 to 0.449</li>
<li>Reasoning and context: on this fine-tuning task it's easy to overfit the training set, and AWP helps to reduce overfitting</li></ul></li>
</ul>
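<p>For context, a compact sketch of the AWP step commonly used in the Feedback competitions is shown below; the perturbation scope (parameters whose name contains "weight") and the hyper-parameters are illustrative, not the exact values used here.</p>
<pre><code># Sketch of Adversarial Weight Perturbation: nudge selected weights along their gradient,
# run an extra forward/backward on the perturbed model, then restore the original weights.
import torch


class AWP:
    def __init__(self, model, adv_param="weight", adv_lr=1.0, adv_eps=0.01):
        self.model, self.adv_param = model, adv_param
        self.adv_lr, self.adv_eps = adv_lr, adv_eps
        self.backup = {}

    def attack(self):
        for name, p in self.model.named_parameters():
            if p.requires_grad and p.grad is not None and self.adv_param in name:
                self.backup[name] = p.data.clone()
                r_at = self.adv_lr * p.grad / (torch.norm(p.grad) + 1e-8) * torch.norm(p.data)
                p.data.add_(r_at)
                p.data = torch.min(torch.max(p.data, self.backup[name] - self.adv_eps),
                                   self.backup[name] + self.adv_eps)

    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

# per training step (usually only after a warm-up epoch):
#   loss.backward(); awp.attack(); criterion(model(batch), y).backward(); awp.restore(); optimizer.step()
</code></pre>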
<h1>What Worked for Efficiency LB</h1>
<ul>
<li><p>[High Impact] DistillLoss, referring to this <a href="https://arxiv.org/pdf/2002.12597.pdf" target="_blank">paper</a> (a minimal sketch follows this list)</p>
<ul>
<li>Implemented the DistillLoss mentioned in the paper and manually set the outlier threshold to 1.0</li>
<li>Result: 4-fold deberta-v3-small CV changed from 0.452 to 0.449</li>
<li>Reasoning and context: the student model can learn knowledge from the teacher model, and setting the outlier threshold keeps the student model from overfitting to outliers</li></ul></li>
<li><p>[High Impact] Pre-train using pseudo labeling on FB1 data</p>
<ul>
<li>Result: 4-fold deberta-v3-small CV changed from 0.449 to 0.4485</li>
<li>Reasoning and context: with more data the student model is able to learn more knowledge from the teacher model</li></ul></li>
<li><p>[Low Impact] ONNX optimization</p>
<ul>
<li>After several tests, onnx-optimizer always produced the fastest model, so I hard-coded saving the one optimised by onnx-optimizer</li>
<li>Result: reduced the running time by around 30s</li>
<li>Reasoning and context: the model optimised by onnx-optimizer is around 16MB smaller than the others, so I think it successfully removed some unnecessary calculations.</li></ul></li>
<li><p>[Low Impact] <code>intra_op_num_threads</code></p>
<ul>
<li>No matter whether you set it to 4 or 2, the running time is almost the same</li>
<li>Reasoning and context: I guess it's because the submission CPU only has 2 physical cores</li></ul></li>
</ul>
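<p>A minimal sketch of such a distillation loss with an outlier threshold, based on the description above; the mixing weight and the exact thresholding rule are assumptions of the sketch rather than the implementation used.</p>
<pre><code># Sketch: MSE distillation that ignores teacher targets too far from the hard labels,
# so the student does not chase unreliable teacher predictions (the "outlier threshold").
import torch
import torch.nn.functional as F


def distill_loss(student_preds, teacher_preds, labels, threshold=1.0, alpha=0.5):
    keep = (teacher_preds - labels).abs() < threshold        # mask out teacher outliers
    soft = F.mse_loss(student_preds[keep], teacher_preds[keep]) if keep.any() else 0.0
    hard = F.mse_loss(student_preds, labels)
    return alpha * soft + (1.0 - alpha) * hard
</code></pre>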
<h1>What Didn't Work</h1>
<ul>
<li><p>Batch Inference for Efficiency</p>
<ul>
<li>I sorted the test set by text length and tuned the batch_size over 8, 16, 128, and 256</li>
<li>Result: the running time increased a lot</li>
<li>Reasoning and context: as the batch size increases, more computation is wasted on padding tokens</li></ul></li>
<li><p>Propensity Score Matching</p>
<ul>
<li>I used propensity score matching to select the subset of FB1 data that has a similar distribution to this competition's data.</li>
<li>Result: with this ~5000-sample subset, I failed to make the xsmall model work</li>
<li>Reasoning and context: even samples in the FB1 data with a different distribution can still help with distillation learning (will verify this when I have time)</li></ul></li>
</ul>
<h1>Additional Context</h1>
<h2>CV Strategy</h2>
<p><code>MultiLabelStratifiedKFold</code> 4 folds with seed=42</p>
<h2>Important Citations:</h2>
<ul>
<li><a href="https://arxiv.org/pdf/2002.12597.pdf" target="_blank">https://arxiv.org/pdf/2002.12597.pdf</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-submission-time" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-submission-time</a></li>
</ul>
<h1>Thanks and Acknowledgements:</h1>
<p>Thanks to the organizers for holding this interesting competition. I'm new to NLP and Transformers. Thanks to the kagglers for sharing very helpful codes and discussions, especially <a href="https://www.kaggle.com/meli19" target="_blank">@meli19</a> and <a href="https://www.kaggle.com/xyzdivergence" target="_blank">@xyzdivergence</a>. I've learned a lot through this competition.</p>
<p>Thanks, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, and Congrats on your 3rd LB! I'm new to NLP and Transformers. I've learned a lot from other kagglers through this competition. I'll keep learning and participating:)</p>
<p>Congrats on your 9th LB!</p>
<p>I just treat my shake-down of this competition as a negative sample to learn from. I learned a lot from other kagglers through this competition, that's mostly important:)</p> |
Feedback Prize - English Language Learning | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>I noticed that the correlation between the 6 classes is quite high in my baseline models. So I tried to relax it and created models for each class, which I call “separate models” here. I also trained single models (non-separate models) but with separate heads. In the end, my solution is a class-wise weighted average of 4 separate models and 5 single models, where I used the gp_minimize function from scikit-optimize to determine the weights. <br>
Note: a 5-fold separate model uses 5 x 6 = 30 models and it took around 1h for GPU submission.</p>
<p>selected submission: CV=0.4422, PublicLB=0.436362, PrivateLB=0.434121 <br>
(best CV was my best PrivateLB)</p>
<table>
<thead>
<tr>
<th>exp</th>
<th>type</th>
<th>model</th>
<th>fold</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>14_v1_01</td>
<td>separate</td>
<td>deberta-v3-base</td>
<td>5</td>
<td>0.4524</td>
<td>0.4371x</td>
<td>0.4380x</td>
</tr>
<tr>
<td>14_v1_07</td>
<td>separate</td>
<td>deberta-v3-base</td>
<td>10</td>
<td>0.4512</td>
<td>0.4374x</td>
<td>0.4370x</td>
</tr>
<tr>
<td>14_v1_10</td>
<td>separate</td>
<td>deberta-v3-large</td>
<td>5</td>
<td>0.4518</td>
<td>0.4411x</td>
<td>0.4370x</td>
</tr>
<tr>
<td>14_v1_12</td>
<td>separate</td>
<td>deberta-large</td>
<td>5</td>
<td>0.4534</td>
<td>0.4409x</td>
<td>0.4388x</td>
</tr>
<tr>
<td>29_v1_02</td>
<td>single</td>
<td>deberta-v3-base</td>
<td>5</td>
<td>0.4557</td>
<td>0.4424x</td>
<td>0.4416x</td>
</tr>
<tr>
<td>29_v1_04</td>
<td>single</td>
<td>deberta-v3-large</td>
<td>5</td>
<td>0.4531</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>29_v1_11</td>
<td>single</td>
<td>deberta-v3-large-squad2</td>
<td>5</td>
<td>0.4526</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>29_v1_14</td>
<td>single</td>
<td>deberta-v2-xlarge</td>
<td>5</td>
<td>0.4569</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>29_v1_15</td>
<td>single</td>
<td>deberta-xlarge</td>
<td>5</td>
<td>0.4552</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1565399%2F0c04d6e8103e6af95ebf083575b09f3a%2Ffb3_6th_place_solution.png?generation=1669817196747414&alt=media" alt=""></p>
<p>training code: <a href="https://github.com/tikutikutiku/kaggle-feedback-prize-english-language-learning" target="_blank">https://github.com/tikutikutiku/kaggle-feedback-prize-english-language-learning</a><br>
inference code: <a href="https://www.kaggle.com/code/tikutiku/feedback3-inference2" target="_blank">https://www.kaggle.com/code/tikutiku/feedback3-inference2</a></p>
<h2>What Worked</h2>
<p>[Worked] separate model </p>
<ul>
<li>What you did: created models for each class</li>
<li>Result: CV+0.003x, Public LB+0.005x, Private LB+0.003x</li>
</ul>
<h2>What Didn’t Work</h2>
<p>[Didn’t Work] awp <br>
[Didn’t Work] pre-training with fb1 data <br>
[Didn’t Work] detector pre-training with fb1 data<br>
[Didn’t Work] pseudo-label with fb1 data : models trained with pseudo-labels did not contribute to ensemble CV <br>
[Didn’t Work] random masking augmentation <br>
[Didn’t Work] lgb/catboost stacking</p>
<h2>Additional Context:</h2>
<p>Since the separate model was too slow for submission, I gave up the CPU track.</p>
<h2>Thanks and Acknowledgements:</h2>
<p>Thanks to kaggle and the organizers for having this 3rd feedback competition! I enjoyed all of the three feedback competitions and had a great time with the team and solo play :) </p>
<h2>Team Members:</h2>
<p><a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a> </p> |
Feedback Prize - English Language Learning | 7th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners, and thanks to hosts for interesting competition.<br>
I still can't believe I shook up into the gold zone.</p>
<h1>overview</h1>
<p>My final submission is an ensemble of 16 models. The weights of the models were determined using Nelder-Mead.</p>
<h1>model</h1>
<p>The default parameters are as follows:</p>
<ul>
<li>max_length = 512</li>
<li>awp <ul>
<li>start_epoch = 3</li>
<li>adv_lr = 0.0001</li>
<li>adv_eps = 0.01</li></ul></li>
<li>bce</li>
<li>lr = 2e-5</li>
<li>lwld<ul>
<li>base model: lr_decay = 0.9 </li>
<li>large model: lr_decay = 0.95 </li></ul></li>
<li>freeze layers <ul>
<li>large model: 1/2</li>
<li>xlarge model: 3/4</li></ul></li>
<li>fp16</li>
<li>5-fold</li>
</ul>
<table>
<thead>
<tr>
<th>model</th>
<th>changed</th>
<th>cv</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-base</td>
<td>max_len=1024</td>
<td>0.4517</td>
</tr>
<tr>
<td>deberta-v3-base</td>
<td>no awp</td>
<td>0.4572</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td></td>
<td>0.4525</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>no freeze</td>
<td>0.4549</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>max_len=1024</td>
<td>0.4566</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>l1_loss</td>
<td>0.4576</td>
</tr>
<tr>
<td>deberta-v2-xlarge</td>
<td>lr=5e-6</td>
<td>0.4602</td>
</tr>
<tr>
<td>deberta-xlarge</td>
<td></td>
<td>0.4514</td>
</tr>
<tr>
<td>deberta-large</td>
<td></td>
<td>0.4502</td>
</tr>
<tr>
<td>deberta-large</td>
<td>no awp</td>
<td>0.4555</td>
</tr>
<tr>
<td>deberta-large</td>
<td>l1_loss</td>
<td>0.4557</td>
</tr>
<tr>
<td>deberta-base</td>
<td></td>
<td>0.4567</td>
</tr>
<tr>
<td>roberta-large</td>
<td></td>
<td>0.459</td>
</tr>
<tr>
<td>muppet-roberta-large</td>
<td>no awp</td>
<td>0.4635</td>
</tr>
<tr>
<td>distilbart-mnli-12-9</td>
<td></td>
<td>0.4626</td>
</tr>
<tr>
<td>bart-large-finetuned-squadv1</td>
<td></td>
<td>0.4635</td>
</tr>
</tbody>
</table>
<h1>What worked</h1>
<ul>
<li><p>[High Impact] awp</p>
<ul>
<li>cv+0.004 ~ 0.005</li>
<li>AWP didn't work on the public LB, so I added a no-AWP model to the ensemble</li></ul></li>
<li><p>[Middle Impact] lwld</p>
<ul>
<li>cv+0.002~0.003</li></ul></li>
<li><p>[Middle Impact] nelder-mead method</p></li>
<li><p>[Low Impact] bce</p>
<ul>
<li>converged faster than l1_loss</li></ul></li>
<li><p>[Low Impact] freeze layers</p>
<ul>
<li>stable and faster training</li></ul></li>
</ul>
<h1>What Didn’t Work</h1>
<ul>
<li>fgm</li>
<li>Last Layer Re-initialization</li>
<li>lgb stacking</li>
</ul>
<h1>Important Citation</h1>
<ul>
<li><p>feedback3</p>
<ul>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">FB3 / Deberta-v3-base baseline [train]</a></li></ul></li>
<li><p>feedback1</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313177" target="_blank">feedback1 1st solution</a></li></ul></li>
<li><p>commonlit</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">commonlit 2nd place solution</a></li>
<li><a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/260729" target="_blank">The Magic of No Dropout</a></li></ul></li>
</ul>
<h1>inference code</h1>
<p><a href="https://www.kaggle.com/code/tanakar/fb3-ensemble-per-col/notebook" target="_blank">https://www.kaggle.com/code/tanakar/fb3-ensemble-per-col/notebook</a></p>
<h1>Thanks and Acknowledgements:</h1>
<p>Thanks to our hosts for hosting the competition. And thanks to the Kaggler's for sharing their helpful notebooks and discussions.</p>
<h1>Team Members:</h1>
<ul>
<li><a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a></li>
</ul> |
Feedback Prize - English Language Learning | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>Our team ensembled 26 models in the following table. (including knowledge distilled and pseudo labeled ones) Ridge regression was used to integrate the prediction results of each model. Post-processing was applied to adjust the output values of the final ensemble's predictions. For the single model, the following techniques contributed to score improvement.</p>
<ul>
<li>Knowledge distillation and Pseudo labeling</li>
<li>Preprocessing - replacing “\n\n” with [PARAGRAPH]</li>
<li>AWP</li>
<li>Change the max len for each epoch</li>
</ul>
<table>
<thead>
<tr>
<th> </th>
<th>Model</th>
<th>CV / LB</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Deberta v3 base</td>
<td>0.448</td>
<td>512 Max length , Pseudo labels inferred from the tensorflow Deberta v3 base (LLRD technique) , and those plus competition data trained on the deberta v3 base (freezing+meanpooling)</td>
</tr>
<tr>
<td>2</td>
<td>Deberta v3 large</td>
<td>0.448</td>
<td>Same as above , deberta v3 large trained on meanpooling + freezing</td>
</tr>
<tr>
<td>3</td>
<td>Deberta v3 small</td>
<td>0.447</td>
<td>Knowledge distilled model</td>
</tr>
<tr>
<td>4</td>
<td>Deberta v2 xlarge</td>
<td>0.448 / 0.44</td>
<td>LSTM + mean pooling +freezing (Same pseudo label from tensorflow)</td>
</tr>
<tr>
<td>5</td>
<td>Deberta v3 large</td>
<td>0.4495 / 0.43</td>
<td>Max len = 470, trained with decreasing max len - 768/512/470/470</td>
</tr>
<tr>
<td>6</td>
<td>Deberta v3 base</td>
<td>0.4515 / 0.43</td>
<td>Max len =470, Mean pooling, AWP, val_steps = 250, reinit layers</td>
</tr>
<tr>
<td>7</td>
<td>Deberta xlarge</td>
<td>0.4534 / 0.43</td>
<td>CLS token, val steps =20</td>
</tr>
<tr>
<td>8</td>
<td>deberta v3 large</td>
<td>0.4495 / 0.43</td>
<td>CLS Token, AWP, val steps = 250, max len 768</td>
</tr>
<tr>
<td>9</td>
<td>Electra large</td>
<td>0.4545</td>
<td>Meanpool, LSTM</td>
</tr>
<tr>
<td>10</td>
<td>deepset/deberta-v3-bbase-squad2</td>
<td>0.4522</td>
<td></td>
</tr>
<tr>
<td>11</td>
<td>Luke Large</td>
<td>0.4551</td>
<td>Max len 512, Mean pooling, bidirectional-LSTM</td>
</tr>
<tr>
<td>12</td>
<td>Deberta v3 large</td>
<td>0.4467</td>
<td></td>
</tr>
<tr>
<td>13</td>
<td>Deberta v3 base</td>
<td>0.4471</td>
<td>AutoModelForTokenClassification, last 4 layer concatenate, mean pooling,max_len=768</td>
</tr>
<tr>
<td>14</td>
<td>Deberta v3 large</td>
<td>0.4469</td>
<td>PL Model, 2 seed averaged, CLS Token</td>
</tr>
<tr>
<td>16</td>
<td>Cocolm large</td>
<td>0.4568</td>
<td>No [PARAGRAPH], CLS Token, AWP</td>
</tr>
<tr>
<td>17</td>
<td>GPT2 Medium</td>
<td>0.4648</td>
<td>max_len=1024, Mean pooling, SWA</td>
</tr>
<tr>
<td>18</td>
<td>Longformer Large</td>
<td>0.4592</td>
<td>max_len=1536, Mean pooling</td>
</tr>
<tr>
<td>19</td>
<td>Bart Large Squad</td>
<td>0.4649</td>
<td>max_len=1024, Mean pooling</td>
</tr>
<tr>
<td>20</td>
<td>Distill Bart CNN</td>
<td>0.4607</td>
<td>max_len=1024, Mean pooling</td>
</tr>
<tr>
<td>21</td>
<td>OPT-350M</td>
<td>0.4639</td>
<td>max_len=1536, Mean pooling</td>
</tr>
<tr>
<td>22</td>
<td>Bart Base</td>
<td>0.4788</td>
<td>max_len=1024, Mean pooling, SWA</td>
</tr>
<tr>
<td>23</td>
<td>T5 Large</td>
<td>0.4717</td>
<td>max_len=1024, Mean pooling</td>
</tr>
<tr>
<td>24</td>
<td>Deberta v3 base</td>
<td>0.4490</td>
<td>max_len=512</td>
</tr>
<tr>
<td>25</td>
<td>Deberta v3 base</td>
<td>0.4484</td>
<td>max_len =384</td>
</tr>
<tr>
<td>26</td>
<td>Deberta v3 large</td>
<td>0.4498</td>
<td>max_len=1429, Mean pooling, Pseudo Labeling</td>
</tr>
</tbody>
</table>
<h2>What worked?</h2>
<ul>
<li><p><strong>Preprocessing:</strong> replacing “\n\n” with [PARAGRAPH]. There are about 17.5K instances of \n\n in the dataset, and the Deberta v3 large tokenizer simply ignores them. So we decided to create one additional special token by replacing \n\n with [PARAGRAPH], and resized the embeddings to take care of this one extra token. Just this change moved deberta v3 large from 0.44 to 0.43.</p></li>
<li><p><strong>Knowledge Distillation:</strong> We had one deberta v3 xsmall model trained with knowledge distillation. <br>
We distilled knowledge by taking the OOF predictions of an ensemble of 9 models and using them to train the model. This uses only the current competition data, not previous Feedback data.</p></li>
<li><p><strong>Pseudo labels (PL)</strong> <br>
It was challenging to determine whether pseudo labels were working or not. We tried generating labels with a few different models to check this. Eventually we created two-seed deberta v3 large models, which seemed more stable than judging from a single deberta v3 large. Some of the models in our ensemble were trained with PL. <br><br>
We also did pseudo labeling with labels inferred from the tensorflow Deberta v3 base model, which was trained with the layer-wise learning rate decay (LLRD) technique, and used those PL data in some of our models, which helped to improve the CV as well as the LB.</p></li>
<li><p><strong>Model Architecture:</strong><br>
Just CLS token, Mean pooling, Mean pooling + CLS token, MaskAddedAttentionHead, MaskAddedAttentionHead + CLS token(AutoModel and AutoModelForTokenClassification) , LSTM</p></li>
<li><p><strong>Training strategies:</strong><br>
AWP worked, different max len per epoch, validation steps 20-250 were utilized for different models. Some of the models were trained using end of epoch validation as well. <br>
CV was calculated with 5 folds models.</p></li>
<li><p><strong>Diversity in ensemble:</strong> <br>
Folds - By Abhishek - <a href="https://www.kaggle.com/code/abhishek/multi-label-stratified-folds" target="_blank">https://www.kaggle.com/code/abhishek/multi-label-stratified-folds</a> <br><br>
We had an ensemble of 27 models in total (runtime ~7 hrs) - 7 Deberta v3 large models, 5 Deberta v3 base, Deberta v2 xlarge, Deberta xlarge, Deberta v3 small, Luke Large, Electra, Cocolm, Deberta v3 base squad, GPT2 medium, Longformer Large, Bart Large Squad, Distill Bart CNN, OPT 350M, Bart base, T5 large</p></li>
<li><p><strong>Ridge Stacking</strong><br>
We used Ridge for stacking our models. There were 2 deberta large models (two seeds) which we combined so that they generated only 1 ridge coefficient, and this helped the ensemble. </p></li>
<li><p><strong>Post processing</strong><br>
We used “Nelder-Mead” <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html" target="_blank">https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html</a> for post processing - we divided the predictions into 20 bins, learned a multiplier per bin, and multiplied the predictions by the resulting array (a rough sketch follows this list). <br>
This post processing is based on commonlit's 2nd place solution method.<br>
<a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328</a></p></li>
</ul>
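<p>A rough sketch of this bin-wise post processing on a single target column is shown below; the quantile binning and the per-bin multipliers follow the description above, while details such as per-target handling are assumptions.</p>
<pre><code># Sketch: learn one multiplier per prediction bin with Nelder-Mead on OOF data,
# then apply the same multipliers to test predictions (per target column).
import numpy as np
from scipy.optimize import minimize


def fit_bin_multipliers(oof_pred, y_true, n_bins=20):
    edges = np.quantile(oof_pred, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, oof_pred, side="right") - 1, 0, n_bins - 1)

    def rmse_after_scaling(mult):
        return np.sqrt(np.mean((oof_pred * mult[idx] - y_true) ** 2))

    res = minimize(rmse_after_scaling, np.ones(n_bins), method="Nelder-Mead")
    return edges, res.x


def apply_bin_multipliers(pred, edges, mult):
    idx = np.clip(np.searchsorted(edges, pred, side="right") - 1, 0, len(mult) - 1)
    return pred * mult[idx]
</code></pre>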
<p>Our final ensemble is here - <a href="https://www.kaggle.com/code/aman1391/fb3-8th-place-solution-ensemble-27-models?scriptVersionId=112097448" target="_blank">https://www.kaggle.com/code/aman1391/fb3-8th-place-solution-ensemble-27-models?scriptVersionId=112097448</a></p>
<h2>Ideas that didn’t work</h2>
<ul>
<li><p><strong>Mask Language modeling:</strong> - Tried many different ways for MLM to work, unsuccessful attempts. Tried to match similarity between FB3 data and previous feedback data for MLM which also didn’t work. </p></li>
<li><p><strong>Other models:</strong> Various other models from HuggingFace didn’t work. A few did work; however, they didn't help our ensemble, so they went unused.</p></li>
<li><p><strong>Different loss functions:</strong> - Focal Loss, Ordinal regression from public kernel, Asymmetric loss - Asymmetric loss did work with a few ensembles but it wasn’t part of our best submission. </p></li>
<li><p><strong>Ensemble with weights:</strong> - In the last 2 days, we tried ensemble with weight tuning but Ridge did give us better results. This needed more tuning or work for us to feel confident in selecting a submission. </p></li>
<li><p>Many different model architectures, hyperparameters, token dropout, 4 fold model, dividing folds by number of tokens. </p></li>
</ul>
<h2>Other</h2>
<ul>
<li><p><strong>Compute:</strong> A100s, 4090 (3090 and Colab Pro+ prior to 4090) </p></li>
<li><p>Please see this post for a single model (average 2 seeds) gold solution - <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369368" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369368</a> and corresponding <a href="https://github.com/rashmibanthia/feedback3" target="_blank">training code</a> </p></li>
</ul>
<h2>Important Citations</h2>
<ul>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a><br>
<a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>. Almost all of our scripts are based on this </li>
<li><a href="https://www.kaggle.com/code/vslaykovsky/lb-0-43-ensemble-of-top-solutions" target="_blank">https://www.kaggle.com/code/vslaykovsky/lb-0-43-ensemble-of-top-solutions</a><br>
<a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a> This notebook made our ensemble easier </li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/366525#2038773" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/366525#2038773</a><br>
Kelvin <a href="https://www.kaggle.com/xyzdivergence" target="_blank">@xyzdivergence</a> - for sharing ideas that worked - different max len in ensemble</li>
<li><a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328</a><br>
Post processing is based on commonlit's 2nd place solution method</li>
<li><a href="https://www.kaggle.com/code/electro/deberta-layerwiselr-lastlayerreinit-tensorflow" target="_blank">https://www.kaggle.com/code/electro/deberta-layerwiselr-lastlayerreinit-tensorflow</a><br>
<a href="https://www.kaggle.com/electro" target="_blank">@electro</a> We retrained above model on our folds and created pseudo labels</li>
</ul>
<h2>Thanks and Acknowledgements</h2>
<p>Thanks to our hosts for hosting the competition and to teammates. Thanks also to the Kaggler's for sharing their helpful notebooks and discussions.</p>
<p>Thank you and Congratulations to my team mates - <a href="https://www.kaggle.com/aman1391" target="_blank">@aman1391</a>, <a href="https://www.kaggle.com/kanbehmw" target="_blank">@kanbehmw</a>, <a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a></p>
<p><strong>Team Members</strong><br>
<a href="https://www.kaggle.com/rashmibanthia" target="_blank">@rashmibanthia</a><br>
<a href="https://www.kaggle.com/aman1391" target="_blank">@aman1391</a><br>
<a href="https://www.kaggle.com/kanbehmw" target="_blank">@kanbehmw</a><br>
<a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a></p>
<p>Code for adding special token: </p>
<pre><code>CFG.tokenizer = AutoTokenizer.from_pretrained(model)
# register the extra [PARAGRAPH] special token described above
CFG.tokenizer.add_special_tokens({'additional_special_tokens': ['[PARAGRAPH]']})
</code></pre>
<p>Then, within model class, you need <code>self.model.resize_token_embeddings(len(CFG.tokenizer))</code> (see below code) </p>
<pre><code>class CustomModel(nn.Module):
    def __init__(self, model_name):
        super().__init__()
        self.model_config = AutoConfig.from_pretrained(model_name)
        # NOTE: the exact config override keys were not recoverable from the post;
        # these are typical choices (dropout off, hidden states on) plus the label count
        self.model_config.update(
            {
                "hidden_dropout_prob": 0.0,
                "attention_probs_dropout_prob": 0.0,
                "output_hidden_states": True,
                "num_labels": CFG.num_labels,
            }
        )
        self.model = AutoModel.from_pretrained(model_name, config=self.model_config)
        self.model.resize_token_embeddings(len(CFG.tokenizer))
        self.output = nn.Linear(self.model_config.hidden_size, CFG.num_labels)

    def forward(self, ids, mask):
        transformer_out = self.model(input_ids=ids, attention_mask=mask)
        sequence_output = transformer_out.last_hidden_state[:, 0, :]   # CLS token
        logits = self.output(sequence_output)
        return logits
</code></pre>
<p>Likewise, It was pleasure working with you and Congratulations to you as well 🎉🎉</p> |
Feedback Prize - English Language Learning | 999 place solution🤣(Public 185) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the participants, and thanks a lot to the organizers of this competition!</p>
<p>I got a very large shake-down, but would like to share my methods in this thread!</p>
<h2>Main approach</h2>
<p>Based on <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/349336" target="_blank">nakama's starter code</a>, I tried various ideas discussed in Discussion.</p>
<p>My idea, which I think was unique, was to round the target. The target values take 9 steps from 1.0 to 5.0, but the original rubric has 5 steps from 1 to 5. I saw <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/348973#2013264" target="_blank">this thread</a> and learned that the competition targets were made by averaging two evaluators' scores. So I made two models, one rounding 0.5 up and one rounding 0.5 down, and took the average of them.</p>
<p><img src="https://raw.githubusercontent.com/trtd56/FeedbackPrize3/main/img/%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%BC%E3%83%B3%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88%202022-11-29%2020.18.51.png" alt=""></p>
<h3>ensemble</h3>
<p>My final submission is a weighted-average ensemble with weights determined from the OOF score.</p>
<p>In addition to several DeBERTa-v3-base models with different learning parameters, I used the following models:</p>
<ul>
<li>Longformer-base</li>
<li>BigBird-RoBERTa-base</li>
<li>BART-large</li>
<li>FunnelTransformer-small</li>
<li>simple embedding and SVR model (<a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/351577" target="_blank">Chris's approach</a>)</li>
</ul>
<h2>Other things that worked well</h2>
<ul>
<li>AWP</li>
<li>FGM</li>
<li>Layer-wise Learning Rate</li>
<li>Weights Initialization</li>
<li><a href="https://www.kaggle.com/code/rhtsingh/on-stability-of-few-sample-transformer-fine-tuning/notebook" target="_blank">Pre-trained Weight Decay</a></li>
<li>Training with a short text (max_len=768)</li>
</ul>
<h2>Not work for me (so many…)</h2>
<ul>
<li>pseudo labeling<ul>
<li>unlabeled data worked only with meta pseudo labels.</li></ul></li>
<li>SiFT</li>
<li>custom loss considering target distribution</li>
<li>solve as a classification problem</li>
<li>large model</li>
<li>Custom Head<ul>
<li>LSTM</li>
<li>CNN</li>
<li>Attention</li></ul></li>
<li>Weighted Layer Pooling</li>
<li>MLM</li>
<li>SWA</li>
<li>consider line breaks</li>
<li>consider masking characters (<code>Generic_Name</code>, <code>Generic_City</code> …)</li>
<li>stacking <a href="https://pypi.org/project/readability/" target="_blank">readability</a> score</li>
</ul>
<h2>CV vs LB</h2>
<p>My CV and public LB rank were well correlated, but I got a large shake-down (185→999).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1483555%2Fa8f073eb31adbdc39b81795376ece9ea%2F2022-11-30%2019.52.13.png?generation=1669805554267453&alt=media" alt=""></p>
<p>I didn't use pseudo labels, so there should be no leak, but I still don't understand this part, so I will try to learn more from others' solutions.</p>
<p>Finally, in this competition, I only used Kaggle notebooks and a free Colab plan to train. <br>
Thank you for the nice platform that creates an opportunity for people like me who can't afford to spend money on a computing machine😊</p>
Feedback Prize - English Language Learning | 9th Place Efficiency Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Overview</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1940189%2F3f67af9eb22b46743bd978e1e34ebf34%2FScreenshot%202022-12-02%20at%207.54.24%20PM.png?generation=1669993007713917&alt=media" alt=""></p>
<p>For the efficiency track, I trained a model using pre-trained weights from <code>microsoft/deberta-v3-small</code>. <br>
This model uses combined features generated from <strong>mean_pooling + attention_pooling (using the last 3 layers) + last_layer_cls_token</strong></p>
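<p>A sketch of such a combined pooling head is shown below (masked mean pooling, attention pooling over the averaged last 3 hidden layers, and the last layer's CLS token, concatenated before the regressor); the hidden sizes and the way the last 3 layers are combined are assumptions of this sketch, not the exact head used.</p>
<pre><code># Sketch of the combined pooling head: mean pooling + attention pooling (last 3 layers) + CLS token.
import torch
import torch.nn as nn


class CombinedPoolingHead(nn.Module):
    def __init__(self, hidden_size, n_targets=6, n_last_layers=3):
        super().__init__()
        self.n_last_layers = n_last_layers
        self.attention = nn.Sequential(nn.Linear(hidden_size, 256), nn.Tanh(), nn.Linear(256, 1))
        self.fc = nn.Linear(hidden_size * 3, n_targets)        # mean + attention + cls

    def forward(self, all_hidden_states, attention_mask):
        last = all_hidden_states[-1]                            # (B, L, H)
        mask = attention_mask.unsqueeze(-1).float()

        mean_pool = (last * mask).sum(1) / mask.sum(1)          # masked mean pooling

        stacked = torch.stack(all_hidden_states[-self.n_last_layers:]).mean(0)
        scores = self.attention(stacked).masked_fill(mask == 0, -1e4)
        attn_pool = (stacked * torch.softmax(scores, dim=1)).sum(1)   # attention pooling

        cls_token = last[:, 0]                                  # last layer's [CLS]
        return self.fc(torch.cat([mean_pool, attn_pool, cls_token], dim=1))
</code></pre>
<p>The backbone needs <code>output_hidden_states=True</code> so that all hidden layers are available to the head.</p>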
<p>The model was trained on the full dataset plus pseudo labels generated from the FB1 dataset. I also created a new dataset by applying data augmentation techniques such as back-translation, word-embedding augmentation (GloVe) and synonym augmentation (WordNet) to the current competition training dataset.</p>
<p>The final model weights were obtained by averaging the last 5 checkpoints (applying the idea behind SWA).</p>
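<p>Averaging the last few checkpoints can be done with a small helper like the one below, assuming each checkpoint file stores a plain <code>state_dict</code>:</p>
<pre><code># Sketch: SWA-style averaging of the last N checkpoint state_dicts.
import torch


def average_checkpoints(paths):
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

# model.load_state_dict(average_checkpoints(last_five_checkpoint_paths))
</code></pre>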
<p>Before deciding to apply SWA on the full training dataset, I trained the same model on 4 folds generated using <code>MultilabelStratifiedKFold</code> and calculated the OOF score: 0.4504400497619689 </p>
<p>This exercise was helpful for keeping track of the model architecture's performance; the architecture was selected based on the best OOF score.</p>
<h1>Pseudo Labels:</h1>
<p>For pseudo label generation I used an ensemble of four models, trained with the following configuration:</p>
<ul>
<li>seed = 42</li>
<li>epochs = 4</li>
<li>batch_size = 8</li>
<li>weight_decay = 0.01</li>
<li>warmup_ratio = 0.01</li>
<li>n_folds = 5</li>
<li>learning_rate = 2e-5</li>
<li>max_len = 512</li>
<li>max_grad_norm = 1</li>
<li>gradient_accumulation_steps = 1</li>
<li>layerwise_learning_rate_decay = 0.8</li>
<li>classifier_lr = 1e-4</li>
<li>reinit_n_layers = 1</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1940189%2Fb7f354c471e5099251592bc1c560aede%2FScreenshot%202022-12-02%20at%208.13.54%20PM.png?generation=1669993096314328&alt=media" alt=""></p>
<p>I called this ensemble <strong>'ens1_magic'</strong>; it gives <strong>oof_score = 0.44848052478266</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1940189%2Fadfd7493af24e16e7faa86eb552ab8b6%2FScreenshot%202022-12-02%20at%208.21.04%20PM.png?generation=1669993069970852&alt=media" alt=""></p>
<h1>Speedy Inference:</h1>
<p>Runtime is another crucial thing to improve, along with the MCRMSE score. </p>
<p><strong>First, I exported the trained model to ONNX.</strong></p>
<pre><code> def create_onnx_model(input_model, onnx_model_name):
export_model_path = os.path.join(output_dir, onnx_model_name)
# Enable overwrite to export onnx model and download latest script each time when running this notebook.
enable_overwrite = True
# Set model to inference mode, which is required before exporting the model because some operators behave differently in
# inference and training mode.
input_model.eval()
input_model.to(DEVICE)
if enable_overwrite or not os.path.exists(export_model_path):
with torch.no_grad():
symbolic_names = {0: "batch_size", 1: "seq_length"}
torch.onnx.export(input_model, # model being run
args=tuple(inputs.values()), # model input (or a tuple for multiple inputs)
f=export_model_path, # where to save the model (can be a file or file-like object)
opset_version=13, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input_ids', # the model's input names
'attention_mask',
],
output_names=["output"], # the model's output names
dynamic_axes={"input_ids": symbolic_names, # variable length axes
"attention_mask" : symbolic_names,
}
)
print("Model exported at ", export_model_path)
cust_config = AutoConfig.from_pretrained(DOWNLOADED_MODEL_PATH)
cust_model = Deberta_FBP_MPAPCLS(DOWNLOADED_MODEL_PATH, cust_config)
checkpoint = torch.load(current_model['model_base_path'], map_location=torch.device('cpu'))
cust_model.load_state_dict(checkpoint)
create_onnx_model(cust_model, 'model.onnx')
</code></pre>
<p><strong>Second, pre-tokenized all text data using multi-processing before making predictions.</strong></p>
<pre><code>def encode_deberta(text):
sample = dict()
encodings = TOKENIZER_CFG.tokenizer(
text,
# return_tensors="pt",
# padding="max_length",
truncation=True,
max_length=TOKENIZER_CFG.max_len,
)
sample["input_ids"] = np.array(encodings["input_ids"]).astype(np.int32).reshape(1,-1)
sample["attention_mask"] = np.array(encodings["attention_mask"]).astype(np.int32).reshape(1,-1)
return sample
test = pd.read_csv('/kaggle/input/feedback-prize-english-language-learning/test.csv')
pool_obj = mp.Pool(N_CORES)
encoded_texts = pool_obj.map(encode_deberta, test.full_text)
encoded_dataset = FBP_Dataset_Inference_Fast(encoded_texts)
</code></pre>
<p><strong>Finally, for prediction using multi-processing</strong></p>
<pre><code>model_output = []
sess_options = onnxruntime.SessionOptions()
session = onnxruntime.InferenceSession(export_model_path_dv3xs_f3 , sess_options, providers=['CPUExecutionProvider'])
def speedy_inference(text_idx):
data = encoded_dataset[text_idx]
ort_inputs = {
'input_ids': data['input_ids'],
'attention_mask': data['attention_mask']
}
ort_outputs = session.run(None, ort_inputs)
return ort_outputs[0][0].reshape(-1)
# start = time.time()
test_idx = list(range(len(test)))
# test_idx
pool_obj = mp.Pool(N_CORES)
model_output = pool_obj.map(speedy_inference, test_idx)
test_preds = np.array(model_output)
</code></pre>
<p>All of this work complemented each other for faster inference. <strong>My model has a runtime of 951 seconds.</strong></p>
<h1>What Worked</h1>
<ul>
<li>Pseudo Labelling</li>
<li>Data Augmentation</li>
<li>Training the model on complete dataset</li>
<li>SWA ( average of last 5 checkpoints )</li>
</ul>
<h1>What Didn’t Work</h1>
<p>I don't want to say these did not work; it is more accurate to say that I was not able to make them work.</p>
<ul>
<li>AWP</li>
<li>FGM</li>
<li>Quantization</li>
</ul>
<h1>Important Citations</h1>
<ul>
<li>Data Augmentation: <a href="https://github.com/makcedward/nlpaug/blob/master/example/textual_augmenter.ipynb" target="_blank">https://github.com/makcedward/nlpaug/blob/master/example/textual_augmenter.ipynb</a></li>
<li><a href="https://wandb.ai/darek/fbck/reports/How-To-Build-an-Efficient-NLP-Model--VmlldzoyNTE5MDEx?accessToken=pmm41mpdkxif0lsbm927tfxj947to4gbd0nrgjjq9rdoq1c4jr3kruf993ys5kpg" target="_blank">How To Build an Efficient NLP Model (Article)</a></li>
</ul>
<h1>Thanks and Acknowledgements:</h1>
<p>I am grateful to the kaggle and competition organizers for hosting this amazing competition. </p>
<p>This is my first efficiency-track competition and I have learnt a lot of things during it. So THANK YOU for this new track; I am looking forward to more NLP competitions that include an efficiency track. </p>
<p>I also want to shout out to all of the kagglers who brought up amazing, valuable discussions which really helped me learn new things. </p>
Feedback Prize - English Language Learning | A 0.45 tree-based solution and a few thoughts | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1><a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/overview" target="_blank">Feedback Prize - English Language Learning:</a></h1>
<h1>My first Kaggle competition</h1>
<p>I'm new to Kaggle and wanted to take this challenge as an opportunity to:</p>
<ul>
<li>learn how to compete in Kaggle: How to set things up, manage the constraint of number of submissions etc.. You might not realize it if you have been competing for a while, but there is a learning curve.</li>
<li>start using Pytorch (until now, I have been using Tensorflow for ANNs)</li>
<li>learn more about NLP</li>
</ul>
<p>My score during the competition was really bad: I didn't have time to do much more than setting things up and create a dummy model. But I kept working on it after it ended and obtained a decent score of <strong>0.45</strong> with a relatively simple solution that doesn't involve fine tuning a Large Language Model, although it relies on using one. I think this is not bad considering the best score was <strong>0.43</strong> and a dummy model gets a score of <strong>0.63</strong>.</p>
<p>Along the way, I've built and come across a few things that are hopefully worth your time. The main ones are:</p>
<ul>
<li><strong>The way I set up the code</strong>: The code is executed from this <a href="https://www.kaggle.com/code/josarago/best-tree-based-model-score-but-there-s-a-catch" target="_blank">notebook</a> which imports code from a <a href="https://www.kaggle.com/datasets/josarago/kaggle--feedback-prize-ell" target="_blank">Kaggle Dataset</a> and is synced with a <a href="https://github.com/josarago/kaggle--feedback-prize-ell" target="_blank">Github repo</a> through <a href="https://github.com/josarago/kaggle--feedback-prize-ell/blob/main/.github/workflows/main.yml" target="_blank">Github Actions</a>. Meaning you can work in your favorite IDE, keep things neatly organized the way you like, push code to Github and finally run the code in Kaggle to generate submissions. I would imagine I am not the only one who has a hard time with 10,000-line notebooks. Just the scrolling itself becomes a distraction.</li>
<li><strong>The pattern I used to wrap pre-trained models into sklearn transformers</strong>: I looked around and couldn't find anything (please point me in the direction of any similar implementation). This is hopefully a reusable <a href="https://github.com/josarago/kaggle--feedback-prize-ell/blob/fe7a51ce11021d2e773284addc16a08426228494/sklearn_transformers.py" target="_blank">pattern</a> and allowed me to leverage two pre-trained models (<a href="https://pypi.org/project/fasttext-langdetect/" target="_blank">fasttext-langdetect</a> and <a href="https://www.kaggle.com/datasets/kojimar/fb3models" target="_blank">DeBERTa</a>) in combination with other features all while using sklearn pipelines. This also makes it very easy to try different models using the same features. <strong>However this is NOT useful if your goal is to fine-tune a Large Language Model using Pytorch.</strong></li>
<li><strong>Some thoughts I share about the competition format</strong>: again I am new to Kaggle so if my thoughts are naive or misguided, I would love to hear different point of views.</li>
</ul>
<h2>Summary of the approach</h2>
<p>I'll describe briefly the steps I took to get to 0.45. Again, this is far from being the best solution but it gets us <em>90% of the way</em> and is relatively low effort.</p>
<h3>EDA</h3>
<p>Looking at the available data, I made a few observations:</p>
<ul>
<li>The target variables, these 6 English scores, are very highly correlated. The Pearson correlation coefficients between target variables are typically between ~0.64 and 0.74. Another way to look at it is by performing a PCA on the target variable. The first component accounts for 73.9% of the variance, while the other components are relatively evenly distributed, ranging from 3.7% to 6.6%. So it is likely that a few relevant features would do most of the work.</li>
<li>Looking at the lowest and highest scores is useful to find feature ideas:<ul>
<li>Many students with low scores used SMS-like English and many contractions</li>
<li>High-scoring essays tend to be longer, and have more structure (paragraphs)</li>
<li>Some students used other languages than English. They might have recently immigrated and just started learning English, which is really hard, but since these essays are rated for English quality, they usually get lower scores.</li>
<li>Some students systematically put a whitespace <strong>before</strong> periods not after, for instance "My sentence just ended .Then it started again ."</li></ul></li>
</ul>
<h3>Setting things up</h3>
<p>After finding a few simple feature ideas, I started writing a notebook and created a skeleton that can accommodate an increasing number of features and different models, and that manages loading the data, a training script, the evaluation metrics and the function writing the submission file. <br>
I started by creating a simple class that made predictions from a <strong>Dummy regressor</strong> (always predicting the mean scores), which would later be swapped for a hopefully more useful model. The dummy regressor, which does not use the text at all, obtained a score of <strong>0.632</strong>. It's always good to start with something as simple as possible: in fact, if you look at some of the scores in the competition, a number of submissions are worse than this dummy model, which suggests there is something wrong with the code.</p>
<h3>Baseline model</h3>
<p>I then created a few features:</p>
<ul>
<li><strong>total number of unigrams used</strong>: we said earlier that longer essays tend to have better ratings. </li>
<li><strong>number of line breaks</strong>: this is arguably a bit funny but an attempt to capture the fact that better essays have a better structure.</li>
</ul>
<p>As you might expect, that would not get us very far; these are actually not great first choices, but I was just curious what would happen. I'm not planning on doing a systematic breakdown of feature importances, and at work I'm trying to deliver value ASAP, so I just wanted to see what these vaguely relevant features would give me. We should still see some progress. Using a Linear Regression model with these simple features gets us to <strong>0.616</strong>. This is not impressive by any means, but we are moving in the right direction.</p>
<h3>More features and a tree-based model</h3>
<p>I then kept going adding features that were easy to implement although unlikely to cause a huge improvement in the score. I just wanted to see how far I could go using simple but highly interpretable features:</p>
<ul>
<li><strong>TF-IDF</strong>: it can help the model understand what vocabulary the students are using. At first I used a dictionary listing common English contractions (I'll, we'd've, it'd etc..) but then realized I might as well use the full vocabulary instead, as it should include the relevant contractions.</li>
<li><strong>punctuation errors</strong>: missing trailing whitespaces, extra leading whitespaces, missing brackets or quotes etc…</li>
<li><strong>English score</strong> using <code>fasttext-langdetect</code>. Like I mentioned earlier, some students used non-English languages, which tends to get them a bad rating.</li>
</ul>
<p>At this point, I started using a tree-based model. I used LightGBM because, in my experience, it is faster and usually works a little better than XGBoost with the default parameters, before any hyper-parameter tuning (I could use a small dopamine hit to keep going). The score dropped to <strong>~0.52</strong>. That's a lot better but still far from <strong>0.43</strong>.</p>
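<p>To make this setup concrete, here is a rough sketch of this kind of pipeline (my own illustration, not the author's repository code; the feature choices and hyperparameters are purely illustrative):</p>
<pre><code>import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer

TARGETS = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]


def simple_counts(texts):
    # a few of the handcrafted features described above
    texts = pd.Series(texts)
    return np.c_[
        texts.str.split().str.len(),   # number of unigrams
        texts.str.count("\n"),         # number of line breaks
        texts.str.count(r" \."),       # whitespace before a period
    ]


features = FeatureUnion([
    ("tfidf", TfidfVectorizer(min_df=5, max_features=20000)),
    ("counts", FunctionTransformer(simple_counts)),
])

model = Pipeline([
    ("features", features),
    ("regressor", MultiOutputRegressor(LGBMRegressor(n_estimators=500))),
])

# model.fit(train["full_text"], train[TARGETS])
# preds = model.predict(test["full_text"])
</code></pre>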
<h3>DeBERTa embeddings as extra features</h3>
<p>I started the competition pretty late and, by the time I was set up to iterate conveniently on the model, the highest score was already the final score <strong>0.43</strong>. By that point it was clear I would not have time to get a good rank, so I decided to focus on learning from the best solutions instead. It's nice for people with a top score to be sharing their notebooks, although as I argue below, I think it limits the outcome of the competition. Most of the best solutions seemed to use DeBERTa, which provided me with a simple next step: use a pre-trained DeBERTa model, feed the text through it and, like so many, use the last hidden layer and a pooling layer to get a low(-ish) dimensional representation of each essay. But most people were actually fine-tuning the model for the task at hand which, quite frankly, is probably a better approach. I decided to take another route for a few reasons:</p>
<ul>
<li>I had never touched PyTorch before, so just adding an extra feature in a similar way to what I did with <code>fasttext-langdetect</code> seemed easier than fine-tuning the model</li>
<li>Many participants were fine-tuning DeBERTa anyway; what could I get with a more straightforward, out-of-the-box approach? I was just curious.</li>
<li>I wasn't sure how to combine the already built features/sklearn pipelines and didn't want to just discard them.</li>
<li>I might want to try other LLMs and would love not to have to rewrite a lot of boilerplate code every time (I didn't have time)</li>
</ul>
<p>So I ended up building an <a href="https://github.com/josarago/kaggle--feedback-prize-ell/blob/fe7a51ce11021d2e773284addc16a08426228494/sklearn_transformers.py" target="_blank">sklearn transformer</a> that applies mean pooling to the last hidden layer of the model after running it on the essay (a minimal sketch of this pattern follows the list below):</p>
<ul>
<li>the <code>fit</code> method does nothing in this case, since we just use the pre-trained model as-is.</li>
<li>the <code>transform</code> method can call either <code>simple_transform</code> or <code>batch_transform</code> that I ended up using for the submissions after having some out of memory issues when running the inference on GPU.</li>
<li>in the end the transformer is wrapped in a <a href="https://github.com/josarago/kaggle--feedback-prize-ell/blob/da76be074a8c757646adeb86d7ca0701e7249949/pipelines.py#L99" target="_blank">pipeline</a> so that I can just add it, using <code>FeatureUnion</code>, to the existing features. Note that I had to add <a href="https://github.com/josarago/kaggle--feedback-prize-ell/blob/da76be074a8c757646adeb86d7ca0701e7249949/pipelines.py#L102" target="_blank">this weird</a> <code>FunctionTransformer</code> to make sure the DataFrame's index is reset. Without it, the indexes shuffled by the <code>train_test_split</code> call cause the <code>DataLoader</code> object to throw an error, but I didn't dive deeper here to really understand why</li>
</ul>
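<p>For reference, here is a minimal sketch of this pattern (a frozen transformer exposed as an sklearn transformer with a no-op <code>fit</code>); it is my own simplified illustration, not the exact implementation in the linked repository, and the model name and batching details are assumptions:</p>
<pre><code>import numpy as np
import torch
import torch.nn.functional as F
from sklearn.base import BaseEstimator, TransformerMixin
from transformers import AutoModel, AutoTokenizer


class PretrainedEmbeddingTransformer(BaseEstimator, TransformerMixin):
    """Expose a frozen pre-trained transformer as an sklearn feature extractor."""

    def __init__(self, model_name="microsoft/deberta-v3-base",
                 max_len=512, batch_size=8, device="cpu"):
        self.model_name = model_name
        self.max_len = max_len
        self.batch_size = batch_size
        self.device = device

    def fit(self, X, y=None):
        # nothing to learn: the pre-trained weights are used as-is
        return self

    def transform(self, X):
        tokenizer = AutoTokenizer.from_pretrained(self.model_name)
        model = AutoModel.from_pretrained(self.model_name).to(self.device).eval()
        texts, feats = list(X), []
        with torch.no_grad():
            for i in range(0, len(texts), self.batch_size):
                enc = tokenizer(texts[i:i + self.batch_size], padding=True,
                                truncation=True, max_length=self.max_len,
                                return_tensors="pt").to(self.device)
                hidden = model(**enc).last_hidden_state
                mask = enc["attention_mask"].unsqueeze(-1).float()
                pooled = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
                feats.append(F.normalize(pooled, p=2, dim=1).cpu().numpy())
        return np.vstack(feats)
</code></pre>
<p>Because it implements <code>fit</code>/<code>transform</code>, such a transformer can be dropped into a <code>FeatureUnion</code> next to TF-IDF and handcrafted features, which is exactly what makes swapping the downstream model (LightGBM, XGBoost, a small MLP) painless.</p>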
<p>Doing this with LightGBM, XGBoost or a simple PyTorch model (not fine-tuned) gives essentially the same result of <strong>~0.45</strong> (<a href="https://www.kaggle.com/code/josarago/best-tree-based-model-score-but-there-s-a-catch" target="_blank">notebook here</a>), which I think is noteworthy if we consider the best score is 0.43. Now that I have a simple way to generate features from any Language Model, if I have the time, I would like to explore other models than DeBERTa. After all, there are tons of models specifically designed for grammatical error correction. I'm by no means an NLP expert, but this <a href="https://github.com/sebastianruder/NLP-progress/blob/master/english/grammatical_error_correction.md" target="_blank">list from Sebastian Ruder</a> seems like a good place to start.</p>
<h2>Thoughts on the competition format</h2>
<p>Kaggle is a great platform. I don't know of any other open platform that gives access to so many interesting datasets and lets you explore other users' approaches to solving real world problems. The competition format is instrumental as it incentivizes participants to give their best and keep improving their models. However, I was very surprised to realize that people were sharing their detailed notebooks while the competition was still running. Looking back, my impression is that <strong><em>people should not be able to share code related to the competition while it's running.</em></strong></p>
<p>This may be an unpopular opinion, and quite frankly, since I'm very new to Kaggle, I could be completely missing something, but I think it really defeats the purpose of the competition format:</p>
<ul>
<li><strong>For the organizers, you get less bang for your buck</strong>: If you organize a Kaggle competition, you invest time and money to solve a problem. The more you let people think independently about a problem, the more creative they get and the more avenues are explored. I think it would have been a lot better to have 10 solutions with a 0.45 score, each with their own original approach, rather than hundreds of 0.43 solutions that are often indistinguishable from each other. Sure, it's not a zero-sum game, as people who just copy-pasted the published solution with the highest score might not have spent time exploring anything else. But I feel like seeing so many teams with the highest score acts a bit as a deterrent: it feels like the solution has already converged to the best possible one and it will be extremely hard to beat it. Also, you might just want to do a deep dive into the top 10 scores and build a new model from scratch, combining all the most interesting ideas; you could also consider putting all these solutions into an ensemble model that would likely get you a better score.</li>
<li><strong>For hiring organizations, a way to find the best ML practitioners</strong>: If the top solution is shared, people can just come and copy-paste a solution and achieve a top score. These copy-pasted top scores are then hard to distinguish from original ones, so the ranking loses its value.</li>
<li><strong>For participants</strong>, seeing write-ups of the highest-scoring solutions limits your creativity. It's very hard not to be tempted to focus on understanding them rather than pursuing your own ideas. It's also a bit unfair: a copy-pasted solution could definitely obtain the final top score just by chance, which makes it less motivating.</li>
</ul>
<p>All in all I really enjoyed the competition and learned a ton of things so, as far as I'm concerned, I'm very happy with my experience!</p> |
Feedback Prize - English Language Learning | Efficiency 17th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Overview</h1>
<p>We are a group of three students representing <a href="https://penndsg.com/" target="_blank">Penn Data Science Group</a>. We had a lot of fun working on this competition and thank the Competition Host and Kaggle for hosting. </p>
<p>Here goes our Efficiency Solution. </p>
<p>We pseudolabeled the 15K essays from FB1 and FB2 using our own ensemble and then finetuned deberta-v3-xsmall for Efficiency.<br>
The ensemble has two components:</p>
<ol>
<li><strong>Pretrained Embeddings:</strong> We concatenated the mean-pooled hidden states from different debertas and different layers therein and trained MLPs and KernelRidge.<ul>
<li>The hidden states add up to over 20K dimensions; interestingly, standardization / min-max scaling (which seems imperative for SVM etc.) leads to significantly worse results. This may be due to the layer normalization.</li>
<li>KernelRidge uses <code>rbf</code> as kernel, with default <code>gamma</code>, and with <code>alpha</code> set to 0.075.</li>
<li>MLP is structurally as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5048151%2Ff17f5d18d746d22eff9e065799ed1823%2FWeChat%20Image_20221203000831.png?generation=1670044122478406&alt=media" alt=""></li>
<li>We used <a href="https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer" target="_blank">Ranger</a> as the optimizer for the MLP. As recommended, <code>beta_1</code> is set to 0.95, and the learning schedule is flat for the first 60-75% of training and then on a cosine decay for the remaining epochs. We set the flat rate to <code>3e-4</code> for the first 10 epochs, and then used cosine decay for the final 6 epochs, finishing on a learning rate of <code>9e-7</code> (in other words, <code>alpha</code> is set to <code>3e-3</code>). The batch size is 64, and we used a weight decay of <code>1e-1</code>.</li>
<li>To stabilize results, we trained 50 iterations and took the mean prediction.</li></ul></li>
<li><strong>Finetuned Models:</strong> We kinda sucked at finetuning models, so we took <a href="https://www.kaggle.com/kojimar" target="_blank">kojimar</a>'s models. Specifically, we took <a href="https://www.kaggle.com/datasets/kojimar/0919-deberta-v2-xlarge-mnli" target="_blank">deberta-v2-xlarge-mnli</a>, <a href="https://www.kaggle.com/datasets/kojimar/0919-deberta-v2-xlarge" target="_blank">deberta-v2-xlarge</a>, <a href="https://www.kaggle.com/datasets/kojimar/0926-deberta-v3-large-unscale" target="_blank">deberta-v3-large-unscale</a>, and <a href="https://www.kaggle.com/datasets/kojimar/20221012-123357-deberta-v3-large" target="_blank">20221012-deberta-v3-large</a>.</li>
</ol>
<p>This ensemble gave us an unimpressive 594th place on the private LB w/ 0.436685 and 1061st place on the public LB w/ 0.439137.</p>
<p>The deberta-v3-xsmall is finetuned with the 4K essays from this competition with their true labels and the 15K essays from the previous two competitions with the pseudolabels.</p>
<p>This model gave us 17th place on the Efficiency Private LB w/ 0.442 in 1336 seconds.</p>
<h1>What Worked for Ensemble</h1>
<p><strong>MLP & Kernel Ridge for Pretrained Embeddings</strong></p>
<ul>
<li>Result: 0.446 CV as opposed to 0.451 CV for SVM.</li>
<li>These two methods allow for a nonlinear specification whilst maintaining, if trained properly, robustness. </li>
<li>SVM seems to be worse because of differences in objective and penalty (in direct comparison with Kernel Ridge). Actually, Bayesian Ridge with its linear specification can consistently beat SVM.</li>
</ul>
<p><strong>mish and Ranger on MLP</strong></p>
<ul>
<li>Result: 0.447 CV as opposed to 0.449 with swish and Ranger.</li>
</ul>
<p><strong>Including <em>Some</em> Finetuned Models</strong></p>
<ul>
<li>Results: 0.445 CV.</li>
</ul>
<h1>What Worked for Efficiency</h1>
<p><strong>Pseudolabelling</strong></p>
<ul>
<li>Result: The xsmall model can closely trace the CV performance of the ensemble (I don't have the data here because we could not cross-validate the entire ensemble, but just take my word that an xsmall trained with only the 4K essays does worse).</li>
<li>Knowledge distillation.</li>
</ul>
<p><strong>deberta-v3-xsmall</strong></p>
<ul>
<li>deberta-v3-xsmall performs better than deberta-v3-small, because it has 12 layers with hidden dimension as 384 as opposed to 6 layers with 768. The training/inference time is about the same, despite the increased layer counts.</li>
<li>Interestingly, a base model I late-submitted performed much worse at 0.44674 private LB.</li>
</ul>
<p>Edit: <strong>Weight decay on weights</strong></p>
<ul>
<li>Place no weight decay on the bias and normalization terms; useful for both finetuning and the training of the MLP (a minimal sketch of this parameter grouping follows this list).</li>
</ul>
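<p>A minimal sketch of that parameter grouping (the standard pattern; the exact parameter-name matching is an assumption about the backbone's naming conventions):</p>
<pre><code>import torch


def make_param_groups(model, weight_decay=0.01):
    """Apply weight decay only to weight matrices, not to biases or LayerNorm."""
    no_decay = ("bias", "LayerNorm.weight", "LayerNorm.bias")
    decay_params, no_decay_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if any(nd in name for nd in no_decay):
            no_decay_params.append(param)
        else:
            decay_params.append(param)
    return [
        {"params": decay_params, "weight_decay": weight_decay},
        {"params": no_decay_params, "weight_decay": 0.0},
    ]


# optimizer = torch.optim.AdamW(make_param_groups(model), lr=2e-5)
</code></pre>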
<h1>What Did Not Work</h1>
<ul>
<li>Translation for data augmentation.</li>
<li>Finetuning techniques (Multi Sample Dropout, Layer Weight Reinitialization, Layerwise Learning Rate Adaptation). <ul>
<li>I cannot quantify their improvements, and it seems that the increased sample size due to pseudolabelling offsets some benefits from these techniques.</li>
<li>I submitted a version without MSD on the last date (but forgot to select it 😫); that had a private LB of 0.44 with probably a lower runtime (I didn't bother to prune the dropout, so the model technically still feeds forward the last layer 8 times before taking an average); then again, I also changed some hyperparameters (lowering weight decay from 1e-1 to 1e-2 and increasing <code>beta_1</code> from 0.9 to 0.95).</li></ul></li>
</ul>
<h1>What we should have done</h1>
<ul>
<li>Use PyTorch? It runs inference faster and without the pain of memory leaks. This would also allow the use of efficient data pipelines from huggingface. I suspect sub-1000s would need this and some more.</li>
</ul>
<h1>Important Citations</h1>
<ul>
<li><a href="https://www.kaggle.com/kojimar" target="_blank">kojimar</a>: Excellently finetuned models; we probably should have taken more models from you.</li>
<li><a href="https://www.kaggle.com/cdeotte" target="_blank">Chris Deotte</a>: As far as I can tell, the first person to share using pretrained embeddings.</li>
<li><a href="https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer" target="_blank">Ranger</a>: Robust and reliable optimizer.</li>
</ul>
<h1>Thanks and Acknowledgement</h1>
<p>We thank the Competition Host and Kaggle for hosting and all those whom we cited for their insights.</p>
<h1>Team Members</h1>
<ul>
<li><a href="https://www.kaggle.com/randyli2" target="_blank">Randy Li</a></li>
<li><a href="https://www.kaggle.com/mangarahutagalung" target="_blank">Mangara Paul Alexander Hutagalung</a></li>
<li><a href="https://www.kaggle.com/msgchris" target="_blank">Chris Sebastian</a></li>
</ul> |
Feedback Prize - English Language Learning | Efficiency 2nd solution (Private 82th) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank you for organizing this competition.<br>
Honestly, I did not expect to finish 2nd in the efficiency prize. <br>
There have been some unfortunate events in my personal life, which make it difficult to prepare according to the <a href="https://www.kaggle.com/WinningModelDocumentationGuidelines" target="_blank">Guidelines</a>.<br>
But I think it is better to provide my solution ASAP, so I am writing a summary.</p>
<h1>OverView</h1>
<ul>
<li><strong>Leaderboard:</strong> I used multiple DeBERTa-family models and SVR, together with pseudo-labels based on FB1 data and metadata from the FB2 models that classify argumentative elements. Finally, 13 models were weight-tuned with the L-BFGS-B method (a small sketch of this weight search follows this list).</li>
<li><strong>Efficiency:</strong> I used deberta-v3-xsmall with max_len=512, knowledge distillation from the above-mentioned ensemble model, and uniform length batching. I did not do any special tricks and spent a lot of time tuning.</li>
</ul>
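<p>A small sketch of how such a weight search on out-of-fold predictions can be done with L-BFGS-B (my own illustration of the idea; the bounds and whether weights were normalised are assumptions, not the author's exact setup):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize


def mcrmse(y_true, y_pred):
    # column-wise RMSE averaged over the 6 targets
    return np.mean(np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0)))


def tune_blend_weights(oof_preds, y_true):
    """oof_preds: list of (n_samples, 6) out-of-fold prediction arrays, one per model."""
    n_models = len(oof_preds)

    def objective(weights):
        blend = sum(w * p for w, p in zip(weights, oof_preds))
        return mcrmse(y_true, blend)

    result = minimize(objective,
                      x0=np.full(n_models, 1.0 / n_models),
                      method="L-BFGS-B",
                      bounds=[(0.0, 1.0)] * n_models)
    return result.x
</code></pre>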
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2701619%2Fa64d6d2130b7138cf7b226553d40c909%2Fshape.png?generation=1670207602959148&alt=media"></p>
<h1>CV strategy</h1>
<p>MultilabelStratifiedKFold with 4 folds, stratified on the targets and on topics from BERTopic.<br>
After adjusting the parameters, I trained on all of the training data (a single fold).</p>
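<p>A minimal sketch of that fold assignment (my own illustration; the column name <code>topic</code> for the BERTopic output is an assumption):</p>
<pre><code>import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

TARGETS = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]


def assign_folds(df: pd.DataFrame, n_splits: int = 4, seed: int = 42) -> pd.DataFrame:
    df = df.copy()
    df["fold"] = -1
    labels = df[TARGETS + ["topic"]]  # stratify on the 6 targets and the BERTopic topic
    mskf = MultilabelStratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for fold, (_, valid_idx) in enumerate(mskf.split(df, labels)):
        df.iloc[valid_idx, df.columns.get_loc("fold")] = fold
    return df
</code></pre>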
<h1>What Worked for the Efficiency LB</h1>
<ul>
<li>[High Impact] deberta-v3-xsmall<ul>
<li>I think it was important to use xsmall. My verification confirmed that the CV is worse than with small, but this can be made up for by the faster processing speed.</li></ul></li>
<li>[High Impact] uniform length batching<ul>
<li>Increased inference speed compared to the normal batching method (see the sketch after this list).</li></ul></li>
<li>[High Impact] knowledge distillation<ul>
<li>Even the 82nd-place ensemble was useful enough as a teacher.</li></ul></li>
<li>[Medium Impact] adjusting max_len<ul>
<li>512 gave consistently good performance for me.</li></ul></li>
</ul>
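<p>The idea behind uniform length batching, sketched below (my own illustration, not the author's code): sort the essays by token count so that each batch only needs to be padded to the length of its longest member, then restore the original order after prediction.</p>
<pre><code>import numpy as np


def length_sorted_batches(texts, tokenizer, batch_size=8, max_len=512):
    """Yield (original_indices, tokenized_batch) with per-batch dynamic padding."""
    lengths = [len(tokenizer(t, add_special_tokens=True)["input_ids"]) for t in texts]
    order = np.argsort(lengths)
    for start in range(0, len(texts), batch_size):
        idx = order[start:start + batch_size]
        enc = tokenizer([texts[i] for i in idx],
                        truncation=True, max_length=max_len,
                        padding="longest", return_tensors="pt")
        yield idx, enc


# after running the model on each batch, scatter the outputs back into place:
#     preds[idx] = model_outputs_for_this_batch
</code></pre>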
<h1>What Didn't Work for the Efficiency LB</h1>
<ul>
<li>Torch.jit<ul>
<li>I continued to apply torch.jit except for uniform length batching. But there was almost no difference in results.</li></ul></li>
<li>increasing max_len<ul>
<li>I noticed that I got a better CV by increasing xsmall's max_len slightly (e.g. 576). However, this was not useful on the private LB.</li></ul></li>
</ul>
<h1>Important Citations:</h1>
<ul>
<li><a href="https://www.kaggle.com/code/rhtsingh/speeding-up-transformer-w-optimization-strategies" target="_blank">Dyanmic Padding and Uniform Length Batching</a><ul>
<li><a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> 's notebooks is very good learning experience.</li></ul></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">FB3 / Deberta-v3-base baseline [train]</a><ul>
<li>I used <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> 's starter code, this was really helpful.</li></ul></li>
<li><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">RAPIDS SVR - CV 0.450 - LB 0.44x</a><ul>
<li>I always admire <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. Check out about the SVR.</li></ul></li>
<li><a href="https://www.kaggle.com/code/mujrush/train-deberta-v3-base-with-bertopic#Deberta-v3-base-baseline" target="_blank">[Train]Deberta-v3-base with BERTopic</a><ul>
<li>It worked for my CV strategy.</li></ul></li>
</ul>
<hr>
<p>I'd like to use this opportunity to also thank <a href="https://www.kaggle.com/kashiwaba" target="_blank">@kashiwaba</a>. I joined this competition after reading the article you summarized about FB2<a href="https://zenn.dev/yume_neko/articles/6e8f78a12a1d2a" target="_blank">(external link : Japanese)</a>.</p>
<h1>Thanks and Acknowledgements:</h1>
<p>My first NLP competition was so fun. Thanks to the organizers for holding this interesting competition. Thanks to the kagglers for sharing very helpful codes and discussions. </p>
<p>Please take good care of yourself.</p> |
Feedback Prize - English Language Learning | Efficiency 7th / Private 84st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Overview</h1>
<p>I took the average of 6 models with a 3 seed average in the final submission of LB. <br>
For efficiency LB, I used deberta-v3-small with knowledge distillation and small max_len(=384), which took 0:16:42 min for submission and scored 0.438537 for public LB, 0.439835 for private LB.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5017202%2Fa680c252509a2029174d7800c59a0389%2Fsolution_summary.png?generation=1669809477192815&alt=media" alt=""></p>
<h1>Code</h1>
<p>LB submission: <a href="https://www.kaggle.com/code/tmhrkt/fb3-ensemble" target="_blank">https://www.kaggle.com/code/tmhrkt/fb3-ensemble</a> <br>
Efficiency LB: <a href="https://www.kaggle.com/code/tmhrkt/fb3-efficiency?scriptVersionId=112206338" target="_blank">https://www.kaggle.com/code/tmhrkt/fb3-efficiency?scriptVersionId=112206338</a> <br>
SVR training: <a href="https://www.kaggle.com/code/tmhrkt/fb3-embedding-learning-training" target="_blank">https://www.kaggle.com/code/tmhrkt/fb3-embedding-learning-training</a> <br>
github: <a href="https://github.com/ktm98/kaggle-feedback-prize-3" target="_blank">https://github.com/ktm98/kaggle-feedback-prize-3</a> </p>
<h1>What Worked For Both LB and Efficiency LB</h1>
<ul>
<li>[High Impact] Changing max_len<ul>
<li>CV improved 0.002, 0.001 (1024 -> 512, 1024 -> 768 each)</li>
<li>Submission speeded up about 4 minutes (512 -> 384)</li>
<li>Reasoning and context: trying several max_len values, <code>max_len<384</code> got worse results compared to <code>max_len=1024</code> or 1428. And OOF correlations between different max_len values (0.97+ for conventions) were lower than the others (0.98+), so ensembling different max_len values improved my CV score.</li></ul></li>
<li>[High Impact] Knowledge Distillation<ul>
<li>CV improved 0.014+ for deberta-v3-small</li>
<li>What I did: set the loss function to <code>criterion(y_preds, labels) + criterion(y_preds, oof_preds)</code> (a minimal sketch follows this list)</li>
<li>Reasoning and context: I tried KD for efficiency LB. However, the impact for both CV and LB was higher than expected, so I added KD models for LB.</li></ul></li>
</ul>
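<p>A minimal sketch of that distillation loss (the choice of base criterion and any weighting beyond what is stated above are assumptions):</p>
<pre><code>import torch.nn as nn

criterion = nn.SmoothL1Loss()  # the base loss function is an assumption


def distillation_loss(y_preds, labels, teacher_preds, alpha=1.0):
    """Hard-label loss plus a term pulling the student towards the
    teacher's out-of-fold predictions."""
    return criterion(y_preds, labels) + alpha * criterion(y_preds, teacher_preds)
</code></pre>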
<h1>What Worked For Efficiency LB</h1>
<ul>
<li>[Low Impact] num_workers=1<ul>
<li>Mean inference got about 5 seconds faster for the first 200 rows of <code>train.csv</code>.</li>
<li>Reasoning and context: up until midway through the competition, I was using <code>num_workers=0</code>. Changing to <code>num_workers=1</code> slightly sped up inference.</li></ul></li>
</ul>
<h1>What Did’t Work</h1>
<ul>
<li>unscale<ul>
<li>What I did: Adding unscale described in <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train/comments" target="_blank">nakama’s notebook</a> comments.</li>
<li>Result: CV score got worse.</li></ul></li>
<li>XGBoost For embeddings<ul>
<li>What I did: Train with XGBoost instead of RAPIDS SVR.</li>
<li>Result: CV score got worse.</li></ul></li>
<li>TorchScript<ul>
<li>What I did: Converting PyTorch model to TorchScript.</li>
<li>Result: Inference time was too slow. Stopped while inferencing.</li>
<li>Reasoning and context: TorchScript does not allow dynamic sequence lengths. So if the tokenized length of a sentence is shorter than the max length, the padded portion covered by the attention mask must also be computed, resulting in wasted computation time.</li></ul></li>
<li>Dynamic Quantization<ul>
<li>What I did: Apply dynamic quantization for my pretrained model.</li>
<li>Result: LB score significantly dropped (0.43 -> 0.64)</li>
<li>Reasoning and Context: Because quantizing <code>nn.Linear</code> layers directly affects the output of the model, quantizing layers is not suitable for regression tasks. </li></ul></li>
</ul>
<h1>Additional Context</h1>
<h2>CV Strategy</h2>
<p><code>MultiLabelStratifiedKFold</code> with binned <code>n_words</code></p>
<h2>Efficiency submissions</h2>
<p>First, I trained 4 folds to check the performance and submitted 4 folds. <br>
After checking the performance, the model was trained on the whole FB3 dataset. </p>
<table>
<thead>
<tr>
<th>model</th>
<th>max_len</th>
<th>submission time</th>
<th>Public</th>
<th>Private</th>
<th>notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-small</td>
<td>384</td>
<td>0:16:42</td>
<td>0.4398</td>
<td>0.4385</td>
<td>final sub</td>
</tr>
<tr>
<td>deberta-v3-small</td>
<td>290</td>
<td>0:15:07</td>
<td>0.4430</td>
<td>0.4435</td>
<td></td>
</tr>
<tr>
<td>deberta-v3-small</td>
<td>512</td>
<td>0:21:33</td>
<td>0.4388</td>
<td>0.4383</td>
<td></td>
</tr>
<tr>
<td>deberta-v3-xsmall</td>
<td>768</td>
<td>0:19:07</td>
<td>0.4396</td>
<td>0.4413</td>
<td></td>
</tr>
<tr>
<td>deberta-v3-xsmall</td>
<td>1428</td>
<td>0:20:11</td>
<td>0.4402</td>
<td>0.4394</td>
<td>2 process</td>
</tr>
</tbody>
</table>
<h1>Important Citations:</h1>
<ul>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference</a></li>
<li><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a></li>
<li><a href="https://www.kaggle.com/code/kojimar/fb3-single-pytorch-model-train" target="_blank">https://www.kaggle.com/code/kojimar/fb3-single-pytorch-model-train</a></li>
</ul>
<h1>Thanks and Acknowledgements:</h1>
<p>Thanks to the organizers for holding this interesting competition. Thanks to the kagglers for sharing very helpful codes and discussions.</p>
<p>Thank you. KD had been on my to-do list for the efficiency prize ever since I joined the competition!</p>
<p>Thank you! Congratulations too for your 1st place of efficiency LB!</p> |
Feedback Prize - English Language Learning | Efficiency solution 39th place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>First of all, we would like to thank Kaggle team and all participants who shared their codes and ideas.</strong></p>
<h1><strong>Overview</strong></h1>
<p>Our final submission is a single DeBERTa-base model with 1 fold and 6 epochs. We trained the model using the 6 target columns plus 12 new targets we added ourselves to help the model learn intermediate signals about essay quality.</p>
<h1><strong>Preprocessing</strong></h1>
<p><em>12 new targets that we added are</em></p>
<ul>
<li>Long words (the number of words in the text that have more letters than the dataset's mean number of letters per word)</li>
<li>Number of grammar mistakes(<a href="https://pypi.org/project/language-tool-python/" target="_blank">using language tool</a>)</li>
<li>Readability score (<a href="https://pypi.org/project/py-readability-metrics/" target="_blank">Dale–Chall readability score</a>)</li>
<li>Number of distinct words(<a href="https://pypi.org/project/spacy/" target="_blank">using spacy library</a>)</li>
<li>Nouns count(<a href="https://pypi.org/project/spacy/" target="_blank">using spacy library</a>)</li>
<li>Verbs count(<a href="https://pypi.org/project/spacy/" target="_blank">using spacy library</a>)</li>
<li>Adjectives count(<a href="https://pypi.org/project/spacy/" target="_blank">using spacy library</a>)</li>
<li>Adverbs count(<a href="https://pypi.org/project/spacy/" target="_blank">using spacy library</a>)</li>
<li>Grammar mistakes related to words and sentences(<a href="https://www.grammarbot.io/" target="_blank">web scraping</a>)</li>
<li>Grammar mistakes related to the punctuation(<a href="https://www.grammarbot.io/" target="_blank">web scraping</a>)</li>
<li>Grammar mistakes related to the part of the speech(<a href="https://www.grammarbot.io/" target="_blank">web scraping</a>)</li>
<li>Other grammar mistakes(<a href="https://www.grammarbot.io/" target="_blank">web scraping</a>)</li>
</ul>
<p>All of these scores are normalized to the range 1 to 5. </p>
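<p>As an illustration of how such auxiliary targets can be computed and rescaled, here is a small sketch using spaCy for the part-of-speech counts and simple min-max scaling into the 1-to-5 range (the exact counting rules and normalization method used by the team are not specified, so these are assumptions):</p>
<pre><code>import pandas as pd
import spacy

nlp = spacy.load("en_core_web_sm")  # any English pipeline with a POS tagger


def pos_features(text):
    doc = nlp(text)
    return {
        "distinct_words": len({t.lower_ for t in doc if t.is_alpha}),
        "nouns": sum(t.pos_ == "NOUN" for t in doc),
        "verbs": sum(t.pos_ == "VERB" for t in doc),
        "adjectives": sum(t.pos_ == "ADJ" for t in doc),
        "adverbs": sum(t.pos_ == "ADV" for t in doc),
    }


def scale_to_1_5(col: pd.Series) -> pd.Series:
    # min-max scale an auxiliary score into the same 1..5 range as the real targets
    return 1 + 4 * (col - col.min()) / (col.max() - col.min())


# aux = pd.DataFrame([pos_features(t) for t in train["full_text"]]).apply(scale_to_1_5)
</code></pre>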
<h1><strong>Model</strong></h1>
<p>Using <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">this public notebook</a> as a reference, we experimented a lot, changing hyperparameters. </p>
<h1><strong>Code</strong></h1>
<p><a href="https://www.kaggle.com/code/msanselme/notebook1" target="_blank">Code Link</a></p>
<h1><strong>Team Members</strong>🤝</h1>
<p><a href="https://www.kaggle.com/dianascopeo" target="_blank">@dianascopeo</a> <br>
<a href="https://www.kaggle.com/jccorvisier" target="_blank">@jccorvisier</a> <br>
<a href="https://www.kaggle.com/pabloskewesscopeo" target="_blank">@pabloskewesscopeo</a> <br>
<a href="https://www.kaggle.com/msanselme" target="_blank">@msanselme</a> </p> |
Feedback Prize - English Language Learning | Lower value for LLRD (67th → 207th solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Dear Kagglers.</p>
<p>Thank you to the competition organizers for hosting this competition.<br>
Congrats to all who won this game!</p>
<p>We had a big shake-up, dropping from silver to bronze (67th → 207th). :(<br>
However, we learned a lot about NLP from the shared notebooks, discussions and top solutions. We had a very good experience.</p>
<h2>Summary</h2>
<p>Our solution is a simple ensemble of finetuned models and SVR on pre-trained embeddings (with several poolings and max_length patterns).</p>
<ul>
<li><p>finetuned models</p>
<ul>
<li>deberta-v3-base</li>
<li>deberta-v3-large</li>
<li>roberta-large</li>
<li>longformer-base-4096</li></ul></li>
<li><p>pretrained models for SVR</p>
<ul>
<li>deberta-v3-base</li>
<li>deberta-v3-large</li>
<li>deberta-v2-xlarge</li>
<li>deberta-xlarge</li>
<li>deberta-large-mnli</li>
<li>roberta-large</li>
<li>muppet-roberta-large</li>
<li>longformer-large-4096</li>
<li>bart-large</li>
<li>electra-large-discriminator</li></ul></li>
</ul>
<h2>fine-tuning</h2>
<p>While fine-tuning the single model, we noticed that a combination of a very low LLRD factor <br>
and a high LR improves both CV and Public LB. For example, the following parameters gave these scores (a sketch of what such an LLRD factor means for the per-layer learning rates follows the list).</p>
<ul>
<li>Model: deberta-v3-base</li>
<li><b>LR: 16.0e-05</b></li>
<li><b>LLRD: 0.1</b></li>
<li>batch_size: 16</li>
<li>other: AWP</li>
<li>CV: 0.44912</li>
<li>Public LB: 0.436664</li>
<li>Private LB: 0.440466</li>
</ul>
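<p>For reference, here is a sketch of what layer-wise learning-rate decay means in terms of per-layer learning rates (my own illustration; the attribute names assume a DeBERTa-style backbone with a separate regression head):</p>
<pre><code>def llrd_param_groups(model, base_lr=16.0e-5, llrd=0.1):
    """Each encoder layer, counted from the top, gets base_lr * llrd**depth."""
    groups = [{"params": model.classifier.parameters(), "lr": base_lr}]
    lr = base_lr
    for layer in reversed(list(model.deberta.encoder.layer)):
        groups.append({"params": layer.parameters(), "lr": lr})
        lr *= llrd
    groups.append({"params": model.deberta.embeddings.parameters(), "lr": lr})
    return groups


# optimizer = torch.optim.AdamW(llrd_param_groups(model), lr=16.0e-5)
</code></pre>
<p>Note that with a factor of 0.1 and 12 or 24 encoder layers, everything below the top few layers receives a vanishingly small learning rate, so this configuration effectively freezes most of the backbone; that may be part of why such a high base learning rate was tolerated, although this is only a guess.</p>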
<p>Has anyone tried parameters like these? Why would scores improve with these parameters? <br>
It is inexplicable to us. We will study and think about it some more.</p>
<p>Thank you for your attention!</p> |
Feedback Prize - English Language Learning | Single 5 fold deberta v3 large to obtain silver medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Overfitting may be caused by weights being too confident that some features of the input are more important than others. This can lead to weights ignoring most of the information in the input (weight decay pulls parameters towards 0 to reduce this).</p>
<p>In the case of biases and embeddings you may or may not want this behaviour. Maybe you want biases to grow very big, etc.<br>
In the case of LayerNorm weights, they are initialized to 1 so as not to affect the input; you don't want them pulled towards 0.</p>
<p>You can start reading about weight decay from here:<br>
<a href="https://cs231n.github.io/" target="_blank">https://cs231n.github.io/</a></p>
<p>Thank you!! The learning rate for DeBERTa needed to be very low to make it work, I think. </p>
<p>The CV score of the model was 0.4447 and it scored 0.436405 on the private leaderboard.<br>
Key things were (a sketch of the pooling + LSTM head follows this list):</p>
<ul>
<li>Concat last 4 layers (it stabilized training a lot)</li>
<li>Adding lstm on top</li>
<li>AWP</li>
</ul>
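<p>A minimal sketch of such a head (my own illustration of the idea, not the exact code in the linked notebook; sizes and pooling choices are assumptions):</p>
<pre><code>import torch
import torch.nn as nn


class Concat4LSTMHead(nn.Module):
    """Concatenate the last 4 hidden layers, run an LSTM over the tokens,
    then mean-pool and project to the 6 targets."""

    def __init__(self, hidden_size, num_targets=6):
        super().__init__()
        self.lstm = nn.LSTM(hidden_size * 4, hidden_size,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_size * 2, num_targets)

    def forward(self, all_hidden_states, attention_mask):
        x = torch.cat(all_hidden_states[-4:], dim=-1)   # (B, T, 4H)
        x, _ = self.lstm(x)                             # (B, T, 2H)
        mask = attention_mask.unsqueeze(-1).float()
        mean_pool = (x * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        return self.fc(mean_pool)
</code></pre>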
<p>Here is training code:<br>
<a href="https://www.kaggle.com/code/themadrambito/63th-place-deberta-v3-large-private-0-436405" target="_blank">https://www.kaggle.com/code/themadrambito/63th-place-deberta-v3-large-private-0-436405</a></p>
<p>In theory it makes the model converge faster, but it shouldn't matter at all. Weight decay only helps in parts where there are dot products.</p>
Feedback Prize - English Language Learning | Solution writeup, Efficiency leaderboard #24. My first kaggle competition, top 50! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>24th place solution, efficiency leaderboard!</h1>
<p>This was my first kaggle competition and I was aiming for the efficiency prize. I think I did well to be in the top 50!</p>
<p>You can see my solution's train code <a href="https://www.kaggle.com/code/wisawesome/24th-place-eff-prize-feedback-prize-ell-train" target="_blank">here</a> and the infer code <a href="https://www.kaggle.com/wisawesome/24th-place-eff-prize-feedback-prize-ell-infer" target="_blank">here</a>. Note that I forgot to set the seed for these, hence you may obtain slightly different results to the ones I obtained.</p>
<h2>The solution</h2>
<p>I used iterative stratification to get 5 folds from the training data. I then trained CatBoost and SVR models for each of the 6 targets (cohesion, syntax, vocabulary, phraseology, grammar, conventions) independently, using the embeddings (mean pooling, max pooling and all 12 hidden-state embeddings) extracted from DeBERTa V3 xsmall.</p>
<p>I used published code from the following notebooks to extract the embeddings and to generate the folds using iterative stratification:</p>
<p><a href="url" target="_blank">https://www.kaggle.com/code/lextoumbourou/feedback3-eda-hf-custom-trainer-sift#Mean,-Min-and-Max-Pooling-Implementations</a><br>
<a href="url" target="_blank">https://www.kaggle.com/code/tafreen/notebook-ell</a><br>
<a href="url" target="_blank">https://www.kaggle.com/code/brandonhu0215/feedback-deberta-large-lb0-619</a><br>
<a href="url" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a><br>
<a href="url" target="_blank">https://www.kaggle.com/code/kvsnoufal/lb0-46-gb-debertaembedding</a></p>
<h2>Some functions used to extract features from DeBERTa V3 xsmall:</h2>
<pre><code>def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state.detach().cpu()
    input_mask_expanded = (
        attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    )
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
        input_mask_expanded.sum(1), min=1e-9
    )
</code></pre>
<pre><code>def MaxPooling(output, attention_mask):
    last_hidden_state = output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
    last_hidden_state[input_mask_expanded == 0] = -1e9  # Set padding tokens to large negative value
    max_embeddings = torch.max(last_hidden_state, 1)[0]
    return max_embeddings
</code></pre>
<pre><code>def get_embeddings(MODEL_NM='', MAX=640, BATCH_SIZE=4, verbose=True, use_cuda=use_cuda):
    global tokenizer, MAX_LEN
    if use_cuda:
        DEVICE = "cuda"
    else:
        DEVICE = "cpu"
    config = AutoConfig.from_pretrained(MODEL_NM)
    config.update({'output_hidden_states': True})
    model = AutoModel.from_pretrained(MODEL_NM, config=config)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NM)
    MAX_LEN = MAX
    model = model.to(DEVICE)
    model.eval()

    all_train_text_feats_esr = []
    all_train_text_feats_mean = []
    for batch in tqdm(embed_dataloader_tr, total=len(embed_dataloader_tr)):
        input_ids = batch["input_ids"].to(DEVICE)
        attention_mask = batch["attention_mask"].to(DEVICE)
        with torch.no_grad():
            model_output = model(input_ids=input_ids, attention_mask=attention_mask)
        # Mean pooling, normalized
        mean_pool_embeddings = mean_pooling(model_output, attention_mask.detach().cpu())
        mean_pool_embeddings = F.normalize(mean_pool_embeddings, p=2, dim=1)
        mean_pool_embeddings = mean_pool_embeddings.squeeze(0).detach().cpu().numpy()
        # Max pooling, normalized
        max_pool_embeddings = MaxPooling(model_output, attention_mask.detach().cpu())
        max_pool_embeddings = F.normalize(max_pool_embeddings, p=2, dim=1)
        max_pool_embeddings = max_pool_embeddings.squeeze(0).detach().cpu().numpy()
        # Concatenate the first token of all 12 hidden layers, normalized
        all_hidden_states = torch.stack(model_output.hidden_states)
        concatenate_pooling = torch.cat(
            (all_hidden_states[-1], all_hidden_states[-2], all_hidden_states[-3],
             all_hidden_states[-4], all_hidden_states[-5], all_hidden_states[-6],
             all_hidden_states[-7], all_hidden_states[-8], all_hidden_states[-9],
             all_hidden_states[-10], all_hidden_states[-11], all_hidden_states[-12]), -1
        )
        concatenate_pooling = concatenate_pooling[:, 0]
        concatenate_pooling = F.normalize(concatenate_pooling, p=2, dim=1)
        concatenate_pooling = concatenate_pooling.squeeze(0).detach().cpu().numpy()
        embeddings_concat = np.concatenate(
            [mean_pool_embeddings, max_pool_embeddings, concatenate_pooling], axis=1)
        all_train_text_feats_esr.extend(embeddings_concat)
        all_train_text_feats_mean.extend(mean_pool_embeddings)
    all_train_text_feats_esr = np.array(all_train_text_feats_esr)
    all_train_text_feats_mean = np.array(all_train_text_feats_mean)

    te_text_feats_esr = []
    te_text_feats_mean = []
    for batch in tqdm(embed_dataloader_te, total=len(embed_dataloader_te)):
        input_ids = batch["input_ids"].to(DEVICE)
        attention_mask = batch["attention_mask"].to(DEVICE)
        with torch.no_grad():
            model_output = model(input_ids=input_ids, attention_mask=attention_mask)
        # Mean pooling, normalized
        mean_pool_embeddings = mean_pooling(model_output, attention_mask.detach().cpu())
        mean_pool_embeddings = F.normalize(mean_pool_embeddings, p=2, dim=1)
        mean_pool_embeddings = mean_pool_embeddings.squeeze(0).detach().cpu().numpy()
        # Max pooling, normalized
        max_pool_embeddings = MaxPooling(model_output, attention_mask.detach().cpu())
        max_pool_embeddings = F.normalize(max_pool_embeddings, p=2, dim=1)
        max_pool_embeddings = max_pool_embeddings.squeeze(0).detach().cpu().numpy()
        # Concatenate the first token of all 12 hidden layers, normalized
        all_hidden_states = torch.stack(model_output.hidden_states)
        concatenate_pooling = torch.cat(
            (all_hidden_states[-1], all_hidden_states[-2], all_hidden_states[-3],
             all_hidden_states[-4], all_hidden_states[-5], all_hidden_states[-6],
             all_hidden_states[-7], all_hidden_states[-8], all_hidden_states[-9],
             all_hidden_states[-10], all_hidden_states[-11], all_hidden_states[-12]), -1
        )
        concatenate_pooling = concatenate_pooling[:, 0]
        concatenate_pooling = F.normalize(concatenate_pooling, p=2, dim=1)
        concatenate_pooling = concatenate_pooling.squeeze(0).detach().cpu().numpy()
        embeddings_concat = np.concatenate(
            [mean_pool_embeddings, max_pool_embeddings, concatenate_pooling], axis=1)
        te_text_feats_esr.extend(embeddings_concat)
        te_text_feats_mean.extend(mean_pool_embeddings)
    te_text_feats_esr = np.array(te_text_feats_esr)
    te_text_feats_mean = np.array(te_text_feats_mean)

    return all_train_text_feats_esr, all_train_text_feats_mean, te_text_feats_esr, te_text_feats_mean
</code></pre>
<p>The targets cohesion, syntax, vocabulary, phraseology, and conventions were then predicted with CatBoost or SVR models using the features in te_text_feats_esr (which contains the mean pooling, max pooling, and all 12 hidden states). Grammar was predicted with an SVR model using the features in te_text_feats_mean (which contains just the mean pooling).</p>
<p>The CatBoost and SVR models had their hyperparameters optimized using Optuna. During training, the training set was split into 5 folds using iterative stratification. The best-performing hyperparameters (as measured by mean performance on all 5 folds) were then used to train one model (either a CatBoost or SVR model, depending on which performed better on the CV) for each target using all available training data.</p>
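<p>A rough sketch of such a search for a single target with Optuna (my own illustration; the search space and number of trials are assumptions, not the values actually used):</p>
<pre><code>import numpy as np
import optuna
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR


def tune_svr(embeddings, target, folds, n_trials=50):
    """folds: list of (train_idx, valid_idx) pairs from iterative stratification."""

    def objective(trial):
        params = {
            "C": trial.suggest_float("C", 0.1, 10.0, log=True),
            "epsilon": trial.suggest_float("epsilon", 0.01, 0.5, log=True),
            "gamma": trial.suggest_categorical("gamma", ["scale", "auto"]),
        }
        rmses = []
        for tr_idx, va_idx in folds:
            model = SVR(**params).fit(embeddings[tr_idx], target[tr_idx])
            preds = model.predict(embeddings[va_idx])
            rmses.append(mean_squared_error(target[va_idx], preds, squared=False))
        return float(np.mean(rmses))

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=n_trials)
    return study.best_params


# best params per target, e.g. for grammar; then refit on all training data
</code></pre>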
<h2>What Worked</h2>
<p><strong>[High Impact] Idea</strong></p>
<ul>
<li>I quickly realized that the efficiency metric was designed well, meaning that neither the leaderboard score component nor the computation time penalty dominate. In other words, in order to do well in the efficiency competition the submission has to have a competitive score, but must also have a low computation time.</li>
<li>I settled on using DeBERTa V3 xsmall from huggingface (<a href="https://huggingface.co/microsoft/deberta-v3-xsmall" target="_blank">https://huggingface.co/microsoft/deberta-v3-xsmall</a>) which provided a good score at a low computation cost. All other transformers either took too long or did not improve my cross-validation score.</li>
<li>I set max_len to 400. I tried various max_len options from 100 to 640, timed them, and computed the cross-validation score. As expected, higher max_len resulted in better scores, but also longer computation times. It was hard to find an exact tradeoff between computation time and score improvement: it was impossible to tell exactly what the leaderboard scores would be at the end of the competition; full precision was not provided; and there was variation in computation times between runs on the training data. Through experimentation and some back-of-the-napkin calculations I estimated that a 0.01 improvement in the score was roughly equal to 25 minutes of computation time. This tradeoff guided my decision to choose a max_len of 400: longer lengths improved the score, but the time penalty was too high; lower max_len lowered the score but could not be justified on the basis of the computation time improvement.</li>
<li>I did not train any transformer models because I did not even know what a transformer was before this competition. Instead, I focused on optimizing hyperparameters using optuna and selecting the correct max_len for the features extracted from DeBERTa V3 xsmall.</li>
<li>I quickly found that catboost and SVR performed the best in my setup, and focused extensively on optimizing hyperparameters for those two methods.</li>
</ul>
<h2>What Didn’t Work</h2>
<ul>
<li>xgboost and lightgbm. I was surprised by the significantly worse performance of these two methods in comparison to catboost. Catboost's ordered boosting mode and symmetric tree method outperformed both, though catboost did take longer to train.</li>
<li>I could not get ensembles to improve my score, but on reflection I should have spent more time trying to do so. I believe an ensemble of catboost and svr models would have improved my score, but on the public leaderboard that was not the case, so the idea was prematurely dropped.</li>
<li>Inclusion of embeddings from any other transformer model. I was really dismayed by this. There are many "tiny" models from which it only takes a minute to extract all encodings, yet the inclusion of these encodings did not improve my score.</li>
<li>Taking a risk by rounding to the nearest possible score did not pay off. For instance, if the model thought grammar was 3.55, rounding it to 3.5 caused a slight decrease in the score. I would have been better off if I did not round to the nearest possible score (a minimal rounding sketch is shown after this list).</li>
</ul>
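<p>For context, the "nearest possible score" refers to the 1.0-5.0 label grid in 0.5 steps; a minimal rounding sketch (my own illustration, not the author's code):</p>
<pre><code>import numpy as np

def round_to_grid(preds):
    # snap predictions to the 1.0-5.0 grid of 0.5 steps used by the labels
    return np.clip(np.round(preds * 2.0) / 2.0, 1.0, 5.0)

print(round_to_grid(np.array([3.55, 1.04, 4.8])))  # -> [3.5, 1.0, 5.0]
</code></pre>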
<h2>Important Citations:</h2>
<p><a href="url" target="_blank">https://www.kaggle.com/code/lextoumbourou/feedback3-eda-hf-custom-trainer-sift#Mean,-Min-and-Max-Pooling-Implementations</a><br>
<a href="url" target="_blank">https://www.kaggle.com/code/tafreen/notebook-ell</a><br>
<a href="url" target="_blank">https://www.kaggle.com/code/brandonhu0215/feedback-deberta-large-lb0-619</a><br>
<a href="url" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a><br>
<a href="url" target="_blank">https://www.kaggle.com/code/kvsnoufal/lb0-46-gb-debertaembedding</a></p>
<h2>Thanks and Acknowledgements</h2>
<p>I would like to thank the following kagglers for publishing their code which helped me reach my solution:<br>
<a href="https://www.kaggle.com/lextoumbourou" target="_blank">@lextoumbourou</a> <a href="https://www.kaggle.com/tafreen" target="_blank">@tafreen</a> <a href="https://www.kaggle.com/brandonhu" target="_blank">@brandonhu</a> <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/kvsnoufal" target="_blank">@kvsnoufal</a> </p>
<p>I would like to thank the organizers as well as Kaggle for hosting this excellent competition.</p>
<p>See you all in the next one!</p> |
Feedback Prize - English Language Learning | Team Turing: 9th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - English Language Learning
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you my friend and congratulations on a strong solo 18th performance in this competition! Great work! Yes, one more gold to go for KGM! This solution describes our Team Turing's collective team effort!</p>
<p>A big thanks to Kaggle and the competition hosts for organizing the amazing Feedback series! We learned a lot during the competitions and especially enjoyed the addition of the efficiency track in Feedback 2 & 3. We look forward to participating in future competitions!</p>
<h2>Special Mention</h2>
<p>Congratulations to our teammate <a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a> for becoming a Kaggle Competitions Master! Very well deserved and this was special after many narrow gold misses! </p>
<h2>Overview</h2>
<p>Our solution focused on building a set of diverse models to facilitate better ensemble performance. To this end, we trained the following models, which were the ones finally used in our ensemble. The actual number of experiments we trained was a lot larger :) </p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Backbone</th>
<th>Training Approach</th>
<th>PL</th>
<th>CV </th>
<th>PB </th>
</tr>
</thead>
<tbody>
<tr>
<td>exp009a</td>
<td>deberta-v3-large</td>
<td>PET</td>
<td>Yes, 10k Random Samples</td>
<td>0.4464</td>
<td>0.4374</td>
</tr>
<tr>
<td>exp022b</td>
<td>deberta-v3-large</td>
<td>Keywords</td>
<td>Yes, 4k Min-distance Samples</td>
<td>0.4444</td>
<td>0.4382</td>
</tr>
<tr>
<td>exp024</td>
<td>deberta-v3-large</td>
<td>Multiscale</td>
<td>No</td>
<td>0.4493</td>
<td>0.4399</td>
</tr>
<tr>
<td>exp024a</td>
<td>deberta-v3-large</td>
<td>Multiscale, MSE + Ranking Loss</td>
<td>No</td>
<td>0.4484</td>
<td>0.4424</td>
</tr>
<tr>
<td>exp026b</td>
<td>deberta-v3-large</td>
<td>Freeze + Reinit</td>
<td>Yes, 4k Random Samples</td>
<td>0.4447</td>
<td>0.4373</td>
</tr>
<tr>
<td>exp027</td>
<td>deberta-v3-large</td>
<td>SETFIT</td>
<td>No</td>
<td>0.4470</td>
<td>0.4371</td>
</tr>
<tr>
<td>exp030</td>
<td>deberta-v3-small</td>
<td>MSE + Ranking Loss</td>
<td>Yes, 4k Min-distance Samples</td>
<td>0.4490</td>
<td>0.4396</td>
</tr>
<tr>
<td>exp120</td>
<td>deberta-v3-small</td>
<td>2 Stage PL + Loss Function</td>
<td>Yes</td>
<td>0.4586</td>
<td>0.4467</td>
</tr>
<tr>
<td>exp121</td>
<td>luke-large</td>
<td>Same as above</td>
<td>Yes</td>
<td>0.4567</td>
<td>0.4462</td>
</tr>
<tr>
<td>exp132</td>
<td>roberta-large</td>
<td>Same as above</td>
<td>Yes</td>
<td>0.4492</td>
<td>0.4438</td>
</tr>
<tr>
<td>exp203b</td>
<td>deberta-large</td>
<td>Prompt + Smooth L1 Loss</td>
<td>Yes, 4k Min-distance Samples</td>
<td>0.448</td>
<td>0.4380</td>
</tr>
<tr>
<td>exp207a</td>
<td>deberta-v3-large</td>
<td>PET</td>
<td>Yes, 4k Min-distance Samples</td>
<td>0.4468</td>
<td>0.4381</td>
</tr>
<tr>
<td>exp208</td>
<td>deberta-large</td>
<td>Prompt + Smooth L1 Loss</td>
<td>Yes, 4k Min-distance Samples</td>
<td>0.4501</td>
<td></td>
</tr>
<tr>
<td>exp300</td>
<td>deberta-v3-base</td>
<td>Clean Text MCRMSE Loss</td>
<td>Yes, 4k Min-distance Samples</td>
<td>0.4553</td>
<td>0.4460</td>
</tr>
<tr>
<td>exp302</td>
<td>deberta-large</td>
<td>GRU Head. <br> Mean Max Pool.<br> Smooth L1 Loss</td>
<td>Yes, 4k Min-distance Samples</td>
<td>0.4549</td>
<td>0.4460</td>
</tr>
<tr>
<td>exp303d</td>
<td>deberta-v3-large</td>
<td>LSTM Head. Mean Max Pool.</td>
<td>2 Stage PL</td>
<td>0.4464</td>
<td>0.4366</td>
</tr>
<tr>
<td>exp310</td>
<td>deberta-v3-large</td>
<td></td>
<td>2 Stage PL</td>
<td>0.4508</td>
<td>0.4387</td>
</tr>
<tr>
<td>exp320</td>
<td>deberta-v3-large</td>
<td></td>
<td>No</td>
<td>0.4512</td>
<td>0.4387</td>
</tr>
<tr>
<td>SVR</td>
<td>SVR Models trained on embeddings</td>
<td></td>
<td>No</td>
<td>0.45</td>
<td>0.4431</td>
</tr>
</tbody>
</table>
<h2>What Worked</h2>
<ul>
<li><p><strong>[High Impact] Model Diversity</strong></p>
<ul>
<li><strong>PET Based Approach</strong><ul>
<li>Use Pattern-Exploiting Training (PET), which reformulates input examples as cloze-style phrases to help language models understand a given task.</li>
<li>Pattern:</li></ul></li></ul>
<pre><code> Student Essay Evaluation:
Performance on cohesion (essay organization; transition; logical sequencing)? [MASK]. Performance on syntax (sentence structure and formation; word order)? [MASK]. Performance on vocabulary (word diversity; topic related terms)? [MASK]. Performance on phraseology (phrases; idioms; collocations)? [MASK]. Performance on grammar? [MASK]. Performance on conventions (spelling; capitalization; punctuation; contractions)?[MASK]. [SEP]
Essay: {full_essay}
</code></pre>
<ul>
<li>Verbalizer<ul>
<li>Positive Tokens List: <code>['Excellent', 'Good', 'A', '5', '10', '100', 'excellent', 'good', 'High', 'high', 'Strong', 'strong']</code></li>
<li>Negative Tokens List: <code>['1', 'C', 'D', 'F', 'Bad', 'Limited', 'limited', 'poor', 'Poor', '0', 'Low', 'low']</code></li></ul></li>
<li>Additional Pattern/Verbalizer<ul>
<li>Pattern: Example for cohesion here. </li></ul></li></ul>
<pre><code> Rate the following essay for cohesion (good text organization, transitional, overlap) with a score between 1 (bad) and 5 (great). The cohesion score for the following essay is 1 or 2 or 3 or 4 or 5? The score is [MASK]. {full_essay}
</code></pre>
<ul>
<li>Verbalizer<ul>
<li>Positive Tokens List: <code>['3', '4', '5']</code></li>
<li>Negative Tokens List: <code>['1', '2']</code></li></ul></li>
</ul>
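<p>To make the PET idea concrete, here is a hedged, simplified scoring sketch (only two masked questions, a crude 1-5 mapping, and a single-sub-token verbalizer assumption; this illustrates the mechanism, not the team's actual training code):</p>
<pre><code># Simplified PET-style scoring sketch (illustrative only, not the team's code).
# Assumes each verbalizer word maps to a single sub-token.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()

POSITIVE = ["Excellent", "Good", "good", "High", "high", "Strong", "strong"]
NEGATIVE = ["Bad", "Limited", "limited", "poor", "Poor", "Low", "low"]

def first_token_id(word):
    # first sub-token id of the word (rough approximation for this sketch)
    return tokenizer(word, add_special_tokens=False)["input_ids"][0]

pos_ids = torch.tensor([first_token_id(w) for w in POSITIVE])
neg_ids = torch.tensor([first_token_id(w) for w in NEGATIVE])

def pet_scores(essay):
    prompt = (
        "Student Essay Evaluation: "
        f"Performance on cohesion? {tokenizer.mask_token}. "
        f"Performance on grammar? {tokenizer.mask_token}. "
        f"Essay: {essay}"
    )
    enc = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**enc).logits                      # (1, seq_len, vocab)
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    probs = logits[0, mask_pos].softmax(-1)               # one row per [MASK]
    p_pos = probs[:, pos_ids].sum(-1)
    p_neg = probs[:, neg_ids].sum(-1)
    share = p_pos / (p_pos + p_neg)                       # in (0, 1)
    return 1.0 + 4.0 * share                              # crude map to the 1-5 scale

print(pet_scores("I think students should be able to choose their own classes."))
</code></pre>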
<ul>
<li><strong>Keywords Approach</strong><ul>
<li>Create a set of keywords (words / phrases) by studying Feedback 3 that carries important signal regarding the different targets</li>
<li>Use the <strong><code>Finding Dataset Shortcuts with Grammar Induction</code></strong> approach to identify the keywords</li>
<li>Pre-process the essay by prefixing the identified keywords to the essay. For example:</li></ul></li></ul>
<pre><code> thomas jefferson, much we, always doing, this statement, it is always, a sense, it makes, makes you, more than, what you, you can also, the way, the first reason, thing is, it gives you, you worked, your school, real world, do more, i di, with my, my life, could make you, want to do, such a, give you, opportunity to, you may be, able to, how to, finishing school, you will take, that you learn, our life, in life, also a, trying to accomplish, you become, better person, in conclusion, it helps you, can alway [SEP] [SOE] Thomas Jefferson once states that <……> makes you feel accomplished in life, and shows that you can always strive to do more in you life [EOE]
</code></pre>
<ul>
<li>Multi-scale approach<ul>
<li>Predict target using different granularities of the essay<ul>
<li>Scale 1: mean pooling over entire essay (standard)</li>
<li>Scale 2: mean pooling over paragraphs, followed by LSTM and mean pooling transformation</li>
<li>Scale 3: mean pooling over sentences, followed by LSTM and mean pooling transformation</li></ul></li>
<li>Mean of the 3-scale predictions gives the final score (a minimal sketch of such a head is shown after this list)</li></ul></li>
<li>SETFIT approach<ul>
<li>Two step process<ul>
<li>Step 1: Fine-tuning of a sentence-transformer in contrastive manner</li>
<li>Step 2: Training a regressor using embeddings from step-1 fine-tuned sentence transformer</li></ul></li>
<li>Reference: <a href="https://huggingface.co/blog/setfit" target="_blank">https://huggingface.co/blog/setfit</a></li></ul></li></ul></li>
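<p>A minimal sketch of what the multi-scale head described above could look like (input shapes, module names and the shared regression head are my assumptions, not the team's exact implementation):</p>
<pre><code>import torch
import torch.nn as nn

class MultiScaleHead(nn.Module):
    # scale 1: essay-level mean pooling; scales 2/3: paragraph/sentence mean
    # poolings passed through a BiLSTM and mean-pooled again
    def __init__(self, hidden=1024, n_targets=6):
        super().__init__()
        self.para_lstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.sent_lstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, essay_emb, para_embs, sent_embs):
        # essay_emb: (B, H); para_embs: (B, n_para, H); sent_embs: (B, n_sent, H)
        para_out, _ = self.para_lstm(para_embs)
        sent_out, _ = self.sent_lstm(sent_embs)
        scale1 = self.head(essay_emb)
        scale2 = self.head(para_out.mean(dim=1))
        scale3 = self.head(sent_out.mean(dim=1))
        return (scale1 + scale2 + scale3) / 3.0
</code></pre>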
<li><p><strong>[High Impact] Ensemble</strong></p>
<ul>
<li>Result: hill climbing filtered the N candidate models down to a manageable list while keeping diversity in mind. </li>
<li>Reasoning: Given the large number of diverse experiments trained by our team, it was important to find a subset of diverse models that helped the overall CV. Hill climbing was the perfect approach for this. </li></ul></li>
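<p>For reference, a minimal greedy hill-climbing sketch over out-of-fold predictions (my own illustration; the team's exact weighting and stopping rules may differ):</p>
<pre><code>import numpy as np

def mcrmse(y_true, y_pred):
    # mean of column-wise RMSEs over the six targets
    return np.mean(np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0)))

def hill_climb(oof_preds, y_true, max_models=10):
    # oof_preds: {model_name: (n_samples, 6) out-of-fold predictions}
    # greedily add (with replacement) the model that improves the blend the most
    selected, blend, best = [], None, np.inf
    for _ in range(max_models):
        best_name, best_score = None, best
        for name, pred in oof_preds.items():
            cand = pred if blend is None else (blend * len(selected) + pred) / (len(selected) + 1)
            score = mcrmse(y_true, cand)
            if score < best_score:
                best_name, best_score = name, score
        if best_name is None:
            break
        selected.append(best_name)
        blend = np.mean([oof_preds[n] for n in selected], axis=0)
        best = best_score
    return selected, best
</code></pre>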
<li><p><strong>[High Impact] Pseudo Labelling</strong></p>
<ul>
<li>We carefully curated an unlabelled essay corpus and subsequently sampled essays from that corpus such that PL distribution matches the training distribution as closely as possible. For details, please refer to our 1st place efficiency solution here: <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369646" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369646</a></li>
<li>Result: CV improved by around 0.003</li>
<li>Reasoning: PL helped to distill knowledge from our best ensemble into the low capacity model used for efficiency. PL also has regularizing effect and often leads to better generalization.</li></ul></li>
<li><p><strong>[High Impact] 2 Stage Pseudo Labeling</strong></p>
<ul>
<li>Result: CV improved 0.002 to 0.005</li>
<li>Reasoning and Context: There were 2 sets of pseudo labeling (PL) for the team</li>
<li>Based on FB1 + FB2 - FB3 unlabeled data - Two-stage training works best. <br>
[stage 1] Train a model with PL only, validate 5 times an epoch and save the checkpoint. <br>
[stage 2] Load the checkpoint from stage 1 and fine-tuning on the train set with relatively smaller lr. </li>
<li>Min-dist PL - The only difference is save the last checkpoint of stage 1</li></ul></li>
<li><p><strong>[High Impact] Freezing & Re-initializing</strong></p>
<ul>
<li>Freeze the bottom 6 layers of the model and re-initialize the top layer of the transformer backbone.</li>
<li>Result: CV around 0.001 better</li>
<li>For this task, it was helpful to freeze bottom transformer layers to keep intact lower level syntactic patterns learned by the backbone during its LM pre-training. On the other hand, re-initializing a top layer gave more flexibility to learn task specific signals.</li></ul></li>
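<p>A hedged sketch of the freezing and re-initialization above (module paths assume a DeBERTa backbone loaded via <code>AutoModel</code>; other backbones name their submodules differently):</p>
<pre><code>import torch.nn as nn
from transformers import AutoModel

backbone = AutoModel.from_pretrained("microsoft/deberta-v3-large")

# freeze the bottom 6 encoder layers
for layer in backbone.encoder.layer[:6]:
    for p in layer.parameters():
        p.requires_grad = False

# re-initialize the top encoder layer
def reinit(module):
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=backbone.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.LayerNorm):
        module.weight.data.fill_(1.0)
        module.bias.data.zero_()

backbone.encoder.layer[-1].apply(reinit)
</code></pre>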
<li><p><strong>[High Impact] AWP & EMA</strong></p>
<ul>
<li>Adversarial training via AWP helps the model to become more robust by inducing a flatter weight loss landscape. CV also improved by around 0.001</li>
<li>EMA helps with generalization by keeping track of running average of model weights (similar to SWA)</li></ul></li>
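<p>AWP is more involved, but the EMA part is small enough to sketch (a generic implementation, not necessarily the team's exact one):</p>
<pre><code>import torch

class EMA:
    # keeps an exponential moving average of trainable weights; call update()
    # after each optimizer step and copy_to() before validation/saving
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {n: p.detach().clone()
                       for n, p in model.named_parameters() if p.requires_grad}

    @torch.no_grad()
    def update(self, model):
        for n, p in model.named_parameters():
            if n in self.shadow:
                self.shadow[n].mul_(self.decay).add_(p.detach(), alpha=1.0 - self.decay)

    @torch.no_grad()
    def copy_to(self, model):
        for n, p in model.named_parameters():
            if n in self.shadow:
                p.copy_(self.shadow[n])
</code></pre>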
<li><p><strong>[High Impact] LSTM + MeanMax Pooling</strong></p>
<ul>
<li>Reasoning and Context: Tried different combinations of pooling and heads but LSTM + MeanMax pooling worked.</li></ul></li>
<li><p><strong>[High Impact] 2 Stage Loss and PL Training</strong></p>
<ul>
<li>This methodology was used by some models to increase diversity.</li>
<li>Stage 1: Use loss function X with Pseudo Labels and train for one epoch</li>
<li>Stage 2: Use stage 1 checkpoint. Use another loss function Y for original train data and train for more epochs. Same loss function across stages gave worse CV vs different loss functions. </li></ul></li>
<li><p><strong>[Medium Impact] Grouped differential learning rate</strong></p>
<ul>
<li>Grouped differential learning rate: This is very similar to standard LLRD. Split backbone's encoder layers into 3 groups, set lr per group. [e.g. set 5e-5/2e-5 as base_lr for high level group, base_lr / 2.6 for middle level group, base_lr / 2.6 ** 2 for low level group; for head and LSTM + MeanMax pooling, set 5e-3 or 1e-3]<br>
</li></ul></li>
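<p>A small sketch of building such grouped parameter groups (the attribute names <code>model.backbone</code> and <code>model.head</code> are assumptions for the sketch):</p>
<pre><code>def grouped_parameters(model, base_lr=2e-5, head_lr=1e-3, decay_factor=2.6):
    # split the encoder layers into low / middle / high groups and decay the lr
    layers = model.backbone.encoder.layer               # e.g. 24 layers for -large
    n = len(layers)
    groups = [layers[: n // 3], layers[n // 3: 2 * n // 3], layers[2 * n // 3:]]
    param_groups = [{"params": list(model.backbone.embeddings.parameters()),
                     "lr": base_lr / decay_factor ** 2}]
    for i, group in enumerate(groups):                  # i=0 low ... i=2 high
        lr = base_lr / decay_factor ** (2 - i)
        param_groups.append(
            {"params": [p for layer in group for p in layer.parameters()], "lr": lr})
    param_groups.append({"params": list(model.head.parameters()), "lr": head_lr})
    return param_groups

# optimizer = torch.optim.AdamW(grouped_parameters(model), lr=2e-5)
</code></pre>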
<li><p><strong>[Low Impact] MLM</strong></p>
<ul>
<li>Result: CV improved by roughly 0.001</li>
<li>Reasoning and Context: MLM seemed to only work with deberta-large and FB2 dataset. </li></ul></li>
<li><p><strong>[Low Impact] Adding a new token for <code>\n\n</code></strong></p></li>
<li><p><strong>[Low Impact] Ranking Loss</strong></p>
<ul>
<li>Adding a ranking loss to the standard MSE loss made the model aware of the relative ranking of training examples within a batch</li></ul></li>
</ul>
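<p>As an illustration of the MSE + ranking loss combination mentioned above, here is one possible pairwise variant built on <code>MarginRankingLoss</code> (the pairing scheme and weighting are my assumptions):</p>
<pre><code>import torch
import torch.nn as nn

class MSEWithRankingLoss(nn.Module):
    # MSE plus a pairwise margin-ranking term over all pairs within the batch
    def __init__(self, rank_weight=0.1, margin=0.0):
        super().__init__()
        self.mse = nn.MSELoss()
        self.rank = nn.MarginRankingLoss(margin=margin)
        self.rank_weight = rank_weight

    def forward(self, pred, target):
        # pred, target: (batch_size, n_targets)
        loss = self.mse(pred, target)
        i, j = torch.triu_indices(pred.size(0), pred.size(0), offset=1)
        if i.numel() > 0:
            sign = torch.sign(target[i] - target[j])    # which item should rank higher
            loss = loss + self.rank_weight * self.rank(pred[i], pred[j], sign)
        return loss
</code></pre>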
<h2>What Didn’t Work</h2>
<ul>
<li>Mask Augmentation</li>
<li>Random Augmentation</li>
<li>Auxiliary Target Training - add POS tagging, Statistical feature, … as auxiliary target </li>
<li>Stacking - Add statistical and readability features to apply stacking with OOFs </li>
<li>Paragraph Random Shuffle </li>
</ul>
<h2>Important Citations</h2>
<ul>
<li>Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference (<a href="https://arxiv.org/abs/2001.07676" target="_blank">https://arxiv.org/abs/2001.07676</a>)</li>
<li>Improving and Simplifying Pattern Exploiting Training (<a href="https://arxiv.org/abs/2103.11955" target="_blank">https://arxiv.org/abs/2103.11955</a>)</li>
<li>SetFit: Efficient Few-Shot Learning Without Prompts (<a href="https://huggingface.co/blog/setfit" target="_blank">https://huggingface.co/blog/setfit</a>)</li>
<li>On the Use of Bert for Automated Essay Scoring: Joint Learning of Multi-Scale Essay Representation (<a href="https://aclanthology.org/2022.naacl-main.249/" target="_blank">https://aclanthology.org/2022.naacl-main.249/</a>)</li>
<li>Finding Dataset Shortcuts with Grammar Induction (<a href="https://github.com/princeton-nlp/ShortcutGrammar" target="_blank">https://github.com/princeton-nlp/ShortcutGrammar</a>)</li>
<li>SVR <a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a></li>
</ul>
<h2>Thanks and Acknowledgements</h2>
<p>Thanks to the organizers and Kaggle for a very exciting competition!<br>
Many thanks to my teammates (@conjuring92, <a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a>, <a href="https://www.kaggle.com/harshit92" target="_blank">@harshit92</a>) for such an amazing collaboration and perfect teamwork! I am sure we will be teaming up again for future competitions! </p>
<h2>Team Members</h2>
<p>Raja Biswas <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a><br>
Yao He <a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a><br>
Harshit Mehta <a href="https://www.kaggle.com/harshit92" target="_blank">@harshit92</a><br>
Trushant Kalyanpur <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> </p>
<p>Thanks <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> ! The details of that are actually posted in our efficiency solution here: <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369646" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369646</a>. I can also add a link to this post. </p> |
Feedback Prize - Evaluating Student Writing | 10th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, much thanks to the competition hosts and Kaggle administrators for holding such an interesting competition!<br>
Here is our 10th place solution:</p>
<p><strong>Token classification (NER)</strong><br>
We created 6 deberta-large models and 1 deberta-xlarge model, then took a weighted average of the probability of each token (the weights were chosen according to LB).<br>
The main points are as follows:</p>
<ul>
<li>Various versions of deberta-large models with these modifications:<ul>
<li>Tagging of tokens (begin, inside, outside / begin, inside, end, outside / begin, inside, end, begin-of-outside, inside-of-outside, end-of-outside).</li>
<li>Pretrain with MLM (Masked Language Model) method.</li>
<li>Replace tokens with masked tokens randomly when training.</li>
<li>Add LSTM before classification head.</li>
<li>Higher and lower learning rate.</li>
<li>Multi-task learning of ordinary NER and NER focused on specific discourses.<br>
Additional loss with higher weight on begin tokens.</li></ul></li>
<li>Train deberta-xlarge<ul>
<li>Deberta-xlarge often failed to converge. To make it converge, we used a warm-up scheduler and tried several learning rate settings.</li></ul></li>
<li>Ensemble<ul>
<li>We tried various backbones like Bigbird etc., but found ensemble of deberta-large models achieves the highest score.</li>
<li>In particular, ensembling models with different tokenizers lowered the score when decomposing tokens to char level to take a weighted average of probabilities.</li>
<li>An ensemble of models sharing the same tokenizer therefore seems preferable.</li></ul></li>
<li>Credit<ul>
<li>Architectures of models are based on <a href="https://www.kaggle.com/Abhishek" target="_blank">@Abhishek</a>'s code.</li>
<li>Experiment results of deberta etc. from <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> were very helpful.</li>
<li>Thanks to <a href="https://www.kaggle.com/Abhishek" target="_blank">@Abhishek</a> and <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>, we could focus on trial and error instead of spending time on building model architectures.</li></ul></li>
</ul>
<p><strong>Postprocessing with LGBM</strong><br>
The main points are as follows:</p>
<ul>
<li>First, generate predictionstrings from the results of token classification (this is the same as the public notebooks' "create_submission" function).</li>
<li>Second, create an LGBM model that predicts whether each generated predictionstring is TP or not and outputs the probability of TP.<ul>
<li>The features mainly come from aggregating the token classification probabilities (a small sketch of this aggregation follows below).<br>
75 aggregation features are generated from each ensemble member (5 types of aggregation (min, max, mean, 20th percentile, 80th percentile) * 15 types of tokens (Beginning, Inside * 7 discourses + Outside))</li>
<li>There are about 15 other features, such as the length of the predictionstring, the length of the essay, etc.</li></ul></li>
<li>Finally, filter predictionstrings by their predicted TP probability.</li>
<li>Without this postprocessing: Public 0.716, Private 0.724; with this postprocessing: Public 0.719, Private 0.727</li>
</ul>
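<p>A small sketch of the kind of span-level aggregation features described above (array shapes and the exact aggregation list are assumptions based on the description):</p>
<pre><code>import numpy as np

def span_features(token_probs, start, end):
    # token_probs: (num_tokens, 15) probabilities (B-/I- of 7 discourses + O)
    # start/end: token indices of one candidate predictionstring
    chunk = token_probs[start:end]
    aggs = [
        chunk.min(axis=0),
        chunk.max(axis=0),
        chunk.mean(axis=0),
        np.percentile(chunk, 20, axis=0),
        np.percentile(chunk, 80, axis=0),
    ]
    length = np.array([end - start])                    # one of the "other" feats
    return np.concatenate(aggs + [length])              # 5 * 15 + 1 values
</code></pre>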
<p><strong>Other postprocessing</strong><br>
Following the filtering described in the previous section, these postprocessing methods are applied.</p>
<ul>
<li>Define the starts of predictionstrings as these patterns: a begin token, or a token whose class differs from the previous one (e.g. I-Claim, B-Claim <-).</li>
<li>Calculate mean probability (begin + inside) of predictionstrings then filter predictionstrings by it and length of predictionstrings.</li>
<li>If multiple Lead, Position, or Concluding Statement predictions are found, keep only the one with the highest mean probability.</li>
<li>Apply <a href="https://www.kaggle.com/kaggleqrdl/tensorflow-longformer-ner-postprocessing" target="_blank">link evidence</a> on Evidence, Counterclaim and Rebuttal (much thanks to <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a>!).</li>
</ul> |
Feedback Prize - Evaluating Student Writing | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, I can't filter predictionstrings without a threshold because the output of Xgboost is not 0 or 1, but numeric in the range 0 ~ 1.</p>
<p>Thank you for your comment. CV was measured before PP using the same method as below<br>
<a href="https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633" target="_blank">https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633</a></p>
<ul>
<li>longformer-large-4096 (max_length:2048) : 0.6696</li>
<li>roberta-large (max_length:512) : 0.6098</li>
<li>bart-large (max_length:512) : 0.6073</li>
<li>funnel-transformer-large (max_length:512) : 0.5984</li>
<li>distilbart-cnn-12-6 (max_length:512) : 0.5957</li>
<li>deberta-large (max_length:1024) : 0.6672</li>
</ul>
<p>Yes, we tried, but the Public Score did not increase, so we did not adopt it in the end.</p>
<p>First of all, thanks to competition organizers for hosting this competition.<br>
Thanks also to the community for sharing many ideas in Notebook and Discussion.</p>
<h2>Summary</h2>
<p>We ensembled 6 models and did post process. Next, we extracted candidate predictionstrings and predicted whether each predictionstring was TP or not using XGBoost, etc. Finally, we set a threshold for each class and cut off the right edge of the predictionstrings.</p>
<h2>Models</h2>
<p>We referred to the following notebook.<br><br>
<a href="https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633" target="_blank">https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633</a><br>
<br><br>
The following 6 models were trained with a different max_length depending on the model.</p>
<ul>
<li>longformer-large-4096 (max_length:2048)</li>
<li>roberta-large (max_length:512)</li>
<li>bart-large (max_length:512)</li>
<li>funnel-transformer-large (max_length:512)</li>
<li>distilbart-cnn-12-6 (max_length:512)</li>
<li>deberta-large (max_length:1024)</li>
</ul>
<h2>Ensemble</h2>
<p>Since different models use different tokenizers, the prediction results were first converted to the character level and then re-aggregated into longformer token units.<br>
We ensembled with different weights for each of the following position ranges.</p>
<ol>
<li>0 ~ 512</li>
<li>0 ~ Funnel-transformer-large-token-len</li>
<li>Funnel-transformer-large-token-len ~ 1023</li>
<li>1023 ~ <br><br>
Funnel-transformer-large-token-len is the length of funnel-transformer-large's 512 tokens when converted back to longformer tokenizer units. It was about 720.</li>
</ol>
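<p>A hedged sketch of the character-level conversion described in this section (helper names are mine; it assumes fast tokenizers that return character offset mappings):</p>
<pre><code>import numpy as np

def token_to_char_probs(text, offsets, token_probs):
    # offsets: [(char_start, char_end)] per token; token_probs: (num_tokens, num_classes)
    char_probs = np.zeros((len(text), token_probs.shape[1]), dtype=np.float32)
    for (start, end), probs in zip(offsets, token_probs):
        if end > start:                                 # special tokens map to (0, 0)
            char_probs[start:end] = probs
    return char_probs

def char_to_word_probs(text, char_probs):
    # re-aggregate averaged char-level probabilities to whitespace-split words
    word_probs, pos = [], 0
    for word in text.split():
        start = text.index(word, pos)
        end = start + len(word)
        word_probs.append(char_probs[start:end].mean(axis=0))
        pos = end
    return np.stack(word_probs)
</code></pre>
<p>Once every model's predictions live on the same character grid, a simple (weighted) average per character can be taken before re-aggregating back to longformer tokens or words.</p>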
<h2>Post Process</h2>
<ul>
<li>Multiply each prediction by the coefficient</li>
<li>Ensemble with predictions of neighboring token</li>
<li>Extracted candidates for predictionstrings with reference to the following notebook. These candidates were not only for the class with the highest prediction, but also for the second highest class. Some predictionstrings were also extracted where the start was not a B-token.<br><br>
<a href="https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633" target="_blank">https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633</a></li>
<li>The following 3 models predicted whether predictionstrings were TP or not. The features were created by aggregating the ensemble results for each predictionstring. A threshold was set for each class, and only predictionstrings with predictions above the threshold were kept.<ol>
<li>XGBoost</li>
<li>MLP</li>
<li>LSTM</li></ol></li>
<li>Finally, a predictionstring length threshold and a clipping percentage were set for each class, and the right end of predictionstrings exceeding the threshold was trimmed by the set percentage.</li>
</ul>
<h2>Edit</h2>
<p>Inference notebook<br>
<a href="https://www.kaggle.com/code/columbia2131/exp-038-ensemble-xgb-mlp-lstm-fe-fix-tail" target="_blank">https://www.kaggle.com/code/columbia2131/exp-038-ensemble-xgb-mlp-lstm-fe-fix-tail</a><br>
training code<br>
<a href="https://github.com/TakoiHirokazu/Feedback-Prize-Evaluating-Student-Writing" target="_blank">https://github.com/TakoiHirokazu/Feedback-Prize-Evaluating-Student-Writing</a><br>
post process code<br>
<a href="https://colab.research.google.com/drive/1J2NTaRSKi-X5SBXxvQtVJxokZIW0oq4L?usp=sharing" target="_blank">https://colab.research.google.com/drive/1J2NTaRSKi-X5SBXxvQtVJxokZIW0oq4L?usp=sharing</a><br>
<a href="https://www.kaggle.com/code/columbia2131/tuning-exp038-right-edge-removing/notebook" target="_blank">https://www.kaggle.com/code/columbia2131/tuning-exp038-right-edge-removing/notebook</a></p> |
Feedback Prize - Evaluating Student Writing | 12th place solution - Stretching short 'predictionstring's | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the comment!<br>
I tried to train deberta xlarge at google colab with batch size 1, freezing a few layers at the beginning of the encoder, but couldn't trained it.</p>
<p>Therefore, I borrowed a preemptive instance of A100 at GCE for training deberta xlarge.</p>
<p><a href="https://www.kaggle.com/aman1391" target="_blank">@aman1391</a> <br>
Hi, thanks for the comment.<br>
You can do the first part of the postprocessing by the following steps.</p>
<p>①Define thresholds(only the threshold of 'Claim' is defined here)</p>
<pre><code>claim_thresh_list=[
{
'start':1,
'end':5,
'add_num':1
},
{
'start':6,
'end':10,
'add_num':2
},
{
'start':11,
'end': 20,
'add_num':4
},
]
</code></pre>
<p>②Define a function</p>
<pre><code>def add_token_v4(row, discourse_type, thresh_list: list):
#sort in ascending order
thresh_list=sorted(thresh_list, key=lambda x: x['start'])
#check if the spans are overlapping
count_array=np.zeros(10000)
for d in thresh_list:
count_array[d['start']: d['end']+1]+=1
if (count_array>1).sum()>=1:
raise Exception("spans are overlapping")
pred_len=len(row['predictionstring'].split())
for d in thresh_list:
if (row['class']==discourse_type) and (pred_len>=d['start']) and (pred_len<=d['end']):
max_idx=int(row['predictionstring'].split()[-1])
min_idx=int(row['predictionstring'].split()[0])
fix_pred=' '.join([str(i) for i in range(min_idx, max_idx+d['add_num']+1)])
return fix_pred
#If the predictionstring is not contained in any span, return the original predictionstring
return row['predictionstring']
</code></pre>
<p>③Perform the postprocess<br>
You can postprocess by applying the above function to the df for submission.</p>
<pre><code>submission['predictionstring']=submission.apply(lambda x: add_token_v4(x, 'Claim', claim_thresh_list), axis=1)
</code></pre>
<p>First of all, I would like to thank Kaggle and the organizers for hosting this competition, and everyone who shared useful methods and information.</p>
<p>I've refered to a lot of shared ideas in this competition.</p>
<h2>Solution</h2>
<p>As the published method, I've handled the task as NER.<br>
My solution consists of below two parts.</p>
<ol>
<li>Weighted ensemble of 5 models</li>
<li>Several postprocess (including stretching short 'predictionstring's) </li>
</ol>
<h3>1. Weighted ensemble of 5 models</h3>
<p>My models are based on <a href="https://www.kaggle.com/Abhishek" target="_blank">@Abhishek</a>'s code.</p>
<p>I took a weighted average of below 5 models.<br>
Each model is trained on 5 folds and chosen some folds in submission.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>weight</th>
<th>the amount of used folds</th>
</tr>
</thead>
<tbody>
<tr>
<td>longformer-large</td>
<td>0.175</td>
<td>3</td>
</tr>
<tr>
<td>funnel-large</td>
<td>0.175</td>
<td>3</td>
</tr>
<tr>
<td>deberta-large(seed:77)</td>
<td>0.15</td>
<td>3</td>
</tr>
<tr>
<td>deberta-large(seed:456)</td>
<td>0.15</td>
<td>2</td>
</tr>
<tr>
<td>deberta-xlarge</td>
<td>0.35</td>
<td>5</td>
</tr>
</tbody>
</table>
<h3>2. Several postprocesses</h3>
<p>Mainly, I used below 3 postprocesses.<br>
① <br>
I stretched short 'predictionstring's slightly.<br>
The thresholds are as follows:</p>
<table>
<thead>
<tr>
<th>discourse_type</th>
<th>threshold1<br>(length/degree)</th>
<th>threshold2<br>(length/degree)</th>
<th>threshold3<br>(length/degree)</th>
<th>threshold4<br>(length/degree)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lead</td>
<td>7~13 / +6</td>
<td>14~19 / +12</td>
<td>20~30 / +14</td>
<td>-</td>
</tr>
<tr>
<td>Position</td>
<td>5~15 / +3</td>
<td>16~20 / +2</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Claim</td>
<td>1~5 / +1</td>
<td>6~10 / +2</td>
<td>11~20 / +4</td>
<td>-</td>
</tr>
<tr>
<td>Counterclaim</td>
<td>5~24 / +4</td>
<td>25~37 / +5</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Rebuttal</td>
<td>2~4 / +1</td>
<td>5~13 / +5</td>
<td>14~21 / +7</td>
<td>22~27 / +8</td>
</tr>
<tr>
<td>Evidence</td>
<td>17~20 / +11</td>
<td>21~23 / +14</td>
<td>24~29 / +17</td>
<td>30~36 / +20</td>
</tr>
<tr>
<td>Concluding Statement</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
<p>Basically, I increased the length to be stretched in proportion to the length of the 'predictionstring'.<br>
This process increased TP while decreasing FP, finally improving the public/private by 0.007/0.005.</p>
<p>② <br>
The points are following.</p>
<p>・Modifying 'jn'<br>
'link_evidence' used '-1' to distinguish between each 'predictionstring', <br>
so the result of 'link_evidence' contained '-1' while ground truth don't include '-1'.</p>
<p>Therefore I modified 'jn' which is used to concatenate several 'predictionstring's in 'link_evidence' so that the result of 'link_evidence' don't contain '-1'.</p>
<pre><code>#before
def jn(pst, start, end):
return " ".join([str(x) for x in pst[start:end]])
#after
def jn(pst, start, end):
return " ".join([str(x) for x in pst[start:end] if x !=-1])
</code></pre>
<p>・Applying it on other discourse types<br>
I applied this method on other discourse types excluding 'Claim'.</p>
<p>・Optimizing thresholds<br>
====<br>
The improvement increased public/private by +0.003 /+0.003.(excluding the effect of the default 'link_evidence')</p>
<p>③</p>
<p>Looking at the predictions, <br>
I found that the end of prediction is sometimes a capital letter(e.g. 'This'). <br>
So, if the capital letter in the end of prediction is not a proper noun, I removed it.</p>
<h2>What didn't worked well for me</h2>
<p>・multi-task training using 15 topics<br>
・concatenating last N layers of output of BERT<br>
・several losses(Focal Loss, Dice Loss)<br>
・large batch size</p>
<h2>Resourse</h2>
<p>Colab pro+ (Mainly, V100)<br>
GCE preemptive instance(A100)</p>
<h2>Inference code</h2>
<p>The original code is <a href="https://www.kaggle.com/kurokurob/12th-place-solution-original" target="_blank">here</a>.<br>
After the end of competition, I found some errors in the original code, so I fixed them.<br>
The modified code is <a href="https://www.kaggle.com/code/kurokurob/12th-place-solution-error-fix" target="_blank">here</a>.<br>
<br><br>
Thanks for reading!</p> |
Feedback Prize - Evaluating Student Writing | 15th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the excellent notebooks and discussions from <a href="https://www.kaggle.com/abhishek/two-longformers-are-better-than-1" target="_blank">https://www.kaggle.com/abhishek/two-longformers-are-better-than-1</a> <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> <br>
<a href="https://www.kaggle.com/c/feedback-prize-2021/discussion/308992" target="_blank">https://www.kaggle.com/c/feedback-prize-2021/discussion/308992</a> <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> <br>
<a href="https://www.kaggle.com/librauee/infer-fast-ensemble-models" target="_blank">https://www.kaggle.com/librauee/infer-fast-ensemble-models</a> <a href="https://www.kaggle.com/librauee" target="_blank">@librauee</a> <br>
I learned a lot from their work. This is the second kaggle competition we have participated in, and although we are one short of gold, we are already very satisfied.</p>
<h3>Models</h3>
<ul>
<li>In the final commit, we ensembled 6 deberta-xlarge, 6 longformer-large-4096, 2 funnel-large, 2 deberta-v3-large and 2 deberta-large models. We set the max_length to 1600. We used the Fast Gradient Method (FGM) to improve robustness and Exponential Moving Average (EMA) to smooth training (a standard FGM sketch is shown after this list).</li>
<li>Use optuna to learn all the hyperparameters in the post processing stage.</li>
<li>CV results show that deberta-xlarge(0.7092) > deberta-large(0.7025) > deberta-large-v3(0.6842) > funnel-large(0.6798) = longformer-large-4096(0.6748)</li>
</ul>
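<p>For reference, a standard FGM implementation of the kind mentioned above (a generic version, not necessarily the team's exact code):</p>
<pre><code>import torch

class FGM:
    # perturb the word embeddings along the gradient direction, backprop the
    # adversarial loss, then restore the original embedding weights
    def __init__(self, model, epsilon=1.0, emb_name="word_embeddings"):
        self.model, self.epsilon, self.emb_name = model, epsilon, emb_name
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# typical use inside the training loop:
#   loss.backward()
#   fgm.attack()
#   adv_loss = compute_loss(model, batch)   # placeholder for your forward + loss
#   adv_loss.backward()
#   fgm.restore()
#   optimizer.step(); optimizer.zero_grad()
</code></pre>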
<h3>Post processing</h3>
<ul>
<li><p>Merge consecutive predictions with the same label; for example, we merge [B-Lead, I-Lead, I-Lead], [B-Lead, I-Lead] into one single prediction. We only do this when the label is in ['Lead', 'Position', 'Concluding', 'Rebuttal'], since there are no consecutive predictions for these labels in the training data.</p></li>
<li><p>Filter "Lead" and "Concluding". There is only one Lead label and one Concluding label in almost all the training data, so we only keep the predictions whose score is higher than a threshold. Besides, we found that merging two Leads can increase CV further.</p></li>
</ul>
<pre><code>concluding_df = sorted(concluding_df, key=lambda x: np.mean(x[4]), reverse=True)
new_begin = min(concluding_df[0][3][0], concluding_df[1][3][0])
new_end = max(concluding_df[0][3][-1], concluding_df[1][3][-1])
</code></pre>
<ul>
<li><p>Since the score is based on the overlap between prediction and ground truth, we extend the predictions from word_list[begin:end] to word_list[begin - 1: end + 1], hoping the extended predictions better hit the ground truth and cross the 50% threshold.</p></li>
<li><p>Scaling. The probabilities of each token are multiplied by a factor. The factors are obtained through genetic algorithm search.</p></li>
<li><p>There were some other attempts that didn't work well. These attempts are included in the inference notebook.</p></li>
</ul>
<h3>Code</h3>
<ul>
<li>Inference and Post-processing: <a href="https://www.kaggle.com/telmazzzz/fb-inference-0-711-final?scriptVersionId=90185445" target="_blank">https://www.kaggle.com/telmazzzz/fb-inference-0-711-final?scriptVersionId=90185445</a></li>
<li>Training: <a href="https://github.com/TelmaZzzz/FeedBack" target="_blank">https://github.com/TelmaZzzz/FeedBack</a></li>
</ul> |
Feedback Prize - Evaluating Student Writing | 1st solution with code(cv:0.748 lb:0.742) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In stage 1, we used bert for token prediction. and In stage 2, we also tried to use bert for sentence prediction, but the result was not good, so we did not use it in the end</p>
<p>Features we have made some conventional statistical features for the current sentence and before and after the sentence. There is nothing special. I think our core strength is not features, but how to get and process candidate samples.</p>
<p>A longformer-large model requires 4 A100s to train for about 5 hours, and a deberta-xxlarge model requires 4 A100s to train for about 24 hours</p>
<p>You can refer to the after_deal function in this code <a href="https://www.kaggle.com/wht1996/feedback-longformer-5fold-0-697" target="_blank">https://www.kaggle.com/wht1996/feedback-longformer-5fold-0-697</a></p>
<p>we tried back translation, but it didn't work</p>
<p>you can refer to <a href="https://www.kaggle.com/hengck23/1-birdformer-1-longformer-one-fold" target="_blank">https://www.kaggle.com/hengck23/1-birdformer-1-longformer-one-fold</a></p>
<p>yes, LightGBM</p>
<p>I think it is Bert (low threshold, high recall) -> span manipulation -> span selection</p>
<p>Yes, we used A100 for training</p>
<p>Because the intersection of prediction and label only needs to be greater than 50%, if the probabilities at the two boundaries of a sample are very high, I can be sure that the sample will not be longer, so I can make it shorter to improve robustness.</p>
<p>It's similar to the simple average. A little optimization is added. See the code for details.<br>
<a href="https://www.kaggle.com/wht1996/feedback-nn-train" target="_blank">https://www.kaggle.com/wht1996/feedback-nn-train</a></p>
<pre><code>if i == 0:
preds = logits
else:
if i == (loops-1):
if i==1:
inter_length = loops_end[0]-loops_start[-1]
weight = torch.floor_divide(torch.arange(inter_length), inter_length-1).reshape(1,-1,1).to(device)
intersection = preds[:,start:]*(1-weight) + logits[:,:inter_length]*(weight)
preds = torch.cat([preds[:,:start],intersection,logits[:,inter_length:]],dim=1)
else:
preds = torch.cat([preds[:,:start],logits],dim=1)
else:
preds = torch.cat([preds,logits[:,duplicate_cnt:]],dim=1)
</code></pre>
<p>I don't quite understand what "w/ length and score filtering" means; you can refer to <a href="https://www.kaggle.com/wht1996/feedback-two-stage-cv0-747" target="_blank">https://www.kaggle.com/wht1996/feedback-two-stage-cv0-747</a></p>
<p>like this<a href="https://postimg.cc/mPWM0XQv" target="_blank"><img src="https://i.postimg.cc/y8kP32W8/image.png" alt="image.png"></a></p>
<p>First of all, I would like to thank the competition organizers for this great competition. At the same time, I am very grateful to the many fellow Kagglers who provided very good notebooks and discussions. I learned a lot from them and applied it to our final solution. </p>
<p>In this competition, because each token needs to be combined into a sentence (span) for the final result, we divided our solution into two stages in order to reduce post-processing.</p>
<h1>stage 1: bert token prediction</h1>
<p>First thanks to the excellent notebooks and discussions <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> and others<br>
<a href="https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633" target="_blank">https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633</a> <br>
<a href="https://www.kaggle.com/abhishek/two-longformers-are-better-than-1" target="_blank">https://www.kaggle.com/abhishek/two-longformers-are-better-than-1</a><br>
<a href="https://www.kaggle.com/hengck23/1-birdformer-1-longformer-one-fold" target="_blank">https://www.kaggle.com/hengck23/1-birdformer-1-longformer-one-fold</a><br>
<a href="https://www.kaggle.com/c/feedback-prize-2021/discussion/297591" target="_blank">https://www.kaggle.com/c/feedback-prize-2021/discussion/297591</a><br>
<a href="https://www.kaggle.com/c/feedback-prize-2021/discussion/308992" target="_blank">https://www.kaggle.com/c/feedback-prize-2021/discussion/308992</a><br>
etc…</p>
<p>We tried various pretrained models. Since the max length of some models is 512, for these models we chose a method of segmented prediction and splicing. Finally we chose longformer-large, roberta-large, deberta-xxlarge, distilbart_mnli_12_9 and bart_large_finetuned_squadv1 for the ensemble. This stage takes 7 hours online, with a <strong>cv score of 0.712</strong> and <strong>lb score of 0.706</strong> with post-processing.</p>
<p>here is each pretrain model cv score:<br>
<a href="https://postimg.cc/c6hTg2qd" target="_blank"><img src="https://i.postimg.cc/wT8rPY0R/1647309894393-e37bd0b0-7aac-45e6-a7d0-d749e3a10fb2.png" alt="1647309894393-e37bd0b0-7aac-45e6-a7d0-d749e3a10fb2.png"></a><br>
We put the bert training code <a href="https://www.kaggle.com/wht1996/feedback-nn-train" target="_blank">here</a>, Because the kaggle online resources are insufficient, you need to copy it to your own machine for training</p>
<h1>stage 2: lgb sentence prediction</h1>
<p>Thanks <a href="https://www.kaggle.com/chasembowers" target="_blank">@chasembowers</a> for the excellent notebook <a href="https://www.kaggle.com/chasembowers/sequence-postprocessing-v2-67-lb" target="_blank">https://www.kaggle.com/chasembowers/sequence-postprocessing-v2-67-lb</a></p>
<p>We first recall as many candidate samples as possible by lowering the threshold. On the training set, we recall three million samples to achieve a mean recall of 95%; the recalls for each class are </p>
<table>
<thead>
<tr>
<th>class</th>
<th>recall</th>
</tr>
</thead>
<tbody>
<tr>
<td>Claim</td>
<td>0.938</td>
</tr>
<tr>
<td>Concluding Statement</td>
<td>0.972</td>
</tr>
<tr>
<td>Counterclaim</td>
<td>0.906</td>
</tr>
<tr>
<td>Evidence</td>
<td>0.974</td>
</tr>
<tr>
<td>Lead</td>
<td>0.970</td>
</tr>
<tr>
<td>Position</td>
<td>0.928</td>
</tr>
<tr>
<td>Rebuttal</td>
<td>0.895</td>
</tr>
</tbody>
</table>
<p>In addition, after getting the recalled samples, we select samples with a high boundary threshold and take the 65% of the length with the highest probability for the current class as a new sample; this method helps improve the score by about 0.008. Finally, we made about 170 features for LGB training and selected some samples as the final submission. This stage takes 1.5 hours online, with a <strong>cv score 0.748</strong> and <strong>lb score 0.742</strong>.</p>
<p>We tested our lgb on the 5 fold longformer model, and the score increased from <a href="https://www.kaggle.com/wht1996/feedback-longformer-5fold-0-697" target="_blank">0.697</a> to <a href="https://www.kaggle.com/wht1996/feedback-two-stage-lb0-727" target="_blank">0.727</a>. Because the lgb was not trained on these predictions, the improvement is lower than it would actually be. At the same time, we uploaded our model ensemble results <a href="https://www.kaggle.com/wht1996/feedback-two-stage-cv748" target="_blank">here</a>. If you are interested, you can replace the prediction results with your own to see how much the cv score can improve.</p>
<h1>Summarize</h1>
<p><strong>Useful attempts:</strong><br>
1. adversarial learning (awp/fgm): cv increase 0.01, lb 5-fold ensemble increase 0.003.<br>
2. model ensemble: single model lb 0.691, model ensemble 0.706; the longformer and deberta ensemble increased the most.<br>
3. lgb sentence prediction: cv increase 0.036, lb increase 0.036; within this, selecting samples with a high boundary threshold and taking the 65% of the length with the highest probability for the current class as a new sample added 0.008</p>
<p><strong>Useless attempts:</strong><br>
1. adding paragraph information to the input<br>
2. back translation<br>
3. adjusting the weight according to the position of the sentence in which the word is located<br>
4. training the lgb model with the overlap percentage as the label<br>
5. using bert for stage 2 prediction and ensembling it with lgb</p>
<p><strong>code:</strong><br>
longformer train: <a href="https://www.kaggle.com/wht1996/feedback-nn-train" target="_blank">https://www.kaggle.com/wht1996/feedback-nn-train</a><br>
lgb train: <a href="https://www.kaggle.com/wht1996/feedback-lgb-train" target="_blank">https://www.kaggle.com/wht1996/feedback-lgb-train</a><br>
5fold longfomer with post-processing(lb 0.697): <a href="https://www.kaggle.com/wht1996/feedback-longformer-5fold-0-697" target="_blank">https://www.kaggle.com/wht1996/feedback-longformer-5fold-0-697</a><br>
5fold longfomer with lgb(lb 0.727): <a href="https://www.kaggle.com/wht1996/feedback-two-stage-lb0-727" target="_blank">https://www.kaggle.com/wht1996/feedback-two-stage-lb0-727</a><br>
cv ensemble with lgb(cv 0.747): <a href="https://www.kaggle.com/wht1996/feedback-two-stage-cv0-747" target="_blank">https://www.kaggle.com/wht1996/feedback-two-stage-cv0-747</a></p>
<p>Our code and data is published on GitHub <a href="https://github.com/antmachineintelligence/Feedback_1st" target="_blank">here</a></p>
<p>For how to recall samples and how to generate new samples from the boundaries, please refer to the code <a href="https://www.kaggle.com/wht1996/feedback-two-stage-cv0-747" target="_blank">https://www.kaggle.com/wht1996/feedback-two-stage-cv0-747</a></p>
<p>here is the lgb train code <a href="https://www.kaggle.com/wht1996/feedback-lgb-train" target="_blank">https://www.kaggle.com/wht1996/feedback-lgb-train</a></p>
<p>If you recall samples and build features according to <a href="https://www.kaggle.com/wht1996/feedback-two-stage-cv0-747" target="_blank">https://www.kaggle.com/wht1996/feedback-two-stage-cv0-747</a>, you can directly train a tree model of your own, so we don't provide additional training code for the tree model. Of course, I can also tidy up our lgb model training code if needed.<br>
btw, our lgb model params are</p>
<pre><code>params = {
'boosting': 'gbdt',
'objective': 'binary',
'metric': {'auc'},
'num_leaves': 15,
'min_data_in_leaf': 30,
'max_depth': 5,
'learning_rate': 0.03,
"feature_fraction": 0.7,
"bagging_fraction": 0.7,
'min_data_in_bin':15,
"lambda_l1": 5,
'lambda_l2': 5,
"random_state": 1996,
"num_threads": num_jobs,
}
</code></pre>
<p>One is the id of the training set and the other is the id of the validation set</p> |
Feedback Prize - Evaluating Student Writing | 26th Place Solution: datasaurus, ln & Tom | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I’d like to thank the organisers for a great dataset and competition, and I’d especially like to thank my teammates <a href="https://www.kaggle.com/nazarov" target="_blank">@nazarov</a> and <a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a> who I have learnt a huge amount from.</p>
<h1>CV Strategy</h1>
<p>We used a 5 fold <code>StratifiedGroupKFold</code> method grouped by text id. Initially I was not validating with post-processing, but since PP was a major part of the final score I switched to use the <code>min_thresh</code> and <code>proba_thresh</code> methods used in the public notebooks. Initially our CV-LB agreement was good, but it seemed to diverge past the 0.710 LB boundary.</p>
<h1>Models</h1>
<p>Like most other teams, we quickly found that although Longformer and BigBird were convenient, they were quite inefficient and models like deberta seemed to perform much better. <a href="https://www.kaggle.com/nazarov" target="_blank">@nazarov</a> did a great job of building a pipeline to split and merge the long sequences using weighted overlaps (Tom also had a similar method before we merged) that enabled us to use the following models:</p>
<ul>
<li>deberta-large CV: ~0.697 (<- best single model public LB 0.704)</li>
<li>deberta-xlarge CV: ~0.698-0.700</li>
<li>bart-large CV: ~0.680-0.692</li>
<li>muppet-large CV: 0.670</li>
<li>luke-large CV: 0.691</li>
</ul>
<h1>Training strategy</h1>
<h2>Parameters</h2>
<p>Typically trained for 5 epochs. We found that the results were highly sensitive to batch size, and BS=4 seemed to be a sweet spot, using a LR=1e-5 & cosine decay. I was able to train deberta-xlarge on my GPU using DeepSpeed, BS=2 and gradient accumulation=2.</p>
<p>Our models also used multisample dropout and my model used an attention layer in the head. We also trained a few models using the BIEO scheme. We also also replaced all the "\xa0" characters with a space.</p>
<h2>Loss</h2>
<p>We used a range of loss functions for diversity, Cross Entropy, a modified Dice Loss (since Dice and F1 are equivalent) and Tom was using <a href="https://pytorch-crf.readthedocs.io/en/stable/" target="_blank">Conditional Random Fields (CRF)</a></p>
<h2>Augmentation</h2>
<p>Random token masking with probs in the 0.03-0.07 range.</p>
<h1>Post-processing</h1>
<p>The only methods we used were the <code>min_thresh</code> and <code>proba_thresh</code> methods which were optimised using OOF predictions, the link evidence method and a method to ensure that there was only a single “Lead” prediction. Tom also made a heroic effort in the last few hours of the competition to use the essay topic clusters in PP, but we ran out of time. Unfortunately, we missed the gradient boosted trees post processing notebook.</p>
<h1>Ensemble</h1>
<p>The ensemble was tricky since the CRF method after the Viterbi decoding outputted hard labels instead of logits/probabilities. We found that taking the simple mean of the OHE CRF predictions with the softmax probabilities of the other models worked better than linear or 2nd stage stacking models. </p>
<p>I also managed to get a pipeline to create accurate token to word mappings which enabled the majority of the prediction splitting using tensors instead of expensive for/while loops. This gave quite a bit of speed up over the public notebook methods and allowed us to ensemble more models in the time limit.</p>
<p>To choose the final ensemble models, I used a similar technique I used in CommonLit, where I start with a basket of all the models, and iteratively remove one model at a time based on how much it impacts the CV F1 score, until I have a basket of the most informative models that will fit into the time limit.</p>
<p>Our final two subs used:</p>
<ul>
<li>6 models (2x deberta-xlarge, deberta-large-mnli, deberta-large (CRF & BIEO), bart-large (CRF), muppet-large), using 3 out of 5 folds (18 checkpoints). CV: 0.716, Public: 0.710, Private: 0.723</li>
<li>11 models (as above plus a few more variants of deberta & bart), using 2 out of 5 folds (22 checkpoints), CV: 0.717, Public: 0.710, Private: 0.721</li>
</ul>
<h1>What did not work</h1>
<ul>
<li><a href="https://arxiv.org/pdf/2010.02394.pdf" target="_blank">Mixup Transformers</a></li>
<li>Deberta versions 2 & 3 (we built a custom wrapper for sentencepiece to get offset mappings)</li>
<li>Using essay topic clusters as either an additional target or input features</li>
<li>2nd stage stacking models</li>
<li>Larger batch sizes</li>
</ul>
<h1>Final words</h1>
<p>I’d like to again thank my amazing teammates and I really enjoyed the competition.</p>
<p>Unfortunately my long wait for the GM title will have to be a little longer since I will be taking a 6-7 month break from Kaggle to thru-hike the full Pacific Crest Trail 🎒️. I’ll see you all again on the leaderboard towards the end of the year 💪️</p>
<p>We used <code>studio-ousia/luke-large</code>. I seem to remember there was no fast tokenizer, so we just used a regular <code>roberta-large</code> tokenizer.</p> |
Feedback Prize - Evaluating Student Writing | 2nd Place - Weighted Box Fusion and Post Process | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We published our inference code <a href="https://www.kaggle.com/cdeotte/2nd-place-solution-cv741-public727-private740" target="_blank">here</a>. And we made all trained models (i.e. Kaggle datasets) public.</p>
<p><a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> thanks for all your great work at Hugging Face. The NLP library <code>transformers</code> and model repository are fantastic. We also used your code to convert deberta-v2 and v3 slow tokenizers to fast tokenizers. Thanks.</p>
<p>ha, good question. The "CV score" is the score when training with 10 K-Fold. First we train with 10 K-Fold, and make note of the learning rate and number of epochs. Second we trained a second time using 100% data. So actually it's two models. The first model was K-Fold and got CV 706. The second model used the same hyperparameters and trained on 100% data and got Public 710 Private 721</p>
<p>Thanks! I could not have done it without my awesome teammates Chun Ming Lee and Udbhav Bamba</p>
<p>It should work now. It will display a quick save and we're running now too. We will also share the datasets later today. Congratulations CroDoc and team for you great finish and big jump in last few days!</p>
<h1>Second Place Solution</h1>
<p>Thank you Georgia State University, The Learning Agency Lab, and Kaggle for an awesome competition. The data was high quality and interesting. The metric and train test split were well done. This competition was a success and will benefit a good cause.</p>
<p>Teaming with Chun Ming Lee and Udbhav Bamba has been wonderful. I learned more about NLP than if I had taken an online course. These two Kagglers are grandmasters at NLP and I now know how to use HuggingFace Trainer efficiently and effectively. I also know how to customize models and losses. Thank you <a href="https://www.kaggle.com/leecming" target="_blank">@leecming</a> <a href="https://www.kaggle.com/ubamba98" target="_blank">@ubamba98</a> </p>
<h1>Solution Summary</h1>
<p>The secret sauce to our solution is powerful <strong>post process</strong> by Chun Ming (boost CV LB!), a huge <strong>variety of models</strong> implemented by Udbhav (many long sequence models!), and <strong>weighted box fusion</strong> by Chris (used in my previous comp <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/307609" target="_blank">here</a>). Everything was fine tuned on local CV and achieved 2nd Place $35,000!</p>
<table>
<thead>
<tr>
<th>Hugging Face Model</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
<th>special</th>
</tr>
</thead>
<tbody>
<tr>
<td>microsoft/deberta-large</td>
<td>706</td>
<td>710</td>
<td>721</td>
<td>trained with 100% train data</td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>699</td>
<td>706</td>
<td>719</td>
<td>add lstm, add jaccard loss</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large</td>
<td>694</td>
<td>697</td>
<td>709</td>
<td><a href="https://www.kaggle.com/nbroad/deberta-v2-3-fast-tokenizer" target="_blank">convert slow tokenizer to fast</a></td>
</tr>
<tr>
<td>microsoft/deberta-xlarge</td>
<td>708</td>
<td>704</td>
<td>713</td>
<td></td>
</tr>
<tr>
<td>microsoft/deberta-v2-xlarge</td>
<td>699</td>
<td>700</td>
<td>716</td>
<td><a href="https://www.kaggle.com/nbroad/deberta-v2-3-fast-tokenizer" target="_blank">convert slow tokenizer to fast</a></td>
</tr>
<tr>
<td>allenai/longformer-large-4096</td>
<td>702</td>
<td>705</td>
<td>716</td>
<td>add lstm head</td>
</tr>
<tr>
<td><a href="https://github.com/ccdv-ai/convert_checkpoint_to_lsg" target="_blank">LSG converted roberta</a></td>
<td>703</td>
<td>702</td>
<td>714</td>
<td>convert 512 roberta to 1536</td>
</tr>
<tr>
<td>funnel-transformer/large</td>
<td>688</td>
<td>689</td>
<td>708</td>
<td></td>
</tr>
<tr>
<td>google/bigbird-roberta-base</td>
<td>675</td>
<td>676</td>
<td>692</td>
<td>train 1024 infer 1024</td>
</tr>
<tr>
<td>uw-madison/yoso-4096</td>
<td>652</td>
<td>655</td>
<td>668</td>
<td>lsh_backward=False</td>
</tr>
</tbody>
</table>
<h1>Weighted Box Fusion - CV 741, Public 727, Private 740</h1>
<p>Our final solution contained the 10 models listed above. We included 3 out of 10 K-folds for each of the 10 models. Our 8 hour 30 minute submission inferred 27 models! (we removed 3 folds from 30)</p>
<p>Weighted Box Fusion by <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> from his GitHub <a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">here</a> was the magic that combined all the model predictions together. Individual models had average CV around 700 and the WBF ensemble achieved CV 741. (Folds from same models were averaged before WBF).</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Mar-2022/wbf.png" alt=""></p>
<p>Weighted Box Fusion can do something that no other ensemble method can do. If one model has prediction string <code>8 9 10 11</code> and another model has <code>10 11 12 13</code>. Then weighted box fusion takes the average of the two starts, <code>(8 + 10)/2 = 9</code> and the average of the two ends, <code>(11 + 13)/2 = 12</code> resulting in <code>9 10 11 12</code>. Furthermore, WBF reads <code>submission.csv</code> files and doesn't care what tokenizer you used, so its easy to use. (Note we add an extra column to submission dataframe with span confidence).</p>
<p>If we averaged the token/word probabilities, or used a voting classifier, we would either get the union <code>8 9 10 11 12 13</code> or the intersection <code>10 11</code>. Neither of which we want. Also averaging two "BIO" predictions gives a new prediction with two different "B" which we do not want either.</p>
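<p>To make the mechanics concrete, here is a minimal sketch (an assumed illustration, not the exact wrapper from our notebook) of fusing 1-D word spans with ZFTurbo's <code>ensemble_boxes</code> package by encoding each span as a thin 2-D box. The class ids, example spans, <code>MAX_WORDS</code> constant and IoU threshold are placeholders.</p>
<pre><code># Hedged sketch: fuse word-index spans from two models with WBF.
from ensemble_boxes import weighted_boxes_fusion

MAX_WORDS = 2048                        # normalization constant for word indices
CLASS_IDS = {"Lead": 0, "Claim": 1}     # etc.

def spans_to_boxes(spans):
    # spans: list of (class_name, start_word, end_word, confidence)
    boxes, scores, labels = [], [], []
    for cls, start, end, conf in spans:
        # encode the 1-D span as a thin 2-D box so WBF can handle it
        boxes.append([start / MAX_WORDS, 0.0, (end + 1) / MAX_WORDS, 1.0])
        scores.append(conf)
        labels.append(CLASS_IDS[cls])
    return boxes, scores, labels

model_a = [("Lead", 8, 11, 0.9)]        # predicts words 8..11
model_b = [("Lead", 10, 13, 0.8)]       # predicts words 10..13
boxes_list, scores_list, labels_list = zip(*[spans_to_boxes(s) for s in (model_a, model_b)])

boxes, scores, labels = weighted_boxes_fusion(
    list(boxes_list), list(scores_list), list(labels_list),
    weights=[1, 1], iou_thr=0.3, skip_box_thr=0.0)

for (x1, _, x2, _), score, label in zip(boxes, scores, labels):
    start = round(x1 * MAX_WORDS)
    end = round(x2 * MAX_WORDS) - 1
    print(int(label), start, end, score)   # fused span covers roughly words 9..12
</code></pre>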
<h1>Post Process</h1>
<p>Chun Ming is the mastermind behind our team's strong PP. We applied the same PP to each model before WBF. Our PP code is posted <a href="https://www.kaggle.com/cdeotte/2nd-place-solution-cv741-public727-private740" target="_blank">here</a> and Chun Ming will now describe it. Our PP significantly boosted CV LB!</p>
<p>We applied heavy post-processing to the word-level soft predictions each model made (after averaging token probabilities across folds and before the WBF ensembler). Overall CV was improved by ~.008. </p>
<p>In order of descending impact on CV: </p>
<ol>
<li><p>Repairing span predictions<br>
Since we trained using cross-entropy loss at the token level, the raw predictions would often result in broken spans. For example, given a chain of Lead token predictions, there might be a misprediction of an “Other” in the middle. We had a rule to convert that “Other” back to Lead (a minimal sketch follows this list). </p></li>
<li><p>Discourse-specific rules<br>
We came up with common sense heuristics for the discourses. For example, for Lead, Position, and Concluding Statement - there should only be a maximum of one each for a text. We’d predict the most probable candidate for each or merge close duplicate spans. </p></li>
<li><p>Adjusting lengths of predicted spans<br>
We adjusted lengths of spans based on their original predicted length. For example, for a predicted Evidence span that was less than 45 words long, we’d shift the start of the predicted span back by 9 words. These rules and the improvement in LB/CV suggest we were taking advantage of the evaluation metric only requiring a 50% overlap in prediction and label to be counted as correct.</p></li>
</ol>
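<p>As referenced in rule 1, here is a minimal illustration of the span-repair idea (not our exact rules; <code>max_gap</code> is a placeholder): short runs of "O" sandwiched between predictions of the same class are converted back to that class.</p>
<pre><code>def repair_spans(word_labels, max_gap=2):
    # word_labels: one class label per word, e.g. "Lead", "Claim" or "O"
    labels = list(word_labels)
    i = 0
    while i < len(labels):
        if labels[i] == "O":
            j = i
            while j < len(labels) and labels[j] == "O":
                j += 1
            # short gap bounded on both sides by the same class? then fill it in
            if 0 < i and j < len(labels) and labels[i - 1] == labels[j] and (j - i) <= max_gap:
                labels[i:j] = [labels[j]] * (j - i)
            i = j
        else:
            i += 1
    return labels

print(repair_spans(["Lead", "Lead", "O", "Lead", "Lead", "O", "O", "O", "Claim"]))
# ['Lead', 'Lead', 'Lead', 'Lead', 'Lead', 'O', 'O', 'O', 'Claim']
</code></pre>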
<p>It’s not sexy but sitting down and eye-balling model predictions and comparing them with ground truths using a tool like displacy was rewarding in helping us better understand the task, how our models were behaving, and how to improve model predictions. </p>
<h1>Model Details</h1>
<p>Surprisingly, many more models than just Longformer and BigBird could be trained with sequence lengths greater than 512. All our models were trained with sequence length 1536 and inferred at 1536 (except BigBird-base at 1024).</p>
<p>The model DeBERTa accepts any input size, so we just train with 1536 as is. </p>
<p>The model Funnel-transformer can accept any size after updating its config.</p>
<pre><code>from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    'funnel-transformer/large', num_labels=15,
    max_position_embeddings=1536)
</code></pre>
<p>For BigBird, we used full attention</p>
<pre><code>model = AutoModelForTokenClassification.from_pretrained(
'google/bigbird-roberta-base', num_labels=15,
attention_type="original_full")
</code></pre>
<p>For YOSO, we disabled <code>lsh_backward</code></p>
<pre><code>model = AutoModelForTokenClassification.from_pretrained(
'uw-madison/yoso-4096', num_labels=15,
lsh_backward=False)
</code></pre>
<h1>Model Training</h1>
<p>Thank you Nvidia for providing us some V100 32GB GPUs and A100 40GB GPUs. For each model, we trained 6 out of 10 KFolds and built our CV from the resultant 60% train data OOF. Each fold would take approximately 4 hours on 1xGPU and required between 3 and 5 epochs to maximize val score. </p>
<p>For two models, we used my train public notebooks <a href="https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633" target="_blank">here</a> and <a href="https://www.kaggle.com/cdeotte/pytorch-bigbird-ner-cv-0-615" target="_blank">here</a>. For the other eight models, we used Hugging Face Trainer. Most models, used the following parameters. And we added <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> Kaggle metric <a href="https://www.kaggle.com/robikscube/student-writing-competition-twitch-stream" target="_blank">here</a> and our PP directly to the Trainer evaluation, so we could watch the Kaggle metric score every 500 steps and save the model weights with best Kaggle score.</p>
<p>Notice how we use FP16, gradient_accumulation_steps, and gradient_checkpointing. With these settings, you can basically train any size model on any GPU including Kaggle's P100 16GB GPU.</p>
<pre><code>from transformers import TrainingArguments, Trainer

# NAME is the output directory (the first positional argument of TrainingArguments);
# model, datasets, data_collator, KaggleMetric and SaveBestModelCallback are defined elsewhere.
args = TrainingArguments( NAME,
evaluation_strategy = 'steps',
eval_steps = 500,
dataloader_num_workers=8,
warmup_ratio=0,
lr_scheduler_type = 'linear',
learning_rate = 2e-5,
log_level = 'warning',
fp16 = True,
per_device_train_batch_size = 2,
per_device_eval_batch_size = 2,
gradient_accumulation_steps = 2,
gradient_checkpointing = True,
num_train_epochs = 5,
save_strategy = 'no',
save_total_limit = 1)
trainer = Trainer(model,
args,
train_dataset=train_dataset,
eval_dataset=valid_dataset,
compute_metrics=KaggleMetric(valid_df, valid_dataset),
callbacks=[SaveBestModelCallback],
data_collator=data_collator,
tokenizer=tokenizer)
</code></pre>
<h1>Inference Code with PP with WBF</h1>
<p>We posted our final submission inference code on Kaggle <a href="https://www.kaggle.com/cdeotte/2nd-place-solution-cv741-public727-private740" target="_blank">here</a>. It includes our post process and weighted box fusion implementation. Enjoy!</p>
<h1>Training Code on GitHub</h1>
<p>Our training code is published on GitHub <a href="https://github.com/ubamba98/feedback-prize" target="_blank">here</a></p>
<p>Thanks <a href="https://www.kaggle.com/jacob34" target="_blank">@jacob34</a>. I'm excited to enter the competitions top 10. My favorite things on Kaggle are discussions and notebooks, but recently I've been working hard on competitions.</p>
<p>Thanks Jingyuan</p>
<p>Post processing made a large difference. Probably around +0.010</p>
<p>Thanks FPiotro</p> |
Feedback Prize - Evaluating Student Writing | 33rd Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the competition host and all competitors. In this competition I gained a lot of insights and knowledge about NLP.<br>
In particular, it was my first time tackling an NLP competition on Kaggle and doing token classification.</p>
<p>I built on my baseline below. I mainly describe the different and important points here.</p>
<ul>
<li>training<br>
<a href="https://www.kaggle.com/ytakayama/train-pytorch-longformer-5fold-forgooglecolab" target="_blank">https://www.kaggle.com/ytakayama/train-pytorch-longformer-5fold-forgooglecolab</a></li>
<li>inference<br>
<a href="https://www.kaggle.com/ytakayama/infer-pytorch-longformer-5fold" target="_blank">https://www.kaggle.com/ytakayama/infer-pytorch-longformer-5fold</a></li>
</ul>
<h2>training</h2>
<ul>
<li><p>preprocess: replace line feeds (LF) with the sep token.<br>
Line feeds seem to be important for judging the discourse type, because they often mark the point where the discourse type changes.</p></li>
<li><p>augmentation: replace 10% of the tokens of each document (except special tokens) with mask tokens (a sketch is shown after this list)<br>
<a href="https://www.kaggle.com/spidermandance/masking-feedback-prize" target="_blank">https://www.kaggle.com/spidermandance/masking-feedback-prize</a></p></li>
<li><p>multi sample dropout: the same as the baseline</p></li>
<li><p>model: BERT backbone + 15 class output(NER) the same as baseline</p></li>
<li><p>optimizer: AdamW,</p></li>
<li><p>weight_decay =0.01(except LayerNorm and bias)</p></li>
<li><p>lr scheduler: linear warmup(warmup rate:0.05)</p></li>
<li><p>epoch: 6</p></li>
<li><p>loss function: Cross Entropy Loss(label smoothing=0.1)<br>
label smoothing is effective.</p></li>
<li><p>max_length(inference): 1536</p></li>
<li><p>cv: KFold(n_splits=5)</p></li>
<li><p>batch size * gradient accumulation steps: fixed to 4.<br>
I trained all models on a single V100/A100 GPU (Google Colab Pro+); to prevent OOM I used gradient accumulation. I got an A100 only when training deberta-large. For most models the batch size is 1 and the accumulation step is 4.</p></li>
</ul>
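<p>As mentioned in the augmentation bullet, here is a rough sketch (an assumed implementation, not the exact code from the referenced notebook) of replacing about 10% of the non-special tokens of an example with the mask token:</p>
<pre><code>import torch

def mask_augment(input_ids, tokenizer, p=0.10):
    # input_ids: 1-D LongTensor for a single (already tokenized) document
    ids = input_ids.clone()
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(ids.tolist(), already_has_special_tokens=True),
        dtype=torch.bool)
    probs = torch.full(ids.shape, p)
    probs[special] = 0.0                       # never mask special tokens
    selected = torch.bernoulli(probs).bool()
    ids[selected] = tokenizer.mask_token_id
    return ids
</code></pre>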
<h1>detail about models</h1>
<p>Including the post process described in the next section</p>
<table>
<thead>
<tr>
<th>No</th>
<th>model</th>
<th>max length※</th>
<th>max lr</th>
<th>cv</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>allenai/longformer-large-4096</td>
<td>1536</td>
<td>1e-5</td>
<td>0.6814</td>
<td>0.683</td>
<td>0.694</td>
</tr>
<tr>
<td>2</td>
<td>funnel-transformer/large</td>
<td>1536</td>
<td>8e-6</td>
<td>0.6926</td>
<td>0.698</td>
<td>0.709</td>
</tr>
<tr>
<td>3</td>
<td>microsoft/deberta-large</td>
<td>1536</td>
<td>1e-5</td>
<td>0.6956</td>
<td>0.700</td>
<td>0.712</td>
</tr>
<tr>
<td>4</td>
<td>microsoft/deberta-v3-large</td>
<td>1024</td>
<td>1e-5</td>
<td>0.699</td>
<td>0.703</td>
<td>0.715</td>
</tr>
<tr>
<td>5</td>
<td>1+2+3</td>
<td>-</td>
<td>-</td>
<td>0.7059</td>
<td>0.705</td>
<td>0.717</td>
</tr>
<tr>
<td>6</td>
<td>1+2+3+4 simple average</td>
<td>-</td>
<td>-</td>
<td>0.7098</td>
<td>0.709</td>
<td>0.719</td>
</tr>
<tr>
<td>7</td>
<td>1+2+3+4 weighted average</td>
<td>-</td>
<td>-</td>
<td>0.7111</td>
<td>0.710</td>
<td>0.720</td>
</tr>
</tbody>
</table>
<p>※at training</p>
<h2>post process</h2>
<ul>
<li>fixed the minimum word length for each discourse type to 3</li>
<li>calculated the minimum probability threshold for each discourse type using OOF predictions so that F1 improves, and chose reasonable thresholds</li>
<li>then fixed the probability thresholds and tuned the minimum word length for each discourse type (a small sketch follows this list)</li>
<li>average probability of some models<br>
No.7: No.1 * 0.1 + No.2* 0.25 + No.3 * 0.3 + No.4 * 0.35</li>
<li>link evidence<br>
<a href="https://www.kaggle.com/abhishek/two-longformers-are-better-than-1" target="_blank">https://www.kaggle.com/abhishek/two-longformers-are-better-than-1</a></li>
</ul>
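<p>As referenced above, a small sketch of the filtering by minimum word length and minimum probability per discourse type (the values below are placeholders; the real thresholds were tuned on OOF predictions):</p>
<pre><code>MIN_WORDS = {"Lead": 9, "Position": 5, "Claim": 3, "Evidence": 14,
             "Counterclaim": 6, "Rebuttal": 4, "Concluding Statement": 11}
MIN_PROBA = {"Lead": 0.70, "Position": 0.55, "Claim": 0.55, "Evidence": 0.65,
             "Counterclaim": 0.50, "Rebuttal": 0.55, "Concluding Statement": 0.70}

def filter_spans(spans):
    # spans: list of dicts with keys "class", "word_ids" and "mean_proba"
    kept = []
    for span in spans:
        if len(span["word_ids"]) < MIN_WORDS[span["class"]]:
            continue
        if span["mean_proba"] < MIN_PROBA[span["class"]]:
            continue
        kept.append(span)
    return kept
</code></pre>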
<h2>submission</h2>
<p>I chose No.6 and No.7, the best on both CV and LB.<br>
By grouping documents of similar length into minibatches, I accelerated inference and could ensemble 5 folds of 4 models.<br>
To group documents into minibatches, I calculated the length in words instead of characters,<br>
which is different from the following notebook.<br>
<a href="https://www.kaggle.com/librauee/infer-fast-ensemble-models" target="_blank">https://www.kaggle.com/librauee/infer-fast-ensemble-models</a></p>
<h2>what did not work for me (or I could not utilize it well)</h2>
<ul>
<li>symmetric cross-entropy loss<br>
I tuned alpha but convergence became worse </li>
<li>CRF</li>
<li>use the last few hidden-state layers</li>
<li>add activation functions / rearrange the head of the model</li>
<li>calculate the average logits over sub-tokens in each word instead of the first sub-token's logits</li>
</ul>
<p>I wrote this overview of my solution in a hurry. I may update it later.<br>
In this competition I became a Competitions Expert and a Notebooks Expert. Thanks for upvoting my notebooks and sharing great knowledge.</p>
<p>Sometimes. I don't know how runtime is provided.</p>
<p><a href="https://www.kaggle.com/chumajin" target="_blank">@chumajin</a> You're welcome. I learned a lot from your notebooks, thanks.</p> |
Feedback Prize - Evaluating Student Writing | 36th place - deberta-large-1024/longformer-large-1536 ensemble with thresholding + code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank organizers for creating this nice competition and the whole kaggle community for sharing a lot of fruitful ideas during the competition. I learned a lot about modern NLP techniques used for analysis of large texts. </p>
<p>In particular, I'd like to thank <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his comprehensive <a href="https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633" target="_blank">starter kernel</a>, which really helped me to get going, <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> for a strong <a href="https://www.kaggle.com/abhishek/two-longformers-are-better-than-1" target="_blank">baseline</a>, which was probably used by majority of the top teams in this competition, as well as <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>, whose <a href="https://www.kaggle.com/hengck23/1-birdformer-1-longformer-one-fold/" target="_blank">model ensembling</a> approach I used and who also generously shared a lot of results from his experiments. Please, give their kernels an up-vote!</p>
<p>My solution is using an ensemble of 5 models: 2x deberta-large-1024, 2x deberta-v3-large-1024 and longformer-large-1536 trained on 5 folds each.</p>
<ul>
<li>I used Abhishek's 5-fold split and trained all the deberta-large models using max_len=1024, since due to GPU memory restrictions I was not able to go any higher and deberta models do not seem to support gradient checkpointing out of the box. I used max_len=1536 for longformer-large.</li>
<li>OOF validation scores showed good correlation with the public leaderboard scores, which was very nice to see and gave me confidence in my modeling approach.</li>
<li><strong>Deberta-large-1024 models gave the best performance. I did not verify this with a LB submit, but I think a simple ensemble of two deberta-large models using vanila Abhishek's code with properly tuned thresholds would be enough to get a silver medal. I estimate that one would get a public LB score of 0.702-0.703 and 0.713-0.714 on the private LB.</strong></li>
<li>Adding 2x deberta-v3-large-1024 and longformer-large-1536 allowed me to get ~+0.004 boost and the public LB score of 0.707 (or 0.719 on the private LB). This ensemble appeared to be fairly robust as I picked up 11 positions on the LB after the shake up.</li>
<li>I trained the models using Abhishek's code with batch_size=1, where I added gradient_accumulation=4 (so effectively I used batch_size=4) to fit them on my 15GB GPU. I used half a cycle cosine schedule with warm up of 0.1 and 5 epochs and AdamW optimizer. Training converged after 3-4 epochs for the most part. </li>
<li>I observed that slightly larger hidden_dropout=0.15 gives better validation scores than hidden_dropout=0.1, although I included both cases into the final ensemble for deberta. I also used LR=2e-5, however there were a couple of folds, when training did not converge in which case I dropped LR to 1e-5 on a subsequent attempt.</li>
<li>For ensembling different models, I used <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s idea of predicting probabilities for every character in the text, then averaging the probabilities of the different models before predicting discourse (a sketch is shown after this list).</li>
<li>I tuned the probability and length thresholds for different discourse classes using OOF data and computed average thresholds using the 5 folds, which gave me +0.004 boost compared to the thresholds posted in public kernels.</li>
<li>Inference of the 5 models, having 5 folds each, took a little over 8 hours in the Kaggle kernel.</li>
<li>I tried different ensembles by adding BigBird-Large, Funnel-Large, and medium-size deberta models to the mix, however this did not improve the score.</li>
<li>I also tried applying ranking to the probabilities before averaging to correct for any systematic model prediction shifts, however that made the scores worse.</li>
<li>Unfortunately I ran out of time and did not get a chance to try to build a better 2nd level post-processing model myself, but I am currently looking at the amazing solutions from the top teams. Thanks for sharing!</li>
</ul>
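<p>As referenced in the ensembling bullet above, here is an assumed sketch (not my exact code) of the character-level probability averaging: each token's class probabilities are spread over the characters it covers using the fast tokenizer's <code>offset_mapping</code>, and the resulting character-level grids are averaged across models before decoding spans.</p>
<pre><code>import numpy as np

def token_to_char_probs(text, offset_mapping, token_probs, num_classes=15):
    # token_probs: (num_tokens, num_classes); offset_mapping from a fast tokenizer
    char_probs = np.zeros((len(text), num_classes), dtype=np.float32)
    for (start, end), probs in zip(offset_mapping, token_probs):
        if end > start:                         # skip special tokens with (0, 0) offsets
            char_probs[start:end] = probs
    return char_probs

def ensemble_char_probs(char_prob_list, weights=None):
    weights = weights or [1.0] * len(char_prob_list)
    total = sum(w * p for w, p in zip(weights, char_prob_list))
    return total / sum(weights)
</code></pre>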
<p>My training code: <a href="https://github.com/akuritsyn/feedback-prize-2021" target="_blank">https://github.com/akuritsyn/feedback-prize-2021</a><br>
Submission kernel: <a href="https://www.kaggle.com/akuritsyn/feedback-model-ensemble" target="_blank">https://www.kaggle.com/akuritsyn/feedback-model-ensemble</a></p>
<p>Good luck everyone!</p>
<p>Thanks, <a href="https://www.kaggle.com/kohecchi" target="_blank">@kohecchi</a> . Yes, since you can use gradient accumulation (and gradient checkpointing out of the box for some models or other tricks like DeepSpeed discussed in this forum for deberta-large models) to effectively get a larger batch size, you do not necessarily need a GPU card with a lot of memory, although it would of course make your life easier. In fact, I did most of the initial experimentation using two 1080Ti cards with 11GB of memory and initially trained deberta-large models using max_len=512. I only retrained deberta-large-1024 models using 15GB card at the very end of the competition. I think it was still possible to get a gold-medal-zone result even with a 11GB card if one were smart enough to come up with a post-processing approach used by one of the top teams.</p>
<p>Thanks, <a href="https://www.kaggle.com/abdulkadirguner" target="_blank">@abdulkadirguner</a>! My plan was to team up with someone towards the end of the competition, however I got distracted and was somewhere in the middle of the leaderboard before the team merger deadline so I was not in a position to team up with anyone on top and had to finish solo, which limited amount of things I could try, but all in all I am happy with the final result :)</p>
<p><a href="https://www.kaggle.com/vialactea" target="_blank">@vialactea</a> Re: I was very surprised to see that although my main ideas didn't improve the public LB, they did improve the private LB: +0.002, for a total gain of +0.010 on top of Abhishek's models.</p>
<p>It is important to create your own validation set and not to overfit to the LB when building a model. While in this competition I observed good correlation between my offline validation results and the public LB and there was almost no shake up at the top after the private LB results were released, I still picked up 11 positions on the LB meaning that people above me over-fitted to the public LB.</p> |
Feedback Prize - Evaluating Student Writing | 3rd Place Solution w code and notebook | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the organizers for hosting an interesting and very competitive competition and congrats to all the winners. Next, thanks to my amazing teammates <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> Although we shook down by one place, it was still a good team effort and I'm happy about the results. This was my first NLP comp although it does fall in a familiar category (i.e. seq2seq predictions) for me.</p>
<p>Our inference notebook is public: <a href="https://www.kaggle.com/code/aerdem4/xgb-lgb-feedback-prize-cv-0-7322/notebook" target="_blank">https://www.kaggle.com/code/aerdem4/xgb-lgb-feedback-prize-cv-0-7322/notebook</a> and training code is public as well: <a href="https://github.com/Shujun-He/3rd_Solution_Feedback_Prize_Evaluating_Student_Writing" target="_blank">https://github.com/Shujun-He/3rd_Solution_Feedback_Prize_Evaluating_Student_Writing</a>.</p>
<h1>Overview</h1>
<p>Our solution is a combination of token classification models using transformers and a stacking framework that classifies spans for 7 discourse types separately. Credit to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/chasembowers" target="_blank">@chasembowers</a> for sharing their amazing notebooks (<a href="https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633" target="_blank">https://www.kaggle.com/cdeotte/tensorflow-longformer-ner-cv-0-633</a> and <a href="https://www.kaggle.com/chasembowers/sequence-postprocessing-v2-67-lb" target="_blank">https://www.kaggle.com/chasembowers/sequence-postprocessing-v2-67-lb</a> ).</p>
<p><img src="https://raw.githubusercontent.com/Shujun-He/Feedback_Prize_2022_3rd_place_solution/main/graphics/write_up_graphic.png" alt="overview"></p>
<h1>Deep learning models</h1>
<p>Our best ensemble consists of 6 folds of Longformer and 6 folds of sliding windows Deberta-xl models (weighted by discourse type based on cv). Due to the sequential nature of the problem, we add a 2-layer GRU on top of the transformer hidden states before outputting predictions for each token. We find it beneficial to increase max_len during training/inference to 2048, beyond which we actually saw performance degradation. In our case, deberta-l outperforms longformer large by quite a bit, and deberta-xl is even better, although deberta-xl is more difficult to train.</p>
<p>Below I will discuss some specificities of training these models.</p>
<h3>Longformer</h3>
<p>Training longformers is relatively simple because longformers can take sequences longer than 512 without any issues. Our pipeline is similar to what's publicly available, mostly from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> </p>
<h3>Sliding window (SW) Deberta-xl</h3>
<p>Training deberta is a little trickier than longformers since we can't directly input a sequence longer than 512. Instead, we use a sliding window method when the sequence length is longer than 512. First, the first 512 positions are inputted to the deberta encoder. When we input the next segment into the deberta, we actually only increment the end position by 384, input position [512-64:512-64+512], and only concat the hidden states of the middle segment [64:448] to the hidden states of the first 512 positions, which avoids edge effects. We do this until we reach the end of the sequence, and if the last segment is equal to or smaller than 64, we simply take the last positions of the previous segment. After we run and concat all the segments, we run the sequence through a 2-layer GRU. Since the GRU processes the concatted segments sequentially, it essentially reconnects all the segments.</p>
<pre><code># Sliding-window forward pass: run the backbone on overlapping 512-token
# windows and stitch the hidden states back together (the edge tokens of each
# inner window are dropped to avoid edge effects).
B, L = input_ids.shape
if L <= self.window_size:
    x = self.backbone(input_ids=input_ids, attention_mask=attention_mask, return_dict=False)[0]
else:
    segments = (L - self.window_size) // self.inner_len
    if (L - self.window_size) % self.inner_len > self.edge_len:
        segments += 1
    elif segments == 0:
        segments += 1
    # first full window
    x = self.backbone(input_ids=input_ids[:, :self.window_size], attention_mask=attention_mask[:, :self.window_size], return_dict=False)[0]
    for i in range(1, segments + 1):
        start = self.window_size - self.edge_len + (i - 1) * self.inner_len
        end = self.window_size - self.edge_len + (i - 1) * self.inner_len + self.window_size
        end = min(end, L)
        x_next = input_ids[:, start:end]
        mask_next = attention_mask[:, start:end]
        x_next = self.backbone(input_ids=x_next, attention_mask=mask_next, return_dict=False)[0]
        if i == segments:
            # last segment: keep everything after the left edge
            x_next = x_next[:, self.edge_len:]
        else:
            # middle segment: keep only the central inner_len positions
            x_next = x_next[:, self.edge_len:self.edge_len + self.inner_len]
        x = torch.cat([x, x_next], 1)
</code></pre>
<h3>Augmentation</h3>
<p>We used 2 forms of augmentations:</p>
<ol>
<li>Masked aug, where we mask 15% of the tokens during training.</li>
<li>Cutmix, similar to how cutmix works for images, we cut a portion of one sequence and paste it (and its labels) into another sequence in the same batch. Implementation is quite simple:</li>
</ol>
<pre><code># Cutmix for sequences: with probability 0.5, swap a random 25% contiguous
# chunk of tokens (and the corresponding attention masks and labels) between
# examples in the batch.
if np.random.uniform() < 0.5:
    cut = 0.25
    perm = torch.randperm(ids.shape[0]).cuda()
    rand_len = int(ids.shape[1] * cut)
    start = np.random.randint(ids.shape[1] - int(ids.shape[1] * cut))
    ids[:, start:start + rand_len] = ids[perm, start:start + rand_len]
    mask[:, start:start + rand_len] = mask[perm, start:start + rand_len]
    labels[:, start:start + rand_len] = labels[perm, start:start + rand_len]
</code></pre>
<h3>Hyperparameters</h3>
<p>Since we use augmentations, we train for 7 epochs at learning rates of [2.5e-5, 2.5e-5, 2.5e-5, 2.5e-6, 2.5e-6, 2.5e-6, 2.5e-7] for longformer and deberta-l; for deberta-xl, we use the same schedule with 1/5 of the learning rate. For longformer, we use a batch size/max_length of 4/2048, for deberta-large 3/2048, and for deberta-xl 2/1536 (mostly due to memory constraints).</p>
<h3>Training/inference tricks</h3>
<p>During training, instead of padding every sequence to a fixed max length, we pad sequences to the max length in its batch, which speeds up training considerably. During inference, we also sort all texts by their lengths before batching, thereby minimizing the amount of padding in each batch, which speeds up inference quite a bit.</p>
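<p>A simple sketch of these two tricks (assumed implementation; <code>texts</code> is a placeholder list of essays): a collate function that pads only to the longest sequence in each batch, and length-sorted batching at inference.</p>
<pre><code>import torch

def collate_fn(batch, pad_token_id):
    # batch: list of dicts holding 1-D "input_ids" and "attention_mask" tensors
    max_len = max(len(x["input_ids"]) for x in batch)
    input_ids, attention_mask = [], []
    for x in batch:
        pad = max_len - len(x["input_ids"])
        input_ids.append(torch.nn.functional.pad(x["input_ids"], (0, pad), value=pad_token_id))
        attention_mask.append(torch.nn.functional.pad(x["attention_mask"], (0, pad), value=0))
    return {"input_ids": torch.stack(input_ids), "attention_mask": torch.stack(attention_mask)}

# at inference: visit texts in length order so each minibatch has similar lengths
texts = ["a short essay", "a much longer essay with many more words"]   # placeholder
order = sorted(range(len(texts)), key=lambda i: len(texts[i].split()))
</code></pre>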
<h1>Stacking framework</h1>
<p>Our stacking framework is the same as the one posted by chase bowers; we just made a lot of improvements to it. In short (for those who aren't aware of the notebook), for each discourse type, using out of fold predictions, we generate features for candidate spans based on a begin token probability threshold and a max span length (99.5 percentile for each discourse type), and train gradient boosting models that operate on these features and classify spans. Therefore, we have 7 binary classification models, one for each discourse type. During inference, we sort all candidate spans by predicted probability and take those as predictions until a low threshold, while removing spans that intersect more than 0.15/0.2 with existing predicted spans. </p>
<p>Below I will discuss the improvements.</p>
<h3>CV setup</h3>
<p>We set up cross-validation which ended up being mostly consistent with the public lb. As a result, most of the improvements we observed locally translated to public lb improvements as well. In some cases, however, such as improvements on noisy classes (e.g. Rebuttal), we saw some inconsistencies, but that was to be expected.</p>
<h3>Features</h3>
<p>First we fixed a minor error with calculations of probability that a word corresponds to either a 'B'-egin or 'I'-nside token for a class in the original stacking notebook, where B and I tokens happening for the same class are considered independent events. From:</p>
<pre><code>prob_or = lambda word_preds: (1-(1-word_preds[:,disc_begin]) * (1-word_preds[:,disc_inside]))
</code></pre>
<p>to</p>
<pre><code>prob_or = lambda word_preds: word_preds[:,disc_begin] + word_preds[:,disc_inside]
</code></pre>
<p>We added more probabilities at the edges of the span as well as the probability of another B token for any class following the span. Additionally, we added the location of the max/min probabilities for the class in the span. Further, we added something we call instability, which is the average squared difference in prob_or from position to position:</p>
<pre><code>s = prob_or(text_preds[pred_start:pred_end])
instability = 0
if len(s) > 1:
instability = (np.diff(s)**2).mean()
</code></pre>
<p>Aside from span specific features, we added global features of the average prob_or of every discourse type and the positions with the max B token probs of every discourse type.</p>
<p>Last but not least, not all features are used for every discourse type, and instead we tune our feature selection based on CV. We have used around 25 features on average.</p>
<h3>Increasing the amount of candidate spans</h3>
<p>For some discourse types, we reduced the min begin token probability threshold so we have more candidate spans.</p>
<h3>Gradient boosting models</h3>
<p>In our best ensemble, we have both an lgbm and an xgb model, each with 5 folds trained on OOFs. The predictions from lgbm and xgb are weighted equally. lgbm is trained in dart mode. Since it is not possible to do early stopping in dart mode, xgb is trained first and its optimal number of trees x 1.4 is used as the number of trees for lgbm. We accelerated our gradient boosting models with RAPIDS ForestInference and got room for an extra deberta fold.</p>
<h3>Decoding</h3>
<p>All candidate spans are sorted by predicted probability and taken as predictions until a low threshold. The original notebook did not allow any intersection with existing predicted spans during decoding, but we were able to improve our score by allowing a certain level of intersection (normalized by span length) instead, such as 0.2.</p>
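<p>A small illustration of this decoding step (the probability threshold and overlap limit below are placeholders, not our tuned values):</p>
<pre><code>def decode(candidates, min_proba=0.35, max_overlap=0.2):
    # candidates: list of (probability, set_of_word_ids) for one discourse type
    accepted, used = [], set()
    for proba, words in sorted(candidates, key=lambda c: -c[0]):
        if proba < min_proba:
            break
        overlap = len(words & used) / len(words)
        if overlap <= max_overlap:              # allow a limited intersection
            accepted.append((proba, words))
            used |= words
    return accepted
</code></pre>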
<p>Thank you! For us, just adding masked augmentation only led to a very small amount of boost, less than 0.001, and the combination of masked aug and cutmix works the best. I don't know the exact amount of boost but I'd say around 0.001 to 0.002</p>
<p>Thank you. The main ideas were publicly available, so we kind of just utilized the power of the kaggle community and added a lot of improvements.</p>
<p>Thanks. The link should work now. The link included an extra parenthesis before so it didn't work. The minor error is with the calculation of prob_or. I think you can find more details in the notebook. By "more probabilities at the edges of the span" I just mean probabilities at start_position - 2, end_position + 1, and so forth. The probability of another B token for any class is essentially the probability of another span of any class starting right after the current one. Hope I cleared it up.</p>
<p>It depends. When we add features, we keep everything the same, but sometimes it makes sense to change hyperparameters. For instance, when we added augmentations, it made sense to increase the amount of epochs from 5 to 7 since training with heavier augmentations usually require more epochs to converge</p>
<p>Thanks. We didn't tune the p and cut parameters extensively. We tried decreasing p when training deberta-xl, mostly due to its tendency to diverge, but CV was worse. Earlier I tried increasing p or cut, but results became worse in both cases.</p>
Feedback Prize - Evaluating Student Writing | 4th place solution - 🎖️ my first gold medal 🎖️ (+source code available!) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://github.com/affjljoo3581/Feedback-Prize-Competition" target="_blank">Here is a source code in github!</a></p>
<p>Hi kagglers! Congrats to the winners and everyone who enjoyed this competition! I saw many people were tired, and me as well… But I've got my very first solo gold medal 🎖️ and I'm now really happy to become a Kaggle master!</p>
<p>To be honest, I only joined this competition about 3 weeks ago. Thanks to the community, I could accelerate my experiments and improve the score quickly. If I had more time, maybe I could have gotten more chances… 🤔🤔</p>
<p>I think some tricks and techniques are important in this competition. <a href="https://www.kaggle.com/c/feedback-prize-2021/discussion/313177" target="_blank">The lightgbm postprocessing</a> is really cool and I've never thought about that idea. I focused on some technical processing and I managed to get a high score.</p>
<h2>Resolve Encoding Error</h2>
<p>As you can see, there are some abnormal characters in the documents. Let's see <code>0D1493FDAAD3</code>:</p>
<pre><code>...
Another reason to talk to multiple people when making a purchase is to learn from their successes. ÃÅf you saw a group of people successfully do what you are trying now, learn what they did to overcome the obstacle.
...
</code></pre>
<p>There are nonrecognizable characters <code>ÃÅ</code> and you may see this frequently in other documents. After some tries, I found this is a problem about <code>cp1252</code> and <code>utf8</code> encodings, and the below code can clean the documents.<br>
<img src="https://i.ibb.co/Q9Wzrzr/image.png" alt=""><br>
The result is as follows:</p>
<pre><code>...
Another reason to talk to multiple people when making a purchase is to learn from their successes. If you saw a group of people successfully do what you are trying now, learn what they did to overcome the obstacle.
...
</code></pre>
<p>Actually, it doesn't seem to improve the final scores significantly, but I just applied this to make sure.<br>
And because it reduces the characters in the documents, we have to adjust the <code>discourse_start</code> and <code>discourse_end</code> offsets. To correct the offsets, I use <code>difflib.SequenceMatcher</code> to compare the differences of the characters. The details are in my code.</p>
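<p>For illustration, a minimal sketch (an assumption about the implementation; the exact code is in the repository) of building such an offset map with <code>difflib.SequenceMatcher</code>:</p>
<pre><code>from difflib import SequenceMatcher

def build_offset_map(raw: str, cleaned: str):
    # maps character positions in the raw text to positions in the cleaned text
    mapping = {}
    matcher = SequenceMatcher(None, raw, cleaned, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            for k in range(i2 - i1):
                mapping[i1 + k] = j1 + k
    return mapping

# usage: new_start = mapping.get(discourse_start); positions removed by the
# cleaning need a fallback (e.g. the nearest mapped position).
</code></pre>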
<h2><code>word_ids</code> vs <code>offset_mapping</code>?</h2>
<p>Because many NER examples use <code>word_ids</code> with <code>is_split_into_words=True</code> parameter, I also tried to use this for labeling BIO-named subword tags. However, I got 0.595 public LB with a single-fold bigbird base model. So I tried another way from the community, <code>offset_mapping</code> with <code>return_offsets_mapping=True</code> parameter to make subword NER tags, and I could get 0.630 public LB score.</p>
<table>
<thead>
<tr>
<th>model name</th>
<th>public lb score</th>
<th>private lb score</th>
</tr>
</thead>
<tbody>
<tr>
<td>bigbird-roberta-base (single fold, <code>word_ids</code>)</td>
<td>0.595</td>
<td>0.609</td>
</tr>
<tr>
<td>bigbird-roberta-base (single fold)</td>
<td>0.630</td>
<td>0.644</td>
</tr>
<tr>
<td>bigbird-roberta-base (5 folds)</td>
<td>0.659</td>
<td>0.677</td>
</tr>
</tbody>
</table>
<p>Why did this happen? What is the difference between the <code>word_ids</code> and <code>offset_mapping</code> approaches? After some experiments, I found that <code>word_ids</code> requires splitting the text into words using <code>.split()</code>, which prunes the line-break information <code>\n</code>. Since this task is about recognizing the structure of the document, <strong>it is necessary to use the line-break characters.</strong> Remember this because I will mention this fact in a later section.</p>
<h2>Beam-Search Decoding</h2>
<p>BIO-naming rule is quite complex, so the greedy decoding (i.e. argmax from the token predictions) cannot perform well. I observed many cases like:</p>
<table>
<thead>
<tr>
<th>sorted candidate tags</th>
<th>sorted corresponding probs</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>B-Claim</code> …</td>
<td>1.0 …</td>
</tr>
<tr>
<td><code>I-Claim</code> …</td>
<td>0.99 …</td>
</tr>
<tr>
<td><code>I-Claim</code> …</td>
<td>0.99 …</td>
</tr>
<tr>
<td><code>I-Claim</code> <code>B-Evidence</code> …</td>
<td>0.49 0.43 …</td>
</tr>
<tr>
<td><code>I-Evidence</code> <code>I-Claim</code> …</td>
<td>0.99 0.01 …</td>
</tr>
<tr>
<td><code>I-Evidence</code> <code>I-Claim</code> …</td>
<td>0.99 0.01 …</td>
</tr>
</tbody>
</table>
<p>According to the greedy decoding, the prediction result would be <code>B-Claim I-Claim I-Claim I-Claim I-Evidence I-Evidence</code>. Entities should start with B-tags, so the Evidence entity will be dropped even though it has high confidence. Therefore I implemented a beam search decoding algorithm for NER tagging:<br>
<img src="https://i.ibb.co/RjhFk1P/image.png" alt=""><br>
I wrote this code in PyTorch and made it GPU-capable for parallelization. I use <code>beam_size=4</code> for both evaluation and prediction.</p>
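<p>Since the implementation above is shown as an image, here is a simplified CPU sketch of the same idea (not my exact GPU code): beam search over per-token tag probabilities, with the constraint that an I-tag may only follow a B- or I-tag of the same class.</p>
<pre><code>import numpy as np

def is_valid(prev, cur):
    if not cur.startswith("I-"):
        return True                              # "O" and "B-*" can follow anything
    return prev in ("B-" + cur[2:], cur)

def beam_search(probs, tags, beam_size=4):
    # probs: (seq_len, num_tags) array of per-token probabilities
    log_probs = np.log(probs + 1e-9)
    beams = [([], 0.0)]                          # (tag sequence, cumulative log prob)
    for t in range(log_probs.shape[0]):
        candidates = []
        for seq, score in beams:
            prev = seq[-1] if seq else "O"
            for j, tag in enumerate(tags):
                if is_valid(prev, tag):
                    candidates.append((seq + [tag], score + log_probs[t, j]))
        beams = sorted(candidates, key=lambda c: -c[1])[:beam_size]
    return beams[0][0]
</code></pre>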
<h2>Entity-Level Post-Ensemble</h2>
<p>The above multi-fold predictions are from averaged token probabilities. However, if the architectures are different and the subword tokenizers are different as well, it is impossible to combine the subword token predictions (probabilities). Some people tried to ensemble with character-level probabilities, but it did not work for me. Hence I created an entity-level post-ensemble algorithm.</p>
<p>The main idea is from the metric of this competition. This competition treats more than 50% matched entities as the same group. So I group at least 50% overlapped entities which have the same class. After that, I average the ranges in each group. I tested the longest, shortest, union and intersection ways, but they were all worse than the average method.<br>
<img src="https://i.ibb.co/5cCqwYb/image.png" alt=""><br>
It indeed showed significant improvements by ensembling several models! It can be even applied to the folds (I mean, it is useful for the same tokenizer and same subwords as well) and I can get about +0.002 lb score.</p>
<table>
<thead>
<tr>
<th>model name</th>
<th>cv</th>
<th>public lb score</th>
<th>private lb score</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-large (5folds)</td>
<td>0.6938</td>
<td>0.705</td>
<td>0.713</td>
</tr>
<tr>
<td>deberta-large (5folds, entity-level ensemble)</td>
<td>0.6938</td>
<td>0.707</td>
<td>0.714</td>
</tr>
<tr>
<td>deberta-large + deberta-xlarge</td>
<td>0.718</td>
<td>0.712</td>
<td>0.722</td>
</tr>
<tr>
<td>deberta-v3-large-v2 + deberta-v2-xlarge-v2</td>
<td>0.7251</td>
<td>0.719</td>
<td>0.731</td>
</tr>
</tbody>
</table>
<h2><code>DebertaV2TokenizerFast</code>?</h2>
<p>Unfortunately, deberta-v2 does not have a fast version of its tokenizer. The slow version does not support <code>offset_mapping</code>, so we need the fast one. Fortunately, <a href="https://github.com/huggingface/transformers/pull/14928" target="_blank">the code</a> is already written and I could train deberta-v2 and deberta-v3 without tokenization errors. But the problem was the performance of the models. CV scores were around ~0.68, even worse than longformers. After some experiments, I observed that <strong>the deberta-v2 tokenizer removes line-break <code>\n</code> characters.</strong> As I mentioned above, including <code>\n</code> characters is necessary, so I changed the code as below:<br>
<img src="https://i.ibb.co/JFz2mNK/image.png" alt=""><br>
Finally I can get the correct scores from deberta-v2 and deberta-v3 models.</p>
<table>
<thead>
<tr>
<th>model name</th>
<th>cv</th>
<th>public lb score</th>
<th>private lb score</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v2-xlarge</td>
<td>0.7019</td>
<td>0.705</td>
<td>0.714</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>0.7038</td>
<td>0.707</td>
<td>0.719</td>
</tr>
</tbody>
</table>
<p>I also tried larger models (e.g. deberta-v2-xxlarge) but they are all worse than the above models. Thus I only used up to deberta-xlarge scale.</p>
<h2>Hyperparameters</h2>
<p>I know many people used small batch size like 2 or 4, but I use 8 to activate tensor-cores on A100 😁 I had to train the model faster because I had no time. All models are trained for 5k/7k steps with 500/700 warmup steps. The learning rate is basically 3e-5, but depends on the scale. The detailed configurations are in my code as well. The learning rate is decayed linearly. I applied gradient clipping with <code>1.0</code> and gradient checkpointing to reduce the memory usage. AdamW optimizer is used, and I evaluate 20 times per epoch to save the best-scored model. Note that I use exactly same evaluation metric (overlap-based f1 score) to avoid the score mismatch from validation to lb score.</p>
<h2>Conclusion</h2>
<p>I tried many combinations of the models to submit because there is a time limit. Below are my last three submissions 🥲<br>
<img src="https://i.ibb.co/9pmKZbb/image.png" alt=""><br>
And these are my final selections:<br>
<img src="https://i.ibb.co/Dp7DGNn/image.png" alt=""></p>
<table>
<thead>
<tr>
<th>model name</th>
<th>public lb score</th>
<th>private lb score</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-large (10 folds) + deberta-xlarge (3/5) + deberta-v2-xlarge (4/5)</td>
<td>0.721</td>
<td>0.735</td>
</tr>
<tr>
<td>deberta-large + deberta-v3-large + deberta-v3-large + deberta-xlarge (3/5) + deberta-v2-xlarge (3/5)</td>
<td>0.724</td>
<td>0.735</td>
</tr>
</tbody>
</table>
<p>Since deberta-xlarge and deberta-v2-xlarge are too heavy to run all, I only use some of them.</p>
<p><br>
<a href="https://github.com/affjljoo3581/Feedback-Prize-Competition" target="_blank">Here is a source code in github!</a></p>
<p>P.S. I also tried pseudo-labeling for wikipedia talk corpus (150k), it did not work. Maybe more data is required…?</p>
<p>Thanks! You made me really nervous 😣</p>
<p>Oh that's right :)</p>
<p>I didn't train with all 15k training data because I needed a validation set to evaluate the model. If you mean just single-fold model, yes it indeed performs much better. Or not, I didn't try full training dataset. I always prepare the validation set by splitting from the train set.</p>
<p>I'm not sure what "many folds" means, but I think k-folds and train-val splits are not very different if the number of models is the same.</p>
<p>Yes… I was also surprised when v2 and v3 models worked worse</p>
<p>I have no idea how much the beam search increases the score because I already applied it before using deberta models… I didn't compare the scores strictly.<br>
And here is a configuration of the deberta-v2-xlarge:</p>
<pre><code>optim:
optimizer:
lr: 7e-6
betas: [0.9, 0.999]
eps: 1e-6
weight_decay: 0.01
scheduler:
name: linear
num_warmup_steps: 700
num_training_steps: 7000
train:
name: deberta-v2-xlarge
batch_size: 8
accumulate_grads: 1
max_grad_norm: 1.0
gradient_checkpointing: true
validation_interval: 0.05
logging_interval: 10
evaluate_after_steps: 4000
save_best_checkpoint: true
precision: 16
gpus: 1
</code></pre>
<p>But it also worked with similar scores:</p>
<pre><code>optim:
optimizer:
lr: 1e-5
betas: [0.9, 0.999]
eps: 1e-6
weight_decay: 0.01
scheduler:
name: linear
num_warmup_steps: 700
num_training_steps: 7000
</code></pre>
<p>thank you 👍👍👍👍</p>
<p>The detailed code is <a href="https://github.com/affjljoo3581/Feedback-Prize-Competition/blob/master/src/ensemble.py" target="_blank">here</a>. Basically I grouped entities from the submissions of the models. That means the similarly located and same-class entities will be grouped. And then I averaged the lengths of the entities in each group. This is what I did:</p>
<pre><code>if args.group_strategy == "union":
start, end = min(x[1] for x in group), max(x[2] for x in group)
elif args.group_strategy == "intersect":
start, end = max(x[1] for x in group), min(x[2] for x in group)
elif args.group_strategy == "longest":
start, end, _ = max(
((x[1], x[2], x[2] - x[1] + 1) for x in group), key=lambda i: i[2]
)
elif args.group_strategy == "shortest":
start, end, _ = min(
((x[1], x[2], x[2] - x[1] + 1) for x in group), key=lambda i: i[2]
)
elif args.group_strategy == "mean":
start = int(sum(x[1] for x in group) / len(group))
end = int(sum(x[2] for x in group) / len(group))
else:
raise NotImplementedError(f"{args.group_strategy} is not supported.")
</code></pre>
<p>Well… actually I didn't strictly compare the performance between greedy and beam search</p> |
Feedback Prize - Evaluating Student Writing | 5'th place : simultaneous span segmentation and classification + WBF | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, many thanks to the Kaggle team and Feedback Prize team for hosting this competition, and congrats to all winners ! Thanks to my teammates too, namely <a href="https://www.kaggle.com/amedprof" target="_blank">@amedprof</a> and <a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a> for their hard work and commitment during all those 3 laborious months.</p>
<h2>At the beginning</h2>
<p>We struggled a lot during this comp’. We spent almost 2 months far from the bronze zone because our strategy was an exploratory one. In fact, a great kernel was published by Abhishek at the very beginning of the comp’ and one could just stick to it and keep tuning its hyperparameters to get a decent score (silver zone). But we didn’t go that way and kept exploring many ideas.</p>
<h2>Solution approach</h2>
<p>It seems like the go-to approach here is NER. We didn’t reinvent the wheel; we went that way as well since it is straightforward and answers the problem quite well. In fact, we tested some QA approaches, but not only were they too slow (for training and even more so for inference), they also had slightly lower scores than our NER approaches. </p>
<p><strong>Setup 1:</strong> <em>siblings NER & Span Segmentation</em><br>
We use a multitask approach to output the segmentation scores (multiclass classification on 3 different values: 0 for background, 1 when inside any entity and 2 for beginning) and the entity scores (15 classes). We still use 15 classes for the NER as well but before computing the class, we convert B-eginning tokens into I-nside.</p>
<p><strong>Backbone:</strong> 5 folds Deberta-v1 Large + 5 folds Deberta-v1 xLarge (maxlen = 1024, stride during inference, position_biased_input=False), trained 5 epochs<br>
<strong>Scheduler:</strong> Cosine Annealing with no warmup<br>
<strong>Optimizer:</strong> AdamW<br>
<strong>Loss:</strong> Cross-Entropy with dynamic cosine-based class weights. In fact we overweight the rare classes (Rebuttal and Counterclaim) by starting with high weights that converge to 1 in the final epochs.</p>
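<p>A small sketch of such cosine-decayed class weights (the starting weights below are placeholders, not our tuned values):</p>
<pre><code>import math
import torch

START_WEIGHTS = {"Rebuttal": 3.0, "Counterclaim": 2.0}   # placeholder start values

def class_weights(epoch, num_epochs, class_names):
    progress = epoch / max(num_epochs - 1, 1)             # 0 -> 1 over training
    decay = 0.5 * (1 + math.cos(math.pi * progress))      # 1 -> 0 (cosine)
    weights = torch.ones(len(class_names))
    for i, name in enumerate(class_names):
        base = name.replace("B-", "").replace("I-", "")
        start = START_WEIGHTS.get(base, 1.0)
        weights[i] = 1.0 + (start - 1.0) * decay           # start -> 1.0
    return weights

# criterion = torch.nn.CrossEntropyLoss(weight=class_weights(epoch, 5, class_names))
</code></pre>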
<p><strong>Setup 2:</strong> <em>Pure NER over 15 classes</em>, alternatively over 10 classes by removing the B target for non-overlapping classes.<br>
<strong>Backbone:</strong> 5 folds Deberta Large + 5 folds Deberta xLarge (maxlen = 1024, stride during inference, positinon_biased_inputs=False), trained 5 epochs<br>
<strong>Scheduler:</strong> Polynomial decay with 10% warmup<br>
<strong>Optimizer:</strong> AdamW<br>
<strong>Loss:</strong> Cross-Entropy (no weights over classes)</p>
<h2>Validation strategy</h2>
<p>We used the same validation strategy as the one shared by Abhishek; we also used an enhanced version of the clusters made by cdeotte to build our MultilabelStratifiedKFold.<br>
These folds were really stable and the CV to LB correlation was great.</p>
<h2>Post-processing and ensembling models & boxes</h2>
<p>Like cdeotte’s team, we made use of WBF for our final solutions. WBF was very effective and was the main ingredient behind our spectacular jump in the final days of the competition. Take a look at this table for further details about CV / LB.</p>
<p><img src="https://i.ibb.co/Ntx57Gz/image.png" alt="cv/lb before and after wbf"></p>
<h2>What works</h2>
<ul>
<li>Filtering on num_tokens and score</li>
<li>Smart box ensembling (WBF)</li>
<li>Random Masking</li>
<li>Random start (if not striding during training)</li>
<li>Small batch size</li>
<li>Small learning rates</li>
</ul>
<h2>What doesn’t work</h2>
<ul>
<li>Training on longer sequences</li>
<li>Training long-text-aware models: LongFormer, Funnel and BigBird weren’t better than deberta-v1 for us.</li>
<li>More epochs, higher batch size</li>
<li>Training on cleaned data</li>
<li>Using better word splitter than python split</li>
<li>QA instead of NER</li>
<li>Simply bagging models by averaging word level scores</li>
</ul>
<h2>Final thoughts</h2>
<ul>
<li><p><strong>Training on 10 labels</strong><br>
We saw in <a href="https://www.kaggle.com/ibrezmohd/feedback-price-eda-markov-transition-matrix" target="_blank">this kernel</a>'s Markov transition matrix that some <em>discourse_types</em> never overlap, so we decided to use B-I-O only for Claim and Evidence and a binary target for the others.</p></li>
<li><p><strong>position_biased_input = False</strong><br>
The HuggingFace Deberta-v1 version has the position_biased_input parameter set to False by default, so Deberta-v1 does not use global position information by default. We managed to turn global positions ON, but the performance was not much better than the vanilla setting. Furthermore, turning position_biased_input OFF allows using maxlen above 512 without the need to resize the embedding layers.</p></li>
</ul>
<p>Thanks for reading our solution.</p>
<h3>Edit</h3>
<ul>
<li><strong>Setup 1 source code</strong>: <a href="https://github.com/neroksi/fprize_final_cleanup" target="_blank">https://github.com/neroksi/fprize_final_cleanup</a></li>
<li><strong>Setup 2 source code</strong>: <a href="https://github.com/crodoc/feedback-prize" target="_blank">https://github.com/crodoc/feedback-prize</a></li>
</ul>
<p>Yes ! Second stage modeling has been very important in this comp'. Relying only on the direct output of the transformer model seems to be not enough.</p>
<p>Thanks <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> </p> |
Feedback Prize - Evaluating Student Writing | 55th place solution – Shortformer + Sliding Window + Topic-dependent Postprocessing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>My approach – Shortformer + Sliding Window Training / Inference + Topic-dependent Postprocessing</h2>
<p>First of all, a huge thank you to the competition hosts and Kaggle for organizing this competition.<br>
I can't wait to read all the other write-ups in detail.</p>
<p>My approach is a weighted ensemble of different BERT backbones and topic-dependent postprocessing. To enable training Shortformer-like models for this competition, I use a sliding window approach. Although I only achieved 55th place, I hope to still share and discuss some new ideas with you.</p>
<p><strong>Sliding Window Training / Inference</strong><br>
By default, Shortformer-like models can’t process long documents. To overcome this issue, I split long essays into chunks with a left and right context. Then I only make predictions for the central part of the chunk, but not for the context window. <a href="https://postimg.cc/V0XxRq3P" target="_blank"><img src="https://i.postimg.cc/6qPBBzcq/flert-paper.png" alt="flert-paper.png"></a> This approach was inspired by this <a href="https://arxiv.org/pdf/2011.06993.pdf" target="_blank">paper</a> and improved my single models by 0.005+ compared to applying normal stride.</p>
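<p>A sketch of the chunking scheme (assumed implementation): 512-token windows with 128 tokens of left and right context, where only the central 256 tokens of each chunk receive predictions.</p>
<pre><code>def make_chunks(num_tokens, max_len=512, context=128):
    center = max_len - 2 * context              # tokens actually predicted per chunk
    chunks, start = [], 0
    while start < num_tokens:
        win_start = max(0, start - context)
        win_end = min(num_tokens, start + center + context)
        chunks.append({
            "window": (win_start, win_end),                       # tokens fed to the model
            "predict": (start, min(start + center, num_tokens)),  # tokens whose logits are kept
        })
        start += center
    return chunks

print(make_chunks(700))
# chunks: window (0, 384) predicting (0, 256); window (128, 640) predicting (256, 512);
#         window (384, 700) predicting (512, 700)
</code></pre>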
<p><strong>Training Setup and Hyperparameters</strong><br>
For training, I use learning rate of 1e-5, batch size of 4 (with gradient accumulation), linear/cosine lr schedule with warmup of 10%, BIO-label scheme and 8-bit Adam.</p>
<p>Furthermore, I tried to apply multi-sample dropout, data augmentation with [MASK], gradually (un)freezing layers, (grouped) discriminative learning rate, BIEO-label scheme, and within-task-pretraining. However, all these techniques did not lead to significant improvements.</p>
<p><strong>Error Analysis</strong><br>
Improving deep learning systems is a highly iterative process. It is important to perform error analysis to identify areas to improve frequently. I wrote a function that takes the raw submission.csv, the post-processed submission.csv, and the ground truth train.csv and displays them side-by-side while ranking them from worst-to-best/best-to-worst overall or discourse-dependent performance. This visualization helped me quite a lot throughout this competition. <a href="https://postimg.cc/BjMQG3tP" target="_blank"><img src="https://i.postimg.cc/g0Pw8kWs/error-analysis.png" alt="error-analysis.png"></a></p>
<p><strong>Models</strong><br>
DebertaV3-Large<br>
5 folds, 512 max length, 128+128 context window, public/private LB 0.698/0.709, CV 0.697</p>
<p>Funnel-Large<br>
5 folds, 512 max length, 128+128 context window, public/private LB 0.692/0.704, CV 0.92</p>
<p>Roberta-Large<br>
5 folds, 512 max length, 128+128 context window, public/private LB 0.684/0.695, CV 0.682</p>
<p>Longformer-Large<br>
5 folds, 1536 max length, no context window, public/private LB 0.688/0.699, CV 0.686</p>
<p><strong>Ensemble</strong><br>
I built a weighted ensemble of DebertaV3 (0.4788), Funnel Transformer (0.3192), Longformer (0.152), and Roberta (0.05) for submission by applying the ensemble pseudo code from this <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175344" target="_blank">discussion</a>.</p>
<p>Using the weighted ensemble improved the public/private LB to 0.702/0.714, compared to a public/private LB of 0.700/0.712 using a simple average ensemble.</p>
<p><strong>Topic-dependent Postprocessing</strong><br>
In the early days of the competition, I discovered 15 essay topics in the dataset. Later on this was also revealed in a <a href="https://www.kaggle.com/c/feedback-prize-2021/discussion/301481" target="_blank">discussion post</a> and <a href="https://www.kaggle.com/cdeotte/rapids-umap-tfidf-kmeans-discovers-15-topics" target="_blank">notebook</a> by Chris.</p>
<p>Using BERTopic, I quickly came up with a simple model that allowed me to predict the topic for a specific essay with over 99.5% accuracy. With this model, I can apply topic-dependent postprocessing, i.e. tuning individual <code>length_threshold</code> and <code>proba_threshold</code> dictionaries for each topic, rather than having two global dictionaries. However, I think my training/tuning setup was suboptimal for this postprocessing since I only consider the discourse, but not the topic in <code>StratifiedKFold</code>. With the right CV strategy, the score probably could have been further improved.</p>
<p>Even with my CV strategy, topic-dependent postprocessing still boosted my score slightly (public/private LB 0.706/0.716).</p>
<p>I also tried to add a prompt at the start of the essay, e.g. <a href="https://postimg.cc/K3BB9jGM" target="_blank"><img src="https://i.postimg.cc/C567xn9m/prompt.png" alt="prompt.png"></a> Unfortunately, this did not lead to any significant improvements. That is because topic information is already present in the individual word(piece) embeddings based on topic-specific words that appear in an essay.</p>
<p><strong>Additional Trick: Replace \n by <code><newline></code> token</strong><br>
The available public notebooks showed that \n is an important feature. However, \n is handled differently by different tokenizers (Some split ‘\n’ into ‘\’ and ’n’, some don’t split ‘\n’, some map it to <code><UNK></code>, etc.). That is why I mapped all \n to a <code><newline></code> token and added this token as a new special token to the tokenizer during initialization.</p>
<p><strong>Hardware Resources</strong><br>
I have access to Kaggle + Colab.</p>
<p><strong>Final thoughts</strong><br>
In conclusion, FeedbackPrize was a fun competition.</p>
<p>However, I am still skeptical about using Token Classification models for this competition. That is because the naive decoding procedure (using softmax) feels similar to applying a Greedy Search to Neural Machine Translation, which may lead to suboptimal output sequences, even if the models were trained correctly. As an example, it was common for my prototype models to make predictions like this: <br>
<a href="https://postimg.cc/G9GDnBRB" target="_blank"><img src="https://i.postimg.cc/Y2yzggyx/suboptimal-output.png" alt="suboptimal-output.png"></a> <br>
I experimented with a Conditional Random Field decoder because constraints can be added to such a model to prevent these results from occurring. However, I was unable to beat the postprocessing based on the <code>length_threshold</code> and <code>proba_threshold</code> dictionaries. </p>
<p>More research has to be done in this direction😊</p>
<p>You can have a look at the this <a href="https://www.kaggle.com/kaggleqrdl/oof-vrs-valid-eda" target="_blank">notebook</a> by <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> and modify it according to your workflow. I mainly added another column to the side-by-side plot.</p>
<p>Thank you <a href="https://www.kaggle.com/zhutianyucs" target="_blank">@zhutianyucs</a>. The same goes for you. Congrats on an impressive solo performance!!<br>
I still remember our discussion on generating reproducible performance using Longformer. It seems like everything worked out well in the end.😊</p>
<p>Thank you for your question! Let me briefly explain.😊</p>
<p>In general you are correct, this is the main downside we have to keep in mind while dealing with shortformer models. While dealing with a long document (e.g. 1536 tokens) and having a shortformer of maximal sequence length 512, the Lead tokens won't be able to attend to e.g. Concluding Statement tokens. This was the motivation why Longformer / Big Bird got proposed in order to tackle this problem on a global level. </p>
<p>But Longformer / Big Bird are hard to train, and it turns out that for this competition the local information captured by shortformer models might actually be enough to predict the discourse types correctly. This is why most of my shortformer models outperformed Longformer and Big Bird.</p>
<p>The problem you mentioned is rather a local one and in order to prevent it from occurring I applied a context window approach rather than using "normal" stride, as "normal" stride only takes the left context into account but not the right context. This also explains the observed boost in performance while using the context window approach, compared to "normal" stride.</p>
<pre><code>import itertools
from collections import defaultdict

import numpy as np
from joblib import Parallel, delayed

# Assumed to exist from the rest of the pipeline:
#   oof                 - raw OOF predictions (id, class, predictionstring, pred_proba, topic, ...)
#   gt_dataframe        - ground-truth train.csv entries
#   ID2topic            - mapping essay id -> predicted topic (0..14)
#   score_feedback_comp - the competition metric

def apply_topic_length_threshold(df):
    df = df.copy()
    df['len'] = df['predictionstring'].apply(lambda x: len(x.split()))
    for TOPIC in range(0, 15):
        for key, value in length_topic_thresh[TOPIC].items():
            index = df.loc[(df['class']==key)&(df['topic']==TOPIC)].query(f'len<{value}').index
            df.drop(index, inplace=True)
    return df

def apply_topic_proba_threshold(df):
    df = df.copy()
    for TOPIC in range(0, 15):
        for key, value in proba_topic_thresh[TOPIC].items():
            index = df.loc[(df['class']==key)&(df['topic']==TOPIC)].query(f'pred_proba<{value}').index
            df.drop(index, inplace=True)
    return df

def check_hyperparameters(oof, gt_dataframe, TOPIC, LENGTH_THRESH, PROBA_THRESH):
    # change threshold
    length_topic_thresh[TOPIC][OPTIMIZE] = LENGTH_THRESH
    proba_topic_thresh[TOPIC][OPTIMIZE] = PROBA_THRESH

    # apply postprocessing
    oof_pp = oof.copy()
    oof_pp = apply_topic_length_threshold(oof_pp)
    oof_pp = apply_topic_proba_threshold(oof_pp)

    # compute cv for postprocessed oof
    metrics = {}
    f1_scores = []
    classes = [OPTIMIZE]
    for c in classes:
        pred_df = oof_pp.loc[oof_pp['class']==c].copy()
        gt_df = gt_dataframe.loc[gt_dataframe['discourse_type']==c].copy()
        f1 = score_feedback_comp(pred_df, gt_df)
        f1_scores.append(f1)
        metrics[f"PP-F1-{c}"] = f1
    metrics["PP-F1-Overall"] = np.mean(f1_scores)

    for key, val in metrics.items():
        if f'PP-F1-{OPTIMIZE}' in key:
            return val

length_topic_thresh = {topic: defaultdict(int) for topic in range(0, 15)}
proba_topic_thresh = {topic: defaultdict(float) for topic in range(0, 15)}

# topic-dependent parameter search
# define search space
OPTIMIZE_LIST = ['Lead', 'Position', 'Counterclaim', 'Rebuttal', 'Claim', 'Evidence', 'Concluding Statement']
LENGTH_LIST = [[i for i in range(1, 30)] for _ in OPTIMIZE_LIST]
PROBA_LIST = [[round(i, 2) for i in np.arange(0.2, 1, 0.05)] for _ in OPTIMIZE_LIST]

for OPTIMIZE, length, proba in zip(OPTIMIZE_LIST, LENGTH_LIST, PROBA_LIST):
    for TOPIC in range(0, 15):
        # filter data for the current topic
        ID2topic_topic = {key: val for key, val in ID2topic.items() if val == TOPIC}
        gt_dataframe_topic = gt_dataframe[gt_dataframe['id'].isin(ID2topic_topic)]
        oof_topic = oof[oof['id'].isin(ID2topic_topic)]

        # find best parameters
        COMBINATIONS = list(itertools.product(length, proba))
        results = Parallel(n_jobs=4, backend="multiprocessing")(
            delayed(check_hyperparameters)(oof_topic, gt_dataframe_topic, TOPIC, LENGTH_THRESH, PROBA_THRESH)
            for LENGTH_THRESH, PROBA_THRESH in COMBINATIONS
        )

        best_score = 0
        best_setting = None
        for result, (LENGTH_THRESH, PROBA_THRESH) in zip(results, COMBINATIONS):
            if result > best_score:
                best_score = result
                best_setting = (LENGTH_THRESH, PROBA_THRESH)

        print(TOPIC, best_score, best_setting)
        length_topic_thresh[TOPIC][OPTIMIZE] = best_setting[0]
        proba_topic_thresh[TOPIC][OPTIMIZE] = best_setting[1]
</code></pre>
<p>I checked different hyperparameter settings in parallel using the joblib library.<br>
I think the code took around 9 hours to find the optimal settings.😊</p>
Feedback Prize - Evaluating Student Writing | 6th place solution. A YOLO-like text span detector. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you.</p>
<blockquote>
<p>what is your first output head objectness meaning? </p>
</blockquote>
<p>The objectness target is a vector of shape (num_words, ). For each ground-truth span, there is at least one positive word (1 in the objectness vector) as the representative of the span. Other words in the span are negative (0 in the objectness vector).</p>
<blockquote>
<p>Is your inference result only based on the 2nd output-head calculation and the discource-type based on 3rd output head?</p>
</blockquote>
<p>All three heads' output are used in inference.</p>
<p>I suggest reading YOLO papers and source code to understand the training target design and how to decode raw prediction to target in inference.</p>
<p>Thank you for hosting this awesome competition and congrats to the winners.</p>
<p>This problem is interesting and challenging. The data and annotations are high quality. It's a great learning and practicing experience working on this competition.</p>
<p>I started this competition completely ignoring the shared NER baselines. I thought that in an "object detection" task, segmentation + postprocessing approaches could not do better than object detection approaches. I'm surprised by the postprocessing ideas in the top solutions.</p>
<p>Overall, what I made is a YOLO-like text span detector. I share my code in <a href="https://www.kaggle.com/tascj0/a-text-span-detector" target="_blank">this notebook</a>.</p>
<p>Checkpoints are shared <a href="https://www.kaggle.com/datasets/tascj0/feedback-checkpoints" target="_blank">here</a>.</p>
<p>To reproduce the checkpoints, check the code and configs <a href="https://github.com/tascj/kaggle-feedback-prize-2021" target="_blank">here</a>.</p>
<h2>Modeling</h2>
<h3>Network</h3>
<pre><code>AutoModelForTokenClassification.from_pretrained(num_labels=1 + 2 + num_classes, ...)
</code></pre>
<p>1 for objectness (or span-ness?. fg/bg classification)</p>
<p>2 for regression (distance from fg location to first and last index of corresponding span)</p>
<p>num_classes for discourse type classification</p>
<h3>Aggregate tokens to words</h3>
<p>The network logits are of shape (num_tokens, 10), which is inconvenient (for decoding the output and for ensembling models with different tokenizers). So I aggregate the logits to (num_words, 10) using RoIAlign.</p>
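<p>The actual aggregation here uses RoIAlign; as a rough illustration of the same idea, below is a simplified mean-pooling sketch based on a fast tokenizer's word_ids (an assumption-based alternative, not the author's implementation).</p>
<pre><code>import torch

def tokens_to_words(token_logits: torch.Tensor, word_ids: list) -> torch.Tensor:
    """Average (num_tokens, C) logits into (num_words, C) logits.

    word_ids maps each token to its word index (or None for special tokens),
    e.g. as returned by a Hugging Face fast tokenizer's .word_ids().
    """
    num_words = max(w for w in word_ids if w is not None) + 1
    out = torch.zeros(num_words, token_logits.size(1))
    counts = torch.zeros(num_words, 1)
    for t, w in enumerate(word_ids):
        if w is None:
            continue
        out[w] += token_logits[t]
        counts[w] += 1
    return out / counts.clamp(min=1)
</code></pre>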
<h3>Training target</h3>
<p>The problem with this formulation is how to define positives for objectness training. In object detection, the center of an object is a natural choice of positive. In text span detection, I found the first word of a span to be a good choice of positive.</p>
<p>In addition, I assign lowest cost word in each span as positive during training. This is inspired by YOLOX.</p>
<h3>Augmentation, loss, decoding outputs, etc.</h3>
<p>I randomly replace tokens with mask token during training.</p>
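<p>A minimal sketch of this random-mask augmentation (the replacement probability is illustrative; the value actually used is not stated here):</p>
<pre><code>import torch

def random_mask_tokens(input_ids: torch.Tensor, special_tokens_mask: torch.Tensor,
                       mask_token_id: int, p: float = 0.15) -> torch.Tensor:
    """Randomly replace non-special tokens with the mask token."""
    ids = input_ids.clone()
    replace = (torch.rand(ids.shape) < p) & (special_tokens_mask == 0)
    ids[replace] = mask_token_id
    return ids
</code></pre>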
<p>For other details, please check the code.</p>
<h3>Post process</h3>
<p>The only post process is nms.</p>
<h3>Ensemble</h3>
<p>I used a one-cycle policy during training and averaged the weights of the last few epochs.<br>
To ensemble different models/folds, I simply averaged the models' outputs.<br>
WBF ensembling did not work in local validation, and I didn't figure out why. I think I did something wrong here.</p>
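<p>For the weight averaging over the last few epochs, a simple state-dict average like the sketch below is one common way to do it (illustrative, not necessarily the exact code used):</p>
<pre><code>import torch

def average_checkpoints(paths):
    """Average the state dicts saved at the given paths (SWA-style).

    Note: integer buffers (e.g. position ids) may need to be cast back to
    their original dtype after averaging.
    """
    avg = None
    for p in paths:
        sd = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in sd.items()}
        else:
            for k in avg:
                avg[k] += sd[k].float()
    return {k: v / len(paths) for k, v in avg.items()}
</code></pre>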
<h2>Results</h2>
<table>
<thead>
<tr>
<th>backbone</th>
<th>Validation</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>google/bigbird-roberta-base</td>
<td>0.685~0.69</td>
<td></td>
<td></td>
</tr>
<tr>
<td>allenai/longformer-base-4096</td>
<td>0.685~0.69</td>
<td></td>
<td></td>
</tr>
<tr>
<td>allenai/longformer-large-4096</td>
<td>0.70~0.71</td>
<td></td>
<td></td>
</tr>
<tr>
<td>microsoft/deberta-base</td>
<td>0.70~0.705</td>
<td></td>
<td></td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>0.715~0.72</td>
<td></td>
<td></td>
</tr>
<tr>
<td>microsoft/deberta-xlarge</td>
<td>0.715~0.72</td>
<td></td>
<td></td>
</tr>
<tr>
<td>microsoft/deberta-large + microsoft/deberta-xlarge</td>
<td>0.723</td>
<td>0.714</td>
<td>0.732</td>
</tr>
</tbody>
</table>
<p>The best combination is <code>deberta-large</code> + <code>deberta-xlarge</code>. Ensembling more does not help.<br>
In the final submission, I used 2 weights (2/5 folds) each model. Submission time is around 2 hours.</p>
<p><a href="https://www.kaggle.com/drpatrickchan" target="_blank">@drpatrickchan</a> It seems it was just wrong hyperparameters. A low iou_thr (0.2) is necessary to ensemble my models; I hadn't tried an iou_thr lower than 0.5 before.</p>
<p>Thanks to the post-processing code shared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, I checked the difference and soon found the problem. Different score thresholds for different classes also help a bit after WBF, while they do not work for my models before WBF.</p>
<p>The ensemble of 4 models (longformer-large + 3 debertas) x 2 folds could get 0.720 public and 0.737 private (iou_thr=0.2, score_thr=0.3 for all classes).</p>
<p>Thank you.</p>
<p>I tried a few settings, the loss weights between classification and regression does not matter much.</p>
<p>The loss weight of objectness is important.<br>
Following YOLOX, I use</p>
<pre><code> num_total_samples = obj_target.sum()
obj_loss = F.binary_cross_entropy_with_logits(
obj_pred, obj_target, reduction='sum') / num_total_samples
</code></pre>
<p>If I hadn't read the source code of YOLOX before, I might have used</p>
<pre><code> obj_loss = F.binary_cross_entropy_with_logits(
obj_pred, obj_target, reduction='mean')
</code></pre>
<p>Thank you.</p>
<p>Yes, there is room for improvement. I didn't try ensembling until the last day because I thought WBF would just work; however, it did not work as expected. There are TPs in the bigbird model output that the deberta models do not catch, but I haven't found a proper merging method that doesn't add too many FPs.</p>
<p>The post-processing ideas in the top solutions are something I would never have come up with. Even using a low threshold, my model can't achieve the recall shared in the 1st-place solution. There's a lot to learn and try.</p>
<blockquote>
<p>Any idea why the WBF did not work in your case?</p>
</blockquote>
<p>Unfortunately no.</p>
<blockquote>
<p>isn't it more like a segmentation rathen than a detection job</p>
</blockquote>
<p>No. obj_loss is computed using the outputs of all words; reg_loss and cls_loss are computed using the outputs at positive locations only, with the others ignored. I suggest reading YOLO papers and source code to understand the training target design.</p>
<blockquote>
<p>additional loss will be calculated, right?</p>
</blockquote>
<p>The more positive you have, the more "samples" you have to compute cls_loss and reg_loss. So yes.</p>
<blockquote>
<p>what is this other word 'lowest cost' of/to?</p>
</blockquote>
<p>The lowest-cost word of the span; it might be the first word or not. The cost is computed using the network output for the current sample and iteration. Please check the code for details.</p>
<p>It's SWA: an average of weights from different epochs within a single hyperparameter setting. Model soups seem to average different weights from the same pretrained model trained with different hyperparameters.</p>
<p>I had the same concern. The cost is not equal to the loss, and there is random masking augmentation, so in practice this does not happen and the dynamic positive improves the score. Actually, in practice there can be multiple words with lower cost than the first word of the span.</p>
<p>Anyway, in the worst case it just falls back to single positive. I think it's fine.</p>
<p>foreground and background</p>
<p>Yes. 1 is enough for single positive assignment, 2 is used for the extra dynamically assigned positive.</p> |
Feedback Prize - Evaluating Student Writing | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Georgia State University, The Learning Agency Lab, and Kaggle for a very well run competition. The data was high quality and interesting. It was great learning. </p>
<p>Thanks also to Huggingface 🤗 . Where would NLP be without you.</p>
<p>Unfortunately, we missed out on the best of the postprocessing shown in some of the top approaches - WBF, GBM stackers, and Yolo-style span detector… really great work. Next time! </p>
<p>Our <a href="https://www.kaggle.com/code/darraghdog/new-baseline/notebook?scriptVersionId=91589401" target="_blank">solution</a> was blending a number of models (Big bird, Longformer, Deberta, Deberta-v2, Deberta-v3, Bart) all large models. We used a weighted average which was tuned with Optuna. Thresholds were also tuned with Optuna. </p>
<p>As opposed to BIO tagging, we used 9 model outputs: 7 classes plus a "no span" class, as well as one output to predict the initial token of each span. We found token dropout worked well to reduce overfitting. </p>
<p>Training on very long sequences did not help much; using a max of 1250 tokens seemed to do well. For the shorter models, it worked well to extend the position embeddings by simply repeating them 2x or 3x. </p>
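<p>A hedged sketch of that position-embedding trick: tile the original embedding table 2x or 3x and plug it back into the model. Attribute paths and any cached position-id buffers differ per architecture, so this is illustrative rather than a drop-in recipe.</p>
<pre><code>import torch
import torch.nn as nn

def tile_position_embeddings(pos_emb: nn.Embedding, times: int = 3) -> nn.Embedding:
    """Return a new position embedding whose table repeats the original `times` times."""
    old_weight = pos_emb.weight.data
    new_emb = nn.Embedding(old_weight.size(0) * times, old_weight.size(1))
    new_emb.weight.data.copy_(old_weight.repeat(times, 1))
    return new_emb

# e.g. for a BERT/RoBERTa-style backbone (module names vary by model):
# model.embeddings.position_embeddings = tile_position_embeddings(
#     model.embeddings.position_embeddings, times=3)
# model.config.max_position_embeddings *= 3
# (any registered position_ids buffer may also need to be re-created)
</code></pre>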
<p>When we initially merged, both teams had already been doing it this way, which made things easier. The clovaai team did a similar thing in their <a href="https://github.com/clovaai/bros" target="_blank">BROS</a> model, and that is where I had picked it up. </p>
Feedback Prize - Evaluating Student Writing | 9th solution, deberta is the king, pure ensemble of bert models | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>For single model. There some lucky points(as both LB and PB dataset is small) see below. If I lower lr as * 0.5. I got 0.004 LB gain, but lose 0.001 on PB. And I noticed I have one deberta-v3 1536 model with RL, it got 0.7 LB but 0.719 on PB.<br>
<img src="https://i.postimg.cc/wvg28qrb/Wechat-IMG86.jpg" alt=""></p>
<p><img src="https://i.postimg.cc/L6HZbBBS/851647414309-pic.jpg" alt=""> 3k gain from word level lstm.</p>
<p>4k means +0.004</p>
<p>Good idea. Your ideas and sharing help us a lot:) Thanks <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> </p>
<p>I do not know how to show images on kaggle… will update them</p>
<ol>
<li>Lovasz: I just checked my PB; it did not improve PB but improved LB by about 4k.</li>
<li>Multi-objective: I used it from the very beginning, as it makes post-processing much easier. I think it may improve around 5k; it is hard to test as I would have to make different post-processing rules…</li>
<li>Word-level LSTM: I did not test it with recent models. Will update later after I test it.</li>
</ol>
<p>Batch size 16 with a max learning rate of 2e-5; for all non-BERT parts, a max LR of 1e-3.</p>
<p>Much better than longformer, by about 10-15k (longformer 1536: PB 705, LB 690; deberta-v3 1536: PB 718, LB 705). I only chose one longformer model for the ensemble.</p>
<p>My solution is a simple ensemble of BERT models. <br>
I tried to use the recall of different models to do an L2 rerank using LGB, but failed to make it work, sad story :(. So no stacking here, just a simple ensemble of level-1 BERT models with optimized post-processing (searched with Optuna).</p>
<ul>
<li>The best single model is <strong>deberta-v3-large</strong>!<br>
deberta-v3-large is the king for both feedback-prize and nbme, maybe😉<br>
A deberta-v3-large + maxlen 1536 single model (trained on all 15k) could get <strong>LB 705, PB 718</strong>.<br>
The submission run takes about 34 min; I late-submitted 5 folds of this model, scoring <strong>LB 708, PB 723</strong>.<br>
A Longformer maxlen 1536 with the same model structure could only get <strong>LB 690, PB 705</strong>.</li>
<li>Longformers (maxlen >= 1024) could improve the score, but shortformers (maxlen == 512) can help a lot!<br>
We could get <strong>LB 705, PB 717</strong> using just two separately trained 512-length models:<br>
deberta-v1-xlarge first 512 + deberta-v1-xlarge last 512 <br>
We could get <strong>LB 712, PB 723</strong> with an ensemble of 10 shortformers only.<br>
list(zip(range(len(model_dirs)), model_dirs, mns, weights)): <br>
[(0, '../input/feedback-model0', 'base.deberta-v3.start', 1),<br>
(1, '../input/feedback-model1', 'base.deberta-v3.end', 1),<br>
(2, '../input/feedback-model2', 'base.deberta-v3.se', 1),<br>
(3, '../input/feedback-model3', 'large.deberta-v2-xlarge.start', 1),<br>
(4, '../input/feedback-model4', 'large.deberta-v2-xlarge.se2', 1),<br>
(5, '../input/feedback-model5', 'base.deberta.start', 1),<br>
(6, '../input/feedback-model6', 'base.deberta.mid', 1),<br>
(7, '../input/feedback-model7', 'base.electra.start', 1),<br>
(8, '../input/feedback-model8', 'large.deberta-v3.start.mui-end-mid', 2),<br>
(9, '../input/feedback-model9', 'large.electra.start.mui-end-mid', 2)]<br>
len(model_dirs): 10</li>
<li>What improved single-model performance? <br>
Be sure <strong>not to remove '\n'</strong>; it is a super important feature. I change '\n' to a new word '[BR]' to make sure all models (e.g. the roberta tokenizer) can handle it correctly.<br>
A <strong>word-level LSTM on top of BERT</strong> helps a lot! (using torch scatter_add, LB +2K, PB +3K) <br>
I used a multi-objective model: <strong>token classification</strong> (8 classes) + <strong>separator classification</strong> (binary). <br>
Lovasz loss helps a bit (LB 4K, but no gain on PB)</li>
<li>Post process <br>
How to split is important, I found below rules help a lot ! But it could not beat LGB:)<br>
<img src="https://i.postimg.cc/ZYg42wDN/9-E8-CD1-F2-8-B5-A-4047-9-BB9-8-D41-AED02-FD0.jpg" alt="image.png"></li>
<li>Ensemble <br>
Use the per-word probability mean of different models; model weights are chosen from 1-10 with Optuna. <br>
Backbones: deberta-v3-large, deberta-xlarge, deberta-large, longformer-large, bart-large, roberta-large. (funnel-transformer-large helps a bit on PB but gives no gain on LB, so I did not choose it)<br>
The best model is online <strong>LB 716, PB 729</strong>, <strong>local CV 724</strong> (about <strong>721</strong> without fancy parameter search). <br>
16 models run in 5 hours 20 min. Model names without "len." are 512-length models; "start" means from position 0, "end" means from the end (last position), "se" means e.g. first 256 + last 256, "mid" means from the middle position, and "seq_encoder-0" means not using the word-level LSTM.<br>
[(10, 'deberta-v3.start.nwemb-0.mark_end-0'),<br>
(10, 'deberta-v3.start.len1024.stride-256.seq_encoder-0'),<br>
(10, 'deberta-v3.se2'),<br>
(10, 'deberta-v3.se'),<br>
(10, 'deberta-v3.end.len1024.seq_encoder-0'),<br>
(9, 'roberta.start.nwemb-0'),<br>
(9, 'deberta-xlarge.start'),<br>
(9, 'deberta-xlarge.end'),<br>
(9, 'bart.start.run2'),<br>
(8, 'deberta.start'),<br>
(8, 'deberta-v3.start.len1024.rnn_bi'),<br>
(8, 'deberta-v3.mid.len1024'),<br>
(7, 'deberta-v3.start.stride-256.seq_encoder-0'),<br>
(7, 'deberta-v3.start.len1536'),<br>
(6, 'longformer.start.len1536'),<br>
(6, 'deberta-v3.start.len1024.stride-256')]<br>
<img src="https://i.postimg.cc/13ymrfj7/image-1.png" alt="image.png"><br>
<img src="https://i.postimg.cc/ht4n7NXH/image-2.png" alt="image.png"><br>
<img src="https://i.postimg.cc/52KhQydC/image-3.png" alt="image.png"><br>
<img src="https://i.postimg.cc/FHYPqx8Y/2022-03-16-12-40-36.png" alt="image.png"><br>
<img src="https://i.postimg.cc/wTG2hmMg/2022-03-16-12-41-32.png" alt="image.png"><br>
<img src="https://i.postimg.cc/8cnRB21t/2022-03-16-12-42-17.png" alt="image.png"></li>
<li>Thanks <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> . I used two of your notebooks.<br>
<a href="https://www.kaggle.com/nbroad/qa-ner-hybrid-infer-nbme" target="_blank">https://www.kaggle.com/nbroad/qa-ner-hybrid-infer-nbme</a><br>
<a href="https://www.kaggle.com/nbroad/corrected-train-csv-feedback-prize" target="_blank">https://www.kaggle.com/nbroad/corrected-train-csv-feedback-prize</a><br>
The corrected train.csv helped improve LB and PB by around 0.001.<br>
The fast tokenizer of deberta-v3 was key to the best single model.</li>
</ul>
<p>My code is open-sourced here:<br>
inference: <br>
<a href="https://www.kaggle.com/code/goldenlock/feedback-base" target="_blank">https://www.kaggle.com/code/goldenlock/feedback-base</a><br>
training:<br>
<a href="https://github.com/chenghuige/feedback-prize-2021" target="_blank">https://github.com/chenghuige/feedback-prize-2021</a></p>
<p>Here I want to say deep thanks to you <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a>. I used your corrected.csv and it improved both LB and PB by around 0.001. I also used your code from NBME for using the deberta-v3 fast tokenizer. I just updated my post. Your notebooks are all very useful indeed; I learned a lot from you. </p>
<p>Thanks! I do not use the offset interface, just the word_ids interface of the tokenizer; I think this interface is more elegant for this problem, which is based on words, not chars. After changing to '[BR]', what you need to take care of is the word ids: you need to modify the word_ids to make sure [BR] gets word id None. Take an example:<br>
He played \n basketball -> He played [BR] basketball <br>
You get word_ids 0, 1, 2, 3 and you need to change them to 0, 1, None, 2.<br>
Another workaround is to do the tokenization yourself word by word; you should get the same results.</p>
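<p>A small sketch of the word-id remapping described above (names are illustrative): after replacing '\n' with '[BR]', map every '[BR]' word to None and renumber the remaining words.</p>
<pre><code>def remap_word_ids(word_ids, words):
    """words: whitespace-split words where '\n' was replaced by '[BR]'.
    word_ids: tokenizer word_ids() over that word list."""
    is_br = [w == "[BR]" for w in words]
    new_index, k = [], 0
    for br in is_br:
        new_index.append(None if br else k)
        if not br:
            k += 1
    return [None if wid is None else new_index[wid] for wid in word_ids]

words = "He played [BR] basketball".split()
word_ids = [None, 0, 1, 2, 3, None]        # e.g. from a fast tokenizer
print(remap_word_ids(word_ids, words))     # [None, 0, 1, None, 2, None]
</code></pre>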
<p>Sure, I shared the fake code here <a href="https://www.kaggle.com/code/goldenlock/word-level-lstm-fake-code" target="_blank">https://www.kaggle.com/code/goldenlock/word-level-lstm-fake-code</a></p>
<p>PB means private, LB means public.</p>
Feedback Prize - Evaluating Student Writing | [8th Place] Solution. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Evaluating Student Writing
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Overview</h1>
<p>Before writing our solution, we really want to thank Georgia State University and Kaggle for hosting this competition, and congrats to all. Also, I really enjoyed doing this competition with my UPSTAGE team (sergei, ducky, and dlrgy22).</p>
<p>The training code and inference notebook are here:</p>
<ul>
<li><a href="https://github.com/anotherchudov/feedback" target="_blank">Training code</a></li>
<li><a href="https://www.kaggle.com/code/sergeichudov/8th-place-inference-notebook" target="_blank">Inference notebook</a></li>
<li><a href="https://www.kaggle.com/code/sergeichudov/8th-place-inference-notebook/data" target="_blank">model weights</a></li>
<li><a href="https://docs.google.com/document/d/1GTBAplv6uWjq-jlLk4_toIYLp6mnCntAzwOb6DlPZ3Q/edit?usp=sharing" target="_blank">solution documents</a></li>
</ul>
<p>The pipeline is here.</p>
<blockquote>
<p>Validation strategy -> Text pre-processing -> Model -> Ensemble -> Post-processing on logits</p>
</blockquote>
<p>✔️ Things that worked.</p>
<blockquote>
<ul>
<li>Initial learning rate 1e-5</li>
<li>Max gradient norm to small (about 1.0)</li>
<li>Plateau or Linearly Reduced LR With Warmup</li>
<li>SWA (stabilize valid performance, at least +0.01 boost)</li>
<li>Mean teacher with noise label filtering by exponential moving average</li>
</ul>
</blockquote>
<p>❌ Not worked.</p>
<blockquote>
<ul>
<li>Initial learning rate 3e-5</li>
<li>Max gradient norm set to a large value (about 10)</li>
<li>SAM Optimizer</li>
<li>Dice Loss / Focal Loss with gamma 2.0</li>
<li>Position Bucket Expanding at DeBERTaV3</li>
</ul>
</blockquote>
<h2>Validation strategy</h2>
<ul>
<li>Cross validation with topic from <a href="https://www.kaggle.com/code/cdeotte/rapids-umap-tfidf-kmeans-discovers-15-topics" target="_blank">CHRIS DEOTTE notebook</a>.</li>
<li>Use half of all oof data as a test to find post-processing hyper-parameters.</li>
</ul>
<h2>Text pre-processing</h2>
<ul>
<li>We use mask tokens to reflect newline information.</li>
<li>adjusting start of each entity to nearest alphanumeric character to the right </li>
</ul>
<blockquote>
<p>entity before pre-p: ,Some quotation here.<br>
entity after pre-p: Some quotation here.</p>
</blockquote>
<ul>
<li>some samples with a word split by the start or end of entity text</li>
</ul>
<blockquote>
<p>Ex) discourse_id -> 1621804837671<br>
given text: t would allow students to …<br>
modified text: it would allow students to …</p>
</blockquote>
<h2>Model</h2>
<p>We used 4 DeBERTa models: DeBERTa version 1 with large and xlarge model, DeBERTa version 2 with xlarge model and DeBERTa version 3 with large model. </p>
<p>Here are the training code for each model and how to run it.</p>
<ul>
<li><a href="https://github.com/anotherchudov/feedback/tree/sergei" target="_blank">DeBERTa v1 large and DeBERTa v2 xlarge</a></li>
<li><a href="https://github.com/anotherchudov/feedback/tree/ynot" target="_blank">DeBERTa v1 xlarge</a></li>
<li><a href="https://github.com/anotherchudov/feedback/tree/dlrgy22" target="_blank">DeBERTa v3 large</a></li>
</ul>
<p>For more details with examples, see the solution documents above.</p>
<h2>Ensemble</h2>
<p>We used <strong>First Token Ensemble</strong>.</p>
<p>For this competition we conducted an ensemble over the token logits. However, since the vocab defined for each model's tokenizer is different, the tokenized result may differ even for the same text. Therefore, it is not possible to simply sum the results of two models, and one idea is applied here: the <strong>First Token Ensemble</strong>, which groups overlapping tokens and adds the log probabilities of the first token of each group.</p>
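<p>A minimal sketch of how such a first-token grouping can be implemented, based on our reading of the description (variable names and the grouping key are assumptions):</p>
<pre><code>import numpy as np

def first_token_logprobs_per_word(token_log_probs, word_ids):
    """Keep only the log-prob vector of the first token of each word."""
    out, seen = {}, set()
    for t, w in enumerate(word_ids):
        if w is None or w in seen:
            continue
        seen.add(w)
        out[w] = np.asarray(token_log_probs[t])
    return out

def first_token_ensemble(per_model_word_logprobs):
    """Sum per-word first-token log-probs over models that share the same word indexing."""
    words = per_model_word_logprobs[0].keys()
    return {w: sum(m[w] for m in per_model_word_logprobs) for w in words}
</code></pre>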
<p>For more details with examples, see the solution documents above.</p>
<h2>Post-processing</h2>
<p>We applied <strong>Start With I-tag</strong>, <strong>Look a Head</strong>, <strong>First Appearance</strong> and <strong>Extending end span</strong> pp.</p>
<ul>
<li>Start With I-tag pp</li>
</ul>
<blockquote>
<p>Not really a B tag but it starts with an I tag.</p>
</blockquote>
<ul>
<li>Look a Head pp</li>
</ul>
<blockquote>
<p>There were cases where tags appeared in succession and only one was empty.</p>
</blockquote>
<ul>
<li>First Appearance pp</li>
</ul>
<blockquote>
<p>Handling of classes that appear once per essay.</p>
</blockquote>
<ul>
<li>Extended extracted entities to the right</li>
</ul>
<blockquote>
<p>Extending end span to some extent.</p>
</blockquote>
<p>For more details with examples, see the solution documents above.</p> |
Feedback Prize - Predicting Effective Arguments | 0.577 single model with full code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>dunno if this solution is worth a video. there are many more interesting solutions that have been shared here. but ill give it a thought. the only interesting part in my solution is modification of embedding layer in deberta :) </p>
<p>The first step to achieve this model was to train an mlm on the old dataset. I took all the data from previous feedback competition and fine-tuned deberta-v3-large using <code>run_mlm.py</code> script available in the transformers library. The perplexity in the end was around 4.5. I could have trained more but I didn't.</p>
<p>Now comes the more interesting part. I was inspired by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> 's kernel that showed how we can use token classification for this problem. This was the kernel that piqued my interest in this competition and I decided to give it a go. The idea was simple but effective.</p>
<p>Essay texts were represented as follows:</p>
<pre><code>[CLS]some text [CLS_{discourse_type}] some valid discourse text [END_{discourse_type}] ..... [SEP]
</code></pre>
<p>Mapping:</p>
<pre><code>disc_types = [
    "Claim",
    "Concluding Statement",
    "Counterclaim",
    "Evidence",
    "Lead",
    "Position",
    "Rebuttal",
]
cls_tokens_map = {label: f"[CLS_{label.upper()}]" for label in disc_types}
end_tokens_map = {label: f"[END_{label.upper()}]" for label in disc_types}
</code></pre>
<p>I arranged the data as follows:</p>
<ul>
<li>Convert the data to old format (I had already made my kernel public for this part)</li>
<li>For each token, add an <code>O</code> label for discourse type if it is not part of a valid discourse</li>
<li>For each token, add -100 as the discourse effectiveness label if it is not part of a valid discourse</li>
<li>If the token is one of the discourse cls/end tokens or part of a valid discourse, I added the discourse type id as its label</li>
<li>If the token is one of the discourse cls/end tokens or part of a valid discourse, I added the discourse effectiveness id as its label</li>
<li>The discourse effectiveness label was kept only for the CLS_{discourse_type} and END_{discourse_type} tokens</li>
</ul>
<p>So, now we have for each essay:</p>
<ul>
<li>input_ids</li>
<li>input_types (discourse type ids, one for each input id)</li>
<li>input_labels (the actual label, most are -100, only start of a discourse and end have an actual label)</li>
<li>attention_mask</li>
</ul>
<p>Now it's time to train a model. I made a crucial change in the model's embedding layer that gave a good lift. The change was to add another embedding layer for discourse_types (input_types) and add the result to the original embeddings:</p>
<pre><code>class DebertaV2Embeddings(nn.Module):
    """Construct the embeddings from word, position and token_type embeddings."""

    def __init__(self, config):
        super().__init__()
        pad_token_id = getattr(config, "pad_token_id", 0)
        self.embedding_size = getattr(config, "embedding_size", config.hidden_size)
        self.word_embeddings = nn.Embedding(config.vocab_size, self.embedding_size, padding_idx=pad_token_id)
        # extra embedding table: one row per discourse type id
        self.disc_type_embeddings = nn.Embedding(9, self.embedding_size)
        .
        .
        .

    def forward(
        self, input_ids=None, token_type_ids=None, disc_type_ids=None, position_ids=None, mask=None, inputs_embeds=None
    ):
        .
        .
        if self.config.disc_type_vocab_size > 0:
            disc_type_embeddings = self.disc_type_embeddings(disc_type_ids)
            embeddings += disc_type_embeddings
        .
        .
        .
        return embeddings
</code></pre>
<p>Other than all this, I used polynomial learning rate scheduler and AdamW optimizer from pytorch.</p>
<p>During inference, I just averaged the probabilities for CLS_{discourse_type} and END_{discourse_type} tokens to get the final label.</p>
<p>That's it. Nothing too complicated to get a good score using a single model (5-fold). Running it with some variations and averaging the models gave us 14th rank on the leaderboard.</p>
<p>The full code to train and infer is available here: <a href="https://www.kaggle.com/code/abhishek/0-577-single-model-full-code" target="_blank">https://www.kaggle.com/code/abhishek/0-577-single-model-full-code</a></p>
<p>I used my own library for training the models: tez. Please give it some love here: <a href="https://github.com/abhishekkrthakur/tez" target="_blank">https://github.com/abhishekkrthakur/tez</a></p>
<p>If you have any questions, please feel free to ask. It was a fun competition and I learnt a lot! :)</p> |
Feedback Prize - Predicting Effective Arguments | 12th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you very much.😊<br>
We have learned so much from you.</p>
<p>I would like to thank all the participants and hosts of the competition.<br>
And thank you to the teams for working hard together ( <a href="https://www.kaggle.com/tomoyayanagi" target="_blank">@tomoyayanagi</a>, <a href="https://www.kaggle.com/kashiwaba" target="_blank">@kashiwaba</a>, <a href="https://www.kaggle.com/zacchaeus" target="_blank">@zacchaeus</a>, <a href="https://www.kaggle.com/shairahama" target="_blank">@shairahama</a> )<br>
Without the team I would not be in this position.</p>
<h1>Model</h1>
<ul>
<li>Deberta-large</li>
<li>Denerta-xlarge</li>
<li>Deberta-v3-large</li>
<li>bigbird-roberta-large</li>
</ul>
<h2>Token Classification</h2>
<p>All models were built with token classification.<br>
Token classification added a [CLS] token at the beginning of the discourse_text and a [SEP] token at the end.<br>
The pooling method differed between members; both the CLS token and average pooling were used.<br>
The best input was to introduce special tokens (start, end) at the beginning and end (introduced by <a href="https://www.kaggle.com/zacchaeus" target="_blank">@zacchaeus</a>)</p>
<pre><code><{discourse_type} start> discourse text <{discourse_type} end>
</code></pre>
<h2>Adding meta description (Prompt)</h2>
<p>We found a trend of labels for each essay topic. <br>
Based on these results, we added topic text to the top of the input.<br>
This method gives us the best single model (built by <a href="https://www.kaggle.com/zacchaeus" target="_blank">@zacchaeus</a>).</p>
<p>Topic information was taken from the public notebook. Thank you very much.<br>
(<a href="https://www.kaggle.com/code/jdoesv/take2-feedback-essays-to-prompts" target="_blank">https://www.kaggle.com/code/jdoesv/take2-feedback-essays-to-prompts</a>)</p>
<pre><code>topic prompt [SEP] <{discourse_type} start> discourse text <{discourse_type} end>, ...
</code></pre>
<p>I think there was a discussion posted by the host mentioning that the essay topics are common across the data. Therefore, I think that is why it worked.</p>
<h1>Pseudo Labeling</h1>
<p>At first, we used all of the 2021 data as pseudo-labels, but the training did not proceed well, probably because the distribution of the data was too different. Therefore, we adopted the method of sampling and using a portion of the data.</p>
<p>So, the following method worked well for us (this was introduced by <a href="https://www.kaggle.com/kashiwaba" target="_blank">@kashiwaba</a>):</p>
<ul>
<li>Discard essays with high Adequate predictions in all the discourse_texts that make up the essay</li>
<li>Get everything that contains Ineffective's predictions</li>
<li>Sampling the rest of the essay including Effective at 25~35%</li>
</ul>
<p>Pseudo labels were generated with fold-matched base models to minimize leakage.</p>
<p>Another consideration was to sample the 2021 data according to topic, since the distribution of topic is different in 2022, but there was not enough time</p>
<h1>Stacking</h1>
<p>We built a 2nd-level model with the flattened 1st-level predictions of the 3 labels as features.<br>
discourse_type is added as a meta-feature.<br>
The 2nd-level models were logistic regression, LGBM and XGB.<br>
This boosted CV and LB significantly more than a weighted average ensemble tuned with Nelder-Mead.</p>
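<p>A minimal sketch of this kind of 2nd-level model using logistic regression (column names are assumptions; LGBM and XGB can be fit on the same features):</p>
<pre><code>import pandas as pd
from sklearn.linear_model import LogisticRegression

def fit_stacker(oof: pd.DataFrame) -> LogisticRegression:
    """oof needs OOF probabilities 'Ineffective'/'Adequate'/'Effective',
    the 'discourse_type' meta-feature, and an integer 'label' column."""
    X = pd.concat(
        [oof[["Ineffective", "Adequate", "Effective"]],
         pd.get_dummies(oof["discourse_type"], prefix="type")],
        axis=1,
    )
    stacker = LogisticRegression(max_iter=1000, C=1.0)
    stacker.fit(X, oof["label"])
    return stacker  # use stacker.predict_proba on test features built the same way
</code></pre>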
<h1>What else worked or tips</h1>
<ul>
<li>MLM<ul>
<li>Smoothed loss and boosted cv/lb</li></ul></li>
<li>AWP<ul>
<li>It seems to be good to start at epoch 1 or 2</li></ul></li>
<li>FGM</li>
<li>Evaluate on validation more frequently, in steps instead of epochs</li>
<li>layer wise learning rate (decay rate 0.98)<ul>
<li>Higher learning rate at transformer classification head</li></ul></li>
</ul>
<p>thanks for your comment.</p>
<p>In my experiments, NaN did not happen with AWP.<br>
I think it is sensitive to its parameters, so it would be good to try several hyperparameters.<br>
The code was referenced from this notebook (PPPM 8th place).<br>
<a href="https://www.kaggle.com/code/skraiii/pppm-tokenclassificationmodel-train-8th-place" target="_blank">https://www.kaggle.com/code/skraiii/pppm-tokenclassificationmodel-train-8th-place</a></p>
<p>FGM was performed by teammates. I expect my teammates to respond.</p> |
Feedback Prize - Predicting Effective Arguments | 146 th palce solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1.i not used token classification approach , i used normal seq classification approach (discourse_text + other_discourse_text_from_same_essay)</p>
<p>2. 😅Yes, on Sanyam's Podcast, in the last USPPTPM comp, and in NBME too.</p>
<p>Yes. I used only the weighted layer pooling from that notebook. I tried a BiLSTM pooling layer with max pooling and it did not work, so I assumed that using more than one pooling layer would not work (here I made a mistake).</p>
<p>Thanks to kaggle and competition host.</p>
<p>In the last competition (USPPTPM), I saw that the way we feed data into the transformers matters a lot, so in this competition I decided to experiment with that. To start, I began with:</p>
<ol>
<li><p>deberta-v3-base | 1024</p>
<p>I arranged the data as discourse_text + essay; this gave me 0.72 (on LB).</p></li>
<li><p>deberta-v3-large | 1024 (with 5fold )</p>
<p>Using OOF ensembling, this gave me a 0.627 (on LB) score, with the data input as discourse_text + essay.</p></li>
<li><p>deberta-v3-large | 1024 (with 5fold ) + CLS embedding</p>
<p>In this one I did not use the essay; instead I created an essay by joining the discourse texts of the same essay_id and adding tags to the discourse texts like this:</p>
<p>[CLS][LEAD] discourse_text [LEAD][END] [POSITION] discourse_text [POSITION] …. [CLAIM] discourse_text [CLAIM]<br>
This gave me 0.61 (on LB).</p></li>
<li><p>deberta-v3-large | 1024 (with 5fold ) + weighted layer pooling</p>
<p>In this one I replaced the CLS embedding with weighted layer pooling and got 0.600 on LB (a sketch of weighted layer pooling follows after this list).</p></li>
</ol>
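<p>As referenced in item 4, here is a hedged sketch of a typical weighted layer pooling module (my own minimal version, not necessarily identical to the notebook I followed):</p>
<pre><code>import torch
import torch.nn as nn

class WeightedLayerPooling(nn.Module):
    """Learnable weighted average over the hidden states of the last few layers."""
    def __init__(self, num_hidden_layers: int, layer_start: int = 4):
        super().__init__()
        self.layer_start = layer_start
        self.layer_weights = nn.Parameter(torch.ones(num_hidden_layers + 1 - layer_start))

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (batch, seq, hidden) from output_hidden_states=True
        stacked = torch.stack(all_hidden_states[self.layer_start:], dim=0)
        w = self.layer_weights.softmax(dim=0).view(-1, 1, 1, 1)
        return (w * stacked).sum(dim=0)  # (batch, seq, hidden); pool or take [CLS] afterwards
</code></pre>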
<p>My final model was an OOF ensemble (just taking the average) of the 4th method. In the end I am happy that I tried this instead of ensembling 3-4 models using the discourse_text + essay input. </p>
<p><strong>Bad decisions I made during the competition</strong></p>
<ol>
<li>Instead of experimenting with different things, I assumed in advance that they would not work.</li>
<li>I did not try MLM pretraining on deberta because it did not give nice results on roberta; I assumed it would not give good results on deberta either.</li>
<li>I did not try different models because gradients exploded while training longformers.</li>
</ol>
<p><strong>Lessons learned:</strong><br>
1. Do not assume until you experiment.<br>
2. If something did not work on one model, that does not necessarily mean it won't work with a different model.<br>
3. If someone says in a discussion that something is not working, don't just trust it; try it and observe whether it works or not.</p>
<p>Lastly, I am happy that I got a bronze, because the last Feedback competition was my first NLP competition and in that competition I just submitted sir <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s notebook as it was; I was not able to understand that kernel (it was a really nice kernel, I was dumb). Then I participated in NBME and USPPTPM, learned a lot, and got a bronze in this competition. I know this is too slow; in NBME and in this comp my teammates cheated on me, but I still learned a lot of things.</p>
<p>Thanks to <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> and <a href="https://www.kaggle.com/SanyamBhutani" target="_blank">@SanyamBhutani</a> for the YouTube videos, and <a href="https://www.kaggle.com/vadimirtlach" target="_blank">@vadimirtlach</a> for answering my stupid questions; it helped me a lot.</p>
Feedback Prize - Predicting Effective Arguments | 15th | How to train deberta & NLP tricks | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>you are welcome, congrats on achieving gold zone!</p>
<p>Congrats to every team that fought to the last day! Thanks to my great teammates <a href="https://www.kaggle.com/khyeh0719" target="_blank">@khyeh0719</a> <a href="https://www.kaggle.com/xbustc" target="_blank">@xbustc</a>. We tried a lot and have no regrets.</p>
<h1>Model</h1>
<p><strong>Formulation</strong></p>
<p>We transformed the competition task into <strong>Token Classification</strong>.</p>
<ol>
<li>First, combine discourse text in essay like this: [CLS] {discourse_type1}: {discourse_text1} [SEP] {discourse_type2}: {discourse_text2} [SEP] ….</li>
<li>Use Deberta to encode the text</li>
<li>Gather the first token of each discourse (the discourse_type token) for classification (see the sketch after this list)</li>
</ol>
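<p>A rough sketch of step 3, gathering each discourse's discourse_type-token hidden state and classifying it; the head, shapes, and variable names are assumptions, not the exact implementation.</p>
<pre><code>import torch
import torch.nn as nn

def classify_discourse_tokens(hidden_states: torch.Tensor,
                              type_token_positions: list,
                              head: nn.Linear):
    """hidden_states: (batch, seq_len, hidden); type_token_positions: per-sample
    lists of indices pointing at each discourse's first ({discourse_type}) token."""
    all_logits = []
    for b, positions in enumerate(type_token_positions):
        feats = hidden_states[b, positions]   # (num_discourses_in_essay, hidden)
        all_logits.append(head(feats))        # (num_discourses_in_essay, 3)
    return all_logits
</code></pre>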
<p>We tried four ways to do classification:</p>
<ol>
<li>First token</li>
<li>First and End token mean (not work)</li>
<li>Span + avg pooling (not work)</li>
<li>Span + attention pooling (not work)</li>
</ol>
<p><strong>Model Architecture</strong></p>
<ul>
<li>Deberta-large</li>
<li>Denerta-xlarge</li>
<li>Deberta-v2-large</li>
<li>Deberta-v3-large</li>
</ul>
<h1>How to train deberta</h1>
<p>It's hard to train deberta models; the model weights often break during training. Here are some strategies we propose:</p>
<ol>
<li>Smaller LR, like 5e-6</li>
<li>Layer-wise LR decay: we split the layers into 6 groups and use a 0.8 decay ratio (see the sketch after this list)</li>
<li>Warmup LR scheduler</li>
<li>AWP (Adversarial Weight Perturbation)</li>
<li>Set dropout to ZERO; only useful for MLM-pretrained weights</li>
</ol>
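<p>A hedged sketch of item 2 (grouped layer-wise LR decay). Attribute names assume a DeBERTa-style backbone, and the decay direction follows the usual convention of shrinking the LR toward the embeddings; the exact grouping we used may differ.</p>
<pre><code>import torch

def grouped_llrd_params(backbone, base_lr=5e-6, decay=0.8, num_groups=6):
    """Split embedding + encoder layers into num_groups groups with decayed LRs."""
    modules = [backbone.embeddings] + list(backbone.encoder.layer)
    group_size = max(1, len(modules) // num_groups)
    param_groups = []
    for g in range(num_groups):
        start = g * group_size
        end = None if g == num_groups - 1 else (g + 1) * group_size
        lr = base_lr * decay ** (num_groups - 1 - g)   # earliest group gets the smallest LR
        params = [p for m in modules[start:end] for p in m.parameters()]
        param_groups.append({"params": params, "lr": lr})
    return param_groups

# optimizer = torch.optim.AdamW(grouped_llrd_params(model.deberta), lr=5e-6)
</code></pre>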
<h1>Some tricks that gave us a big improvement</h1>
<ol>
<li>MLM pretraining on FB2021 & FB2022 can improve both stability and accuracy</li>
<li>Set dropout to ZERO; only useful for MLM-pretrained weights</li>
<li>Pseudo labels on the FB2021 dataset, trained with soft cross-entropy</li>
</ol>
<h1>Tried but did not work</h1>
<ol>
<li>Meta Pseudo</li>
<li>back translation augmentation</li>
<li>RNN ensemble</li>
<li>SWA (Stochastic Weight Averaging)</li>
</ol>
<p>Me too! ❤️</p>
<p>Yes, concatenate the PL data and the train data and train for 2 epochs.</p>
Feedback Prize - Predicting Effective Arguments | 24th Short Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulate to all winners and thank you for the organizers. <br>
Also, thank you for all competitors especially <a href="https://www.kaggle.com/code/abhishek/tez-for-feedback-v2-0" target="_blank">@abhishek</a> and <a href="https://www.kaggle.com/code/kashiwaba/train-deberta-v3-large-with-optimization-approach" target="_blank">@kashiwaba</a> who publish the kernels that I refer the most.</p>
<h4>Phase 1 Token prediction and span prediction</h4>
<p>For the classification, I use token prediction and span prediction. In the essay text, there are multiple target texts. In my solution, the model is trained to predict effectiveness using either a specific token or the average over the text span. Furthermore, I found that adding CLS and SEP tokens to the target text affects the CV scores. Using deberta-v3-large and deberta-v2-xlarge, I obtained the following results.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Approach</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-large</td>
<td>span prediction</td>
<td>0.6182</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>CLS/SEP, token prediction</td>
<td>0.6165</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>CLS/SEP, span prediction</td>
<td>0.6181</td>
</tr>
<tr>
<td>deberta-v2-xlarge</td>
<td>span prediction</td>
<td>0.6290</td>
</tr>
<tr>
<td>deberta-v2-xlarge</td>
<td>span prediction, overfitting</td>
<td>0.7193</td>
</tr>
<tr>
<td>deberta-v2-xlarge</td>
<td>CLS/SEP, token prediction</td>
<td>0.6308</td>
</tr>
<tr>
<td>deberta-v2-xlarge</td>
<td>CLS/SEP, token prediction, overfitting</td>
<td>0.7404</td>
</tr>
</tbody>
</table>
<p>Interestingly, deberta-v2-xlarge shows a worse CV score on its own, but it boosts the CV after ensembling. <br>
After a simple ensemble (manual weight tuning): CV 0.589, 0.583.</p>
<h4>Phase 2 Bayesian optimization for ensemble weight and LGBM</h4>
<p>To improve the score, I first ensemble the models where the weights are optimized to minimize OOF CV using Bayesian optimization (CV 0.579). <br>
Then, I use LGBM to improve the score further, adding the effectiveness of the previous/next text and location information of the text as additional features (CV 0.578, LB 0.575, Private LB 0.573).</p>
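<p>One way to implement this kind of weight search is with Optuna, whose default TPE sampler is a Bayesian-optimization-style method; the sketch below is illustrative (variable names and trial budget are assumptions, not the exact code used).</p>
<pre><code>import numpy as np
import optuna
from sklearn.metrics import log_loss

def make_objective(oof_probs, y_true):
    """oof_probs: list of (N, 3) probability arrays, one per model; y_true: (N,) labels."""
    def objective(trial):
        w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(oof_probs))])
        w = w / (w.sum() + 1e-9)
        blend = sum(wi * p for wi, p in zip(w, oof_probs))
        return log_loss(y_true, blend)
    return objective

# study = optuna.create_study(direction="minimize")
# study.optimize(make_objective(oof_probs, y_true), n_trials=300)
# best_weights = study.best_params
</code></pre>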
<h4>Summary - Useful attempts</h4>
<ul>
<li>Infer effectiveness using the whole essay text </li>
<li>Ensemble span and token classification, use CLS/SEP token</li>
<li>Weight optimization and LGBM to deal with additional features</li>
</ul>
<p>The solutions of other competitors are always astonishing and attractive. I regret not trying MLM pretraining on the previous competition's data and pseudo labeling, which could have improved my score. I appreciate the other competitors for sharing their great solutions. My solution is at <a href="https://www.kaggle.com/learnitanyway/24th-inference-deberta-ensemble" target="_blank">https://www.kaggle.com/learnitanyway/24th-inference-deberta-ensemble</a>.</p>
Feedback Prize - Predicting Effective Arguments | 2nd place solution (updated with code/notebooks) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes very soon!</p>
<p>4K essays and 11K PL essays, since we exclude those in the training set.</p>
<p>Thanks! Congrats on getting 3rd!</p>
<p>Added an example in the beginning of the post. LMK if it makes sense</p>
<p>Thanks! Congrats on your gold medal as well!</p>
<p>First, thanks to all the organizers and kaggle staff and congrats to all the winners and thanks to my amazing teammates <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a> and <a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a> ! With this gold medal, <a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a> and I finally become competition GMs. We have released our code/notebooks:</p>
<p>code: <a href="https://github.com/Shujun-He/TeamSKT-Feedback-Prize---Predicting-Effective-Arguments-2nd-Place-solution" target="_blank">https://github.com/Shujun-He/TeamSKT-Feedback-Prize---Predicting-Effective-Arguments-2nd-Place-solution</a></p>
<p>2nd logloss notebook: <a href="https://www.kaggle.com/code/kneroma/kkiller-final-fpzeff-team-work-inference?scriptVersionId=104008824" target="_blank">https://www.kaggle.com/code/kneroma/kkiller-final-fpzeff-team-work-inference?scriptVersionId=104008824</a></p>
<p>3rd place efficiency notebook: <a href="https://www.kaggle.com/code/kneroma/fpzeff-fast-lb-efgp-inference?scriptVersionId=104922605" target="_blank">https://www.kaggle.com/code/kneroma/fpzeff-fast-lb-efgp-inference?scriptVersionId=104922605</a></p>
<p>Below is a summary of our solution. Our best private is 0.553 and our best selected private is 0.554. Please feel free to ask if you have any questions.</p>
<h1>Transformer modeling</h1>
<p>Each of our team members has their own training pipeline for transformer models. On a high level, our transformer models look at the entirety of each essay and output predictions of effectiveness for each discourse either via pooling of discourse tokens or a classification token added to the front of the each discourse. Importantly, directly inputting essays results in a situation where the model is not 100% sure about where it needs to make predictions, so to circumvent this, we use either a prompt (i.e. concat <code>f'({discourse_type} start)'</code> and <code>f'({discourse_type} end)'</code> to the beginning and end of each discourse to signal where predictions need to be made) or simply concat special tokens added to the tokenizer instead (i.e. <code>f'<{discourse_type}>'</code> and <code>f'<{discourse_type}\>'</code>). You can find an example below with a highlighted segment.</p>
<p><img src="https://raw.githubusercontent.com/Shujun-He/Shujun-He-Feedback-Prize---Predicting-Effective-Arguments-2nd-Place-solution/main/graphics/example_segments.png" alt="example"></p>
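<p>A simplified sketch of how such start/end markers can be spliced into an essay, assuming character offsets for each discourse have already been located (the 2022 data gives discourse texts rather than offsets, so locating them is a separate preprocessing step):</p>
<pre><code>def add_discourse_markers(essay_text: str, discourses) -> str:
    """discourses: list of (discourse_type, start_char, end_char), sorted and non-overlapping."""
    pieces, prev = [], 0
    for dtype, s, e in discourses:
        pieces.append(essay_text[prev:s])
        pieces.append(f"({dtype} start) " + essay_text[s:e] + f" ({dtype} end)")
        prev = e
    pieces.append(essay_text[prev:])
    return "".join(pieces)
</code></pre>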
<h2>Encoders</h2>
<p>Deberta worked the best since (IMO) it supports unlimited input length and uses disentangled attention with relative positional embeddings; in fact, our ensemble consists entirely of deberta variants. For me, it was also helpful to add a GRU/LSTM on top of the pooled discourse representations. Tom used my SlidingWindowTransformerModel (<a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313235" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313235</a>) from the last Feedback competition, which stabilized training for him.</p>
<h2>Pretraining</h2>
<p>Kkiller used pretrained weights from his solution in the last competiion (<a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313478" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313478</a>) and Tom and I found pretrained tascj0 models to be good starting points. We used some of the weights that tascj0 released after the last Feedback competition and Tom also pretrained some new ones on his own. Please checkout out tascj0's solution post if you'd like to learn more (<a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313424)" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313424)</a>.<br>
In addition, Tom used MLM for some of his models. Further, some of our models simply used huggingface weights.</p>
<h2>Max sequence length</h2>
<p>I used a max sequence length of 1280 in both training and inference, since I found that 99.9% of discourses fall within that range, whereas my teammates used up to around 1800 during inference and as low as 648 in training.</p>
<h1>Pseudo labeling</h1>
<p>Pseudo labeling is an integral part of all our solution. We use essays from the training set of last Feedback competition that are also not present in the training set of this competition. Our procedure is as follows:</p>
<ol>
<li>Train model with gt labels</li>
<li>Make predictions for old data (around 11000 essays) with each fold model</li>
<li>Retrain the model with cross-entropy on pseudo-label probabilities (not discretized) generated by the previous model trained on the same fold data: 3 epochs on PL labels only first and then 3 more epochs on GT labels only (a sketch of this soft-label loss follows after this list)</li>
<li>Repeat from step 2</li>
</ol>
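<p>As referenced in step 3, here is a minimal sketch of cross-entropy against non-discretized pseudo-label probabilities (a standard soft-label loss, not necessarily our exact implementation):</p>
<pre><code>import torch
import torch.nn.functional as F

def soft_cross_entropy(logits: torch.Tensor, target_probs: torch.Tensor) -> torch.Tensor:
    """logits: (N, 3); target_probs: (N, 3) pseudo-label probabilities."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(target_probs * log_probs).sum(dim=-1).mean()
</code></pre>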
<p>For my pipeline, I saw improvement until 5 rounds of the above procedure. For Tom, it was only helpful for one round and kkiller did not have enough time to try multi-round pl.</p>
<h1>Stacking</h1>
<p>Stacking provides significant improvement in both CV/LB (around 0.004). Our stacking framework is primarily inspired by my team's solution (<a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313235" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313235</a>) in the previous feedback competition. In addition to the predicted probabilities outputted by the transformer models, we also utilized the token probabilities for each discourse, which we call prob_sequences. Compared to the previous Feedback competition, stacking is much faster since we don't have to deal with a huge amount of candidate sequences. Our features are as follows:</p>
<pre><code>import numpy as np
import pandas as pd
from tqdm import tqdm

#make features
def get_xgb_features(train_df,prob_sequences):
features2calculate=[f"instability_{i}" for i in range(4)]+\
[f"begin_{i}" for i in range(3)]+\
[f"end_{i}" for i in range(3)]#+\
#["entropy"]
calculated_features=[]
for i,prob_seq in tqdm(enumerate(prob_sequences)):
tmp=[]
#quants = np.linspace(0,1,n_quan)
prob_seq=np.array(prob_seq)
instability = []
#all_quants=[]
tmp.append(np.diff(prob_seq[:,:],0).mean(0))
tmp.append([(np.diff(prob_seq[:,[1,2]].sum(1))**2).mean()])
tmp.append(prob_seq[:5,:].mean(0))
tmp.append(prob_seq[-5:,:].mean(0))
calculated_features.append(np.concatenate(tmp))
train_df[features2calculate]=calculated_features
train_df['len']=[len(s) for s in prob_sequences]
calculated_features=np.array(calculated_features)
calculated_features.shape
p_features=[]
n_features=[]
neighbor_features=['Ineffective','Adequate','Effective','discourse_type']
neighbor_features_values=train_df[neighbor_features].values
for i in tqdm(range(len(train_df))):
if i>1 and train_df['essay_id'].iloc[i]==train_df['essay_id'].iloc[i-1]:
p_features.append(neighbor_features_values[i-1])
else:
p_features.append(neighbor_features_values[i])
if i<(len(train_df)-1) and train_df['essay_id'].iloc[i]==train_df['essay_id'].iloc[i+1]:
n_features.append(neighbor_features_values[i+1])
else:
n_features.append(neighbor_features_values[i])
train_df[[f+"_previous" for f in neighbor_features]]=p_features
train_df[[f+"_next" for f in neighbor_features]]=n_features
train_df['mean_Ineffective']=train_df.groupby("essay_id")["Ineffective"].transform("mean")
train_df['mean_Adequate']=train_df.groupby("essay_id")["Adequate"].transform("mean")
train_df['mean_Effective']=train_df.groupby("essay_id")["Effective"].transform("mean")
train_df['std_Ineffective']=train_df.groupby("essay_id")["Ineffective"].transform("std")
train_df['std_Adequate']=train_df.groupby("essay_id")["Adequate"].transform("std")
train_df['std_Effective']=train_df.groupby("essay_id")["Effective"].transform("std")
train_df['discourse_count']=train_df.groupby("essay_id")['discourse_type'].transform("count")
cnts=train_df.groupby('essay_id')['discourse_type'].apply(lambda x: x.value_counts())
#new_df=[]
discourse_types=['Claim','Evidence','Concluding Statement','Lead','Position','Counterclaim','Rebuttal']
value_count_hash={}
for t in discourse_types:
value_count_hash[t]={}
for key in cnts.keys():
value_count_hash[key[1]][key[0]]=cnts[key]
discourse_cnts=[]
for essay_id in train_df['essay_id'].unique():
row=[essay_id]
for d in discourse_types:
try:
row.append(value_count_hash[d][essay_id])
except:
row.append(0)
discourse_cnts.append(row)
discourse_cnts=pd.DataFrame(discourse_cnts,columns=['essay_id']+[f'{d}_count' for d in discourse_types])
#discourse_cnts
train_df=train_df.merge(discourse_cnts,how='left',on='essay_id')
train_df
#train_df
return train_df
</code></pre>
<p>Since stacking is fast, it works best when we use each fold's predictions with xgb separately and then average. For instance, because I have 6 folds of neural network models and 6 folds of xgb models, this way I have 6x6=36 preds to average for each single model.</p>
<h1>Best single models</h1>
<p>All our best single models were deberta-large/deberta-v3-large variants. For Tom and Kkiller, their best single models came from 1st round PL, whereas for me it came from 4th round PL.</p>
<table>
<thead>
<tr>
<th></th>
<th>Shujun</th>
<th>Tom</th>
<th>Kkiller</th>
</tr>
</thead>
<tbody>
<tr>
<td>Public LB</td>
<td>0.560</td>
<td>0.566</td>
<td>0.562</td>
</tr>
<tr>
<td>Private LB</td>
<td>0.558</td>
<td>0.571</td>
<td>0.562</td>
</tr>
<tr>
<td>Local CV</td>
<td>0.571</td>
<td>N/A</td>
<td>0.572</td>
</tr>
</tbody>
</table>
<h1>Some more tips/tricks</h1>
<p>For Tom, AWP was useful: he reported around a 0.003 CV improvement with AWP (eps=1e-3 and lr=1e-2 for large models, 1e-4 for xlarge models).</p>
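<p>For readers unfamiliar with AWP, below is a generic sketch of the adversarial-weight-perturbation step commonly used in Kaggle NLP pipelines, wired to the magnitudes mentioned above; the class and names are illustrative and each of our pipelines implements it slightly differently.</p>
<pre><code>import torch

class AWP:
    """Minimal Adversarial Weight Perturbation: after the normal backward pass, perturb
    (embedding) weights along the gradient, run one extra forward/backward on the
    perturbed weights, then restore them before optimizer.step()."""

    def __init__(self, model, adv_param="weight", adv_lr=1e-2, adv_eps=1e-3):
        self.model = model
        self.adv_param = adv_param
        self.adv_lr = adv_lr
        self.adv_eps = adv_eps
        self.backup = {}
        self.backup_eps = {}

    def attack_backward(self, inputs, labels, criterion):
        self._save()
        self._attack_step()
        adv_loss = criterion(self.model(**inputs), labels)  # loss on perturbed weights
        adv_loss.backward()
        self._restore()

    def _attack_step(self):
        e = 1e-6
        for name, param in self.model.named_parameters():
            if param.requires_grad and param.grad is not None and self.adv_param in name:
                norm_grad = torch.norm(param.grad)
                norm_data = torch.norm(param.data.detach())
                if norm_grad != 0 and not torch.isnan(norm_grad):
                    # step along the gradient, then clamp inside the eps-ball around the backup
                    r_at = self.adv_lr * param.grad / (norm_grad + e) * (norm_data + e)
                    param.data.add_(r_at)
                    param.data = torch.min(
                        torch.max(param.data, self.backup_eps[name][0]), self.backup_eps[name][1]
                    )

    def _save(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and param.grad is not None and self.adv_param in name:
                if name not in self.backup:
                    self.backup[name] = param.data.clone()
                    grad_eps = self.adv_eps * param.abs().detach()
                    self.backup_eps[name] = (self.backup[name] - grad_eps, self.backup[name] + grad_eps)

    def _restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup, self.backup_eps = {}, {}
</code></pre>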
<p>It was also important to split the data with <code>StratifiedGroupKFold</code> instead of <code>GroupKFold</code>. For me I started out with <code>GroupKFold</code> but found better correlation between cv/lb after switching to <code>StratifiedGroupKFold</code>.</p>
<p>For ensembling models with the same cv split, we used GP_minimize to find optimal weights and otherwise weights were determined arbitrarily.</p>
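<p>As a concrete illustration of the weight search, here is a minimal sketch using <code>gp_minimize</code> from scikit-optimize; <code>oof_preds</code> and <code>labels</code> are random placeholders, not the actual out-of-fold data.</p>
<pre><code>import numpy as np
from skopt import gp_minimize

# oof_preds: one (n_samples, 3) out-of-fold probability array per model (same CV split);
# labels: integer class ids. Both are placeholders here.
oof_preds = [np.random.dirichlet(np.ones(3), size=100) for _ in range(4)]
labels = np.random.randint(0, 3, size=100)

def blend_log_loss(weights):
    # normalize the candidate weights, blend the model probabilities, return multiclass log loss
    w = np.clip(np.array(weights), 0, None)
    w = w / (w.sum() + 1e-9)
    blend = np.tensordot(w, np.stack(oof_preds), axes=(0, 0))
    blend = np.clip(blend, 1e-15, 1.0)
    blend /= blend.sum(axis=1, keepdims=True)
    return -np.mean(np.log(blend[np.arange(len(labels)), labels]))

result = gp_minimize(blend_log_loss, dimensions=[(0.0, 1.0)] * len(oof_preds),
                     n_calls=60, random_state=42)
best_weights = np.array(result.x) / (np.sum(result.x) + 1e-9)
</code></pre>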
<p>Gradient accumulation was useful since we had to deal with very long sequences.</p>
<p>Best single model was my 5th round pl deberta-v3-large with public 0.56 and private 0.558</p>
<p><a href="https://www.kaggle.com/goldenlock" target="_blank">@goldenlock</a> Thanks for the kind words and congrats on getting gold to you as well. My split is similar to what Zacchaues mentioned, but I simply stratefied on effectiveness instead:</p>
<pre><code>from sklearn.model_selection import StratifiedGroupKFold

cv = StratifiedGroupKFold(n_splits=args.nfolds, shuffle=True, random_state=2022)
train['fold'] = -1
for fold, (train_idxs, test_idxs) in enumerate(cv.split(train, train['label'], train['essay_id'])):
    train.loc[test_idxs, 'fold'] = fold  # .loc avoids pandas chained-assignment pitfalls
</code></pre>
<p><a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a> Thanks and yes I use the same config for both first round training and multi-round pl training. The only difference is the number of epochs. Since I use a linear scheduler, I change the number of total steps to total pl label steps + total gt label steps. My teammates also do something similar: the training schedule changes a bit but nothing else</p>
<p><a href="https://www.kaggle.com/fankaixie" target="_blank">@fankaixie</a> Thansk and congrats on getting gold! Sorry about the confusion. I changed the text in question to</p>
<pre><code>For instance, because I have 6 folds of neural network models and 6 folds of xgb models, this way i have 6x6=36 preds to avg for each single model.
</code></pre>
<p>What I did in the beginning was average the neural network predictions and then use those predictions to generate features for the stacking models (1x6); however, later I found that it was better to get features for each fold model and use all folds of xgb models to make predictions for that fold (thus 6x6).</p>
<p>LMK if this clears it up! </p>
<p><a href="https://www.kaggle.com/yasso1" target="_blank">@yasso1</a> It is indeed similar to the first place solution. Only pl data ( from feedback 2021) is used in step 2. For me, the pseudo labeling procedure is mainly based on my experience with previous competitions, and the original inspiration comes from <a href="https://arxiv.org/pdf/1905.00546.pdf" target="_blank">https://arxiv.org/pdf/1905.00546.pdf</a></p>
<p>Yes you're correct</p>
<p><a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a> Np. Not too sure about how useful pl is for FB3 yet or how to use unlabeled data. Usually I'm not concerned about overfitting as long as val loss gets better, but indeed training loss becomes somewhat useless if you mix PL and GT labels</p>
<p><a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a> while what you said may be true, in many cases, difficult examples cannot be learned until train loss is much smaller than val loss. If you then assume you're splitting the data properly and your val loss is well correlated with test loss, it makes sense to keep training until val stops improving</p>
<p><a href="https://www.kaggle.com/yasso1" target="_blank">@yasso1</a> I didnt experiment too much with mixing pseudo labels except for initially, and the results were worse when mixing</p>
<p><a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a> You need to retrain the model from the original checkpoint you used (e.g. huggingface checkpoint)</p>
<p>Fixed it! Sorry about that</p> |
Feedback Prize - Predicting Effective Arguments | 3rd Place Solution - Span MLM + T5 Augmentations | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>no didn't face much issues, training was stable.</p>
<p>haven't decided on releasing the model yet…</p>
<p>yes, we used this to be specific</p>
<pre><code>from transformers import AutoConfig, AutoModelForMaskedLM
from transformers.models.deberta_v2.modeling_deberta_v2 import DebertaV2OnlyMLMHead

# model
base_config = AutoConfig.from_pretrained(config["model_checkpoint"])
base_config.update(
{
# "vocab_size": len(tokenizer),
"max_position_embeddings": 1024,
}
)
model = AutoModelForMaskedLM.from_pretrained(config["model_checkpoint"], config=base_config)
model.deberta.resize_token_embeddings(len(tokenizer))
model.cls = DebertaV2OnlyMLMHead(base_config) # TODO: check impact
</code></pre>
<p>thanks! 😊</p>
<p>each model had a different accuracy on the span MLM; deberta-v3-large was around 45% accuracy, 16.x perplexity. LUKE, on the other hand, had ~25% accuracy</p>
<p>10% random replacement and 10% no change, same as in <code>DataCollatorForLanguageModeling</code></p>
<p>sorry, we don't have any immediate plan to make the code public. Thanks for your understanding.</p>
<p>Hi <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>, congratulations on winning the competition on both the efficiency and conventional tracks 🎉🎉🎉 We really enjoyed the fight and believe we pushed each other. On synthetic data, yes, I also think it partially compensated for pseudo labelling. However, we only went there because we couldn't make any sort of pseudo labelling work. Soft targets seem to have worked better; it was on our list of things to try but we never got to it.</p>
<p>sadly yes, we only tried many different ways with hard labels. Thanks for the explanation, that must be it! We got distracted in finding an explanation why it doesn't work, instead of making it work. Was partially influenced by prior experience where I mostly found soft and hard targets to perform equally. But every dataset + task have subtle differences.</p>
<p>thanks 😊 congratulations to you too!</p>
<p>yes, up to 750 token length. We used this from huggingface code for splitting the essays</p>
<pre><code> def group_texts(examples):
# Concatenate all texts
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
# Compute length of concatenated texts
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the last chunk if it's smaller than chunk_size
total_length = (total_length // chunk_size) * chunk_size
# Split by chunks of max_len
result = {
k: [t[i: i + chunk_size] for i in range(0, total_length, chunk_size)]
for k, t in concatenated_examples.items()
}
# Create a new labels column
result["labels"] = result["input_ids"].copy()
return result
</code></pre>
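<p>For completeness, an illustrative way this helper is usually wired up with the <code>datasets</code> library (the variables <code>essay_texts</code> and <code>tokenizer</code> are assumptions, not the exact code):</p>
<pre><code>from datasets import Dataset

chunk_size = 750  # the chunk length mentioned here; the main write-up above cites 720

raw_ds = Dataset.from_dict({"text": essay_texts})   # essay_texts: assumed list of essay strings
tokenized_ds = raw_ds.map(
    lambda batch: tokenizer(batch["text"]),         # tokenizer: the MLM tokenizer from above
    batched=True,
    remove_columns=["text"],
)
lm_ds = tokenized_ds.map(group_texts, batched=True)  # concatenate, then split into chunk_size pieces
</code></pre>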
<p>Thanks <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> </p>
<blockquote>
<p>What compute did you use? AWP + Deberta with long sequences in full precision sounds like it would take quite a long time.</p>
</blockquote>
<p>We mainly used A100s and A6000s. Training deberta-v3-large took around 7-8 hours for each fold; it was almost at the GPU memory limit for our training setting.</p>
<blockquote>
<p>I tried a higher masking rate (per Should You Mask 15% in Masked Language Modeling? ) for NBME but I didn't see an improvement. Did you see it helping without doing the long span masking?</p>
</blockquote>
<p>We don't have results on higher masking rates for regular MLM in this competition. In NBME, I used 20%, which was better than 15%. Here, we switched to span MLM quite early as the boost was amazing. Initially we tried 20%, then went to 30% and then up to 50%, each time with a noticeable boost in CV and LB.</p>
<p>In addition, at a later stage of the competition we found the backbone gets better and better with longer MLM training. Eventually we were running MLM training for up to 50 epochs (up from ~15 epochs earlier); not sure how far we could have pushed it.</p>
<p>For some models (e.g. deberta-v2-xlarge) span MLM didn't work; the model broke down and couldn't cross 4% accuracy…</p>
<p>thanks <a href="https://www.kaggle.com/utsavsavaliya" target="_blank">@utsavsavaliya</a>!</p>
<p>A big thanks to Kaggle & the competition hosts for introducing the <code>Efficiency Track</code>, which is an amazing addition and will surely lead to more creative solutions. This is a detailed version of our solution for the <code>Feedback Prize - Predicting Effective Arguments</code> competition. </p>
<h1>Links</h1>
<ul>
<li>TL;DR: <a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347371" target="_blank">https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347371</a></li>
<li>Code: <a href="https://github.com/rbiswasfc/kaggle-feedback-effectiveness-3rd-place-solution" target="_blank">https://github.com/rbiswasfc/kaggle-feedback-effectiveness-3rd-place-solution</a></li>
<li>Inference Notebook: <a href="https://www.kaggle.com/code/conjuring92/ens58-lstm-lgb-all/notebook?scriptVersionId=104014668" target="_blank">https://www.kaggle.com/code/conjuring92/ens58-lstm-lgb-all/notebook?scriptVersionId=104014668</a></li>
</ul>
<h1>Model Architecture</h1>
<p>Overall, we followed the span classification approach with model architecture as below<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F1b6b3d2fbb7551da6de6fe45f3f00f94%2FScreenshot%202022-08-24%20at%2010.40.29%20AM.png?generation=1661308851093353&alt=media" alt=""></p>
<h1>Pre-Processing</h1>
<p>We pre-processed each essay to insert newly added span start and span end tokens for each discourse type. To provide more context to our models, we added a topic segment in this format <code>[TOPIC] <prompt_text> [TOPIC END] <essay></code>. As a minor detail, we inserted <code>[SOE]</code> and <code>[EOE]</code> tokens to indicate essay start and end. As an illustration, here is an example:</p>
<pre><code>[TOPIC] Should computers read the emotional expressions of students in a classroom? [TOPIC_END] [SOE] Lead [LEAD] Imagine being told about a new new technology that reads emotions. Now imagine this, imagine being able to tell how someone truly feels about their emotional expressions because they don't really know how they really feel. [LEAD_END] In this essay I am going to tell you about my opinions on why Position [POSITION] I believe that the value of using technology to read student's emotional expressions is over the top. [POSITION_END]
...
Concluding Statement [CONCLUDING_STATEMENT] In conclusion I think this is not a very good idea, but not in the way students should be watched. The way teachers are able to read students' facial expressions tells them how they feel. I don't believe it's important to look at students faces when they can fake their emotions. But, if the teacher is watching you then they're gonna get angry. This is how I feel on this topic. [CONCLUDING_STATEMENT_END] [EOE]
</code></pre>
<h1>Span MLM</h1>
<p>To train the newly added tokens (e.g. <code>[LEAD]</code>, <code>[POSITION]</code>) and adapt to the specific task domain (i.e. student essays from grades 6-12), we continued the pre-training phase of each backbone (e.g. <code>deberta-v3-large</code>) with a masked language modelling (MLM) objective. While standard MLM works alright, we found a big boost with the following modifications:</p>
<ul>
<li>Changing the masking probability to 40-50% instead of the typical 15%. For a detailed analysis of masking rate, please refer to this: <strong>Should You Mask 15% in Masked Language Modeling?</strong> <a href="https://arxiv.org/abs/2202.08005" target="_blank">https://arxiv.org/abs/2202.08005</a></li>
<li>Masking contiguous tokens of length 3-15 instead of the regular random masking approach (see the sketch after this list). Our motivation came from this paper: <strong>SpanBERT: Improving Pre-training by Representing and Predicting Spans</strong> <a href="https://arxiv.org/abs/1907.10529" target="_blank">https://arxiv.org/abs/1907.10529</a></li>
<li>Changing the chunk size / max length to 720 to match the average essay length in the fine-tuning stage</li>
</ul>
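<p>The collator itself isn't shown here, so below is a minimal sketch of contiguous-span masking under the settings just listed (40-50% masking, span lengths 3-15). It is illustrative only: special tokens are not excluded and the usual 10% random-replace / 10% keep split is omitted for brevity.</p>
<pre><code>import random
import torch

def span_mask(input_ids, tokenizer, mask_prob=0.45, min_span=3, max_span=15):
    """Mask contiguous spans until roughly mask_prob of the tokens are covered.
    Every selected token becomes [MASK] in this sketch."""
    input_ids = input_ids.clone()
    labels = torch.full_like(input_ids, -100)      # -100 = ignored by the MLM loss
    n_tokens = input_ids.size(0)
    budget = int(n_tokens * mask_prob)
    covered = torch.zeros(n_tokens, dtype=torch.bool)
    while covered.sum() < budget:
        span_len = random.randint(min_span, max_span)
        start = random.randrange(0, max(1, n_tokens - span_len))
        sel = slice(start, start + span_len)
        labels[sel] = input_ids[sel]               # predict the original tokens of the span
        input_ids[sel] = tokenizer.mask_token_id
        covered[sel] = True
    return input_ids, labels
</code></pre>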
<p>Here we used the ~15k essays from the 2021 feedback competition. In addition, we added ~25k synthetically generated essays by leveraging T5-large model. More on this in the next section.</p>
<h1>T5 Augmentations</h1>
<p>Since we couldn't find external data sources that worked for us, we explored ways to create our own augmented / synthetic data / essays. To this end, we adopted two approaches</p>
<p><strong>Label Preserving Training Data Augmentation with T5</strong> Here we used the 2022 labelled data to generate synthetic examples, which are directly added to the train set during the fine-tuning step. For this purpose, we first train a T5-large model on a seq2seq (text-to-text) generation task. To be specific, this was the task input-output format:</p>
<ul>
<li>Input Template: <code>Generate text for <discourse effectiveness> <discourse type> | Prompt: <prompt text> | Left Context: <left context> | Right Context: <right context></code>, where left context is essay up to the discourse element we try to generate and right context is anything after the discourse element</li>
<li>Output Template: <code><discourse text></code> </li>
</ul>
<p>Here is one example:</p>
<p><code>Generate text for Ineffective Evidence | Prompt: Should the electoral college be abolished in favor of popular vote? | Left Context:Dear, State Senator\n\nWith the elctoral college vote most of people are not getting what they prefer. For the the electoral college vote, voters vote fro not the president, but for a not slate of electors. <....> majority of the people would rather have the popular vote.| Right Context: the electoral college election consist of the 538 electors. All you have to do is win most of those votes and you win the election. <.....> The electoral college is unfair, outdated, and irrational to all the people</code><br>
--><br>
<code>It does not make any sense to me how the president with the votes does not win the presidential election. The electoral college is unfair for all voters that vote. When you think of a vote and how it works you would think by the most one with the most votes wins cause that pretty much means that most of the people would rather have the most wins over the win that had less votes but more electoral votes.</code></p>
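<p>A minimal sketch of how such a seq2seq pair can be assembled from one labelled discourse row (column names follow the competition's <code>train.csv</code>; the helper is illustrative and glosses over truncation details). The resulting (source, target) pairs are then fed to a standard T5 fine-tuning loop.</p>
<pre><code>def build_t5_example(row, essay_text, prompt_text):
    """Build the input/output template described above for one discourse row."""
    start = essay_text.find(row["discourse_text"])
    left_context = essay_text[:start]
    right_context = essay_text[start + len(row["discourse_text"]):]
    source = (
        f"Generate text for {row['discourse_effectiveness']} {row['discourse_type']} | "
        f"Prompt: {prompt_text} | "
        f"Left Context: {left_context} | "
        f"Right Context: {right_context}"
    )
    target = row["discourse_text"]
    return source, target
</code></pre>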
<p>The trained T5 model was pretty amazing; we couldn't distinguish which text was generated and which was original. The augmentations copied the students' writing styles (identity masks, for example) and included effects such as the one mentioned by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> in the previous competition discussion:</p>
<blockquote>
<p>just imagine you are asked to write the essay for homework. you are a bad student and want to copy other people work. you want to do so, such that that your teacher will not know that you are copying other people's work.<br>
you can:<br>
1) choose an essay of the train data. replace the claim from another essay. (if the other essay has the same position as the first essay)</p>
</blockquote>
<p>Reference: <strong>Data Augmentation using Pre-trained Transformer Models</strong> <a href="https://arxiv.org/abs/2003.02245" target="_blank">https://arxiv.org/abs/2003.02245</a></p>
<p><strong>T5: Non-Label Preserving Essay Augmentation</strong><br>
We used this mainly to generate essays for MLM. It's quite similar to the previous augmentation task, with minor changes:</p>
<ul>
<li>uses all ~15k essays from feedback 2021 dataset</li>
<li>model only sees left context </li>
<li>no discourse effectiveness label is given in input prompt</li>
</ul>
<h1>Fine-tuning</h1>
<p>In the fine-tuning stage, we mixed varying degrees of T5 label-preserving augmented data (0-50% of the original essays) with the actual data. This was pretty much standard, with the following key highlights:</p>
<ul>
<li>Inclusion of prompts in essay: This helped models to boost performance on Lead and Position discourse types</li>
<li>Adversarial Training with AWP</li>
<li>Mask Augmentations (10-25%)</li>
<li>Multi-sample dropouts</li>
<li>Multi-head attention over span representations</li>
<li>LSTM over transformer encoder representations</li>
<li>Loss = CE Loss + Focal loss</li>
<li>Cosine scheduler</li>
<li>Layer wise learning rate decay</li>
<li>Mean pooling to extract span representations as opposed to [CLS] token based ones (a sketch of this head follows the list)</li>
</ul>
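<p>A rough sketch of the span head referenced above (mean pooling of span tokens plus multi-sample dropout); the multi-head attention and LSTM layers are omitted, and all names and sizes are illustrative.</p>
<pre><code>import torch
import torch.nn as nn

class SpanHead(nn.Module):
    """Mean-pool the token states inside each discourse span, then average the
    logits over several dropout samples (multi-sample dropout)."""
    def __init__(self, hidden_size, num_labels=3, n_dropout=5, p=0.2):
        super().__init__()
        self.dropouts = nn.ModuleList([nn.Dropout(p) for _ in range(n_dropout)])
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states, span_mask):
        # hidden_states: (batch, seq_len, hidden); span_mask: (batch, n_spans, seq_len) in {0, 1}
        denom = span_mask.sum(-1, keepdim=True).clamp(min=1)
        span_repr = torch.einsum("bns,bsh->bnh", span_mask.float(), hidden_states) / denom
        logits = torch.stack([self.classifier(d(span_repr)) for d in self.dropouts]).mean(0)
        return logits  # (batch, n_spans, num_labels)
</code></pre>
<p>In training, the cross-entropy and focal losses mentioned above would then be applied to these span-level logits.</p>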
<h1>Model Backbones</h1>
<ul>
<li>DeBERTa-Large</li>
<li>DeBERTa-XLarge</li>
<li>DeBERTa-V3-Large</li>
<li>Longformer</li>
<li>LUKE</li>
</ul>
<h1>Ensembling</h1>
<p>We built two meta models based on OOF model probabilities + additional features</p>
<p><strong>LSTM based</strong></p>
<ul>
<li>Used only stage-1 model probabilities as features</li>
<li>Had better correlations with CV and LB</li>
</ul>
<p><strong>LGB based</strong></p>
<ul>
<li>Used stage-1 model probabilities as features + additional features</li>
<li>Count of next line per essay text</li>
<li>Count of discourse_id per essay_id</li>
<li>Pos tags features (NN, VB etc)</li>
</ul>
<h1>Things that had most impact</h1>
<ul>
<li>Span MLM: 0.02 to 0.03</li>
<li>AWP: 0.005 to 0.01</li>
<li>Prompts: 0.002 to 0.005</li>
<li>Direct use of T5 Augmentations: -0.002 to 0.005</li>
<li>Mask Augmentation + MSD: 0.002 - 0.005</li>
<li>LSTM + LGB ensemble: 0.002-0.004 </li>
</ul>
<h1>Things that added for diversity</h1>
<ul>
<li>LUKE model</li>
<li>Multitask objectives e.g. predicting discourse type, ordered encoding of labels e.g. ineffective -> [0, 0], adequate -> [1, 0], effective -> [1, 1]</li>
<li>Different sampling ratio of augmented data</li>
<li>Impact moderation of focal loss with its gamma parameter</li>
<li>Knowledge distillation</li>
</ul>
<h1>How to make training stable?</h1>
<p>We could train our models up to 6-7 epoch with best results obtained in penultimate/last epoch. The following helped in prolonging and stabilizing the training</p>
<ul>
<li>Careful selection of key params e.g. batch size, learning rate, weight decay</li>
<li>Cosine learning rate scheduler</li>
<li>Task adaptation with MLM</li>
<li>AWP</li>
<li>Mask augmentations + Multi-sample dropouts</li>
<li>Layer wise learning rate decay</li>
<li><strong>Full precision training</strong></li>
</ul>
<h1>Things that didn't work for us</h1>
<ul>
<li>Pseudo labelling / meta pseudo labelling (in hindsight, this was a big miss for us)</li>
<li>Random augmentation </li>
<li>UDA - Unsupervised Data Augmentations</li>
<li>Contrastive Learning</li>
<li>SWA</li>
<li>Mixout</li>
</ul>
<p>Thanks for reading this far! Hope this helps you in your future NLP competitions! Looking forward to your comments and feedbacks.</p>
<p>Our team ( <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> , <a href="https://www.kaggle.com/harshit92" target="_blank">@harshit92</a>) dynamics was brilliant throughout the competition. We had a lot of fun and learned so much together. Looking forward to future teaming up! </p>
<p>Thanks 😊</p>
<p>thanks <a href="https://www.kaggle.com/drpatrickchan" target="_blank">@drpatrickchan</a>!! congratulations to you too!</p>
<p>In MLM it was mostly same, but without the discourse type texts e.g. </p>
<p><code>[TOPIC] Should computers read the emotional expressions of students in a classroom? [TOPIC_END] [SOE] [LEAD] Imagine being told about a new new technology that reads emotions. Now imagine this, imagine being able to tell how someone truly feels about their emotional expressions because they don't really know how they really feel. [LEAD_END] In this essay I am going to tell you about my opinions on why [POSITION] I believe that the value of using technology to read student's emotional expressions is over the top. [POSITION_END]</code></p>
<p>i.e. the words <code>Lead</code> and <code>Position</code> are removed. We did this because we wanted models to learn meaning of discourse type tokens.</p>
<p>Thanks! </p>
<p>For contrastive learning please refer to this paper:</p>
<ul>
<li>SUPERVISED CONTRASTIVE LEARNING FOR PRE-TRAINED LANGUAGE MODEL FINE-TUNING (<a href="https://openreview.net/pdf?id=cu7IUiOhujH" target="_blank">https://openreview.net/pdf?id=cu7IUiOhujH</a>)</li>
</ul>
<p>Yes, a weight decay of 1e-2 works in most cases, but a few models are quite specific, e.g. ELECTRA recommends setting weight decay to 0. I normally decrease it when I'm training longer…</p>
<p>Congratulations <a href="https://www.kaggle.com/horsek" target="_blank">@horsek</a> for becoming Competition Master 🎉 and gold position in this competition!</p>
<p>For prompts we followed this great notebook: <a href="https://www.kaggle.com/code/lextoumbourou/feedback-topics-identification-with-bertopic" target="_blank">https://www.kaggle.com/code/lextoumbourou/feedback-topics-identification-with-bertopic</a> for topic identification with BERTopic model, then we had a mapping from topic to prompt e.g. <code>'seagoing luke animals cowboys': 'Should you join the Seagoing Cowboys program?'</code></p>
<p>thanks <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a>! </p>
<p>Thanks Sanyam! 😊 </p>
<p>Hi <a href="https://www.kaggle.com/nickycan" target="_blank">@nickycan</a>, for the span mlm part we just modified the <code>DataCollatorForLanguageModeling</code> from HF library. For the modification, we referred to SpanBert git repo, specifically here: <a href="https://github.com/facebookresearch/SpanBERT/blob/main/pretraining/fairseq/data/masking.py#L137" target="_blank">https://github.com/facebookresearch/SpanBERT/blob/main/pretraining/fairseq/data/masking.py#L137</a></p> |
Feedback Prize - Predicting Effective Arguments | 4th Place Solution - Team ... | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats all and thank you Georgia State University, Kaggle and all the organizers for hosting such an amazing competition. We were unfortunate to finish just short of a prize in both tracks of the competition, but will take the learnings with us!</p>
<h3>TL;DR</h3>
<p>Our solution is based on deberta-v3, mlm, pseudo labeling and stacking with LGB. We found the combination of a number of NLP improvements added up to a solid score. </p>
<h3>Pre/Data processing</h3>
<p>The core of our solution was to combine, in order, all discourse texts, and then start and finish each with its discourse type and special tokens. <br>
A cls_position marker was used at the start or end of each discourse. In Python, see below:</p>
<pre><code>input_ids = []
cls_pos = []
essay_df = trndf.loc[essay_id]
for i in range(len(essay_df)):
    text = f'{discourse_types[i]}: {discourse_texts[i]} {self.tokenizer.sep_token} :{discourse_types[i]}'
    i_ids = self.tokenizer.encode(text, add_special_tokens=False)
    input_ids += i_ids
    cls_pos += [0] * (len(i_ids) - 1) + [1]  # mark the last token of each discourse as its cls position
</code></pre>
<p>An example of the output can be seen below. </p>
<pre><code>Lead: Computers can do many things for us. The idea that a computer can read your emotions is amazing [SEP] :Lead Position: I do not belive it is true. I will believe it when I see it. [SEP] :Position Evidence: In paragraph 8 it says " But in a false smile, the mouth is stretched sideways ising the zygomatic major and different muscle, the risorius." however this may be true for most people; there has to be someone out there in our world of 7 billion that smiles there smiles with their zygomatic muscle or their risorius muscle [SEP] :Evidence Claim: Everyone has diffrent emotions and everyone shows them diffrently [SEP] :Claim Counterclaim: The muscles in our face does say a lot about the emotions we are feeling. [SEP] :Counterclaim Concluding Statement: This is why I believe computures can not read your emotions [SEP] :Concluding Statement
</code></pre>
<p>Auxiliary labels such as ranking (Ineffective -> Adequate -> Effective) and essay topic cluster were also used. <br>
We also added the essay text, which was not in annotated discourses to the end, and sometimes positioned in order between discourses. <br>
Replacing line breaks with special tokens and cleaning text in some models was found to increase diversity significantly in the overall blend. </p>
<h3>Level1 Models</h3>
<p>We used three different model pipelines, with some smaller changes in data preprocessing, along with different auxiliary labels and model heads. We only used <code>deberta-v3</code> as the backbone. Other backbones did not help our blend in CV. </p>
<h6>Model1,</h6>
<ul>
<li>Non discourse essay text moved to the back of input, and text cleaned with line breaks added. </li>
<li>Used auxiliary targets of rank (rmse loss) and essay topic. Lowered auxiliary weight, from 0.4 to 0.01, as model trained. </li>
<li>Linear head extracting the first token of each discourse in the input, with categorical crossentropy loss. </li>
</ul>
<h6>Model2,</h6>
<ul>
<li>Non discourse essay text between discourses. No cleaning or line breaks</li>
<li>Used auxiliary targets of rank (rmse loss). Weight of auxiliary loss ~0.2. </li>
<li>Linear head extracting the last token of each discourse in the input, with categorical crossentropy loss. </li>
</ul>
<h6>Model3,</h6>
<ul>
<li>Non discourse essay text between discourses. No cleaning or line breaks</li>
<li>Used auxiliary targets of rank (rmse loss). Weight of auxiliary loss ~0.2. </li>
<li>This was actually two models - one with an rnn head applied to the mean pooling of the tokens belonging to each discourse; and one with an attention head on the same input.</li>
</ul>
<p>It is important to remove deberta hidden layer dropout on all models. We also found it helpful to pretrain the weights on the earlier feedback competition essays for around 20 epochs (low lr, large batchsize). <br>
All models were trained with 2-3 epochs, with backbone set to lr ~2e-5 and the model head was set to ~1e-4. </p>
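<p>A minimal sketch of the two points above: turning off DeBERTa's hidden dropout via the config and giving the backbone and head different learning rates. The head here is just a placeholder linear layer, not the actual model head.</p>
<pre><code>import torch.nn as nn
from torch.optim import AdamW
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("microsoft/deberta-v3-large")
config.update({"hidden_dropout_prob": 0.0, "attention_probs_dropout_prob": 0.0})  # remove dropout
backbone = AutoModel.from_pretrained("microsoft/deberta-v3-large", config=config)

head = nn.Linear(config.hidden_size, 3)  # placeholder for the per-discourse head

# separate learning rates for backbone and head, roughly as described above
optimizer = AdamW([
    {"params": backbone.parameters(), "lr": 2e-5},
    {"params": head.parameters(), "lr": 1e-4},
])
</code></pre>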
<h3>Level2 Models</h3>
<p>We created meta pseudo labels on the essays from the first feedback competition (excl. current comp essays). In-fold predictions from all models were used to create an averaged in-fold prediction, which can be used as a leak-free pseudo label.<br>
Each model was then retrained on the hard labels from current competition and the soft pseudo labels. Hard labels from current competition were upweighted in the loss function. </p>
<h3>Stacking</h3>
<p>All level2 model predictions were averaged and used in a lightgbm stacking model. This was trained at discourse level, along with meta features from the respective essay, such as word count, sentence count, position in essay-topic and lead & lag features.<br>
No special postprocessing was applied after that. </p>
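<p>A minimal sketch of a discourse-level LightGBM stacker of this kind; the dataframe and feature names below are random placeholders rather than the exact feature set described above.</p>
<pre><code>import numpy as np
import pandas as pd
import lightgbm as lgb

# Placeholder frame: in practice this holds the averaged level-2 model probabilities
# plus meta features (word counts, position in essay, lead/lag features, ...).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "p_ineffective": rng.random(400), "p_adequate": rng.random(400), "p_effective": rng.random(400),
    "word_count": rng.integers(5, 200, 400), "position_in_essay": rng.integers(0, 10, 400),
    "label": rng.integers(0, 3, 400),
})
feature_cols = [c for c in df.columns if c != "label"]

params = {"objective": "multiclass", "num_class": 3, "metric": "multi_logloss",
          "learning_rate": 0.05, "num_leaves": 31, "verbosity": -1}
train_set = lgb.Dataset(df[feature_cols][:300], label=df["label"][:300])
valid_set = lgb.Dataset(df[feature_cols][300:], label=df["label"][300:])
booster = lgb.train(params, train_set, valid_sets=[valid_set], num_boost_round=500,
                    callbacks=[lgb.early_stopping(50, verbose=False)])
probs = booster.predict(df[feature_cols][300:])  # (n_rows, 3) stacked probabilities
</code></pre>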
<h3>What did not help</h3>
<ul>
<li>Other backbones such as deberta-v1, deberta-v2, t5, facebook-opt, distilled-bloom</li>
<li>2nd or third round pseudo labelling. </li>
<li>Training models for more epochs (this overfit; we should have tried AWP)</li>
<li>The character decoding/encoding used in the public scripts.</li>
<li>Model soup</li>
<li>Test time augmentation</li>
<li>Token dropout</li>
<li>etc. etc. </li>
</ul>
<p>Create a map, like <code>{'Ineffective': 0, 'Adequate': 1, 'Effective': 2}</code> for the target and now you have one continuous label instead of categorical labels. </p> |
Feedback Prize - Predicting Effective Arguments | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to competition organizers for hosting this competition and great teammates ( <a href="https://www.kaggle.com/shinomoriaoshi" target="_blank">@shinomoriaoshi</a>, <a href="https://www.kaggle.com/horsek" target="_blank">@horsek</a>, <a href="https://www.kaggle.com/runningz" target="_blank">@runningz</a>, <a href="https://www.kaggle.com/nickycan" target="_blank">@nickycan</a> ).<br>
Thanks also to the community for sharing many ideas in Notebook and Discussion.</p>
<p>Note: This post is a brief summary, and more detailed information will be updated or posted as a new topic by my teammates.</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347369" target="_blank">tri's pipeline</a></li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347379" target="_blank">housuke's pipeline</a></li>
</ul>
<h1>Summary</h1>
<p>We ensembled 6 token classification models and 1 seq classification model.<br>
<img src="https://user-images.githubusercontent.com/40639636/186302723-c501167e-a850-4fc6-93b0-583e3af72412.png" alt=""></p>
<h1>Models</h1>
<p>We trained following models and used them for final submission.</p>
<ul>
<li>tri( <a href="https://www.kaggle.com/shinomoriaoshi" target="_blank">@shinomoriaoshi</a> )'s pipeline<ul>
<li>token classification Deberta-v3-large</li></ul></li>
<li>housuke( <a href="https://www.kaggle.com/horsek" target="_blank">@horsek</a> )'s pipeline<ul>
<li>token classification Deberta-v3-large</li>
<li>token classification Deberta-large</li></ul></li>
<li>nakama( <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> )'s pipeline<ul>
<li>token classification Deberta-v3-large</li>
<li>seq classification Deberta-v3-large</li></ul></li>
<li>RunningZ ( <a href="https://www.kaggle.com/runningz" target="_blank">@runningz</a> )'s pipeline<ul>
<li>token classification Deberta-v3-large</li>
<li>token classification Deberta-v2-xlarge</li></ul></li>
<li>鲲 ( <a href="https://www.kaggle.com/nickycan" target="_blank">@nickycan</a> )'s pipeline<ul>
<li>mostly engaged on efficiency track</li></ul></li>
</ul>
<h1>Main methods that worked</h1>
<ul>
<li>MLM pretraining<ul>
<li><a href="https://arxiv.org/pdf/2004.10964.pdf" target="_blank">https://arxiv.org/pdf/2004.10964.pdf</a></li></ul></li>
<li>Resolve encoding error<ul>
<li>This method was used in previous 2021 Feedback Prize competition.</li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313330" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313330</a></li></ul></li>
<li>Mask augmentation (a sketch follows this list)<ul>
<li>This method was used in previous 2021 Feedback Prize competition.</li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313424" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313424</a></li></ul></li>
<li>Adversarial training (AWP, FGM)<ul>
<li>This method was used in previous 2021 Feedback Prize competition.</li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313177" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/discussion/313177</a></li></ul></li>
<li>Multi-sample dropout<ul>
<li>This method was used in Google QUEST Q&A Labeling competition.</li>
<li><a href="https://www.kaggle.com/competitions/google-quest-challenge/discussion/129840" target="_blank">https://www.kaggle.com/competitions/google-quest-challenge/discussion/129840</a></li></ul></li>
<li>Add GRU layer</li>
<li>Label smoothing</li>
<li>Add discourse_type for each discourse_text</li>
<li>Add [head] [tail] tokens for each discourse_text</li>
<li>Back translation (worked only for RunningZ's pipeline)</li>
<li>Pseudo labeling</li>
</ul>
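<p>As referenced in the list, here is a minimal sketch of mask augmentation (token dropout) during fine-tuning; the masking probability and exclusion rules varied between our pipelines, so treat the numbers as illustrative.</p>
<pre><code>import torch

def mask_augment(input_ids, special_tokens_mask, mask_token_id, p=0.15):
    """Randomly replace a fraction of non-special tokens with [MASK] at training time."""
    probs = torch.full(input_ids.shape, p)
    probs.masked_fill_(special_tokens_mask.bool(), 0.0)   # never mask special tokens
    mask = torch.bernoulli(probs).bool()
    augmented = input_ids.clone()
    augmented[mask] = mask_token_id
    return augmented
</code></pre>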
<h1>2nd stage Stacking using LSTM</h1>
<p>housuke( <a href="https://www.kaggle.com/horsek" target="_blank">@horsek</a> ) tried this early in the competition, and it worked very well.</p>
<h1>3rd stage Stacking using XGBoost</h1>
<p>After the 2nd stage Stacking, we applied 3rd stage Stacking using XGBoost, which improved the result a bit.</p>
<h1>Final Result</h1>
<p>CV: 0.5609<br>
Public LB: 0.555<br>
Private LB: 0.560</p> |
Feedback Prize - Predicting Effective Arguments | 72th short solution (2 stage model) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to everyone, and thanks to all Kagglers for sharing. This is my first solo silver medal. I learned a lot in this competition.</p>
<p>Brief solution as follows</p>
<ol>
<li>My model input like <a href="https://www.kaggle.com/code/nbroad/token-classification-approach-fpe" target="_blank">https://www.kaggle.com/code/nbroad/token-classification-approach-fpe</a>, thanks <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> </li>
<li>use XGB and LGB models and add some handcrafted features; this can improve your score by about 0.01</li>
</ol>
<p>final model :<br>
Deberta-V3-Large + Deberta-Large (1 stage model) <br>
Private : 0.597 Public 0.591</p>
<p>Deberta-V3-Large + Deberta-Large (2 stage model lgb+xgb) <br>
Private : 0.586 Public 0.586 (But I didn't select this as my final submission :( ) </p>
<p>Deberta-V3-Large + Deberta-Large + Roberta-Large (2 stage model only lgb) <br>
Private : 0.588 Public 0.585 </p>
<p>Handcrafted additional features as below:<br>
(reference: automated essay scoring papers)</p>
<pre><code>feature_col = [
    'adequate', 'effective', 'ineffective',  # deberta predicted probabilities
    'essay_len', 'discourse_type_amt_nunique', 'discourse_type_cnt',
    'discourse_type_shift', 'text_len',
    # textstat readability features
    'syllable_count',
    'flesch_reading_ease', 'flesch_kincaid_grade', 'fog_scale', 'smog',
    'automated_readability', 'coleman_liau', 'linsear_write',
    'dale_chall_readability', 'text_standard', 'total_words', 'stop_prop',
    'unique_words', 'spelling_mistake_count',
    'difficult_words', 'crawford',
    # pos tag features
    ',', '.', 'VB', 'JJR', 'WP',
    'PRP$', 'VBN', 'VBG', 'IN', 'CC', 'JJS', 'PRP', 'MD', 'WRB', 'RB',
    'VBD', 'RBR', 'VBZ', 'NNP', 'POS', 'WDT', 'DT', 'CD', 'NN', 'TO', 'JJ',
    'VBP', 'RP', 'NNS',
]
</code></pre>
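<p>A minimal sketch of how a few of these handcrafted features can be computed with <code>textstat</code> and <code>nltk</code>; it covers only a subset of the columns above (spelling-mistake counts, stopword proportion, etc. are omitted) and the exact implementation may differ.</p>
<pre><code>from collections import Counter

import nltk       # requires the 'punkt' and 'averaged_perceptron_tagger' downloads
import textstat

def readability_features(text):
    # a few of the textstat-based readability features listed above
    return {
        "syllable_count": textstat.syllable_count(text),
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "smog": textstat.smog_index(text),
        "coleman_liau": textstat.coleman_liau_index(text),
        "difficult_words": textstat.difficult_words(text),
    }

def pos_tag_counts(text):
    # counts of part-of-speech tags (NN, VB, ...) used as extra columns
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    return Counter(tags)
</code></pre>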
<p>I didn't try using only the deberta predicted probabilities in the 2-stage model, but in my local CV:</p>
<p>using only the deberta predictions in the 2-stage model (no additional features): CV 0.601~0.602</p>
<p>adding the additional features: CV 0.595~0.596</p>
<p>Sure, I picked out some automated essay scoring papers you can read:<br>
[<a href="https://arxiv.org/pdf/1901.07744.pdf" target="_blank">https://arxiv.org/pdf/1901.07744.pdf</a>]<br>
[<a href="https://arxiv.org/pdf/2008.01441.pdf" target="_blank">https://arxiv.org/pdf/2008.01441.pdf</a>]</p>
<p>Thanks a lot 😃<br>
Your notebooks are informative, and I learned a lot from them.</p>
Feedback Prize - Predicting Effective Arguments | 73th Place Solution(feature design, MLM, stacking) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank Kaggle and host for hosting such an interesting competition!</p>
<h1>Summary</h1>
<h4>1.feature design</h4>
<p>The score was boosted by appending the discourse_types and discourse_texts of the same essay_id to each discourse_text.<br>
Also, the score was better with discourse_type capitalized.<br>
<code>dis_type.upper() + ' ' + dis_text + ' ' + dis_type of same essay_id + ' ' + dis_text of same essay_id + ' ' + dis_type of same essay_id + ' ' + dis_text of same essay_id ...</code></p>
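<p>A minimal sketch of this feature design (illustrative only; it assumes the appended context excludes the target discourse itself, which the description above leaves open):</p>
<pre><code>def build_input(row, essay_df):
    """row: one row of train.csv; essay_df: all rows sharing row's essay_id, in document order."""
    others = essay_df[essay_df["discourse_id"] != row["discourse_id"]]
    context = " ".join(
        f"{t.upper()} {txt}"
        for t, txt in zip(others["discourse_type"], others["discourse_text"])
    )
    return f"{row['discourse_type'].upper()} {row['discourse_text']} {context}"
</code></pre>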
<h4>2.feedback2021 MLM</h4>
<p>The model was pre-trained with MLM after the feedback2021 dataset was converted to the feedback2022 baseline format with the same feature design applied.<br>
・before: raw feedback2021 essay_text</p>
<pre><code>dis_text,
dis_text,
dis_text, …
</code></pre>
<p>・after: feedback2022 baseline format & feature design</p>
<pre><code>dis_type.upper() + ' ' + dis_text + ' ' + dis_type of essay_id + ' ' + dis_text of essay_id + ' ' + dis_type of essay_id + ' ' + dis_text of essay_id …,
dis_type.upper() + ' ' + dis_text + ' ' + dis_type of essay_id + ' ' + dis_text of essay_id + ' ' + dis_type of essay_id + ' ' + dis_text of essay_id …,
dis_type.upper() + ' ' + dis_text + ' ' + dis_type of essay_id + ' ' + dis_text of essay_id + ' ' + dis_type of essay_id + ' ' + dis_text of essay_id …,
</code></pre>
<h4>3.stacking</h4>
<p>Stage 1: 4 deberta-v3-large models; stage 2: LGBM.<br>
The score was boosted by stacking, while the weighted average ensemble only boosted the score a little.<br>
model1 : feature design + MLM</p>
<pre><code>dis_type.upper() + ' ' + dis_text + '[SEP]' + dis_type of essay_id + ' ' + dis_text of essay_id + ' ' + dis_type of essay_id + ' ' + dis_text of essay_id ...
</code></pre>
<p>model2 : feature design + MLM</p>
<pre><code>dis_type.upper() + '[SEP]' + dis_text + '[SEP]' + dis_type of essay_id + ' ' + dis_text of essay_id + ' ' + dis_type of essay_id + ' ' + dis_text of essay_id ...
</code></pre>
<p>model3 : feature design</p>
<pre><code>dis_type.upper() + ' ' + dis_text + ' ' + dis_type of essay_id + '[SEP]' + dis_text of essay_id + ' ' + dis_type of essay_id + ' ' + dis_text of essay_id ...
</code></pre>
<p>model4 : feature design</p>
<pre><code>dis_type.upper() + ' ' + dis_text + '[SIGN]' + dis_types of essay_id + ' ' + dis_texts of essay_id
</code></pre>
<h1>Other</h1>
<p>・approach : text classification<br>
・model : only deberta v3 large<br>
・max length : 512<br>
・CV strategy :StratifiedGroupKFold<br>
・head : cls embedding<br>
・epoch : 2</p>
<h1>Score</h1>
<table>
<thead>
<tr>
<th>CV</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.5920</td>
<td>0.585</td>
<td>0.589</td>
</tr>
<tr>
<td>0.5919</td>
<td>0.586</td>
<td>0.590</td>
</tr>
</tbody>
</table> |
Feedback Prize - Predicting Effective Arguments | 80th Place Solution (Single Model) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>First of all, thanks to the competition organizers for a great competition, and all the people who published great notebooks, which everyone was able to learn from. Additionally, <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> <a href="https://www.kaggle.com/code/nbroad/token-classification-approach-fpe/notebook" target="_blank">notebook</a> was a gamechanger, because it not only improved results drastically, but also cut down runtime ten fold.</strong></p>
<p>My final solution only used a single model, as I didn't have enough time to ensemble multiple models. The model I used was of course, deberta large, as it consistently produced the best results (cv and lb). </p>
<h1>Training Process:</h1>
<p>As mentioned above, I used the token classification approach proposed by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> as it gave a major boost in CV and LB. Additionally, I changed the pooler to a weighted layer pooler, which seems to give the best results across many competitions/tasks. These changes helped me break 0.600 on the public leaderboard. To get the model down to 0.589 (0.590 private lb), I added an auxiliary task which stabilized training and also, of course, improved CV and LB. The auxiliary task was another classification task, where the model predicted the type of discourse in addition to its effectiveness. For example, given an essay, the model would predict what each discourse was, and its effectiveness.</p>
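<p>For illustration, here is a rough sketch of a weighted layer pooler together with the two classification heads (effectiveness plus the auxiliary discourse-type task); the layer-start index, hidden size and loss weighting are assumptions, not the exact training code.</p>
<pre><code>import torch
import torch.nn as nn

class WeightedLayerPooling(nn.Module):
    """Learnable weighted average over the hidden states of the encoder layers."""
    def __init__(self, num_hidden_layers, layer_start=4):
        super().__init__()
        self.layer_start = layer_start
        self.layer_weights = nn.Parameter(torch.ones(num_hidden_layers + 1 - layer_start))

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (batch, seq, hidden), one per layer (incl. embeddings)
        stacked = torch.stack(all_hidden_states[self.layer_start:], dim=0)
        w = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        return (w * stacked).sum(0)

hidden_size = 1024                               # assumed deberta-large hidden size
effectiveness_head = nn.Linear(hidden_size, 3)   # Ineffective / Adequate / Effective
discourse_type_head = nn.Linear(hidden_size, 7)  # auxiliary task: 7 discourse types
</code></pre>
<p>During training the two cross-entropy losses would typically be summed, with a smaller weight on the auxiliary term.</p>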
<h2>Things that worked:</h2>
<p>token classification<br>
multi-task learning<br>
weighted pooler<br>
increased epochs for scheduler</p>
<h2>Things that didn't work:</h2>
<p>mlm <br>
psuedo labeling (I had a feeling that the winners would be those who could figure out how to get this to work, but I never got convincing results in my experiments)<br>
contrastive learning<br>
using the last competitions task as an auxiliary task</p>
<h3>Conclusion</h3>
<p>I was very happy to medal in this competition, as this is my first competition medal. I am very curious to understand how the top solutions utilized the extra unlabeled data.<br>
Training code: <a href="https://www.kaggle.com/code/ryanbarretto/debertalargeauxtraining" target="_blank">Training Notebook</a> </p>
<p>For my private and public lb, it gave me a boost of around 0.01, which was pretty significant. </p> |
Feedback Prize - Predicting Effective Arguments | 88th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorrowfully 😪</p>
<p>Thank you!</p>
<p>Firstly, I would like to thank the competition's organizers for such a great competition and congrats to the winners, you made amazing efforts! Secondly, I would like to thank my great teammates: Mark ( <a href="https://www.kaggle.com/e0xextazy" target="_blank">@e0xextazy</a> ), Otabek ( <a href="https://www.kaggle.com/obatek" target="_blank">@obatek</a> ), and Sanskar ( <a href="https://www.kaggle.com/odins0n" target="_blank">@odins0n</a> )! Thank you guys for the time spent, the experience and the emotions!</p>
<p>Here, I am going to describe our final solution, which could reach 0.594 and 0.592 on Public Leaderboard and Private Leaderboard leaderboard respectively, thereby getting us to 88th place and giving us the bronze medal (I and Otabek will become Kaggle Experts 😄).</p>
<p>Unfortunately, in the middle of the competition, we switched to the AI4Code competition and when we switched back we only had ~1 week to try out some other approaches and ideas. We used only 4x models in our ensemble: <a href="https://www.kaggle.com/code/brandonhu0215/feedback-deberta-large-lb0-619" target="_blank">one of the high-scoring public notebooks</a>, Token Classification, a model with fine-tuned weights from the previous Feedback Prize competition, and a model with quite strong pre-processing. This model diversity steadily improved our Public Leaderboard score from 0.610 to 0.594. </p>
<h1>Cross-Validation</h1>
<p>For classification, we grouped by <code>essay_id</code> and stratified by <code>discourse_effectiveness</code>. Multi-label Stratified Cross-validation was used for Token Classification, i.e. stratifying by the count of each <code>discourse_effectiveness</code> in the given essay.</p>
<h1>Useful attempts</h1>
<ul>
<li>Longer training for 4-5 epochs</li>
<li>Using pre-trained weights from the previous Feedback Prize competition.<br>
We used <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s fine-tuned weights of the DeBERTa BiLSTM model.</li>
<li>Models diversity<br>
3x classification models 0.61 -> replace one classification model with token-classification model 0.598 -> add one more token classification model 0.594. </li>
</ul>
<h1>Useless attempts</h1>
<ul>
<li><p>Masked Language Modeling pre-training</p></li>
<li><p>Pseudo-Labeling</p>
<p>We tried it yesterday when we had quite a strong ensemble, but generating pseudo-labels and then training took ~20 hours, so we only tried Hard Pseudo-Labeling, which did not work well. Unfortunately, after reading other teams' solutions, I am very sad that we didn't have much time to try more approaches with Pseudo-Labeling, especially Soft Pseudo-Labeling, which could have given us the silver medal.</p></li>
</ul>
<h1>Code</h1>
<p>Final ensemble: <a href="https://www.kaggle.com/code/vad13irt/fp-final-ensemble/notebook" target="_blank">https://www.kaggle.com/code/vad13irt/fp-final-ensemble/notebook</a></p> |
Feedback Prize - Predicting Effective Arguments | Good Ineffective - 6th top solution: simple and effective | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, we trained a deberta-large based on feedback2, and then used the trained model to label the feedback1 data.</p>
<p>Thanks to the hosts for organizing this competition, and to all the teammates who equally shared the contribution to it.</p>
<p>I'd like to share our solution; it is a simple and effective one.</p>
<ol>
<li>Like other top teams, data preprocessing was the key for this competition. We added the element type before the element text, used the [SEP] token as a separator, integrated all the elements of an essay into one long sequence, and then predicted the classification label for each element. The following is a sample of the input data:</li>
</ol>
<p>[SEP]Lead. <em>Discourse_00</em>[SEP]Position. <em>Discourse_01</em>[SEP]Claim. <em>Discourse_02</em> …….</p>
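<p>A minimal sketch of how such a per-essay input string can be assembled (the helper and its arguments are illustrative, not the exact code):</p>
<pre><code>def build_essay_input(essay_df, sep_token="[SEP]"):
    """Concatenate every discourse element of one essay into a single long input,
    prefixing each element with its type (the format shown above)."""
    parts = [
        f"{sep_token}{row.discourse_type}. {row.discourse_text}"
        for row in essay_df.itertuples()
    ]
    return "".join(parts)
</code></pre>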
<ol>
<li><p>We then used deberta-base as the backbone to test different model architectures; we used multi-dropout in the final output layer.</p></li>
<li><p>we chose the <strong>DeBERTa series model</strong>, specifically using <strong>"microsoft/deberta-large"</strong> and <strong>"microsoft/deberta-v3-large"</strong> from the HuggingFace library.</p></li>
<li><p>AWP was also added in the training stage, which proved effective.</p></li>
<li><p>we also labeled the previous feedback1 training data as an external data source, which also improved the CV & LB</p></li>
<li><p>Final CV-5Fold and simple weighted fusion.</p></li>
</ol>
<p>The following are the LB results:</p>
<ol>
<li>deberta-base Baseline 5Fold,Public LB : 0.608;</li>
<li>used the whole essay as the input, Public LB : 0.589;</li>
<li>used deberta-v3-large,Public LB : 0.577;</li>
<li>introduced Pseudo Label,Public LB : 0.572;</li>
<li>introduced AWP training,Public LB : 0.570;</li>
<li>more fine-tuning of lr and AWP arguments, Public LB : 0.568;</li>
<li>simple average weight ensemble,(deberta-v3-large, deberta-large, deberta-xlarge)Public LB : 0.560;</li>
</ol>
<p>So impressed by your great job on AMEX!</p>
<p>Hope we can work together again in Feedback 3.</p>
<p>Are you switching to tabular competitions? Hope to learn with you in MSCI.</p>
Feedback Prize - Predicting Effective Arguments | Knowledge Injection Method Matters, 29th Writeup | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Like many top solutions, we tried the second stage GBDT model idea very early and it gave us a huge boost. However, the performance of our first-stage transformer model was much worse than the top teams'.</p>
<p>I used different token_type_ids to inject discourse type and start/end info into the model, and only added special tokens to represent the start/end of the whole essay. My input looks like the below:</p>
<pre><code>token_ids: [EBOS] other1 other2 lead1 lead2 other4 position1 position2 ... [EEOS]
token_type_ids: 0 0 0 1 1 0 2 2 ... 0
</code></pre>
<p>Top teams mostly inject that info into the token_ids sequence, for example, wrapping the discourse text with type_start and type_end special tokens. After the competition, I tried this preprocessing method and it indeed brought a significant performance gain.</p>
<h2>What worked for us</h2>
<ul>
<li>MLM pretraining</li>
<li>add topic info, then group by topic_id to aggregate scores as second-stage features</li>
<li>mean+max pooling gave me some boost</li>
<li>mean+max pooling of each discourse element as the element embedding, mean pooling of all the element embeddings of the same essay as the essay embedding, and concatenating [element embedding, essay embedding] before the prediction head (see the sketch after this list). This was inspired by the second-stage aggregation.</li>
</ul>
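<p>A rough sketch of that pooling scheme for a single essay (shapes and the masking value are illustrative, not the exact implementation):</p>
<pre><code>import torch

def element_and_essay_embeddings(hidden_states, span_mask):
    """hidden_states: (seq_len, hidden) for one essay; span_mask: (n_spans, seq_len) in {0, 1}.
    Mean+max pool each discourse span, mean-pool the span embeddings into an essay
    embedding, and concatenate the two before the prediction head."""
    denom = span_mask.sum(-1, keepdim=True).clamp(min=1)
    mean_pool = span_mask.float() @ hidden_states / denom                      # (n_spans, hidden)
    expanded = hidden_states.unsqueeze(0).expand(span_mask.size(0), -1, -1).clone()
    expanded[~span_mask.bool()] = -1e4                                         # exclude out-of-span tokens
    max_pool = expanded.max(dim=1).values                                      # (n_spans, hidden)
    element_emb = torch.cat([mean_pool, max_pool], dim=-1)                     # (n_spans, 2*hidden)
    essay_emb = element_emb.mean(dim=0, keepdim=True).expand_as(element_emb)   # shared per essay
    return torch.cat([element_emb, essay_emb], dim=-1)                         # (n_spans, 4*hidden)
</code></pre>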
<p>Special Thanks to Z by HP for sponsoring me a Z4 Workstation with dual A6000 GPUs.</p>
<p>topic id.<br>
We trained a classification model to predict the 15 topics.</p> |
Feedback Prize - Predicting Effective Arguments | Now You See Me: 7th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot to the host and Kaggle for this interesting competition and congrats to the winners.<br>
Our solution is a great collaborative effort.<br>
Thank you <a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a> <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> <a href="https://www.kaggle.com/ivanaerlic" target="_blank">@ivanaerlic</a> <a href="https://www.kaggle.com/tascj0" target="_blank">@tascj0</a> for teaming up.</p>
<h1>Summary</h1>
<p>Our solution is an ensemble of several transformer based models, which take a full essay (including special tokens) or a concatenation of all discourse_text in the essay as input.<br>
Key points of our solution are a powerful MLM pre-training and soft pseudo labeling on previous competition data. Our MLM pipeline always forced special tokens to be masked and predicted, which was important for detecting span boundaries.</p>
<h1>Cross-Validation</h1>
<p>For cross-validation we used MultilabelStratifiedKFold on discourse effectiveness and topic clusters to make an efficient split over essays.</p>
<p>These folds were really stable and CV to LB correlation was great.<br>
Most of our blending submissions have the same private and public lb score.</p>
<h1>Modeling</h1>
<p>Overall, we followed the token classification approach.<br>
Our main modeling approach has input as follows:</p>
<blockquote>
<p>[CLS] [cls_lead] Hi, i'm Isaac, i'm going to be writing about how this face on Mars is a natural landform or if there is life on Mars that made it. The story is about how NASA took a picture of Mars and a face was seen on the planet. NASA doesn't know if the landform was created by life on Mars, or if it is just a natural landform. [end_lead] [cls_position] On my perspective, I think that the face is a natural landform because I dont think that there is any life on Mars. In these next few paragraphs, I'll be talking about how I think that is is a natural landform [end_position]…..[SEP]</p>
</blockquote>
<p>We feed this sample to the backbone and take the cls embeddings, or mean pooling between the cls and end tokens, for each discourse text.<br>
cls embeddings worked better than mean pooling for us.</p>
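<p>A minimal sketch of gathering the per-discourse cls embeddings from the backbone output, assuming the positions of the [cls_*] tokens are known; all names and shapes are illustrative:</p>
<pre><code>import torch
import torch.nn as nn

class DiscourseClsHead(nn.Module):
    def __init__(self, hidden_size: int, num_classes: int = 3):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, hidden_states, cls_positions):
        """hidden_states: (1, seq_len, dim) for one essay,
        cls_positions: (n_discourse,) indices of the [cls_*] tokens."""
        cls_embs = hidden_states[0, cls_positions]   # (n_discourse, dim)
        return self.classifier(cls_embs)             # (n_discourse, num_classes)
</code></pre>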
<p><strong>Model Backbones</strong></p>
<ul>
<li>DeBERTa-Large</li>
<li>DeBERTa-XLarge</li>
<li>DeBERTa-V3-Large</li>
</ul>
<h1>What works</h1>
<ul>
<li>Pre-Training MLM</li>
<li>Soft pseudo labeling</li>
<li>Adversarial Weight Perturbation ( AWP)</li>
<li>Stochastic Weight Averaging (SWA)</li>
<li>Removing Dropout</li>
<li>Random mask augmentation (token dropout)</li>
</ul>
<h1>Efficiency solution</h1>
<p>Our best single model scored 0.562 with 8 min inference time.<br>
This single model finished 2nd place in the efficiency track and would make a gold medal score in the accuracy track.<br>
Unfortunately this model was not included in any blend of the final submission for accuracy.<br>
Please refer to <a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347488" target="_blank">this</a> post to read about our efficiency solution.</p>
<p>No NaNs for us.<br>
You can use the code below to identify the PyTorch modules producing NaNs.</p>
<pre><code>import torch

def nan_hook(name):
    # forward hook that reports the first module whose output is not finite
    def hook(m, input, output):
        # guard: some modules return tuples rather than tensors
        if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
            print("Invalid output in {}".format(name))
    return hook

for name, module in model.named_modules():
    module.register_forward_hook(nan_hook(name))
</code></pre> |
Feedback Prize - Predicting Effective Arguments | Private 23rd solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the competition organizers for this competition. Also, I would like to say thank you for all the people for providing great notebooks and discussions, especially <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a>. As a matter of fact, my best single model is heavily based on <a href="https://www.kaggle.com/code/nbroad/token-classification-approach-fpe" target="_blank">his kernel</a>.</p>
<h1>stage1: bert prediction</h1>
<p>Because the simple text classification approach did not score well and took too much inference time, we abandoned it and adopted a token classification approach. We trained models that differ in how the predictions are post-processed: either averaging the predictions of all tokens in the discourse_text, like the US PPPM 8th place solution, or just taking the prediction at the first sep token of the discourse_text. For the former type of model, we used a weighted cross-entropy loss, and this helped a lot.</p>
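<p>The exact class weights were not shared; a minimal sketch of a weighted cross-entropy loss for the three effectiveness classes (the weight values below are placeholders):</p>
<pre><code>import torch
import torch.nn as nn

# hypothetical per-class weights for [Ineffective, Adequate, Effective]
class_weights = torch.tensor([1.5, 0.8, 1.2])
criterion = nn.CrossEntropyLoss(weight=class_weights, ignore_index=-100)

# dummy token-level logits/labels just to show the call
logits = torch.randn(2, 128, 3)              # (batch, seq_len, n_classes)
labels = torch.randint(0, 3, (2, 128))
labels[:, :10] = -100                        # tokens outside any discourse are ignored
loss = criterion(logits.view(-1, 3), labels.view(-1))
</code></pre>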
<h1>stage2: lgb and xgb stacking and blending</h1>
<p>Our best public submission blends a Nelder-Mead weighted average with stacking using LightGBM and XGBoost at a ratio of 1 : 1 : 1. In the stacking stage we added features such as the length of the text, counts of each discourse type appearing in each essay, and the mean and std of each discourse label within the essay_text. The details of some models used in the ensemble are below.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>token</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-large</td>
<td>sep only</td>
<td>0.5892</td>
<td>0.577</td>
<td>0.580</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>text token mean</td>
<td>0.5907</td>
<td>0.579</td>
<td>0.587</td>
</tr>
<tr>
<td>deberta-large</td>
<td>sep only</td>
<td>0.5921</td>
<td>Not submitted</td>
<td>Not submitted</td>
</tr>
</tbody>
</table>
<h1>summary</h1>
<h2>useful attempt</h2>
<ul>
<li>Pseudo labeling using Feedback 2021 dataset<ul>
<li>Both CV and Public LB decreased by about 0.01.</li></ul></li>
<li>AWP (eps: 1e-4, lr: 1.0; CV decreased by about 0.003)</li>
<li>LightGBM and XGBoost stacking and some feature engineering</li>
<li>segmented prediction and splicing like Feedback 2021 1st place solution</li>
<li>ensembling different models that used different inference ways (averaging predictions of all tokens in the texts or just using prediction of the first sep token of the texts)</li>
</ul>
<p>In blending, we decided the weights for each model's predictions with the code below.</p>
<pre><code>import numpy as np
import scipy as sp
import scipy.optimize  # makes sp.optimize available

# `weights` is assumed to be initialized beforehand, e.g. equal weights per model
weights = np.ones(NUM_MODELS) / NUM_MODELS

weights_list = []
for i in range(LOOP):
    for fold in range(NUM_FOLDS):
        train_idx = (oof["fold"] != fold) & (oof["fold"].isin(CFG.trn_fold))
        valid_idx = (oof["fold"] == fold) & (oof["fold"].isin(CFG.trn_fold))
        res = sp.optimize.minimize(
            get_weighted_score,
            weights,
            args=(train_idx,),  # args must be a tuple
            method="Nelder-Mead",
            bounds=tuple((0, np.inf) for _ in range(NUM_MODELS)),
            tol=1e-6
        )
        print("score:", res.fun)
        print("weights:", res.x)
        weights_list.append(res.x)

weights = np.array(weights_list).mean(0)
print(weights)
</code></pre> |
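<p>The helper <code>get_weighted_score</code> is not shown in the post; a minimal sketch of what it might compute (the multi-class log loss of the weighted blend of out-of-fold predictions on the given indices) could look like this, with placeholder data:</p>
<pre><code>import numpy as np
from sklearn.metrics import log_loss

# placeholders: stacked out-of-fold probabilities (n_models, n_samples, 3) and targets
oof_preds = np.random.dirichlet(np.ones(3), size=(3, 100))
oof_labels = np.random.randint(0, 3, size=100)

def get_weighted_score(weights, idx):
    """Hypothetical objective: log loss of the weighted blend on the rows selected by idx."""
    w = np.clip(weights, 0, None)
    blend = np.tensordot(w, oof_preds[:, idx], axes=(0, 0))
    blend /= blend.sum(axis=1, keepdims=True)    # renormalize to probabilities
    return log_loss(oof_labels[idx], blend, labels=[0, 1, 2])
</code></pre>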
Feedback Prize - Predicting Effective Arguments | Team Hydrogen: 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! The performance of the ensemble of the 1st level models on the Private LB is 0.558. So, the 2nd level LightGBM and NN bring an extra 0.004. And our best single model is the most <a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347537" target="_blank">efficient</a> one, with the same score of 0.558 without 2nd level models.</p>
<p>Thank you! There were only a few things that worked for us. We didn't use smaller backbones for the tuning, but directly optimized deberta-(v3)-large; for some models we used different learning rates for the backbone and the head; for some models we changed dropout; and for some we applied masking augmentation. An interesting thing is that there is no intermediate dropout in the backbone of our most efficient model; it made results considerably better.</p>
<p>Thank you! We will be preparing our solution for the competition organizers, and we might open source it afterwards. However, you can already see the main ideas in our inference kernel here: <a href="https://www.kaggle.com/code/philippsinger/team-hydrogen-efficiency-prize-1st-place" target="_blank">https://www.kaggle.com/code/philippsinger/team-hydrogen-efficiency-prize-1st-place</a></p>
<p>Thanks a lot to the hosts and Kaggle for hosting this interesting competition, we had great fun working on both the accuracy and efficiency tracks. Also congratulations to all other competitors for the great solutions and results. Our solution is the result of perfect teamwork.</p>
<h1>Summary</h1>
<p>Our solution is based on a two-stage approach ensembling multiple transformer-based models with different techniques and adding second level models on top. We additionally employ multiple rounds of pseudo tagging and add pseudo labels with different techniques to our models.</p>
<h1>Cross validation</h1>
<p>Throughout this competition we had near perfect correlation between CV and LB. Whenever we saw some improvement on CV, we saw it reflected in a similar manner on the LB within a very small random range. For splitting the folds, we just used an effectiveness-stratified split on essays.</p>
<p>As the data is small to medium size and the metric is log loss, the scores can vary between different runs. This is typical for deep learning models as they are quite dependent on the seed at hand that influences weight initializations, batching, or augmentations. Yet, this means one should not judge model performance on single seeds, and it is better to always evaluate on multiple seeds.</p>
<p>Given that model training was quite fast, we thus only relied on checking blends of 3 seeds for each model. Also, single model scores did not correlate well here with their ability to blend into larger ensembles. So a better individual model could have quite a worse performance in the blend, diversity really mattered here. Consequently, we also always checked models in the blend, even if they did not seem too promising on an individual basis. Similar to how we checked CV, we then always subbed a blend of 3 seeds of models trained on the full data.</p>
<p>Our correlation looked like follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F304662%2Fbdd644ba5cf65aba562fa2a2611e6183%2FLB.png?generation=1661353133551573&alt=media"></p>
<h1>Modeling</h1>
<p>Our final solution is a combination of different modeling approaches. Most of them are based on the idea of training on all discourses from a single essay at the same time. This not only made training and inference much faster, but also improved accuracy significantly. In the following we want to describe our main approaches in more detail. For backbones, we could only get deberta-(v3)-large to work. Other backbones did not improve the ensemble.</p>
<h4>Essay group model</h4>
<p>The main idea of this approach is to feed a full essay into the model, and pool each discourse separately and then feed it through the final linear layer for prediction. The main approach here is similar to what others shared, but there are some peculiarities and different sub-approaches. </p>
<p>Our main version has an input as follows:</p>
<pre><code>Lead Position Claim Evidence Counterclaim Rebuttal Evidence Counterclaim Concluding Statement [SEP] [START] Hi, i'm Isaac, i'm going to be writing about how this face on Mars is a natural landform or if there is life on Mars that made it. The story is about how NASA took a picture of Mars and a face was seen on the planet. NASA doesn't know if the landform was created by life on Mars, or if it is just a natural landform. [END] [START] On my perspective, I think that the face is a natural landform because I dont think that there is any life on Mars. In these next few paragraphs, I'll be talking about how I think that is is a natural landform [END] … more text follows here
</code></pre>
<p>As mentioned, one sample includes one essay. We start by adding a list of all types of the discourses in the essay with a SEP token and then we mark the individual discourses with custom START and END tokens. We then run this sample through the backbone, and pool between START and END tokens for each discourse. The input batch size is always 1, and this gets transformed to a batch size that depends on the number of discourses within the essay. These pooled embeddings then run through a final linear layer predicting the class.</p>
<p>You can see that in this example we do not specifically add the type to each discourse, but we use an additional auxiliary loss to predict the type of each one. This helped with regularizing the model and allowed for a bit longer training.</p>
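<p>A minimal sketch of this pooling-between-markers idea with an auxiliary discourse-type head; the shapes, pooling choice and layer names are illustrative, not the exact implementation:</p>
<pre><code>import torch
import torch.nn as nn

class EssayGroupHead(nn.Module):
    def __init__(self, hidden_size, n_eff=3, n_types=7):
        super().__init__()
        self.eff_head = nn.Linear(hidden_size, n_eff)     # effectiveness classes
        self.type_head = nn.Linear(hidden_size, n_types)  # auxiliary: discourse type

    def forward(self, hidden_states, start_idx, end_idx):
        """hidden_states: (1, seq_len, dim) for one essay;
        start_idx/end_idx: (n_discourse,) positions of the [START]/[END] tokens."""
        pooled = []
        for s, e in zip(start_idx.tolist(), end_idx.tolist()):
            pooled.append(hidden_states[0, s:e + 1].mean(dim=0))  # pool between markers
        pooled = torch.stack(pooled)                              # (n_discourse, dim)
        return self.eff_head(pooled), self.type_head(pooled)
</code></pre>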
<p>An additional sub-approach does not have this auxiliary loss and trains a model based on the following input:</p>
<pre><code>Lead Position Claim Evidence Counterclaim Rebuttal Evidence Counterclaim Concluding Statement [SEP] [START_Lead] Hi, i'm Isaac, i'm going to be writing about how this face on Mars is a natural landform or if there is life on Mars that made it. The story is about how NASA took a picture of Mars and a face was seen on the planet. NASA doesn't know if the landform was created by life on Mars, or if it is just a natural landform. [END_Lead] [START_Position] On my perspective, I think that the face is a natural landform because I dont think that there is any life on Mars. In these next few paragraphs, I'll be talking about how I think that is is a natural landform [END_Position] … more text follows here
</code></pre>
<p>While the latter approach was better individually on CV, the former approach blended significantly better in our large ensemble.</p>
<h4>Token classification</h4>
<p>In this approach the chunks of the essays are treated as separate tokens. Individual chunks either get the effectiveness label, or the “O” label if they are outside of the annotated essay text. The subsequent process is similar to the idea above: pass the whole essay through the backbone, apply local average pooling for each chunk and add a dense classification layer on top. The input essay didn’t have any extra special tokens apart from the discourse type in front of all the chunks.</p>
<h4>Further models</h4>
<p>For diversity, we added the following models with minor impact to our solution:</p>
<ul>
<li>Simple Deberta classification on Discourse input only</li>
<li>Bag-of-words LightGBM model </li>
</ul>
<h1>Regularization and hyperparameter tuning</h1>
<p>We spent significant efforts on hyperparameter tuning and playing with various regularization techniques. We implemented quite a few augmentation techniques, but as always they were not really helpful for NLP finetuning. Only mask augmentations worked with decent results as it was bringing some diversity for the ensemble. Hyperparameter tuning was very important though, and it was time well spent.</p>
<h1>Adjustment, ensembling & 2nd level models</h1>
<p>The log loss metric is only optimal if the mean prediction per column matches the mean of the corresponding target column. Our different models (specifically if trained in a batch-wise manner on essays) do not always reflect this mean very well, which is why we added an additional optimization after each model to adjust to the train mean. This additionally makes the scores more reliable and comparable. We then also employ these learned scaling factors on LB.</p>
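<p>A minimal sketch of such an adjustment (one scaling factor per class, fitted on training predictions by minimizing log loss and then reused at inference); the exact parameterization used by the team is not known, so this is only an illustration:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import log_loss

def fit_class_scalers(probs, labels):
    """probs: (n, 3) predicted probabilities, labels: (n,) integer targets."""
    def objective(log_s):
        scaled = probs * np.exp(log_s)               # per-class scaling factors
        scaled = scaled / scaled.sum(axis=1, keepdims=True)
        return log_loss(labels, scaled, labels=[0, 1, 2])
    res = minimize(objective, np.zeros(3), method="Nelder-Mead")
    return np.exp(res.x)

def apply_class_scalers(probs, scalers):
    scaled = probs * scalers
    return scaled / scaled.sum(axis=1, keepdims=True)
</code></pre>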
<p>For ensembling different models we resorted to directly optimizing the blending weights between the models. Interestingly, we also had several models with negative weights, but this worked for us both on CV as well as LB.</p>
<p>We additionally trained several 2nd level models to further improve our predictions.</p>
<h4>LightGBMs</h4>
<p>For the 2nd level LightGBM model we took the weighted ensemble prediction, together with individual models predictions, and generated some aggregate features based on the whole essay. We trained 2 LightGBM versions with different features and parameters.</p>
<h4>Neural networks</h4>
<p>We tuned two types of neural networks here. The first takes the weighted ensemble prediction, as well as an average across the essay and across the type within an essay for each of the three target columns as input and trains a three-layer DNN. The second one takes the same features, but on an individual model basis and then uses a three-layer Conv1d with average pooling afterwards.</p>
<p>All together, 2nd level models were consistently bringing us about 0.003-0.005 points on CV and the leaderboard throughout the competition.</p>
<h1>Pseudo labels</h1>
<p>Another major part of our solution is pseudo labeling. We applied 3 stages of pseudo labeling on the extra data from the previous Feedback competition. It was done in a leak-free manner for the individual folds and additionally for our models trained on all the data (6 versions of pseudo labels in total). The process consisted of the following steps:</p>
<ol>
<li>Train an ensemble of models only on the given train data</li>
<li>Run predictions on the previous Feedback competition data using our full 2-stage pipeline</li>
<li>Use soft pseudo labels from this extra dataset and apply it to modeling in two different ways:<ul>
<li>Concatenate pseudo labels with the actual labels in the given train data, and train simultaneously on all this data</li>
<li>Pre-train models on the pseudo labels and finetune it only on the given train data afterwards. Similar to: <a href="https://arxiv.org/abs/1904.04445" target="_blank">https://arxiv.org/abs/1904.04445</a></li></ul></li>
<li>Repeat steps 1-3 three times using an ensemble of models trained on pseudo labels now</li>
</ol>
<p>Apart from using previous Feedback competition data for pseudo labels, it was also used in some models as a pre-training dataset. The model was warmed up on the old data predicting the type of the chunk and further finetuned on the given train data.</p>
<h1>Efficiency solution</h1>
<ul>
<li>Please refer to <a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347537" target="_blank">this</a> post to read about our efficiency solution.</li>
<li>And <a href="https://www.kaggle.com/code/philippsinger/team-hydrogen-efficiency-prize-1st-place" target="_blank">here</a> you can find our most efficient kernel that gets 0.557 Private LB scores in 5 minutes and 40 seconds!</li>
</ul>
<h3>Extra links</h3>
<ul>
<li><a href="https://www.kaggle.com/code/ybabakhin/team-hydrogen-1st-place" target="_blank">Here</a> you can find our final inference kernel</li>
<li><a href="https://github.com/ybabakhin/kaggle-feedback-effectiveness-1st-place-solution" target="_blank">Here</a> you can find our code to train the models</li>
</ul> |
Feedback Prize - Predicting Effective Arguments | [11th solution] Feature design and Knowledge distillation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Feedback Prize - Predicting Effective Arguments
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the organizers to host this interesting competition and thanks to community for those great ideas which inspired me a lot. </p>
<h2>Methods that work</h2>
<ul>
<li>Pretraining</li>
<li>Feature design</li>
<li>Pseudo labeling</li>
<li>Knowledge distillation</li>
</ul>
<h2>Transformer Model</h2>
<ul>
<li>Deberta-v3-large</li>
<li>Deberta-large</li>
<li>Deberta-xlarge</li>
</ul>
<h2>Feature Design</h2>
<p>1.Concatenate the different <code>discourse_id</code> texts in the same essay and predict the token probabilities of the tags:</p>
<blockquote>
<p>[CLS LEAD] lead discourse [END LEAD] [CLS POSITION] position discourse [END POSITION] …</p>
</blockquote>
<p>2.Predict token probabilities of <code>[CLS]</code> and <code>[END]</code> tags in essay directly:</p>
<blockquote>
<p>essay…. [CLS CLAIM] I think that the face is a natural landform because there is no life on Mars that we have descovered yet [END CLAIM] essay …</p>
</blockquote>
<p>In my case, training the model with whole essays works better than without essays as in feature <code>1</code>, but feature <code>1</code> still adds some diversity.</p>
<p>3.Add classifier tags:</p>
<blockquote>
<p>essay…. [CLS CLAIM] [Ineffective] [Adequate] [Effective] I think that the face is a natural landform because there is no life on Mars that we have descovered yet [END CLAIM] essay …</p>
</blockquote>
<p>This time, the predicted probabilities are taken not from <code>[CLS]</code> and <code>[END]</code>, but from <code>[Ineffective]</code>, <code>[Adequate]</code> and <code>[Effective]</code>. The activation function is <code>Sigmoid</code> rather than <code>Softmax</code>, and the probabilities of these 3 tags are then rescaled to sum to 1. This also adds diversity.</p>
<p>4.Predict all tokens in a span and take a mean value for discourse probabilities.</p>
<h2>Pretraining</h2>
<p>I used <a href="https://www.kaggle.com/competitions/feedback-prize-2021/data" target="_blank">feedback-2021-data</a> for model pretraining; it improved model performance by about 0.003.</p>
<h2>Pseudo labeling</h2>
<p>I spent a lot of time on pseudo labeling, especially when training models with pseudo labels (much more data).<br>
In the early stage I tried pseudo labeling with hard labels, but it did not give any improvement. Finally, soft labels (probabilities) gave some improvement in model performance (the loss function is a <code>soft cross entropy</code>).</p>
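<p>A minimal sketch of a soft cross-entropy loss for training on probability (soft) pseudo labels; the shapes and dummy tensors are just for illustration:</p>
<pre><code>import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, soft_targets):
    """logits: (n, 3), soft_targets: (n, 3) probabilities from the teacher ensemble."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# tiny usage example with dummy tensors
logits = torch.randn(4, 3)
soft_targets = torch.softmax(torch.randn(4, 3), dim=-1)
loss = soft_cross_entropy(logits, soft_targets)
</code></pre>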
<h2>Knowledge Distillation (KD)</h2>
<p>In my case, KD works better than pseudo labeling for model ensembling. I found that the higher the distillation loss weight, the better the CV but the worse the LB. So I use a high distillation loss weight in the early stage of training and a lower weight in later stages; this improved both CV and LB.</p>
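<p>A minimal sketch of the distillation objective with a weight that is high early in training and lower later, following the formula <code>L_total = L_ce + w * L_distillation</code> given in the replies below; the linear schedule itself is an assumption:</p>
<pre><code>import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs):
    # KL divergence between teacher soft labels and student predictions
    log_p = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(log_p, teacher_probs, reduction="batchmean")

def kd_weight(step, total_steps, w_start=1.0, w_end=0.1):
    # hypothetical linear decay: high distillation weight early, lower in later stages
    return w_start + (w_end - w_start) * step / max(total_steps - 1, 1)

def total_loss(student_logits, hard_labels, teacher_probs, step, total_steps):
    l_ce = F.cross_entropy(student_logits, hard_labels)
    return l_ce + kd_weight(step, total_steps) * distillation_loss(student_logits, teacher_probs)
</code></pre>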
<h2>Post process</h2>
<p>I didn't have extra time to explore post process method, so I just used weighted average.</p>
<p>Anyway, going solo in this game is a little bit hard. There are many things left that I have not tried yet.</p>
<p>It's hard to set the weight empirically 😂</p>
<p>Thanks! And congrats on your gold!<br>
<code>L_total = L_ce + w * L_distillation</code>, where <code>w</code> is the distillation loss weight</p>
<p><code>the higher the distillation loss weight, the better the CV but the worse the LB</code><br>
A related article I found is <a href="https://arxiv.org/pdf/2102.00650.pdf" target="_blank">here</a>. Are there any other possible explanations?</p>
<p>Thanks <a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a> ! It's a powerful method worth to try it!</p>
<p>Thanks! Congrats on your gold too~</p>
<p><a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> Thanks! I also learned a lot from your great work! Congrats on your 3rd place and prize!</p>
<p><a href="https://www.kaggle.com/yasso1" target="_blank">@yasso1</a> you can refer to this post <a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/discussion/322832" target="_blank">here</a></p> |
Flavours of Physics: Finding τ → μμμ | (not) a winning solution: discussion on features other than mass | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Flavours of Physics: Finding τ → μμμ
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I have read all the top teams' solutions here; congratulations to all of them. After reading these, I find that most of them use the tau invariant mass, while other interesting features are not mentioned because the tau invariant mass is such a strong feature. No matter whether using the invariant mass is good or not, it might be interesting to check out some other features based on kinematic and selection variables, so I have written a (not) winning solution blog discussing some features and models, and put my code on GitHub.</p>
<p>blog link for the team "Hi from CMS": <a href="https://no2147483647.wordpress.com/2015/10/26/yet-another-not-winning-solution-kaggle-flavours-of-physics-for-finding-τ-→-3μ/">https://no2147483647.wordpress.com/2015/10/26/yet-another-not-winning-solution-kaggle-flavours-of-physics-for-finding-%CF%84-%E2%86%92-3%CE%BC/</a></p>
<p>github link: <a href="https://github.com/phunterlau/kaggle-tau-flavour">https://github.com/phunterlau/kaggle-tau-flavour</a></p> |
Flavours of Physics: Finding τ → μμμ | First place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Flavours of Physics: Finding τ → μμμ
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone!</p>
<p>Finally, here is our solution. We should have posted it a bit earlier, but I hope everyone will still find something interesting in it anyway. You will find the model explanation in the .pdf and the code in the .ipynb files. We tried to keep the solution concise for the sake of convenience.</p>
<p><a href="https://github.com/aguschin/flavours-of-physics">https://github.com/aguschin/flavours-of-physics</a></p> |
Flavours of Physics: Finding τ → μμμ | Second best solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Flavours of Physics: Finding τ → μμμ
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Source code and brief description of the solution ranked second on the Private Leaderboard can be found at <a href="https://github.com/gramolin/flavours-of-physics">https://github.com/gramolin/flavours-of-physics</a></p>
<p>I'd like to congratulate the Go Polar Bears team and Josef Slavicek who also achieved very high scores. I'm very grateful to the Organizers for the opportunity to work on an exciting problem and to analyze real data from LHCb. Here are some thoughts regarding the technical details of the competition which I'd like to share:</p>
<ol>
<li>I realize that it was difficult to perfectly combine real and simulated data and make them indistinguishable from one another. By removing real background events from the region of invariant masses around the tau mass you, of course, made sure that there are no potential decays of tau to three muons among them. At the same time, you introduced a serious vulnerability, because now mass can be used to perfectly discriminate between signal and background.</li>
<li>It was probably a bad idea to provide the agreement and correlation datasets to participants. Instead, I think it would be enough to just let them check the KS and CvM scores while making submissions (limited number of times). This would ensure that they don't overuse the tests.</li>
<li>For me personally, it was sad that you didn't provide azimuthal (phi) angles of the particles. This limited our ability to use kinematics to analyze events. At the same time, it didn't prevent us to calculate the mass of the mother particle anyway.</li>
</ol>
<p>Despite all this, please don't stop your efforts to bring particle physicists and machine learning people together. We look forward to the next Kaggle competition from CERN!</p>
<p>[quote=phunter]</p>
<p>Just a quick question, I see the invariant mass of the mother particle is calculated. How does the model with invariant mass pass the CvM test? Thanks.</p>
<p>[/quote]</p>
<p>Vicens Gaitan performed a nice analysis of the mass gap vulnerability: <a href="https://www.kaggle.com/vicensgaitan/flavours-of-physics/clipping-spreading">https://www.kaggle.com/vicensgaitan/flavours-of-physics/clipping-spreading</a></p>
<p>I think that here we have a similar situation. I combined a strong (kinematic) and a weak (geometric) classifiers to pass the correlation test. In addition, I used the XGBoost parameter scale_pos_weight=5 which, I guess, is somewhat equivalent to raising the predicted probabilities to some power.</p>
<p>Though the idea of combining a geometric and a kinematic classifiers can actually be useful, it's impossible to fairly test it with the datasets provided. I wish we didn't have that bad mass gap in the background (which probably could be filled with simulated events)...</p> |
Flavours of Physics: Finding τ → μμμ | Third place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Flavours of Physics: Finding τ → μμμ
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello,</p>
<p>First, please excuse my bad English. Second, please excuse my brevity - I really didn't expect that I would place so high, so I have a lot of work to complete my solution description, which is required to be published today (October 14th).
I wish to thank CERN and the organizers for this exciting competition. I wish to congratulate the winning team 'Go Polar Bears' and Alexander Gramolin, who placed second - we had a great fight in the final hours of the competition and he managed to beat me :-). </p>
<p>Please excuse the poor quality of my source code. I really didn't expect such a good result, so I didn't expect that I would be asked to publish my code, and there is not enough time to clean it up. So, here it is (a crazy mixture of three programming languages):
<a href="http://jslavicek.cz/third_place.zip">third_place.zip</a>. Installation instructions are in the INSTALL.txt file.</p>
<p>I'm going to write a description of my solution and hope to publish it on this forum by midnight UTC.</p>
<p>Edit: description of my model is now packed in the third_place.zip file.</p>
<p>Edit2: I simplified my solution to the level where it can be published as <a href="https://www.kaggle.com/josefslavicek/flavours-of-physics/simplified-version-of-my-solution">script</a>. It scores 0.999594 as a "Post-Deadline entry" on private leaderboard (~third place).</p>
<p>[quote=Alexander Gramolin;96133]
Unfortunately, I have serious doubts that the winning models (including my own) can be useful for real physics... As far as I understand now, any usage of the reconstructed invariant mass as a feature (what I did) or usage the agreement/correlation data sets while training the classifier (you did it, right?) makes the model physically meaningless.
[/quote]</p>
<p>Yes, my model uses the reconstructed mass. I added it because my xgbtree, in some configuration, accidentally started to show an impressive CV score of 0.9995 and at the same time started to fail desperately on the correlation test. I found a way to combine the results from several models to make it pass the correlation test (not knowing that doing so is wrong from a physics viewpoint). I deduced that the poor results on the correlation test meant that the xgbtree had simply found a way to internally reconstruct the mass and use it to make signal predictions. So I trained a linear regression to estimate the mass (I didn't know that the mass can simply be computed) and added this estimated mass as another input to my xgbtree. It improved the CV score to 0.9997.</p>
<p>On the other hand, I used the agreement/correlation data only to determine whether my solution would pass or fail the tests - which, of course, is also a way to leak information from those files into my model.</p>
Flavours of Physics: Finding τ → μμμ | sharing code - UGBC | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Flavours of Physics: Finding τ → μμμ
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Anton, Andre, rakhlin, phunter, Kumaresh, thank you very much for the discussion in this thread and good luck in your next competitions.</p>
<p>I have uploaded simplified (without evaluating) version of my script: <a href="https://www.kaggle.com/sionek/flavours-of-physics/ugbc-gs2">Kaggle-UGBC-GS-simplified</a></p>
<p>*original: 80% training.csv, fl_coefficient=5, n_estimators=750, LB=0.992031</p>
<p>*simplified: 100% training.csv, fl_coefficient=15, n_estimators=550, LB=0.992632</p>
<p>Feel free to change the parameters or add new features to go up to TOP 10.</p>
<p>I (like probably many others) spent a lot of time doing wrong things. </p>
<p>In my case, I thought that the aim was to train a model on one decay, check the agreement on another decay and, if it passed, conclude that the model is (probably) generally good at finding any other decay. </p>
<p>Now I see that, to let me discover such a model in my way, the organizers should have given us not one but two or more test files: one (or more) to check agreement, and one to find the decay. But they mixed them all into one.
My final model (nothing special, just using UGBC) was created when I surrendered.</p>
<p>Next time I will read the rules until I feel pain between my ears ;)</p>
<p>I do not know how many Kagglers knew that they had to create a model that should detect two different decays in one experiment. I'm not sure if all the boys and girls from CERN were aware of it. </p>
<p>Probably that is why undertraining - spoiling a model that is good at detecting one isolated decay - gives better results than improving it further.</p>
<p>Anton, </p>
<p>UGBC does something good with the background ;), so I thought that the "normal" part of the training set, when used to evaluate the existence of the other phenomenon, shows "too good" results because of these zeros (background). That is why I decided to remove some 0s. Looking at the <a href="https://www.kaggle.com/gramolin/flavours-of-physics/histograms">histograms</a> I made a cut at IPSig=5 and VertexChi2=4, taking the regions of high signal-to-background ratio (an analogy to evaluation based on the not-allowed min_ANNmuon>0.4). Data selected in this manner work really well for AUC evaluation, so tuning was performed manually in ~10 steps, going up and down with 2-3 parameters of UGBC and looking at this evaluation value.</p>
<p>Thanks to Andrey Ustyuzhanin, Alex Rogozhnikov et al., the authors/maintainers of hep_ml, for their effort.
I have spent two evenings installing hep_ml, and three evenings running it, but I have obtained <strong>0.991099</strong> using just one model - <a href="http://grzegorz.sionkowski.pl/zip/justUGBC.zip">UGradientBoostingClassifier</a> <- zip file of my solution. </p>
<p>Because I know neither R nor Python, the solution is written partly in R, partly in Python 2.7.</p>
<p>New things regarding scripts known before the end of the competition:</p>
<p>-- new feature (Python file)
df['NEW_FD_LT']=df['FlightDistance']/df['LifeTime']
of the dimension of "velocity" ;)</p>
<p>-- removing some specific rows from the evaluation file makes it possible to obtain
an evaluation close to that on the LB or to the (not allowed) min_ANNmuon>0.4 evaluation (R file). </p>
<p>No mass "reconstruction".</p> |
Flight Quest 2: Flight Optimization, Milestone Phase | Milestone Winner Methodology | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Flight Quest 2: Flight Optimization, Milestone Phase
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Summary</strong></p>
<p style="text-align: justify">The flight model and the cost function of the Flight Simulator were simplified. Optimal altitude and airspeed for current aircraft weight were computed for cruise, ascent, and descent. An automatic agent was constructed which minimizes the cost function selecting one of three actions for each time moment.</p>
<p><strong>Modeling Techniques</strong></p>
<p style="text-align: justify">The flight cost consists of fuel cost and delay cost. The latter consists of a linear term (crew and other hourly costs) and a nonlinear term (passenger dissatisfaction costs). The nonlinear term is small relative to the linear one and was neglected. The cost equation was:</p>
<p style="text-align: center">flight_cost = fuel_consumed * fuel_cost + flight_duration * delay_cost</p>
<p style="text-align: justify">Please note that this equation does not include the actual delay of arrival. Hence, all parameters influencing this delay were neglected. In particular, both ground and traffic conditions at a destination airport were not taken into account.</p>
<p>This solution uses several simplifications:</p>
<ul>
<li><span style="line-height: 1.4">Only crew and other hourly costs are taken into account. Passenger dissatisfaction costs are neglected.</span></li>
<li><span style="line-height: 1.4">Ground conditions and traffic conditions at a destination airport are ignored. The arrival model is disabled in the Flight Simulator.</span></li>
<li><span style="line-height: 1.4">Wind direction and magnitude are assumed to be constant and correspond to the cutoff time. </span></li>
<li><span style="line-height: 1.4">Airplane goes in a straight line to a destination. Only altitude and airspeed are changing.</span></li>
</ul>
<p style="text-align: justify">The second principal optimization concerns flight dynamics. Each time moment an aircraft has one of the three states: cruise, ascend, or descent. Each of the states has its own fuel expense depending on aircraft weight, airspeed and altitude. Optimal parameters for each state were found minimizing the cost function. Cruise state is usually optimal at maximal altitude and some specific airspeed. Descent fuel expense does not depend on airspeed. Descending with maximal airspeed is obviously the most efficient state. Ascent is always the least efficient.</p>
<p>The automatic agent minimizes the cost function selecting one of the three actions for each time moment:</p>
<ul>
<li><span style="line-height: 1.4">change altitude and cruise straight for 50 miles;</span></li>
<li><span style="line-height: 1.4">2000 feet ascent and subsequent descent;</span></li>
<li><span style="line-height: 1.4">descend to a destination (10000 feet on an airport border).</span></li>
</ul>
<p style="text-align: justify">At present the optimal flight plans do not look to be natural. In reality a cycle of ascent with subsequent descent can not be more efficient than cruise at constant altitude. Hopefully, the next version of the Flight Simulator will be more realistic.</p> |
Foursquare - Location Matching | 11th place code and solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank my team members, the other participants, and Foursquare.<br>
Except for the data leak, the problem design was very interesting and it was a good competition with many possible approaches.</p>
<p>Our team may have been too trusting to suspect the data leak (because the host explicitly declared that there were no overlaps).</p>
<p>ATTENTION: These notes were written before the data leak was known, so some points may seem silly now, but we will share them anyway.</p>
<h3>Links</h3>
<ul>
<li>Training(XGB) notebook: <a href="https://www.kaggle.com/code/iiyamaiiyama/fs-training-4-ensemble-0705/notebook?scriptVersionId=100082426" target="_blank">https://www.kaggle.com/code/iiyamaiiyama/fs-training-4-ensemble-0705/notebook?scriptVersionId=100082426</a></li>
<li>Inference notebook: <a href="https://www.kaggle.com/code/iiyamaiiyama/fs-inference-no54-spatial-no-country-07045/notebook?scriptVersionId=100093618" target="_blank">https://www.kaggle.com/code/iiyamaiiyama/fs-inference-no54-spatial-no-country-07045/notebook?scriptVersionId=100093618</a> </li>
<li>NN models traing code (github): <a href="https://github.com/heartkilla/mcd-kaggle-fsq" target="_blank">https://github.com/heartkilla/mcd-kaggle-fsq</a></li>
</ul>
<h3>Summary</h3>
<ul>
<li>Metric learning multi-input(concatenated text -> BERT + lat/lon) NN model.</li>
<li>Generate candidate by NN model embeddings and spatial nearest neighbours.</li>
<li>Create 2nd stage binary classification XGB models for each candidate rank.</li>
</ul>
<h3>Validation strategy</h3>
<ul>
<li>In the beginning we split the training data into 4 folds and trained BERT (train:valid = 3:1). Then we picked one fold (validation) and split it further into 4 folds for 2nd stage XGB model training.</li>
<li>After a while, we found that we could get much better results if we trained BERT on all the data. In that case we train XGB with leaked data, so it may perform badly on the test data. Maybe the quality of the embeddings is more important than training a leak-free XGB model.</li>
</ul>
<h3>Preprocessing</h3>
<ul>
<li>Fill <code>NaN</code> texts using the nearest (in longitude and latitude) 5 points for the BERT models (a sketch follows this list).</li>
</ul>
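<p>A minimal sketch of this kind of NaN filling with a BallTree over lat/lon (haversine metric); the column names and the way neighbor texts are combined are assumptions:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.neighbors import BallTree

def fill_na_with_neighbors(df: pd.DataFrame, col: str, k: int = 5) -> pd.Series:
    """Fill missing text in `col` by joining the values of the k spatially nearest rows."""
    coords = np.radians(df[["latitude", "longitude"]].values)
    tree = BallTree(coords, metric="haversine")
    _, idx = tree.query(coords, k=k + 1)      # +1 because the point itself is returned
    filled = df[col].copy()
    for i in np.where(df[col].isna())[0]:
        neighbor_texts = df[col].iloc[idx[i, 1:]].dropna()
        filled.iloc[i] = " ".join(neighbor_texts.tolist())
    return filled
</code></pre>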
<h3>1st stage: Create candidates</h3>
<ul>
<li>BERT embeding candidates<ul>
<li>Architechture(BERT + lat/lon)<ul>
<li>Concatenate the BERT model output with normalized lat/lon values and connect it to an FC layer (320).</li>
<li>FC layer(320) embeddings were used for similarity.</li>
<li>Loss: ArcMarginProduct</li>
<li>Training<ul>
<li>Gradually increase Margin during training(0.2 to 0.8)</li>
<li>40 epoch training(24-48 hours to train)</li></ul></li></ul></li>
<li>Emsemble<ul>
<li>Concatenate multiple BERT models embeddings<ul>
<li><code>xlm-roBERTa-large</code>, <code>sentence-transformers/LaBSE</code>, <code>sentence-transformers/paraphrase-multilingual-mpnet-base-v2</code>, <code>remBERT</code></li></ul></li></ul></li>
<li>DBA/QE<ul>
<li>Weighted DBA/QE for better embedding.</li></ul></li>
<li>Create candidates<ul>
<li>For each id, 50 candidates were created using cosine similarity (faiss)</li></ul></li></ul></li>
<li>Spatial candidates<ul>
<li>Added candidates from lat/lon using sklearn.neighbors.BallTree.</li>
<li>Hoping to add candidates that were missed by BERT embeddings.</li></ul></li>
</ul>
<h3>2nd stage: GBDT models for each candidate rank</h3>
<ul>
<li>Many public kernels train a GBDT model after taking N (e.g. 20) neighbors of each point by BERT embeddings and creating a dataset of size len(test_df)*N. Instead, we create one XGB model per candidate rank. We first take the first-rank neighbor (excluding the point itself) of each point and create a dataset of size len(test_df) to train one XGB model. Similarly, we create another XGB model from the second-rank neighbors, and so on. Finally we created 50 models. This is a very efficient approach in terms of memory, training time and accuracy.</li>
<li>Sample weights were set according to the number of POI matches.</li>
<li>ForestInference was used during inference. It was very fast and helpful.</li>
</ul>
<h3>Post process(dijkstra)</h3>
<ul>
<li>To generate the set of matched points, we defined a graph from the output of the XGB models and then used Dijkstra's algorithm. We add an edge x -> y for each point x and its candidate y and set the weight to 1.0 - (prediction of the XGB model). We then generate the set of matched points by collecting all points whose distance from the focus point is smaller than a threshold. This process reduces the number of false negatives (a sketch follows this list).</li>
<li>Before executing Dijkstra's algorithm, we make the graph undirected by averaging the weights of the forward edge x -> y and the inverse edge y -> x.</li>
<li>We tried various binary operations (sum, prod, max, etc.) on the weights to change the distance on the graph. The usual summation worked best for us.</li>
</ul>
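<p>A minimal sketch of this postprocessing with scipy's sparse Dijkstra; the threshold value, the id-to-index mapping and the simple symmetrization are assumptions:</p>
<pre><code>import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import dijkstra

def build_match_graph(pair_idx1, pair_idx2, match_prob, n_points):
    """Edge weight = 1 - predicted match probability; symmetrized by averaging
    (pairs present in only one direction get half weight in this simplified sketch)."""
    w = 1.0 - match_prob
    g = sp.coo_matrix((w, (pair_idx1, pair_idx2)), shape=(n_points, n_points)).tocsr()
    return (g + g.T) / 2.0

def matched_ids(graph, focus, threshold=0.5):
    """All points whose graph distance from `focus` is below the threshold."""
    dist = dijkstra(graph, directed=False, indices=focus, limit=threshold)
    return np.where(dist <= threshold)[0]
</code></pre>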
<h3>LB vs CV</h3>
<ul>
<li>As we discussed before, our approach is heavily affected by the leak. Our best CV is 0.997, but we can see an almost linear CV-LB relationship with this leaked pipeline. This approach caused a problem: our score stopped increasing when the CV score got close to 1.0.</li>
<li>LB vs CV plot<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F194365%2F8a61a2976e19f08c444f0a11e881e856%2F05926d0c5dca1180991ecc1ad152ad17.png?generation=1657291410405512&alt=media" alt=""></li>
</ul>
<h3>Hardware</h3>
<ul>
<li>Our team did not have sufficient GPU, and Google Colab(pro) was not powerful enough for Bert training. So we employed Vast.ai, which is relatively inexpensive compared to GCE or AWS, and allowed us to use good GPU.</li>
<li>For XGB training, Kaggle Kernel was enough. We can finish XGB training within one hour.</li>
</ul>
<h3>Score Timeline</h3>
<ul>
<li>0.922 baseline: bert-base-multilingual-cased</li>
<li>0.928 +ensemble xlm-roberta-base(2 ensemble total)</li>
<li>0.931 +add XGB feature</li>
<li>0.940 +fill na</li>
<li>0.945 +add bert models(4 ensemble total)</li>
<li>0.947 +dijkstra postprocessing</li>
</ul>
<h3>Not worked for us</h3>
<ul>
<li>shuffle text augmentation</li>
<li>reverse geocoding text augmentation</li>
<li>text unidecode -> deberta-large-v3</li>
</ul>
<h3>Postscript: When should we have noticed the leak?</h3>
<p>In retrospect, we had several opportunities to notice the data leak.</p>
<ul>
<li>We noticed that there were overlapping items from <code>train.csv</code> in the sample <code>test.csv</code>.</li>
<li>BERT training on all data scored much higher than the single-fold model.</li>
<li>More aggressive XGBoost parameters gave better scores.</li>
</ul> |
Foursquare - Location Matching | 12th Place Short Summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to Kaggle and Foursquare for hosting this competition. It was a fun and interesting challenge. Our team was not aware of the leak, but our solution is most likely affected by it. Our final approach is a weighted average of XGBoost, CatBoost, and a language model-based NN. This is a quick summary of some of the things that our team did differently from the solutions shared by other competitors.</p>
<h2>Candidate Generation</h2>
<p><a href="https://www.kaggle.com/sakami" target="_blank">Kosuke’s</a> candidate generation is one of the main components of our solution. In the end, we achieved a max IoU of 0.97675 using 25 candidates per id. </p>
<ul>
<li>We extracted candidates in three different ways:<ol>
<li>latlon top-k</li>
<li>name & latlon embedding top-k</li>
<li>category & latlon embedding top-k</li></ol></li>
<li>name & latlon and category & latlon are simple concatenations of their embeddings.<ol>
<li>We used the <a href="https://tfhub.dev/google/universal-sentence-encoder-multilingual/3" target="_blank">Universal Sentence Encoder</a> to create embeddings.</li>
<li>There was a lot of data with the same name all over the world, e.g. Starbucks. The embedding is concatenated to extract places with the same name that are close in distance. This took advantage of the fact that the order of magnitude of the latlon L2 distance is much smaller than the order of magnitude of the name embedding L2 distance.</li>
<li>The categories embedding was the average embedding for each category.</li></ol></li>
<li>We picked 15 latlon neighbors, 15 name & latlon neighbors, and 10 latlon & category neighbors. Then we picked the top-k samples with the smallest distance among them. Note that we extract more than the pre-defined minimum number of candidates for each id.</li>
</ul>
<h2>Postprocessing</h2>
<p>Postprocessing gave us a boost on the LB. We created a soft adjacency matrix from the graph where nodes are places and edges are match probabilities. First, we found all 1-hop paths that exist using edges with a predicted probability greater than 0.5 (this is equivalent to the postprocessing used in most public notebooks). Then, we found all 2-hop paths that exist using edges with a predicted probability greater than 0.9. We repeated this for 3-, 4-, and 5-hop paths and thresholds of 0.95, 0.998, and 0.999, respectively. In the end, we predicted that two places are a match if there are at least one of these n-hop paths between the places. </p> |
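<p>A minimal sketch of this multi-hop postprocessing with sparse boolean matrix products; the per-hop thresholds follow the description above, but the implementation details are assumptions:</p>
<pre><code>import numpy as np
import scipy.sparse as sp

def multi_hop_matches(prob_matrix, thresholds=(0.5, 0.9, 0.95, 0.998, 0.999)):
    """prob_matrix: sparse (n, n) matrix of predicted match probabilities.
    Entry (i, j) of the result is True if, for some hop count h, an h-hop path
    exists between i and j using only edges above that hop count's threshold."""
    matched = sp.csr_matrix(prob_matrix.shape, dtype=np.int32)
    for hops, thr in enumerate(thresholds, start=1):
        adj = (prob_matrix > thr).astype(np.int32)   # edges usable at this hop count
        paths = adj.copy()
        for _ in range(hops - 1):                    # walks with exactly `hops` edges
            paths = ((paths @ adj) > 0).astype(np.int32)
        matched = ((matched + paths) > 0).astype(np.int32)
    return matched.astype(bool)
</code></pre>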
Foursquare - Location Matching | 13th place solution (GNN) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for sharing your solution!<br>
The leak is very sad 😭. I hope a second Foursquare competition will be held.</p>
<p>Thank you. It is nice to be admired by a strong Grandmaster like you!</p>
<p>I am very interested in your solution too.<br>
Do you plan to share your solution?<br>
Can you give me a brief explanation of your solution?</p>
<p>I'm thinking of sharing the inference code, but I'd like to organize it a bit before I share it with you.<br>
The current code is very dirty.😂</p>
<p>Thanks for the question.</p>
<p>The 2-hop subgraph is created by treating the graph as undirected.<br>
The implementation uses pytorch_geometric and can be written as follows:<br>
<a href="https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.k_hop_subgraph" target="_blank">https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.k_hop_subgraph</a></p>
<pre><code>from torch_geometric.utils import k_hop_subgraph
nodes = k_hop_subgraph(node_id, 2, G.edge_index, flow='target_to_source')[0]
subgraph = G.subgraph(nodes)
</code></pre>
<p>Sorry if this didn't answer your question.</p>
<p>First of all, this competition would have been better without the leak.<br>
It is very sad that because of the leak, we can no longer distinguish between a good solution and a overfitting solution.</p>
<h1>Summary</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6958652%2F8709eae7598a8c4aa5f131aff0f0d128%2Fsolution.jpg?generation=1657372988563552&alt=media" alt=""></p>
<p>My solution consists of four stages</p>
<ul>
<li>Candidate generation<ul>
<li>1st stage: Create candidates</li>
<li>2nd stage: Filtering by LightGBM</li></ul></li>
<li>Matching<ul>
<li>3rd stage: match prediction by xlm-roberta-base and mdeberta-base-v3</li></ul></li>
<li>PostProcessing<ul>
<li>4th stage: Node Classification by GNN</li></ul></li>
</ul>
<h1>Cross validation strategy</h1>
<p>GroupKFold(n_splits=2, group='point_of_interest')<br>
The CV score of my solution is 0.920</p>
<h1>1st stage</h1>
<ul>
<li><p>Generates 60 candidates per id</p>
<ul>
<li>30 candidate neighbors per id using cosine similarity of TF-IDF text embeddings<ul>
<li>text: name, categories, address, city, state concatenated into one sentence</li></ul></li>
<li>30 candidate neighbors per id by haversine distance</li></ul></li>
</ul>
<h1>2nd stage</h1>
<ul>
<li>Reduce to 3.7M pairs with high match probability using LightGBM</li>
<li>Features<ul>
<li>Missing value or not(name, cat)</li>
<li>jaro(name, categories, address, city, state, zip, url, phone)</li>
<li>leven(name, categories, address, city, state, zip, url, phone)</li>
<li>haversine</li>
<li>cos sim(tfidf embedding)</li>
<li>mean top_k haversine, tfidf (k=5, 10, 15)</li></ul></li>
<li>Using ForestInference</li>
<li>Max IoU: 0.971</li>
</ul>
<h1>3rd stage</h1>
<ul>
<li>Predicts matches using a language model, like DITTO</li>
<li>model: xlm-roberta-base and mdeberta-base-v3</li>
<li>Input<ul>
<li>text: name_1 + [SEP] + name_2 + [COL] + categories_1 + [SEP] + categories_2 + [COL] + address_1 + [SEP] + address_2 + [COL] + city_1 + [SEP] + city_2 + [COL] + state_1 + [SEP] + state_2 + [COL] + zip_1 + [SEP] + zip_2</li>
<li>numerical_feature</li>
<li>haversine</li>
<li>mean top_k haversine(k=5, 10, 15)</li></ul></li>
<li>Training method<ul>
<li>multitask learning: match and country (+0.003)</li>
<li>augmentation</li></ul></li>
</ul>
<pre><code>import nlpaug.augmenter.word as naw  # word-level augmenters live in this submodule

aug_list = [
    naw.RandomWordAug(action='swap', aug_p=0.1),    # randomly swap adjacent words
    naw.RandomWordAug(action='delete', aug_p=0.1),  # randomly delete words
    naw.SplitAug(aug_p=0.1)                         # randomly split words in two
]
</code></pre>
<ul>
<li>flip id_1, id_2</li>
<li>TTA: flip id_1, id_2</li>
</ul>
<h1>4th stage</h1>
<p>This stage is the most important part of my solution.<br>
By using GNN, both CV and LB are improved by about 0.02x. </p>
<p>Predicts the matching id for id_1 using GNN.</p>
<ol>
<li>Create a 2-hop-subgraph for each id_1 (max IoU: 0.993)<br>
<strong>Example:</strong> suppose the following table is given and we predict the ids matching A.</li>
</ol>
<table>
<thead>
<tr>
<th>id_1</th>
<th>id_2</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>B</td>
</tr>
<tr>
<td>B</td>
<td>A</td>
</tr>
<tr>
<td>B</td>
<td>C</td>
</tr>
<tr>
<td>B</td>
<td>D</td>
</tr>
<tr>
<td>C</td>
<td>D</td>
</tr>
<tr>
<td>D</td>
<td>B</td>
</tr>
<tr>
<td>E</td>
<td>A</td>
</tr>
<tr>
<td>E</td>
<td>F</td>
</tr>
<tr>
<td>F</td>
<td>E</td>
</tr>
</tbody>
</table>
<p>In this case, the 2-hop-subgraph for A is as follows(Think in terms of undirected graphs)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6958652%2F26622119da7e573aa087ba1e4e942ff6%2FScreenshot%20from%202022-07-09%2021-30-51.png?generation=1657371037868405&alt=media" alt=""></p>
<ol>
<li>Predict whether each node matches id_1</li>
<li>Nodes that exceed the threshold are set as matches for id_1</li>
</ol>
<h2>Details</h2>
<ul>
<li>Model<br>
This model is based on the top solution of the Japanese competition platform atmaCup.</li>
</ul>
<pre><code>import torch.nn as nn
import torch.nn.functional as F
from torch.nn import ModuleList
from torch_geometric.nn import BatchNorm, PNAConv

class SimpleGCN(nn.Module):
    def __init__(self,
                 num_node_features: int,
                 num_edge_features: int,
                 deg):
        super(SimpleGCN, self).__init__()
        aggregators = ['mean', 'min', 'max', 'std']
        scalers = ['identity', 'amplification', 'attenuation']
        towers = 4
        pre_layer_num = 2
        post_layer_num = 2
        divide_input = False
        self.convs = ModuleList()
        self.batch_norms = ModuleList()
        # first PNA layer: node features -> 32 channels
        conv = PNAConv(in_channels=num_node_features, out_channels=32,
                       aggregators=aggregators, scalers=scalers, deg=deg,
                       edge_dim=num_edge_features, towers=towers,
                       pre_layers=pre_layer_num, post_layers=post_layer_num,
                       divide_input=divide_input)
        self.batch_norms.append(BatchNorm(32))
        self.convs.append(conv)
        # three more PNA layers: 32 -> 64 -> 128 -> 256 channels
        for j in [32, 64, 128]:
            conv = PNAConv(in_channels=j, out_channels=j * 2,
                           aggregators=aggregators, scalers=scalers, deg=deg,
                           edge_dim=num_edge_features, towers=towers,
                           pre_layers=pre_layer_num, post_layers=post_layer_num,
                           divide_input=divide_input)
            self.convs.append(conv)
            self.batch_norms.append(BatchNorm(j * 2))
        self.dropout = nn.Dropout(0.2)
        self.linear = nn.Linear(j * 2, 1)  # per-node logit: does this node match id_1?

    def forward(self, data):
        x = data.x
        edge_index = data.edge_index
        edge_attr = data.edge_attr
        for conv, batch_norm in zip(self.convs, self.batch_norms):
            x = F.relu(batch_norm(conv(x, edge_index, edge_attr)))
            x = self.dropout(x)
        x = self.linear(x).flatten()
        return x
</code></pre>
<ul>
<li>node features<ul>
<li>marker (I named it): 1 if the node corresponds to an id_1, 0 if it is a candidate.<br>
The presence of this feature allows us to treat the task as node classification rather than link prediction.</li>
<li>Missing value or not(name, categories, address, city, state, zip, url, phone)</li></ul></li>
<li>edge features<ul>
<li>mdeberta pred</li>
<li>xlm-roberta pred</li>
<li>lgbm pred</li>
<li>jaro (address, city, state, zip, url, phone, categories)</li></ul></li>
<li>loss function<ul>
<li>IouLoss for each graph and the binary cross-entropy of the entire batch are the loss functions</li></ul></li>
</ul>
<pre><code>import torch
from torch_geometric.utils import degree

def iou_loss(logits, data):
    # per-graph soft IoU between sigmoid predictions and binary targets
    sizes = degree(data.batch, dtype=torch.long).tolist()
    logit_list = logits.sigmoid().split(sizes)
    target_list = data.y.split(sizes)
    loss = 0
    for y_pred, y_target, group_size in zip(logit_list, target_list, data.group_size):
        intersection = (y_pred * y_target).sum()
        total = (y_pred + y_target).sum() + (group_size - y_target.sum())
        union = total - intersection
        loss += 1 - (intersection / union)
    return loss / data.num_graphs

# total objective (schematic): loss = iou_loss + 0.1 * BCELoss
</code></pre>
<h2>Advantages of GNN</h2>
<ul>
<li>Creating a 2-hop-subgraph greatly improved maxiou.</li>
<li>Iou loss made it possible to optimize the competition metric directly.</li>
<li>Fast inference</li>
</ul>
<h1>Public Score Timeline</h1>
<ol>
<li>20 candidates per id and mdeberta and lgbm stacking: 0.907</li>
<li>60 candidates per id and xlm-roberta-base+mdeberta and lgbm stacking : 0.924</li>
<li>60 candidates per id and xlm-roberta-base + mdeberta and GNN Post Processing: 0.946</li>
</ol>
<p><a href="https://www.kaggle.com/tatamikenn" target="_blank">@tatamikenn</a> </p>
<blockquote>
<p>how to deal with the other edges?</p>
<p>Since the author says he/she treats the target task as node classification, I think we don't have to consider the relation between non-directly-connected nodes.</p>
<blockquote>
<p>The presence of this feature allows us to solve for node classification rather than link prediction.</p>
</blockquote>
</blockquote>
<p>Thank you for supplementing my explanation.</p>
<blockquote>
<p>I'm also interested in how the whole implementation (especially on GNN part) is done.<br>
I don't care what the code looks like, the code would be very helpful.</p>
<p>Thanks for sharing your solution.<br>
I can't believe this all is done by only one person!</p>
</blockquote>
<p>Thank you for complimenting my solution!<br>
I will release it to the public only after modifying the code.<br>
However, since I am not sure when I will be able to release the code, I have shared it privately with you.</p>
<p>I am glad that you are interested in my solution.</p>
<p><a href="https://www.kaggle.com/hanejiyuto" target="_blank">@hanejiyuto</a> <br>
Thank you for your question.<br>
I have implemented GNN using <a href="https://pytorch-geometric.readthedocs.io/en/latest/index.html" target="_blank">pytorch_geometric</a> and <a href="https://networkx.org/" target="_blank">networkx</a>.</p>
<p>Follow this procedure to create the inputs to the GNN.</p>
<h3>1. Create a large graph in the following way.</h3>
<p>df_pair</p>
<table>
<thead>
<tr>
<th>id_1</th>
<th>id_2</th>
<th>mdeberta</th>
<th>xlm-roberta</th>
<th>lgbm</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>B</td>
<td>0.9</td>
<td>0.9</td>
<td>0.4</td>
</tr>
<tr>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
</tr>
<tr>
<td>Z</td>
<td>A</td>
<td>0.2</td>
<td>0.5</td>
<td>0.6</td>
</tr>
</tbody>
</table>
<pre><code>import networkx as nx
from torch_geometric.utils.convert import from_networkx
def build_graph(df_pair, weight_att_cols):
nx_G = nx.from_pandas_edgelist(
df_pair,
source='id_1',
target='id_2',
edge_attr=weight_att_cols,
create_using=nx.DiGraph()
)
pyg_G = from_networkx(nx_G, all, all)
return pyg_G
</code></pre>
<h3>2. Partition a large graph into connected components</h3>
<h3>3. Create a 2-hop-subgraph for each id_1.</h3>
<pre><code>from torch_geometric.utils import k_hop_subgraph
nodes = k_hop_subgraph(node_id, 2, pyg_G.edge_index, flow='target_to_source')[0]
subgraph = pyg_G.subgraph(nodes)
</code></pre>
<h3>4. Create a marker feature</h3>
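<p>A minimal sketch of how the marker feature from step 4 could be attached to the subgraph built in step 3. This is my assumption of the implementation, not the author's code.</p>
<pre><code>import torch

# nodes, subgraph come from step 3; node_id is the integer index of id_1 in the large graph
marker = torch.zeros(subgraph.num_nodes, 1)
anchor_pos = (nodes == node_id).nonzero(as_tuple=True)[0]  # local position of id_1 inside the subgraph
marker[anchor_pos] = 1.0

# prepend the marker column to the remaining node features (missing-value flags, etc.)
subgraph.x = marker if subgraph.x is None else torch.cat([marker, subgraph.x], dim=1)
</code></pre>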
<p>Detailed inference and training code will be posted later.</p>
<p>I have decided not to share the code because the data is no longer available.</p> |
Foursquare - Location Matching | 15th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Although I was not able to win the gold medal this time, I would like to publish my solution.<br>
Please forgive me if my English is not correct because I used an automatic translation.<br>
My solution consists of Blocking, Matching, and Postprocessing.</p>
<ul>
<li><p>Blocking</p>
<ul>
<li>KNN in L2 norm on a matrix combining the following three feature blocks (run three times with different weights, Lat/Lng : Name : Categories = 3 : 10 : 1, 100 : 10 : 1, and 3000 : 10 : 1)<ul>
<li>Latitude and longitude</li>
<li>Name (TFIDF)<ul>
<li>After preprocessing by unidecode, vectorized by<ul>
<li>TfidfVectorizer(analyzer='char_wb', ngram_range=(3, 4), sublinear_tf=True)</li></ul></li>
<li>'char_wb' was better than 'word' with higher tolerance for misspellings.</li></ul></li>
<li>Categories (MDS)<ul>
<li>Using the matching rate between categories in the train data as distance, I created 4-dimensional embeddings using <a href="https://en.wikipedia.org/wiki/Multidimensional_scaling" target="_blank">Multi Dimensional Scaling (MDS)</a>.</li></ul></li></ul></li>
<li>Exact matches for attributes (name, address, zip, phone (last 4 digits), url (domain name))</li>
<li>Merged the above and aligned the id/near_id pairs in one direction and removed duplicates.</li>
<li>12 candidates per 1 id (about 7M candidates on test data), max IOU w/o postprocessing is about 0.983.</li>
<li>I also tried embedding based on deep learning, but it did not work well and took too much inference time, so I gave up on it.</li></ul></li>
<li><p>Matching</p>
<ul>
<li>Distance and country tokens were added to the <a href="https://arxiv.org/abs/2004.00584" target="_blank">Ditto</a> based input and trained mdeberta-v3-base using fine tuning. Example inputs are as follows.<ul>
<li>' [D36] [RU] [COL] name [VAL] Аптека [COL] categories [VAL] Pharmacies [SEP] [COL] name [VAL] Аптека [COL] categories [VAL] Pharmacies [COL] address [VAL] Ул Лесная д 8А',<br>
' [D29] [HU] [COL] name [VAL] Kazánház [COL] categories [VAL] Bars [SEP] [COL] name [VAL] Cézár Ház B Épület [COL] categories [VAL] Residential Buildings (Apartments / Condos) [COL] city [VAL] Budapest [COL] zip [VAL] 1132',<br>
' [D16] [JP] [COL] name [VAL] スシロー [COL] categories [VAL] Sushi Restaurants [COL] address [VAL] 寺崎北1 7 4 [COL] city [VAL] 佐倉市 [COL] state [VAL] 千葉県 [COL] phone [VAL] 5830 [COL] zip [VAL] 285-0819 [COL] url [VAL] akindo-sushiro [SEP] [COL] name [VAL] 幕張イオン',</li></ul></li>
<li>Distance tokens are represented as [D0]-[D49]. [Di] means that the haversine distance is in the 2i to 2(i+1) percentile of the entire train data.</li>
<li>The country token is simply the string of country.</li>
<li>Fine tuning is done in 4 epochs, all positive pairs are used, negative pairs are divided into 4 parts and each part is assigned to each epoch.</li>
<li>Trained by the entire train data, no validation. Single model, single fold.</li>
<li>Google Colab (V100, high memory) was used, 40 hours per epoch (160 hours overall).</li>
<li>Learning rate is 2e-5. Batch size is 36.</li>
<li>Max token length is 128, which covers over 99% of train input sequences.</li>
<li>Adversarial Training (FGM) was used (eps = 0.1) .</li></ul></li>
<li><p>Postprocessing</p>
<ul>
<li>I constructed my own algorithm based on the group average method.<ul>
<li>The usual group average method is too time-consuming due to the computational complexity of O(N^2) where N is the length of the test data. But this algorithm works with O(NKC^2), where K is the number of candidates per id, and C is the average cluster size.</li></ul></li>
<li>A rough description of the algorithm is as follows. <ul>
<li>Use a priority queue to process cluster pairs in descending order of predicted score. If the prediction is known for every possible edge between the two clusters, and the value popped from the priority queue equals the average of those predictions, merge the clusters with Union-Find. If some edges have no prediction yet, the pair is temporarily stored in a queue for additional predictions. Once that queue fills up to a certain size, the matching model is run to obtain the missing edge predictions, the average of the predictions over all edges is recomputed, and the pair is pushed back onto the priority queue.</li></ul></li>
<li>The score was expected to be improved by roughly 0.025 compared to the case where postprocessing was not used.</li>
<li>I initially used probability (0~1) as the value of prediction, but using the value of raw prediction before passing through sigmoid improved the score by about 0.002.</li>
<li>The loop ends when the number of performed merges reaches 35% of the length of the test data.</li></ul></li>
</ul> |
Foursquare - Location Matching | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the host for an interesting competition and congratulations to all the winners!</p>
<p>While we learned many things from this competition, it is truly regrettable that there was leakage. It was very sad that we had to spend so much time looking for leaks in the last few days.</p>
<h2>Summary</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F623370%2F00c6255ee515c26709b1107d59a95937%2F4square_pipeline.jpg?generation=1657345465981057&alt=media" alt=""><br>
We divided the generation of candidates and the determination of whether they were the same POI into four separate stages.</p>
<ul>
<li>1st stage : Create candidates</li>
<li>2nd stage : Feature engineering and LightGBM</li>
<li>3rd stage : xlm-roberta-large and mdeberta-v3-base and Catboost</li>
<li>4th stage : Post Process and Predicting newly emerged candidates with xlm-roberta-large</li>
</ul>
<p>Next, since there was the same data in train and test, we tried to link them and deleted the pairs that would have been incorrect.</p>
<h2>Cross validation strategy</h2>
<ul>
<li>Local evaluation was done with 2 fold cross validation. In the training for submit, we used all of the 2 fold data.</li>
</ul>
<h2>1st stage</h2>
<ul>
<li>For each id, we selected 100 candidates for each of the following two methods<ul>
<li>Euclidean distance in latitude and longitude</li>
<li>Cos similarity of name embedding<ul>
<li>model : bert-base-multilingual-uncased</li>
<li>knn : cuml</li></ul></li>
<li>Generate features for candidates. Here, to save memory, only a few features were generated, such as jaro distance for names and jaro distance for categories.</li>
<li>Using LightGBM, each of the above two patterns was predicted, leaving the top 20 candidates each, for a total of about 40 candidates. To reduce inference time, we used Forestinference.</li>
<li>Max IOU: 0.979</li></ul></li>
</ul>
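<p>A hedged sketch of the name-embedding KNN in the 1st stage (my reconstruction, not the team's code). <code>name_embeddings</code> is assumed to be the matrix of pooled bert-base-multilingual-uncased outputs; with L2-normalized rows, euclidean KNN ranks neighbors the same way cosine similarity does.</p>
<pre><code>import cupy as cp
from cuml.neighbors import NearestNeighbors

emb = cp.asarray(name_embeddings, dtype=cp.float32)
emb /= cp.linalg.norm(emb, axis=1, keepdims=True)   # normalize so euclidean ranking == cosine ranking

knn = NearestNeighbors(n_neighbors=100)
knn.fit(emb)
distances, indices = knn.kneighbors(emb)            # indices[i] = 100 name-similar candidates for record i
</code></pre>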
<h2>2nd stage</h2>
<ul>
<li>We created about 120 features based on the following notebook.<ul>
<li><a href="https://www.kaggle.com/code/ryotayoshinobu/foursquare-lightgbm-baseline" target="_blank">https://www.kaggle.com/code/ryotayoshinobu/foursquare-lightgbm-baseline</a></li>
<li>Character similarity features such as Levenshtein distance and Jaro-winkler Distance.</li>
<li>Character similarity statistics (maximum, minimum, average) using id as key. <br>
Also, the ratio of those statistics.</li>
<li>Euclidean distance using latitude and longitude</li>
<li>Embedding of name with svd for dimensionality reduction<ul>
<li>model : bert-base-multilingual-uncased</li></ul></li></ul></li>
<li>LightGBM ( Inference : Forestinference)<ul>
<li>CV : 0.875</li></ul></li>
<li>Threshold value : 0.01<ul>
<li>The number of candidates was reduced to about 10%.</li></ul></li>
</ul>
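<p>For reference, a hedged sketch of GPU inference with cuML's ForestInference (FIL), which the team mentions using to speed up LightGBM prediction; the file name and load options are assumptions, not their code.</p>
<pre><code>from cuml import ForestInference

fil_model = ForestInference.load(
    "lgbm_stage2.txt",        # a LightGBM model saved with booster.save_model(...)
    model_type="lightgbm",
    output_class=False,       # return scores instead of hard labels
)
preds = fil_model.predict(X_pairs)  # X_pairs: feature matrix for the candidate pairs
</code></pre>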
<h2>3rd stage</h2>
<ul>
<li>We had suspected overlap between train and test, but had no direct evidence, so we created train overfit models to increase LB and models to increase CV at the same time. After finding evidence of train and test overlap, we only needed to concentrate on improving the accuracy of the data that existed only in test, so we finally used models that had the highest CV and did not overfit train data(e.g. do not increase the number of epochs, use FGM, etc.)<ul>
<li>catboost<ul>
<li>The same features were used as in the 2nd</li>
<li>CV : 0.878</li></ul></li>
<li>xlm-roberta-large<ul>
<li>text : name + categories + address + city + state</li>
<li>Combine some features of 2nd ( about 70)</li>
<li>3 epochs</li></ul></li>
<li>mdeberta-v3-base<ul>
<li>text : name + categories + address + city + state</li>
<li>Combine some features of 2nd and new features ( about 90 ) <ul>
<li>manhattan distance, harversinie distance, etc </li></ul></li>
<li>Trained with FGM and EMA</li>
<li>4 epochs</li>
<li>CV : 0.907</li></ul></li></ul></li>
<li>Ensemble : 2nd lgb * 0.01 + 3rd catboost * 0.32 + 3rd xlm-roberta-large * 0.29 + 3rd medebeta-v3-base * 0.38<ul>
<li>CV : 0.911</li></ul></li>
<li>Threshold value : 0.5</li>
</ul>
<h2>4th stage</h2>
<ul>
<li>Post process<ul>
<li>Compare the matches of two ids and merge them if the proportion of common ids exceeds 50% from either side.</li>
<li>Prediction with xlm-roberta-large for the newly created pair above.</li>
<li>Threshold value : 0.02</li>
<li>CV : 0.9166</li></ul></li>
</ul>
<h2>Merge train (Leak)</h2>
<ul>
<li>Tie train and test by name,lat,lon</li>
<li>Add true positive<ul>
<li>Create all pairs of true positive with POI as key (1)</li></ul></li>
<li>Remove false positive<ul>
<li>Remove Pairs of id's tied with train data (2)<ul>
<li>It was possible to determine whether the id associated with the train data was false positive or not</li></ul></li>
<li>Remove pairs of id tied with training data and id not tied with training data(3)</li></ul></li>
<li>By using LB, we validated the effects of (1) add train-train tp, (2) remove train-train fp, (3) remove train-test fp, which are explained above.<ul>
<li>We compared LB with submission at stage2.<ul>
<li>Without merge train : 0.900</li>
<li>(1) : 0.943</li>
<li>(1) + (2) + (3) : 0.971</li></ul></li></ul></li>
<li>Since we were not sure whether (3) would work for private data, we chose both (1) + (2) and (1) + (2) + (3) for the final submission.</li>
</ul>
<h2>Private Score / Public Score</h2>
<ul>
<li>Over fit version<ul>
<li>mdeberta-v3-base(no dropout) 8epoch + xlm-roberta-large 8epoch</li>
<li>without train merge : 0.946 / 0.947</li>
<li>with train merge : 0.976 / 0.976</li></ul></li>
<li>Non over fit version1<ul>
<li>as explained above</li>
<li>without train merge : 0.941 / 0.941</li>
<li>with train merge : 0.977 / 0.978<ul>
<li>(1) + (2) + (3)</li></ul></li></ul></li>
<li>Non over fit version2<ul>
<li>Slightly changed model compared to non over fit version1</li>
<li>with train merge : 0.966 / 0.966<ul>
<li>(1) + (2)</li></ul></li></ul></li>
</ul>
<h2>About Leakage</h2>
<p>For the past few weeks, we have been suspicious of leak for many reasons:</p>
<ul>
<li>There is a large gap between CV and LB. Max IOU is already high, so a gap of this magnitude is unnatural.</li>
<li>LB score is higher with overfitted models. For example, no dropout or many epochs training. Training with many epochs leads to a worse CV score.</li>
<li>As the CV increased, the CV/LB correlation disappeared. We had a big improvement of CV by training of NNs with FGM but LB was worse.</li>
<li>The host explained that "we include a few example instances selected from the test set." and the two of the five sample records exist in training data.</li>
</ul>
<p>In the few days just before the deadline, we focused on thinking about how to find and utilize leakage. And fortunately, we succeeded in discovering how to take advantage of leakage.</p>
<h2>Inference code</h2>
<p><a href="https://www.kaggle.com/code/takoihiraokazu/sub-ex73-74-75-ex104-115-90-101-merge-train3" target="_blank">https://www.kaggle.com/code/takoihiraokazu/sub-ex73-74-75-ex104-115-90-101-merge-train3</a></p>
<p>Thanks!<br>
Here's how we found leak.</p>
<ul>
<li>First, the gap between CV and LB was large, LB increased when the number of epochs in the 3rd stage model was increased, and CV increased but LB decreased for the model trained with FGM. From these three points, we suspected that train and test had the same data.</li>
<li>Next, for further confirmation, we trained the model with a dropout 0 in bert, a model that was more overfit to train. This also increased the LB. </li>
<li>Furthermore, the host explains in the data tab that "we include a few example instances selected from the test set.", and two of the five sample records exist in training data. <br>
<a href="https://www.kaggle.com/competitions/foursquare-location-matching/data" target="_blank">https://www.kaggle.com/competitions/foursquare-location-matching/data</a></li>
</ul>
<p>From the above, we were almost certain that there was data overlap between train and test.<br>
Next, we experimented with the train merge described in the solution using online train data and found that LB did not increase.<br>
However, since we were convinced that train and test had the same data, we did not give up and wondered what the problem was. We thought that the contents of train.csv might have been switched, so we used another data that we had uploaded for debugging in advance. When we did so, LB went up quite a bit and we noticed that the contents of train.csv had been replaced.</p>
<p>Thanks!<br>
Yes, I was able to eliminate almost 90% of the data with a threshold of 0.01. The threshold was determined by looking at the balance between the number of remaining data and the max iou. At the end of the 2nd stage, max iou was 0.974. </p>
<p>I have concated it to the output of roberta.</p>
<pre><code>class FourSquare_model(nn.Module):
def __init__(self):
super(FourSquare_model, self).__init__()
self.model = AutoModel.from_pretrained(THIRD_BERT_MODEL1)
self.ln1 = nn.LayerNorm(1024)
self.linear1 = nn.Sequential(
nn.Linear(1024,128),
nn.LayerNorm(128),
nn.ReLU(),
nn.Dropout(0.2))
self.linear2 = nn.Sequential(
nn.Linear(76,128),
nn.LayerNorm(128),
nn.ReLU(),
nn.Dropout(0.2))
self.linear3 = nn.Sequential(
nn.Linear(128 + 128,64),
nn.LayerNorm(64),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(64,1),
)
def forward(self, ids, mask, token_type_ids,num_features):
# pooler
out = self.model(ids, attention_mask=mask,token_type_ids=token_type_ids)['last_hidden_state'][:,0,:]
out = self.ln1(out)
out = self.linear1(out)
out2 = self.linear2(num_features)
out = torch.cat([out,out2],axis=-1)
out = self.linear3(out)
return out
</code></pre> |
Foursquare - Location Matching | 23rd place solution (arcmargin) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, sorry for the late reply. I finished the competition a bit burnt out and have been away on vacation for a while. </p>
<p>I have prepared a notebook explaining in more detail the 'latitude and longitude encoding trick': <a href="https://www.kaggle.com/code/joulazas/latitude-and-longitude-encoding-for-arcmargin" target="_blank">notebook</a></p>
<p>Congratulations to all medal winners and thanks to Kaggle and Foursquare for such an interesting competition. </p>
<p>My solution is based on arcmargin models to extract candidate pairs and LightGBM for matching. </p>
<p><strong>Blocking</strong>: Generate pair candidates with an arcmargin model. This model is trained with the arcmargin loss (<a href="https://arxiv.org/pdf/1801.07698.pdf" target="_blank">https://arxiv.org/pdf/1801.07698.pdf</a>). </p>
<p>Features: </p>
<ul>
<li><p>TFIDF (char level of (1, 3) ngrams) of name, address, zip code. </p></li>
<li><p>TFIDF (word level) of categories. </p></li>
<li><p>Onehot of city, country, state and the domain of the url. </p></li>
<li><p>Binary features: has_phone, has_url, etc. </p></li>
<li><p>Latitude and longitude encoded with a trick explained below. </p></li>
</ul>
<p>Model: </p>
<p>All the inputs are concatenated and followed by two blocks of Dense-PReLU-BN-Dropout. The output of the last block is the input to the arcmargin layer. </p>
<ul>
<li><p>Size of embedding is 512. </p></li>
<li><p>It’s important to use L2 regularization in the arcmargin layer (it has a lot of parameters, >200M when is trained with >500k points of interest). I use L2=1e-4 </p></li>
<li><p>Margin warmup from 0.2 to 0.8. I think it’s not very important, but the model converges faster. </p></li>
</ul>
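<p>For readers unfamiliar with the arcmargin layer, here is a generic PyTorch-style sketch of an ArcFace/arcmargin head. The author's model is described with Keras-style layers, L2 regularization on this layer, and margin warmup; the code below is illustrative only, not their implementation.</p>
<pre><code>import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    """Outputs s * cos(theta + m) logits against one weight vector per POI class."""
    def __init__(self, in_features, n_classes, s=30.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(n_classes, in_features))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        sine = torch.sqrt((1.0 - cosine.pow(2)).clamp(0, 1))
        phi = cosine * math.cos(self.m) - sine * math.sin(self.m)  # cos(theta + m)
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        logits = self.s * torch.where(one_hot, phi, cosine)
        return logits  # train with nn.CrossEntropyLoss against the POI labels
</code></pre>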
<p>Results: </p>
<p>These models generate a set of pairs with a max IOU of >0.98. </p>
<p>Public LB: 0.89. </p>
<p><strong>Matching: LightGBM models</strong> </p>
<p>Features: </p>
<ul>
<li><p>Cosine distance between embeddings. </p></li>
<li><p>Euclidean, Manhattan, Haversine distances between lat, lon. </p></li>
<li><p>Levenshtein, Jaro-Winkler, LCS distances/similarities (<a href="https://github.com/seatgeek/thefuzz" target="_blank">https://github.com/seatgeek/thefuzz</a>). </p></li>
</ul>
<p>Results: </p>
<p>Public LB: 0.908 </p>
<p><strong>Final submission</strong>: </p>
<ul>
<li><p>New arcmargin model based only on text features with XLM-Roberta. Public LB: 0.917 </p></li>
<li><p>New features in LGB models: Embedding rankings, cosine distance to the nearest neighbor, etc. Public LB: 0.923. </p></li>
</ul>
<p><strong>Latitude and longitude encoding trick</strong>: Because the latitude and longitude features are of high precision, it is necessary to encode them in some way to make them useful for the arcmargin model. I have used a technique like positional encoding of transformers to embed these high precision features in vectors. The code used is: </p>
<pre><code>emb_size = 20
precision = 1e6
latlon = np.expand_dims(df[["latitude", "longitude"]].values, axis=-1)
m = np.exp(np.log(precision) / emb_size)
angle_freq = m ** np.arange(emb_size)
angle_freq = angle_freq.reshape(1, 1, emb_size)
latlon = latlon * angle_freq
latlon[..., 0::2] = np.cos(latlon[..., 0::2])
latlon[..., 1::2] = np.sin(latlon[..., 1::2])
latlon = latlon.reshape(-1, 2 * emb_size)
</code></pre> |
Foursquare - Location Matching | 2nd place solution - colum2131 Part | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you very much for organizing this interesting and exciting competition, and thank you for teaming up with me, <a href="https://www.kaggle.com/no9more9ria10" target="_blank">@no9more9ria10</a>, <a href="https://www.kaggle.com/tomyanabe" target="_blank">@tomyanabe</a>, <a href="https://www.kaggle.com/tkm2261" target="_blank">@tkm2261</a>, <a href="https://www.kaggle.com/yukia18" target="_blank">@yukia18</a> .</p>
<p>This post is part of the 2nd place solution in LB 0.949 (LB 0.971 with leak).</p>
<p><a href="https://www.kaggle.com/competitions/foursquare-location-matching/discussion/336062" target="_blank">Brief Summary</a></p>
<p>Other part</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/foursquare-location-matching/discussion/336072" target="_blank">Candidate Generation</a></li>
</ul>
<h1>Stack-part 1: Binary classification of matches</h1>
<p>Using the candidate matches created in the previous stage, we ensemble three models: GBDT (XGBoost, lightGBM), BERT(xlm-roberta-base).<br>
For this id-pair (e.g., src_id, dst_id), GBDT mainly used the edit distance feature, while BERT used text created in particular columns. These models classified whether the pair matched or not.</p>
<h2>GBDT</h2>
<h3>Features</h3>
<p>XGBoost features were created by colum2131 ( <a href="https://www.kaggle.com/columbia2131" target="_blank">@columbia2131</a> ), and lightGBM features by ria ( <a href="https://www.kaggle.com/no9more9ria10" target="_blank">@no9more9ria10</a> ).<br>
Although there are some differences in each feature, the following XGBoost’s features were below:</p>
<ul>
<li>Features generated independently with src_id and dst_id<ul>
<li>latitude / longitude</li>
<li>country with ordinal-encoding</li>
<li>categories with ordinal-encoding</li></ul></li>
<li>Features generated dependently with src_id and dst_id<ul>
<li>Edit distance (name, categories, address, city, state, zip, url, phone)<ul>
<li>gesh</li>
<li>leven</li>
<li>jaro</li>
<li>simple_ratio (RapidFuzz)</li>
<li>partial_ratio (RapidFuzz)</li>
<li>token_set_ratio (RapidFuzz)</li>
<li>token_sort_ratio (RapidFuzz)</li>
<li>token_ratio (RapidFuzz)</li>
<li>partial_token_ratio (RapidFuzz)</li>
<li>wratio (RapidFuzz)</li>
<li>qratio (RapidFuzz)</li></ul></li>
<li>Set similarity with .split(“ ” or “, “) (name, cateogires)<ul>
<li>jaccard</li>
<li>dice</li>
<li>sympson</li></ul></li>
<li>Cosine similarity of vectors encoded by tf-idf (name, categories, address, all)<ul>
<li>tfidf(ngram_range=(1, 1), analyzer=‘word’)</li>
<li>tfidf(ngram_range=(1, 3), analyzer=‘char’)</li></ul></li></ul></li>
<li>Others<ul>
<li>probability by transformer</li></ul></li>
</ul>
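<p>For illustration, a hedged sketch of a few of the pairwise string-similarity features listed above, using RapidFuzz plus simple set similarities; the helper names are mine, not the team's code.</p>
<pre><code>from rapidfuzz import fuzz
from rapidfuzz.distance import Levenshtein, JaroWinkler

def string_pair_features(a: str, b: str) -> dict:
    a, b = (a or ""), (b or "")
    return {
        "leven": Levenshtein.normalized_similarity(a, b),
        "jaro": JaroWinkler.similarity(a, b),
        "partial_ratio": fuzz.partial_ratio(a, b),
        "token_set_ratio": fuzz.token_set_ratio(a, b),
        "token_sort_ratio": fuzz.token_sort_ratio(a, b),
        "wratio": fuzz.WRatio(a, b),
    }

def set_similarities(a: str, b: str, sep: str = " "):
    sa, sb = set(a.split(sep)), set(b.split(sep))
    inter = len(sa & sb)
    jaccard = inter / max(len(sa | sb), 1)
    dice = 2 * inter / max(len(sa) + len(sb), 1)
    simpson = inter / max(min(len(sa), len(sb)), 1)
    return jaccard, dice, simpson
</code></pre>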
<h4>Modeling</h4>
<p>The cross-validation of the train data was purposely split so that leakage would occur.<br>
Specifically, instead of the folds with GroupKFold(n_splits=2, group=‘point_of_interest’), we created new folds with GroupKFold(n_splits=5, group=‘src_id’).<br>
This leakage improved the CV and Public LB because train and test data contained common records.<br>
We suspect that the latitude and longitude variables and the ordinal categories worked effectively as features to represent the common id’s records.</p>
<h2>BERT</h2>
<h3>Text & Numerical-Features</h3>
<p>The text was defined as the lowercased name and categories of each record, connected by SEP tokens, together with the country.<br>
Additionally, the numerical feature was the logarithm of the haversine distance.</p>
<pre><code>def get_basearray(cfg, df):
cfg.idx2id = df['id'].fillna('').to_numpy()
cfg.id2idx = {c: i for i, c in enumerate(cfg.idx2id)}
cfg.idx2name = df['name'].fillna('').str.lower().to_numpy()
cfg.idx2categories = df['categories'].fillna('').str.lower().to_numpy()
cfg.idx2address = df['address'].fillna('').str.lower().to_numpy()
return cfg
def get_text(cfg, i, j): # {i: index of src_id, j: index of dst_id}
text = cfg.idx2country[i]+cfg.sep+\
'name1 '+cfg.idx2name[i]+cfg.sep+'category1 '+cfg.idx2categories[i]+\
cfg.sep + cfg.sep +\
'name2 '+cfg.idx2name[j]+cfg.sep+'category2 '+cfg.idx2categories[j]
return text
</code></pre>
<h3>Model Training</h3>
<p>The architecture of BERT is below:</p>
<pre><code>class CustomModel(nn.Module):
def __init__(self, cfg, criterion):
super().__init__()
self.cfg = cfg
self.criterion = criterion
self.config = AutoConfig.from_pretrained(
cfg.MODEL_PATH,
output_hidden_states=True
)
self.config.attention_probs_dropout_prob = 0.0
self.config.hidden_dropout_prob = 0.0
self.backbone = AutoModel.from_pretrained(
cfg.MODEL_PATH, # xlm-roberta-base
config=self.config
)
self.linear1 = nn.Sequential(
nn.Linear(self.config.hidden_size+len(cfg.num_cols), 1024),
nn.SELU(),
nn.Linear(1024, 1)
)
def forward(self, inputs, labels=None):
outputs = self.backbone(
inputs['input_ids'],
attention_mask=inputs['attention_mask']
)["last_hidden_state"]
outputs = outputs[:, 0, :]
outputs = torch.cat((
outputs,
inputs['num_features']
), dim=1)
logits = self.linear1(outputs).flatten()
loss = self.criterion(logits, labels)
return logits, loss
</code></pre>
<p>This model was trained for 5 epochs on all train data.</p>
<h1>Leak Post Process</h1>
<p>As mentioned in other discussions, there was a duplication of train data in the test data.<br>
Therefore, some of the test data was merged with the train data, and matches were created among records sharing the same POI.<br>
These matches were used to overwrite the corresponding predictions of the LB 0.949 submission.<br>
Note that train.csv is replaced at submission time, so we created a separate dataset containing a copy of train.csv.</p>
<pre><code>import cudf
import pandas as pd

# a copy of train.csv saved as a separate dataset, since the original train.csv is swapped at submission time
train = cudf.read_feather('../input/4sq-train-data-copy-feather/train.ftr').fillna('[NAN]')
test = cudf.read_csv('../input/foursquare-location-matching/test.csv', dtype=str).fillna('[NAN]')

# link test rows to train rows (and hence to their POI) via exact name/latitude/longitude
cols = ['name', 'latitude', 'longitude']
test = test.merge(train[cols + ['point_of_interest']], on=cols, how='left')
test[['id', 'point_of_interest']].to_pandas().to_csv('id2poi.csv', index=False)

id2poi = pd.read_csv('id2poi.csv')
match_dict = {}
for poi, group_df in id2poi.groupby('point_of_interest'):
    ids = set(group_df['id'].tolist())
    for id_str in ids:
        match_dict[id_str] = ids

# sub_dict: id -> matches produced by the LB 0.949 pipeline; leak matches overwrite them where available
sub = pd.DataFrame({
    'id': sub_dict.keys(),
    'matches': [
        match_dict.get(key, value)
        for key, value in sub_dict.items()
    ]
})
sub['matches'] = sub['matches'].map(lambda x: ' '.join(x))
sub.to_csv('submission.csv', index=False)
</code></pre> |
Foursquare - Location Matching | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><pre><code># embeddings 600k x embedding dimension - already normalized
chunk = 512
embeddings_chunks = embeddings.split(chunk)
for idx in range(len(embeddings_chunks)):
cos_sim_chunk = torch.mm(embeddings_chunks[idx], embeddings.transpose(0, 1))
# here you will only want to keep the topk indices in memory for further use
</code></pre>
<p>Diff LR was also helpful, yes</p>
<p>It was a binary classification task, and label 1 or 0 was defined by ground truth data. And I extracted the candidates from in-sample similarities.</p>
<p>BTW: it is crazy how far we have got. I remember struggling with this problem on tfidf features ~10 years ago. It was sheer impossible to do on CPU and memory restrictions. </p>
<p>I will have to check on old models on cv, I don't remember it fully. But it was somewhere in range of 0.97 recall maybe? But by taking more I could get higher.</p>
<p>What I want to mention is that the whole solution is quite quick. A single first stage + single second stage model scores in 90 minutes on LB with 600k records. </p>
<p>There is no magic in the code that would be interesting. Rather certain training tricks to make the training stable such as: long warmup, playing with the neck of the model, trying different margins and scale combinations, etc.</p>
<p>ArcFace is explicitly tailored towards large number of classes. You have this dimension in the margin product of ArcFace.</p>
<p>Congrats to all competitors, and although it had a strange ending, I am confident that some techniques of this competition can be re-used in future.</p>
<h3>Summary</h3>
<p>My solution is based on a two-stage approach utilizing NLP deep learning models. The first-stage model is a metric learning model with ArcFace loss trained on the original input data predicting the point-of-interest of a record and consequently putting similar records into a similar embedding space. The second stage additionally trains a bi-encoder NLP model based on the proposed candidates from the first stage. The final sub is a blend of both first- and second-stage models.</p>
<h3>First stage: ArcFace</h3>
<p>In the first stage, I fit ArcFace model on most of the input columns separated by SEP tokens. I directly feed the longitude and latitude into the string but split it up every 3rd digit so that the tokenizers can work better with it. A sample record looks like follows:</p>
<p><code><s> Tesco HQ</s></s> Offices</s></s> 18 963 587</s></s> 47 450 562</s></s> 2040</s></s> Budaörs</s></s> Kinizsi út 1-3.</s></code></p>
<p>If a field was NAN, I just filled in an empty string, and no special tokens or other alterations were helpful.</p>
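<p>A small sketch of the coordinate formatting implied by the example record above (splitting the digits so the tokenizer sees short numeric chunks); this is my guess at the exact rule, not the author's code.</p>
<pre><code>def coord_to_text(x: float) -> str:
    # "47.450562" -> "47 450 562": integer part, then the six decimals in groups of three
    whole, frac = f"{x:.6f}".split(".")
    return " ".join([whole, frac[:3], frac[3:]])

coord_to_text(47.450562)  # -> "47 450 562"
coord_to_text(18.963587)  # -> "18 963 587"
</code></pre>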
<p>It was still quite tricky to get ArcFace models to work, and some effort was necessary to get the training to stabilize and converge. I only rely on xlm-roberta-large models here. In the beginning I experimented with Triplet-like models, which were easier to get going, but ArcFace was clearly superior after making these models work.</p>
<p>I then calculate all-pairs similarity and cut-off below a certain threshold. All pairs above the threshold make it into the second stage and we also keep the predicted probability for blending.</p>
<h3>Second stage: Bi-Encoder</h3>
<p>For training, I use in-sample results from the first stage as candidate pairs. I use various bi-encoder models to train for TP/FP of pairs. A record looks like follows:</p>
<p><code><s> Tesco HQ</s></s> Novo Nordisk</s></s> 18 963 587</s></s> 12 449 657</s></s> 47 450 562</s></s> 55 751 928</s></s> Offices</s></s> Offices</s></s> HU</s></s> DK</s></s> Pest</s></s> Bagsværd</s></s> Budaörs</s></s> Bagsværd</s></s> 2040</s></s> 2880</s></s> Kinizsi út 1-3.</s></s> Novo Allé</s></s></s></s></s></s></s></s></code></p>
<p>So pasting each column of both records next to each other and then training a bi-encoder model. I always added all TPs to the training data, and then increasingly added more FPs if time allowed.</p>
<h3>Cross-Validation</h3>
<p>The test set in this competition contained ~600k records, and the whole training data contained approximately twice the data. Retrieval/Recognition type problems are always heavily dependent on the list of candidates to choose from, so naturally having the possibility of more unknown candidates will deteriorate the score. </p>
<p>Hence, the most reasonable CV setup was to split out 600k records with unique points of interest as a validation set, and train models on the remaining data. When doing so I saw good correlation between CV and LB; however, there is one big issue with this approach, namely that you miss training models on half the data. Consequently, I decided around six weeks before the competition deadline to only evaluate on the public LB, which from my point of view was the best set to evaluate my models, for the following reasons:</p>
<ul>
<li>There was enough time left for me to evaluate regularly on LB</li>
<li>I saw good correlation before between CV and LB</li>
<li>It allowed me to train all models directly on full data</li>
<li>It included the full test data as candidates (all 100%)</li>
<li>On CV, the score was stable on subsets, so any random 100k subset scored nearly the same as the full 600k set (keeping all 600k candidates of course).</li>
</ul>
<p>Unfortunately, I only learned after competition ended, that the test data has a leak that was already discussed thoroughly in the forums. So I definitely biased my model choices towards that leak (e.g., longer training, more overfitting - which definitely works very well with my data and model setup). However, I still made sure that longer training would not hurt my CV score. Yet, I stopped trying to explicitly optimize my CV score, and in retrospective this is a bit unfortunate as I am fully convinced that my general approach could further be improved by working towards a different direction. </p>
<h3>Blend and discussion</h3>
<p>I blend both probabilities from first and second stage to get the final prediction and cut-off by a certain threshold. I applied some minor QE post processing, but not much was helping additionally.</p>
<p>I personally never suspected the leak, due to the dataset description and nature of the problem, so I also never actively searched for it. That said, I also noticed that longer training leads to better LB, now in retrospective I understand why. I was rather suspecting that the models learn / overfit the individual datasets (there are different synthetic data sources) which can help the matching tremendously as it reduces the possible candidates.<br>
The <a href="https://www.kaggle.com/competitions/foursquare-location-matching/discussion/338035" target="_blank">post-analysis</a> revealed that my solution also scores very competitively on non-overlapping data which shows how well my pipeline can generalize to both scenarios. </p>
<p>I still learned a lot in this competition and am very happy to have solved it with deep learning only. It is most fun to me personally, and I am sure this general approach will come in handy again in the future.</p>
<p>I also nearly gave up on it, and I still wasted many runs where the loss exploded. It is very hard to tune in general and needs some patience. I started with ArcFace, couldnt make it work, and switched to TripletLoss which worked. So I thought it has to work and went back to ArcFace.</p>
<p>I tried quite a few things with loss weighting, dynamic margins, and other things, but in the end I did not do anything special with them as overall the class distribution (in terms of nr of records per class) was not too diverse and just leaving it as-is was best or at least not worse.</p>
<p>I am calculating it directly in Pytorch on GPU in batch-wise fashion on the embedding output from the ArcFace model. It is very quick.</p>
<p>You split the source in batches, let's say 32, and keep the target full. So you calculate 32x600k for 600k/32 times. It is very fast. Let me know if it doesn't make sense and I will try to elaborate more detailed.</p> |
Foursquare - Location Matching | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Aman !</p>
<p>We used phone to create candidates, since a same phone number was often indicating a true match.</p>
<p>As for model features, we create some of them (LCS, jaro-winkler, count of occurences…) but as you said, most of them were not very relevant.</p>
<p>I just shared 3 useful notebooks that we used : <a href="https://www.kaggle.com/competitions/foursquare-location-matching/discussion/335873" target="_blank">https://www.kaggle.com/competitions/foursquare-location-matching/discussion/335873</a></p>
<p>Yes, I will soon share one or two notebooks. Moreover, <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> will soon add a diagram to illustrate the pipeline.</p>
<p><strong>[EDIT] You can find the notebooks here :</strong> <a href="https://www.kaggle.com/competitions/foursquare-location-matching/discussion/335873" target="_blank">https://www.kaggle.com/competitions/foursquare-location-matching/discussion/335873</a></p>
<p>Thank you. This is normal, don't worry, it must be taken step-by-step. One can't learn in 10 minutes what asked us more than 100 hours !</p>
<p>Thank you for your kind comment !</p>
<p>Hello !</p>
<p>First of all, I would like to thank the organizers for this very interesting and challenging competition. Also, congratulations to all participants : the suspense was so tense in the last few days that I can’t believe it’s over now. I’m writing this at 3am, very grateful and relieved ! Also, a kind word for the deserving people who missed a medal by very few positions, because it can be hard to accept. You did really great, and I’m sure you’ll do even better the next time. Hardwork pays off.</p>
<p>Finally, thanks to my talented and hard-working teammates <a href="https://www.kaggle.com/ymatioun" target="_blank">@ymatioun</a> and <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>, with whom I enjoyed to work and learn.<br>
<br></p>
<h2><strong>Solution overview</strong></h2>
<p>Our solution relies on the following steps :</p>
<p>• <strong>Clean data, mostly names.</strong> Remove special characters + space, apply unidecode to unify non-latin languages, lower text…<br>
• <strong>Create some meaningful features.</strong> This has required some work beforehand, for instance to create groups of similar categories ('gyms', 'gyms or fitness centers', 'gymnastics gyms', 'gym pools'…), and to compute the average distance for each of them where we would be likely to find at least 50%, 75%, 90% and 95% of true matches. We also grouped names, cities and states, if they designated the same place. You can find <a href="https://www.kaggle.com/competitions/foursquare-location-matching/discussion/335873" target="_blank">3 useful notebooks here</a> that show how we did it.<br>
• <strong>Find candidate pairs to match.</strong> To do this, we used different ideas : consider near neighbors; name-similarity; same words in names; grouped-category equality; (cleaned) phone number equality; same address; TF-IDF (especially for airports and indian places); …<br>
• <strong>Pair matching.</strong> At this point, we had so many pairs that we had to use 2 models to avoid memory errors. The first one, a 5-fold LGBM, used only a few features such as distance or name similarity to avoid issues. Its goal was to delete every irrelevant pair (threshold < 0.007). We then computed more than 200 features to feed a 2nd LGBM (20 folds!), which was responsible for the final results. Both of these models were trained on the whole train set (1.1M rows), which was not possible on Kaggle and had to be done elsewhere due to memory constraints.<br>
• <strong>Post-processing.</strong> We tried a lot of ideas (graph-based; a 3rd LGBM model taking the 2nd model's score as an input, as well as contextual features such as the number of high-scored neighbors of a place). In the end, the key idea was to adapt the thresholds to the sizes of the groups we were merging. In fact, merging 2 places does not have the same impact (or probability) as merging 2 groups of 10 places.</p>
<p><strong>The leak</strong>: some rows of the private test set were very similar to rows of the train set. Thankfully, we found this on the very last day of the competition. So we recorded every matched id of the train set, and if they were found in the test set, we matched them automatically. In practice, we drew the parallel between the train and test set by identifying ids with the following key: cleaned_name + round(latitude, 5) + round(longitude, 5).<br>
In my opinion, it would have been better for the competition if the entries of the train set and the test set were distinct.<br>
Edit : we didn't use the leak to delete false positives but it would have surely given a big boost.<br>
<br></p>
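<p>A hedged sketch of the train/test linking key described above; the cleaning regex, column names, and file paths are assumptions, not the team's exact code.</p>
<pre><code>import pandas as pd

def make_key(df: pd.DataFrame) -> pd.Series:
    cleaned_name = df["name"].str.lower().str.replace(r"[^0-9a-z]", "", regex=True).fillna("")
    return (cleaned_name
            + "_" + df["latitude"].round(5).astype(str)
            + "_" + df["longitude"].round(5).astype(str))

train = pd.read_csv("train_copy.csv")   # a saved copy, since train.csv is swapped at submission time
test = pd.read_csv("test.csv")
train["key"], test["key"] = make_key(train), make_key(test)

key2poi = train.set_index("key")["point_of_interest"].to_dict()
test["poi_from_train"] = test["key"].map(key2poi)   # test ids sharing a POI are then force-matched
</code></pre>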
<h2><strong>Pipeline</strong></h2>
<p><a href="https://ibb.co/xXCkXR6"><img src="https://i.ibb.co/rF7jF1H/4sq-drawio.png" alt="4sq-drawio"></a><br>
Thanks to <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> for this great illustration.<br>
<br><br>
<strong>What did not work</strong><br>
This part would be very long, but mainly:</p>
<p>• Tf-idf was not very useful : less than 3% of true matches added, and it added some false-positives that were hard to catch (since the names were similar by definition).<br>
• Translation of foreign languages was not a game-changer compared to unidecode (except pykakasi that was better for Japanese translation)<br>
• (Offline) reverse geocoder was not that useful, mainly because the main features were foremost based on the name/latitude/longitude rather than on city/state.<br>
• Model stacking. We tried to add an XGBoost and Catboost, but in the end, the gain was very poor compared to the efforts.</p>
<p><br></p>
<h2><strong>Code</strong></h2>
<p>Our code is available <a href="https://www.kaggle.com/competitions/foursquare-location-matching/discussion/335921" target="_blank">here</a>.</p>
<p><br></p>
<h2><strong>Final thoughts</strong></h2>
<p>I really enjoyed this competition, because a lot of approaches were possible. In the end, a great solution required tackling very different and complementary tasks: text-processing, out-of-the-box ideas, model tuning, memory handling, feature engineering… So everyone ends up learning something! And thanks to it, after a lot of hard work, I've become a Kaggle Competition Master ! ✔️</p>
<p>To finish, thank you all once again for your great efforts and contributions. This competition has been very fun, and I'm looking forward to continue my journey on Kaggle.</p>
<p>If you have any question about our work, feel free to ask, and I wish you all the best for the next competitions. 🙂</p> |
Foursquare - Location Matching | 58th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h4>CV</h4>
<ul>
<li>Group shuffle split on POIs gave me very good correlation across folds and lb (almost identical at my score range)</li>
<li>I used a 60/40 split on POIs, but I limited the number of negative samples for training (2.8M negative + 500k positive).</li>
<li>For memory efficiency, I did <code>tuple(sorted([id_1,id_2]))</code> so I only worked with unique pairs</li>
</ul>
<h4>Preprocessing</h4>
<ul>
<li>str.lower()</li>
<li>map US states to abbreviations (california -> ca)</li>
<li>replace all city and states for SG to 'singapore'</li>
</ul>
<h4>Finding Pairs</h4>
<p>used different geohash partitions to split data, then:</p>
<ul>
<li>tfidf (various combination of text cols) + agglomerative clustering to form clusters, then generate pairs from clusters</li>
</ul>
<p>knn using euclidean + haversine</p>
<p>uuids pairs: find rows with exactly same values of combinations of text columns (e.g. same name and address)</p>
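<p>A rough sketch (my assumptions, not the author's code) of the "tfidf + agglomerative clustering" route above, applied to the names inside one geohash partition; thresholds and n-gram ranges are illustrative.</p>
<pre><code>from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_distances

def candidate_pairs(ids, names, distance_threshold=0.7):
    X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(names)
    D = cosine_distances(X)
    labels = AgglomerativeClustering(
        n_clusters=None,
        metric="precomputed",        # `affinity="precomputed"` on older scikit-learn
        linkage="average",
        distance_threshold=distance_threshold,
    ).fit_predict(D)
    pairs = set()
    for label in set(labels):
        members = [i for i, l in enumerate(labels) if l == label]
        pairs.update(tuple(sorted((ids[a], ids[b]))) for a, b in combinations(members, 2))
    return pairs
</code></pre>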
<h4>Features</h4>
<ul>
<li>using neighbours' features gave me a good increase in LB (e.g. including the neighbour distances of id_1 and id_2)</li>
<li>expanded the idea to take distance features between neighbours of id_1 and neighbours of id_2 </li>
<li>language detection using fasttext on names</li>
<li>sbert embeddings and universal sentence encoder -> dot product</li>
<li>tfidf, lsa and hashing vectoriser -> dot product</li>
</ul>
<h4>Model</h4>
<ul>
<li>Lgb, catboost, xgboost and 7 layered feed forward nn using category embeddings</li>
</ul>
<p>some parameters i found important:</p>
<ul>
<li><code>max_delta_step</code> for xgb and lgb when dealing with imbalanced data helps make results more stable across experiments</li>
<li><code>cat_features</code> are very good with catboost, but it makes the model size very large. tune <code>max_ctr_complexity</code> and <code>model_size_reg</code> to make inference less memory intensive.</li>
</ul>
<h4>Post-Processing</h4>
<ol>
<li>For ids without matches, increase the threshold slightly using <code>np.round(x,2)</code></li>
<li>For a given id, I took the union of all matches that included it (i.e. built an inverted id-to-matches list, then took the union)</li>
<li>I checked each neighbour's matches, then merged if the IOU exceeded a threshold (depending on the length of the original match)</li>
<li>Lastly, I pruned pairs which had low predictions (<=0.2)</li>
</ol>
<p>Edit: I did not use the leakage explicitly; pair matching was applied to train and test separately.</p> |
Foursquare - Location Matching | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you to everyone who organized the competition.<br>
The competition ended over a month ago, so I am sharing my solution.<br>
This competition's task and data were interesting, but I regret the leakage.</p>
<h1>1. Preprocessing</h1>
<hr>
<ul>
<li>Remove symbols that seem to be noise.<ul>
<li>About zip and phone, leave only numbers.</li></ul></li>
<li>Apply pykakasi to JP and unidecode to all data.</li>
</ul>
<h1>2. Candidate Generation</h1>
<hr>
<ul>
<li>Generate 28 candidates per id, IoU=0.978.</li>
<li>Dataset Creation Process<ul>
<li>Split the data into two parts by poi.</li>
<li>Create candidates and features for each data.</li>
<li>Change seed and split the data multiple times.</li>
<li>Combine them to form a dataset.</li></ul></li>
</ul>
<h4>Phase1</h4>
<ul>
<li>Extract 6500 points for each point by coordinates, using kdtree.</li>
</ul>
<h4>Phase2</h4>
<ul>
<li>Calculate the similarity with each point, and select top28 as a candidate by the following weighted sum.<ul>
<li>square root of the coordinate distance</li>
<li>1 - jarowinkler: name, categories</li>
<li>normalized levenshtein: name, categories</li>
<li>1 - simpson: name splited by ' ', categories splited by ', '</li></ul></li>
</ul>
<h1>3. Matching Prediction</h1>
<hr>
<h4>Feature</h4>
<ul>
<li>coordinate distance rank of kdtree</li>
<li>language identification from name<ul>
<li>using top3 labels and probs</li></ul></li>
<li>coordinate distance<ul>
<li>haversine, chebyshev, euclidean, manhattan</li></ul></li>
<li>similarity of the categorical feature<ul>
<li>LCS(Subsequence/Substring), levenshtein, jarowinkler</li>
<li>About categories splited by ', ', use similarity, calculated on a brute force basis, mean, min and max.</li></ul></li>
<li>string length of categorical features</li>
<li>features about sets based on string and word<ul>
<li>jaccard, dice, simpson</li></ul></li>
<li>count encoding<ul>
<li>coordinate, name, address, url, phone</li></ul></li>
<li>tf-idf</li>
<li>rank features about frequency of occurrence<ul>
<li>country, categories, categories splited by ', ', categories splited by ' '(word)</li>
<li>Except country, labeling in both of ascending and descending order.</li>
<li>Labels less than 1% are to be removed.</li></ul></li>
<li>rank features for each candidate<ul>
<li>Create a rank of between 28 candidates per id for each feature.</li></ul></li>
</ul>
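<p>For reference, a vectorized haversine distance, one of the coordinate-distance features listed above; this is the standard formula rather than the author's exact code.</p>
<pre><code>import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance in kilometres between two (arrays of) lat/lon points
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))
</code></pre>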
<h4>Model</h4>
<ul>
<li>catboost, single model</li>
<li>remove early stopping & increase iterations (overfit)</li>
<li>train on all data</li>
</ul>
<h1>4. Postprocessing</h1>
<hr>
<ul>
<li>graph probability convolution
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2126876%2F2e034453f5dbb06da3b079ab9326f2cf%2Fgraph%20probability%20convolution.png?generation=1661677435456298&alt=media" alt=""><ul>
<li>ex) Updating the prob of the connection from A to B (red line).<ul>
<li>Extract A's candidates that have B as a candidate.</li>
<li>Limit to connections having 0.5 or more prob to A.</li>
<li>Calculate a weighted average of the probs of the connections with B, using the probs of the connections with A as the weights.</li>
<li>AB’<br>
= (AC * CB + AD * DB + AA * AB) / (AC + AD + AA)<br>
= (0.7 * 0.7 + 0.9 * 0.1 + 1 * 0.6) / (0.7 + 0.9 + 1)<br>
= 0.45</li></ul></li></ul></li>
<li>Add nodes that are connected with a prob of 0.5 or more at the adjacent nodes to the prediction.<ul>
<li>Repeat twice.</li></ul></li>
</ul>
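<p>A small sketch of the "graph probability convolution" update described above, reproducing the worked example (AB' = 0.45). It is my reading of the description: <code>probs</code> maps an ordered candidate pair to its predicted probability (assumed keyed in both directions) and <code>cands[a]</code> is the candidate list of <code>a</code>.</p>
<pre><code>def convolve_prob(a, b, probs, cands, self_prob=1.0, min_prob=0.5):
    # weighted average of prob(x -> b) over x in {candidates of a that also have b as a
    # candidate and prob(a -> x) >= min_prob}, weighted by prob(a -> x);
    # a itself is included with weight self_prob (AA = 1 in the example).
    num = self_prob * probs[(a, b)]
    den = self_prob
    for x in cands[a]:
        if x in (a, b):
            continue
        w = probs.get((a, x), 0.0)
        if w >= min_prob and (x, b) in probs:
            num += w * probs[(x, b)]
            den += w
    return num / den
</code></pre>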
<h1>5. Other Tips</h1>
<hr>
<ul>
<li>Build with dict and numpy basically.</li>
<li>Batch processing of feature creation and prediction to avoid memory errors.</li>
<li>Parallel processing of the candidate generation with 4 cpu at submission.</li>
<li>Convert id dtype from str to int.</li>
</ul> |
Foursquare - Location Matching | 7th place solution(with inference code) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>there are word tour's result for categories<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1958529%2F116b2ea99df839c15806f8c06cbf10af%2F(42).png?generation=1657245838706707&alt=media" alt="word tour result for categories 1"></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1958529%2Fd50fab4fb54b6baaf70d8c5ccf56ca42%2Fword_tour_result.png?generation=1657245934032150&alt=media" alt="word tour result for categories 2"></p>
<h1>Japanese edition</h1>
<h1>overview</h1>
<p>We applied language-specific preprocessing to "name" and "address" and converted them into embeddings.<br>
We generated candidates from distance, name similarity, and word- and character-level precision/recall, using multiple linear regression as the base.<br>
Prediction was done with LightGBM binary classification.<br>
For post-processing, we treated pairs whose predicted value exceeded a threshold as edges, split the graph into subgraphs with a Union-Find tree (disjoint set), and then selected the nodes whose shortest-path distance was at most 2.</p>
<p>As for the jump from 0.933 -> 0.951 that everyone is curious about, it happened when we replaced one of the 5 folds, which had been early-stopped at 500 iterations, with a model trained on all the data for 2000 iterations.</p>
<h1>Preprocessing</h1>
<h2>Natural language preprocessing</h2>
<p>Since name and address were multilingual, we prepared three differently processed versions of the text.</p>
<ol>
<li>Almost raw text</li>
<li>Per-language text with lowercasing, symbol removal, and conversion of numerals to digits</li>
<li>The text from 2. converted entirely to [0-9a-z&] with unidecode</li>
</ol>
<p>Since address had many NaNs, we imputed it by concatenating the addresses of the three nearest neighbors by distance.</p>
<h3>Per-language processing</h3>
<p>Basically, for every language we uniformly removed symbols, then lowercased, converted numerals to digits, and romanized the text,<br>
but Japanese, Chinese, and Thai need word segmentation, so we processed them individually.</p>
<p>For Japanese, we obtained word boundaries and readings with the tokenizer <a href="https://github.com/WorksApplications/Sudachi" target="_blank">Sudachi</a> and converted them to romaji with <a href="https://github.com/miurahr/pykakasi" target="_blank">Pykakasi</a>.<br>
For Chinese, we segmented with <a href="https://tfhub.dev/google/zh_segmentation/1" target="_blank">zh_segmentation</a> and romanized with unidecode.<br>
For Thai, we used <a href="https://pythainlp.github.io/" target="_blank">PyThaiNLP</a> for both segmentation and romanization.</p>
<h2>Preprocessing of city and state</h2>
<p>For city and state, we took roughly the 2000 most frequent values and, using nearest neighbors by distance, mapped each record onto one of those 2000 values as a categorical feature.<br>
We also used the <a href="http://www.geonames.org/" target="_blank">cities1000</a> dataset to map records to the top 2000 place names as another categorical feature (called geo_name below).</p>
<h2>Creating the embeddings</h2>
<p>For text version 3 above, we split name and address into 32000 subwords, each with its own sentencepiece model.<br>
On top of this, we learned embeddings with the following three word2vec-based unsupervised pretraining tasks.</p>
<ul>
<li>A Skip-gram task on name</li>
<li>A task predicting categories from name, address, and city/state/geo_name</li>
<li>A contrastive task mixing name, address, city/state/geo_name, and categories, where each record has to identify itself, trained with <a href="https://arxiv.org/abs/2104.08821" target="_blank">SimCSE</a></li>
</ul>
<p>When running this training, the way negative samples were obtained was very important.<br>
For each sample we connected its 128 nearest neighbors with edge probabilities based on the negative log of the haversine distance, and generated negative samples with a random walk per sample, which gave high-quality training.</p>
<h3>Filling missing categories</h3>
<p>Since the category-prediction model turned out well, we filled in missing values by predicting the categories of records whose categories were blank.</p>
<h3>Clustering and 1-D embedding</h3>
<p>Each embedding was split into about 2000 clusters with spherical kmeans++.<br>
We also implemented a kmeans++ based on haversine distance and likewise split the data into about 2000 clusters.</p>
<p>After clustering, we embedded the information into one dimension by solving a traveling salesman problem over the pairwise distances between points.<br>
This is a technique called <a href="https://arxiv.org/abs/2205.01954" target="_blank">Word Tour</a>, which turns an embedding into just one dimension in a form that is easy for a GBDT to interpret.<br>
We also reordered the categorical values with TSP, using the similarities between the kmeans clusters, the similarities between the top-2000 most frequent names and addresses, and the haversine distances between city, state, and geo_name, and used the resulting order as continuous values.</p>
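<p>A minimal sketch of the Word Tour idea described above, using a greedy nearest-neighbor tour as a stand-in for a proper TSP solver (my own approximation, not the author's implementation): each cluster centroid gets a single scalar position on the tour, which a GBDT can split on.</p>
<pre><code>import numpy as np

def greedy_tour_order(points: np.ndarray) -> np.ndarray:
    # points: (n, dim) array, e.g. cluster centroids from spherical k-means
    n = len(points)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    order = np.empty(n, dtype=int)
    order[np.array(tour)] = np.arange(n)  # order[i] = position of point i on the tour
    return order

# the 1-D feature of a record is then order[cluster_label_of_record]
</code></pre>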
<h1>特徴量生成</h1>
<h2>haversine distance</h2>
<p>二点間の距離について対数をとって特徴量にしました。対数への変換はのちの候補生成で役に立ちます。</p>
<h2>name, address</h2>
<p>両者には様々な比較の特徴量を作成しました。</p>
<h3>公開notebookから取得したもの</h3>
<p>Gestalt Pattern Matching(SequenceMatcher), levenshtein, jaro_winkler (様々なkernelで紹介されていたので割愛します)</p>
<h3>Bag of Words, Bag of Char</h3>
<p>分かち書きし正規化した単語についてTfIdfVectorizerをbinary=True, use_idf=Falseにし、1-gramと2-gramのスパースベクトルを作り類似度を比較しました。<br>
また、日本語や中国語のように文字が意味を持つ場合も考え、文字単位でも同様にスパースベクトルを作成し類似度を比較しました。</p>
<h3>ROUGE-N, ROUGE-L</h3>
<p>ROUGEは文章要約タスクの評価に使われることの多い指標です。文章同士について一致度を測りprecision, recall, f1 valueを出します。<br>
ROUGE-NはN-GRAMを用いて比較、ROUGE-LはLCSを用いて比較を行い、前者はtensorflowのRaggedTensorによる比較、後者はtensorflow-textのrouge_l関数を用いて実装しました。</p>
<h3>コサイン類似度、一次元の距離</h3>
<p>各embeddingについてコサイン類似度と、一次元に落とした後の距離を特徴量として追加しました。</p>
<h2>city/state</h2>
<p>embeddingと同様に、上位2000件のcityとstateをCountEncodingと巡回セールスマン問題で並べた順に特徴量として入れました。</p>
<h2>categories</h2>
<p>By doing metric learning on the category prediction task during pretraining, we also obtained category embeddings.<br>
As with the other features, we used the TSP ordering of the category embeddings and their count encoding as features.</p>
<h2>phone/url</h2>
<p>We normalized them and added only whether or not they match as a feature.</p>
<h1>Candidate generation</h1>
<p>Candidate generation for training and prediction combined several kinds of logic.</p>
<ul>
<li>The 4 nearest neighbors by haversine distance</li>
<li>12 candidates from a multiple regression over the log haversine distance and the cosine similarities of the name, address, categories, etc. embeddings</li>
<li>4 candidates in total from word-level ROUGE-1 similarity, precision, and recall on name</li>
<li>8 candidates in total from character-level ROUGE-2 similarity, precision, and recall on name</li>
<li>The top 4 by name cosine similarity</li>
</ul>
<h1>Prediction</h1>
<p>We used LightGBM. We trained for 2000 iterations on all the data without early stopping, and at prediction time we ran GPU inference with ForestInference.<br>
The hyperparameters are as follows; max_bin was increased only for some categorical features.</p>
<pre><code>max_bin_by_feature = []
for col in feature_cols:
if (col.endswith("kmeans_labels") or col.endswith("pos_tsp_index")) and col.startswith("query"):
print(col)
max_bin_by_feature.append(255*4)
elif (col.endswith("kmeans_labels") or col.endswith("pos_tsp_index")) and col.startswith("candidate"):
print(col)
max_bin_by_feature.append(255*2)
elif col.endswith("_category_frequent_min_tsp") or col.endswith("_category_frequent_max_tsp"):
print(col)
max_bin_by_feature.append(255*2)
else:
max_bin_by_feature.append(255)
lgb_params = {
"objective" : "binary",
"metric" : "auc",
"boosting": 'gbdt',
"max_depth" : -1,
"num_leaves" : 2**12 - 1,
"learning_rate" : 0.1,
"bagging_freq": 1,
"is_unbalance": True,
"max_bin_by_feature": max_bin_by_feature,
"bin_construct_sample_cnt": 200000,
"lambda_l1": 1.,
"lambda_l2": 1.,
"bagging_fraction" : 0.9,
"feature_fraction" : 0.6,
"seed": 0}
</code></pre>
<h1>Post-processing</h1>
<p>Pairs whose predicted value was above a certain threshold were treated as edges; after removing some edges based on the graph's betweenness centrality, we built a graph for each group with UnionFind.<br>
For each group we computed all-pairs shortest distances on the unweighted graph and, for each vertex, predicted up to 256 vertices within a distance of 2.</p>
<h1>UPDATE</h1>
<p>I published <a href="https://www.kaggle.com/code/nadare/7th-place-solution-inference" target="_blank">our inference code</a><br>
The code is dirty, but I hope it helps you</p>
<h1>UPDATE 2</h1>
<p>I published <a href="https://www.kaggle.com/code/nadare/word-tour-experiment/notebook" target="_blank">notebook about word tour</a>.<br>
"Word Tour" is one of the important parts of this solution and is likely to be applicable to other tasks as well. Please check this out and upvote.</p>
<h1>introduction</h1>
<p>We are publishing our solution.<br>
Since the English was generated by Google Translate, some parts may read unnaturally, but please forgive that; I will correct it as much as possible later.</p>
<h1>overview</h1>
<p>We preprocessed "name" and "address", according to each language, and converted them to embedding.<br>
We generated candidates from distance, name similarity, word and character precision, and recall based on multiple regression.<br>
Prediction was made by LightGBM binary classification<br>
In the post-processing, the edge whose predicted value exceeds the threshold is divided into subgraphs by Union Find Tree (Disjoint Set), and then the one whose shortest distance is within 2 is selected.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1958529%2Fc432cb35d881cf8ca05319de19863afb%2F4sq_overview.png?generation=1657809170858208&alt=media" alt="overview_of_4sq_solution"></p>
<p>The jump from 0.933 to 0.951 that everyone is curious about happened when I replaced one of the 5-fold models, which had been early-stopped at 500 iterations, with one trained for 2000 iterations on all the data.</p>
<h1>Preprocessing</h1>
<h2>Natural language preprocessing</h2>
<p>Since name and address were multilingual, I prepared three separately processed versions of the text.</p>
<ol>
<li>Almost raw text</li>
<li>Per-language text with lowercasing, symbol removal, and numerals converted to digits</li>
<li>The text from 2. converted entirely to [0-9a-z&] with unidecode</li>
</ol>
<p>Also, since address had many NaNs, we imputed it by concatenating the addresses of the three nearest neighbors by distance.</p>
<h3>Processing for each language</h3>
<p>Basically, all languages went through the same pipeline of symbol removal -> lowercasing and numeral digitization -> romanization.<br>
Since Japanese, Chinese, and Thai are languages that require word segmentation, they were processed individually.</p>
<p>For Japanese, we used the <a href="https://github.com/WorksApplications/Sudachi" target="_blank">Sudachi</a> tokenizer to obtain word segmentation and readings, and then used <a href="https://github.com/miurahr/pykakasi" target="_blank">Pykakasi</a> to convert them to romaji.<br>
Chinese was segmented by <a href="https://tfhub.dev/google/zh_segmentation/1" target="_blank">zh_segmentation</a> and converted to romaji by unidecode.<br>
Thai was segmented and romanized using <a href="https://pythainlp.github.io/" target="_blank">PyThaiNLP</a>.</p>
<h2>city, state pre-processing</h2>
<p>For city and state, we took the roughly 2000 most frequent values and mapped each record to one of them via its distance neighborhood, turning them into categorical features.<br>
We also converted locations to the top 2000 place names using the <a href="http://www.geonames.org/" target="_blank">cities1000</a> dataset and made them into a category (hereafter referred to as geo_name).</p>
<h2>Creating an embedding</h2>
<p>For text 3 from the previous section, name and address were each split into 32000 subwords using separate SentencePiece models.<br>
On top of this, we learned embeddings with the following three word2vec-based unsupervised pretraining tasks.</p>
<ul>
<li>A skip-gram task on name</li>
<li>A task predicting categories from name, address, city, state, and geo_name</li>
<li>A contrastive learning task that mixes name, address, city, state, geo_name, and categories and identifies its own record via <a href="https://arxiv.org/abs/2104.08821" target="_blank">SimCSE</a></li>
</ul>
<p>When training these, the method of obtaining negative samples was very important.<br>
We were able to train with high quality by generating negative samples with a per-sample random walk, using probabilities based on the negative logarithm of the haversine distance as edge weights over each sample's 128 nearest neighbors.</p>
<h3>Fill in missing values in a category</h3>
<p>Now that we have a good model for category prediction, we filled in the missing values by making predictions for those with blank categories.</p>
<h3>Clustering / one-dimensional</h3>
<p>Each embedding was divided into about 2000 clusters by spherical k-means++.<br>
We also implemented a k-means++ based on the haversine distance and divided the data into about 2000 clusters as well.</p>
<p>After clustering, we embedded the information in one dimension by solving a traveling salesman problem over the distances between vertices.<br>
This is a technique called <a href="https://arxiv.org/abs/2205.01954" target="_blank">Word Tour</a>, which transforms an embedding into a single dimension that GBDT can easily interpret.<br>
Each categorical value was ordered by TSP based on the similarity of each k-means cluster, the similarity of the top-2000 most frequent names and addresses, and the haversine distances of city, state, and geo_name, and the resulting order was used as a continuous value.</p>
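<p>To make the Word Tour idea concrete, here is a minimal sketch that orders embedding vectors with a greedy nearest-neighbor tour and uses each item's position in the tour as a one-dimensional feature. It is a toy approximation, not the authors' exact TSP solver, and the <code>embeddings</code> array is a stand-in for real cluster centroids.</p>
<pre><code># Toy Word-Tour-style ordering: a greedy nearest-neighbor tour over embedding vectors,
# used here as an approximation of the TSP ordering described above.
import numpy as np

embeddings = np.random.rand(500, 64)  # stand-in for real cluster centroids / embeddings

def greedy_tour(vectors, start=0):
    """Return a visiting order approximating a short tour (nearest-neighbor heuristic)."""
    n = len(vectors)
    unvisited = set(range(n))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        last = vectors[order[-1]]
        rest = np.array(sorted(unvisited))
        nxt = int(rest[np.argmin(np.linalg.norm(vectors[rest] - last, axis=1))])
        order.append(nxt)
        unvisited.remove(nxt)
    return np.array(order)

order = greedy_tour(embeddings)            # order[i] = item visited at step i
tsp_index = np.empty(len(order), dtype=int)
tsp_index[order] = np.arange(len(order))   # tsp_index[item] = its 1-D position (GBDT feature)
</code></pre>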
<h1>Feature generation</h1>
<h2>haversine distance</h2>
<p>The logarithm of the distance between two points was taken and used as a feature. The conversion to logarithm will be useful in later candidate generation.</p>
<h2>name, address</h2>
<p>We have created various comparative features for both.</p>
<h3>Obtained from a public notebook</h3>
<p>Gestalt Pattern Matching (SequenceMatcher), levenshtein, jaro_winkler (I will omit it because it was introduced in various kernels)</p>
<h3>Bag of Words, Bag of Char</h3>
<p>For the segmented and normalized words, TfidfVectorizer was set to binary=True, use_idf=False, 1-gram and 2-gram sparse vectors were created, and the similarities were compared.<br>
Also, considering languages such as Japanese and Chinese where characters carry meaning, we created sparse vectors at the character level as well and compared the similarities.</p>
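<p>A small sketch of this kind of binary bag-of-words / bag-of-chars similarity with scikit-learn; the example names are made up and the exact preprocessing differs from the real pipeline.</p>
<pre><code># Row-wise cosine similarity between paired name strings, word-level and char-level.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

names_a = ["starbucks coffee shibuya", "tokyo tower"]
names_b = ["starbucks shibuya store", "tokyo tower observatory"]

def pairwise_cosine(texts_a, texts_b, analyzer="word", ngram_range=(1, 2)):
    vec = TfidfVectorizer(analyzer=analyzer, ngram_range=ngram_range,
                          binary=True, use_idf=False)
    vec.fit(list(texts_a) + list(texts_b))
    a = normalize(vec.transform(texts_a))
    b = normalize(vec.transform(texts_b))
    return np.asarray(a.multiply(b).sum(axis=1)).ravel()  # cosine of each aligned pair

word_sim = pairwise_cosine(names_a, names_b, analyzer="word", ngram_range=(1, 2))
char_sim = pairwise_cosine(names_a, names_b, analyzer="char", ngram_range=(1, 2))
</code></pre>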
<h3>ROUGE-N, ROUGE-L</h3>
<p>ROUGE is a metric often used to evaluate text summarization tasks. It measures the degree of agreement between texts and yields precision, recall, and F1 values.<br>
ROUGE-N compares using n-grams and ROUGE-L compares using the LCS; the former was implemented with TensorFlow's RaggedTensor and the latter with tensorflow-text's rouge_l function.</p>
<h3>Cosine similarity, one-dimensional distance</h3>
<p>For each embedding, the cosine similarity and the distance after projecting to one dimension were added as features.</p>
<h2>city / state</h2>
<p>As with the embeddings, the top 2000 cities and states were added as features via count encoding and the ordering from the traveling salesman problem.</p>
<h2>categories</h2>
<p>We also obtained category embeddings by doing metric learning on the category prediction task during pretraining.<br>
Like the other features, the category embeddings were turned into features via count encoding and the traveling-salesman ordering.</p>
<h2>phone / url</h2>
<p>Only whether or not they match after normalization was added as a feature.</p>
<h1>Candidate generation</h1>
<p>Candidate generation for training and prediction combined several kinds of logic; a minimal sketch of the distance-based nearest-neighbor source follows the list below.</p>
<ul>
<li>The 4 nearest neighbors by haversine distance</li>
<li>12 candidates from a multiple regression over the log haversine distance and the cosine similarities of the name, address, categories, etc. embeddings</li>
<li>4 candidates in total from word-level ROUGE-1 similarity, precision, and recall on name</li>
<li>8 candidates in total from character-level ROUGE-2 similarity, precision, and recall on name</li>
<li>The top 4 by name cosine similarity</li>
</ul>
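<p>As mentioned above the list, here is a hedged sketch of the haversine nearest-neighbor candidate source using scikit-learn's BallTree; the tiny DataFrame and the neighbor count are illustrative only.</p>
<pre><code># Nearest neighbors by haversine distance, plus the log-distance feature used later.
import numpy as np
import pandas as pd
from sklearn.neighbors import BallTree

df = pd.DataFrame({
    "id": ["a", "b", "c", "d", "e"],
    "latitude": [35.68, 35.69, 35.70, 34.69, 34.70],
    "longitude": [139.76, 139.77, 139.75, 135.50, 135.51],
})

coords = np.radians(df[["latitude", "longitude"]].to_numpy())
tree = BallTree(coords, metric="haversine")

k = 4 + 1  # 4 neighbors + the point itself
dist, idx = tree.query(coords, k=k)

pairs = pd.DataFrame({
    "id": np.repeat(df["id"].to_numpy(), k - 1),
    "candidate_id": df["id"].to_numpy()[idx[:, 1:]].ravel(),
    # log of haversine distance in km (Earth radius ~6371 km)
    "log_km": np.log1p(dist[:, 1:].ravel() * 6371.0),
})
</code></pre>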
<h1>predict</h1>
<p>I used LightGBM. Training ran for 2000 iterations on all the data without early stopping, and at prediction time GPU inference was done with ForestInference (a hedged usage sketch follows the hyperparameter listing below).<br>
The hyperparameters are as follows; max_bin was increased only for some categorical features.</p>
<pre><code>max_bin_by_feature = []
for col in feature_cols:
    if (col.endswith("kmeans_labels") or col.endswith("pos_tsp_index")) and col.startswith("query"):
        print(col)
        max_bin_by_feature.append(255*4)
    elif (col.endswith("kmeans_labels") or col.endswith("pos_tsp_index")) and col.startswith("candidate"):
        print(col)
        max_bin_by_feature.append(255*2)
    elif col.endswith("_category_frequent_min_tsp") or col.endswith("_category_frequent_max_tsp"):
        print(col)
        max_bin_by_feature.append(255*2)
    else:
        max_bin_by_feature.append(255)
lgb_params = {
    "objective": "binary",
    "metric": "auc",
    "boosting": "gbdt",
    "max_depth": -1,
    "num_leaves": 2**12 - 1,
    "learning_rate": 0.1,
    "bagging_freq": 1,
    "is_unbalance": True,
    "max_bin_by_feature": max_bin_by_feature,
    "bin_construct_sample_cnt": 200000,
    "lambda_l1": 1.,
    "lambda_l2": 1.,
    "bagging_fraction": 0.9,
    "feature_fraction": 0.6,
    "seed": 0}
</code></pre>
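<p>Below is a hedged sketch of loading a saved LightGBM model into cuML's ForestInference (FIL) for GPU prediction. Argument names and defaults vary across RAPIDS versions, and the file name and feature matrix are assumptions, so treat it as an illustration rather than the exact call used.</p>
<pre><code># Hedged FIL sketch: load a LightGBM text model and predict on GPU.
import numpy as np
from cuml import ForestInference

fil_model = ForestInference.load(
    "model.txt",           # LightGBM model saved with booster.save_model (assumed path)
    model_type="lightgbm",
    output_class=True,     # binary classification
)

X = np.random.rand(1000, 150).astype(np.float32)  # FIL works with float32 features
proba = fil_model.predict_proba(X)                # match probabilities computed on the GPU
</code></pre>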
<h1>Post-processing</h1>
<p>Pairs whose predicted value was above a certain threshold were treated as edges; after removing some edges based on the betweenness centrality of the graph, a graph was constructed for each group with UnionFind.<br>
For each group, all-pairs shortest distances on the unweighted graph were computed, and for each vertex we predicted up to 256 vertices within a distance of 2.</p>
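<p>A simplified sketch of this post-processing with networkx; the pair predictions are made up and the pruning fraction and threshold are illustrative, not the production values.</p>
<pre><code># Threshold pair predictions into edges, prune a few high-betweenness edges,
# then emit all vertices within graph distance 2 of each vertex.
import networkx as nx
import pandas as pd

pairs = pd.DataFrame({
    "id": ["a", "a", "b", "c"],
    "candidate_id": ["b", "c", "c", "d"],
    "pred": [0.9, 0.2, 0.8, 0.7],
})

G = nx.Graph()
G.add_nodes_from(set(pairs["id"]) | set(pairs["candidate_id"]))
kept = pairs[pairs["pred"] > 0.5]
G.add_edges_from(zip(kept["id"], kept["candidate_id"]))

# Drop a small fraction of edges with the highest betweenness centrality.
eb = nx.edge_betweenness_centrality(G)
for edge, _ in sorted(eb.items(), key=lambda kv: -kv[1])[: len(eb) // 10]:
    G.remove_edge(*edge)

# For every vertex, predict all vertices within unweighted distance 2
# (the solution additionally caps this at 256 vertices per query).
matches = {
    node: " ".join(sorted(nx.single_source_shortest_path_length(G, node, cutoff=2)))
    for node in G.nodes
}
</code></pre>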
Foursquare - Location Matching | 8th place solution [SBERT + LightGBM] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello.</p>
<p>Here is the solution that got us to 8th place. <br>
I apologize if the English is incorrect because I am using an automatic translation. <br>
Also, this is my first time posting in a discussion forum so I may not be able to summarize it well. If you have any questions, please comment.</p>
<h3>Overview</h3>
<p>Like many other competitors, our method consists of three steps. </p>
<ol>
<li>Creation of candidate matching pairs. </li>
<li>Binary classification to determine if the locations are identical or not. </li>
<li>Post-processing</li>
</ol>
<p>The basic process is copied from <a href="https://www.kaggle.com/code/guoyonfan/binary-lgb-baseline-0-834" target="_blank">@guoyonfan's baseline</a>. (Thank you for the great baseline.)</p>
<p>Below are some of the points we devised for each step</p>
<h3>Creation of candidate matching pairs</h3>
<p>In addition to the distance-based k-nearest-neighbor search (distance kNN) used in the baseline, we added a k-nearest-neighbor search (embedding kNN) over embeddings created with a fine-tuned SentenceBERT.</p>
<p>The k parameters used for the final submission are 10 for distance kNN and 20 for embedding kNN.<br>
The ideal IoU simulated on the training data was 0.986.</p>
<p>The embedding kNN is very important and played a major role as a feature in the binary classification described below.</p>
<h4>SentenceBERT embedding details</h4>
<p>The embedding model was fine-tuned based on <code>sentence-transformers/all-MiniLM-L12-v2</code>. </p>
<p>The key points that were important to get a good performance with the embedding kNN are</p>
<ul>
<li>Does not use multilingual model<ul>
<li>In fact, we also use multilingual models to create features for the binary classification step, but the English-based models are by far the most important.</li></ul></li>
<li>Loss selection</li>
<li>Two steps fine-tuning</li>
</ul>
<h5>Does not use a multilingual model</h5>
<p>The input of the model is a concatenated string of name, address, city, state, country, category, and latlon. <br>
Since the base model is not a multilingual model, names and addresses are normalized by pykakasi for Japanese and by unidecode for other languages. </p>
<p>The reason we did not use a multilingual model is simply that the English-based model with normalized inputs performs better; we believe this is because the multilingual model has a very large number of parameters and the amount of data is insufficient for fine-tuning it. <br>
Usually, a large amount of training data is required for metric learning, but only about 1.1 million locations were available for this competition, which is hardly enough data to obtain invariant representations even for a single language, let alone from data expressed in dozens of languages. </p>
<p>Another reason why we could not successfully train multilingual models may be that we had to reduce the size of the mini-batches that could be set during training due to the memory limitation of the GPU. </p>
<p>The models were trained on Google Colab using a single GPU, and if more resources and time were available, the multilingual models might perform better. </p>
<h5>Loss Selection</h5>
<p>For the fine-tuning loss function, we used the <a href="https://www.sbert.net/docs/package_reference/losses.html" target="_blank">Contrastive Loss</a> implemented in the sentence-transformers library. </p>
<p>We chose this loss because it performed best compared to TripletLoss and CosineSimilarityLoss.</p>
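<p>For reference, a minimal sketch of fine-tuning a SentenceTransformer with ContrastiveLoss, following the sentence-transformers documentation; the pair texts here are invented, whereas the real training data came from the kNN pair sets described above.</p>
<pre><code># label=1 for pairs from the same POI, label=0 for non-matching (hard negative) pairs.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

train_examples = [
    InputExample(texts=["starbucks shibuya [SEP] tokyo", "starbucks coffee shibuya [SEP] tokyo"], label=1),
    InputExample(texts=["starbucks shibuya [SEP] tokyo", "tully's coffee shibuya [SEP] tokyo"], label=0),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.ContrastiveLoss(model=model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
)
</code></pre>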
<h5>2-step fine-tuning</h5>
<p>To obtain better performance, two stages of fine-tuning were performed. </p>
<p>In the first stage, fine-tuning is performed using the pair set created by distance kNN as training data. <br>
In the second stage, fine-tuning is performed on the training data, using the pair set created by embedding kNN using the model learned in the first stage, in addition to the distance kNN. </p>
<p>We believe the second stage of fine-tuning helped because the locations with similar names and addresses added by the embedding kNN acted as hard negatives in the training data. </p>
<p>Comparing the first and second stage models, the ideal IoU(recall) improved from 0.97 to 0.986. </p>
<h3>Binary classification to determine if they are the same location</h3>
<p>For the binary classification, we used 7 LightGBMs created by kFold. <br>
Below are the features that were important to us and our training efforts. </p>
<h4>Features that were effective for prediction</h4>
<p>In addition to the features used in the baseline, we added the following</p>
<ul>
<li>cosine similarity of the embedding vectors created by the three types of SentenceBERT<ul>
<li>1st : multi-lingual model with name, address, latlon, etc. as input + fine-tuning with triplet loss</li>
<li>2nd : same as embedding kNN</li>
<li>3rd: multi-lingual model with only names as input + fine-tuning with contrastive loss</li></ul></li>
<li>Country and category label encoding<ul>
<li>Label encoding of country and category associated with the id that is the key of the matching pair</li>
<li>To avoid overfitting, only labels with high frequency in train.csv are kept, and the rest are replaced with "other".</li></ul></li>
<li>Land use characteristics by area<ul>
<li>The following statistics were created using latlon rounded to two decimal places as the key<ul>
<li>Number of POIs included</li>
<li>A 5-dimensional PCA was applied to the frequency distribution of the category labels.</li></ul></li>
<li>The above statistic was combined with the latlon of one id of the matching pair</li></ul></li>
<li>Normalized name tfidf similarity<ul>
<li>Names normalized with unidecode and pykakasi were converted to character-level tfidf vectors and their cosine similarity was calculated</li></ul></li>
</ul>
<p>feature importance is as shown in the attached image.</p>
<h4>Learning LightGBM</h4>
<p>LightGBM was trained by optimizing logloss and applying early stopping. <br>
The training data were the same pairs created by distance kNN and embedding kNN. </p>
<p>One of the major findings is that the larger the number of ids and pairs in the training data, the larger the early-stopping iteration becomes and the better the score gets. </p>
<p>Initially, the public LB was 0.908 after training with 550,000 id, but it improved to 0.928 with 700,000 id, and to 0.948 with 1.1 million id.</p>
<p>In the final submission, seven models were trained using 25 million pairs created from 1.1 million locations. <br>
The number of stopping iterations for each model was about 1000. <br>
(Unfortunately, due to memory limitations, training did not fit in the Kaggle environment, so we used Google Colab Pro+ for training.)</p>
<p>Finally, the model became so large that inference on the CPU took too long, so we used Rapids to speed up the process when submitting. (The inference time was reduced by a factor of 40, which was very impressive.)</p>
<h3>Post-processing</h3>
<p>Omitted because no changes have been made from the baseline.</p>
<h3>Conclusion</h3>
<p>This was the first competition that I seriously worked on; there were many points that required a lot of trial and error, and I learned a lot about how to train models and how to use resources more efficiently. <br>
I would like to thank the competition organizers and Kaggle, a great platform, for giving me this opportunity. </p>
<p>Last but not least, thank you to my delightful teammates. </p> |
Foursquare - Location Matching | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>setting</h2>
<p>Evaluated locally with 2-fold CV,<br>
GroupKFold by POI</p>
<h2>1st : candidate selection</h2>
<ul>
<li>selected 400 candidate for each id<ul>
<li>knn with latlon: 350 candidate</li>
<li>knn with latlon + name tfidf: 50 candidate</li></ul></li>
<li>Max IoU: 0.994, 420M candidates</li>
</ul>
<h2>2nd : modelling light model</h2>
<ul>
<li>catboost with tfidf similarity <ul>
<li>features (40 features)<ul>
<li>text similarity with tfidf</li>
<li>raw & diff int of text</li>
<li>raw & diff latlon</li></ul></li>
<li>threshold is 0.005</li>
<li>Max IoU: 0.987, 3M candidates</li></ul></li>
</ul>
<h2>3rd : modelling</h2>
<ul>
<li>catboost and lightgbm<ul>
<li>catboost with text encoding each text<ul>
<li>0.010 improved with text encoding</li></ul></li>
<li>features (2nd40 + 100 features)</li>
<li>text similarities with gesh, LCS, leven, jaccard… 12 methods</li>
<li>count ids, match ids</li>
<li>target encoding for each pair of ['name', 'name_match'], ['categories', 'categories_match'] (2fold)</li></ul></li>
</ul>
<h2>4th : post_process</h2>
<ul>
<li>compute each node's prediction as the average of its neighbors' predictions, then order by prediction (a small sketch follows this list)<ul>
<li>threshold: 0.4</li></ul></li>
</ul>
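<p>A hedged sketch of one possible reading of this post-processing; column names and data are hypothetical and this is not the actual code.</p>
<pre><code># Smooth each pair's prediction with the mean prediction of its endpoints' neighborhoods,
# sort by the smoothed score, and keep pairs above 0.4.
import pandas as pd

pairs = pd.DataFrame({
    "id": ["a", "a", "b", "c"],
    "candidate_id": ["b", "c", "c", "d"],
    "pred": [0.9, 0.3, 0.6, 0.2],
})

node_mean = (
    pd.concat([
        pairs[["id", "pred"]],
        pairs[["candidate_id", "pred"]].rename(columns={"candidate_id": "id"}),
    ])
    .groupby("id")["pred"].mean()
)

pairs["smoothed"] = (
    pairs["pred"]
    + pairs["id"].map(node_mean)
    + pairs["candidate_id"].map(node_mean)
) / 3.0

matches = pairs.sort_values("smoothed", ascending=False).query("smoothed > 0.4")
</code></pre>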
<h2>score</h2>
<ul>
<li>Final CV: 0.905, 2fold CV</li>
<li>LB: <ul>
<li>catboost(cb): 0.925</li>
<li>cb + post_process(pp): 0.930</li>
<li>cb + lgb + pp: 0.933</li>
<li>cb + lgb + pp + training all data(1fold): 0.943</li>
<li>cb + lgb + pp + all data + ensemble 5models 0.945</li>
<li>cb + lgb + pp + all data + ensemble + increase iterations(over fit): 0.948</li></ul></li>
</ul>
<p>Thank you!!</p>
<p>Thank you for the comment. And congratulations on winning!!<br>
We used about 500GB-1TB of memory in training, and 2-3 hours were spent in the 2nd inference phase (including feature creation).</p>
Foursquare - Location Matching | A few notes on #16 solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I didn't keep track, but it wouldn't surprise me if it was more than 100 hours. The full pipeline took a long time to run, and then the submissions took even longer, so there were long periods where I was working on other stuff while I waited for some code to run. How about you?</p>
<p>Thanks. Nice to see another Swiss Kaggler here. </p>
<p>First, thanks to Kaggle and Foursquare for providing a very interesting challenge. The leakage, though, is rather frustrating. This would have been an easy leak to avoid. I can't imagine how this slipped through. Looking for leaks is a normal part of Kaggle competitions, but when the instructions explicitly state that a particular leak does not exist, then it really shouldn't be there. </p>
<p>Congrats to the winners, especially <a href="https://www.kaggle.com/psi" target="_blank">@psi</a> for his late surge and winning solution (if we don't include explicit use of the leak). This was a tough competition that required a lot of ingenuity to do well.</p>
<p>My big take-aways from this competition are:</p>
<ol>
<li>RAPIDS is a very mature and useful library. This was my first time using RAPIDS and I was really impressed with how much sklearn and pandas functionality can be run on the GPU with virtually no code changes. The speed-up was impressive, especially in cases where the non-GPU version uses only 1 CPU. Also, many thanks to <a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a> for showing how to do inference using ForestInference (<a href="https://www.kaggle.com/competitions/foursquare-location-matching/discussion/328242)" target="_blank">https://www.kaggle.com/competitions/foursquare-location-matching/discussion/328242)</a>.</li>
<li>Old-school nlp techniques such as text normalization, tfidf, keyword extraction, etc. are still relevant in isolated cases. In this competition, most of the text were names in multiple languages, which reduced the usefulness of pre-trained text models. </li>
<li>As ML toolsets mature, ML is becoming more and more just a software engineering specialty. In this competition especially, setting up the problem correctly to process the data in chunks using minimal resources was just as important as the specific ML algorithms employed.</li>
</ol>
<h1>Big question about this competition</h1>
<p>Why are most of the top teams from Japan? Is there something about the data that gave teams from Japan an advantage?</p>
<p>Below is a brief outline of my solution (does not explicitly use the leak).</p>
<h2>Training Pipeline</h2>
<ol>
<li>Preprocess training data:<br>
a) Normalize text fields (lowercase, remove punctuation, normalize common spelling variations, convert to ASCII)<br>
b) Select frequent name keywords and encode as features<br>
c) Separate multiple categories and encode as single categories.</li>
<li>Randomly divide training set into two equal parts, grouped by point_of_interest, producing two sets -- train0 and train1.</li>
<li>Process the two training sets separately, as follows:<br>
a. Get term frequency vectors for text fields<br>
b. For each id, find 200 closest neighbors by lat/lon.<br>
c. Generate features in groups of 10 neighbors (e.g., first 1-10 closest neighbors, then 11-20, 21-30, etc.). Each id is paired with the 10 neighbors in the group to generate pairwise features. Altogether, about 50 features are generated, the most important of which are:<br>
1) Cosine similarities between text fields (especially normalized names, name_nonkey, and address)<br>
2) Distance rank (e.g., 2nd closest neighbor vs. 150th closest)<br>
3) Prev and Next distance (e.g., if we're processing the 5th closest neighbor, what are the distances of the 4th and 6th neighbors)<br>
4) Jaccard similarity between names<br>
5) Raw latitude/longitude<br>
6) Encoded categories<br>
7) number of matching name words in pair <br>
8) length in characters of longest name<br>
9) Min and max ordinal values of characters in name (proxy for character set / language)<br>
10) Number of category matches<br>
11) Encoded country</li>
<li>Again, processing in groups of 10 neighbors, train shallow LGB models to predict pair matches.</li>
<li>Use the match predictions from #4 to select the most likely matches. The goal here is to reduce the candidate universe and get a more balanced training set without throwing out too many positives. For each group of 10 nearest neighbors, we train 2 shallow Lightgbm models -- one for train0 and one for train1. We use the model from one set to predict matches for the other set so that we get oof predictions for both sets. We save the 40 models (2 each for 20 groups of 10 neighbors) for inference.</li>
<li>Using the predictions from step 5, we set thresholds to filter out candidates that have almost no chance of being matches. I found that setting the threshold at about 30-1 (positives to negatives) worked well. It captured about 95% of the positives (the ones in the 200 nearest-neighbor pairs) while eliminating about 90% of the negatives. The threshold values were set using different percentiles in each group of 10. For example, in the neighbor group 1-10, we keep the top 15% of pairs, whereas in the group containing neighbors 190-199, we keep only the top 0.05% of pairs.</li>
<li>Use two additional heuristics to collect more candidates (a minimal sketch of heuristic (a) follows this list):<br>
a) Round latitude and longitude values and then merge by the rounded lat/lon and specific fields, such as name, address, phone, url, etc. Rounding the lat/lon values is equivalent to dividing the map into quadrants and assigning the same lat/lon values to all ids in the quadrant. Then if we merge by name, for example, we're finding all pairs sharing the same name within each quadrant.<br>
b) Merge by city and name.</li>
<li>Combine all candidates from steps 6-7. Recall that steps 3-7 are applied separately to the two training sets generated in step #2, so in this step we also combine the candidates from those 2 sets. This gives us our master set of training candidates -- about 20m records, with a negative-positive ratio of about 15-1. Of the potential 1.9m positive pairs in the training set, we capture about 70%. </li>
<li>Train Lightgbm and Xgboost models and save the models for inference.</li>
<li>Repeat steps 2-9 five times to get different lgb/xgb models.</li>
</ol>
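<p>A minimal sketch of heuristic 7a from the list above -- rounding lat/lon into quadrants and merging on the rounded coordinates plus a field such as name. The rounding precision and column names are illustrative, not the values actually used.</p>
<pre><code># Quadrant blocking: records sharing a rounded lat/lon cell and the same name become a pair.
import pandas as pd

df = pd.DataFrame({
    "id": ["a", "b", "c"],
    "name": ["cafe luna", "cafe luna", "city gym"],
    "latitude": [40.7812, 40.7808, 40.7301],
    "longitude": [-73.9665, -73.9661, -73.9950],
})

df["lat_r"] = df["latitude"].round(2)
df["lon_r"] = df["longitude"].round(2)

cands = df.merge(df, on=["lat_r", "lon_r", "name"], suffixes=("_1", "_2"))
cands = cands[cands["id_1"] != cands["id_2"]][["id_1", "id_2", "name"]]
</code></pre>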
<h2>Inference pipeline</h2>
<ol>
<li>Apply steps 1-10 (excluding step #2) to the test set. For the modeling steps, we do inference only using the models generated from the train set.</li>
<li>Post process predictions to add two types of pairs that may be missing from the candidate set:<br>
a) inverses (e.g., if AB is a match, then BA should also be a match)<br>
b) graph traversal pairs (e.g., if AB and AC are pairs, then BC should also be a pair). For traversal pairs, we use a threshold of 0.65, meaning that subgraphs are formed only from pairs whose probability of being a match is greater than 0.65 (see the sketch after this list).</li>
</ol>
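<p>A hedged sketch of the graph-traversal step (2b) above; the predictions are made up and this is not the exact implementation. Adding the inverse pairs (2a) falls out automatically because the components are undirected.</p>
<pre><code># Form subgraphs from pairs above the 0.65 threshold, then add all implied pairs
# within each connected component, in both directions.
from itertools import combinations
import networkx as nx
import pandas as pd

preds = pd.DataFrame({
    "id_1": ["a", "a", "c"],
    "id_2": ["b", "c", "d"],
    "prob": [0.9, 0.7, 0.5],
})

G = nx.Graph()
G.add_edges_from(preds.loc[preds["prob"] > 0.65, ["id_1", "id_2"]].itertuples(index=False))

extra_pairs = []
for comp in nx.connected_components(G):
    for u, v in combinations(sorted(comp), 2):
        extra_pairs.extend([(u, v), (v, u)])  # e.g. (a,b),(b,a),(a,c),(c,a),(b,c),(c,b)
</code></pre>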
<h2>CV/LB Agreement</h2>
<p>CV scoring was performed on a holdout set of 20% of the candidates, grouped by point_of_interest. From the beginning, the LB scores were slightly higher than the CV scores. By the end of the competition, they were considerably higher:</p>
<p>CV: 0.908 (single model)<br>
LB: 0.935 (ensemble of 10 models)</p>
<p>I expected some increase due to the ensembling but this was more than expected. We now know that this is due to the leak. Essentially, the difference in CV/LB scores represents how much our models overfit. So, it's true, as some have written, that we all used the leak implicitly, but that's still a lot different from using it explicitly.</p> |
Foursquare - Location Matching | Simple Silver Solution. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Training code is <a href="https://github.com/dongkyuk/Kaggle-Foursquare" target="_blank">here</a> if anyone is interested </p>
<p>Here is my solution for this competition. The code is <a href="https://www.kaggle.com/code/dongkyunkim/simple-silver-solution-foursquare-inference" target="_blank">here</a></p>
<p>It received a silver medal without any ensembling or complicated feature engineering.</p>
<p>The pipeline is simple:</p>
<ol>
<li>Train xlmroberta with ArcFace Loss</li>
<li>Use the cos sims from the xlmroberta + coordinate distance to extract match candidates</li>
<li>Add features (cos sim, distance, lcs, tfidf, etc…)</li>
<li>Train a lightgbm model (with flaml hyperparameter optimization) to select the correct candidates as a binary classification task.</li>
<li>Do 2-3 on the test data and inference with lgbm</li>
</ol>
<p>Okay, so I simply concatenate all string columns<br>
into one large sentence, infer embeddings with the trained xlm-roberta, then do k-nearest-neighbor retrieval with k=10 using cosine similarity. I also do k-nearest-neighbor retrieval on the longitude and latitude. The two combined give my final candidates; a rough sketch is below.</p>
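<p>A rough sketch of that candidate step -- cosine kNN over the transformer embeddings plus kNN over the coordinates -- using random stand-in data rather than real model outputs:</p>
<pre><code># Union of embedding neighbors and coordinate neighbors becomes the candidate pair set.
import numpy as np
from sklearn.neighbors import NearestNeighbors

n = 1000
emb = np.random.rand(n, 384).astype(np.float32)   # stand-in for xlm-roberta/ArcFace embeddings
coords = np.random.rand(n, 2) * [180.0, 360.0] - [90.0, 180.0]  # stand-in lat/lon

nn_emb = NearestNeighbors(n_neighbors=10, metric="cosine").fit(emb)
_, emb_idx = nn_emb.kneighbors(emb)

nn_geo = NearestNeighbors(n_neighbors=10).fit(coords)
_, geo_idx = nn_geo.kneighbors(coords)

cand = {(i, int(j))
        for i in range(n)
        for j in np.concatenate([emb_idx[i], geo_idx[i]])
        if j != i}
</code></pre>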
Foursquare - Location Matching | [21st place solution] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Foursquare - Location Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I trained as shown in SBERT training overview, specifically I used the simple ContrastiveLoss. Moreover, I was not trying to make it converge, but just do an epoch of pretrain on domain text. Check here --> <a href="https://www.sbert.net/docs/training/overview.html" target="_blank">https://www.sbert.net/docs/training/overview.html</a></p>
<p>I didn't know about this, thanks! </p>
<p>Was fun competing together! All the best. 💪</p>
<p>First of all thanks to Kaggle and the host for this competition. I enjoyed it quite a bit as this competition needed a lot of memory management (OOM is painful), NLP and GBDT related pipelines.</p>
<p>It was fun working with <a href="https://www.kaggle.com/locbaop" target="_blank">@locbaop</a> who put in lots of hard work in this. </p>
<p>Our solution is reasonably simple:</p>
<p><strong>Pre-process</strong><br>
We preprocess the text by Pykakashi for JP text and unidecode+lower for other countries. </p>
<p><strong>Blocking</strong><br>
At the start I was mostly iterating using distance measure, and that kept us in 84-85 LB range. Then we tried TFIDF - 3 ngrams+char_analyzer and it immediately gave boost.</p>
<p>Most of our effort was spend on pretraining the SBERT models, in total we trained <strong>5 SBERT models</strong> and used the concat embeddings in KNN for blocking. We could get a MAX IOU of > 99 with concatenation of <code>paraphrase-xlm-r-multilingual-v1</code> & <code>msmarco-bert-base-dot-v5</code> embeddings. We train SBERT for 1 epoch on full data.</p>
<p>In total we created 20 neighs per ID.</p>
<p><strong>Binary Classification Features & Model</strong><br>
LightGBM model with heavy regularization.</p>
<p>We considered name, address, category, and another one based on ceil and floor of latitude/longitude, like below</p>
<p><code>df['text'] = df['name'] + '[SEP]' + df['address'] + '[SEP]' + df['categories']+ '[SEP]'+ \
lat_int_floor + '[SEP]' +long_int_floor + '[SEP]'+ lat_int_ceil + '[SEP]' +long_int_ceil</code></p>
<p>☝️ This is also the text on which SBERT is trained. </p>
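<p>For clarity, here is a hedged guess at how the <code>lat_int_floor</code> / <code>lat_int_ceil</code> pieces of that text could be built; the actual feature engineering may have differed.</p>
<pre><code># Coarse lat/lon tokens from floor/ceil, concatenated into the SBERT input text.
import numpy as np
import pandas as pd

df = pd.DataFrame({"latitude": [35.68, 40.74], "longitude": [139.76, -73.99],
                   "name": ["x", "y"], "address": ["", ""], "categories": ["", ""]})

lat_int_floor = np.floor(df["latitude"]).astype(int).astype(str)
long_int_floor = np.floor(df["longitude"]).astype(int).astype(str)
lat_int_ceil = np.ceil(df["latitude"]).astype(int).astype(str)
long_int_ceil = np.ceil(df["longitude"]).astype(int).astype(str)

df["text"] = (df["name"] + "[SEP]" + df["address"] + "[SEP]" + df["categories"] + "[SEP]"
              + lat_int_floor + "[SEP]" + long_int_floor + "[SEP]"
              + lat_int_ceil + "[SEP]" + long_int_ceil)
</code></pre>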
<p>We used the embedding similarity from all 5 SBERT models in training, giving us a local CV of ~94.3 on 20% validation data.</p>
<p>I also added counts as a last try and saw immediate boost to local logloss and CV improving to 94.5. Unfortunately I didn't have enough time to do full training and submit.</p>
<p><strong>Post Processing</strong><br>
Same as public notebooks.</p>
<p><strong>Inference</strong><br>
We used ForestInference from cuML for inference. I think there is some accuracy drop due to down-casting (float64 to float32), but I was really impressed with cuML's capabilities on GPU. Will explore this more.</p>
<p><strong>What did not work</strong></p>
<ul>
<li>Couldn't get mDebertaV3 to work, lots of NAN weights in model state dict (lol)</li>
<li>Adding City, State for SBERT training</li>
<li>XGB didn't work for us</li>
</ul>
<p>Thats it, goes without saying we couldn't figure out the leak! </p>
<p>Thanks,</p>
<p>I arrived at SBERT by reading through similar cases; my hypothesis was that sentence models should work better here since I was combining multiple features into a sentence. I didn't try other LMs except mDeberta, which failed to train properly due to fp16 bugs (as per Psi). </p>
Freesound Audio Tagging 2019 | 13rd solution summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi,</p>
<p>As I have made a slide about my solution, I would like to post it here. Thank organizer and staffs for this great competition.</p>
<p>Here is the brief summary.</p>
<p>Inference:</p>
<ol>
<li>Raw audio to log mel.</li>
<li>Apply a sliding window to the log mel and predict each window with the CNN.</li>
<li>Feed the temporal predictions from step 2 to an RNN.</li>
<li>Feed the temporal predictions from step 2 to Rakeld + ExtraTrees.</li>
<li>Average step3 and step4.</li>
</ol>
<p>Training:</p>
<ol>
<li>Train CNN by using only curated data.</li>
<li>Apply a sliding window to the log mel of the noisy dataset and predict each window with the CNN.</li>
<li>Feed the temporal predictions from step 2 to an RNN. This gives pseudo labels for the noisy dataset.</li>
<li>Average the pseudo labels with the original noisy labels at 0.75:0.25 (see the sketch after this list).</li>
<li>Train CNN by using all data.</li>
<li>Train some models to follow the inference steps.</li>
</ol>
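<p>A small sketch of the pseudo-label blend in step 4; the arrays are placeholders of shape (n_noisy_samples, n_classes) standing in for the real RNN outputs and noisy labels.</p>
<pre><code># Blend RNN pseudo labels with the original noisy labels at 0.75:0.25.
import numpy as np

pseudo_labels = np.random.rand(100, 80)                            # RNN sigmoid outputs (stand-in)
noisy_labels = np.random.randint(0, 2, (100, 80)).astype(float)    # original multi-hot labels

blended = 0.75 * pseudo_labels + 0.25 * noisy_labels  # soft targets used to retrain the CNN
</code></pre>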
<p>Details in Training CNN:</p>
<ul>
<li>Densenet 121</li>
<li>Spec Aug</li>
<li>Mixup</li>
<li>CLR + SWA + Snapshot ensemble.</li>
</ul>
<p>I intensively tried SSL approaches such as ICT, but they did not work.</p>
<p><a href="/daisukelab">@daisukelab</a> It has not so much reason. I tried to search around 1.0, 0.9, 0.8, 0.7.</p>
<p><a href="/jamesrequa">@jamesrequa</a></p>
<blockquote>
<p>did you use the penultimate layer of CNN</p>
</blockquote>
<p>No. I used the probabilities predicted by the CNN. I tried some other inputs, but it was best to just feed the temporal probabilities.</p>
<p>After training the CNN, I ran sliding-window prediction and saved the result, which has shape (n_data, n_windows, n_class), to a file.</p>
<blockquote>
<p>did you do this at fixed intervals or random crops</p>
</blockquote>
<p>I used a fixed step size of 21. The window size is 224, which is equivalent to about 5 seconds. If I use a smaller step size and more windows, the result gets better, but the inference time is longer.</p>
<p>I hope you get a good position in the final stage. Good luck!</p>
Freesound Audio Tagging 2019 | 13th place solution [public LB] - brief summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The second year to take part in freesound audio tagging, thanks for the organization for the interesting and close to reality audio competition. Thanks for <a href="/daisukelab">@daisukelab</a> <a href="/mhiro2">@mhiro2</a> <a href="/ceshine">@ceshine</a> <a href="/jihangz">@jihangz</a> , and many kagglers for the starter kernels and discussions.</p>
<p>An earlier single-model kernel without CV is below:
<a href="https://www.kaggle.com/sailorwei/fat2019-2d-cnn-with-mixup-lb-0-673">https://www.kaggle.com/sailorwei/fat2019-2d-cnn-with-mixup-lb-0-673</a>
Discussion about single model LB:
<a href="https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/91881#latest-547036">https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/91881#latest-547036</a>
<a href="https://github.com/sailor88128/dcase2019-task2">https://github.com/sailor88128/dcase2019-task2</a></p>
<p>Final submission:
- On curated dataset, 5 fold cv, inception V3, single model LB is 0.691 and 5 model average is 0.713;
- On curated dataset and noisy dataset, with specific loss function, 5 fold cv, 8 layer cnn, 5 model average 0.678;
- Geometric average of these two models, 0.73+.</p>
<p>Techniques we used:
1. PCEN spectrogram; we did not try many parameters, so we are not sure whether it helped in this competition;
2. mixup, about a 0.01 increase on the LB (a minimal sketch follows this list);
3. SpecAugment showed no increase, while RandomResizedCrop, though hard to explain for audio, gave a better LB increase;
4. CV and TTA, about a 0.05 increase for the primary single model;
5. MixMatch got a much lower LB with the same model; we need to check the code;
6. LR schedules: CyclicLR and cosine annealing did not show much difference.</p>
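<p>As referenced in item 2 above, here is a minimal mixup sketch for a batch of log-mel spectrograms and multi-hot labels; the alpha value and the exact placement in the pipeline are assumptions, not necessarily what was used.</p>
<pre><code># Classic mixup: convex combination of shuffled samples and their labels.
import numpy as np

def mixup(x, y, alpha=0.4):
    lam = np.random.beta(alpha, alpha)
    perm = np.random.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix

x = np.random.rand(32, 128, 128, 1)                              # batch of spectrogram "images"
y = np.random.randint(0, 2, (32, 80)).astype(float)              # multi-hot labels
x_mix, y_mix = mixup(x, y)
</code></pre>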
<p>Others:
1. While many kagglers said shallow CNNs work well, we did not get an effective shallow CNN. We tried AlexNet and VGG, all with low local lwlrap :(
2. Different CNNs were used at first, but we did not save them well for the final model ensemble;
3. The noisy-label dataset was not used well. We just used a weighted loss function to train on the noisy-labeled data together with the curated dataset. Our MixMatch implementation for this task may be wrong in detail and should be checked.</p>
<p>I'm glad the idea was helpful for you. TTA for validation was used and local lwlrap increased, but the LB remained almost the same.
Looking forward to your solution.</p>
<p>Yes, we tried both geometric and arithmetic means; the geometric mean gave a better LB for us, about 0.002 higher. I am not sure why either.</p>
Freesound Audio Tagging 2019 | 1st place solution released on a Github | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone, thanks for the competition.</p>
<p>Link to solution <a href="https://github.com/lRomul/argus-freesound">https://github.com/lRomul/argus-freesound</a></p>
<p>Key points:</p>
<ul>
<li>CNN model with attention, skip connections and auxiliary classifiers </li>
<li>SpecAugment, Mixup augmentations </li>
<li>Ensemble with MLP second-level model and geometric mean blending (a small sketch follows this list)</li>
</ul>
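<p>A small sketch of geometric mean blending of per-model prediction matrices; the arrays are stand-ins for real model outputs, and clipping guards against log(0).</p>
<pre><code># Geometric mean across models: exp(mean(log(p))) over the model axis.
import numpy as np

preds = np.random.rand(3, 1000, 80)   # stand-in for three models' sigmoid outputs
blend = np.exp(np.log(np.clip(preds, 1e-7, 1.0)).mean(axis=0))
</code></pre>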
<p>More details you can find in the repository <a href="https://github.com/lRomul/argus-freesound/blob/master/README.md">README</a>.</p>
<p>Yes, but only train data</p>
<p>lvl1 models in the final submission are in the sheet "Train 3". Information about ensembling submissions is in the sheet "Stacking".</p> |