Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
CommonLit - Evaluate Student Summaries | 217th Place Solution for the CommonLit - Evaluate Student Summaries Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the Learning Agency Lab and Kaggle for organizing and running this competition. I would also like to thank all the competitors who shared their views and notebooks.<br>
Congratulations to all the winners! Although I am not one of the winners, I would like to share with you what I did and the results. I am interested in NLP but have never had good results in competitions. So I started to study in public notebooks. All I could finally do was change a few parameters due to my lack of ability and the GPU's time limit, but the private score improved very slightly.</p>
<h2><strong>Context</strong></h2>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fbe5c9aaa507498ae435000694059aa8e%2FCommonLit%202023_0.png?generation=1697846085252308&alt=media" alt=""></p>
<h2><strong>Overview of the Approach</strong></h2>
<p>Since I did not have the ability to create my own notebook, I started by modifying the excellent notebook [<a href="https://www.kaggle.com/code/cody11null/tuned-debertav3-lgbm-autocorrect?scriptVersionId=141032742" target="_blank">1</a>] by <a href="https://www.kaggle.com/cody11null" target="_blank">Cody_Null</a>. The main modification was to change the batch size.</p>
<h2><strong>Details of the submission</strong></h2>
<p>I changed all random seed values to 0 just because I like 0, although it may not make sense. Then I decided to try changing the batch size. First, the batch size was increased from the original 12 to 16, but this ran out of memory. Second, the batch size was decreased to 8, but the public score worsened from 0.45021 to 0.46446. Third, the batch size was changed to 10, and the public score worsened further to 0.46992. So I stopped.<br>
Looking only at the public scores, I thought there was no improvement, but the private scores behaved differently. When the batch size was decreased to 8, despite the worse public score (0.45021 -> 0.46446), the private score improved very slightly (0.48972 -> 0.48483).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fd426b4cdac7276d987b7644faf1c0d61%2FCommonLit2023_1.png?generation=1697846131832554&alt=media" alt=""></p>
<p>I always tend to worry only about the public scores. I want to have a firm standard for myself.</p>
<h2><strong>Sources</strong></h2>
<p>[1] <a href="https://www.kaggle.com/cody11null" target="_blank">Cody_Null</a>, <a href="https://www.kaggle.com/code/cody11null/tuned-debertav3-lgbm-autocorrect?scriptVersionId=141032742" target="_blank">Tuned Debertav3+LGBM+autocorrect</a>.</p> |
CommonLit - Evaluate Student Summaries | 21st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the organizers for hosting such an interesting competition. Also, thanks to all the Kagglers who shared so much useful information through notebooks and discussions.</p>
<p>Here is my solution.</p>
<p><br></p>
<h2>Overview</h2>
<p>My final submission is an ensemble of two deberta-v3 models (large and base) and LightGBM.<br>
The pipeline is as follows.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6454400%2F9c4ca8159dbb9fb98f9cefa2e4e90b3d%2F(81).png?generation=1697805022893951&alt=media" alt=""></p>
<p><br></p>
<h2>Model</h2>
<ul>
<li>Deberta v3 large<ul>
<li>input_text = text+prompt_question+prompt_text</li>
<li>maxlen = 1024</li>
<li>CLS token</li>
<li>loss : RMSELoss</li>
<li>epoch = 4</li></ul></li>
</ul>
<p><br></p>
<ul>
<li>Deberta v3 base<ul>
<li>input_text = prompt_title+len2text+prompt_question+text (※len2text : [<a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/437591" target="_blank">ref</a>])</li>
<li>maxlen = 512</li>
<li>Attention Pooling</li>
<li>loss : RMSELoss</li>
<li>epoch = 4</li></ul></li>
</ul>
<p><br></p>
<ul>
<li>LightGBM<ul>
<li>Almost the same as the <a href="https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect" target="_blank">published notebook</a></li></ul></li>
</ul>
<p><br><br></p>
<h2>Key Points</h2>
<p>The following two points contributed significantly to improving the accuracy of the deberta model.</p>
<ul>
<li>Grouped-LLRD (Layer Wise Learning Rate Decay)</li>
<li>freezing layers (freeze the 4 layers closest to the embeddings) [<a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/433754#2405543" target="_blank">ref</a>] (both points are sketched below)</li>
</ul>
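<p>Both points boil down to how the optimizer parameter groups are built. Below is a minimal sketch, assuming the HuggingFace deberta-v3 module layout (<code>embeddings</code>, <code>encoder.layer</code>); the learning rates and decay factor are illustrative, not the author's exact values.</p>
<pre><code>import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/deberta-v3-large")

# Freeze the embeddings and the 4 encoder layers closest to them.
for p in model.embeddings.parameters():
    p.requires_grad = False
for layer in model.encoder.layer[:4]:
    for p in layer.parameters():
        p.requires_grad = False

# Grouped LLRD: lower learning rates for layers closer to the embeddings.
base_lr, decay = 2e-5, 0.8
param_groups = []
for i, layer in enumerate(list(model.encoder.layer[4:])[::-1]):
    param_groups.append({
        "params": [p for p in layer.parameters() if p.requires_grad],
        "lr": base_lr * (decay ** i),
    })
optimizer = torch.optim.AdamW(param_groups)
</code></pre>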
<p><br></p>
<h2>CV</h2>
<ul>
<li>validation strategy : GroupKFold(groups=prompt_id), as sketched below</li>
</ul>
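<p>A minimal sketch of this split, assuming the standard competition files and column names:</p>
<pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

# All summaries of a prompt go to the same fold, mimicking the unseen prompts
# of the hidden test set.
train = pd.read_csv("summaries_train.csv").merge(
    pd.read_csv("prompts_train.csv"), on="prompt_id")

gkf = GroupKFold(n_splits=4)
train["fold"] = -1
for fold, (_, val_idx) in enumerate(gkf.split(train, groups=train["prompt_id"])):
    train.loc[val_idx, "fold"] = fold
</code></pre>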
<table>
<thead>
<tr>
<th></th>
<th>39c16e</th>
<th>3b9047</th>
<th>ebad26</th>
<th>814d6b</th>
<th>cv</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>model1</td>
<td>0.4768</td>
<td>0.5045</td>
<td>0.4461</td>
<td>0.5842</td>
<td>0.4948</td>
<td>0.466</td>
<td>0.471</td>
</tr>
<tr>
<td>model2</td>
<td>0.4540</td>
<td>0.5154</td>
<td>0.4427</td>
<td>0.5830</td>
<td>0.4907</td>
<td></td>
<td></td>
</tr>
<tr>
<td>LGBM</td>
<td>0.4730</td>
<td>0.5802</td>
<td>0.4592</td>
<td>0.5809</td>
<td>0.5197</td>
<td>0.449</td>
<td>0.481</td>
</tr>
</tbody>
</table>
<p><br></p>
<h3>Ensemble</h3>
<ul>
<li>weighted average<ul>
<li>cv : 0.4777</li>
<li>Public LB : 0.4351</li>
<li>Private LB : 0.46163</li></ul></li>
</ul>
<p><br></p>
<h2>Didn't Work</h2>
<ul>
<li>text cleaning</li>
<li>awp</li>
<li>fgm</li>
<li>svr </li>
<li>pseudo labels </li>
<li>etc…</li>
</ul> |
CommonLit - Evaluate Student Summaries | 22nd Efficiency - My creative ideas | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First and foremost, I want to express how much I've enjoyed participating in this competition and how deeply I adore delving into the world of machine learning. However, I must confess that I became quite engrossed in this competition, finding it somewhat addictive to explore the plethora of novel ideas and approaches constantly bubbling up in my mind. I couldn't resist the urge to implement and experiment with them. In all honesty, especially during the last few weeks, I did experience a degree of disappointment. It stemmed from my inability to unearth any substantial correlations, structures, or approaches that significantly contributed to improving my scores. I meticulously crafted features that exhibited a high correlation with the score, but to my chagrin, they often led to worse results. Towards the end, it felt more akin to filling out a lottery ticket than engaging in scientific endeavors; it had become far too random.</p>
<p>Since we're encouraged to share our solutions, I'd like to offer my highest-scoring notebook and provide an overview of some of the strategies I pursued with great hope, only to see them fall short in the end.</p>
<p>I should mention that from the start, I made a conscious decision to focus exclusively on the Efficiency track, given my passion for constructing computationally efficient solutions. I also refused to simply run one of the high scoring publicly shared notebooks.</p>
<p>Here are some of the techniques I experimented with, despite their ultimate lack of impact on my scores:</p>
<ul>
<li>Extracting 10 stemmed noun keywords from both prompt_question+prompt_text and input_text, and then comparing them, both as a direct similarity and as a TF-IDF-vectorized cosine similarity (the latter sketched after this list).</li>
<li>Extracting the three most crucial sentences from the prompt_text and utilizing them for comparisons with the input_text.</li>
<li>Employing metrics such as ROUGE scores, METEOR scores, JACCARD similarity, and BLEU scores.</li>
</ul>
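<p>As an illustration of the similarity idea, a minimal TF-IDF cosine-similarity feature might look like this (the stemming and keyword-extraction steps the author used are omitted):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Minimal sketch: one TF-IDF cosine-similarity value per summary.
def tfidf_similarity(prompt_text, summary_text):
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform([prompt_text, summary_text])
    return cosine_similarity(tfidf[0], tfidf[1])[0, 0]
</code></pre>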
<p>In my best-performing notebook, I did incorporate BM25 and textblob features. However, I can't say I was entirely confident that these features were responsible for the observed score improvements. There must have been some effect, but I wished I could have established a more concrete relationship between these features and my scores.</p>
<p><strong>My unsuccessful attempt with a BERT model:</strong></p>
<p>I aimed to deploy a BERT model capable of processing data in under 10 minutes using only CPU resources. I experimented with various BERT variants but eventually settled on deberta-v3-xsmall, with a limited input length of only 96 tokens to heavily expedite inference. I trained the model on 30% of the data in a single epoch with a learning rate of 1.5e-5. Subsequently, I froze the layers and added content and wording heads, one for each fold. These heads consisted of simple GRU models connected to the CLS layer, essentially creating an ensemble without incurring additional time costs.</p>
<p>While the model achieved an RMSE of approximately 0.58 in isolation, it lowered my cv score to around 0.44. However, my public score remained at 0.56, indicating severe overfitting to the training data.</p>
<p>In hindsight, I acknowledge that I might have achieved better results with a longer input length, but given my CPU constraints and my desire to perform well on the efficiency leaderboard, I opted to work with a smaller text portion.</p>
<p>I want to emphasize that I'm not a seasoned LLM expert; rather, I'm a learner eager to experiment and get creative in crafting solutions that work effectively and score well. I truly wish my efforts had borne fruit, but that is true for all of us of course.</p>
<p><strong>Regarding the BERT model I used:</strong></p>
<p>In this particular notebook, I employed a deberta-v3-xsmall model trained using the data from the English Language competition. My approach involved selecting the three middle sentences from each text in that competition data to train the model, as this approximated the average text length of the input_text. However, I restricted the maximum input length as well to only 96 tokens.</p>
<p>This managed to boost my score by approximately 0.04.</p>
<p>I welcome any feedback on my approach, as there's nothing more valuable than the opportunity to learn and improve.</p>
<p>Thank you very much.</p>
<p><a href="https://www.kaggle.com/code/geoffreybeulque/commonlit-efficiency-final-submission" target="_blank">https://www.kaggle.com/code/geoffreybeulque/commonlit-efficiency-final-submission</a></p> |
CommonLit - Evaluate Student Summaries | 25th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>May I ask you how you evaluated your full data train?<br>
Did you just train the model 2 epochs without evaluation?</p>
</blockquote>
<p>Yes, when I tried single model sub, I just trained Deberta model without evaluation. I just checked the model performance by LB.<br>
However, final sub does not use the full trained model, because the LB score did not improved compared with the model with cv split.</p>
<p>Congrats to all the winners!<br>
Thanks to organizers and all participants! I really enjoyed this exciting competition!</p>
<p>This is my final solution!</p>
<h1>Overview</h1>
<p>As <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering" target="_blank">I published before</a>, I used the Deberta + LGBM trick.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2033239%2Fa993ec6cdadb4afed646085dfa13e511%2F2023-10-13%201.24.56.png?generation=1697127934161831&alt=media" alt=""></p>
<h2>CV-LB Score Summary</h2>
<table>
<thead>
<tr>
<th></th>
<th>wording CV</th>
<th>content CV</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>DebertaV3-base</td>
<td>0.425</td>
<td>0.542</td>
<td>0.483</td>
<td>0.490</td>
<td>0.478</td>
</tr>
<tr>
<td>DebertaV3-large</td>
<td>0.412</td>
<td>0.534</td>
<td>0.477</td>
<td>not submit</td>
<td>not submit</td>
</tr>
<tr>
<td>DebertaV3-base + DebertaV3-large with LGBM tric</td>
<td>0.419</td>
<td>0.551</td>
<td>0.485</td>
<td>0.433</td>
<td>0.463</td>
</tr>
</tbody>
</table>
<h2>1st stage: Deberta</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2033239%2F6df068eeabe2ee8f944bc53b2f15e57e%2F2023-10-13%201.28.04.png?generation=1697128165767762&alt=media" alt=""></p>
<ul>
<li>base model : DebertaV3-base, DebertaV3-large</li>
<li>input: <code>prompt_text</code> + <code>[SEP]</code> + <code>prompt_question</code> + <code>[SEP]</code> + <code>text</code><ul>
<li>max_len = 2000</li></ul></li>
<li>loss: MCRMSE loss (sketched below)</li>
<li>custom header: average pooling (using <code>text</code> only)</li>
<li>inference: top 3 cv score model average (to avoid inference timeouts)</li>
</ul>
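<p>For reference, a common way to write an MCRMSE (mean columnwise RMSE) loss in PyTorch is sketched below; this mirrors the competition metric, though it is not necessarily the author's exact implementation.</p>
<pre><code>import torch
import torch.nn as nn

class MCRMSELoss(nn.Module):
    # Mean columnwise RMSE over the two targets (content, wording).
    def __init__(self, eps: float = 1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, preds, targets):
        mse_per_col = torch.mean((preds - targets) ** 2, dim=0)
        return torch.mean(torch.sqrt(mse_per_col + self.eps))
</code></pre>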
<table>
<thead>
<tr>
<th></th>
<th>wording CV</th>
<th>content CV</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>DebertaV3-base</td>
<td>0.425</td>
<td>0.542</td>
<td>0.483</td>
<td>0.490</td>
<td>0.478</td>
</tr>
<tr>
<td>DebertaV3-large</td>
<td>0.412</td>
<td>0.534</td>
<td>0.477</td>
<td>not submit</td>
<td>not submit</td>
</tr>
</tbody>
</table>
<p>By the way, here are the scores of models trained on all the data without a CV split.</p>
<table>
<thead>
<tr>
<th></th>
<th>wording CV</th>
<th>content CV</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>DebertaV3-base (full data, epoch=2)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>0.470</td>
<td>0.481</td>
</tr>
<tr>
<td>DebertaV3-large (full data, epoch=2)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>0.480</td>
<td>0.465</td>
</tr>
</tbody>
</table>
<p>It seems that a single model (DebertaV3-large, full-data training, epoch=2) can achieve a silver-medal score.</p>
<h3>Why I used <code>prompt_text</code>?</h3>
<p>I think many participants suffered from large <code>wording</code> rmse.</p>
<p>In my opinion, this large error comes from the <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/430705" target="_blank">score rotation</a> applied by the competition host.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2033239%2Fee4c02f37d0f5c8ae695d2f6a016aade%2F2023-10-13%202.29.03.png?generation=1697131789891984&alt=media" alt=""></p>
<p>If the content and wording scores we get from this competition have already been rotated by 30 degrees, the apparent scores are as follows (see the numeric example after the list).</p>
<ul>
<li>content score (apparent) = original content score * cos30° − original wording score * sin30°</li>
<li>wording score (apparent) = original content score * sin30° + original wording score * cos30°</li>
</ul>
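<p>As a small numeric illustration of these formulas (made-up scores, just to show that the rotation is invertible):</p>
<pre><code>import numpy as np

# Illustration only: rotate hypothetical original scores by 30 degrees and
# recover them with the inverse rotation.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

original = np.array([1.0, -0.5])   # [content, wording], made-up values
apparent = R @ original            # what we would observe in the data
recovered = R.T @ apparent         # inverse rotation (R is orthogonal)
print(apparent, recovered)         # recovered matches original (up to float error)
</code></pre>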
<p>Thus, if my idea is correct, the content and wording scores we get from the competition each contain both original-content and original-wording components.</p>
<p>The meanings of the content and wording scores were shared by the host, and the original content score is based on the following components.</p>
<ul>
<li>Main Idea<ul>
<li>How well did the summary capture the main idea of the source?</li></ul></li>
<li>Details<ul>
<li>How accurately did the summary capture the details from the source?</li></ul></li>
<li>Cohesion<ul>
<li>How well did the summary transition from one idea to the next?</li></ul></li>
</ul>
<p>I think the apparent wording score contains not only the original wording score but also the original content score. So, in order to improve the accuracy of the apparent wording score, I think it is necessary to accurately predict the original content score, and <code>prompt_text</code> is needed for that.</p>
<p>(No one knows what the truth is. This is just my opinion…)</p>
<h3>Why I used custom header?</h3>
<p>So far, I have shown that it is better to use prompt_text.</p>
<p>However, I think that simply using prompt_text will result in larger input tokens, which in turn will carry more noise. As a result, I think that we can’t get a good cv score.</p>
<p>So I used a custom header that feeds only the <code>text</code> portion into the final layer.<br>
This way we can still use <code>prompt_text</code> and <code>prompt_question</code> while keeping deberta focused on the <code>text</code>.</p>
<h2>2nd stage: LightGBM</h2>
<p>I used the LGBM trick as I published.</p>
<h3>Using features</h3>
<ul>
<li>deberta output <ul>
<li>content_debertav3_large_prompt_text_text_ver2_pred</li>
<li>content_debertav3_base_prompt_text_text_ver2_pred</li>
<li>wording_debertav3_large_prompt_text_text_ver2_pred</li>
<li>wording_debertav3_base_prompt_text_text_ver2_pred</li></ul></li>
<li>handcrafted features<ul>
<li>num_chars</li>
<li>num_unique_words</li>
<li>embedding_sim (sentence transformer)</li>
<li>tfidf_sim</li>
<li>subjectivity</li>
<li>trigram_overlap_ratio</li>
<li>difficult_words</li>
<li>coleman_liau_score</li>
<li>polarity</li>
<li>nn_count</li>
<li>summary_length</li>
<li>dale_chall_readability_score</li>
<li>num_stopwords</li>
<li>mean_tokens_in_sentence</li>
<li>bigram_overlap_ratio</li>
<li>flesch_reading_ease</li>
<li>jj_count</li>
<li>max_count_per_sentence</li>
<li>vb_count</li>
<li>mean_count_per_sentence</li>
<li>osman</li>
<li>linsear_write_formula</li>
<li>gunning_fog</li>
<li>word_overlap_count</li>
<li>splling_err_num</li>
<li>szigriszt_pazos</li>
<li>smog_index</li>
<li>median_count_per_sentence</li>
<li>pr_count</li>
<li>num_words_title</li>
<li>min_count_per_sentence</li>
<li>num_punctuations</li>
<li>fernandez_huerta</li>
<li>gulpease_index</li>
<li>gutierrez_polini</li>
<li>crawford</li>
<li>automated_readability_index</li>
<li>bigram_overlap_count</li>
<li>trigram_overlap_count</li>
<li>flesch_kincaid_grade</li>
<li>num_sentences</li>
<li>cd_count</li>
<li>quotes_count</li>
<li>num_words_upper</li>
<li>num_paragraphs</li>
<li>uh_coun</li></ul></li>
</ul> |
CommonLit - Evaluate Student Summaries | 27th Place Solution: How to use prompts? | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, I want to thank Kaggle and the Host for organizing this fun competition, and thanks to my teammates <a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a> , <a href="https://www.kaggle.com/mohammad2012191" target="_blank">@mohammad2012191</a> , <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> , <a href="https://www.kaggle.com/ivanisaev" target="_blank">@ivanisaev</a> . This wouldn't be possible to achieve without them!</p>
<h1>Summary</h1>
<p>Our solution is an ensemble of diverse 1st-stage models (mostly with a Deberta-v3-large backbone) fed into a 2nd-stage stacking model. Diversity came from the inputs and from different custom pooling techniques and heads.<br>
Our best submission out of the 3 we chose had CV 0.486 and public LB 0.425. It gives private LB 0.464, which puts us at 27th. The best of all our submissions had an LB of 0.459, so it is a silver in both cases :)</p>
<h1>First Stage Models</h1>
<p>We finetuned Deberta Base, Large and Squad using summary texts + question only. We also added other models with some custom pooling ideas which involved using prompts inside, and they worked pretty nicely, thanks to <a href="https://www.kaggle.com/ihebch" target="_blank">@ihebch</a>'s and <a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a>'s work! </p>
<h3>Model 1</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8785809%2F985b784a03c245a29e6113d8de4f7957%2FScreen%20Shot%202023-10-12%20at%207.19.37%20AM.png?generation=1697091595551878&alt=media" alt=""><br>
CV: 0.505; public LB: 0.456; private LB: 0.474 </p>
<h3>Model 2 (not included in ensemble 😢 )</h3>
<p>same as Model1 but without cross-attention mechanism<br>
CV: 0.49; publicLB: 0.575; privateLB: 0.468 (wow)</p>
<h3>Model 3</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8785809%2F9c30cf54687f2fb2126330e56af09975%2FScreen%20Shot%202023-10-12%20at%207.01.34%20AM.png?generation=1697090518733548&alt=media" alt=""><br>
CV: 0.52; PublicLB: 0.457; PrivateLB: 0.499</p>
<h1>2nd Stage Model</h1>
<p>We then added all these predictions as features for our LGBM+XG+CB models along with the same features published in public notebooks. <br>
We also added <strong>FB3</strong> labels (good boost) and used metadata features (especially grade).<br>
We tried hard to add new features, do feature selection, and do extensive hyperparameter tuning, but all these trials ended up with a better CV and a much worse LB, so we gave them up and focused on improving our 1st-stage models.</p>
<h1>What Didn't Work</h1>
<ul>
<li>Feature Selection</li>
<li>Extensive Hyperparameters Tuning</li>
<li>Adding FB2 Labels</li>
</ul>
<h1>Team Members</h1>
<ul>
<li><a href="https://www.kaggle.com/ivanisaev" target="_blank">@ivanisaev</a> </li>
<li><a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> </li>
<li><a href="https://www.kaggle.com/mohammad2012191" target="_blank">@mohammad2012191</a></li>
<li><a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a></li>
<li><a href="https://www.kaggle.com/ihebch" target="_blank">@ihebch</a> </li>
</ul>
<p>my teammate <a href="https://www.kaggle.com/mohammad2012191" target="_blank">@mohammad2012191</a> already shared a <a href="https://www.kaggle.com/code/mohammad2012191/inference-notebook-stacking-v1-autocorrect-new-bas" target="_blank">notebook</a> that contains most of the code where he will explain his idea of 3rd stage in a separate discussion.</p>
<p>Hey,</p>
<blockquote>
<p>What was the intuition behind using Bi-LSTM? </p>
</blockquote>
<p>using Bidirectional LSTM on top of token embeddings is meant to capture contextual information between student summary and the prompt.</p>
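<p>Roughly, such a Bi-LSTM head might look like the sketch below; the dimensions and the masked mean pooling are assumptions, and the team's exact architecture is in the shared notebook.</p>
<pre><code>import torch
import torch.nn as nn

class BiLSTMHead(nn.Module):
    # Sketch: run a bidirectional LSTM over the transformer token embeddings,
    # mean-pool the LSTM outputs, then regress content and wording.
    def __init__(self, hidden_size=1024, lstm_size=256):
        super().__init__()
        self.lstm = nn.LSTM(hidden_size, lstm_size, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * lstm_size, 2)

    def forward(self, token_embeddings, attention_mask):
        out, _ = self.lstm(token_embeddings)                        # (B, L, 2*lstm_size)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out * mask).sum(1) / mask.sum(1).clamp(min=1e-9)  # masked mean pool
        return self.fc(pooled)
</code></pre>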
<blockquote>
<p>Did you try anything else besides it?</p>
</blockquote>
<p>Pooling was the key in this competition, as the 2nd place mentioned; adding special tokens and pooling the summary and prompts individually did the magic for us.</p>
<blockquote>
<p>You said you used FB3 labels, how did you actually do it?</p>
</blockquote>
<p>We just inferred a public model shared in FB3 on train and test.</p> |
CommonLit - Evaluate Student Summaries | 29th Place Solution for the CommonLit - Evaluate Student Summaries Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank the host for organizing this competition and all the participants who generously shared valuable information. I have learned a lot.</p>
<h2>Context</h2>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview</a> </li>
<li>Data context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></li>
</ul>
<h2>Overview of the approach</h2>
<p>My approach is averaging the following models. </p>
<ul>
<li>LightGBM stacking with meta-features of 9 deberta and bart models. </li>
<li>A deberta-v3-large model for the concatenation of summary text and prompt text. (Exp ID: 10) <br>
The key aspect of the latter model is an attention mask that focuses only on the summary text, as explained by other top solutions.</li>
</ul>
<h4>Model Training Settings</h4>
<p>I trained the deberta models shown in this table. <br>
Out of these, 8 models are deberta-v3-large, and 2 models are bart-large. <br>
The bart-large models take prompt text as the encoder input and summary text as the decoder input. (However, I don't think they effectively utilize prompt text information…) <br>
CV strategy is Group k fold (group id = prompt_id) </p>
<table>
<thead>
<tr>
<th>exp id</th>
<th>model</th>
<th>input</th>
<th>pooling method</th>
<th>token max length</th>
<th>epoch</th>
<th>freezing layer</th>
<th>loss</th>
<th>awp</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>deberta-v3-large</td>
<td>text</td>
<td>attention</td>
<td>822</td>
<td>10</td>
<td>18</td>
<td>smooth l1</td>
<td>false</td>
<td>0.542</td>
<td>0.494</td>
<td>0.541</td>
</tr>
<tr>
<td>2</td>
<td>facebook/bart-large</td>
<td>text, question+prompt_text</td>
<td>mean</td>
<td>822</td>
<td>4</td>
<td>-</td>
<td>smooth l1</td>
<td>false</td>
<td>0.555</td>
<td>0.514</td>
<td>0.556</td>
</tr>
<tr>
<td>3</td>
<td>deberta-v3-large-squad2</td>
<td>text</td>
<td>attention</td>
<td>822</td>
<td>4</td>
<td>18</td>
<td>smooth l1</td>
<td>false</td>
<td>0.581</td>
<td>0.496</td>
<td>0.549</td>
</tr>
<tr>
<td>4</td>
<td>deberta-v3-large</td>
<td>text</td>
<td>attention</td>
<td>822</td>
<td>10</td>
<td>18</td>
<td>smooth l1</td>
<td>true</td>
<td>0.535</td>
<td>0.488</td>
<td>0.532</td>
</tr>
<tr>
<td>5</td>
<td>bart-large</td>
<td>text+question, prompt_text</td>
<td>attention</td>
<td>822</td>
<td>10</td>
<td>-</td>
<td>smooth l1</td>
<td>true</td>
<td>0.533</td>
<td>0.486</td>
<td>0.510</td>
</tr>
<tr>
<td>6</td>
<td>deberta-v3-large</td>
<td>question+title+text</td>
<td>attention</td>
<td>822</td>
<td>10</td>
<td>18</td>
<td>smooth l1</td>
<td>false</td>
<td>0.538</td>
<td>0.497</td>
<td>0.563</td>
</tr>
<tr>
<td>7</td>
<td>deberta-v3-large</td>
<td>question+title+text</td>
<td>attention</td>
<td>512</td>
<td>4</td>
<td>-</td>
<td>smooth l1</td>
<td>true</td>
<td>0.628</td>
<td>0.519</td>
<td>0.535</td>
</tr>
<tr>
<td>8</td>
<td>deberta-v3-large</td>
<td>title+question+text</td>
<td>multi-scale</td>
<td>822</td>
<td>10</td>
<td>18</td>
<td>smooth l1</td>
<td>true</td>
<td>0.535</td>
<td>0.477</td>
<td>0.538</td>
</tr>
<tr>
<td>9</td>
<td>deberta-v3-large</td>
<td>text</td>
<td>attention</td>
<td>822</td>
<td>4</td>
<td>18</td>
<td>mse + rank</td>
<td>false</td>
<td>0.551</td>
<td>0.487</td>
<td>0.544</td>
</tr>
<tr>
<td>10</td>
<td>deberta-v3-large</td>
<td>text+title+question+prompt_text</td>
<td><a href="#model-for-summary-text-and-prompt-text-input">See this section</a></td>
<td>training : 1800, prediction : 1024</td>
<td>4</td>
<td>18</td>
<td>smooth l1</td>
<td>false</td>
<td>0.504</td>
<td>0.461</td>
<td>0.47</td>
</tr>
</tbody>
</table>
<p>Every training utilized layer-wise learning rate decay between 0.8 and 0.9.<br>
*Sum of losses calculated in each of the last 3 hidden states.<br>
**Two attention pooling layers for content and wording separately.<br>
***Pooling at different levels: words, sentences, and paragraphs, then the results are pooled.</p>
<h4>LightGBM stacking with meta features</h4>
<p>I stacked 9 models (Exp ID: 1 to 9) using LightGBM with the following meta-features (a minimal stacking sketch follows the list):</p>
<ul>
<li>Number of characters in the summary text</li>
<li>Number of characters in the prompt text</li>
<li>Number of unique words overlapping between summary and prompt text (excluding punctuation)</li>
<li>Number of overlapping words between summary and prompt text</li>
<li>Ratio of overlapping bigrams</li>
<li>Ratio of overlapping 4-grams</li>
<li>Cosine similarity between TF-IDF values of summary and prompt text</li>
<li>Mean of edit distances between each summary sentence and prompt sentence</li>
<li>Number of words in the summary text (excluding punctuation, correcting misspelling)</li>
<li>Number of unique words overlapping between summary and prompt text (excluding punctuation, correcting misspelling)</li>
<li>Number of overlapping bigrams between summary and prompt text (excluding punctuation, correcting misspelling)</li>
<li>Ratio of overlapping bigrams (correcting misspelling)</li>
<li>Number of similar summary texts (similarity calculated by cosine similarity using Universal Sentence Encoder embedding)</li>
<li>Readability score of the prompt text</li>
</ul>
<p>This stacking model achieved CV: 0.478.</p>
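<p>A minimal sketch of this kind of stacking, where the out-of-fold deberta predictions and the meta-features above form the feature matrix (column handling and hyperparameters here are assumptions):</p>
<pre><code>import lightgbm as lgb
import numpy as np
from sklearn.model_selection import GroupKFold

# Minimal stacking sketch. X is a DataFrame with the models' out-of-fold
# predictions plus the handcrafted meta-features; y is one target
# (content or wording); groups is the prompt_id column.
def fit_stacker(X, y, groups, params=None):
    params = params or {"objective": "regression", "learning_rate": 0.05}
    oof = np.zeros(len(X))
    for tr_idx, va_idx in GroupKFold(n_splits=4).split(X, y, groups):
        model = lgb.LGBMRegressor(**params)
        model.fit(X.iloc[tr_idx], y.iloc[tr_idx],
                  eval_set=[(X.iloc[va_idx], y.iloc[va_idx])])
        oof[va_idx] = model.predict(X.iloc[va_idx])
    return oof
</code></pre>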
<h4>Averaging the Two Models</h4>
<p>For the final submission, I simply averaged the stacking model and the long-token model (Exp ID: 10), and submitted it. <br>
The CV is 0.471, Public LB: 0.433. <br>
I also tried Nelder-Mead weight optimization, but its CV was not as good as the simple average.</p>
<h2>Details of the submission</h2>
<h3>What Worked</h3>
<h4>Model for Summary Text and Prompt Text Input</h4>
<p>I initially concatenated the summary text and prompt text and fed them into deberta-v3-large model. However, this didn't lead to an improvement in CV performance. <br>
I realized it was necessary to inform the model about the positions of the summary text tokens. Therefore, I prepared attention masks to indicate the positions of the summary text, title, question, and prompt text within the input tokens.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2419980%2F8d48845c845333584ec6044c09f7d33e%2Fattention.drawio.png?generation=1697274325973927&alt=media" alt=""><br>
These masks were used in conjunction with deberta's output. The architecture of this model is described below.<br>
(I apologize if it seems overly complex. According to other solutions, it appears that using an attention mask solely for the summary text may be sufficient.)<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2419980%2Fcb53a1430da088ad8e964119ff05acbe%2Fdeberta.drawio.png?generation=1697274351083740&alt=media" alt=""><br>
This model was trained with a token length of 1800. However, during prediction, I had to reduce the token length to 1024 due to a 9-hour time limit. <br>
This model (Exp ID: 10) improved CV scores, especially on the challenging prompts, resulting in CV: 0.504 and Public LB: 0.461; inference time exceeded ~4 hours.</p>
<h3>What didn't Work</h3>
<h4>CNN Model to the Sentence Similarity Matrix between Summary Text and Prompt Text</h4>
<p>I noticed that significant errors were caused by summary texts that simply copied and pasted many sentences from their prompt texts. These summaries were given the same low content and wording scores. I believe that when scorers come across such a copy-and-paste summary, they tend to stop reading it and automatically assign low scores. Therefore, I wanted to address this issue by calculating the edit distances between each of the N sentences in the summary text and each of the M sentences in the prompt text. I then created an N x M similarity matrix, with each element containing the edit-distance ratio. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2419980%2F300454278eb2e87f36d4e263c64cc282%2Fheatmap.png?generation=1697274594179322&alt=media" alt=""> <br>
This matrix was fed into a CNN model, and target values were generated. However, this CNN model was not as effective as anticipated… (this idea was inspired by a method described in <a href="https://www.cl.cam.ac.uk/~ek358/Summarization_NAACL.pdf" target="_blank">this</a> paper).</p>
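<p>A sketch of building such an N x M sentence-similarity matrix; here <code>difflib</code>'s ratio stands in for the edit-distance ratio, and nltk sentence splitting is an assumption:</p>
<pre><code>import numpy as np
from difflib import SequenceMatcher
from nltk.tokenize import sent_tokenize

# Sketch: N x M matrix of similarity ratios between summary sentences and
# prompt sentences (difflib ratio used as a stand-in for edit-distance ratio).
def similarity_matrix(summary_text, prompt_text):
    s_sents = sent_tokenize(summary_text)
    p_sents = sent_tokenize(prompt_text)
    mat = np.zeros((len(s_sents), len(p_sents)))
    for i, s in enumerate(s_sents):
        for j, p in enumerate(p_sents):
            mat[i, j] = SequenceMatcher(None, s, p).ratio()
    return mat
</code></pre>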
<h3>Sources</h3>
<ul>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369956" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369956</a><ul>
<li>For information on the multi-scale approach.</li></ul></li>
<li><a href="https://www.cl.cam.ac.uk/~ek358/Summarization_NAACL.pdf" target="_blank">https://www.cl.cam.ac.uk/~ek358/Summarization_NAACL.pdf</a><ul>
<li>Reference material for tackling the same task.</li></ul></li>
<li><a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering" target="_blank">https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering</a><ul>
<li>I borrowed many features from this source. Thank you very much for your contribution!</li></ul></li>
</ul> |
CommonLit - Evaluate Student Summaries | 2nd Place Efficiency Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the sponsors and admins! Absolutely love the impact that these education-focused competitions will have down the road so please keep them coming - these are awesome initiatives.</p>
<p>Thank you to <a href="https://www.kaggle.com/kononenko" target="_blank">@kononenko</a> and <a href="https://www.kaggle.com/andreippv" target="_blank">@andreippv</a> for providing support to the competition's community to help overcome those license compliancy issues.</p>
<p>And finally thank you to the heavyweight grandmasters both old and new who didn’t enter the efficiency track. Not sure I would be writing this if they had!</p>
<h1>2ND PLACE SOLUTION</h1>
<p>With most competitors focusing on (and clearly succeeding in) pushing the boundaries of transformers, I decided from the onset that I would focus solely on the efficiency track. Not sure if I loved the efficiency leaderboard delay but for some reason it made things a bit more exciting lol - maybe that's just me though.</p>
<p>Anyway after exhausting readability metrics and prompt/summary overlaps, I turned to exploiting missteps in English writing (e.g. don’t start sentences with subordinating conjunctions, don’t use first-person pronoun I in this context, etc.) and also the human nature of the scorers (e.g. focus on early sections of summary and the lasting first impressions they make, bias towards penalizing for spelling errors). I felt that by doing so I could hone in on the few elements (i.e. scorers and their rubrics/scoring instructions) that were relatively consistent across all prompts no matter the grade or subject - maybe this helped stabilize things when facing the private data.</p>
<p>Below are some of my solution’s specs:</p>
<ul>
<li>Catboost</li>
<li>CV/PUBLIC/PRIVATE - 0.52/0.452/0.504<ul>
<li>Used GroupKFold with five different seeds for validation - due to the nature of the public LB I often found myself using it as another fold and re-calculating CV when I was on the fence about a feature. </li></ul></li>
<li>Hyperparameters by way of Optuna: <ul>
<li>content_params = {'grow_policy': 'Depthwise', 'iterations': 1490, 'min_data_in_leaf': 93, 'learning_rate': 0.01, 'l2_leaf_reg': 2.0484264500233307, 'colsample_bylevel': 0.42, 'subsample': 0.42, 'max_depth': 6}</li>
<li>wording_params = {'grow_policy': 'Depthwise', 'iterations': 1230, 'min_data_in_leaf': 7, 'learning_rate': 0.01, 'l2_leaf_reg': 8.403269352596215, 'colsample_bylevel': 0.75, 'subsample': 0.4, 'max_depth': 6}</li></ul></li>
<li>Features:<ul>
<li>% of unique words that were misspelled - found that just the count didn’t work while % worked very well</li>
<li># of quotes in the summary that were found in the prompt</li>
<li>% of prompt words found in summary</li>
<li># of bigrams/trigrams that intersect between prompt and summary, also represented these values as a % of the total summary bigrams/trigrams</li>
<li>% of stop words in summary</li>
<li># of periods that aren’t followed by a space and capital letter</li>
<li># of commas that aren’t followed by a space and a letter</li>
<li># of commas that are preceded by a space or not typed directly beside a character</li>
<li># of cardinal numbers + # of existential there + # of co-ordinating conjunctions <- for this I replaced nltk pos_tag with just a list of words/numbers that are captured by pos_tag</li>
<li># of uppercase characters</li>
<li>Mean words in each sentence, mean unique words in each sentence, # of sentences, syntax count, # of “and”s, # of characters, # of words, # of difficult words</li>
<li>% of words in summary that are all uppercase</li>
<li># of unique verbs and # of unique nouns found in the second sentence of the summary</li>
<li># of unique word meanings in the summary</li>
<li>Index of first period, binary flag if summary ends with a period</li>
<li># of “ i “ or “ I “ - to help indicate if writer used first-person</li>
<li># of sentences starting with a subordinating conjunction</li>
<li>Pretty much every readability metric out there except the McAlpine EFLAW score, szigriszt-pazos is the only non-english language metric used</li>
<li>Difference between Dale-Chall readability scores of prompt and summary, also did this with Spache readability</li>
<li>Difference between # of stop words found in summary and mean number of stop words across all summaries for the respective prompt_id</li></ul></li>
<li>Speed-ups (i.e. vrooms):<ul>
<li>Catboost for faster inference! Much slower to train, but for inference it's great. It was always considerably faster than my LightGBM models, but at the same time slightly worse in terms of CV/PUBLIC LB.</li>
<li>In almost all cases I used regex patterns instead of word_tokenize and sent_tokenize from nltk - found that they really slowed things down so I avoided them at all costs (a couple of the regex-style features are sketched after this list).</li>
<li>Used simple list instead of pos_tag from nltk when possible, also used simple list instead of importing stopwords</li>
<li>Used numpy vectorize as much as possible (groupby features were done with pandas data frame)</li>
<li>Also tried libraries like daal4py, treelite, and lleaves for faster inference but the compile time didn’t seem to be worth the inference speed-up so I just ended up loading my models normally.</li></ul></li>
</ul>
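<p>As a small example of the regex-style features mentioned above (a sketch; the winning notebook's exact patterns may differ):</p>
<pre><code>import re

# Sketch of two of the "writing misstep" counts described above.
def misstep_counts(summary):
    # periods not followed by a space and a capital letter (and not at the end)
    bad_periods = len(re.findall(r"\.(?!\s[A-Z])(?!$)", summary))
    # commas not followed by a space and a letter
    bad_commas = len(re.findall(r",(?!\s[A-Za-z])", summary))
    return bad_periods, bad_commas
</code></pre>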
<p><strong>Inference Notebook:</strong> <a href="https://www.kaggle.com/code/kpfmma/efficiency-2nd-place-inference-catboost" target="_blank">https://www.kaggle.com/code/kpfmma/efficiency-2nd-place-inference-catboost</a></p>
<p>Hahah sorry I should clarify! Meant new/old as in when they received the title 😂</p> |
CommonLit - Evaluate Student Summaries | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Correct. 7165 x 2.</p>
<p>Yeah that's probably too short. As long as you are using the right pooling method, which is zeroing out the prompt_text and prompt_question portions you should be good to do higher max lengths of 1024+. </p>
<p>Yes, that's correct. You can find the implementation in the inference notebook if you want to see the code.</p>
<p><a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> Good seeing you here! And good luck with RSNA 🥇</p>
<p><strong>Inputs</strong></p>
<p>The data was input to the model as follows : </p>
<p>'Think through this step by step : ' + prompt_question + [SEP] + 'Pay attention to the content and wording : ' + text + [SEP] + prompt_text</p>
<p><strong>Pooling Method [High Impact]</strong></p>
<p><strong>Input :</strong> [TOKEN] [TOKEN] [SEP] [TOKEN] [TOKEN] [SEP] [TOKEN] [TOKEN]<br>
<strong>Head Mask :</strong> [0] [0] [1] [1] [1] [0] [0] [0]</p>
<p>Instead of using the normal attention mask created by the model tokenizer. I used a head mask that only had ones for the students' answer (text) portion of the input and zeros for all other tokens. I used the normal attention mask for the attention mask that the model consumed but I used the head mask for the mean pooling.</p>
<p>This had the biggest impact out of all the tricks I used. It increased the CV by a huge margin in all folds, but especially for the difficult prompts : 3b9047 and 814d6b. In my opinion this was the “magic” for this competition.</p>
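<p>A minimal sketch of this pooling, assuming the head mask has already been built (see the code further down): the transformer still consumes the normal attention mask, and only the mean pooling uses the head mask.</p>
<pre><code>import torch

# Sketch: mean-pool only the student-answer tokens, selected by the head mask,
# while the transformer itself still uses the normal attention mask.
def head_mask_mean_pool(last_hidden_state, head_mask):
    m = head_mask.unsqueeze(-1).float()           # (B, L, 1)
    summed = (last_hidden_state * m).sum(dim=1)   # zero out prompt tokens
    counts = m.sum(dim=1).clamp(min=1e-9)
    return summed / counts                        # (B, hidden_size)
</code></pre>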
<p><strong>Prompt Question Augmentation [Moderate Impact]</strong></p>
<p>I created 10 extra prompt questions per a prompt. I used an LLM. I asked the LLM to give me 10 variations of the prompt question. I then used this as augmentation during training. In inference, I used the default prompt question. In total I had 44 different prompt questions across all folds.</p>
<p><strong>Auxiliary Classes [Moderate Impact]</strong></p>
<p>I used auxiliary classes during the competition. These auxiliary classes were the target classes from Feedback 3.0 - <br>
['cohesion','syntax','vocabulary','phraseology','grammar','conventions'].</p>
<p>To create these labels I used models that were trained on the Feedback 3.0 data and ran the data from this competition through those models. I used only the ‘text’ column from this competition. In doing this I produced pseudo labels to use for this competition.</p>
<p>I used the auxiliary classes in the following way : (loss * .5) + (aux_loss * .5)</p>
<p>The auxiliary classes were used every second step.</p>
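<p>A sketch of how this mixing might look inside the training loop; the 0.5/0.5 weights and the every-second-step schedule come from the write-up, the rest is an assumption.</p>
<pre><code>import torch

# Sketch of one training epoch: main loss on (content, wording), auxiliary loss
# on the six Feedback 3.0 pseudo labels, mixed in on every second step.
def train_one_epoch(model, loader, criterion, optimizer):
    model.train()
    for step, batch in enumerate(loader):
        preds, aux_preds = model(batch["input_ids"], batch["attention_mask"])
        loss = criterion(preds, batch["targets"])
        if step % 2 == 0:                        # auxiliary classes every 2nd step
            aux_loss = criterion(aux_preds, batch["aux_targets"])
            loss = loss * 0.5 + aux_loss * 0.5
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
</code></pre>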
<p>The Feedback 3.0 competition was hosted by The Learning Agency Lab and to the best of my knowledge this is a legal technique.</p>
<p><strong>Max Length</strong></p>
<p>Models were trained on a maximum length ranging from 896-1280 during initial training. During the pseudo labelling rounds they were trained with a maximum length ranging from 1280-2048. Pseudo labels allowed the models to learn at a higher maximum length.</p>
<p>During inference the models used 1792 for large and 2048 for base.</p>
<p><strong>Pseudo Labels [Moderate Impact]</strong></p>
<p>Once a CV of .4581 was reached across the grouped kfold I started creating pseudo labels. </p>
<p>The pseudo labels allowed me to train deberta-v3-base effectively. Before PL, I was not able to train the base model. They also allowed me to increase the maximum length during training.</p>
<p>PL increased the CV from .4581 to .4476</p>
<p>The models were trained using a concatenation of the original labels and pseudo labels.</p>
<p><strong>Final Ensemble (PL)</strong></p>
<p><br></p>
<table>
<thead>
<tr>
<th>Model Name</th>
<th>Training Max Length</th>
<th>Inference Max Length</th>
<th>Head</th>
<th>Model CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>microsoft/deberta-v3-large</td>
<td>2048</td>
<td>1792</td>
<td>Mean Pooling + LSTM Layer Pooling</td>
<td>.460</td>
</tr>
<tr>
<td>microsoft/deberta-v3-base</td>
<td>2048</td>
<td>2048</td>
<td>Mean Pooling + LSTM Sequence Pooling</td>
<td>.468</td>
</tr>
<tr>
<td>OpenAssistant/reward-model-deberta-v3-large-v2</td>
<td>2048</td>
<td>1792</td>
<td>Mean Pooling + LSTM Layer Pooling</td>
<td>.464</td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>2048</td>
<td>1792</td>
<td>Mean Pooling + Linear</td>
<td>.466</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large</td>
<td>1280</td>
<td>1792</td>
<td>Mean Pooling + LSTM Sequence Pooling</td>
<td>.461</td>
</tr>
</tbody>
</table>
<p><br><br>
<br></p>
<p><strong>Did work:</strong></p>
<ul>
<li>Layer wise learning rate decay</li>
<li>Freezing layers (bottom 8)</li>
<li>LSTM layer pooling</li>
<li>LSTM Sequence pooling</li>
<li>Turn off dropout in transformer backbone</li>
<li>Multisample dropout in head (this and the previous item are sketched below)</li>
</ul>
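<p>For the last two items, a sketch of how they are commonly implemented (an assumption about the exact setup used here):</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoConfig, AutoModel

# Turn off dropout in the transformer backbone via its config.
config = AutoConfig.from_pretrained("microsoft/deberta-v3-large")
config.hidden_dropout_prob = 0.0
config.attention_probs_dropout_prob = 0.0
backbone = AutoModel.from_pretrained("microsoft/deberta-v3-large", config=config)

class MultisampleDropoutHead(nn.Module):
    # Average the regression output over several dropout masks.
    def __init__(self, hidden_size, n_drops=5, p=0.3):
        super().__init__()
        self.dropouts = nn.ModuleList([nn.Dropout(p) for _ in range(n_drops)])
        self.fc = nn.Linear(hidden_size, 2)

    def forward(self, pooled):
        return torch.stack([self.fc(d(pooled)) for d in self.dropouts]).mean(0)
</code></pre>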
<p><strong>Did not work:</strong></p>
<ul>
<li>AWP</li>
<li>SWA</li>
</ul>
<p><strong>Inference script :</strong> <a href="https://www.kaggle.com/code/ivanaerlic/2nd-place-solution/notebook" target="_blank">https://www.kaggle.com/code/ivanaerlic/2nd-place-solution/notebook</a></p>
<p><strong>Note :</strong> The inference script and input datasets will contain extra tricks that I haven't mentioned here, such as the inclusion of additional head for the model. Only the main points are outlined in this write-up.</p>
<p>Also, you can find the training script for each model inside of the dataset that contains the weights.</p>
<pre><code># Reconstructed from the original snippet, which was garbled; the class and
# argument names and the utilities.* pooling modules (the author's own code,
# not shown) are approximations.
import torch
import torch.nn as nn

class CustomModel(nn.Module):
    def __init__(self, hidden_size, num_layers):
        super().__init__()
        self.hidden_size = hidden_size
        self.cat_size = hidden_size * num_layers
        self.sequence_pooler = utilities.MeanPooling()                      # mean pooling over tokens
        self.rnn = utilities.LSTMLayerPooling(self.cat_size, hidden_size)   # LSTM layer pooling
        self.head = nn.Linear(hidden_size, 2)                               # content, wording
        self.aux_head = nn.Linear(hidden_size, 6)                           # Feedback 3.0 aux targets

    def forward(self, x, mask):
        # Concatenate all hidden states along the feature dimension.
        x = torch.cat(x.hidden_states, dim=-1)
        # Zero out everything except the student-answer tokens (head mask).
        hidden_mask = mask.unsqueeze(-1).expand(x.size()).float()
        x = x * hidden_mask
        x = self.rnn(x)
        x = self.sequence_pooler(x, mask)
        x_preds = self.head(x)
        aux = self.aux_head(x)
        return x_preds, aux
</code></pre>
<p>Here is how the head mask is created in the dataset class :</p>
<pre><code># Reconstructed from the original snippet, which was garbled; the dict key and
# the CFG attribute name are approximations.
head_mask = []
use_full = False
for token in tokenized_dict["input_ids"]:
    if token == CFG.sep_token_id:              # each [SEP] toggles the answer span
        use_full = not use_full
    head_mask.append(1. if use_full else 0.)
</code></pre>
<p>Thanks for the kind words <a href="https://www.kaggle.com/chumajin" target="_blank">@chumajin</a>. It was a great competition 😀</p>
<p>The write-up may change from time to time. I am fixing mistakes and adding content as I go.</p>
<p>I think the reason why my CV had good correlation with the LB was because of using the prompt text and the head mask, while not using feature engineering at all. I only started using the base model towards the end when I implemented PL, so that would not have been the reason. I had a stable CV and LB correlation before that.</p>
<p>There were still things which didn't correlate well, one of them was increasing the max length. This increased LB drastically while not doing much for CV.</p>
<p>One of the reasons why people were experiencing an unstable LB may have been due to feature engineering. These are hard number values and are less generalizable than a proper understanding of language.</p>
<p>I can't really tell you how to determine it, because it is based off of past experience and it changes per a competition. There aren't any hard rules for it.</p>
<p>I can give you a few tips, though. In general I find it best to distribute the learning rate across the model in four sections (sometimes 5). For this competition I chose to freeze the first 8 layers, use 2e-6 for layers 8 to 16, 1e-5 for 16 to 20, 2e-5 for 20 to 22, and 5e-5 for 22 to 24. This is with LSTM pooling across model layers and mean pooling across sequences; if you are pooling across model sequences with an LSTM, then I tend not to go past 2e-5.</p>
<p>These are the deberta-v3-large numbers. It is different per a model.</p>
<p>Also it will change based on the batch size and max length.</p>
<p>As max length goes up - decrease the learning rate.<br>
As batch size goes up - increase the learning rate</p>
<p>You are welcome. I hope to become a professional soon. </p>
<p>Haha, thank you <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>. Do you want training or inference. Training would be a bit difficult for me. Inference I can publish quickly.</p>
<p>Thanks for sharing, this give me something to try out next time.</p>
<p>AWP : <a href="https://www.kaggle.com/code/itsuki9180/introducing-adversarial-weight-perturbation-awp" target="_blank">https://www.kaggle.com/code/itsuki9180/introducing-adversarial-weight-perturbation-awp</a><br>
SWA : <a href="https://www.kaggle.com/competitions/google-quest-challenge/discussion/119371" target="_blank">https://www.kaggle.com/competitions/google-quest-challenge/discussion/119371</a></p>
<p>Here are some links. It's difficult to explain these two techniques in one comment. You will have to read through the code. </p> |
CommonLit - Evaluate Student Summaries | 30th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My thank you to Kaggle, the competition hosts, and the Kaggle community for a fun and educational three months. I'm really looking forward to reading everyone's solutions and learning from them. The 30th ranked submission was an ensemble of 6 deberta-v3-large models.</p>
<p>Throughout this competition, I worried about overfitting. My earliest taste of overfitting was when I reached a gold position briefly with a 0.461 public LB score, by not splitting on prompt-id, and by doing a random 80/20 train/val split. After spending a day being thrilled at the gold symbol in my public LB rank, I started thinking that it was all a mirage, and I was overfitting (and indeed the private LB score of that submission confirms this). Back then the model did not even train if I split by prompt-id. It was some more weeks before I learned to train transformer models in a stable way, with a somewhat steadily decreasing validation loss. I did not use many of the features from the top-scoring public notebooks, because I feared overfitting. I don't know if it was a good decision, but the shakedown was only by 5 places, and I don't regret it.</p>
<p><strong>Things that worked:</strong></p>
<ul>
<li><p>Training separately for content & wording scores. I trained 8 models (4 per target, since there were 4 folds) for each "full model". For CV, I averaged out-of-fold predictions. For predicting on test data, I averaged the 4 models for each target. </p></li>
<li><p>It was 3 levels of modeling: <br>
1) deberta-v3-large models with many differences (pooling techniques, different inputs, frozen vs unfrozen, different max-len, etc.)<br>
2) Make LGBM predictions from each deberta's output, plus features like text length etc. The feature list was the same for all models.<br>
3) Both the deberta and LGBM outputs of the 6 models were fed into a hill-climbing algorithm (with negative weights) to predict the final output (a minimal hill-climbing sketch is given after this list). Hill climbing, being a greedy algorithm, was sensitive to the order of inputs, and I tried to find alternatives but none of them worked as well.</p></li>
<li><p>Layerwise learning rate decay - stabilizing transformer training using this method, with linear or cosine scheduler with warmup was the foundation of all the models. </p></li>
<li><p>Using the input text after replacing n-grams shared with the prompt text with the "_" character. The input to most of the models was prompt_question + separator + prompt_title + [start] + modified input text + [end]. One model uses a modified prompt_text appended at the end -- I've explained that below. Another model had [P] prepended to each sentence in summary_text.</p></li>
<li><p>Taking the encoder output of only some tokens, like [start] and [end], or [start] to [end], or gem pool of only [P] tokens.</p></li>
<li><p>weighted layer pooling (all layers), gem pooling & mean max pooling</p></li>
<li><p>The head was the same for all models: 2 linear layers with tanh in between.</p></li>
<li><p>freezing embeddings & layers. I froze between 12 and 18 layers, not including embeddings layer, which was frozen for all models.</p></li>
<li><p>pseudo labeling: I did it by training T5 on competition data, using it to generate texts, which I labeled using my best ensemble, then trained fresh models first on PL data, then on competition data. The first time I did it, it was leaky and the LB score tanked. The second implementation gave a small LB boost (2-model ensemble score went from 0.431 to 0.430), but I don't know if it was worth all the trouble.</p></li>
<li><p>mixed precision training seemed to give results that were as good as full precision training, while being much faster</p></li>
<li><p>I included prompt text like this: Gather sentences from prompt_text that are quoted by at least one student, who had written a summary that was at least 20% original (as calculated by ngram matching). The ensemble of 6 models included one model that did this. That model's private LB score was 0.48 (public LB was 0.443). I don't know how useful it was. I was just happy to have found a way to include prompt_text.</p></li>
<li><p>Hill climbing</p></li>
</ul>
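<p>A minimal hill-climbing sketch of the kind referenced in the list above: greedily blend in (possibly negatively weighted) model predictions as long as the out-of-fold RMSE improves. The weight grid and stopping rule are assumptions.</p>
<pre><code>import numpy as np

# Greedy hill climbing over out-of-fold predictions; negative weights allowed.
# oof_preds: list of 1-D arrays (one per model), y: true targets.
def hill_climb(oof_preds, y, candidate_weights=np.arange(-0.5, 0.51, 0.05), n_rounds=30):
    rmse = lambda p: np.sqrt(np.mean((p - y) ** 2))
    blend = np.mean(oof_preds, axis=0)           # start from a simple average
    best = rmse(blend)
    for _ in range(n_rounds):
        best_update, best_score = None, best
        for preds in oof_preds:
            for w in candidate_weights:
                score = rmse((1 - w) * blend + w * preds)
                if best_score > score:
                    best_update, best_score = (w, preds), score
        if best_update is None:
            break                                # no improvement; stop
        w, preds = best_update
        blend = (1 - w) * blend + w * preds
        best = best_score
    return blend, best
</code></pre>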
<p><strong>Things that did not work:</strong></p>
<ul>
<li>Re-initializing transformer layers</li>
<li>AWP</li>
<li>adversarial attack through SIFT</li>
<li>MLM</li>
<li>SWA (stochastic weight averaging - maybe my implementation was wrong)</li>
<li>ridge regression or xgb for ensembling</li>
<li>auxiliary loss calculated for classifying the summary into discretized score buckets</li>
<li>RAPIDS SVR using embeddings of various models (without training on competition data)</li>
<li>many features I'd calculated, such as mean sentence transformer cosine similarity scores (sentence-wise) between prompt_text and summary_text, improved CV score but made LB score much worse</li>
<li>Trying to make vicuna-13b generate various features (including "gold" summaries) that deberta or LGBM could use did not work. Making chatgpt guess the individual rubric scores that must have gone into the calculation of wording & content also did not work.</li>
<li>I felt that if a summary provided on-topic, original insight, it was scored higher. I spent a lot of energy in trying to predict this using various pretrained models. I also generated lots of chatgpt or llama-generated data for fine tuning deberta for this purpose. It did not work at all. I think I spent the first month mostly on this pursuit & in making vicuna-13b useful. By doing this I learned a lot about the datasets that are used for benchmarking LLMs, about finetuning llama and sentence transformers, etc. So maybe it was useful (I doubt it lol). </li>
<li>One alarming thing that did not work till the end was gradient norm clipping: the model stopped training if I clipped.</li>
<li>Using a model trained on previous feedback competition data to guess syntax, cohesion, vocabulary, etc scores of competition data didn't work. But I tried it early on, and back then I didn't know that much about various transformer techniques that I learned later. So maybe it would've worked if I'd tried it near the end.</li>
<li>I picked only 2 submissions because I was going to pick one submission that I'd made last minute, and it errored out, and I forgot to pick an alternative.</li>
<li>I wish I'd trusted the CV score all along. It correlated the best with private LB.</li>
</ul>
<p><strong>Credits & Gratitude:</strong></p>
<ul>
<li>I learned about LLRD from Vignesh in Sanyam's interview: <a href="https://www.youtube.com/watch?v=XLaq-boyAGk" target="_blank">https://www.youtube.com/watch?v=XLaq-boyAGk</a> and then by reading <a href="https://www.kaggle.com/torch" target="_blank">@torch</a>'s wonderful notebooks - <a href="https://www.kaggle.com/code/rhtsingh/on-stability-of-few-sample-transformer-fine-tuning" target="_blank">https://www.kaggle.com/code/rhtsingh/on-stability-of-few-sample-transformer-fine-tuning</a></li>
<li>Learned about pooling techniques from <a href="https://www.kaggle.com/code/rhtsingh/utilizing-transformer-representations-efficiently/notebook" target="_blank">https://www.kaggle.com/code/rhtsingh/utilizing-transformer-representations-efficiently/notebook</a></li>
<li>Learned about GeM pool from Team hydrogen's notebooks ( <a href="https://www.kaggle.com/code/philippsinger/team-hydrogen-efficiency-prize-1st-place" target="_blank">https://www.kaggle.com/code/philippsinger/team-hydrogen-efficiency-prize-1st-place</a> )</li>
<li>Nvidia's videos with Chris Deotte, Christof Henkel, Jean-Francois Puget and Ahmet Erdem were super helpful for me. I learned how to do leak-proof PL (and other things) from them. Despite this, my first implementation of PL was leaky, but I was able to fix it because frankly, I learned from the best: <a href="https://www.youtube.com/watch?v=CwDKy0EKwHE" target="_blank">https://www.youtube.com/watch?v=CwDKy0EKwHE</a> and <a href="https://www.youtube.com/watch?v=PXc_SlnT2g0" target="_blank">https://www.youtube.com/watch?v=PXc_SlnT2g0</a> </li>
<li>Learned about hill climbing from <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369609" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369609</a></li>
</ul>
<p><strong>General Educational Material:</strong><br>
I started learning ML in April, and until this competition, I was very intimidated by Kaggle. If someone else like me is reading this, I highly recommend these courses. I found all of them very helpful and educational: </p>
<ul>
<li>Andrew Ng's introductory course <a href="https://www.coursera.org/specializations/machine-learning-introduction" target="_blank">https://www.coursera.org/specializations/machine-learning-introduction</a></li>
<li>Fast.ai (book and videos) <a href="https://www.fast.ai/" target="_blank">https://www.fast.ai/</a></li>
<li>Karpathy's Zero to Hero course <a href="https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ" target="_blank">https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ</a></li>
<li>Kaggle Learn <a href="https://www.kaggle.com/learn" target="_blank">https://www.kaggle.com/learn</a></li>
<li>Kaggle Playground Competitions</li>
<li>Aladdin Persson's YouTube channel <a href="https://www.youtube.com/@AladdinPersson" target="_blank">https://www.youtube.com/@AladdinPersson</a> : He implements papers like UNet, GANs etc., and it makes the papers & code very approachable. </li>
<li>I found it very helpful to use weights & biases for tracking things and highly recommend them, especially for beginners like me.</li>
<li>I used paperspace & jarvislabs.ai for training.</li>
</ul> |
CommonLit - Evaluate Student Summaries | 333rd place...sharing my findings | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, huge thanks for organizing this competition and everyone who has shared their ideas in the notebooks and discussion forums!</p>
<p><strong>Approach</strong></p>
<p>My solution is built upon <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a>'s effective idea of stacking Debertav3base with LGBM. I mostly played around with adding more NLP features and tweaking the model.</p>
<p><strong>What worked</strong></p>
<p>Freezing the first 4 encoder layers gave the best results<br>
Using CLS token output for regression was effective<br>
Adding the 6 features from the <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/overview" target="_blank">FB3 (Feedback Prize - English Language Learning) competition</a> gave a good boost<br>
Adding textstat features such as <code>automated_readability_index, smog_index, dale_chall_readability_score, gunning_fog, reading_time, difficult_words</code> showed improvement<br>
Adding cosine similarity between the prompt text and summary text embeddings also helped to a small extent<br>
Removing <code>length_ratio</code> also showed a small improvement (a sketch of the textstat and similarity features follows below)</p>
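<p>A minimal sketch of how such features can be computed is below. The textstat calls match the feature names listed above; the sentence-transformers encoder used for the cosine similarity is an illustrative choice, not necessarily the exact embedding model used here.</p>
<pre><code>import textstat
from sentence_transformers import SentenceTransformer, util

def readability_features(summary: str) -> dict:
    # per-summary readability statistics fed to the LGBM stage
    return {
        "automated_readability_index": textstat.automated_readability_index(summary),
        "smog_index": textstat.smog_index(summary),
        "dale_chall_readability_score": textstat.dale_chall_readability_score(summary),
        "gunning_fog": textstat.gunning_fog(summary),
        "reading_time": textstat.reading_time(summary),
        "difficult_words": textstat.difficult_words(summary),
    }

st_model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

def prompt_similarity(prompt_text: str, summary: str) -> float:
    # cosine similarity between prompt and summary embeddings
    emb = st_model.encode([prompt_text, summary], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))
</code></pre>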
<p><strong>What didn't work</strong></p>
<p>Freezing embedding layers deteriorated performance<br>
Mean pooling/ GeM pooling<br>
Autocorrect didn't improve score<br>
Tried other models such as Roberta, Electra which were not as good as Debertav3base. Even ensembling Debertav3base with them couldn't help<br>
Training separate models for each target was not effective. In fact, content scores improved when trained together with wording scores</p>
<p>I wanted to include <code>prompt_text</code> and also try using Debertav3large, however it didn't turn out well this time :(</p>
<p>Although I couldn't earn a medal, my learnings through this competition have been immense and I'm very happy to have come this far. I've already learnt a lot from the teams who have published their solutions and I look forward to participating in the next competitions!</p> |
CommonLit - Evaluate Student Summaries | 35th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for organizing the challenging competition. Though we only got a silver medal in the end, we learned a lot from the experiments. And thanks to my teammates <a href="https://www.kaggle.com/penpentled" target="_blank">@penpentled</a> <a href="https://www.kaggle.com/shigeria" target="_blank">@shigeria</a> <a href="https://www.kaggle.com/chaudharypriyanshu" target="_blank">@chaudharypriyanshu</a> <a href="https://www.kaggle.com/takanashihumbert" target="_blank">@takanashihumbert</a>; they have been really helpful these months. </p>
<h1>1. Summary</h1>
<p>Our best private solution submitted is an ensemble of models, which contains a model with prompt_text, 3 wording models and a content+LGBM model without prompt_text. The inference time is about 8 hours. </p>
<table>
<thead>
<tr>
<th>Sub</th>
<th>Type</th>
<th>CV</th>
<th>LB</th>
<th>PB</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>No.1</td>
<td>best lb</td>
<td>0.4655</td>
<td>0.426</td>
<td>0.466</td>
<td>model with prompt_text training max_len=896 infer 1024</td>
</tr>
<tr>
<td>No.2</td>
<td>best cv</td>
<td>0.463</td>
<td>0.427</td>
<td>0.467</td>
<td>substitute the content model with a distilled model+lgbm</td>
</tr>
<tr>
<td>No.3</td>
<td>conservative</td>
<td>0.476</td>
<td>0.439</td>
<td>0.484</td>
<td>only models without prompt_text</td>
</tr>
</tbody>
</table>
<p>Though we had a model that scored 0.461 on the private LB, we didn't select it since it only scored 0.445 on the public LB. It contains models trained on 2 folds and validated on the other 2 folds by <a href="https://www.kaggle.com/takanashihumbert" target="_blank">@takanashihumbert</a>, plus the base model with prompt_text.</p>
<h1>2. Training</h1>
<p>Models submitted are shown below.</p>
<table>
<thead>
<tr>
<th>ID</th>
<th>Type</th>
<th>Train max_len</th>
<th>Infer max_len</th>
<th>Text Cleaning</th>
<th>CV</th>
<th>Model</th>
<th>With prompt_text</th>
</tr>
</thead>
<tbody>
<tr>
<td>base</td>
<td>both</td>
<td>896</td>
<td>1024</td>
<td>public methods</td>
<td>0.477</td>
<td>v3-large</td>
<td>✓</td>
</tr>
<tr>
<td>w2</td>
<td>wording</td>
<td>512</td>
<td>512</td>
<td>replaced copied text with [PASSAGE]</td>
<td>0.556</td>
<td>v3-large</td>
<td></td>
</tr>
<tr>
<td>w4</td>
<td>wording</td>
<td>768</td>
<td>768</td>
<td>replaced copied text with [PASSAGE]</td>
<td>0.559</td>
<td>v3-large</td>
<td></td>
</tr>
<tr>
<td>w6</td>
<td>wording</td>
<td>512</td>
<td>512</td>
<td>replaced copied text with [PASSAGE] and [REFERENCE]</td>
<td>0.5561</td>
<td>v3-base</td>
<td></td>
</tr>
<tr>
<td>c1</td>
<td>content</td>
<td>1024</td>
<td>1024</td>
<td>public methods</td>
<td>0.4445</td>
<td>v3-large</td>
<td></td>
</tr>
<tr>
<td>distill</td>
<td>content</td>
<td>1024</td>
<td>1024</td>
<td>joined copied text with [PASSAGE] and [REFERENCE] tokens</td>
<td>0.488</td>
<td>v3-base</td>
<td></td>
</tr>
</tbody>
</table>
<h1>3. inference</h1>
<p>We used Optuna and Nelder-Mead to optimize the ensemble weights, and the features shared in public notebooks for the LGBM. We strived for both better CV and LB when ensembling.</p>
<h1>4. Not working for us</h1>
<ul>
<li>Models except for v3</li>
<li>Mean Pooling, Concat Pooling, …</li>
</ul>
<h1>5. Acknowledgements</h1>
<ul>
<li><a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> idea of lgbm and feature engineering</li>
</ul>
<h1>6.Notebooks</h1>
<p>Best PB submission: <a href="https://www.kaggle.com/code/snorfyang/commonlit-exp-e6-shigeria1/notebook" target="_blank">https://www.kaggle.com/code/snorfyang/commonlit-exp-e6-shigeria1/notebook</a><br>
Text cleaning, ensemble methods, final submissions and other notebooks may be published later.</p> |
CommonLit - Evaluate Student Summaries | 3rd Place Efficiency Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>1. Summary and the Magic</h1>
<p>I trained a deberta-v3-xsmall, running in ~20 minutes, with additional steps taking another 4 minutes.<br>
My solution is very similar to the 1st Place on the Efficiency Leaderboard, see <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/447944" target="_blank">Shochoshi write-up</a>. The main idea was to give the model as much information as possible from the original text without increasing the token count by much. I did that by looking at sequences from the summary that also occur in the original prompt text and putting braces around them; the code and examples for this step can be seen <a href="https://www.kaggle.com/code/raki21/marked-summary-gen-efficient-v3xsmall/notebook" target="_blank">here</a>. If I had known of the optional token_type_ids I would probably have gone with these instead of using braces that use up a few additional tokens. I also needed to focus on longer sequences for that reason (at least 3-gram) while the winner focused especially on short sequences. I would have used up lots of tokens to mark trivial single word overlaps like 'the' if I focused on short ones with my approach. I also utilized multiprocessing and tried out tons of tricks, but most did not stick. </p>
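<p>The linked notebook has the real implementation; the following is only a simplified sketch of the marking step, assuming whitespace tokenization and ignoring punctuation. The brace characters are just the markers described above.</p>
<pre><code>import re

def mark_overlaps(summary: str, prompt_text: str, min_n: int = 3) -> str:
    """Wrap braces around runs of >= min_n consecutive summary words
    that also occur verbatim in the original (prompt) text."""
    words = summary.split()
    prompt_norm = " " + re.sub(r"\s+", " ", prompt_text.lower()) + " "
    out, i = [], 0
    while i < len(words):
        j, best = i + min_n, None
        # greedily extend the longest matching span starting at word i
        while j <= len(words):
            span = " ".join(w.lower() for w in words[i:j])
            if " " + span + " " in prompt_norm:
                best, j = j, j + 1
            else:
                break
        if best is not None:
            out.append("{ " + " ".join(words[i:best]) + " }")
            i = best
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)
</code></pre>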
<h1>2. Getting Sidetracked</h1>
<p>I also pursued the performance track, after seeing how effective the sequence length reduction was, hoping I could get a good medal by just being able to iterate more quickly and use huge ensembles. I also did not expect to win any prize money and because no medals are given for the Efficiency track my focus shifted towards the safer bet of the performance track.<br>
In the end I split my submissions 2 for the performance and 1 for the efficiency track, getting 32nd place in performance and 3rd in Efficiency. The write-up for the performance track focusing more on topics like ensembling is <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/446631" target="_blank">here</a>.</p>
<h1>3. Approach</h1>
<ol>
<li>Clean summaries and prompt from whitespaces.</li>
<li>Put braces around sequences that appear in the original text with n-gram >=3.</li>
<li>Predict content and wording with the same deberta-v3-xsmall (only 1 model here of course).<br>
3.1 This is done with Ray Multiprocessing to utilize the 4 cores well.</li>
</ol>
<p>Training was pretty standard with embeddings and first layer frozen.</p>
<h1>4. Journey</h1>
<p>I tried a lot of options, at first I just looked at examples of summaries for different scores, a notebook I shared for that <a href="https://www.kaggle.com/code/raki21/summary-examples-by-content-and-wording" target="_blank">here</a>. I looked up the CommonLit site, tried to make sense of how the grade the student is in might impact mean scores for the metrics and just absorbed information from discussion threads and code. I looked at the efficiency score implementation and made sense of the time/performance tradeoff.</p>
<p>The tradeoff I worked with was based on the assumption that with a baseline of 0.84 and a best assumed score of around 0.42, the divisor of the first part of the efficiency metric would also be 0.420. With 9 hours of runtime / 540 minutes contributing to 1 score I worked with the assumption that 0.008 score is worth around 10 minutes.</p>
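<p>As a quick sanity check of that tradeoff, under the assumptions stated above (divisor of roughly 0.42 for the score term, 540 minutes of runtime worth 1.0 of the metric):</p>
<pre><code>base, best = 0.84, 0.42
divisor = base - best            # ~0.42, denominator of the score term
runtime_budget_min = 9 * 60      # 540 minutes contribute 1.0 to the metric

minutes = 10
equivalent_score = divisor * minutes / runtime_budget_min
print(round(equivalent_score, 4))  # ~0.0078, i.e. roughly 0.008 score per 10 minutes
</code></pre>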
<p>I then implemented simple baselines like linear and non-linear models using only the length of the summary. I then looked at the biggest error sources and saw that most of them, by a large margin, are children just copy-pasting large parts of the original text. Those are long and rated well by my simple approach, but are rated very poorly by teachers, especially in wording. Just removing all common 3-grams, counting the length of the remaining text instead and using this information again in an exponential formula decreased wording RMSE much further. I then switched to LGBM and added additional features like relative words copied, counts for N-grams… At this point I noticed the content score being pretty good already but the wording prediction still being really bad. For better predictions I added a tiny BERT transformer for wording only.</p>
<p>I wanted to use bigger transformers and tried out a lot of approaches to make them more efficient: conversion from torch to ONNX, static and dynamic quantization in torch and ONNX, multiprocessing with Ray. In the end only Ray stuck. The quantization gave a 25% speedup on my CPU but only around 10% on the submission server's CPU; the tradeoff was not good enough as there was a performance drop, of course. Maybe I could have made it work with quantization-aware training, but then you have a conflict with the early frozen layers (at least I think so, as I found nothing about the standard weights being quantized already). In the end I only had around 15 days to invest in total and a lot of that time was gone at this point.</p>
<p>I also tried misspelling correction, averaging weights of multiple models (did not work at all), varying token length, and adjusting for token length > max_length by adding some score to the prediction for longer sequences. I also tried post-processing with LGBMs (which are pretty lightweight and don't need much compute), but that did not work out well at first; I got LGBMs working in the end using a trick, but only on the last day, and I only used that for my final performance submission - I described this in more detail in my performance write-up. I switched almost everything to deberta-v3-xsmall prediction after working on the performance track for a while. In an error analysis afterwards I found some texts with tons of tabs being misclassified and solved that by replacing whitespace runs with a single space.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Local CV Score</th>
<th>Public Score</th>
<th>Private Score</th>
<th>Inference Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>0-submission</td>
<td>1.030</td>
<td>0.840</td>
<td>0.985</td>
<td><1min</td>
</tr>
<tr>
<td>summary-length-only</td>
<td>0.677</td>
<td>0.596</td>
<td>0.610</td>
<td><1min</td>
</tr>
<tr>
<td>cutting 3-grams</td>
<td>0.621</td>
<td>0.535</td>
<td>0.565</td>
<td>1 min</td>
</tr>
<tr>
<td>LGBM+tiny-bert</td>
<td>?</td>
<td>0.471</td>
<td>0.515</td>
<td>31 min</td>
</tr>
<tr>
<td>v3-xsmall with all tricks</td>
<td>?</td>
<td>0.455</td>
<td>0.491</td>
<td>25 min</td>
</tr>
</tbody>
</table>
<p>I had a lot of things left to do: <br>
I only just got LGBMs behind a strong transformer working when the competition ended, I wanted to throw away some layers of deberta-v3-xsmall completely and see if it improves the time-performance tradeoff. I wanted to try pruning and distillation too. </p>
<h1>5. Acknowledgements</h1>
<p>I especially want to thank <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> for the great baseline.<br>
There were also a lot of other great things I read up on intermediately like how and why to do layer freezing. I want to thank everyone I gained good ideas and insights from this competition and all the people freely sharing their models performance and settings. I also want to give a general thank you to the community, I really have been learning a lot in the last months on Kaggle!</p> |
CommonLit - Evaluate Student Summaries | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I will probably release my train code after some clean up, so stay tuned</p>
<p>In some encoder models (and in deberta too) you can pass an additional argument to forward, called token_type_ids. Originally it's used in the QA task to separate questions and answers. It works similarly to positional embeddings - but it encodes the type of each token instead of its position. The implementation is very simple - you need to add an additional mask similar to the attention mask, where tokens belonging to the prompt are zeros, tokens related to the question are ones and tokens related to the essay text are twos.</p>
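<p>The exact training code isn't shown here, so below is a minimal sketch of what passing three token types to deberta could look like. Note that deberta-v3 configs ship with <code>type_vocab_size = 0</code>, so enlarging it means the segment embedding is newly initialised and learned during fine-tuning; the boundary bookkeeping is approximate and the strings are placeholders.</p>
<pre><code>import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

name = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(name)
config = AutoConfig.from_pretrained(name)
config.type_vocab_size = 3                      # 0 = prompt, 1 = question, 2 = essay
model = AutoModel.from_pretrained(name, config=config)

prompt, question, essay = "prompt text ...", "prompt question ...", "student summary ..."
text = prompt + tokenizer.sep_token + question + tokenizer.sep_token + essay
enc = tokenizer(text, return_tensors="pt")

n0 = len(tokenizer(prompt, add_special_tokens=False)["input_ids"]) + 2    # [CLS] ... [SEP]
n1 = len(tokenizer(question, add_special_tokens=False)["input_ids"]) + 1  # ... [SEP]
token_type_ids = torch.full_like(enc["input_ids"], 2)
token_type_ids[:, :n0] = 0
token_type_ids[:, n0:n0 + n1] = 1

out = model(input_ids=enc["input_ids"],
            attention_mask=enc["attention_mask"],
            token_type_ids=token_type_ids)
</code></pre>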
<p>Thank you, but I only got a master😀<br>
I just calculated the pairwise Levenshtein distance across folds, and then manually checked very similar examples (where the similarity is more than 95%, for example).<br>
When I looked at these examples, I manually selected the one with fewer typos and dropped the others, and if they had different scores I averaged them.</p>
<p>Yes, and when you do this kind of preprocessing you can get some insights. For example, there were quite similar examples, e.g. differing only in one word, where student #1 wrote this word correctly and student #2 made a mistake. So, as the host explained to us, there should be identical scores for both of these essays. But in fact, the scores could be very different. You could even count these cases and get some estimate of the human error on this task😁</p>
<p>My solution is rather simple, because I used almost no complex tricks - instead I just built a reliable pipeline and found good hyperparameters. I had some brilliant ideas (at least I think so :)) - to use augmentation and generate synthetic data using some big model based on other texts from commonlit.org. But at that time, when I was still actively contributing, there was uncertainty with the license for the texts, and I did not complete the use of augmentations - I was too busy with my work (trying to get RLHF to work), so I left the competition - my last commit was a month ago. But apparently the decision was good, at least I didn’t overfit :)</p>
<p>So let's break my solution down into parts.</p>
<p><strong>1. Data:</strong><br>
I used a fairly standard template: prompt + question + text. At the end of my participation, I tried to dig into the data and do good pre-processing - after all, there were quite a lot of very similar essays with identical or different scores. So, I tried to find these samples by similarity (like Levenshtein) and then merge them. In addition, I decided to make augmentations based on this insight - if there are many similar essays (differing only in typos, for example) - I could use something like reverse autocorrect - randomly replace some words with their close analogues. With this technique I got 0.453 in private ONLY on fold3 (which is better than my chosen blend and probably could have led to second place) - but I was too tired at this point so I didn't look further into augmentations. But I think augmentations could probably have led me to victory.</p>
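<p>The actual augmentation replaced words with close analogues; as a rough illustration of the same "reverse autocorrect" idea, a much simpler character-level version could look like this (not the exact implementation):</p>
<pre><code>import random

def add_typos(text: str, p: float = 0.05, seed: int = 0) -> str:
    """Corrupt a fraction of words with a random character edit to mimic student typos."""
    rng = random.Random(seed)
    words = text.split()
    for i, w in enumerate(words):
        if len(w) > 3 and rng.random() < p:
            j = rng.randrange(1, len(w) - 1)
            op = rng.choice(["swap", "drop", "dup"])
            if op == "swap":
                w = w[:j] + w[j + 1] + w[j] + w[j + 2:]
            elif op == "drop":
                w = w[:j] + w[j + 1:]
            else:
                w = w[:j] + w[j] + w[j:]
            words[i] = w
    return " ".join(words)
</code></pre>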
<p><strong>2. Models</strong><br>
Deberta is the king, so there's not much to say here. I tried using decoder models like Llama, but Deberta was still better. There were some techniques that gave me a boost - using EMA (honestly, without EMA it was very unstable, so it's probably just necessary) and using differential learning rates. I tried several pooling options, but the best option for me was to use a concatenation of the CLS token and mean pooling over the student's text. I also used token_type_ids to separate the prompt, question and essay.</p>
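<p>For reference, a minimal EMA helper of the kind described above (the decay value is illustrative):</p>
<pre><code>import copy
import torch

class EMA:
    """Keep an exponential moving average of the model weights; validate/save the shadow copy."""
    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        for s, p in zip(self.shadow.parameters(), model.parameters()):
            s.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
        for s, b in zip(self.shadow.buffers(), model.buffers()):
            s.copy_(b)

# ema = EMA(model); call ema.update(model) after every optimizer.step(),
# then evaluate ema.shadow instead of model
</code></pre>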
<p><strong>3 Inference & train</strong><br>
I used following scheme - I tried to find a good hyperparameters on some fold (for example, fold0), and then train with exact hyperparameters on other folds. I then sumbittend the entire blend and 4 individual models (5 submission total - one day) and repeated the procedure the next day. I realized that I could use maxlen 1500 for inference (didn't research this number much, tried something like 1024 and 2048, but 1500 was better in terms of efficiency), so in my final mix I took the 10 best checkpoints across folds (some folds got 2 checkpoints, some folds got 3). First I averaged by folds, then averaged the rest. That's all. </p>
<p>Briefly what worked (ranked from most important to least important, IMO):</p>
<ol>
<li><p>Using Deberta</p></li>
<li><p>EMA</p></li>
<li><p>Augmentation</p></li>
<li><p>Differential learning rates</p></li>
<li><p>Custom pooling</p></li>
<li><p>token_type_ids</p></li>
<li><p>Data cleaning</p></li>
</ol>
<p>What did not work (random order):</p>
<ol>
<li><p>Decoder models</p></li>
<li><p>AWP</p></li>
<li><p>FGM</p></li>
<li><p>WD</p></li>
<li><p>Constant LR</p></li>
<li><p>Handcrafted features</p></li>
<li><p>GBT for stacking</p></li>
</ol>
<p>In the end, it was a good competition for me. Last year I competed in another NLP competition and got a silver medal, but I grinded all day at that competition (I wasn't working at the time, so I had a lot of free time). This time I also expected silver, which I consider a solid result, but I got 3rd place. In any case, this competition was a cakewalk for me, since I spent very little effort on it (compared to the previous competition, at least). I'm hoping this means I'll grow a lot this year - and I think that's the main goal of participating in Kaggle.</p>
<p>Good luck to all of you.</p>
<p>Yeah, you are right.</p>
<p>Probably, it's because you used ALBERT. You should always use deberta nowadays, I think. </p>
CommonLit - Evaluate Student Summaries | 429th Place Solution for Evaluate Student Summaries Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for hosting Evaluate Student Summaries, a competition that assesses the quality of summaries written by students in grades 3-12. Despite joining this competition just a month before the deadline and being busy with another competition, I did not achieve good results but learned a lot about how to train a model to automatically evaluate student summaries in terms of the <em>content score</em> (how relevant the content of a student summary is to the original text) and <em>wording score</em> (spelling, grammar errors, word overlap…). </p>
<h1>Context</h1>
<p>Business context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries</a><br>
Data context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></p>
<h1>Overview of the Approach</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16611574%2Fecf091a2f0599e26d3b37d2ddbb7d210%2FPicture1.jpg?generation=1698114172221971&alt=media" alt="Workflow"></p>
<p>The LLM-based summary evaluation system works as follows:</p>
<ol>
<li><p>Fix misspellings in student summaries and collect features for evaluation.</p></li>
<li><p>Fine-tune a pre-trained LLM and train two models on training datasets. </p></li>
<li><p>Predict student summary scores in test dataset using fine-tuned LLM and LightGBM models</p>
<p>The best solution I found for the Evaluate Student Summaries competition was a combination of <code>Deberta-v3-base</code> and <code>LightGBM</code> models. This solution achieved a score (MCRMSE) of 0.488, with a training time of 3.8 hours on a P100 GPU. I have published my notebook <a href="https://www.kaggle.com/code/minhsienweng/debert-v3-base-lightgbm-model-training?scriptVersionId=146058874" target="_blank">here</a>.</p></li>
</ol>
<p><strong>Step 1.</strong> Text preprocessing collects and fixes grammar/spelling errors in student summaries.<br>
Text preprocessing collects features, such as the number of incorrect words, overlapping words, and named entities (people, places or medical codes), from student summaries and original texts. It then uses <code>pyspellchecker</code> to find misspelled words and <code>autocorrect</code> to fix them. The features include (1) text length, (2) misspellings, (3) N-gram overlap (tri-gram, bi-gram and word), (4) grammar errors and (5) quote overlap.</p>
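<p>A minimal sketch of the spelling-related features, assuming <code>pyspellchecker</code> and <code>autocorrect</code> as described above (the n-gram, named-entity and quote features are omitted here):</p>
<pre><code>from spellchecker import SpellChecker   # pyspellchecker
from autocorrect import Speller

spell = SpellChecker()
fixer = Speller(lang="en")

def spelling_features(summary: str, prompt_text: str) -> dict:
    words = summary.lower().split()
    misspelled = spell.unknown(words)               # words not found in the dictionary
    prompt_words = set(prompt_text.lower().split())
    overlap = sum(w in prompt_words for w in words) # crude word-overlap count
    return {
        "text_length": len(words),
        "misspelled_count": len(misspelled),
        "word_overlap_ratio": overlap / max(len(words), 1),
    }

def fix_summary(summary: str) -> str:
    return fixer(summary)                           # autocorrect the whole summary string
</code></pre>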
<p><strong>Step 2.</strong> Fine-tune pretrained LLM model and then train two models with training datasets for content and wording scores.</p>
<p>Fine-tune the pretrained <code>Deberta-v3-base</code> using 4-fold group cross-validation (<code>GroupKFold</code>) to predict content and wording scores on the student summaries training dataset. The fine-tuned model takes <code>prompt title</code> [SEP] <code>prompt question</code> [SEP] <code>student summary</code> as input, predicts the content and wording scores as output, and the evaluation is based on mean squared error. </p>
<p>Next, train two <code>LightGBM</code> models on the training dataset, one to predict the content score and the other the wording score. Both models use the same fold groups, are based on the features from Step 1 and the scores from the fine-tuned model, and are then evaluated using the same metric.</p>
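<p>The 4-fold grouping by prompt can be set up as below (assuming the standard competition files); each fold holds out one full prompt, mirroring the unseen prompts in the test set.</p>
<pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

train = pd.read_csv("summaries_train.csv").merge(
    pd.read_csv("prompts_train.csv"), on="prompt_id")

gkf = GroupKFold(n_splits=4)
for fold, (_, val_idx) in enumerate(gkf.split(train, groups=train["prompt_id"])):
    train.loc[val_idx, "fold"] = fold
</code></pre>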
<p><strong>Step 3.</strong> Predict content and wording scores using the fine-tuned and LightGBM models. <br>
For the testing dataset, we predict content and wording scores for student summaries using the fine-tuned and LightGBM models, and average the predictions to produce the final score.</p>
<h1>Details of the submission</h1>
<p>I tried the below approaches apart from the final solution.</p>
<h2>Deberta-V3-large model</h2>
<p>We used the <code>Deberta-V3-large</code> model without any training to predict scores on the test dataset. The score was very poor (1.08). This indicates that LLMs require further information to make accurate predictions.</p>
<h2>Sentence Transformer + XGBoost</h2>
<p>We used Sentence Transformer to extract text similarity and word diversity features from student summaries and original texts, then trained XGBoost and BERT models to predict content and wording scores. BERT outperformed XGBoost with a score of 0.59, suggesting fine-tuning can improve accuracy.</p>
<h2>Text Preprocessing + Fast-AI training</h2>
<p>We applied the FastAI tabular library to simplify the training process. We used the same text processing as the final solution to extract features and fix misspellings. Next, we used a FastAI tabular learner to train a model with layers <code>[300, 200]</code>, using mean absolute error (<code>MAE</code>) as the loss function. The trainer fit the training data and searched for the optimal learning rate, validating on a random 70/30 split. However, the notebook threw exceptions, even though it ran successfully and produced a submission file.</p>
<h1>Sources</h1>
<p>The final solution was inspired by</p>
<ul>
<li><a href="https://www.kaggle.com/code/dangnguyen97/tuned-debertav3-lgbm" target="_blank">https://www.kaggle.com/code/dangnguyen97/tuned-debertav3-lgbm</a></li>
<li><a href="https://www.kaggle.com/code/yusuphmustaphaladi/implementing-fastai-tabular" target="_blank">https://www.kaggle.com/code/yusuphmustaphaladi/implementing-fastai-tabular</a></li>
</ul> |
CommonLit - Evaluate Student Summaries | 42nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was my first Kaggle competition and I ended in 42nd position out of over 2000 teams.</p>
<ul>
<li>Training code: <a href="https://github.com/Lukasz-Cesarski/cless/blob/main/cless/run_sweep.py" target="_blank">cless/run_sweep.py</a> </li>
<li>Inference <a href="https://www.kaggle.com/code/luki493/lgbm-ensemble-iter-e6-open" target="_blank">code</a></li>
</ul>
<h2>Solution</h2>
<p>Final solution contains</p>
<ul>
<li>6 <a href="https://huggingface.co/docs/transformers/model_doc/deberta" target="_blank">Deberta</a> models (2 base and 4 large)</li>
<li>LGBM on top of every model </li>
<li>Late Fusion ensemble</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2200033%2Feab6107c139b5f93f4f14b440ed33a7d%2Fsolution.png?generation=1697541256075838&alt=media" alt=""></p>
<h2>Results</h2>
<h3>Cross Validation:</h3>
<table>
<thead>
<tr>
<th></th>
<th>checkpoint_name</th>
<th>model_name</th>
<th>adds</th>
<th>pseudo</th>
<th>content_rmse</th>
<th>wording_rmse</th>
<th>mcrmse</th>
<th>lgbm_content_rmse</th>
<th>lgbm_wording_rmse</th>
<th>lgbm_mcrmse</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>lrg-add-nops-202310101531</td>
<td>large</td>
<td>True</td>
<td>False</td>
<td>0.436815</td>
<td>0.563804</td>
<td>0.500309</td>
<td>0.416913</td>
<td>0.547992</td>
<td>0.482452</td>
</tr>
<tr>
<td>2</td>
<td>lrg-add2-nops-202310071225</td>
<td>large</td>
<td>True</td>
<td>False</td>
<td>0.422034</td>
<td>0.557535</td>
<td>0.489785</td>
<td>0.422094</td>
<td>0.552657</td>
<td>0.487376</td>
</tr>
<tr>
<td>3</td>
<td>deberta-large-pseudo-20231004</td>
<td>large</td>
<td>False</td>
<td>True</td>
<td>0.447833</td>
<td>0.599299</td>
<td>0.523566</td>
<td>0.431393</td>
<td>0.559015</td>
<td>0.495204</td>
</tr>
<tr>
<td>4</td>
<td>cless-deberta-20230919-2131-ensamble</td>
<td>base</td>
<td>False</td>
<td>False</td>
<td>0.478477</td>
<td>0.6168</td>
<td>0.547638</td>
<td>0.436959</td>
<td>0.556711</td>
<td>0.496835</td>
</tr>
<tr>
<td>5</td>
<td>base-noadd-pseu-202310101211</td>
<td>base</td>
<td>False</td>
<td>True</td>
<td>0.450015</td>
<td>0.631051</td>
<td>0.540533</td>
<td>0.426784</td>
<td>0.570818</td>
<td>0.498801</td>
</tr>
<tr>
<td>6</td>
<td>lrg-add2-ps-frz8-202310061415</td>
<td>large</td>
<td>True</td>
<td>True</td>
<td>0.450208</td>
<td>0.574412</td>
<td>0.51231</td>
<td>0.439451</td>
<td>0.563622</td>
<td>0.501536</td>
</tr>
</tbody>
</table>
<p>Columns:</p>
<ul>
<li><strong>checkpoint_name</strong> - name of model checkpoint (4 folds inside) </li>
<li><strong>model_name</strong> - which model was used: <code>microsoft/deberta-v3-base</code> or <code>microsoft/deberta-v3-large</code></li>
<li><strong>adds</strong> - if <code>True</code> then <code>prompt_text</code> and <code>prompt_question</code> were both added to model as input</li>
<li><strong>pseudo</strong> - pretraining on <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/data" target="_blank">Feedback Prize - English Language Learning</a> data</li>
<li><strong>content_rmse</strong>, <strong>wording_rmse</strong>, <strong>mcrmse</strong> - metrics of raw Deberta Regressor model </li>
<li><strong>lgbm_content_rmse</strong>, <strong>lgbm_wording_rmse</strong>, <strong>lgbm_mcrmse</strong> - metrics of LGBM Regressor on top of every Deberta</li>
</ul>
<p><strong>Note</strong>: Models without <code>prompt_text</code> and <code>prompt_question</code> in the input (adds=False) have a much worse CV <code>mcrmse</code>. <br>
However, this is mitigated by the LGBM, which utilizes handcrafted text-mining features based on <code>prompt_text</code>, so <code>lgbm_mcrmse</code> <br>
is significantly improved for these models.</p>
<h3>Ensemble metrics</h3>
<table>
<thead>
<tr>
<th></th>
<th>metric</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>content_rmse</td>
<td>0.40780695722821314</td>
</tr>
<tr>
<td>2</td>
<td>wording_rmse</td>
<td>0.5389858871892348</td>
</tr>
<tr>
<td>3</td>
<td>mcrmse</td>
<td>0.47339642220872397</td>
</tr>
</tbody>
</table>
<h3>Leaderboard</h3>
<table>
<thead>
<tr>
<th></th>
<th>metric</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>mcrmse (public)</td>
<td>0.432</td>
</tr>
<tr>
<td>2</td>
<td>mcrmse (private)</td>
<td>0.468</td>
</tr>
</tbody>
</table>
<p>Kudos for the author of <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">this notebook</a> it helped me a lot.</p> |
CommonLit - Evaluate Student Summaries | 43th place solution (711 shake up) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for organizers hosting the competition. I also appreciate everyone at Kaggle sharing insights through codes and discussions.</p>
<p>The reason why I was at a very low rank (754th) on the public leaderboard is that I didn't have enough time to improve the public leaderboard score due to a severe fever from 10/3 until the competition end.<br>
(I had planned to first reach a CV score similar to those reported in the <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/424330" target="_blank">Single Model CV-LB discussion</a>, then try to improve the CV and public leaderboard score. But there wasn't enough time for the latter.)</p>
<h1>Overview of the approach</h1>
<p>My solution is a mean ensemble of two deberta-v3-large models with different settings. Each model has 4 weights (4 folds), so the total number of weights for inference is 8. I used the best CV checkpoints.<br>
The CV setup is Group K-fold by prompt_id, and the CV scores of the two models are 0.482 and 0.492.</p>
<h1>Details of submission</h1>
<p>The most influential part for CV score improvement was pooling inside the student summary. Other things brought small improvements compared with that. Details are written in the following sections.</p>
<h2>Model input</h2>
<p>Each model has a different input.<br>
One is <code>[START] text [END] prompt_question [SEP] prompt_text</code>.<br>
Another is <code>[START_QUESTION] prompt_question [END_QUESTION] [START_SUMMARY] text [END_SUMMARY] [SEP] [START_TITLE] prompt_title [END_TITLE] [START_PROMPT] prompt_text [END_PROMPT]</code>.</p>
<h2>Model architecture</h2>
<p>Each model uses a different pooling.<br>
One is mean pooling between the <code>[START]</code> and <code>[END]</code> tokens, i.e. pooling only over the student summary.<br>
The other is GeM pooling between the <code>[CLS]</code> and <code>[SEP]</code> tokens, i.e. pooling over the prompt question and student summary (a GeM pooling sketch follows below).</p>
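<p>For reference, a typical masked GeM pooling layer for token embeddings looks like the following sketch (my illustration, not necessarily the exact layer used); to pool only over a token span, pass a mask that is 1 on that span and 0 elsewhere.</p>
<pre><code>import torch
import torch.nn as nn

class GeMText(nn.Module):
    """Generalized-mean pooling over token embeddings, restricted by a mask."""
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, hidden_states, attention_mask):
        mask = attention_mask.unsqueeze(-1).float()            # (batch, seq, 1)
        x = (hidden_states.clamp(min=self.eps) ** self.p) * mask
        x = x.sum(dim=1) / mask.sum(dim=1).clamp(min=self.eps)
        return x ** (1.0 / self.p)                             # (batch, hidden)
</code></pre>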
<p>Second model has separate heads to predict content and wording scores.</p>
<h2>Other settings</h2>
<h3>Same settings across models</h3>
<ul>
<li>Gradient clipping = 10.</li>
<li>Set <code>hidden_dropout_prob</code> and <code>attention_probs_dropout_prob</code> to 0.</li>
<li>Tokenizer max length is 1024.</li>
</ul>
<h3>Different settings across models</h3>
<ul>
<li>The frequency of CV evaluation (used to save the best checkpoints during training) is every 100 steps for one model and every 300 steps for the other.</li>
<li>The second model has a different learning rate for its heads.</li>
</ul>
<h1>Things not worked</h1>
<p>I guessed that the most important thing in this competition was combining the student summary and prompt text effectively. Most of the time I focused on this direction, but none of these attempts brought significant improvements (they reached CV scores similar to the final models, but even ensembling them didn't give big improvements).</p>
<ul>
<li>After passing the summary and prompt_text through the backbone, pool the summary and prompt_text separately, then take BERT-style cross attention between them<ul>
<li>I also tried stacking the cross-attention and BERT layers multiple times like <a href="https://arxiv.org/abs/2112.03857" target="_blank">this paper</a>, but it did not work</li></ul></li>
<li>Pass summary and prompt text separately to the backbone, then concatenate those hidden states and pass to the head</li>
<li>re initialization of layers</li>
<li>freezing layers</li>
<li>ensemble with other backbones (debrta-large, longformer)</li>
<li>pretty long max length (4096)</li>
</ul> |
CommonLit - Evaluate Student Summaries | 49th Place Solution [Single Model] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You need to add these special tokens to the tokenizer first:<br>
<code>all_special_tokens = ["[SUMMARY_START]", "[SUMMARY_END]"]</code></p>
<pre><code>tokenizer = AutoTokenizer.from_pretrained(
    config.architecture.model_name,
    additional_special_tokens=all_special_tokens,  # register the new special tokens
)
</code></pre>
<p>First I save the token ids like <br>
<code>config.text_start_token = tokenizer.convert_tokens_to_ids("[SUMMARY_START]")
config.text_end_token = tokenizer.convert_tokens_to_ids("[SUMMARY_END]")</code></p>
<p>and while processing outputs I used </p>
<pre><code># variable names reconstructed from the masking logic (the originals were lost in formatting)
y_preds = model(inputs)
input_ids = inputs["input_ids"]
start_tokens = (input_ids == config.text_start_token).nonzero()[:, 1]
end_tokens = (input_ids == config.text_end_token).nonzero()[:, 1]
mask = torch.arange(input_ids.size(1)).expand(input_ids.size(0), -1).to(input_ids.device)
mask = (mask >= start_tokens.unsqueeze(1)) & (mask <= end_tokens.unsqueeze(1))
y_preds = y_preds * mask.unsqueeze(-1).float()  # zero out tokens outside the summary span
</code></pre>
<p>Based on the problem statement, the task was to rate the summary text based on the given prompt texts. The summaries are written by students based on the prompt_text, and the prompt_text is long [more than 3k tokens in private for some prompts]. Also, in my experiments adding the full prompt_text led to a better CV, so I decided to go with a larger max_len. Perhaps I should have compromised on max_len and added more variability through a multi-model ensemble; that could have given me better results on the private LB. </p>
<p>Thanks <a href="https://www.kaggle.com/aman1391" target="_blank">@aman1391</a> and congratulations to you and your team for gold finish.</p>
<p>Thanks a lot to the hosts and Kaggle for hosting this interesting competition. I personally had a great time working on this competition.</p>
<h2>Summary:</h2>
<p>My solution primarily relies on a single DeBERTa-v3-large model with a custom MeanPooling layer. I used all the text columns <code>text</code>, <code>prompt_text</code>, <code>prompt_question</code> and <code>prompt_title</code> for training. Trained with max_len 1500 and inferred with 2048. </p>
<h2>Input Processing:</h2>
<p>To prepare the input data, I added two special tokens, '[SUMMARY_START]' and '[SUMMARY_END]', before and after the summary text, respectively. Then, I appended all the other prompt data with a '[SEP]' token. The final input text structure looked like this:<br>
'[SUMMARY_START]text[SUMMARY_END][SEP]prompt_text[SEP]prompt_question[SEP]prompt_title[SEP]'.</p>
<h2>Training Details:</h2>
<p>The DeBERTa-v3-large model was trained with a maximum sequence length of 1500 using the input structure mentioned above. I also modified the MeanPooling layer to calculate the mean only between the '[SUMMARY_START]' and '[SUMMARY_END]' tokens.</p>
<h2>CV - LB Details</h2>
<ul>
<li>CV - 0.474 Public LB - 0.43 Private LB 0.469</li>
</ul>
<h2>Inference Time:</h2>
<p>It's important to note that this single model has a significant inference time of approximately 8.5 hours.</p>
<h2>What Worked:</h2>
<ul>
<li>Inferencing on larger Max Len</li>
<li>Averaging multiple checkpoints per epoch (a sketch of uniform weight averaging follows below).</li>
</ul>
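<p>The checkpoint averaging referred to above can be done by uniformly averaging the saved weights ("model soup" style); a minimal sketch, assuming each path is a saved <code>state_dict</code>:</p>
<pre><code>import torch

def average_checkpoints(paths):
    """Uniformly average the floating-point weights of several checkpoints."""
    states = [torch.load(p, map_location="cpu") for p in paths]
    avg = dict(states[0])
    for k, v in avg.items():
        if torch.is_floating_point(v):
            avg[k] = sum(s[k] for s in states) / len(states)
    return avg

# model.load_state_dict(average_checkpoints(["ckpt_a.pt", "ckpt_b.pt"]))
</code></pre>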
<h2>What Didn't Work:</h2>
<ul>
<li>Implementing a second-stage model (LGBM/XGBOOST).</li>
<li>Text cleaning.</li>
<li>Post-processing techniques, such as target smoothing.</li>
</ul>
<h2>Training Code</h2>
<p><a href="https://github.com/rohitsingh02/CommonLit-ESS" target="_blank">https://github.com/rohitsingh02/CommonLit-ESS</a></p>
<h2>Inference Kernel</h2>
<p><a href="https://www.kaggle.com/code/rohitsingh9990/commonlit-ensemble-new-v2?scriptVersionId=145818122" target="_blank">https://www.kaggle.com/code/rohitsingh9990/commonlit-ensemble-new-v2?scriptVersionId=145818122</a></p> |
CommonLit - Evaluate Student Summaries | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/husseinah95" target="_blank">@husseinah95</a> Thank you very much!! </p>
<p><a href="https://www.kaggle.com/takamichitoda" target="_blank">@takamichitoda</a> <br>
Thank you very much! Your posts on discussion were very helpful. Again, I have looked at your blog many times! Thank you every time!</p>
<p><a href="https://www.kaggle.com/hidebu" target="_blank">@hidebu</a> <br>
Thank you for question.<br>
I show the process as follows:<br>
1) We trained 4kfold. We can get 4 model weights and calculate cv.<br>
2) If the cv result of 1) is good, we trained using all train data. In this train, we use the condition which is the same as 1) including the n-epoch, lr and so on.<br>
3) for inference, we used the one model which was made in 2). We did not use the model weights which was made in 1).</p>
<p><a href="https://www.kaggle.com/rashmibanthia" target="_blank">@rashmibanthia</a> Thank you for comment !! This is a good question. You are right. To be precise, we did several validation evaluations per epoch at 4kfold training, at which point we calculated the oof after the 4kfold training and stopped learning the fulltrain at the point where the oof was the highest. However, many of the oofs were stable at the very end of the last epoch, and we often let them run to the end.</p>
<p><a href="https://www.kaggle.com/judith007" target="_blank">@judith007</a> Thank you for your comment! Instead, we used the cv when we created the same model with 4kfold! (4kfold model is not used for inference)</p>
<p><a href="https://www.kaggle.com/nlchen" target="_blank">@nlchen</a> Thank you for comment ! we used the cv when we created the same model with 4kfold, and hill climb method.</p>
<p><a href="https://www.kaggle.com/harshit92" target="_blank">@harshit92</a> I'm most happy to receive congratulations from you. You are the key person in my kaggle life.<br>
Let's do it together again someday.</p>
<p><a href="https://www.kaggle.com/takai380" target="_blank">@takai380</a> Thank you very much. This is my personal opinion, but maybe one of the reason why we did not shake down is because we did not use the model without prompt text. In our case, the model without prompt text did raise the cv by gbdt, but it did not raise the cv like prompt id 3b9047 compared to that with prompt text. If there were more prompts like 3b9047 in the private test, we would have expected the score to go down.</p>
<p><a href="https://www.kaggle.com/judith007" target="_blank">@judith007</a> yes. you're right!</p>
<p><a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a> Thank you so much!!</p>
<p>Thank you very much for organizing such an interesting competition. I am greatly thankful to the hosts and the Kaggle staff.</p>
<p>This competition turned out to be exactly what we had imagined and we are really happy to have won 4th place.</p>
<p>And thank you so much <a href="https://www.kaggle.com/kurokurob" target="_blank">@kurokurob</a> for teaming up with me and for the daily discussions and experiments. Congratulations on your 4th GOLD MEDAL.<br>
(And I will become competition Grand Master!!!).<br>
We are a great team!</p>
<h1>1. Summary</h1>
<p>Our best private solution is 1 fulltrain x 7 models ensemble.</p>
<p>This competition had only 4 prompts in the training data and with the information that 122 prompts were in the test data, one could imagine a big shake. Also, the public lb was very unstable. These things made us think that robustness is important, not only for cv.</p>
<p>I used all 4 kfold models at first. Especially for the models with prompt text, because I had to increase the maxlen to increase the score, the inference time grew and I could not ensemble many other models (only 2-3 models could be included).</p>
<p>Just then, kuro and I merged teams, and we mixed in the fulltrain idea he had been working on. By replacing 4 kfold models with 1 fulltrain model, we arrived at the idea of compressing the information of the 4 kfolds into 1 fulltrain and ensembling more models.</p>
<p>We imagined that by doing so, we would prevent the variation and reduction of scores on each prompt in the private test data and, in total, get a better score.</p>
<p>As a result, we believe we were able to prevent a shake down and finish in this position!</p>
<table>
<thead>
<tr>
<th>sub</th>
<th>type</th>
<th>models</th>
<th>cv</th>
<th>public lb</th>
<th>private lb</th>
<th>rank</th>
<th>comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>sub1</td>
<td>best lb</td>
<td>1 fulltrain × 9 models</td>
<td>0.4679</td>
<td>0.41991</td>
<td>0.45785</td>
<td>11</td>
<td>using gbdt and expand of inference-maxlen</td>
</tr>
<tr>
<td>sub2</td>
<td>best cv</td>
<td>1 fulltrain × 7 models</td>
<td>0.4639</td>
<td>0.42979</td>
<td>0.45515</td>
<td>4</td>
<td>just NLP result(expand of inference-maxlen)</td>
</tr>
<tr>
<td>sub3</td>
<td>insurance</td>
<td>1 fulltrain × 8 models</td>
<td>0.4693</td>
<td>0.43855</td>
<td>0.45597</td>
<td>6</td>
<td>just NLP result</td>
</tr>
</tbody>
</table>
<p>The best cv in sub2 is our best private, and that is the main explanation below.</p>
<h1>2. training</h1>
<p>Each of our models in sub2 is shown below.</p>
<table>
<thead>
<tr>
<th>modelno</th>
<th>InfNo</th>
<th>model</th>
<th>training<br>maxlen</th>
<th>inference <br>maxlen</th>
<th>freezing</th>
<th>layerwise</th>
<th>input</th>
<th>pooling</th>
<th></th>
<th></th>
<th>2nd loss</th>
<th>preprocess</th>
<th>cv of 4kfold<br> earlystop</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>original prompt</td>
<td>cls</td>
<td>attention of <br>original prompt</td>
<td>mean of <br>text part only</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>91</td>
<td>deberta-v3-large</td>
<td>768</td>
<td>1500</td>
<td>✓</td>
<td></td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0.4818</td>
</tr>
<tr>
<td>2</td>
<td>22</td>
<td>deberta-v3-large</td>
<td>1050</td>
<td>950</td>
<td></td>
<td>✓</td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td>0.4855</td>
</tr>
<tr>
<td>3</td>
<td>63</td>
<td>deberta-v3-large</td>
<td>850</td>
<td>1500</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td>0.4984</td>
</tr>
<tr>
<td>4</td>
<td>72</td>
<td>deberta-v3-large</td>
<td>868</td>
<td>1024</td>
<td></td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td>Arcface</td>
<td>✓</td>
<td>0.4919</td>
</tr>
<tr>
<td>5</td>
<td>2,3</td>
<td>deberta-v3-large</td>
<td>868</td>
<td>868</td>
<td></td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0.4880</td>
</tr>
<tr>
<td>6</td>
<td>259</td>
<td>deberta-v3-large-squad2</td>
<td>768</td>
<td>1500</td>
<td>✓</td>
<td></td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0.4952</td>
</tr>
<tr>
<td>7</td>
<td>331</td>
<td>deberta-v3-large-squad2</td>
<td>1050</td>
<td>950</td>
<td></td>
<td>✓</td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td>0.4993</td>
</tr>
</tbody>
</table>
<p>The details are described below.</p>
<h2>2.1 model no1 : basemodel</h2>
<p>This model is a typical base for our model.</p>
<p>First, two inputs are prepared and put into the tokenizer as a pair (an example from the dataset class is shown below). The same result can be obtained by connecting them with [SEP] even if you do not put them in as a pair.</p>
<pre><code># attribute names and separators reconstructed (lost in formatting)
self.text = self.df["text"].values
self.prompt = self.prompt_title + " " + self.prompt_question + " " + self.prompt_text
tokens = tokenizer(
    self.text,
    self.prompt,
    ...
)
</code></pre>
<p>Now all we have to do is put this into the model, increase the maxlen, and pass the CLS token output through the fc layer.</p>
<h2>2.2 model no2 : original prompt</h2>
<p>In this model, an original prompt was created and used for input.</p>
<pre><code># names and separators reconstructed (lost in formatting); original_prompt is the fixed
# instruction string ("Evaluating the summarized ...") prepended to the student summary
self.text = original_prompt + self.text
self.prompt = prompt_title + " " + prompt_question + " " + prompt_text
tokens = tokenizer(
    self.text,
    self.prompt,
    ・・・
)
</code></pre>
<p>Then, only the part corresponding to the original prompt (Evaluating the summarized…) is attention pooled (an example of the model is shown below).</p>
<pre><code>## init  (names and slice bounds reconstructed; prompt_len is the token length of the
##        original prompt part that gets attention-pooled)
self.pool = AttentionPooling(self.config.hidden_size)
...
## forward
out = self.model(ids, mask, token_type_ids)
out = out[0][:, :prompt_len, :]
out = self.pool(out, mask[:, :prompt_len])
output = self.fc(out)
</code></pre>
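<p><code>AttentionPooling</code> is referenced but not defined in the snippet above; a common definition looks like the following (my sketch, not necessarily the exact layer used):</p>
<pre><code>import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Learned attention weights over tokens, applied with the attention mask."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, hidden_states, attention_mask):
        scores = self.attention(hidden_states).squeeze(-1)        # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, -1e4)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return (weights * hidden_states).sum(dim=1)               # (batch, hidden)
</code></pre>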
<h2>2.3 model no4 : using 2nd loss and preprocess</h2>
<p>As discussed in the discussion and notebook, the train data can be classified into 38 types. I thought I could make good use of this, so I included arcface as an auxiliary loss.<br>
I tried to make good use of the embeddings in the end, but could not. However, this model did contribute to the diversity of the ensemble.</p>
<p>Also, as a preprocess, I made sure to include a space after each period and comma; after doing EDA, I noticed that if a sentence comes after a period or comma without a space, it is split differently by the tokenizer. However, I don't think this affected the score significantly.</p>
<h2>2.4 Extending the inference max length</h2>
<p>Scores have been greatly improved by increasing the inference length over that of training. An example is shown below. The following was trained with maxlen 850, but by setting maxlen to 1500 during inference, cv and public lb improved (maybe not so much contribution to private).</p>
<table>
<thead>
<tr>
<th>traing<br>maxlen</th>
<th>inference<br>maxlen</th>
<th>39c16e<br>fold0</th>
<th>814d6b<br>fold1</th>
<th>ebad26<br>fold2</th>
<th>3b9047<br>fold3</th>
<th>cv</th>
</tr>
</thead>
<tbody>
<tr>
<td>850</td>
<td>850</td>
<td>0.4505</td>
<td>0.5595</td>
<td>0.5051</td>
<td>0.5024</td>
<td>0.4984</td>
</tr>
<tr>
<td>850</td>
<td>1024</td>
<td>0.4524</td>
<td>0.5590</td>
<td>0.4836</td>
<td>0.5018</td>
<td>0.4927</td>
</tr>
<tr>
<td>850</td>
<td>1500</td>
<td>0.4527</td>
<td>0.5588</td>
<td>0.4614</td>
<td>0.5013</td>
<td>0.4867</td>
</tr>
</tbody>
</table>
<p>However, some models may degrade. The insurance submission (sub3) consisted only of models that did not expand the inference max length. As a result, CV and public were affected, but private was almost unaffected.</p>
<h1>3. inference (ensemble)</h1>
<p>For the best CV sub, we initially post-processed using GBDT (LGBM + CatBoost + XGBoost), but in the end the ensemble without GBDT resulted in a better CV, and the submission that produced the best private score did not use post-processing with GBDT. We also considered Nelder-Mead and Optuna, but we did not use them because of the concern that they would definitely overfit. So we used a simple mean ensemble built with a hill-climb technique (we also added a fulltrain with a different seed to further increase robustness, using the extra inference time).</p>
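<p>The hill climb we refer to is a greedy weight search on the OOF predictions; a minimal sketch is below, assuming <code>oof_preds</code> is a list of per-model arrays of shape (n_samples, 2), <code>y_true</code> the matching targets, and an illustrative step size.</p>
<pre><code>import numpy as np

def mcrmse(y_true, y_pred):
    return np.mean(np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0)))

def hill_climb(oof_preds, y_true, iters=50, step=0.1):
    preds = np.asarray(oof_preds)                 # (n_models, n_samples, n_targets)
    weights = np.zeros(len(preds))
    weights[np.argmin([mcrmse(y_true, p) for p in preds])] = 1.0
    best = mcrmse(y_true, np.tensordot(weights / weights.sum(), preds, axes=1))
    for _ in range(iters):
        improved = False
        for i in range(len(preds)):
            trial = weights.copy()
            trial[i] += step                      # try giving model i a bit more weight
            score = mcrmse(y_true, np.tensordot(trial / trial.sum(), preds, axes=1))
            if score < best:
                best, weights, improved = score, trial, True
        if not improved:
            break
    return weights / weights.sum(), best
</code></pre>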
<p>final cv(4kfold earlystopping result) : 0.46394 , public : 0.42979, private 0.45515 4th</p>
<p>※ Furthermore, as insurance (sub3), we kept the inference max lengths the same as in training. The result was as follows.</p>
<p>final cv 0.4693 , public : 0.43855, private 0.45597 6th.</p>
<p>In this result, CV and LB were bad, but private was good, which shows how important the robustness of the models was in this competition.</p>
<h1>4. Not working for us</h1>
<ul>
<li>mlm</li>
<li>awp</li>
<li>svr</li>
<li>other model</li>
<li>regression from classification model</li>
<li>other many many things…</li>
</ul>
<h1>5. Acknowledgements</h1>
<p>We could not have achieved these results on our own. We were greatly influenced by those who we have collaborated with in the past, and we are grateful for their contributions. We would also like to express our sincere gratitude to those who have shared their knowledge and insights through previous competitions. Thank you very much.</p>
<p>We would especially like to thank the following people for their help in this competition! Thank you very much.</p>
<ul>
<li><p>Fulltrain : Team Hydrogen <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369578" target="_blank">ref1</a> <a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347536" target="_blank">ref2</a>, Raja Biswas <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/424330#2354068" target="_blank">ref3</a></p></li>
<li><p>Freezing : Takamichi Toda <a href="https://www.kaggle.com/takamichitoda" target="_blank">@takamichitoda</a> <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/433754#2405543" target="_blank">ref4</a></p></li>
<li><p>37-38 classification : MOTH <a href="https://www.kaggle.com/alejopaullier" target="_blank">@alejopaullier</a> <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/430705" target="_blank">ref5</a>, <br>
Alexander Chervov <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a> <a href="https://www.kaggle.com/code/alexandervc/commonlit-levenshtein-distances?scriptVersionId=141542492&cellId=23" target="_blank">ref6</a></p></li>
<li><p>Postprocess : nogawanogawa <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">ref7</a></p></li>
<li><p>Model selection(feedback3) : CHRIS DEOTTE <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369609" target="_blank">ref8</a>, TOM <a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a> <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369567" target="_blank">ref9</a></p></li>
</ul>
<h1>6. team member</h1>
<ul>
<li><a href="https://www.kaggle.com/chumajin" target="_blank">@chumajin</a></li>
<li><a href="https://www.kaggle.com/kurokurob" target="_blank">@kurokurob</a> </li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Fa311809edebd32836455da5b13e1dc1e%2Fkuromajin.jpg?generation=1697084418755389&alt=media" alt=""></p>
<h1>7. code</h1>
<p>inference code : <a href="https://www.kaggle.com/code/chumajin/commonlit2-4th-place-inference" target="_blank">https://www.kaggle.com/code/chumajin/commonlit2-4th-place-inference</a> (This is the cleaned code. Same score as the submission.)</p>
<p>training code(chumajin part) : <a href="https://github.com/chumajin/kaggle-commonlit" target="_blank">https://github.com/chumajin/kaggle-commonlit</a><br>
training code(kuro_B part) : <a href="https://github.com/kurokuroB/kaggle-commonlit2-4th_place_solution-my_training_code" target="_blank">https://github.com/kurokuroB/kaggle-commonlit2-4th_place_solution-my_training_code</a></p>
<p><a href="https://www.kaggle.com/oluwatobibetiku" target="_blank">@oluwatobibetiku</a> Thank you for comment!!</p>
<p><a href="https://www.kaggle.com/ivanaerlic" target="_blank">@ivanaerlic</a> Thank you very much!! I was really afraid to just shake down! Competition GM has been a tough road, but it feels great to have accomplished it. See you again at one of the competitions!</p>
<p><a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> I am specially glad to be congratulated by you. Your post about fulltrain has made me more aware of fulltrain and this strategy. Thank you very much! And congrats on your 6th place at LLM competition! I think it must have been tough days for you. See you again at one of the competitions! Looking forward to it!</p> |
CommonLit - Evaluate Student Summaries | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! I trained for 4 epochs.</p>
<p>I reduced memory usage by applying the following three things. </p>
<ul>
<li>Freeze embedding and the first 18 layers</li>
<li>fp16</li>
<li>gradient_checkpointing</li>
</ul>
<p>This allows deberta-v3-large (max_len: 1536, batch_size: 8) to train within 16 GiB of GPU memory.</p>
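<p>A minimal sketch of those three memory savers for a Hugging Face <code>deberta-v3-large</code> regressor; the attribute names follow the standard <code>DebertaV2ForSequenceClassification</code> layout, but treat the rest (e.g. the learning rate) as an illustration rather than my exact training script:</p>
<pre><code>import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", num_labels=2)   # content + wording

# 1) freeze the embeddings and the first 18 of the 24 encoder layers
model.deberta.embeddings.requires_grad_(False)
for layer in model.deberta.encoder.layer[:18]:
    layer.requires_grad_(False)

# 2) gradient checkpointing trades extra compute for activation memory
model.gradient_checkpointing_enable()
model.enable_input_require_grads()   # keeps checkpointed segments differentiable with frozen embeddings

# 3) fp16 mixed precision via torch.cuda.amp
scaler = torch.cuda.amp.GradScaler()
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5)  # lr is illustrative

def training_step(batch, labels):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.smooth_l1_loss(model(**batch).logits, labels)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
</code></pre>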
<p>Thank you! And congratulations on winning the gold medal in the intense LLM competition!</p>
<p>First of all, I would like to thank the host for hosting such a wonderful competition.Also thanks to kagglers for sharing valuable information in notebooks and discussions.</p>
<p>I'll briefly share my approach that led to the 5th place finish. Honestly, I didn't use any special techniques, so I believe the shake-up was largely due to luck. However, I'm extremely pleased with my first solo gold.</p>
<h3>Overview</h3>
<p>The key points of my approach are mainly threefold.</p>
<ul>
<li>Use all of summary, question, title, and prompt_text.</li>
<li>Use a single deberta v3 large model, improved with a longer max_len, layer freezing, etc.</li>
<li>An ensemble of deberta v3 large and LightGBM.</li>
</ul>
<h3>Approach</h3>
<ul>
<li><p>deberta v3 large</p>
<ul>
<li>Input all of summary + question + title + prompt_text.</li>
<li>No preprocessing.</li>
<li>max_len: 1536.</li>
<li>Freeze embedding and the first 18 layers. </li>
<li>No dropout.</li>
<li>Train content and wording simultaneously.</li></ul></li>
<li><p>LightGBM</p>
<ul>
<li>Uses mostly the same features as public notebooks.</li></ul></li>
<li><p>ensemble</p>
<ul>
<li>Weighted average of deberta and LGB.</li>
<li>Weights optimized using Nelder-Mead (a sketch follows this list).</li></ul></li>
</ul>
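<p>A minimal sketch of the Nelder-Mead weight search on out-of-fold predictions, using <code>scipy.optimize.minimize</code>; the prediction arrays and target frame below are placeholders:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def mcrmse(y_true, y_pred):
    # column-wise RMSE averaged over content and wording
    return np.mean(np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0)))

def blend(weights, preds):
    w = np.clip(weights, 0, None)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, preds))

oof_preds = [deberta_oof, lgbm_oof]            # placeholders: (n_samples, 2) arrays
y = train[["content", "wording"]].to_numpy()   # placeholder training targets

res = minimize(
    lambda w: mcrmse(y, blend(w, oof_preds)),
    x0=np.ones(len(oof_preds)) / len(oof_preds),
    method="Nelder-Mead",
)
best_weights = np.clip(res.x, 0, None) / np.clip(res.x, 0, None).sum()
final_pred = blend(best_weights, oof_preds)
</code></pre>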
<h3>validation</h3>
<ul>
<li>strategy<ul>
<li>groupkfold (group = prompt_id)</li></ul></li>
<li>cv score<ul>
<li>deberta v3 large :0.4816</li>
<li>LGBM : 0.5513</li>
<li>ensemble : 0.4748</li></ul></li>
</ul>
<h3>What Didn't Work</h3>
<ul>
<li><p>Models other than deberta v3 large.</p>
<ul>
<li>When including prompt_text as input, other models (including deberta-v3-base) had much poorer performance.</li></ul></li>
<li><p>Additional training data with LLM</p>
<ul>
<li>Considered data augmentation by creating summaries from prompt_texts using LLM.</li>
<li>I used the scraped prompt_text from commonlit.</li>
<li>Created ground truth using pseudo labels and added to training, but it did not improve.</li></ul></li>
<li><p>LGB-based stacking</p>
<ul>
<li>Tried stacking using deberta's oof, similar to public notebooks, but it did not improve in my model.</li>
<li>Hence, changed to the weighted average of each model as mentioned above.</li></ul></li>
<li><p>Text preprocessing.</p></li>
<li><p>Inputting other summaries from the same prompt.</p>
<ul>
<li>Inspired by the PPPM competition's magic, considered concatenating other summaries from the same prompt, but it did not improve.</li></ul></li>
<li><p>etc…</p></li>
</ul>
<p>Thank you for your comment.<br>
And congrats to you on 4th place and Grand Master !</p> |
CommonLit - Evaluate Student Summaries | 61th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>My First Kaggle Competition</h1>
<p>I would like to express my gratitude to the Kaggle staff and competition hosts for organizing this event. This document outlines the tactics that were successful, and those that weren't as effective in my experiments. Without the remarkable notebook at <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect</a> and various insights from numerous discussions, my work wouldn't have been as comprehensive. Credit goes to all those contributors who I, unfortunately, do not recall by name.</p>
<p>My solution involved an ensemble of 17 models; 11 deberta-v3-large models and 6 deberta-v3-base models, each with different learning rates. All of this work was conducted on a workstation with an i9-12900k and a 3090 24G system.</p>
<h1>What Worked</h1>
<ol>
<li><p>Setting hidden_dropout_prob=0 [High Impact]<br>
Given that this is a regression problem, using dropout creates a discrepancy between the output scales during inference and training (a config sketch follows this list).</p></li>
<li><p>Using a fusion model for wording and content predictions [Decreases training and inference time without impacting performance]<br>
The original notebook used separate models for these tasks, which extends training time without any significant improvement in inference performance.</p></li>
<li><p>Leveraging the LightGBM model with enhanced feature engineering [High Impact]<br>
I put more emphasis on feature engineering, as illustrated in this notebook: <a href="https://www.kaggle.com/code/wchan757/achieving-lb-0-47-with-just-lightgbm-detail" target="_blank">https://www.kaggle.com/code/wchan757/achieving-lb-0-47-with-just-lightgbm-detail</a>.</p></li>
<li><p>Using an ensemble of deberta-v3-large and deberta-v3-base models with varying learning rates [High Impact]<br>
A lower learning rate (~8e-6) works better for deberta-v3-large models, while a higher one (~2e-5) is more effective for deberta-v3-base models.</p></li>
<li><p>Maintaining uniform ensemble weights [High Impact]<br>
Optimizing the model weights using Optuna tends to overfit the ensemble model.</p></li>
</ol>
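<p>A minimal sketch of point 1, disabling hidden dropout through the Hugging Face config (as noted in the next section, zeroing the attention dropout as well hurt my scores):</p>
<pre><code>from transformers import AutoConfig, AutoModelForSequenceClassification

config = AutoConfig.from_pretrained("microsoft/deberta-v3-large", num_labels=2)
config.update({
    "hidden_dropout_prob": 0.0,   # keeps the regression output scale consistent between train and eval
    # "attention_probs_dropout_prob": 0.0,  # tried too, but it hurt the scores, so left at default
})
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", config=config)
</code></pre>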
<h1>What Didn't Work</h1>
<ol>
<li><p>Using different weights for different models with Optuna<br>
While this improves CV scores, it leads to overfitting in the LB and PB tests.</p></li>
<li><p>Cleaning the text<br>
I attempted this late in the process, and it led to overfitting in both the PB and LB tests. I'm unsure why this happened and appreciate anyone willing to share their successful experiences with text cleaning.</p></li>
<li><p>Setting attention_probs_dropout_prob = 0<br>
This ended up deteriorating the model's scores.</p></li>
</ol>
<h1>Some notes and tips</h1>
<ol>
<li>Read through the discussion section and similar past competitions. I found most of my ideas there and figured out what works and what doesn't.</li>
<li>Version and document your experiments thoroughly to keep track of what works.</li>
<li>Don't ensemble models too soon, because changing the architecture later will require changing every model in the ensemble.</li>
<li>It's fine if you get stuck for a few days. Just take a rest and try again.</li>
</ol>
<p>I am very lucky that my best notebook on the LB was also my best notebook on the PB.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>LB</th>
<th>PB</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-base 4-fold (only change: hidden dropout = 0)</td>
<td>0.44565</td>
<td>0.48406</td>
</tr>
<tr>
<td>Ensemble deberta-v3-large 4-fold with 2 base</td>
<td>0.44073</td>
<td>0.48301</td>
</tr>
<tr>
<td>Ensemble model (17 models, weights tuned with Optuna)</td>
<td>0.43694</td>
<td>0.47308</td>
</tr>
<tr>
<td>Ensemble model (17 models with more feature engineering)</td>
<td><strong>0.43551</strong></td>
<td><strong>0.47284</strong></td>
</tr>
</tbody>
</table> |
CommonLit - Evaluate Student Summaries | 61th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>61th place solution</h1>
<p>First of all, I'd like to thank the competition host for organizing such an intriguing contest and my team members. <br>
Many of the ideas for this solution were contributed by my teammates.</p>
<h3>Summary</h3>
<p>The pipeline of our solution is as follows <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3836857%2F03a0f6f09ed783bbdbc1cdc759ab0fca%2Fimage.png?generation=1697118683872550&alt=media" alt=""><br>
This solution was inspired by this <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">notebook</a>. Thanks for sharing this great notebook.</p>
<h3>Strategy</h3>
<p>The public leaderboard was derived from a single prompt, but based on <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/443223" target="_blank">discussion</a>, the private leaderboard encompasses over 100 prompts. This observation led me to place a heightened emphasis on cross-validation (CV). However, the CV too was based on merely four prompts, making it somewhat untrustworthy.<br>
Given the uncertainty in both CV and the public leaderboard, I opted for a strategy that emphasizes robustness. Consequently, I integrated various backbones and input texts into the ensemble, optimizing to reduce the inference time. Furthermore, considering that the evaluation metric was RMSE, ensuring prediction accuracy was paramount. Therefore, even models with suboptimal individual scores were incorporated into the ensemble.</p>
<p>I couldn't effectively utilize the prompt_text, so I shifted my strategy to shorten the input length and reduce inference time. </p>
<h3>BERT Model Training</h3>
<p>Training used Nakama's excellent <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">notebook</a> on Feedback3.<br>
Common to all:<br>
epochs: 3<br>
scheduler: cosine<br>
no dropout</p>
<table>
<thead>
<tr>
<th>backbone</th>
<th>text</th>
<th>maxlen</th>
<th>loss</th>
<th>freeze (layers)</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-base</td>
<td>title question summary</td>
<td>512</td>
<td>SmoothL1Loss</td>
<td>4</td>
<td>0.54756</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>question summary</td>
<td>512</td>
<td>SmoothL1Loss</td>
<td>12</td>
<td>0.53988</td>
</tr>
<tr>
<td>deberta-xlarge</td>
<td>summary title question</td>
<td>512</td>
<td>SmoothL1Loss</td>
<td>2</td>
<td>0.54400</td>
</tr>
<tr>
<td>electra-large</td>
<td>question summary</td>
<td>1024</td>
<td>RMSE</td>
<td>2</td>
<td>0.55925</td>
</tr>
<tr>
<td>muppet-roberta-large</td>
<td>question summary</td>
<td>512</td>
<td>SmoothL1Loss</td>
<td>4</td>
<td>0.56337</td>
</tr>
</tbody>
</table>
<h3>GBDT</h3>
<p>LightGBM, CatBoost, and XGBoost training were all parameter optimized with optuna. <br>
The amount of features is based on this <a href="https://www.kaggle.com/code/wchan757/achieving-lb-0-47-with-just-lightgbm-detail" target="_blank">notebook</a>.</p>
<h3>Did not work</h3>
<ul>
<li>awp</li>
<li>fgm</li>
<li>SVR<ul>
<li>Chris's feedback3 notebook: <a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a></li></ul></li>
<li>pseudo (might have worked)<ul>
<li>Single-model CV with feedback3 and old CommonLit pseudo-labels gained more than 0.01 on every backbone, but the result got worse when fed into GBDT, so it was not used.</li></ul></li>
<li>text cleaning<ul>
<li>Could capitalization and STOPWORD have also been a feature of wording?</li></ul></li>
<li>wording anomaly detection<ul>
<li>By removing data with extremely poor wording, the RMSE deteriorated significantly, so an anomaly detection model was created specifically for this, but it was abandoned due to lack of progress in learning.</li></ul></li>
<li>FlanT5 encoder<ul>
<li>The encoder part was taken from FlanT5, an encoder-decoder model often used in actual summarization, and used in the backbone, but the CV was 0.8x, which was very poor, so it was not used.</li></ul></li>
</ul>
<h3>Important Citations</h3>
<ul>
<li><a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">2stage pipeline</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">feedback3 train notebook</a></li>
<li><a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/260729" target="_blank">no dropout</a></li>
<li><a href="https://www.kaggle.com/code/wchan757/achieving-lb-0-47-with-just-lightgbm-detail" target="_blank">get features</a></li>
</ul> |
CommonLit - Evaluate Student Summaries | 627th Solution for CommonLit ESS competition- What doesn't kill you makes you stronger | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was the first NLP task & first Kaggle competition that I worked really hard on.<br>
Even though my final result is not satisfying, I have learned a lot through the journey.<br>
<br></p>
<h1>For those who didn't join the competition.</h1>
<p>You can check the competition overview <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">here</a>.<br>
You can check the competition dataset <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">here</a>.</p>
<h1>Overview of Approach</h1>
<h2>Spell Checking Tools</h2>
<p>In the beginning, I started off by modifying the notebook created by other competitors (<code>debertav3base</code>).<br>
Then, I realized that I wouldn’t be developing myself in this field just by copying others’ work. (I had to <code>copy & edit</code> later on anyway, trying to achieve a high score.)<br>
So, I started off by developing my own preprocessing method, focusing on <a href="https://www.kaggle.com/code/jasonheesanglee/eda-spellcheck-tool-comparison" target="_blank">misspelled words</a>.<br>
Here, I found that SymSpell worked better on our dataset than any other library.<br>
Later (now), I understood that this wasn’t the key to this competition.<br>
<br></p>
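<p>For reference, a minimal sketch of a SymSpell-based spell check, assuming the <code>symspellpy</code> package and its bundled English frequency dictionary (parameters here are illustrative, not necessarily my exact settings):</p>
<pre><code>import pkg_resources
from symspellpy import SymSpell, Verbosity

sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
dict_path = pkg_resources.resource_filename(
    "symspellpy", "frequency_dictionary_en_82_765.txt")
sym_spell.load_dictionary(dict_path, term_index=0, count_index=1)

def count_misspellings(text):
    """Count words whose top SymSpell suggestion differs from the word itself."""
    misses = 0
    for w in (w for w in text.lower().split() if w.isalpha()):
        suggestions = sym_spell.lookup(w, Verbosity.TOP, max_edit_distance=2)
        if not suggestions or suggestions[0].term != w:
            misses += 1
    return misses

print(count_misspellings("The studnt summarized the pasage quickly"))
</code></pre>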
<h2>MobileBERT</h2>
<p>After taking a closer look at the Competition Overview, I decided to try out <code>MobileBERT</code> with the concept of helping teachers located in areas where there is less or no internet connection.<br>
My journey with <code>MobileBERT</code> is explained in this <a href="https://www.kaggle.com/code/jasonheesanglee/en-mobilebert-implementation" target="_blank">notebook</a>.<br>
<strong>TL;DR</strong><br>
I first tried to apply <code>MobileBERT</code> on the code that I was working on with <code>debertav3base</code>.<br>
However, it didn’t work.<br>
After reading the <a href="https://arxiv.org/pdf/2004.02984.pdf" target="_blank">research paper</a>, I understood that the teacher model was needed to educate MobileBERT on the task.<br>
However, in terms of the competition, it didn’t give a good result.<br>
<br></p>
<h2>All models</h2>
<p>So I had to move back to <code>debertav3base</code> + <code>LGBM</code>.<br>
I finally got an OK score, but I couldn’t stop trying other methods. (Check this <a href="https://www.kaggle.com/code/jasonheesanglee/lgbm-autocorrect-allmodels" target="_blank">notebook</a> for my record)<br>
So, I have tried all different models in this <a href="https://www.kaggle.com/datasets/kozodoi/transformers" target="_blank">Dataset</a>.<br>
Most models worked fine with the logic I had but didn’t give as good a score as expected. (There must be a reason why other people don’t go with them, haha.)<br>
The only thing I regret is that I couldn’t try with <code>T5</code>, which didn’t work on my logic (Of course it wouldn’t work.)<br>
<br></p>
<h2>Modifying Preprocessing</h2>
<p>In the last few days, I tried to focus on developing the preprocessing part & the <code>LGBM</code> part of the code.
<br>
For the preprocessing part, I have <a href="https://www.kaggle.com/code/jasonheesanglee/datapreprocess" target="_blank">separated this part</a> from the submission notebook to save some time with submission and added SymSpell as a spell-checking tool.<br>
I also used Optuna to find the best parameters for LGBM.<br>
It worked fine on the Public Leaderboard (0.44) but unfortunately didn’t work on the Private Leaderboard.<br>
<br></p>
<p>At the end of the day, I am still alive and have learned a lot through this competition.<br>
On top of that, I am happy that I have more opportunities to challenge myself!</p>
<p>Thank you <a href="https://www.kaggle.com/shreyanshmurathia" target="_blank">@shreyanshmurathia</a> for the wish!!</p> |
CommonLit - Evaluate Student Summaries | 69th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, thanks to Kaggle and the hosts for this interesting competition, this is the first competition that I ever commit to, and I'm glad that it paid off.</p>
<p>As always, I knew that diversity wins, so I focused on building a diverse set of models. My final submission consisted of two LGBM models on top of Deberta-Base predictions and raw Deberta-Large predictions, with weights optimized on oofs. </p>
<p><strong>LGBM Models</strong><br>
These two models are based on the Feature Engineering ideas and model from <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> (thanks!). Each model was trained with a different set of features and hyperparameters, each scoring around 0.505 cv, 0.447 public and 0.480 private. A simple average of these two models scores 0.493 cv, 0.442 lb, which is already a bronze-zone finish. </p>
<p><strong>Stabilizing Deberta-Large Training</strong><br>
I knew that this would be key for diversity. However, at first, the training seemed unstable and the predictions not trustworthy; the model would converge at different epochs for different folds. So I spent the last week trying to stabilize the training pipeline, and these are the key things that helped:</p>
<ul>
<li>Re-Initializing last layers</li>
<li>Attention Pooling seemed to work best</li>
<li>Layer-wise Learning Rate Decay</li>
<li>I've also experimented with AWP but with no success</li>
</ul>
<p>With this in mind, I managed to somewhat stabilize the training. I trained the final model for 4 epochs with 1 epoch of warmup, always picking the last epoch as the final checkpoint.</p>
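<p>A minimal sketch of the last-layer re-initialization listed above, for a Hugging Face DeBERTa backbone (the number of layers to re-initialize is a hyperparameter; this is an illustration, not my exact code):</p>
<pre><code>import torch.nn as nn
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("microsoft/deberta-v3-large")
backbone = AutoModel.from_pretrained("microsoft/deberta-v3-large", config=config)

def reinit_last_layers(model, config, num_layers=2):
    """Re-initialize the top encoder layers with the model's own init scheme."""
    for layer in model.encoder.layer[-num_layers:]:
        for module in layer.modules():
            if isinstance(module, nn.Linear):
                module.weight.data.normal_(mean=0.0, std=config.initializer_range)
                if module.bias is not None:
                    module.bias.data.zero_()
            elif isinstance(module, nn.LayerNorm):
                module.weight.data.fill_(1.0)
                module.bias.data.zero_()

reinit_last_layers(backbone, config, num_layers=2)
</code></pre>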
<p><strong>Final Ensemble</strong><br>
The final weights of the 3 models were optimized on oofs for each of the targets. I managed to get a small boost in cv (0.489) but a slightly worse lb (0.445). That combination performed the best on private and got me 0.473 (trust your cv!). </p>
CommonLit - Evaluate Student Summaries | 79th place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for hosting this competition. <br>
I'm so excited to have finished my first Kaggle competition with a silver medal!<br>
Thanks to the shake-up, we made it to a medal.</p>
<p>The main techniques are not much different from other well-organized public notebooks. <br>
We took our baseline from the public notebook made by <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> .<br>
Special thanks to the best contributor of this competition.</p>
<p>We made a few changes though, which are briefly addressed below.</p>
<h1>What Worked</h1>
<ul>
<li>deberta-v3-large + LGBM</li>
<li>re-init last 2 layers</li>
<li>replace '\n\n', '\r\n' with [PARAGRAPH]</li>
<li>Use Optuna</li>
<li>nelder-mead for finding Ensemble weights</li>
<li>Train target-wise models (one per target)</li>
<li>Post-processing (low impact, private score: +0.0001)<br>
We took a look at the preprocessed data and found some patterns in it.<br>
So we adjusted the content and wording scores like this, based on what we analyzed:</li>
</ul>
<pre><code>test.loc[(test[]>)&(test[]>-), ] = -
test.loc[(test[]>)&(test[]<=)&(test[]>), ] =
test.loc[(test[]>)&(test[]>-), ] = -
test.loc[(test[]>)&(test[]<=)&(test[]>-), ] = -
test.loc[(test[]>)&(test[]<=)&(test[]>), ] =
</code></pre>
<h1>What Didn't Work</h1>
<ul>
<li>freezing n layers</li>
<li>MeanPooling, AttentionPooling</li>
<li>change batch size</li>
<li>Models other than deberta-v3-large (deberta-v2-xlarge, albert-large-v2, muppet-roberta-large, bigbird-roberta-large, roberta-large)</li>
</ul>
<p>Thanks again to the competition hosts and everyone who participated!! Good luck for your next competitions!</p>
<h2><strong>Team member</strong></h2>
<p><a href="https://www.kaggle.com/kimseunghee" target="_blank">@kimseunghee</a> <br>
<a href="https://www.kaggle.com/danielchae" target="_blank">@danielchae</a> </p> |
CommonLit - Evaluate Student Summaries | 7th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the competition organizers for hosting this interesting competition, and to my teammate <a href="https://www.kaggle.com/emiria" target="_blank">@emiria</a>. I learned a lot from emiria's ideas and code. Congrats on emiria's fourth gold medal; a new GM is on the way. And this is my first gold medal in an NLP competition.</p>
<p>The <strong>key points</strong> of our strategy:</p>
<ul>
<li>Add "prompt_text" for inputs (0.03 boost on cv)</li>
<li>Freezing layers (0.01 boost on cv)</li>
<li>Different inputs for blending (0.01 boost on cv)</li>
<li>Blending with the result of LightGBM (0.005 boost on cv)</li>
<li>Focus on local cv (LB only has 13% of data)</li>
</ul>
<p><strong>Did not work</strong> for us:</p>
<ul>
<li>AWP</li>
<li>SWA</li>
<li>Text preprocess</li>
<li>MLM</li>
</ul>
<h3>Models</h3>
<p>Here's a description of the models we used for our final submissions. We used GroupKFold with "prompt_id" for local validation, and used all prompts for training at inference time.</p>
<table>
<thead>
<tr>
<th>id</th>
<th>backbone</th>
<th>inputs</th>
<th>maxlen</th>
<th>loss</th>
<th>cv</th>
</tr>
</thead>
<tbody>
<tr>
<td>model1</td>
<td>deberta-v3-large</td>
<td>text+sep+prompt_text+sep+prompt_question</td>
<td>1280</td>
<td>mseloss</td>
<td>0.500</td>
</tr>
<tr>
<td>model2</td>
<td>deberta-v3-large</td>
<td>text+sep+prompt_title+sep+prompt_question+sep+prompt_text</td>
<td>1280</td>
<td>mseloss</td>
<td>0.489</td>
</tr>
<tr>
<td>model3</td>
<td>deberta-v3-large</td>
<td>prompt_title+sep+prompt_question+sep+text+sep+prompt_text</td>
<td>1280</td>
<td>mseloss</td>
<td>0.506</td>
</tr>
<tr>
<td>model4</td>
<td>deberta-v3-large+lgb</td>
<td>prompt_question+sep+text</td>
<td>512</td>
<td>mseloss</td>
<td>0.520</td>
</tr>
<tr>
<td>model5</td>
<td>deberta-v3-large</td>
<td>text+sep+prompt_title+sep+prompt_question+sep+prompt_text</td>
<td>768</td>
<td>mseloss</td>
<td>-</td>
</tr>
<tr>
<td>model6</td>
<td>deberta-v3-large</td>
<td>text+sep+prompt_title+sep+prompt_question+sep+prompt_text</td>
<td>768</td>
<td>logloss</td>
<td>-</td>
</tr>
<tr>
<td>model7</td>
<td>deberta-large</td>
<td>text+sep+prompt_title+sep+prompt_question+sep+prompt_text</td>
<td>1024</td>
<td>mseloss</td>
<td>-</td>
</tr>
</tbody>
</table>
<h3>Results</h3>
<p>And here are our models with the best scores:<br>
Each model is an average of 2 seeds, except "model4" (with LightGBM).</p>
<table>
<thead>
<tr>
<th>PB</th>
<th>LB</th>
<th>Picked</th>
<th>models</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.456</td>
<td>0.427</td>
<td>Yes</td>
<td>0.32*model1+0.32*model2+0.16*model3+0.2*model7</td>
</tr>
<tr>
<td>0.453</td>
<td>0.428</td>
<td>No</td>
<td>0.32*model1+0.32*model2+0.16*model4+0.1*model5+0.1*model6</td>
</tr>
</tbody>
</table>
<p><a href="https://www.kaggle.com/zhangyaji" target="_blank">@zhangyaji</a> <br>
Maybe three points:</p>
<ul>
<li><p>sort the data by "token_length"<br>
<code>test_df['full_text'] = test_df['text'] + SEP + test_df['prompt_text'] + SEP + test_df['prompt_question']</code><br>
<code>test_df['tokenize_length'] = [len(config.tokenizer(text)['input_ids']) for text in test_df['full_text'].values]</code><br>
<code>test_df = test_df.sort_values('tokenize_length', ascending=True).reset_index(drop=True)</code></p></li>
<li><p>use amp<br>
<code>with torch.cuda.amp.autocast(enabled=True)</code></p></li>
<li><p>DataCollatorWithPadding</p></li>
</ul>
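<p>Putting these three points together, a minimal sketch of length-sorted inference with dynamic padding (<code>DataCollatorWithPadding</code>) and amp; <code>config.tokenizer</code>, <code>model</code>, and <code>test_df</code> follow the snippet above, and the rest is illustrative rather than our exact code:</p>
<pre><code>import torch
from torch.utils.data import DataLoader
from transformers import DataCollatorWithPadding

# pads each batch only to its own longest sample, so sorting by length saves a lot of time
collator = DataCollatorWithPadding(tokenizer=config.tokenizer)
features = [config.tokenizer(t, truncation=True, max_length=1400)
            for t in test_df["full_text"].values]   # test_df is already sorted by tokenize_length

loader = DataLoader(features, batch_size=8, shuffle=False, collate_fn=collator)

model.eval()
preds = []
for batch in loader:
    batch = {k: v.to("cuda") for k, v in batch.items()}
    with torch.no_grad(), torch.cuda.amp.autocast(enabled=True):
        preds.append(model(**batch).logits.float().cpu())
preds = torch.cat(preds).numpy()   # rows are in length-sorted order; map back to the original index afterwards
</code></pre>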
<p>Actually, we used a higher maxlen at inference (model1-3 used maxlen=1400). And each model is an average of 2 seeds, so we got 8 models for blending.</p>
CommonLit - Evaluate Student Summaries | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The main reason for choosing a single model trained on 100% of training data rather than submitting all folds in this competition was that the long maximum sequence length caused really long inference times, which limited the amount of models I could include in my final ensemble.</p>
<p>I tried combining the OOF predictions of my best 3 runs, and found that my CV score improved noticeably. However, because I was doing inference with max length 1536, it took at least 90 minutes per model. If I submitted 4 models for each experiment then I would only be able to include one and a half experiments before hitting the inference runtime limit. </p>
<p>I tried retraining on 100% of training data and submitting single models and found that the LB scores were reasonably in line with what I was expecting. Creating an ensemble of these models allowed me to increase the diversity in my final submission by including models from more different experiments.</p>
<h1>Submission Overview</h1>
<p>I split the data into 4 folds on <code>prompt_id</code> and computed a 4-fold CV. Rather than submitting all 4 models for inference, I retrained on 100% of the training data. My final submission is 6x deberta-v3-large: 2 models with different random seeds from each of my top 3 experiments. I took the unweighted mean of the 6 models' predictions.</p>
<h1>Source code</h1>
<p>I mostly work in Python scripts rather than notebooks. Here's a repo containing a package for running my training and inference scripts: <a href="https://github.com/AMontgomerie/commonlit-summaries-competition" target="_blank">https://github.com/AMontgomerie/commonlit-summaries-competition</a></p>
<h1>Inputs and max length</h1>
<p>In addition to using the summary text, I found that using the <code>prompt_question</code> improved the score slightly, and using the <code>prompt_text</code> helped quite a bit, assuming that the max length was also increased. Increasing the max length only helped up to 1024 tokens for training, although increasing up to 1536 during inference improved both CV (on prompt <code>ebad26</code>) and LB scores.</p>
<p>It was hard to tell if increasing beyond 1536 was worth it as none of the prompts in the training set were long enough, and I didn't get any consistent improvements from the LB when increasing to 2048 either.</p>
<h1>Special tokens</h1>
<p>Another thing that helped was adding a special token for each section of the input rather than using <code>[SEP]</code>. I added <code><text></code>, <code><question></code>, and <code><summary></code> tokens to the tokenizer before training, and then fine-tuned them. Inputs took the format:</p>
<pre><code>"<question> The prompt question text here. <> The student text here. <text> The original prompt text here."
</code></pre>
<p>I put the prompt text last to account for extremely long texts that would need to be truncated.</p>
<p>I did also try adding the prompt title and a <code><title></code> token, but this didn't improve CV.</p>
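<p>A minimal sketch of registering these section tokens, assuming Hugging Face <code>transformers</code> (the embedding matrix is resized so the new tokens get trainable vectors):</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", num_labels=2)

tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<question>", "<summary>", "<text>"]})
model.resize_token_embeddings(len(tokenizer))   # new rows are randomly initialized, then fine-tuned

def build_input(question, summary, prompt_text):
    # the prompt text goes last so truncation only ever cuts the original passage
    return f"<question> {question} <summary> {summary} <text> {prompt_text}"

enc = tokenizer(build_input("q ...", "s ...", "p ..."), truncation=True, max_length=1024)
</code></pre>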
<h1>Inference</h1>
<p>Earlier in the competition I was submitting the mean of the 4 models trained in each experiment. However after I started increasing the max length up to and beyond 1024 tokens, I found that inference time increased quite significantly. It took about an hour to run inference with 1 deberta-v3-large at 1024 token max length, and about 1.5 hours at 1280 or 1536 tokens.</p>
<p>This means a 4-fold mean would take 6 hours or more at 1536 tokens. To be able to submit a more diverse ensemble I decided to start retraining on 100% of training data instead. I found the LB scores of these 100% models to be fairly consistent with the ones trained on 75% of the data. This allowed me to submit an ensemble of my best 3 CV experiments. I retrained on 100% of training data twice with different random seeds for each of the experiments.</p>
<h1>GPU resources</h1>
<p>I don't have access to any dedicated GPUs, and each 4 fold experiment took a long time on Kaggle P100s (particularly when training with long sequence lengths), so I subscribed to Paperspace Gradient which gave me intermittent access to an RTX5000 or an A4000, both of which allowed me to run experiments about 4x faster than running on Kaggle GPUs.</p>
<p>Since none of these GPUs have more than 16GB of RAM, I was only able to train with a batch size of 1 at max length 1024. To compensate for this, I used gradient accumulation. I initially accumulated gradients for 8 steps per optimizer update, but found that 16 steps worked even better.</p>
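<p>A minimal sketch of that gradient accumulation with batch size 1 (an effective batch of 16 after 16 accumulation steps); this is the generic pattern, assuming the usual model, optimizer, scheduler, and loader objects, not my exact loop:</p>
<pre><code>def train_epoch(model, loader, optimizer, scheduler, loss_fn, accum_steps=16):
    model.train()
    optimizer.zero_grad()
    for step, (batch, labels) in enumerate(loader):
        loss = loss_fn(model(**batch).logits, labels)
        (loss / accum_steps).backward()   # scale so the accumulated gradient matches a real batch of 16
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
</code></pre>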
<h1>Dropout</h1>
<p><a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/432186" target="_blank">This thread</a> pointed out that turning off dropout can be useful in regression tasks. I gave this a try and found that turning off dropout in fully-connected layers helped a little, but not in attention layers.</p>
<h1>Pooling</h1>
<p>I saw some public notebooks which used mean pooling or GeM pooling, but in my experiments neither of these worked better than the default <a href="https://github.com/huggingface/transformers/blob/3e93dd295b5343557a83bc07b0b2ea64c926f9b4/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L56C7-L56C8" target="_blank">ContextPooler</a> which is built into <code>transformers.DebertaV2ForSequenceClassification</code>.</p>
<h1>Things which didn't work</h1>
<p>Here's a list of ideas either by me, or which I borrowed from public notebooks or discussions, which didn't lead to any noticeable improvement:</p>
<ul>
<li>Using <code>prompt_title</code>.</li>
<li>Freezing embeddings.</li>
<li>Freezing encoder layers.</li>
<li>Max, mean, or GeMText pooling.</li>
<li>Extra attention head.</li>
<li>MSE, SmoothL1, or (modified) MarginRankingLoss. (I mostly used MCRMSE loss).</li>
<li>Training separate models to predict content and wording scores.</li>
<li>Training with max sequence lengths longer than 1024.</li>
<li>deberta-v2-xlarge with LoRA (even with large alpha).</li>
<li>Autocorrecting spelling.</li>
<li>Ensembling with linear models or gradient-boosted regressors with hand-made features. (I didn't spend much time on this, but I tried using some of the features from the high-scoring public notebooks and they didn't seem very useful)</li>
<li>Using abstractive summarization to reduce the length of the <code>prompt_text</code>. I was able to improve my CV a bit with this, but it significantly hurt my public LB, which was a good warning not to do this. I decided it was too risky, because I wouldn't be able to validate any generated summaries when scoring on the private test data. If the generative model hallucinated even once it could screw up my final LB score.</li>
</ul>
<p>I would've liked to have tried extractive summarization as this would prevent the hallucination problem, but I didn't come up with a good way to train a model to rank the most important sentences in each <code>prompt_text</code>. It would've been a fun direction to go down, but probably ultimately a waste of time in comparison to just increasing the max sequence length.</p> |
CommonLit - Evaluate Student Summaries | 98th Place Solution Overview 🥈 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>🥈 <strong>Silver Medal Solution Overview for CommonLit - Evaluate Student Summaries<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7523757%2F717c6210085b3728d0a839bb7b35e846%2FCDS.jpg?generation=1697980812788529&alt=media" alt=""></strong></h1>
<ul>
<li><strong>Solution Link</strong>: <a href="https://www.kaggle.com/code/aleron751/sanseymysamba-final-solution/notebook" target="_blank">Kaggle Notebook</a></li>
</ul>
<h2><strong>1. 📖 Data Reading</strong></h2>
<ul>
<li><strong>Datasets Used</strong>: Data was sourced from Kaggle, including student prompts for train/test and their corresponding essays.</li>
</ul>
<h2><strong>2. 🛠 Feature Creation</strong></h2>
<ul>
<li>Generated features were primarily based on comparing student essays with a summarized version of the original prompts.</li>
<li>An extraction-based summarization technique was employed to distill the essence of the original prompts, making them more comparable to student essays.</li>
</ul>
<h2><strong>3. 🔗 Data Concatenation</strong></h2>
<ul>
<li>To facilitate a holistic view and ease of processing, prompts and essays were concatenated.</li>
</ul>
<h2><strong>4. 🤖 Transformer Predictions</strong></h2>
<ul>
<li><p>Several transformer models were leveraged, forming a "stack of BERTs" to enhance prediction accuracy.</p></li>
<li><p>Pre-existing models were harnessed to make out-of-fold predictions on the training data. The training process can be reviewed <a href="https://www.kaggle.com/code/bulivington/transformers-predictions-base" target="_blank">here</a>.</p></li>
<li><p>Functions were defined to facilitate the efficient inference of test data using the trained models.</p></li>
</ul>
<h2><strong>5. 📚 Stacking Approach</strong></h2>
<ul>
<li><p>A customized model architecture was employed, utilizing Huggingface's trainer.</p></li>
<li><p>Different pooling strategies were explored for the transformer outputs. <code>WeightedLayerPooling</code> was the chosen method due to its efficacy (a sketch follows this list).</p></li>
<li><p>A custom dataset class in PyTorch was designed for test inference, accompanied by its associated function.</p></li>
</ul>
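<p>A minimal sketch of the <code>WeightedLayerPooling</code> idea, a learned weighted average over the transformer's hidden layers; this is a commonly used formulation and may differ in details from the one used in the solution:</p>
<pre><code>import torch
import torch.nn as nn

class WeightedLayerPooling(nn.Module):
    """Learned weighted average of the hidden states from layer_start upward."""
    def __init__(self, num_hidden_layers, layer_start=4):
        super().__init__()
        self.layer_start = layer_start
        self.layer_weights = nn.Parameter(torch.ones(num_hidden_layers + 1 - layer_start))

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (batch, seq_len, hidden) tensors from output_hidden_states=True
        stacked = torch.stack(all_hidden_states[self.layer_start:], dim=0)
        w = self.layer_weights.softmax(dim=0).view(-1, 1, 1, 1)
        return (w * stacked).sum(dim=0)   # (batch, seq_len, hidden)

# usage sketch:
# out = backbone(**inputs, output_hidden_states=True)
# pooled_tokens = WeightedLayerPooling(backbone.config.num_hidden_layers)(out.hidden_states)
# sentence_emb = pooled_tokens[:, 0]   # e.g. take the first token's vector
</code></pre>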
<h2><strong>6. 🎛 Meta Model Training</strong></h2>
<ul>
<li><p>Given its resilience against overfitting and robustness to outliers, the <code>HuberRegressor</code> was chosen as the meta model.</p></li>
<li><p>Beyond transformer-based features, various intuitive features were conceptualized, like word/token statistics, text readability score, and prompt-essay comparisons.</p></li>
</ul>
<h2><strong>7. 🔍 Feature Engineering</strong></h2>
<ul>
<li><p>The BARTScorer class was utilized to compute the BART score, shedding light on the essay's quality.</p></li>
<li><p>Several pre-trained models were deployed to derive features that measure the degree of alignment between the prompt and the essay.</p></li>
<li><p>A plethora of custom features were engineered to capture the nuances of the text, such as sentence structure, token intersections, and linguistic errors.</p></li>
</ul>
<h2><strong>8. 🧪 Feature Generation</strong></h2>
<ul>
<li><p>Sentence encoders, reward models, and other tools were employed to craft features that capture the essence of the text.</p></li>
<li><p>Features that provide insights into the text's structure and coherence, like the count of stop words, n-gram token intersections, and sentence errors, were extracted.</p></li>
<li><p>Models trained on previous competition datasets were leveraged to craft additional, insightful features.</p></li>
</ul>
<h2><strong>9. 🚀 Final Model Training</strong></h2>
<ul>
<li><p>All features, including transformer predictions and manually crafted ones, were integrated and readied for the final model.</p></li>
<li><p>Cross-validation methods were possibly used to gauge the model's performance on unseen data.</p></li>
<li><p>The model was trained with a limited iteration count to prevent overfitting and ensure its general applicability.</p></li>
</ul>
<h2><strong>10. 📤 Submission Generation</strong></h2>
<ul>
<li><p>Predictions for the test dataset were crafted using the meticulously trained HuberRegressor model.</p></li>
<li><p>Predictions were organized into the required format, typically consisting of an ID and its corresponding predicted value.</p></li>
<li><p>The structured submission file was uploaded to Kaggle's competition page, marking the culmination of the process.</p></li>
</ul> |
CommonLit - Evaluate Student Summaries | 9th Place Efficiency Solution for CommonLit-Evaluate Student Summaries | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to the organizers and winners of the competition. <br>
I am very grateful for the opportunity to have participated in this wonderful contest and to have learned from the other competitors.</p>
<h1>Context</h1>
<ul>
<li>Overview <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview</a></li>
<li>Data <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></li>
</ul>
<h1>Overview of the Approach:</h1>
<p>My solution implemented for the efficiency award consists of a pipeline composed of the following steps Feature Engineering followed by the following Pipeline:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1647119%2F64b8524f73ff2bb814cc907c4952f502%2Fpipe.png?generation=1697930928007385&alt=media" alt=""></p>
<p>The CV by prompt:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1647119%2F9adc3576d7b060c04cd91902cf924b1b%2Ftable.PNG?generation=1697930966613777&alt=media" alt=""></p>
<p>Public LB 0.45943, private LB 0.51579</p>
<h1>Details of the submission:</h1>
<p>The best-performing features were logarithms of the counts of words, 2-grams, and 3-grams that appeared in the summary text but not in the prompt text. I tried to extract more features with DistilBERT and TfidfVectorizer, but they didn't work.</p>
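<p>A minimal sketch of those features, i.e. log-counts of summary words/2-grams/3-grams that do not appear in the prompt text (illustrative code, not the exact competition feature set):</p>
<pre><code>import numpy as np

def ngrams(tokens, n):
    return set(zip(*[tokens[i:] for i in range(n)]))

def novelty_features(summary, prompt_text):
    s_tok = summary.lower().split()
    p_tok = prompt_text.lower().split()
    feats = {}
    for n in (1, 2, 3):
        novel = ngrams(s_tok, n) - ngrams(p_tok, n)   # n-grams in the summary but not in the prompt
        feats[f"log_novel_{n}gram"] = np.log1p(len(novel))
    return feats

print(novelty_features("The cell uses energy to grow", "Cells use energy from food"))
</code></pre>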
<h1>Sources:</h1>
<p>Some ideas for feature engineering were taken from:</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/424372" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/424372</a></li>
<li><a href="https://www.kaggle.com/code/cody11null/tuned-debertav3-lgbm-autocorrect" target="_blank">https://www.kaggle.com/code/cody11null/tuned-debertav3-lgbm-autocorrect</a></li>
<li><a href="https://www.kaggle.com/code/vassylkorzh/feature-engineering" target="_blank">https://www.kaggle.com/code/vassylkorzh/feature-engineering</a></li>
<li><a href="https://www.kaggle.com/code/gusthema/commonlit-evaluate-student-summaries-w-tfdf" target="_blank">https://www.kaggle.com/code/gusthema/commonlit-evaluate-student-summaries-w-tfdf</a></li>
</ul>
<h1>Codes:</h1>
<ul>
<li>Training: <a href="https://www.kaggle.com/code/jhony1/9th-place-efficiency-solution-training" target="_blank">https://www.kaggle.com/code/jhony1/9th-place-efficiency-solution-training</a></li>
<li>Inference: <a href="https://www.kaggle.com/code/jhony1/9th-place-efficiency-solution" target="_blank">https://www.kaggle.com/code/jhony1/9th-place-efficiency-solution</a></li>
</ul> |
CommonLit - Evaluate Student Summaries | 9th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you ! <br>
Yes, your understanding is correct except for the LSTM part. The thought process behind the model architecture is as follows:</p>
<p>I wanted the model to capture the relationship between the summary and prompt. Therefore, I used all available information as input to the first DeBERTa model.<br>
Since the prompt_id is different between the train data and test data, I believed that not including the prompt part in the pooling would improve generalization. So, I decided to only pool the summary part. Furthermore, by training only the summary part in a separate DeBERTa model before pooling, I believed it might lead to a better score. Hence, for the second DeBERTa model, I input only the summary part.<br>
Regarding the LSTM, I incorporated it as it led to score improvements specifically for prompt_id: 814d6b. There isn’t a deep rationale behind this, but through various experiments, I found that including the LSTM was beneficial for improving the score for 814d6b.</p>
<p>Thank you!<br>
Here are the answers to your questions:</p>
<ol>
<li><p>For the model with the best public score, I calculated the CV for prompt-id other than 814d6b. It is an ensemble of two DeBERTa-v3-large models. As for 814d6b, I didn't calculate the CV, so its value remains unknown.<br>
CV: 0.4598<br>
Private score: 0.461</p></li>
<li><p>When evaluating the performance of the model, I use cross-validation. After the cross-validation process, I retrain on all the data using the same settings. Consequently, while I can't directly evaluate the model meant for submission, I've found that the results remain consistent across different seeds during the cross-validation phase. Additionally, the loss trajectory during training did not exhibit any problems, leading me to confidently use it for submission. I adopted this approach of training on all data based on references from discussions like the following:<br>
<a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347536" target="_blank">https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347536</a></p></li>
<li><p>Yes, it's an ensemble solely based on models trained with different seeds. When I tried to ensemble with other models, the score for 814d6b did not improve, and even the overall CV didn't show significant improvement. Thus, I decided to submit using a single model. However, for the remaining two submissions, I did utilize ensemble.</p></li>
</ol>
<p>Thank you. I tried LGBM, but I didn't adopt it because the CV didn't improve</p>
<h1>9th Place Solution</h1>
<p>First and foremost, I'd like to extend my gratitude to the host for organizing this competition. Congratulations to the winners as well! I'd like to present a brief overview of my solution that secured the 9th position.</p>
<h2>Cross-Validation</h2>
<p>The wording of prompt_id 814d6b stood out, being notably different from other prompt_ids. When cross-validating with groupkfold using the prompt_id, predicting for this particular prompt_id didn't correlate with scores from other prompt_ids. Hypothetically, if there were any test ids with a distribution similar to 814d6b, having 814d6b in the training data might produce decent accuracy. However, there was no way to validate this. For this competition, I employed two methods of cross-validation and submitted for both:<br>
Cross-validation 1: groupkfold (group = prompt_id), excluding 814d6b from evaluation.<br>
Cross-validation 2: groupkfold (group = prompt_id), Including 814d6b in the evaluation.<br>
The final private score was better for the model from cross-validation 2, so I'll delve into the details of that model. <br>
Given that changing the seed alone resulted in variable CV scores, I performed cross-validation using three different seeds and evaluated the model based on their ensemble. The final submission was based on the predictions of models trained on all the data using three different seeds.</p>
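<p>Both schemes rest on grouping by <code>prompt_id</code>; a minimal sketch with scikit-learn (file and column names as in the competition data):</p>
<pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

train = pd.read_csv("summaries_train.csv").merge(
    pd.read_csv("prompts_train.csv"), on="prompt_id")

train["fold"] = -1
gkf = GroupKFold(n_splits=4)   # one fold per prompt_id, since there are 4 prompts
for fold, (_, valid_idx) in enumerate(gkf.split(train, groups=train["prompt_id"])):
    train.loc[valid_idx, "fold"] = fold

# Cross-validation 1 simply drops the 814d6b fold when averaging the scores;
# cross-validation 2 keeps all four folds in the evaluation.
</code></pre>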
<h2>Training</h2>
<p>I tokenized the text as follows:<br>
text1: summary_text<br>
text2: prompt_question + [SEP] + prompt_text</p>
<pre><code># note: the keyword names were lost in formatting; this shows a typical encode_plus call
inputs = self.tokenizer.encode_plus(text1,
                                    text2,
                                    add_special_tokens=True,
                                    max_length=self.max_len,
                                    padding="max_length",
                                    truncation=True,
                                    return_tensors=None)
</code></pre>
<p>Model: Consisted of two deberta-v3-large and one LSTM as described below:</p>
<ul>
<li>The first deberta-v3-large was trained using the entire input text.</li>
<li>The second deberta-v3-large and LSTM were trained solely on the summary_text.</li>
</ul>
<p>Training settings:</p>
<ul>
<li>token_len: 1680</li>
<li>epochs: 3</li>
<li>Loss: SmoothL1Loss</li>
<li>lr: 8e-6</li>
<li>optimizer: AdamW<ul>
<li>weight_decay: 0.01</li>
<li>beta: (0.9, 0.98)</li></ul></li>
<li>scheduler: get_linear_schedule_with_warmup<ul>
<li>num_warmup_steps: 10% of total training steps</li></ul></li>
<li>EMA (Exponential Moving Average)<ul>
<li>ema_decay: 0.995</li>
<li>Using EMA helped stabilize the training (a sketch follows this list).</li></ul></li>
</ul>
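<p>For reference, a minimal sketch of an EMA of model weights (a standard shadow-weights implementation; my actual class may differ in details):</p>
<pre><code>import copy
import torch

class EMA:
    """Keeps an exponential moving average copy of the model weights."""
    def __init__(self, model, decay=0.995):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        for ema_p, p in zip(self.shadow.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)

# usage sketch inside the training loop:
#   ema = EMA(model, decay=0.995)
#   after each optimizer.step():  ema.update(model)
#   for validation / inference, use ema.shadow instead of model
</code></pre>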
<h2>Inference</h2>
<p>token_len: 4200</p>
<h2>Scores</h2>
<ul>
<li>CV: 0.495 (ensemble of 3 seeds)<ul>
<li>814d6b: 0.604982</li>
<li>ebad26: 0.431438</li>
<li>3b9047: 0.49692</li>
<li>39c16e: 0.483208</li></ul></li>
<li>Public Score: 0.456</li>
<li>Private Score: 0.457</li>
</ul>
<h2>Inference Code</h2>
<p><a href="https://www.kaggle.com/code/takoihiraokazu/sub-ex263-seq4200?scriptVersionId=145902061" target="_blank">https://www.kaggle.com/code/takoihiraokazu/sub-ex263-seq4200?scriptVersionId=145902061</a></p>
<p>Thank you! I also learned a lot from your team's solution.</p>
<p>Thanks!<br>
Regarding the two models, the output of one model is used as the input for the other. For details, please check the implementation of CustomModel. As for the LSTM, its inclusion resulted in an improvement in the 814d6b score.</p>
<p>Yes, your understanding is correct. The reason I differentiated between model1 and model2 was simply because I wanted to change the attention_mask midway through the model. By separating the models, I can achieve this just by altering the model's input.</p> |
CommonLit - Evaluate Student Summaries | Learnings from this competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'd first of all like to express my gratitude to the Kaggle team for organizing this challenge, and to my colleagues from my office for supporting me through the journey.<br>
Apart from the coding & conceptual knowledge that I gained through this, I also learnt the important values of patience, perseverance & hard work.<br>
Late into the competition I also joined the efficiency track, which gave me additional learning about making inference super fast and actually developing strong & meaningful textual features.</p>
<p>this is my code for the efficiency track: <a href="https://www.kaggle.com/bhavesjain/commonlit-2023-train" target="_blank">https://www.kaggle.com/bhavesjain/commonlit-2023-train</a></p>
<p>and for the accuracy track I modified the publicly available solution with my fixes: <a href="https://www.kaggle.com/code/bhavesjain/commonlit-ess-lgbm-autocorrect-deberta-v3-tuned/" target="_blank">https://www.kaggle.com/code/bhavesjain/commonlit-ess-lgbm-autocorrect-deberta-v3-tuned/</a><br>
Overview:<br>
My prime objective in this journey was to experiment with logical approaches rather than just brute-force techniques of concatenating everything & feeding it to powerful, large models. I experimented with textual features like lexical diversity, readability indices, and syllable & word metrics. I wanted to explore more in the area of metadiscourse & publicly available datasets for grading quality, but I had time & computation quota limitations. I also experimented with BERT, different versions of DeBERTa, and sentence transformers, but that's about it.<br>
Submission Details:<br>
I primarily stuck to XGBoost with Optuna, and the rest was just trial & error with various models.<br>
The USP of my solution is the wide array of features I used, inspired by <a href="https://textinspector.com/help/statistics-readability/" target="_blank">https://textinspector.com/help/statistics-readability/</a><br>
I hope you all check it out too.<br>
Additionally, I also learnt that I need to focus more on the robustness of my training techniques & optimization to avoid such losses in shakeups.</p>
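<p>As an illustration of the kind of features described above (readability indices, syllable and word metrics, lexical diversity), a small sketch using the <code>textstat</code> package; this is representative rather than my exact feature set:</p>
<pre><code>import textstat

def text_features(text):
    words = text.split()
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "gunning_fog": textstat.gunning_fog(text),
        "syllable_count": textstat.syllable_count(text),
        "word_count": len(words),
        "lexical_diversity": len(set(w.lower() for w in words)) / max(len(words), 1),
    }

print(text_features("The student wrote a short but clear summary of the passage."))
</code></pre>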
<p>Even though I didn't get a medal, I'm glad that I learnt a lot of things worth applying in my own domain & possibly in upcoming competitions.</p>
CommonLit - Evaluate Student Summaries | Lessons learned and experiments. DeBERTaV3-base solutions + extra features. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks host and Kaggle for an interesting challenge. Also thanks for the three submissions choices, should be a standard hereinafter.</p>
<p>Congrats all the top solutions, will be valuable source of information, with only a quick look I can see that large model and large lengths seems like a boost to the score. </p>
<p>As many times before, I took an experimental approach; the knowledge one gains cannot be taken away by a shakeup, even if one's leaderboard place was 😉</p>
<p>I experimented with DeBERTa-v3-base with a 512 max length; as I used T4s, it was the best fit. My experimenting never ended, so those models were used for different solutions. Sadly, I didn't pick the best CV solution (CV .460), with a final LB place of ~#80; there were no ML errors behind that choice though.</p>
<p><strong>Short summary of the experimenting and solutions</strong></p>
<p><strong>Frameworks for training</strong></p>
<ul>
<li>Huggingface training and inference code. Only multi-target training.</li>
<li>Past top solutions code – feedback-prize-english-language-learning 1st place <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369457" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369457</a>. Multi-target training with mixout augmentation etc.</li>
<li>AutoGluon MultiModalPredictor and TabularPredictor – for both model and later tabular training. <a href="https://auto.gluon.ai/stable/index.html" target="_blank">https://auto.gluon.ai/stable/index.html</a>. Single target training.</li>
</ul>
<p><strong>Input to transformers</strong></p>
<p>Different combinations for different models, but mainly summary text + prompt question. Sometimes the original text and sometimes a corrected version; for the autocorrection I used NeuSpell: A Neural Spelling Correction Toolkit <a href="https://github.com/neuspell/neuspell" target="_blank">https://github.com/neuspell/neuspell</a>, which worked well and fast with CUDA and batch support.<br>
The exceptions to text-only inputs were the models trained with the AutoGluon MultiModalPredictor framework. In that framework one can experiment with different fusion heads for the inputs, building a complete end-to-end LM model.</p>
<p><strong>Features used for final tabular prediction</strong></p>
<ul>
<li>The public created 9 features.</li>
<li>Stacking: Predictions from every model and its ensemble value.</li>
<li>6 features [cohesion, syntax, vocabulary, phraseology, grammar, conventions] from the Feedback Prize - English Language Learning competition, using the 1st place code for it <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369457" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369457</a> Even though grammar etc. shouldn't count toward the true human score, it helped the solution.</li>
<li>2 features, reading level, for the Prompt Question and Prompt Text from the CommonLit Readability Prize Competition and the 2nd place code was used <a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328</a></li>
<li>4 features, similarity values between the texts. <a href="https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2" target="_blank">https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2</a> was used for them (a sketch follows this list).</li>
</ul>
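<p>A hedged sketch of how the 4 similarity features above could be computed with the <code>sentence-transformers</code> package; exactly which text pairs were compared is my assumption:</p>
<pre><code># Sketch: cosine-similarity features between the student summary and the
# prompt fields, using the all-MiniLM-L12-v2 model mentioned above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

def similarity_features(summary, prompt_question, prompt_title, prompt_text):
    emb = model.encode([summary, prompt_question, prompt_title, prompt_text],
                       convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(emb[0], emb[1:]).squeeze(0)  # summary vs. the three prompt fields
    return {
        "sim_question": float(sims[0]),
        "sim_title": float(sims[1]),
        "sim_text": float(sims[2]),
        # the 4th feature is an assumption, e.g. question vs. prompt text
        "sim_question_text": float(util.cos_sim(emb[1], emb[3])),
    }
</code></pre>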
<p><strong>Model for tabular prediction</strong></p>
<p>As said before, the AutoGluon TabularPredictor framework was used, with some changes from the defaults: a fixed 4-fold validation with the 4 prompt groups as folds, and fold_fitting_strategy = sequential_local as it always works without issues. The new “experimental_zeroshot_hpo_hybrid” feature was also used to quickly get parameters for the dataset in a zero-shot manner. Great work <a href="https://www.kaggle.com/innixma" target="_blank">@innixma</a> 👍<br>
Then a 3-level stacked ensemble of ~100 trained models did the final prediction. I also manually fed the first target's prediction into the next target's training, a.k.a. auto-regressive training.</p>
<p><strong>Other interesting experimenting</strong></p>
<ul>
<li>Pseudo labeling at one level worked well: taking the 4 GroupKFold predictions to a 24-fold StratifiedGroupKFold boosted the score.</li>
<li>Some new findings and homemade code along the way:<br>
o A custom-made optimizer with a built-in cosine scheduler and per-target learning rates for the multi-target setup. I also used MADGRAD instead of AdamW as it worked better in testing.<br>
o A multi-target feature for a torch model that saves the best per-target states inside the model and ensembles between those target states, smoothing the targets' best losses closer to each other. In this way I could get a DeBERTa-base model CV close to DeBERTa-large. I didn't include it in a final solution as it was close to the deadline, but a late after-deadline submission showed that the base model with this feature had a better private score than the large model without it. More testing needed of course.</li>
</ul>
<p>Next in line somewhere for testing would have been larger models and more text lengths. Next time 😊</p>
<p>That's it! Happy Kaggling!</p> |
CommonLit - Evaluate Student Summaries | Public 5th & Private 24th Place Solution (CV = LB) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the winners and everyone who put lots of effort into this competition, and thanks to Kaggle & the host for organising it.<br>
Even though my team shook down on the private leaderboard, we don't think it's truly a 'shakedown', because our CVs are very close to the LBs (public + private) and we had already selected almost our best results. Our final solutions were chosen purely based on best CVs.</p>
<h1>1 Summary</h1>
<h2>1.1 Key points</h2>
<p>● Various inputs using common words, phrases, etc. between “prompt_text” and “text” as inputs.<br>
● Custom heads combining various pooling.<br>
● Pseudo labeling with back translated texts.<br>
● Weighted loss.<br>
● Ensemble weights based on ridge regression with large alpha </p>
<h2>1.2 Our submissions (3 final submissions and the best submission)</h2>
<table>
<thead>
<tr>
<th>Submission</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>① best public LB</td>
<td>0.4596</td>
<td>0.420</td>
<td>0.463</td>
</tr>
<tr>
<td>② best CV</td>
<td>0.4566</td>
<td>0.423</td>
<td>0.463</td>
</tr>
<tr>
<td>③ other</td>
<td>0.4579</td>
<td>0.422</td>
<td>0.464</td>
</tr>
<tr>
<td>(best private LB)</td>
<td>0.4601</td>
<td>0.420</td>
<td>0.462</td>
</tr>
</tbody>
</table>
<p>CV roughly equals LB: 0.457 ≈ 0.420 x 0.13 (public) + 0.463 x 0.87 (private)</p>
<h1>2 Details</h1>
<h2>2.1 Various inputs</h2>
<p>We added common words, phrases, etc. shared between “prompt_text” and “text” to the end of the input texts; a sketch of these overlap features follows the figure below. (Once the 2- and 3-gram overlaps were added to the input, the 2nd-stage LGBM models stopped helping, at least on Jie’s models.)<br>
● overlap words<br>
● overlap N-grams (N=2,3)<br>
● overlap phrases<br>
● overlap key phrases by Rake method</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F530572%2Fc4af5af50522e5116d8ff8a626d594ef%2FScreenshot%202023-10-12%20at%2017.42.56.png?generation=1697129165509557&alt=media" alt=""></p>
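<p>A minimal sketch (my reconstruction, not the team's code) of how the word and N-gram overlaps above can be extracted before being appended to the input; the RAKE key phrases are omitted here:</p>
<pre><code># Sketch: overlap words / N-grams between prompt_text and the student summary.
import re

def tokenize(text: str) -> list:
    return re.findall(r"[a-z0-9']+", text.lower())

def ngrams(tokens: list, n: int) -> set:
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_strings(prompt_text: str, summary: str) -> dict:
    p, s = tokenize(prompt_text), tokenize(summary)
    out = {"overlap_words": " ".join(sorted(set(p) & set(s)))}
    for n in (2, 3):
        out[f"overlap_{n}gram"] = " ; ".join(sorted(ngrams(p, n) & ngrams(s, n)))
    return out

# The overlap strings are then appended to the end of the model input, e.g.
# input_text = summary + " [SEP] " + question + " [SEP] " + out["overlap_2gram"]
</code></pre>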
<h2>2.2 Custom heads</h2>
<p>We created many custom heads that concatenate the output values of different poolings; a sketch of the GeM and attention poolers follows the example below.<br>
● CLS output<br>
● Mean pooling<br>
● Max pooling<br>
● GeM pooling<br>
● Attention pooling<br>
● Mean-Max-Attention Pooling (MMA)<br>
● Mean-Max-Gem Pooling (MMG)<br>
● Gem-Attention Pooling (GA)<br>
Example: GA Pooling<br>
gemtext_out = self.gempooler(lasthiddenstate, attention_mask)<br>
attpool_out = self.attpooler(lasthiddenstate, attention_mask)<br>
context_vector = torch.cat((gemtext_out, attpool_out), dim=-1)</p>
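<p>For reference, a hedged sketch of what the GeM and attention poolers used in the example above might look like; these are common implementations, not the team's exact code:</p>
<pre><code># Sketch: GeM pooling and attention pooling over the last hidden state,
# masked by the attention mask; hyperparameters are illustrative.
import torch
import torch.nn as nn

class GeMPooling(nn.Module):
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, last_hidden_state, attention_mask):
        mask = attention_mask.unsqueeze(-1).float()
        x = last_hidden_state.clamp(min=self.eps) ** self.p
        pooled = (x * mask).sum(1) / mask.sum(1).clamp(min=self.eps)
        return pooled ** (1.0 / self.p)

class AttentionPooling(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh(),
                                  nn.Linear(hidden_size, 1))

    def forward(self, last_hidden_state, attention_mask):
        w = self.attn(last_hidden_state).masked_fill(attention_mask.unsqueeze(-1) == 0, -1e4)
        w = torch.softmax(w, dim=1)
        return (w * last_hidden_state).sum(1)

# GA head: context_vector = torch.cat((gem_out, att_out), dim=-1)
</code></pre>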
<h2>2.3 Pseudo labeling for back translated texts.</h2>
<p>Data Augmentation and Pseudo Labelling: translate the summary text to Chinese, then translate it back to English, then use a good model to pseudo-label the back-translated text.</p>
<h2>2.4 Weighted loss</h2>
<p>We created a weighted loss for wording and content as follows; the ratios were decided by the RMSE ratio between content and wording. (The forward signature and the 0.5 exponent below are reconstructed from the garbled original.)</p>
<pre><code>class WeightedCRMSE(nn.Module):
    def __init__(self):
        super(WeightedCRMSE, self).__init__()
        # per-target weights, ratio based on the content/wording RMSE ratio
        self.weights = torch.tensor([0.85, 1.15]).to(device)

    def forward(self, y_pred, y_true):
        squared_errors = torch.square(y_pred - y_true)
        # column-wise RMSE, weighted per target
        weighted_root_squared_errors = torch.mean(squared_errors, dim=0) ** 0.5 * self.weights
        loss = torch.mean(weighted_root_squared_errors)
        return loss
</code></pre>
<h2>2.5 Ensemble weights based on ridge regression with large alpha</h2>
<p>We trained ridge regression out-of-fold and used the mean of the normalized coefficients as the ensemble weights (a sketch follows the list below). Both CV and LB scores were better when using ridge regression with a large alpha than when using linear regression and other methods.</p>
<ol>
<li>The intercept is zero.</li>
<li>Large alpha (500)</li>
</ol>
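<p>A small sketch of this ridge-based weight search (my interpretation; in the write-up the ridge was fit out-of-fold and the normalized coefficients were averaged across folds):</p>
<pre><code># Sketch: fit a no-intercept ridge with a large alpha on the out-of-fold
# predictions of the base models, then normalize the coefficients so they
# sum to 1 and use them as ensemble weights.
import numpy as np
from sklearn.linear_model import Ridge

def ridge_ensemble_weights(oof_preds: np.ndarray, y_true: np.ndarray, alpha: float = 500.0) -> np.ndarray:
    """oof_preds: shape (n_samples, n_models); y_true: shape (n_samples,)."""
    ridge = Ridge(alpha=alpha, fit_intercept=False)
    ridge.fit(oof_preds, y_true)
    coefs = ridge.coef_
    return coefs / coefs.sum()

# test_pred = test_preds_matrix @ ridge_ensemble_weights(oof_matrix, oof_target)
</code></pre>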
<h2>2.6 Others</h2>
<p>● Backbone<br>
○ deberta-v3-large<br>
○ electra-large<br>
○ roberta-large<br>
○ roberta-large-squad2<br>
● Model for ensemble<br>
○ LightGBM<br>
○ Ridge regression<br>
● Max length<br>
○ 300<br>
○ 512<br>
○ 768</p>
<h1>3 Not working</h1>
<p>● LSTM head<br>
● Freezing the embeddings and some layers.<br>
● AWP<br>
● MLM<br>
● 30° rotated targets<br>
● Different max length between training and inference.</p>
<h1>4 Jie’s model performance</h1>
<p>● Naming convention: backbone_maxlength_2&3gramsoverlap_dataaugmentationflag_pooling, e.g.<br>
del_768_23_aug_mmg: DeBERTa-V3-Large, max length = 768, 2- and 3-gram overlaps added to the input, data augmentation used for model training, with Mean-Max-Gem pooling.<br>
● Table below: 1st row: model name, 2nd row: CV (average of 4 folds), 3rd row: CV (oof), 4th row: LB, 5th row: inference time.<br>
● Ensemble of LB 0.437, 0.432, 0.439, 0.436, 0.433 and 0.436 gives CV: 0.45909, Public LB 0.424, Private LB: 0.464<br>
● Note: DeBERTa-Large models' inference time is 80 mins, not 125-140 mins; all other models are <=36 mins
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F530572%2F231869326fa934ba8ccf68f282a8a442%2FJies%20Model%20Performance.png?generation=1697129193727475&alt=media" alt=""></p>
<h1>5 yyykrk’s model performance</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F530572%2F692c701395194fb917ea8a5be76ef559%2Fyyykrk.png?generation=1697129202595643&alt=media" alt=""></p> |
CommonLit - Evaluate Student Summaries | Solution Write up For CommonLit - Evaluate Student Summaries | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Everyone! I'm posting my solution here to share a slightly different approach I tried in the competition.</p>
<h1>Competition Context</h1>
<p>The context of this solution is the CommonLit - Evaluate Student Summaries competition:<br>
<a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview</a></p>
<p>The data of this competition can be found here:<br>
<a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></p>
<h1>My Approach</h1>
<p>My approach was very similar to the DeBERTa + LGBM approach in this competition; however, I added a few details, such as using 2 different DeBERTa backbones to encode the summary text and the prompt text separately, and 2 extra heads which then computed both target values.</p>
<p>The idea for this approach was that I wanted to give more flexibility to the model by fine-tuning two entirely different backbones, as there may be very different writing styles that require different processing.</p>
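<p>A rough sketch of this dual-backbone idea (an illustration under my assumptions, not the exact training code): two DeBERTa encoders, one for the summary and one for the prompt, whose mean-pooled outputs are concatenated and fed to two regression heads.</p>
<pre><code># Sketch: separate backbones for summary and prompt text, two heads for
# content and wording; pooling and head sizes are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel

class DualBackboneModel(nn.Module):
    def __init__(self, model_name: str = "microsoft/deberta-v3-base"):
        super().__init__()
        self.summary_encoder = AutoModel.from_pretrained(model_name)
        self.prompt_encoder = AutoModel.from_pretrained(model_name)
        hidden = self.summary_encoder.config.hidden_size
        self.content_head = nn.Linear(2 * hidden, 1)
        self.wording_head = nn.Linear(2 * hidden, 1)

    def mean_pool(self, out, mask):
        mask = mask.unsqueeze(-1).float()
        return (out.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-6)

    def forward(self, summary_inputs, prompt_inputs):
        s = self.mean_pool(self.summary_encoder(**summary_inputs), summary_inputs["attention_mask"])
        p = self.mean_pool(self.prompt_encoder(**prompt_inputs), prompt_inputs["attention_mask"])
        feats = torch.cat([s, p], dim=-1)
        return self.content_head(feats), self.wording_head(feats)
</code></pre>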
<p>However, it is also quite reasonable to assume that passing the prompt text and the summary can give a better feature vector that combines knowledge from both, and this may be better in fact, as seen in many top solutions.</p>
<p>Lastly, using the wording and content predictions from the double DeBERTa model, I pass it into an LGBM ensemble along with the n-gram features as computed in <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect#Model-Function-Definition" target="_blank">https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect#Model-Function-Definition</a></p>
<p>My goal was to not try and overfit the model because the summaries and prompts in the hidden test dataset could have been very different from the train dataset.</p>
<p><em>Validation</em><br>
I validated my DeBERTa predictions using a random hold out validation set.<br>
I used KFold CV for LGBM predictions.</p>
<h1>Improvements</h1>
<ol>
<li>Use DeBERTa Large instead of Base</li>
<li>Use CV ensemble for DeBERTa itself</li>
<li>Made better validation set by separating by prompt and not randomly</li>
<li>Could have tried making a dataset using ChatGPT</li>
</ol> |
CommonLit - Evaluate Student Summaries | We got so close! :((( | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and CommonLit for such an amazing competition! This was one of the first non-starter/playground competitions that our team has competed in. Despite our close call I’m still really happy with the results, and this has definitely equipped us with the skills needed to conquer future Kaggle competitions.</p>
<p>Our best submission had a public LB score of 0.438, which was in the bronze threshold, but ended up getting a private LB score of 0.485, which was 0.001 away from the bronze threshold :(( That being said, I still wanted to make a post detailing our approach, since our team invested a lot of time into this competition. We found that our CV was around 0.5-0.6 points higher than the public LB, and was 0.1-0.2 points higher than the private LB.</p>
<p>Thank you to my teammates <a href="https://www.kaggle.com/yeoyunsianggeremie" target="_blank">@yeoyunsianggeremie</a> and <a href="https://www.kaggle.com/aayushpatni" target="_blank">@aayushpatni</a> for helping me with this competition! You guys were amazing :))</p>
<p>Congratulations to all of the winners! And what a shakeup!</p>
<h1>Method</h1>
<p>We used a modification of the DebertaV3 + LGBM approach shared by <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> in his public notebook. Big kudos to them!</p>
<h2>Feature Engineering</h2>
<p>We utilized more features in our submission than were present in the original notebook (a small sketch of the NLTK-based features follows the list). In particular,</p>
<ul>
<li>we added 6-gram and 12-gram features to model copying from the text instead of paraphrasing (which was associated with lower wording scores);</li>
<li>we added polarity and subjectivity scores using the NLTK library; and</li>
<li>we added counts of the NLTK pos tags in the student summary (e.g. nouns, verbs, exclamations, etc.)</li>
</ul>
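<p>A hedged sketch of the POS-tag counts and the polarity/subjectivity features; note that polarity and subjectivity are usually obtained via TextBlob, which builds on NLTK, so the exact library split here is my assumption:</p>
<pre><code># Sketch: POS-tag counts with NLTK and polarity/subjectivity scores
# (the latter via TextBlob, which is built on top of NLTK).
from collections import Counter
import nltk
from textblob import TextBlob

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def nltk_features(text: str) -> dict:
    tags = Counter(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text)))
    blob = TextBlob(text)
    return {
        "n_nouns": sum(v for t, v in tags.items() if t.startswith("NN")),
        "n_verbs": sum(v for t, v in tags.items() if t.startswith("VB")),
        "n_adjectives": sum(v for t, v in tags.items() if t.startswith("JJ")),
        "polarity": blob.sentiment.polarity,
        "subjectivity": blob.sentiment.subjectivity,
    }
</code></pre>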
<h2>Modelling</h2>
<p>We ensembled 4 LLM models: 2 DebertaV3 large models trained on individual metrics, 1 deberta v3 large model trained on both metrics, and 1 deberta v3 base model trained on both metrics. The predictions from these models were then fed into individual LGBM models, that had been tuned to each LLM model using Optuna. Finally, the resultant predictions were ensembled using hill climbing.</p>
<p>We found that a variety of training techniques helped the LLM models’ performance (a freezing sketch follows this list), including</p>
<ul>
<li>freezing the first N layers (N=6 or 18)</li>
<li>freezing the embeddings layer</li>
<li>using a max seq len of 1024 for training and a max seq len of 512 for inference</li>
<li>using step evaluation instead of epoch evaluation</li>
</ul>
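<p>A minimal sketch of the layer and embedding freezing mentioned above, assuming a Hugging Face DeBERTa-style backbone with <code>embeddings</code> and <code>encoder.layer</code> attributes (paths may differ for other architectures):</p>
<pre><code># Sketch: freeze the embedding layer and the first N transformer layers
# of a Hugging Face DeBERTa-v3 model before fine-tuning.
from transformers import AutoModel

def freeze_layers(model, n_frozen_layers: int = 6, freeze_embeddings: bool = True):
    if freeze_embeddings:
        for param in model.embeddings.parameters():
            param.requires_grad = False
    for layer in model.encoder.layer[:n_frozen_layers]:
        for param in layer.parameters():
            param.requires_grad = False
    return model

backbone = AutoModel.from_pretrained("microsoft/deberta-v3-large")
backbone = freeze_layers(backbone, n_frozen_layers=18)
</code></pre>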
<h1>What we could have done to get that sweet, sweet competition medal</h1>
<ul>
<li>We used <code>autocorrect-1.1.0</code> because of the whole licensing issue, but we should have looked into alternatives (since this was known to be buggy)</li>
<li>We could have tried ensembling the boosting models (e.g. using CatBoost / XGB / etc.)</li>
</ul>
<p>In general, we were really cautious about over-engineering our solution as we did not want to overfit to the public LB, especially given that there were not that many prompts in the training data. I do wish that we had a bigger distribution of prompts in our training data, perhaps one for each grade (grade 3-12).</p>
<h1>Code</h1>
<p>The notebooks we used for this competition can be found here:</p>
<ul>
<li><a href="https://www.kaggle.com/code/mcpenguin/inference-commonlit-student-summaries" target="_blank">Inference/Submission Notebook</a></li>
<li><a href="https://www.kaggle.com/code/mcpenguin/eda-commonlit-student-summaries" target="_blank">EDA Notebook</a></li>
<li><a href="https://www.kaggle.com/code/mcpenguin/utility-commonlit-student-summaries" target="_blank">Utility Notebook</a></li>
</ul>
<p>I personally doubt it - about 100-120 people would have to be removed before we can get a medal, and the LB seems pretty finalized already.</p> |
CommonLit Readability Prize | #24 Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, thanks to the hosts and Kaggle for hosting this interesting competition. Also, great thanks to torch ( <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> ) for sharing his/her experience, which helped me break 0.485 and get a better score.</p>
<h1>Summary</h1>
<p>My final solution consists of 2 kinds of models, 17 models in total.</p>
<ol>
<li>single model: <strong>Input</strong>: encoded raw text. <strong>Model</strong>: roberta/deberta/bart last_hidden_states with mean pooling and attention head.</li>
<li>pair model: <strong>Input</strong>: encoded baseline text and encoded raw text. <strong>Model</strong>: the same architecture for the baseline text and the raw text, then concatenate (or subtract) the two representations.</li>
</ol>
<p><em>As the hosts say, every score is a comparison of one text to a baseline text.</em> <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/236403" target="_blank">here</a></p>
<h1>Fine-tune</h1>
<p>My final models are fine-tuned with almost the same fine-tuning strategy, that is:</p>
<ul>
<li>Epochs: 5 for base models and 3 for large models</li>
<li>Batch Size: almost maximize the GPU memory. (4 - 24)</li>
<li>Gradient Accumulate: 1-2</li>
<li>Optimizer: AdamW + Lookahead</li>
<li>LR Scheduler: None for base model and cosine_schedule_with_warmup for large model (warmup steps > 0)</li>
<li>LLRD: use torch's shared settings and set base_lr=5e-5 (a grouping sketch follows this list)</li>
<li>Randomly re-initialize top layers: 1-2 for base models and 4-6 for large models</li>
<li>Evaluate every n steps: every 10 to 60 steps</li>
</ul>
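<p>A hedged sketch of the LLRD grouping referenced in the list above; the decay factor and attribute names follow common practice for RoBERTa/DeBERTa-style models rather than the exact shared settings:</p>
<pre><code># Sketch: layer-wise learning-rate decay parameter groups for AdamW.
# Deeper (earlier) layers get smaller learning rates than the top layers.
import torch

def llrd_param_groups(model, base_lr: float = 5e-5, decay: float = 0.9):
    groups = []
    layers = [model.embeddings] + list(model.encoder.layer)
    lr = base_lr
    for layer in reversed(layers):          # top layer first
        groups.append({"params": list(layer.parameters()), "lr": lr})
        lr *= decay                          # shrink lr as we go down
    return groups

# optimizer = torch.optim.AdamW(llrd_param_groups(backbone), lr=5e-5)
# (the author additionally wrapped the optimizer with Lookahead)
</code></pre>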
<h1>Scoring</h1>
<p>Just report my best score of the same base transformers.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Pair(1/0)</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>Roberta-large</td>
<td>0</td>
<td>0.47835</td>
<td>0.466</td>
<td>0.470</td>
</tr>
<tr>
<td>Roberta-large</td>
<td>1</td>
<td>0.46694</td>
<td>0.460</td>
<td>0.459</td>
</tr>
<tr>
<td>Roberta-base</td>
<td>1</td>
<td>0.47325</td>
<td>0.475</td>
<td>0.477</td>
</tr>
<tr>
<td>Deberta-large</td>
<td>0</td>
<td>0.46440</td>
<td>0.460</td>
<td>0.461</td>
</tr>
<tr>
<td>Deberta-large</td>
<td>1</td>
<td>0.46673</td>
<td>0.462</td>
<td>0.464</td>
</tr>
<tr>
<td>Bart-large</td>
<td>0</td>
<td>0.46184</td>
<td>0.462</td>
<td>0.464</td>
</tr>
<tr>
<td>Bart-large</td>
<td>1</td>
<td>0.46401</td>
<td>0.463</td>
<td>0.463</td>
</tr>
<tr>
<td>Bart-base</td>
<td>1</td>
<td>0.47103</td>
<td>0.470</td>
<td>0.472</td>
</tr>
<tr>
<td>Xlnet-large-cased</td>
<td>1</td>
<td>0.47427</td>
<td>0.475</td>
<td>0.465</td>
</tr>
<tr>
<td>Electra-large</td>
<td>1</td>
<td>0.47100</td>
<td>0.467</td>
<td>0.472</td>
</tr>
<tr>
<td>xlm-roberta-large</td>
<td>1</td>
<td>0.48622</td>
<td>0.477</td>
<td>0.478</td>
</tr>
<tr>
<td>luke-large</td>
<td>1</td>
<td>0.47148</td>
<td>0.471</td>
<td>0.465</td>
</tr>
<tr>
<td>funnel-large-base</td>
<td>1</td>
<td>0.47117</td>
<td>0.469</td>
<td>0.468</td>
</tr>
</tbody>
</table>
<p>I could not make other models work; with them I could not get a score below 0.5 even for fold 0.</p>
<h1>Ensemble</h1>
<p>Simple <code>BayesianRidge</code> regression. Best score: CV is 0.447 and Public LB 0.451 Private LB 0.452.</p>
<p>Thanks!</p>
<p>Thank you! Congratulations on winning another gold medal!</p>
<p>You are welcome!</p>
<p>Without your help, I could not get these improvement. Really thanks! Hope you to get your notebook GM someday!</p> |
CommonLit Readability Prize | 100th place : Ensemble public notebooks | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Since every attempt to pre-train RoBERTa failed for me, I used publicly pre-trained RoBERTas from <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a> and <a href="https://www.kaggle.com/tutty1992" target="_blank">@tutty1992</a>. I ensembled roberta-base and roberta-large with different strategies; here are the methods.</p>
<h2>1. Roberta-Base + SVR</h2>
<p>Fine-tuning <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a> 's model and predicting with linear layer, I could get </p>
<table>
<thead>
<tr>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.465</td>
<td>0.466</td>
</tr>
</tbody>
</table>
<p>Then I tried to predict with a 5-fold SVR on the training data, without using the linear layer (a sketch follows the table below). After tuning the regularization parameter for each fold, I could get a slightly better score,</p>
<table>
<thead>
<tr>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.464</td>
<td>0.464</td>
</tr>
</tbody>
</table>
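<p>My understanding of the SVR head, as a sketch rather than the author's code: extract fixed mean-pooled embeddings from the fine-tuned RoBERTa and fit an sklearn SVR per fold instead of the linear layer.</p>
<pre><code># Sketch: replace the linear head with an SVR fit on frozen RoBERTa embeddings.
import numpy as np
import torch
from sklearn.svm import SVR

@torch.no_grad()
def mean_pooled_embeddings(model, tokenizer, texts, device="cuda", max_len=256):
    embs = []
    for text in texts:
        enc = tokenizer(text, truncation=True, max_length=max_len, return_tensors="pt").to(device)
        hidden = model(**enc).last_hidden_state          # (1, L, H)
        mask = enc["attention_mask"].unsqueeze(-1)
        embs.append(((hidden * mask).sum(1) / mask.sum(1)).squeeze(0).cpu().numpy())
    return np.stack(embs)

# X_train = mean_pooled_embeddings(backbone, tokenizer, train_texts)
# svr = SVR(C=10.0, kernel="rbf")          # regularization tuned per fold in the write-up
# svr.fit(X_train, train_targets)
# preds = svr.predict(mean_pooled_embeddings(backbone, tokenizer, test_texts))
</code></pre>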
<h2>2. Roberta-Large with different poolers and seeds</h2>
<p>Fine-tuning <a href="https://www.kaggle.com/tutty1992" target="_blank">@tutty1992</a> 's model and referring to <a href="https://www.kaggle.com/jcesquiveld" target="_blank">@jcesquiveld</a> , I tried to ensemble Roberta-Large models with different poolers and different seeds. Although each model scores around 0.468 on the public/private leaderboard, ensembling 6 different models (2 different poolers x 3 different seeds) led to higher performance, as I expected,</p>
<table>
<thead>
<tr>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.463</td>
<td>0.463</td>
</tr>
</tbody>
</table>
<p>Besides, I've also ensembled public notebooks and datasets from <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> , <a href="https://www.kaggle.com/chamecall" target="_blank">@chamecall</a> , <a href="https://www.kaggle.com/rajat95gupta" target="_blank">@rajat95gupta</a> and <a href="https://www.kaggle.com/nomlmean" target="_blank">@nomlmean</a> to get a score</p>
<table>
<thead>
<tr>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.457</td>
<td>0.457</td>
</tr>
</tbody>
</table>
<p>The code will be made public after a reference checkup. Well done everyone, and thank you to everybody who shared great insights and their excellent work!!</p>
CommonLit Readability Prize | 114th Place Solution: standard_error Works as a Feature | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Thank you all !</h1>
<p>First, I would like to thank the competition hosts and the Kaggle community.<br>
I enjoyed it, learned so many things, and was encouraged by you all.<br>
Especially, I want to give great thanks to <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a>, <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a>, and <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a>. Their notebooks and discussions inspired and guided me a lot.</p>
<p>This is my first NLP competition. I'm very surprised to win silver medal, and so glad.</p>
<p><br></p>
<h1>Key Points</h1>
<p>In this solution, there are 3 important points.</p>
<ul>
<li>Standard Error as a Feature</li>
<li>Model Diversity (BERT variants, custom head, LGBM……)</li>
<li>Stacking Strategy (weighted ensemble)</li>
</ul>
<p>I will explain them one by one.<br>
I did not use MLM pretraining nor external datasets. Just train dataset only.</p>
<p><br></p>
<h3>Standard Error as a Feature (<strong>MOST IMPORTANT</strong>)</h3>
<p>I noticed the standard_error is a key feature 3 days before the deadline.<br>
The experiment notebook is here.<br>
<a href="https://www.kaggle.com/atsushiiwasaki/clrp-standard-error-as-a-feature" target="_blank">https://www.kaggle.com/atsushiiwasaki/clrp-standard-error-as-a-feature</a></p>
<p><strong>Main Result</strong></p>
<table>
<thead>
<tr>
<th>Stacking Layer</th>
<th>CV Score</th>
<th>Boost</th>
</tr>
</thead>
<tbody>
<tr>
<td>Linear (baseline)</td>
<td>0.4567</td>
<td>-</td>
</tr>
<tr>
<td>LightGBM (without standard_error)</td>
<td>0.4596</td>
<td>-0.0029</td>
</tr>
<tr>
<td>LightGBM (with standard_error)</td>
<td>0.3959</td>
<td>0.069</td>
</tr>
</tbody>
</table>
<p>This experiment shows that standard_error is a key feature, comparable to the BERT predictions.<br>
The problem is how to get the standard_error, since it is not given in the test set.</p>
<p>I dealt with this problem by using LightGBM to predict standard_error.<br>
This is where feature engineering came in (a sketch follows).</p>
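<p>A hedged sketch of this two-step idea (my reconstruction; feature and variable names are assumptions): predict standard_error with LightGBM from simple text features plus the BERT predictions, then feed the predicted standard_error into the target-level LightGBM.</p>
<pre><code># Sketch: step 1 - predict standard_error from engineered features;
#         step 2 - use that prediction as an extra feature for the target model.
import lightgbm as lgb
import pandas as pd

def simple_text_features(excerpts: pd.Series) -> pd.DataFrame:
    return pd.DataFrame({
        "n_chars": excerpts.str.len(),
        "n_words": excerpts.str.split().str.len(),
        "n_sentences": excerpts.str.count(r"[.!?]") + 1,
    })

# X_se: text features + out-of-fold BERT predictions, y_se: train standard_error
# se_model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
# se_model.fit(X_se, y_se)
# train["pred_se"] = oof_se_predictions           # out-of-fold to avoid leakage
# test["pred_se"] = se_model.predict(X_se_test)
# target_model = lgb.LGBMRegressor().fit(X_target_with_pred_se, y_target)
</code></pre>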
<p><br></p>
<h3>Model Diversity</h3>
<p>LB was very unstable for a single model prediction. So, I decided to cover the instability with <br>
model diversity.<br>
I tried to train many diverse models.</p>
<ul>
<li>architecture (Roberta, Electra, XLNet, Funnel, Albert, DeBerta, BART……LGBM)</li>
<li>size (base or large)</li>
<li>head (attention, conv1D, meanpooling and so on)</li>
</ul>
<p>However, since I am not good at fine-tuning (or for some unknown reason), only 8 models survived.<br>
The 8 models are shown below.</p>
<table>
<thead>
<tr>
<th>Architecture</th>
<th>Size</th>
<th>Head</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>Roberta</td>
<td>large</td>
<td>Attentions (using last 4 layers)</td>
<td>0.4757</td>
</tr>
<tr>
<td>Roberta</td>
<td>large</td>
<td>Attentions-Conv1D</td>
<td>0.4761</td>
</tr>
<tr>
<td>Roberta</td>
<td>base</td>
<td>Attention</td>
<td>0.4746</td>
</tr>
<tr>
<td>Electra</td>
<td>large</td>
<td>Attentions (using last 4 layers)</td>
<td>0.4760</td>
</tr>
<tr>
<td>XLNet</td>
<td>large</td>
<td>Attentions (using last 4 layers)</td>
<td>0.4850</td>
</tr>
<tr>
<td>Electra</td>
<td>large</td>
<td>Attentions-Conv1D</td>
<td>0.4818</td>
</tr>
<tr>
<td>Roberta</td>
<td>large</td>
<td>Meanpooling</td>
<td>0.4772</td>
</tr>
<tr>
<td>Funnel</td>
<td>large</td>
<td>Attention</td>
<td>0.4836</td>
</tr>
</tbody>
</table>
<p><br></p>
<h3>Stacking Strategy</h3>
<p>Here is the stacking scheme figure.<br>
It is very complicated, especially the standard_error prediction part.</p>
<p>If standard_error were not used, the stacking scheme would be much simpler.<br>
<img src="https://i.postimg.cc/prVYT4mh/CLRP-Stacking-Scheme.png" alt="stacking scheme"><br>
<br></p>
<h1>Results</h1>
<p>My models overfitted somewhat.</p>
<table>
<thead>
<tr>
<th>Strategy</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Stacking BERTs</td>
<td>0.457</td>
<td>0.458</td>
<td>0.461</td>
</tr>
<tr>
<td>LGBM using BERTs Prediction + S.E.</td>
<td>0.454</td>
<td>0.458</td>
<td>0.459</td>
</tr>
<tr>
<td>Stacking BERTs + LGBM</td>
<td>0.452</td>
<td>0.456</td>
<td>0.458</td>
</tr>
</tbody>
</table>
<p><br></p>
<h1>What did not Work</h1>
<ul>
<li>MLM pretraining</li>
<li>Data augmentation (like <a href="https://arxiv.org/abs/1901.11196" target="_blank">Easy Data Augmentation</a>)<ul>
<li>I set <code>data_collator = DataCollatorForLanguageModeling</code>, then dynamically masked texts. I expected this trick to work, but it did not.</li>
<li>As a result, I just set <code>data_collator = DataCollatorWithPadding</code>, so only dynamic padding was applied.</li></ul></li>
<li>Back translation</li>
<li>Layer reinitialization</li>
<li>ALBERT, DeBerta, BART (It is too difficult to finetune them……)</li>
<li>SmartBatchingSampler<ul>
<li>Fixing the data order to an extent may not be good in a small-dataset competition.</li></ul></li>
<li><a href="https://www.kaggle.com/atsushiiwasaki/clrp-attention-head-variants" target="_blank">Multi-head Attention</a><ul>
<li>I tried to divide the simple attention head into some blocks. This does not work.<br>
<br></li></ul></li>
</ul>
<h1>Other Ideas</h1>
<ul>
<li>Stratify the dataset on prediction error<ul>
<li>Stratified K-Fold on target, s.e., and target & s.e. did not seem to work well enough, so I decided to stratify the dataset on prediction error. There may be some important feature distributions to stratify on.</li>
<li><a href="https://www.kaggle.com/atsushiiwasaki/clrp-stratify-on-target-s-e-bins-from-sturge-s" target="_blank">CLRP: Stratify on Target & S.E. Bins from Sturge's</a></li>
<li><a href="https://www.kaggle.com/atsushiiwasaki/clrp-stratify-on-predictability" target="_blank">CLRP: Stratify On Predictability</a></li>
<li><a href="https://www.kaggle.com/atsushiiwasaki/split-based-on-target-standard-error" target="_blank">Split based on Target & Standard Error
</a></li>
<li>However, this strategy would not work so well.</li></ul></li>
<li>Larger depth in attention head in Funnel-large<ul>
<li>In Roberta, Electra and XLNet, the depth of the attention head is set to 512~514 (I chose the depth based on the embedding size).</li>
<li>On the other hand, it is set to 768 in Funnel using only the last hidden layer, and this worked.</li>
<li>Maybe this is due to the Funnel architecture: we can extract the best encoded representation from the last hidden layer of the Funnel encoder using a deeper attention head with more representational capacity.</li></ul></li>
</ul>
<p><br></p>
<h1>Ideas that I did not come up with</h1>
<ul>
<li><a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/257844" target="_blank">Psuedo Labeling</a></li>
<li><a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/258554" target="_blank">Gaussian Process Regression</a></li>
<li><a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/260800" target="_blank">Sentence Transformers</a></li>
<li><a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/258312" target="_blank">Bradley-Terry (BT) Loss</a></li>
<li><a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/260939" target="_blank">Binary Pre-training</a></li>
</ul>
<p><br></p>
<h1>Notebooks Related to This Solution</h1>
<ul>
<li>Data<ul>
<li><a href="https://www.kaggle.com/atsushiiwasaki/how-many-samples-do-i-want" target="_blank">Dataset Size Dependency</a></li></ul></li>
<li>Model Training<ul>
<li>BERT Baseline (<a href="https://www.kaggle.com/atsushiiwasaki/commonlit-bert-stratified-k-fold-baseline-train" target="_blank">train</a>, <a href="https://www.kaggle.com/atsushiiwasaki/commonlit-bert-stratified-k-fold-baseline-infer" target="_blank">infer</a>)</li>
<li>BERT variants for target<ul>
<li>Electra-large + Attentions + Conv1D (<a href="https://www.kaggle.com/atsushiiwasaki/clrp-electra-large-attentions-conv1d-train" target="_blank">train</a>, <a href="https://www.kaggle.com/atsushiiwasaki/clrp-electra-large-att-conv1d-infer" target="_blank">infer</a>)</li>
<li><a href="https://www.kaggle.com/atsushiiwasaki/clrp-roberta-large-att-mask-act-infer" target="_blank">Roberta-large + Attentions</a></li>
<li><a href="https://www.kaggle.com/atsushiiwasaki/clrp-roberta-base-oof-task-fineturning-infer" target="_blank">Roberta-base + Attention</a></li>
<li><a href="https://www.kaggle.com/atsushiiwasaki/clrp-funnel-large-attentions-act" target="_blank">Funnel-large + Attention</a></li></ul></li>
<li>LGBM for standard_error & target<ul>
<li><a href="https://www.kaggle.com/atsushiiwasaki/clrp-standard-error-feature-eda?scriptVersionId=69709698" target="_blank">CLRP: Standard Error Feature EDA</a></li>
<li>This notebook is not really EDA, but that is how I named it……</li></ul></li>
<li>Model Stacking<ul>
<li><a href="https://www.kaggle.com/atsushiiwasaki/clrp-model-stacking-with-standard-error" target="_blank">CLRP: Model Stacking with Standard Error</a></li></ul></li>
</ul>
<p><a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> Thank you for your comment. In addition to target prediction models (BERT variants), I created standard_error prediction model (LightGBM). The features for the model were engineered from the excerpts (for example, number of sentences) and BERT variants predictions.</p> |
CommonLit Readability Prize | 128th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Very interesting competition and congrats to the winners.<br>
I was ranked 1154th on the public leaderboard and 128th on the private leaderboard. My final submission is an ensemble of 4 models, and I did not use any external data.</p>
<p>Generally, I used a comparison approach like many others, since I found the popular regression approach very unstable. The first approach is a representation-based one: after encoding a batch of texts, I construct pairs for comparison with some reshape tricks (a sketch follows this paragraph), so although my training batch size is 16, the effective batch size is 16*16 pairs. Since the texts are encoded by BERT separately, I used an ESIM-like inference module to enhance the interaction between the two semantic embedding vectors. Both regression (on the target delta) and binary classification tasks were used to train models, which provides good model diversity. You can find the code in this notebook <a href="https://www.kaggle.com/wuyhbb/inference-large?scriptVersionId=68357748" target="_blank">https://www.kaggle.com/wuyhbb/inference-large?scriptVersionId=68357748</a></p>
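<p>A rough sketch of the reshape trick for building all pairs within a batch (an illustration of the idea, not the notebook's exact code):</p>
<pre><code># Sketch: given a batch of B pooled embeddings, build all B*B ordered pairs
# by broadcasting, so a batch of 16 texts yields 16*16 comparisons.
import torch

def build_pairs(embeddings: torch.Tensor, targets: torch.Tensor):
    B, H = embeddings.shape
    left = embeddings.unsqueeze(1).expand(B, B, H)        # (B, B, H)
    right = embeddings.unsqueeze(0).expand(B, B, H)       # (B, B, H)
    pair_feats = torch.cat([left, right, left - right, left * right], dim=-1)
    delta = targets.unsqueeze(1) - targets.unsqueeze(0)   # regression target
    label = (delta > 0).float()                           # binary-classification target
    return pair_feats.reshape(B * B, -1), delta.reshape(-1), label.reshape(-1)
</code></pre>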
<p>I also trained two interaction-based models. In these models, paired texts are encoded together. Theoretically, this approach is better than the above one, because it can model the interaction between the two sentences better, and my CV results supported this. However, this approach has a significant drawback: it is very slow. In the representation-based approach, when predicting the test set I only need to encode the anchor excerpts (excerpts I sampled from the training set) once; in this approach, I need to encode every pair of excerpts. Code for this approach can be found in this notebook (OneBertxxx) <a href="https://www.kaggle.com/wuyhbb/inference-large-ensemble2?scriptVersionId=69518962" target="_blank">https://www.kaggle.com/wuyhbb/inference-large-ensemble2?scriptVersionId=69518962</a></p>
<p>My CV results:<br>
Representation-based binary classification with Roberta-Large: 0.47703<br>
Representation-based regression with Roberta-Large: 0.47878<br>
Interaction-based binary classification with Roberta-Large: 0.47296<br>
Interaction-based regression with Roberta Large: 0.47432</p>
<p>When training the models, I used the standard error for augmentation. It can improve CV scores when the training task is binary classification.</p>
<p>Special Thanks to Z by HP & NVIDIA for sponsoring me a Z4 Workstation with dual RTX6000 GPU and a ZBook with RTX5000 GPU!<br>
Training large transformer models with 24GB VRAM is awesome!</p> |
CommonLit Readability Prize | 12th (public 19th) place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/yutoshibata" target="_blank">@yutoshibata</a> 😆 Maybe I'm slow starter ;)</p>
<p>Training flow is:</p>
<ol>
<li>prepare wikipedia data with pseudo label</li>
<li>pre-training with MLM and pseudo-label prediction at the same time, in order to capture the difference between Simple and normal Wikipedia<ul>
<li>This pre-training uses the competition data only for MLM (not the target); I omitted that above.</li></ul></li>
<li>re-initializing some of the language model layers and the output layers</li>
<li>fine-tuning on the competition data target</li>
</ol>
<p>On pre-training, the settings are:</p>
<pre><code>learning_rate: 5e-5 # no layer-wise lr
batch_size: 32
num_epochs: 2
</code></pre>
<p>Pre-training took a lot of time (about 1~2 days on a V100) since I did it for all 5 folds.</p>
<p>Thanks <a href="https://www.kaggle.com/snnclsr" target="_blank">@snnclsr</a> </p>
<p>Yes, re-initializing the top N layers of the pre-trained language models. I used the <code>initializer_range</code> from the original huggingface config.<br>
Here is some sample code:</p>
<pre><code>class Model(nn.Module):
    ...
    def initialize(self):
        # re-initialize the top `num_reinit_layers` transformer layers
        for i in range(self.num_reinit_layers):
            self.model.encoder.layer[-(1 + i)].apply(self._init_weight)

    def _init_weight(self, module):
        # reset linear layers using the backbone's original initializer_range
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=self.model.config.initializer_range)
            if module.bias is not None:
                module.bias.data.zero_()
</code></pre>
<p>First of all, Thanks Kaggle and the host for organizing this interesting competition.<br>
I also want to thank the kagglers who made this competition so much fun for me. Thank you so much!</p>
<p>my final inference notebook: <a href="https://www.kaggle.com/kentaronakanishi/clrp-095-ensemble13-cv-kaggle93" target="_blank">https://www.kaggle.com/kentaronakanishi/clrp-095-ensemble13-cv-kaggle93</a></p>
<h1>Summary</h1>
<p>Final submissions are 13 and 11 models ensemble. The weights of ensembles are calculated by Nelder-mead with an objective using CV and LB.</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>ensemble 13</td>
<td>0.4442</td>
<td>0.450</td>
<td>0.449</td>
</tr>
<tr>
<td>ensemble 11</td>
<td>0.4451</td>
<td>0.449</td>
<td>0.449</td>
</tr>
</tbody>
</table>
<p>I pre-trained huggingface base models on Wikipedia (Simple/normal) data with MLM and a pseudo target, and fine-tuned them on the competition data.<br>
I didn't use other external data because I couldn't judge whether using scraped data was allowed.</p>
<p>The things that gave big improvements were re-initialization, pre-training with a pseudo target, and ensembling.</p>
<h1>Models</h1>
<p>Base models for 13 models ensemble are nine <code>microsoft/deberta-large</code>, two <code>albert-xxlarge-v2</code>, and one for <code>google/electra-large-discriminator</code> and <code>roberta-large</code>.</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>deberta-large</code> multi custom head</td>
<td>0.474</td>
<td>0.464</td>
<td>0.468</td>
</tr>
<tr>
<td><code>deberta-large</code> multi custom head</td>
<td>0.467</td>
<td>0.462</td>
<td>0.467</td>
</tr>
<tr>
<td><code>deberta-large</code> attn head</td>
<td>0.467</td>
<td>0.461</td>
<td>0.463</td>
</tr>
<tr>
<td><code>deberta-large</code> multi custom head</td>
<td>0.465</td>
<td>0.457</td>
<td>0.457</td>
</tr>
<tr>
<td><code>deberta-large</code> multi custom head</td>
<td>0.463</td>
<td>0.456</td>
<td>0.456</td>
</tr>
<tr>
<td><code>roberta-large</code> multi custom head</td>
<td>0.475</td>
<td>0.465</td>
<td>0.456</td>
</tr>
<tr>
<td><code>deberta-large</code> multi custom head</td>
<td>0.462</td>
<td>0.456</td>
<td>0.460</td>
</tr>
<tr>
<td><code>deberta-large</code> multi custom head</td>
<td>0.463</td>
<td>0.455</td>
<td>0.459</td>
</tr>
<tr>
<td><code>deberta-large</code> multi custom head</td>
<td>0.463</td>
<td>0.455</td>
<td>0.460</td>
</tr>
<tr>
<td><code>deberta-large</code> with small (not pre-trained) network</td>
<td>0.467</td>
<td>0.458</td>
<td>0.458</td>
</tr>
<tr>
<td><code>alberta-xxlarge-v2</code> cls head</td>
<td>0.475</td>
<td>0.461</td>
<td>0.460</td>
</tr>
<tr>
<td><code>alberta-xxlarge-v2</code> multi custom head</td>
<td>0.467</td>
<td>0.463</td>
<td>0.454</td>
</tr>
<tr>
<td><code>electra-large</code> multi custom head</td>
<td>0.471</td>
<td>0.464</td>
<td>0.461</td>
</tr>
</tbody>
</table>
<p>Best public LB by single model is 0.453, but this is not used for ensemble.</p>
<p>The custom head is a combination of cls, avg pool, max pool, and attn pool. Each head has an output layer to the target value, and the outputs are then averaged (or weighted-averaged).</p>
<h1>Training</h1>
<h2>Basic settings</h2>
<pre><code>num_folds: 5
num_epochs: 6
lr_scheduler_epochs: 10
learning_rate: 0.00002
learning_rate_output: 0.0001
batch_size: 16
max_length: 256
validation_interval: 10
</code></pre>
<h2>Pre-training</h2>
<p>I further pre-trained the huggingface pre-trained models with Wikipedia data. I used the data by <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a>. Thanks a lot!</p>
<p>In addition to MLM, I used pseudo labels from the best ensemble model at that time for the Wikipedia data and trained on them. The details are below (a filtering sketch follows the list):</p>
<ol>
<li>making good models and ensemble</li>
<li>predicting the target for the Wikipedia data (Simple/normal) with the best ensemble models; this is predicted per fold to prevent target leakage.</li>
<li>filtering the Wikipedia data by<ul>
<li>having an excerpt in both Simple and normal Wikipedia</li>
<li>the pseudo-target looking correct: satisfying <code>simple - margin > normal</code> where margin = 0.5</li></ul></li>
<li>pre-training on the Wikipedia data with two tasks: MLM and pseudo-target prediction.</li>
</ol>
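<p>A small sketch of the filtering in step 3 (my reconstruction; the column names are assumptions):</p>
<pre><code># Sketch: keep only Simple/normal Wikipedia pairs whose pseudo labels respect
# the expected ordering "simple is easier than normal" by a margin.
import pandas as pd

MARGIN = 0.5

def filter_pairs(pairs: pd.DataFrame) -> pd.DataFrame:
    """pairs has columns: title, simple_excerpt, normal_excerpt,
    simple_pred, normal_pred (pseudo labels; higher = easier)."""
    has_both = pairs["simple_excerpt"].notna() & pairs["normal_excerpt"].notna()
    looks_correct = pairs["simple_pred"] - MARGIN > pairs["normal_pred"]
    return pairs[has_both & looks_correct].reset_index(drop=True)
</code></pre>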
<p>This pre-training and fine-tuning framework improved public LB scores a lot.</p>
<h2>Others</h2>
<ul>
<li>re-initialization top N layers of pre-trained model. N=5<ul>
<li>worked a lot</li>
<li><a href="https://arxiv.org/abs/2006.05987" target="_blank">https://arxiv.org/abs/2006.05987</a></li></ul></li>
<li>layerwise LR</li>
<li>RDrop<ul>
<li>maybe work a little</li>
<li><a href="https://arxiv.org/abs/2106.14448" target="_blank">https://arxiv.org/abs/2106.14448</a></li></ul></li>
<li>dropout output layer 0.2 ~ 0.3</li>
<li>LAMB optimizer make training stable (but no improvement on score)</li>
</ul>
<h1>Not worked for me</h1>
<ul>
<li>classification approach<ul>
<li>cross entropy</li>
<li>RMSE with a softmax-weighted sum over bin center values</li>
<li>histogram loss<ul>
<li>same or slightly worse performance than the regression approach</li>
<li><a href="https://arxiv.org/abs/1806.04613" target="_blank">https://arxiv.org/abs/1806.04613</a></li></ul></li></ul></li>
<li>pairwise comparison simulating the Bradley-Terry approach<ul>
<li>make pairs from all combinations of excerpts and predict which is easier to read</li>
<li>pairwise comparison was harder than regression for me</li></ul></li>
<li>linguistic features<ul>
<li>worked little or not at all, either fed to the output layer or as SVM features</li></ul></li>
<li>hidden_dropout = 0.0<ul>
<li>CV improved but LB didn't change or even got worse</li></ul></li>
<li>GaussianLoss, MAE, Huber, RankLoss gave no improvements</li>
<li>Target-balanced loss (the bigger the absolute target, the bigger the loss): no improvement</li>
<li>pseudo labeling noisy training data<ul>
<li>CV of course improved, but LB did not</li></ul></li>
</ul> |
CommonLit Readability Prize | 136th place solution (top 4% - silver) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution is an ensemble of five different architectures, some manual empirical tuning, and some seed re-runs to help stabilize the predictions and generalize to the private test set.</p>
<p>Ensemble:</p>
<ul>
<li>2 roberta-base models, each one with a different head (5 fold CV each)</li>
<li>ExtraTreesRegressor on the embeddings extracted from roberta-base (5 fold CV, 2 seeds)</li>
<li>1 roberta-large (5 fold)</li>
<li>1 Roberta-large Mean Pooling model (5 fold, 2 seeds)</li>
</ul>
<p>Total: 35 models.</p>
<p>All Hugging Face models were further pre-trained in the competition dataset, before the fine-tuning step.</p>
<p>Seed averaging improved the public LB score by ~0.002.</p>
<p>I want to thank some kagglers that were important to this score: <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a>, <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a>, <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a>, and <a href="https://www.kaggle.com/rajat95gupta" target="_blank">@rajat95gupta</a>. There were many others that helped one way or the other, by publicly sharing their works and thoughts.</p>
<p>Special thanks to my teammate <a href="https://www.kaggle.com/jbomitchell" target="_blank">@jbomitchell</a>, who kept us updated with the most important public discussions and notebooks shared in this competition and also provided many valuable insights about the competition and train/test data.</p>
<p>Finally, thanks to everyone who joined, hosted or was a part of this competition!</p>
<p>Code is <a href="https://www.kaggle.com/hinepo/clpr-5-models-v1-1" target="_blank">here</a>.</p> |
CommonLit Readability Prize | 14th Place Solution with best single model only scored 0.461/0.463 on LB | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We will try to be as brief as possible.</p>
<p><strong>Step 1 - Training Part:</strong><br>
We tried almost all the models from Hugging Face; among them, XLNet was the winner for us.<br>
a. With the original dataset: our best single model was an XLNet-large model (public LB 0.461, private LB 0.463).</p>
<p>b. Using pseudo labels for training: we blended some of our selected models to get a public LB score of 0.455, used the blend to generate pseudo labels for the legally usable public data as external data, and continued training. Our best pseudo-label model was a RoBERTa-large model (public LB 0.460, private LB 0.456).</p>
<p>c. A special remark on the training process: we used the standard error as a second loss term, e.g.<br>
<code>loss = torch.sqrt(loss_fn(logits, labels.view(-1))) + 0.6*torch.sqrt(loss_fn(logits, (labels-stds).view(-1)))</code>.</p>
<p><strong>Step 2 - Avoiding the shake-up with multi-level stacking.</strong><br>
In the end we used more than 20 models: half of them trained on the original dataset and half on the pseudo-labeled dataset, with quite different CV scores. Blending only got us to 0.451 and 0.453 on the public and private LB respectively. How to move on?</p>
<p>Stacking. But with so many models of very different CV scores, normal stacking did not work so well at this point. We applied multi-level stacking, using Lasso as the base model. All the low-score models were placed at level 1, the pseudo-label models at level 2, and linear models at level 3. This setup got us to 14th place.</p>
<p>However, it was not our best score. We also selected 12 models and applied 12-level stacking, i.e. each level appends just one model's OOF/prediction; public and private LB were both 0.448, with no overfitting. If more models were added, it might score even higher.</p>
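<p>An illustrative sketch of this level-wise Lasso stacking (my interpretation; the real pipeline worked on OOF predictions per fold and per level):</p>
<pre><code># Sketch: at each stacking level, append one more model's predictions and
# refit a Lasso on [previous level output, new model prediction].
import numpy as np
from sklearn.linear_model import Lasso

def multilevel_lasso_stack(oof_preds: np.ndarray, y: np.ndarray, alpha: float = 1e-3):
    """oof_preds: (n_samples, n_models), ordered weakest model first."""
    current = oof_preds[:, 0]
    models = []
    for m in range(1, oof_preds.shape[1]):
        X = np.column_stack([current, oof_preds[:, m]])
        lasso = Lasso(alpha=alpha).fit(X, y)
        models.append(lasso)
        current = lasso.predict(X)      # output of this level feeds the next
    return models, current
</code></pre>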
<p><strong>Step 3 - detailed statistical data analysis</strong></p>
<p>We worked hard on selecting the final submissions since we noticed there was a big chance of a shake-up on the LB. It took us almost 3 days. We did hundreds of plots and different statistical analyses. It was worth it, especially with such a small dataset.</p>
<p>That is it, thanks for reading guys!</p> |
CommonLit Readability Prize | 15th Place - Trust Your CV and Model Diversity! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>15th Place Team “Trust Your CV” and Trust Diversity!</h2>
<p>Thank you Kaggle and CommonLit for a fun and exciting competition. Thank you team for a great collaboration! <a href="https://www.kaggle.com/mobassir" target="_blank">@mobassir</a> <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> <a href="https://www.kaggle.com/ivanaerlic" target="_blank">@ivanaerlic</a> <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> Our team strategy was diversity of NLP transformer models because the train dataset was small and risk of overfitting was large. Besides diversity of backbones, our secret sauce was lots of creative model heads and creative training schedules including <a href="https://docs.rapids.ai/api/cuml/stable/api.html#cuml.svm.SVR" target="_blank">RAPIDS cuml SVR</a> head which boosted CV LB +0.002!</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/commonlit_model.png" alt=""></p>
<h2>Congratulate Mobassir and Ragnar Becoming Competition Masters!</h2>
<p><a href="https://www.kaggle.com/mobassir" target="_blank">@mobassir</a> and <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> have been sharing high quality notebooks and discussions with the Kaggle community for years. Their code and discussion sharing has personally helped me achieve gold in previous competitions! I’m excited to share in their current achievement of gold medal and becoming Kaggle Competition Masters. Please help me congratulate them. Great job guys!</p>
<h2>Introduction</h2>
<p>Our team began with <a href="https://www.kaggle.com/mobassir" target="_blank">@mobassir</a> and <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> who had achieved top 50 among 3000 teams. I was happy when they invited me to their team because I was struggling to get in the top 200 teams! As a team of three we decided diversity was the best approach in this competition. We invited <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> and <a href="https://www.kaggle.com/ivanaerlic" target="_blank">@ivanaerlic</a> to join our team. These two Kagglers have demonstrated they can build creative diverse strong single models in their previous competitions. They succeeded once again! Thank you <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> and <a href="https://www.kaggle.com/ivanaerlic" target="_blank">@ivanaerlic</a> , your models are fantastic!</p>
<h2>Trust Your CV</h2>
<p>Every model we computed CV and LB. If both were good, we added the model to our ensemble. Our final ensemble had CV 0.452, Public LB 0.451, and Private LB 0.450. Notice that there was a strong connection between CV, Public LB, and Private LB.</p>
<h2>Mobassir - PyTorch - DeBERTa-Large</h2>
<p>Configuration (CV 0.472 LB 0.463) : </p>
<ul>
<li>Used single seed (63)</li>
<li>Used self-attention head</li>
<li>Used layerwise learning weight decay</li>
<li>Trained for 7 epochs </li>
<li>Used learning rate 0.00003 for AdamW</li>
<li>Used batch size and validation frequency = 8</li>
<li>Used max_length = 300</li>
<li>Used abhishek thakur’s bin based stratified5fold</li>
<li>Tried SVR head but couldn’t improve cv furthermore because we think the self attention head was already trained very well</li>
</ul>
<h2>Ragnar - TensorFlow - RoBERTa-Large</h2>
<ul>
<li>CV 0.462 LB 0.459</li>
<li>first two heads - cls token head, last hidden layer head </li>
<li>third head - SVR head</li>
<li>two stage training with and without frozen backbone</li>
</ul>
<h2>Ertugrul - PyTorch - Electra-Large, Funnel-Large, RoBERTa-Large</h2>
<ul>
<li>CV 0.458 LB 0.454</li>
<li>Huggingface </li>
<li>Sentence Transformers</li>
<li>Bayesian Ridge head</li>
<li>dropout = 0 (explained <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/260729" target="_blank">here</a>)</li>
</ul>
<p>The main idea here was to fine-tune as many strong transformer models as possible using Huggingface Transformers, then get their embeddings using the Sentence Transformers package. First we inspected many transformer models with 5-fold cross validation, so we could see how they learn and generalize, then took these fine-tuned models into Sentence Transformers to create sentence embeddings. Here we got last-layer embeddings plus CLS tokens, then concatenated them into one 2D array with 2048 features per instance (this can change, but it is similar most of the time for big models). Then we built and trained a Bayesian Ridge head which takes these embeddings as input and outputs the target score as a float. Actually it's not that different from the model shared here by <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a>. We believe these give decently generalized results, but to get more accurate results there's the second part.</p>
<p>In the second part the main idea was combining embedding-based predictions with neural-network-based predictions. To test this idea, we first combined the Bayesian Ridge predictions with a public model. This gave us a great increase in LB score (~0.01), from 0.466 to 0.454, so we planned to combine these with the other team members' models. It worked pretty well with our own models too, so at late stages we concentrated on finding the optimal combination ratios using OOF scores.</p>
<h2>Ivan - PyTorch - Microsoft DeBERTa-Large, DeBERTa-Large-MNLI</h2>
<ul>
<li>CV 0.464 LB 0.458</li>
<li>first head - self attention head</li>
<li>second head - Bayesian Ridge head</li>
<li>dropout = 0 (explained <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/260729" target="_blank">here</a>)</li>
</ul>
<h2>Public Notebook - PyTorch - RoBERTa-Base</h2>
<p>Thank you <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> We used your RoBERTa-Base public notebook in our final ensemble</p>
<ul>
<li>CV 0.477 LB 0.467</li>
</ul>
<h2>Chris - RAPIDS cuml SVR and Post Process</h2>
<ul>
<li>Boost CV +0.002 LB +0.002</li>
<li>We learned that during inference, if we extracted the final hidden layer activations (sentence embeddings) from our transformer models and trained a RAPIDS support vector regression model and then ensemble this model with the original model head we could boost all our models’ CV and LB. The TensorFlow models boosted LB +0.002 and most PyTorch models boosted LB +0.001.</li>
<li>We also learned that the test data has a mean target that is 0.02 lower than the train target. Therefore if we subtract 0.02 from our final predictions this boosted LB +0.001</li>
</ul>
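<p>A minimal sketch of the SVR-on-embeddings trick from the list above. It assumes the sentence embeddings and targets have already been extracted and saved; the file names, hyperparameters, and 50/50 blend weight are illustrative, not the team's exact settings.</p>
<pre><code>import numpy as np
from cuml.svm import SVR  # RAPIDS GPU SVR; sklearn.svm.SVR works as a CPU drop-in

# embeddings: (n_samples, hidden_size) final-hidden-layer features from the transformer
embeddings = np.load("train_embeddings.npy")
targets = np.load("train_targets.npy")
test_embeddings = np.load("test_embeddings.npy")

svr = SVR(C=10.0, kernel="rbf")
svr.fit(embeddings.astype(np.float32), targets.astype(np.float32))
svr_preds = svr.predict(test_embeddings.astype(np.float32))

# Blend with the transformer head's own test predictions (weight chosen by CV).
head_preds = np.load("head_preds.npy")
final_preds = 0.5 * svr_preds + 0.5 * head_preds
</code></pre>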
<h2>Food For Thought</h2>
<p>In the last few days we began exploring additional ideas. With more time we believe some could have boosted our CV and LB scores. One example is an excerpt compare model described <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/257446" target="_blank">here</a>. Another is pseudo labeling the test data during submission and then training another RoBERTa-Base during submission. Another is NLP augmentations. Another is using external data.</p>
<h2>Congratulations Top Teams - Thank you!</h2>
<p>Congratulations to all top teams. Thank you for all your sharing during and after the competition. Thanks everyone for reading our solution writeup!</p>
<p>Each model has multiple heads. Each head makes test predictions. Then we ensemble all the test predictions.</p> |
CommonLit Readability Prize | 16th Solution Ensemble + Pseudo-labeling | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. Ensemble inference or single model training?</p>
<p>Thanks to the organizers for an interesting competition, and to <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> and <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> for the baseline</p>
<h1>Model</h1>
<p>My solution is based on an ensemble of several architectures: roberta-large, deberta_large and electra_large. The others performed worse.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>roberta-large-mnli</td>
<td>455-458</td>
<td>465-470</td>
</tr>
<tr>
<td>roberta-large</td>
<td>459</td>
<td>462</td>
</tr>
<tr>
<td>deberta_large</td>
<td>464-466</td>
<td>457-459</td>
</tr>
<tr>
<td>electra_large</td>
<td>466-467</td>
<td>470-474</td>
</tr>
<tr>
<td>ensemble</td>
<td>446-450</td>
<td>449-450</td>
</tr>
</tbody>
</table>
<h1>Train</h1>
<p>The training went as follows:</p>
<ol>
<li>Training the roberta-large model.</li>
<li>I predict targets on Wikipedia texts (20,000 lines; with more, the score drops).</li>
<li>I train different models (RoBERTa, DeBERTa, ELECTRA):<br>
3.1. first on the pseudo-labeled Wikipedia data,<br>
3.2. then complete fine-tuning on the training sample.</li>
<li>Predict on the test set.</li>
</ol>
<h1>Details</h1>
<p>Using differential learning rates for BERT layers</p>
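<p>For illustration, differential (layer-wise) learning rates can be set up with per-group rates in the optimizer; the three-block grouping and the rates below are assumptions, not the exact values used here.</p>
<pre><code>from torch.optim import AdamW
from transformers import AutoModel

model = AutoModel.from_pretrained("roberta-large")

# Lower layers get a smaller learning rate, upper layers a larger one (head omitted for brevity).
groups = [
    {"params": model.embeddings.parameters(), "lr": 5e-6},
    {"params": model.encoder.layer[:12].parameters(), "lr": 1e-5},
    {"params": model.encoder.layer[12:].parameters(), "lr": 2e-5},
]
optimizer = AdamW(groups, weight_decay=0.01)
</code></pre>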
<p>I tried different approaches for extracting and pooling the hidden states, but they all worked about the same:</p>
<pre><code> last_layer_hidden_states = roberta_output.hidden_states[-1]
</code></pre>
<p>or</p>
<pre><code> last_layer_hidden_states = roberta_output.hidden_states[-2]
</code></pre>
<p>or</p>
<pre><code> layer_hidden_states1 = roberta_output.hidden_states[-1]
layer_hidden_states2 = roberta_output.hidden_states[-2]
last_layer_hidden_states = torch.cat((layer_hidden_states1, layer_hidden_states2), 2)
</code></pre>
<p>And also</p>
<pre><code> weights = self.attention(last_layer_hidden_states)
y = torch.sum(weights * last_layer_hidden_states, dim=1)
</code></pre>
<p>or</p>
<pre><code> y = torch.mean(last_layer_hidden_states, 1)
</code></pre>
<p>etc.</p>
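<p>Putting the fragments above together, a self-contained version of the attention-pooling variant might look like the following; the hidden size and head width are assumptions.</p>
<pre><code>import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Softmax-weighted sum over the token dimension of the last hidden state."""
    def __init__(self, hidden_size=1024):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(hidden_size, 512),
            nn.Tanh(),
            nn.Linear(512, 1),
            nn.Softmax(dim=1),
        )
        self.regressor = nn.Linear(hidden_size, 1)

    def forward(self, last_layer_hidden_states):
        # last_layer_hidden_states: (batch, seq_len, hidden_size)
        weights = self.attention(last_layer_hidden_states)        # (batch, seq_len, 1)
        y = torch.sum(weights * last_layer_hidden_states, dim=1)  # (batch, hidden_size)
        return self.regressor(y)                                  # (batch, 1)
</code></pre>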
<p>Feature engineering and re-initializing layers did not work either.</p>
<p><a href="https://www.kaggle.com/vlomme/commonlit-deberta-large" target="_blank">https://www.kaggle.com/vlomme/commonlit-deberta-large</a></p> |
CommonLit Readability Prize | 18th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to Kaggle and Commonlit for providing this opportunity. </p>
<p>Also thanks to <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> and <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> for sharing excellent kernels. </p>
<p>We had 24 large models and 3 base models and utilized the full 3 hours of runtime. We stacked using an SGD regressor and TF-IDF features. The models in our stack are described below:</p>
<p><strong>Cross Validation</strong> - All models were trained with 5 folds created using the Kincaid readability feature. All our models used 3 seeds (42, 77, 2021). Even with 3 seeds we noticed an LB difference of ~0.005, but 3 seeds were the max we could do. Nothing specific for each fold - all folds were trained with the same head/parameters.</p>
<p><strong>Roberta Large</strong> (3 Roberta large with 3 seeds each - total 9 models) - All Roberta models used a backbone further pretrained on the competition data, Kids Frontiers data and <a href="https://github.com/LGDoor/Dump-of-Simple-English-Wiki" target="_blank">Simple wiki</a>. </p>
<ul>
<li>Attention head - along with TF-IDF features with TabNet - differential learning rates </li>
<li>weighted layer pooling - multisample dropouts, No differential learning rates </li>
</ul>
<p><strong>Deberta Large</strong> (3 Deberta large with 3 seeds each - total 9 models) <br>
All Deberta models were trained for 25 epochs with validation at the end. No pretraining. </p>
<ul>
<li>Deberta Large with pseudo labeling on <a href="https://cs.pomona.edu/~dkauchak/simplification/" target="_blank">Text Simplification dataset</a> </li>
<li>Deberta large with pooled output, attention head and multisample dropouts along with TabNet for TF-IDF features. <br>
For the multi-sample dropout architecture, the model outputs 5 values; their mean corresponds to our target and their standard deviation to the standard error, so both quantities are used for training (a minimal sketch follows this list).</li>
</ul>
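<p>A minimal sketch of a multi-sample dropout head of that kind; the number of samples, dropout rate, and hidden size are assumptions.</p>
<pre><code>import torch
import torch.nn as nn

class MultiSampleDropoutHead(nn.Module):
    def __init__(self, hidden_size=1024, n_samples=5, p=0.3):
        super().__init__()
        self.dropouts = nn.ModuleList([nn.Dropout(p) for _ in range(n_samples)])
        self.regressor = nn.Linear(hidden_size, 1)

    def forward(self, pooled):  # pooled: (batch, hidden_size)
        # One prediction per dropout mask: (batch, n_samples)
        outs = torch.cat([self.regressor(d(pooled)) for d in self.dropouts], dim=1)
        # The mean is trained against the target, the std against the standard error.
        return outs.mean(dim=1), outs.std(dim=1)
</code></pre>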
<p><strong>Electra Large</strong> - Attention head - along with TF-IDF features with TabNet</p>
<p><strong>Ernie</strong> - Attention head - along with TF-IDF features with TabNet. </p>
<p><strong>Roberta Base</strong> - (Pretraining on competition data) This model was almost the same as the <a href="https://www.kaggle.com/andretugan/pre-trained-roberta-solution-in-pytorch" target="_blank">public kernel</a>; we added predictions from LGBM and Ridge to this model.</p>
<p>Our best model was Roberta Large with public LB score 0.461 (CV: 0.473) and everything else was between 0.461 and 0.473. TF-IDF features gave us a boost of ~0.002 in every model. </p>
<p>Things that didn’t work:<br>
- Pretraining with other datasets for Roberta Large (we tried pretraining with many, many other datasets)<br>
- Pseudo labels with Roberta Large (it worked on private LB but was not in our stack)<br>
- Pretraining for Ernie<br>
- Many, many different architectures and combinations of architectures.</p>
<p>Disappointed that we missed gold zone by one rank, it was a pleasure collaborating with the team <a href="https://www.kaggle.com/kpriyanshu256" target="_blank">@kpriyanshu256</a> <a href="https://www.kaggle.com/aman1391" target="_blank">@aman1391</a> <a href="https://www.kaggle.com/gyanendradas" target="_blank">@gyanendradas</a> and Adarash S - Thank you. </p>
<p>Likewise, Thank you !</p>
<p>We put TF-IDF in stacking also and that gave us ~0.001 improvement. </p>
<p>We first created TF-IDF features (along with other readability features). Then instead of using a linear layer we used <a href="https://github.com/dreamquark-ai/tabnet" target="_blank">Tabnet</a>. <br>
This was in our model's <code>__init__(...)</code>.</p>
<pre><code>self.tabnet = TabNet(**{
    'n_d': 4,                          # width of the decision prediction layer
    'n_a': 4,                          # width of the attention embedding for each step
    'n_steps': 2,                      # number of sequential decision steps
    'input_dim': feature_len,          # number of TF-IDF / readability features
    'output_dim': feature_hidden_size  # size of the representation passed to the head
    }
)
</code></pre> |
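<p>For context, the TF-IDF feature block that feeds such a tabular head could be built roughly as below; the vectorizer settings and the SVD reduction are assumptions made only to keep the feature width small, not the team's exact pipeline.</p>
<pre><code>import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

texts = pd.read_csv("train.csv")["excerpt"].tolist()  # competition excerpts (path is hypothetical)

vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=20000)
tfidf = vectorizer.fit_transform(texts)

svd = TruncatedSVD(n_components=64, random_state=42)
tfidf_features = svd.fit_transform(tfidf)  # (n_samples, 64) dense block for the tabular head
</code></pre>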
CommonLit Readability Prize | 1st place solution - external data, teacher/student and sentence transformers | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the competition organizers for this great competition. It was really interesting and I learned a lot. This was my first Kaggle competition and I enjoyed it very much.</p>
<p>Thanks also to the community. You were friendly, shared interesting ideas in discussions and notebooks and I was able to learn so much from you.</p>
<p>Special thanks to the following users for providing helpful bits and pieces that I used in my submission:</p>
<ul>
<li><p>Thanks <a href="https://www.kaggle.com/teeyee314" target="_blank">@teeyee314</a> for <a href="https://www.kaggle.com/teeyee314/readability-url-scrape" target="_blank">https://www.kaggle.com/teeyee314/readability-url-scrape</a> . I used this data and it saved me a lot of time and effort because it was so well prepared.</p></li>
<li><p>Thanks to <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a> <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> I used this notebook for pseudo-labeling <a href="https://www.kaggle.com/andretugan/commonlit-two-models" target="_blank">https://www.kaggle.com/andretugan/commonlit-two-models</a> and it greatly improved my score.</p></li>
</ul>
<p><strong>TLDR Solution</strong><br>
I made a large collection of external data. I used sentence bert (Reimers and Gurevych 2019 - <a href="https://github.com/UKPLab/sentence-transformers" target="_blank">https://github.com/UKPLab/sentence-transformers</a>) to select for each excerpt in the train set the 5 most similar snippets in the external data collection. I then used a roberta-base (later on I used a roberta-base/-large ensemble) model trained on the original training data to label the selected data. I then filtered the data on standard error of each sample to keep roughly the same distribution as in the train set.</p>
<p>Then, I trained some different models on the pseudo-labeled data for 1-4 epochs. After training on the pseudo-labeled data, I trained each model on the original training set. Models used in my final submission were albert-xxlarge, deberta-large, roberta-large and electra-large. I used ridge regression to ensemble these models.</p>
<p><strong>Detailed solution</strong></p>
<p><strong>Background</strong></p>
<p>This is my first Kaggle competition and so far, my day job did not involve any practical work in data science or machine learning. I am working with data scientists in the field of NLP on a daily basis but in my role as Head of Product I do not code. When thinking about this competition, I knew from the start that I wouldn't be able to compete on sophisticated model design or extensive hyperparameter tuning. I decided to make smart use of external data instead.</p>
<p>I had read some papers which seemed useful in this competition and I decided to implement an approach similar to these papers.</p>
<p>The excellent work by Nils Reimers and the UKPLab was certainly a foundation in my solution. Their work on sentence embeddings allowed me to select data that was relevant to this competition. You can read up on sentence transformers in:</p>
<p><a href="https://arxiv.org/abs/1908.10084" target="_blank">Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks</a></p>
<p>One of their papers that inspired my solution uses unlabeled external data and pseudo-labeling to improve sentence embedding performance. They use a small labeled dataset to train a cross encoder, retrieve pairs of unlabeled external data using BM-25 or Sentence BERT and label these pairs using the cross encoder trained on the gold dataset. They then proceed to train a Sentence BERT using the gold dataset and the pseudo-labeled data. See: <a href="https://arxiv.org/abs/2010.08240" target="_blank">Nandan Thakur, Nils Reimers, Johannes Daxenberger, Iryna Gurevych. 2020. Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks</a></p>
<p>Another paper that was very relevant to this competition is a paper published by Facebook AI and Stanford, where the authors use self-training to improve model performance on a wide range of NLU tasks. Here, a large external sentence bank is searched with sentence embeddings and the retrieved data is labeled by a model which was finetuned on the target data. They then use the pseudo-labeled data to train another model. See: <a href="https://arxiv.org/abs/2010.02194" target="_blank">Jingfei Du, Edouard Grave, Beliz Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Ves Stoyanov, Alexis Conneau. 2020. Self-training Improves Pre-training for Natural Language Understanding</a></p>
<p>I also tried to transfer the results from Google's noisy student paper to NLP. In the original paper, the authors apply their strategy to image classification, but I wanted to see if it could also be applicable in this competition. They train a model on a set of labeled images and then use this model to label unlabeled images. They then use this data to train an equally-sized or larger student model. Contrary to the solutions above, they do not stop after one round of pseudo-labeling. Instead, they use the resulting student model as the new teacher and label more data using this model. They also introduce noise in the form of data augmentation and various dropout techniques for the student. In total, they find that 3 rounds of teacher/student labeling yield the best results. I tried to run multiple rounds with my approach and I also tried to introduce noise in the form of backtranslation and word replacements (predicting MASK tokens), I also added dropout for my student. However, while multiple rounds of pseudo-labeling continuously improved my CV, my LB score got worse. I think I was overfitting to my evaluation folds because I was evaluating so often. See: <a href="https://arxiv.org/abs/1911.04252" target="_blank">Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le. 2019. Self-training with Noisy Student improves ImageNet classification.</a></p>
<p><strong>Training process</strong></p>
<p><em>Basics:</em></p>
<ul>
<li>I used 6-fold crossvalidation or bootstrapping for the models used in my final submission</li>
<li>models trained were albert-xxlarge, deberta-large, roberta-large, electra-large and roberta-base</li>
</ul>
<p><em>External data selection:</em></p>
<ul>
<li>I compiled a corpus of texts that seemed relevant to this competition (simplewiki, wikipedia, bookcorpus, …)</li>
<li>I made text snippets from external data that had roughly the same length as the data in the train set</li>
<li>for each excerpt in the train set, I used this model <a href="https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2" target="_blank">https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2</a> to generate sentence embeddings and retrieve the five text snippets which had the highest cosine similarity to the original excerpt (a minimal sketch follows this list)</li>
</ul>
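<p>A minimal sketch of that retrieval step using the named paraphrase-MiniLM-L6-v2 model; the corpus list is a placeholder for the external snippet bank.</p>
<pre><code>from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")

train_excerpts = ["excerpt from the train set ..."]
corpus_snippets = ["external snippet 1 ...", "external snippet 2 ..."]  # simplewiki, bookcorpus, ...

corpus_emb = model.encode(corpus_snippets, convert_to_tensor=True)
query_emb = model.encode(train_excerpts, convert_to_tensor=True)

# Top-5 most similar external snippets for each train excerpt (cosine similarity).
hits = util.semantic_search(query_emb, corpus_emb, top_k=5)
selected = [[corpus_snippets[h["corpus_id"]] for h in row] for row in hits]
</code></pre>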
<p><em>Pseudo-labeling:</em></p>
<ul>
<li>I trained a roberta base model on the train set and used my best model to label the external data that I retrieved in the first step</li>
<li>later on, I used the models from this notebook <a href="https://www.kaggle.com/andretugan/commonlit-two-models" target="_blank">https://www.kaggle.com/andretugan/commonlit-two-models</a> to do the pseudo-labeling (good improvements in CV and LB)</li>
<li>I used the standard error of each original excerpt to filter the pseudo-labeled external samples. Each external sample whose pseudo-label score deviated from the original excerpt's target by more than its standard error was removed from the external data selection (see the sketch after this list)</li>
</ul>
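<p>A sketch of that standard-error filter; the column names and toy values are assumptions.</p>
<pre><code>import pandas as pd

# Each retrieved snippet carries its pseudo-label plus the target and standard_error
# of the train excerpt it was retrieved for (hypothetical frame).
external = pd.DataFrame({
    "pseudo_label": [-0.9, 0.4],
    "anchor_target": [-1.0, -1.0],
    "anchor_standard_error": [0.45, 0.45],
})

# Keep only snippets whose pseudo-label stays within the anchor excerpt's standard error.
keep = (external["pseudo_label"] - external["anchor_target"]).abs() <= external["anchor_standard_error"]
filtered = external[keep]
</code></pre>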
<p><em>Training:</em></p>
<ul>
<li>first, I trained a single model just on the pseudo-labeled data</li>
<li>I used low learning rates (7e-6 to 1e-5)</li>
<li>depending on model size, I evaluated every 10 - 600 steps</li>
<li>I evaluated on the whole train set</li>
<li>the best model was saved</li>
</ul>
<p>Then:</p>
<ul>
<li>I used the model from the previous step and trained 6 models on 6 folds of the original train set</li>
<li>low learning rates</li>
<li>evaluating every 10 steps</li>
<li>I also trained a single albert-xxlarge on all of the training data without evaluation for 4 epochs (albert xxlarge was very stable for me). This single model got a public LB score of 0.459 and I believe it was very important in my winning submission</li>
<li>I also trained some models using bootstrap sampling instead of crossvalidation</li>
</ul>
<p><em>Ensembling:</em></p>
<ul>
<li>I got the out of fold predictions for each model that I trained</li>
<li>I then made a new 6-fold splits of the oof-samples</li>
<li>I used the new split to train 6 ridge regression models</li>
<li>the final submission used 2 ridge regression ensembles of different models, a bootstrapped model and the single albert-xxlarge model</li>
<li>ensembles and other models were aggregated using a weighted average (the weights for each ensemble were chosen by feel and public LB score)</li>
</ul>
<p><strong>Bonus</strong></p>
<p>I also trained a completely different model using sentence transformers. For this model, I generated pairs of excerpts from the original data and the pseudo-labeled data. I then used the difference between the scores of each pair to generate labels for the difficulty distance between these pairs. I then trained a sentence embedding model using these pairs. For inference, I embedded both the train set and the test set. Then, I used the sentence embedding model to find for each test sample the 20 most similar (actually most similar in difficulty) samples from the train set. I then used the score of the selected train samples and the cosine similarity as features for another regressor (Ridge or BayesianRidge). My best model based on this sentence embedding approach scored 0.447 on public and private LB (sentence embeddings + other models). Using only sentence embeddings I got 0.456 on public LB and 0.463 on private LB.</p>
<p><strong>EDIT</strong><br>
Here is the link to the code that I used for training: <a href="https://github.com/mathislucka/kaggle_clrp_1st_place_solution" target="_blank">https://github.com/mathislucka/kaggle_clrp_1st_place_solution</a></p>
<p>Thanks <a href="https://www.kaggle.com/abcbum" target="_blank">@abcbum</a> ! Also congrats to your 13th place. We did indeed apply the same reasoning :D </p>
<p>Yes, it's stacking. Say I have trained 2 6-fold models: albert and deberta. This gives me a total of 12 models. For each model I predict the fold that was not seen during training, giving me 2 model predictions for each fold. I then merge all out of fold predictions and make a new 6-fold split. Then, I use these folds to train a RidgeRegression model. This results in 6 RidgeRegression models each using albert and deberta predictions as an input. I then averaged the 6 ridge models for submissions (simple average)</p>
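<p>A minimal sketch of that stacking step; the saved arrays of out-of-fold and test predictions are placeholders.</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

oof = np.load("oof_predictions.npy")          # (n_samples, n_models) out-of-fold predictions
y = np.load("targets.npy")                    # true targets
test_preds = np.load("test_predictions.npy")  # (n_test, n_models) per-model test predictions

kf = KFold(n_splits=6, shuffle=True, random_state=42)
blend = np.zeros(len(test_preds))
for train_idx, _ in kf.split(oof):
    ridge = Ridge(alpha=1.0)
    ridge.fit(oof[train_idx], y[train_idx])
    blend += ridge.predict(test_preds) / kf.get_n_splits()
</code></pre>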
<p>Thanks. I tried dropout but it didn't work very well for me.</p>
<p>No, I did not remove them.</p>
<p>Well :D Actually I did ensemble quite a few models. In total, I count 38 transformer models in my final submission.</p>
<p>(1) I didn't get a good "difficulty sentence transformer" up until close to the competition deadline. There was simply no time to do the data selection again using the difficulty model. My main motivation for choosing the paraphrase model was that I wanted data that was similar to the training data (similar in the sense of semantic similarity) because I expected that the test data would also be similar to the training data. It didn't really matter if the retrieved sentence had a different difficulty because I used a separate model to predict the difficulty of all retrieved sentences and then filtered these predictions on standard error of the original sample to keep roughly the same distribution. My score actually got worse quite a bit when I didn't do the filtering step. I also tried different top k values for the retrieval and 5 to 10 additional data snippets per train sample seemed to be the sweet spot. When selecting only data that was a bit further away from my training samples (cosine similarity wise) the score would also get worse.</p>
<p>(2) I actually submitted a model which included the sentence transformer model but it scored 0.447 both on the private and public leaderboard. I decided to also choose one model without the sentence transformer model because I did not want to take the risk of submitting 2 very similar models. Fun fact: my best performing model (which I didn't select) even scored 0.445 on the private test set but it was also not using the sentence transformer model. Also the sentence transformer model took quite a bit of time because each fold needed to encode both the train and the test set. With one submission I even hit the timeout. I think the model that timed out would have performed even better because it was combining the best of both worlds.</p> |
CommonLit Readability Prize | 20th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all I would like to thank the <strong>CommonLit</strong> Team and <strong>Kaggle</strong> for putting together an interesting challenge that allowed for the exploration of state-of-the-art NLP approaches for this challenge. The small size of the training data and the variance in the labeling of the target by reviewers, made it an interesting challenge. </p>
<h1>Approach at a high level:</h1>
<p>Our team <strong>Force Awakens</strong> merged 1 month before the competition deadline and we found out that we had taken 2 different approaches to the challenge which worked quite well in our favor. </p>
<ul>
<li>Regression Approach - I will outline the regression approach which is what a lot of the public kernels are using as well. </li>
<li>Pairs Model Approach - Discussed <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/257360" target="_blank">here</a> This was pretty unique and something I haven't seen discussed in the forums. </li>
</ul>
<p>The regression approach by itself scored <code>0.454</code> on public LB and <code>0.457</code> on private LB. Combined with the pairs model approach which also scored <code>0.454</code> on public, <code>0.454</code> on private, our overall best score was <code>0.450</code> on public LB and <code>0.452</code> on private. </p>
<h1>Regression Approach:</h1>
<p>We trained many different architectures including base/large/x-large. Similar to what many people observed, we found that the large models worked best for this dataset and target distribution. We tried a lot (and I mean a lot!) of architectures including but not limited to <em>Roberta/Electra/Funnel/Deberta/StructBERT/BERT/XLNET/BART/T5/ALBERT/Luke/DistilBERT</em> etc. <br>
The models that performed best on the regression task were:<br>
- <strong>Roberta-large</strong> - Public <code>LB 0.464</code> <br>
- <strong>Funnel-large</strong> - Public <code>LB 0.465</code><br>
- <strong>Deberta-large</strong>- Public <code>LB 0.466</code> <br>
- <strong>Electra-large</strong> - Public <code>LB 0.468</code> <br>
- <strong>Roberta-base</strong> - Public <code>LB 0.467</code> </p>
<p><strong>Training approach</strong>:</p>
<ul>
<li>This is where we spent a lot of time trying out different learning rate schedulers, loss functions, different embeddings of BERT models and custom heads on top of the embedding layers. </li>
<li>What worked best for us for the regression approach was using the CLS token and then using a custom head with 2 linear layers and GELU activation. </li>
</ul>
<pre><code># Custom regression head on top of the CLS embedding:
# dropout -> Linear -> GELU -> dropout -> Linear
# (args.use_dropout and args.fc_size come from the training config)
self.whole_head = nn.Sequential(OrderedDict([
    ('dropout0', nn.Dropout(args.use_dropout)),
    ('l1', nn.Linear(args.fc_size, 256)),
    ('act1', nn.GELU()),
    ('dropout1', nn.Dropout(args.use_dropout)),
    ('l2', nn.Linear(256, 1))
]))
</code></pre>
<ul>
<li>Training each model was a challenge in itself because there was no silver bullet of params that worked well for all architectures. On top of that, the scores across different folds varied quite a bit so we took an approach where we could specify the fold/seed in the training script and re-trained poor performing folds again with different params like changing the LR, eval steps, use of MLM, dropout. </li>
<li>Pre-training the models on MLM task on competition data seemed to help but we never used it for all folds. Using the approach in the previous bullet point, we used MLM pretrained model on a few folds and that worked best on LB vs using it for all folds (possibly preventing overfitting)</li>
<li>Things that worked well:<br>
- Using pre-trained MLM model on some folds<br>
- Using a small learning rate 1e-5/2e-5 with cosine scheduler, 5 epochs and running eval intervals in each epoch often. <br>
- Using differential learning rates for BERT layers</li>
<li>Things that did not work:<br>
- Data augmentation<br>
- Adding additional readability features either to last NN layer in BERT or using meta-learning like SVR/GBM based models. <br>
- Reinitializing of last N BERT layers - Local CV's looked promising but this never resulted in better LB at least for us. <br>
- Multisample dropout<br>
- Some Architectures like XLNET, ALBERT, T5, never converged. <br>
- Different BERT embeddings like BERT pooler or combination of mean/max pooler, last N hidden layers etc. </li>
<li>Our final regression model was a simple blend of the 5 architectures listed above. We tried meta-learners like SVR/Ridge/GBM on top but they didn’t seem to help much. </li>
</ul>
<p>On a personal note, I learned a lot about fine-tuning BERT architectures with various innovative strategies and that is something that will definitely help in the future (Kaggle/Work) </p>
<p>Special thanks to all my team mates <a href="https://www.kaggle.com/eduardopeynetti" target="_blank">@eduardopeynetti</a> <a href="https://www.kaggle.com/ryches" target="_blank">@ryches</a> <a href="https://www.kaggle.com/maxjeblick" target="_blank">@maxjeblick</a> and <a href="https://www.kaggle.com/jesucristo" target="_blank">@jesucristo</a> for coming up with great approaches and ideas that helped improve our models. </p>
<p>Also a shout out to <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> for sharing some great ideas which I am sure a lot of people have seen and benefited from. </p>
<p>Congrats on 19th place !</p>
<p>Ah ok. The external dataset refers to augmenting the training dataset with more training samples using external data. Almost all notebook competitions allow for loading a trained model to infer.</p>
<p><a href="https://www.kaggle.com/darrentitor" target="_blank">@darrentitor</a> pairs approach is here: <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/257360" target="_blank">https://www.kaggle.com/c/commonlitreadabilityprize/discussion/257360</a> </p>
<p><a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> pairs approach is here: <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/257360" target="_blank">https://www.kaggle.com/c/commonlitreadabilityprize/discussion/257360</a> </p>
<p>Yes, writeup coming soon. Will post the link here. </p>
<p>Yes, writeup coming soon. Will post the link here.</p>
<p>You dont really need to train the models in the Kaggle kernels. All solutions use the trained models and load the weights to do inference. We didnt find the 3 hr limit restrictive for this competition at all. You could get a lot of models running inference in that time w/ GPU. </p>
<p>Thank <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> for all the amazing kernels in this competition. It helped me as well. </p>
<blockquote>
<p>which GPU you have used?</p>
</blockquote>
<p>2 x RTX 2080</p>
<blockquote>
<p>can you share some intuition for custom head?</p>
</blockquote>
<p>We tried different heads including single FC layer but this arch seemed to work better along with GELU activation. Dropout was added to prevent overfitting considering the small dataset size</p>
<blockquote>
<p>Will you share scripts of pretaining and fine-tuning?</p>
</blockquote>
<p>Code is here <a href="https://github.com/mv-lab/commonlit-readability-nlp" target="_blank">https://github.com/mv-lab/commonlit-readability-nlp</a></p> |
CommonLit Readability Prize | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank Kaggle and host for hosting such an interesting competition! </p>
<h2>Summary</h2>
<p>For my final subs, I chose the one with the best Public score and the one with the best CV.</p>
<table>
<thead>
<tr>
<th></th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>best Public</td>
<td>0.4503</td>
<td>0.444</td>
<td>0.446</td>
</tr>
<tr>
<td>best CV</td>
<td>0.4449</td>
<td>0.447</td>
<td>0.447</td>
</tr>
</tbody>
</table>
<p>The following describes the best-Public submission. (For the best-CV submission, the weights were determined by Nelder-Mead, the restriction that the weights sum to 1 was removed, and the models were partially changed from best Public.)<br>
I ensembled 19 models and applied a post process. I adjusted the model weights by looking at the LB and CV, and included negative as well as positive weights. In the post process, I multiplied predictions by different coefficients depending on the predicted value.</p>
<h2>Cross Validatioin</h2>
<p>I used the following.<br>
<a href="https://www.kaggle.com/abhishek/step-1-create-folds" target="_blank">https://www.kaggle.com/abhishek/step-1-create-folds</a></p>
<h2>Model & Weight</h2>
<p>I trained the models with dropout set to 0, except for models 1 and 2. I did MLM pretraining only for model 3. Weights were calculated by Nelder-Mead and then tuned for a higher LB.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>CV</th>
<th>Public</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. roberta-base -> svr</td>
<td>0.500</td>
<td>0.476</td>
<td>0.020</td>
</tr>
<tr>
<td>2. roberta-base -> ridge</td>
<td>0.500</td>
<td></td>
<td>0.020</td>
</tr>
<tr>
<td>3. roberta-base</td>
<td>0.485</td>
<td>0.476</td>
<td>0.040</td>
</tr>
<tr>
<td>4. roberta-large</td>
<td>0.483</td>
<td>0.463</td>
<td>0.088</td>
</tr>
<tr>
<td>5. muppet-roberta-large</td>
<td>0.480</td>
<td>0.466</td>
<td>0.022</td>
</tr>
<tr>
<td>6. bart-large</td>
<td>0.476</td>
<td>0.469</td>
<td>0.090</td>
</tr>
<tr>
<td>7. electra-large</td>
<td>0.483</td>
<td>0.470</td>
<td>0.050</td>
</tr>
<tr>
<td>8. funnel-large-base</td>
<td>0.479</td>
<td>0.471</td>
<td>0.050</td>
</tr>
<tr>
<td>9. deberta-large</td>
<td>0.481</td>
<td>0.460</td>
<td>0.230</td>
</tr>
<tr>
<td>10. deberta-v2-xlarge</td>
<td>0.486</td>
<td>0.466</td>
<td>0.050</td>
</tr>
<tr>
<td>11. mpnet-base</td>
<td>0.482</td>
<td>0.470</td>
<td>0.130</td>
</tr>
<tr>
<td>12. deberta-v2-xxlarge</td>
<td>0.482</td>
<td>0.465</td>
<td>0.140</td>
</tr>
<tr>
<td>13. funnel-large</td>
<td>0.475</td>
<td>0.464</td>
<td>0.110</td>
</tr>
<tr>
<td>14. gpt2-medium</td>
<td>0.498</td>
<td>0.478</td>
<td>0.170</td>
</tr>
<tr>
<td>15. albert-v2-xxlarge</td>
<td>0.486</td>
<td>0.467</td>
<td>0.120</td>
</tr>
<tr>
<td>16. electra-base</td>
<td>0.493</td>
<td></td>
<td>-0.170</td>
</tr>
<tr>
<td>17. bert-base-uncased</td>
<td>0.507</td>
<td></td>
<td>-0.140</td>
</tr>
<tr>
<td>18. t5-large</td>
<td>0.504</td>
<td></td>
<td>-0.110</td>
</tr>
<tr>
<td>19. distilbart-cnn-12-6</td>
<td>0.489</td>
<td>0.479</td>
<td>0.090</td>
</tr>
</tbody>
</table>
<h2>Post process</h2>
<p>The post process improved the score by about 0.001 to 0.002. The coefficients were calculated by Nelder-Mead and then tuned by looking at Public. The thresholds were determined by looking at CV and Public.</p>
<ul>
<li>pred >= 0.3 -> pred * 1.07</li>
<li>0.3 > pred >= 0 -> pred * 1.2</li>
<li>0 > pred >= -0.7 -> pred * 0.974</li>
<li>-0.7 > pred >= -0.9 -> pred * 1.01</li>
<li>-0.9 > pred > =-2 -> pred * 1.021</li>
<li>-2 > pred -> pred * 1.027</li>
</ul>
<h2>GPU</h2>
<ul>
<li>A100(GCP)</li>
<li>V100(GCP)</li>
<li>P100(kaggle)</li>
</ul>
<h2>Edit</h2>
<p>Inference notebook(final sub)<br>
<a href="https://www.kaggle.com/takoihiraokazu/lb-ensemble-add-electra-base-bert-base2-t5-diba2?scriptVersionId=69659799" target="_blank">https://www.kaggle.com/takoihiraokazu/lb-ensemble-add-electra-base-bert-base2-t5-diba2?scriptVersionId=69659799</a><br>
Inference notebook(Partially modified version)<br>
<a href="https://www.kaggle.com/takoihiraokazu/final-sub1" target="_blank">https://www.kaggle.com/takoihiraokazu/final-sub1</a><br>
code<br>
<a href="https://github.com/TakoiHirokazu/kaggle_commonLit_readability_prize" target="_blank">https://github.com/TakoiHirokazu/kaggle_commonLit_readability_prize</a></p>
<p>Thanks! I expect you to be GM in next competition!</p>
<p>Yes, roughly 15 subs were used to tune the post process.</p>
<p>Thank you very much. The cost of GPU instances was about $1500.</p>
<p>Thanks! I used drop out in bert part.</p>
<p>Thanks!<br>
First, I calculated the coefficients with the following. The threshold value was determined by looking at the distribution of the target of the train and LB while changing the threshold value.<br>
Because of the slight difference in the distribution of the target between train and test, it was better to increase or decrease the coefficients to get a better LB than to use the results as they are, so I adjusted them while watching the LB.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import mean_squared_error

# y_test: raw out-of-fold ensemble predictions, y: ground-truth targets (author's variable names)
def f(x):
    # Apply one multiplicative coefficient per prediction range and score the result with RMSE.
    pred1 = y_test.copy()
    pred1[y_test >= 0.3] = y_test[y_test >= 0.3] * x[0]
    pred1[(y_test < 0.3) & (y_test >= 0)] = y_test[(y_test < 0.3) & (y_test >= 0)] * x[1]
    pred1[(y_test < 0) & (y_test >= -0.7)] = y_test[(y_test < 0) & (y_test >= -0.7)] * x[2]
    pred1[(y_test < -0.7) & (y_test >= -0.9)] = y_test[(y_test < -0.7) & (y_test >= -0.9)] * x[3]
    pred1[(y_test < -0.9) & (y_test >= -2.0)] = y_test[(y_test < -0.9) & (y_test >= -2.0)] * x[4]
    pred1[(y_test < -2.0)] = y_test[(y_test < -2.0)] * x[5]
    return np.sqrt(mean_squared_error(y, pred1))

# Fit one coefficient per range, starting from all ones.
result = minimize(f, [1, 1, 1, 1, 1, 1], method="Nelder-Mead")
result
</code></pre>
<p>I did not use any external dataset.</p>
<p>Thank you!<br>
First, I set the initial weights as shown below, and then adjusted them toward a higher LB by increasing the weights of models with high LB scores.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import mean_squared_error

# ex15, ex64, ... hold the out-of-fold predictions of the individual models; y is the target.
def f(x):
    pred1 = (ex15 + ex237) / 2 * x[0] + ex64 * x[1] + \
        (ex72 * 0.8 + ex384 * 0.2) * x[2] + ex107 * x[3] + \
        (ex190 + ex272) / 2 * x[4] + ex182 * x[5] + ex194 * x[6] + \
        ex292 * x[7] + ex216 * x[8] + ex407 * x[9] + ex429 * x[10] + \
        ex450 * x[11] + ex465 * x[12] + ex434 * x[13] + \
        ex497 * x[14] + ex507 * x[15]
    return np.sqrt(mean_squared_error(y, pred1))

# Start from equal weights and let Nelder-Mead minimize the OOF RMSE.
weight_init = [1 / 16 for _ in range(16)]
result = minimize(f, weight_init, method="Nelder-Mead")
result
</code></pre>
<p>Thanks! I mainly used the CLS embedding and mean pooling. The private score of deberta-large is 0.463 and the private score of deberta-xxlarge is 0.466, so deberta-large was better.</p>
<p>Thanks!<br>
I did not know during the competition whether LB tuning would work here. Since the only scores I could refer to were LB and CV, I aimed one submission at a higher LB, even though the data was small.</p>
<p>I'm sorry for the late reply. The score of the model with dropout was low as-is, so I extracted its embeddings and used them as features to train the SVR and Ridge models.</p>
<p>I'm sorry, but I don't know of any material for learning about post processing. In this case, I decided on the post process by comparing the target and predicted values, and by looking at what the weights summed to when optimizing the ensemble weights. The total optimized weight was about 1.02.</p>
<p><a href="https://www.kaggle.com/potatoc5" target="_blank">@potatoc5</a> I think so too. This is the first time I've used negative weights in an ensemble.</p>
<p>Thanks for the question. I used the model with negative weights because it improved my score. In fact, it improved by 0.003 for public and 0.004 for private. </p>
<p>In Best public, weight is manually adjusted by looking at LB, so there is not much correlation between CV and weight. As for the negative weight, I was referring to the value when I did the weight optimization.</p> |
CommonLit Readability Prize | 3rd place solution (0.447 private - 3 models simple average) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>update</h3>
<p>Sept 10, 2021 - added link to submission notebook<br>
Aug 16, 2021 - added code link</p>
<h3><strong>tl;dr at the bottom</strong></h3>
<p>First, I would like to thank Kaggle and CommonLit for hosting this competition and the community here at Kaggle for making this platform a wonderful place to learn. This is my second competition at Kaggle, with the first one being the Coleridge Initiative; I have to say the past three months or so has been a tremendous journey. If anyone told me three months ago that I would end up in 3rd place in a Kaggle competition, I would not believe it. None of this would be possible without Kaggle and its incredible community.</p>
<p>Special thanks to:<br>
<a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> for sharing so many great notebooks with detailed and excellent explanations. Those notebooks guided my first few weeks in this competition. <br>
<a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> for sharing a simple yet elegant solution. Some parts of my final model are from his approach.</p>
<h2>Analysis of the objective</h2>
<p>Before I go over my approach, I'd like to share my point of view on the competition objective. Please correct me if what I say is wrong or doesn't make sense to you.<br>
This competition is characterized by small training data and noisy, non-intuitive to understand target values. The target is generated by the Bradley-Terry algorithm from a relatively small number of pair-wise comparison data. From my intuition, Bradley-Terry acts like an Elo rating system where match results are translated to raw numbers representing the "strength" of each player. (again, this could be wrong and is merely my point of view) In this competition, the texts are players, and the "strength" is "easiness to read," where an easier-to-read text would score higher. <br>
Training data + public/private data nets to a little less than 5k samples, and each sample is compared with others 46.47 times on average. That means each text was compared to only ~1% of the entire dataset on average. Imagine a chess player's Elo where the player has only played against 1% of the whole chess player population, and the opponents are selected at random. I believe most of the noise is due to the small number of comparisons. <br>
The objective of the competition is to predict where an arbitrary text would land on that leaderboard generated from 5k texts.</p>
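<p>To make the Bradley-Terry intuition concrete, here is a toy fit of latent "easiness" scores from made-up pairwise win counts; this only illustrates the algorithm and is not the organizers' actual labeling pipeline.</p>
<pre><code>import torch
import torch.nn.functional as F

# wins[i, j] = number of raters who judged text i easier to read than text j (toy data)
wins = torch.tensor([[0., 3., 1.],
                     [1., 0., 4.],
                     [2., 0., 0.]])

scores = torch.zeros(3, requires_grad=True)  # latent Bradley-Terry "easiness" scores
opt = torch.optim.Adam([scores], lr=0.1)

for _ in range(500):
    opt.zero_grad()
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)  # s_i - s_j for every pair
    # P(i easier than j) = sigmoid(s_i - s_j); maximize the likelihood of the observed wins.
    nll = -(wins * F.logsigmoid(diff)).sum()
    nll.backward()
    opt.step()

print(scores.detach())  # higher = easier to read, defined only up to an additive constant
</code></pre>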
<h2>Path to public LB 0.442:</h2>
<p>I've played with Roberta base mostly at the beginning of the competition, and with a gru+multihead attention on the last hidden layer + learning rate decay, I was able to get <strong>~0.473 on both public and private LB.</strong><br>
I then switched to large and with the same setup I was at around <strong>0.465-0.467 on both public and private LB.</strong><br>
After which I learned from <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a>'s notebook and incorporated 3 group lr decay, simple attention head, and validation step policy into my solution<br>
Roberta-large + simple attention head + learning rate decay: <strong>0.462 on both LB</strong></p>
<p>0.462 + weighted average of all layers and multisample dropout (from google quest #1 solution) + separate lr for head: **0.459 on public **<br>
0.459 + a 3 stage lr schedule: <strong>0.454 on public</strong><br>
<img src="https://i.ibb.co/fNjh6pc/LR.png" alt="custom schedule"><br>
0.454 + early stop at the beginning of 3rd epoch: <strong>0.453 on public</strong><br>
However, <strong>the private did not agree with the improvement to 0.453</strong>, which I didn't know at the time. <strong>Private LB stayed at ~0.461</strong><br>
An ensemble of 2 Roberta large and a Roberta base trained this way, however, got <strong>0.454 on public</strong> and <strong>0.456 on private</strong>. </p>
<p>I then used two of my 0.453 public models to label extra data consisting of the Simple Wiki dump, the Children's Book Test, and OneStopEnglishCorpus.<br>
Using all folds to label led to an unrealistically low CV (due to leakage) and no improvement on LB, so I used the 5-fold pseudo-labeling described in the google quest #1 solution. </p>
<p>I first train the model on pseudo labeled data and then finish up with low lr 3 epoch and an lr scheduler with a decaying rate (1,0.5,0.25) for each epoch, no warmup.</p>
<p>2 cycles improved my public to 0.448: <strong>0.453 -> 0.450 -> 0.448</strong><br>
<strong>yet private still stayed at ~0.461</strong></p>
<p>I then used the average of two 0.448 models to label the data again and used that data to train a Deberta-large; Deberta ends up with <strong>0.448 on public and 0.449 on private</strong></p>
<p>ensemble of a Roberta-large and a Deberta-large: <strong>0.442 on public, 0.448 on private</strong></p>
<p>Another model trained with a further cycle labeled with 0.442 public was added to the blend and got <strong>0.442/0.447 on public/private</strong>. This is the best scoring out of the two I selected.</p>
<p>Best private was <strong>0.446</strong>, which I did not select because it didn't improve public LB. (I turned my eyes away from cv long ago) <br>
The difference between this one and the 0.447 private submission is the third model. The new model was trained with training data mixed into the pseudo-labeled extra data; the train data, however, was relabeled and mixed with the original train data with a weight of 0.5. I tested 0.8 and 1, but 0.5 performed best.</p>
<h2>what I wish I had time to do:</h2>
<ul>
<li>ensembling techniques, In my solution I only used simple average as I didn't have time to explore ensembling techniques</li>
<li>better pseudo-label pipeline. I only tried a few variants and I believe some are always better than others </li>
<li>pairwise comparison model which simply outputs which text is easier to read to generate raw data for Bradley-Terry algorithm, convert it to target score using Bradley-Terry algorithm, and then maybe recalibrate to competition training data distribution. (This might be a wild-goose chase, but seems like a very promising one to me)</li>
</ul>
<h2>hardware:</h2>
<p>1080ti at the beginning of the competition <br>
bought a 3090 when switching to large models</p>
<h2>tl;dr:</h2>
<p>what worked:</p>
<ul>
<li>3 group decaying lr for encoder</li>
<li>simple attention head</li>
<li>separate lr for head</li>
<li>multi-sample dropout</li>
<li>weighted average of hidden layers</li>
<li>early stop</li>
<li>fold specific pseudo-label</li>
<li>Deberta-large</li>
</ul>
<p>code: <a href="https://github.com/Danielhuxc/CLRP-solution" target="_blank">https://github.com/Danielhuxc/CLRP-solution</a><br>
notebook: <a href="https://www.kaggle.com/potatoc5/final-sub" target="_blank">https://www.kaggle.com/potatoc5/final-sub</a></p>
<p>PS: I am looking for ML/DL job (preferably within Canada), please DM me if you would like to get in touch.</p>
<p>Thank you for your kind words! I've learned so much in the past 3 months and a solo gold with a prize is more than I can dream of. I consider myself very lucky already. what a wild journey. </p>
<p>There might be some paper out there about lr schedule but I did not follow one. I tuned the lr scheduler by experimenting with different settings and kept the one that's learning the fastest (faster decrease of val loss) with the least amount of data. Stability of val loss is also important.</p>
<p>Thank you for your question. In the part you quoted I relabeled external data only, however, I did make an attempt to relabel train data in the last day and it did boost my private to 0.446. I've updated the details about 0.446 private in the original post. I did not have the time to pursue this path but I think with a proper pipeline single model performance could be further boosted.</p>
<p>Before/after fold specific pseudo label:</p>
<table>
<thead>
<tr>
<th></th>
<th>CV</th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>Before</td>
<td>449</td>
<td>457</td>
<td>456</td>
</tr>
<tr>
<td>After</td>
<td>479</td>
<td>458</td>
<td>450</td>
</tr>
</tbody>
</table>
<p>Looking back at it, it did not help in generalization to private LB. Instead, it might have learned more biases from each fold than generalizable knowledge. If you want to use this method, I'd suggest you do your own testing. And if you ever do, I'd appreciate it if you share the result with me.</p>
<p>As for weights for hidden layers, I set them as parameters and have the model learn them during training. I learned this from <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129840" target="_blank">google quest 1st solution</a> as well.</p>
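<p>A minimal sketch of learning the layer weights as parameters, as described above; the number of layers and the softmax normalization are assumptions about the exact formulation.</p>
<pre><code>import torch
import torch.nn as nn

class WeightedLayerPool(nn.Module):
    def __init__(self, num_layers=25):  # e.g. embeddings + 24 encoder layers of a large model
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))  # learned during training

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (batch, seq_len, hidden) tensors from output_hidden_states=True
        stacked = torch.stack(list(all_hidden_states), dim=0)   # (num_layers, batch, seq, hidden)
        w = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        return (w * stacked).sum(dim=0)                         # weighted average of the layers
</code></pre>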
<p>Thank you! For 3 group decaying lr I tuned a single parameter decay_rate/layer on base model. When used on large models I use the square root of it to maintain the relative lr ratio between groups. 3 stage lr schedule is made by eyeballing validation loss, while it should be different for every data-model combination I find my schedule work reasonably well for all models I used. The only graph in the original post is a full cycle of my lr scheduler. </p> |
CommonLit Readability Prize | 40th Place solution: How we improved our rank from 1100th three days ago | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all <a href="https://www.kaggle.com/shahules" target="_blank">@shahules</a> and I would like to start by heartily thanking the following people: <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a>, <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a>, <a href="https://www.kaggle.com/jcesquiveld" target="_blank">@jcesquiveld</a>, <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a>, <a href="https://www.kaggle.com/jacob34" target="_blank">@jacob34</a>, <a href="https://www.kaggle.com/kurupical" target="_blank">@kurupical</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a>. Without their great work and discussion it would not have been possible for us to have made it here. Additionally I would like to personally thank <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> for his wonderful book <strong>Meta learning</strong>. This book inspired me with a lot of ideas in general as well as I used them in this competition. I highly recommend this book to all fellow data scientists. This is my first serious Kaggle competition. I did one for a couple of days five years ago but didn't pursue it. But this time I survived :) Once again a very big hearty thank you to all of you. I personally had never worked with transformers two months ago. Thanks to this competition I am happy I discovered it. We were at 1100th place three days ago, so we are really happy that we made it to 40th.</p>
<p>Getting inspired from <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> and <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a> I decided to develop everything publicly for the first two months of the competition. Later on <a href="https://www.kaggle.com/shahules" target="_blank">@shahules</a> and I met exactly one month ago and we formed a team. </p>
<p>We understood that the key to succeed in this competition were the following:</p>
<ol>
<li>Stabilizing the training of the transformer</li>
<li>Leveraging massive external data</li>
<li>A good ensemble of good models</li>
<li>Definitely not to overfit</li>
</ol>
<h3>Stabilizing the training of the transformer</h3>
<p>Of all these things the most difficult part was stabilizing the training. Our validation graphs looked like seismographs. Here is a snippet of the graph before and after stabilizing.<br>
<img src="https://i.ibb.co/x8q5NGy/image-3.png" alt=""></p>
<p>At least with the base model it was ok but with the large model it was just impossible. We also felt very uncomfortable with evaluating every n iterations. Therefore we spent about twenty days focusing on running experiments to stabilize the training. That's when we came across <a href="https://www.kaggle.com/jcesquiveld" target="_blank">@jcesquiveld</a>'s models trained using <strong>differential learning rate</strong>. The first time when we implemented it felt magical. The training was extremely stable, no more seismographs ;) This helped us in avoiding frequent evaluation and also let us comfortably train our models on the full dataset without validation. Yes the idea of training on full data instead of Kfold CV came from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. You can watch the wonderful video posted by <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> in his youtube channel where 4 grandmasters share their secrets of success</p>
<h3>Design of work</h3>
<p>We also realized a good ensemble was key to success and independence is the key to successful ensemble. Therefore we always designed and ran experiments independently and discussed results later on so we don't bias each other as well. In order to facilitate identical experimental conditions and also not to do double work we created a central repository of all our pipeline components adhering to good coding principles. </p>
<h3>Choosing the models and not overfitting</h3>
<p>As the CV and LB correlation was not great, we had two choices: we could either probe LB and understand more about the test data but we had almost no time/submissions left for that. Our other choice was choosing models not based on their CV scores but by looking at the validation graphs, evaluating them visually if the training and validation loss look stable, and then if the final model was close to the best model (with frequent evaluation) and so on. So this was more of the data scientist instinct that helped us. We never chose a model because the CV score was high. </p>
<h3>Model definition</h3>
<p>The attention head that was used in almost all the public notebooks we came across took the pad tokens into account, so we modified it to exclude the pad tokens from the attention calculation. </p>
<pre><code>import torch
import torch.nn as nn

class MaskAddedAttentionHead(nn.Module):
    def __init__(self, input_dim, head_hidden_dim):
        super(MaskAddedAttentionHead, self).__init__()
        self.W = nn.Linear(input_dim, head_hidden_dim)
        self.V = nn.Linear(head_hidden_dim, 1)

    def forward(self, x, attention_mask):
        # attention_mask is expected in additive form: 0 for real tokens and a large
        # negative value for padding, so pad positions get ~0 weight after the softmax.
        attention_scores = self.V(torch.tanh(self.W(x)))
        attention_scores = attention_scores + attention_mask
        attention_scores = torch.softmax(attention_scores, dim=1)
        attentive_x = attention_scores * x
        attentive_x = attentive_x.sum(axis=1)
        return attentive_x
</code></pre>
<p>Once we stabilized the training, replacing RMSE loss with MSE loss gave us a good boost. We didn't dig deeper into the reason but I guess it had something to do with gradient flow. </p>
<h3>Leveraging massive external data</h3>
<p>One of our star differences was the pretraining of our models with the external data. Thanks to <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a> we did not have to scrape Wikipedia. We converted the problem into a text ranking problem and trained the model with the ranking loss as the objective. We understood that Bradley Terry scores were essentially computed by comparing pairs of text excerpts therefore we have to do something similar. As many of the text excerpts come from Wikipedia and Simple Wikipedia we trained a ranker as follows: <img src="https://i.ibb.co/HNPN1vG/text-comparator.png" alt=""><br>
Our initial experiments with pretraining did not give us good results, so we dropped it. Only on the last day, after running out of ideas, did we remember abandoning this and think, why not try it again. The reason transfer learning did not work well for us in the past but worked well now was that our transformer training had become a stable test bed. </p>
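<p>A rough sketch of training such a comparator with a pairwise ranking objective; the model interface, batch keys, and margin are assumptions about how a ranker like this could be set up, not the team's exact code.</p>
<pre><code>import torch
import torch.nn as nn

ranking_loss = nn.MarginRankingLoss(margin=0.1)

def pairwise_step(model, batch):
    # model(...) is assumed to return one scalar readability score per excerpt.
    score_a = model(batch["input_ids_a"], batch["attention_mask_a"]).squeeze(-1)
    score_b = model(batch["input_ids_b"], batch["attention_mask_b"]).squeeze(-1)
    # label = +1 if excerpt A should score higher (easier) than B, else -1
    # (pairs with equal targets would need to be skipped).
    label = torch.sign(batch["target_a"] - batch["target_b"])
    return ranking_loss(score_a, score_b, label)
</code></pre>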
<p>Now when we look back the key to rapid progress in three days comes from the fact that we invested all our effort in preparing a very stable experimental set-up. </p>
<p>Thank you all for all your valuable discussion in the forums and notebooks. Without great work being made public, we couldn't have done this. I really cannot thank <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a> and <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> enough. This continues to inspire us to develop more and more publicly. Thank you.</p>
<p>Here is our code: <a href="https://www.kaggle.com/vigneshbaskaran/commonlit-nn-kit" target="_blank">https://www.kaggle.com/vigneshbaskaran/commonlit-nn-kit</a></p>
<p>Hi Gary,<br>
I am very sorry for the late response. I have released a small prototype of the comparator here: <a href="https://www.kaggle.com/vigneshbaskaran/commonlit-text-comparator-rmse-vs-ranking-loss" target="_blank">https://www.kaggle.com/vigneshbaskaran/commonlit-text-comparator-rmse-vs-ranking-loss</a></p>
<p>Essentially <strong>comparator</strong> is a model that takes in a pair of inputs and identifies which one is more difficult to read in comparison to the other. <strong>Readability ground truth</strong> is the target of the text available from the training data. Please go through the notebook and please don't hesitate to write to me if you have follow up questions. Thank you</p>
<p>Thank you Chris. We also experimented with dropout and observed similar effect so we also set all the dropouts to 0. We set both the hidden and the attention dropouts to 0. We didn't find this inmtuitive but nevertheless we couldn't investigate the reason. </p>
<pre><code>'roberta_hidden_dropout_prob': 0.0,
'roberta_attention_probs_dropout_prob': 0.0,
</code></pre>
<p>Thank you <a href="https://www.kaggle.com/davidedwards1" target="_blank">@davidedwards1</a> </p>
<p>Thanks <a href="https://www.kaggle.com/overwatcher" target="_blank">@overwatcher</a>. Yes its the same. I just noticed that my private score has improved by +0.003 just by training on full dataset. We know this is a great boost in this competition. So all credits goes to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> </p>
<p>Hi <a href="https://www.kaggle.com/spy1234567" target="_blank">@spy1234567</a><br>
We learnt about this technique from <a href="https://www.kaggle.com/jcesquiveld" target="_blank">@jcesquiveld</a>'s discussion post and we implemented it ourselves. You may find our code here: <a href="https://www.kaggle.com/vigneshbaskaran/commonlit-nn-kit" target="_blank">https://www.kaggle.com/vigneshbaskaran/commonlit-nn-kit</a></p>
<p>Under the function <code>get_optimizer_parameters</code> with <code>group_mode</code>=<code>be_wd</code> you may find our implementation.</p>
<p>It was the same working with you <a href="https://www.kaggle.com/shahules" target="_blank">@shahules</a>! I am extremely happy that I got a great friend through Kaggle! I learnt so much from you. Thank you Shahul! </p>
<p>Hi <a href="https://www.kaggle.com/shivu11" target="_blank">@shivu11</a> <br>
It was the most difficult part of this competition. I made it very clear to myself that it is OK if I don't make progress, but it's not OK to drop out. Here is a blog that inspired me to keep going: <a href="https://typesense.org/blog/the-unreasonable-effectiveness-of-just-showing-up-everyday/" target="_blank">https://typesense.org/blog/the-unreasonable-effectiveness-of-just-showing-up-everyday/</a>. Hope this helps you as well! Good luck with your future competitions!</p>
<p>Thank you very much <a href="https://www.kaggle.com/jcesquiveld" target="_blank">@jcesquiveld</a> </p> |
CommonLit Readability Prize | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and CommonLit for hosting this competition; I learnt a lot in my first NLP competition.<br>
And also great thanks to <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> (I learnt a lot about NLP from your notebooks), <a href="https://www.kaggle.com/leighplt" target="_blank">@leighplt</a> (I used your notebook as my initial baseline) and <a href="https://www.kaggle.com/konradb" target="_blank">@konradb</a> (I used features from your notebook).</p>
<p><strong>CV</strong></p>
<ul>
<li>I use k-fold without shuffle since I noticed that the training set is not shuffled and the excerpts seem to come in batches from different sources. I think the test set might also be composed of excerpts from other sources, and it is well known that the test set has more modern texts. So I think k-fold without shuffle might better measure the 'out of distribution' generalization ability.</li>
<li>According to my early-stage experiments, CV without shuffle usually gets a CV score around 0.01 higher than shuffled CV. As LB scores are an average of 5 models (I use 5 folds) while CV is given by the OOF predictions of 5 models, with only one model predicting each sample, I expect the CV score to be higher than the LB scores.</li>
<li>Although k-fold without shuffle seems better, I didn't systematically check whether it is superior to shuffled folds or to further grouping by bins.</li>
</ul>
<p><strong>Dropout</strong></p>
<ul>
<li>Model performance seems to have a gap between .train() and .eval() mode when applying dropout in this task. I think this might come from potential bugs that I can't identify, or from the variance difference between .train() and .eval() in some layers. The scaling factor in dropout in general doesn't guarantee the layer has the same variance, and 'readability' sounds like something related to 'variance'/'perplexity'/… Of course this might not be the real reason.</li>
<li>I was not able to solve this problem thoroughly. I use dropout in my models but keep .train() mode at inference; this gives me reasonable performance, superior to the zero-dropout setting in my case. Due to the randomness from dropout, I average the predictions from the same model but with different seeds at inference, which gives me a further boost (see the sketch after this list).</li>
</ul>
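<p>A minimal sketch of this trick (model and data-loader names are placeholders): keep the network in .train() mode at inference so dropout stays active, run several passes, and average the predictions.</p>
<pre><code>import torch

@torch.no_grad()
def train_mode_predict(model, loader, n_passes=5, device="cuda"):
    """Average several stochastic forward passes with dropout kept active."""
    model.to(device).train()                  # .train() keeps dropout ON at inference time
    all_passes = []
    for seed in range(n_passes):
        torch.manual_seed(seed)               # a different dropout mask per pass
        preds = []
        for input_ids, attention_mask in loader:
            out = model(input_ids.to(device), attention_mask.to(device))
            preds.append(out.squeeze(-1).cpu())
        all_passes.append(torch.cat(preds))
    return torch.stack(all_passes).mean(dim=0)
</code></pre>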
<p><strong>Other hyper-parameters that worked for me</strong></p>
<ul>
<li>Adamw 5 epochs</li>
<li>cosine learning rate scheduler</li>
<li>change the learning rate exponentially across layers (see the sketch after this list)</li>
<li>re-initialize some layers</li>
<li>gradient accumulation for larger batch size (I only tried batch size 32 with base models and 16 with large models due to notebook time limit)</li>
<li>linear head followed by mean pooling on the last hidden state (sometimes I use two linear layers for the output, but mainly to stay consistent with one of my earlier experiments; the performance of one vs. two linear layers is not much different)</li>
</ul>
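<p>A hedged sketch of the exponential layer-wise learning-rate idea mentioned above (attribute names assume a roberta-style model and the decay factor is illustrative, not the author's exact values):</p>
<pre><code>import torch

def layerwise_lr_groups(model, base_lr=2e-5, decay=0.95, head_lr=1e-4):
    """Parameter groups whose learning rate decays exponentially from the top layer down."""
    layers = list(model.roberta.encoder.layer)            # assumes a roberta-style backbone
    groups = [{"params": layer.parameters(), "lr": base_lr * decay ** depth}
              for depth, layer in enumerate(reversed(layers))]
    groups.append({"params": model.roberta.embeddings.parameters(),
                   "lr": base_lr * decay ** len(layers)})
    groups.append({"params": model.regressor.parameters(), "lr": head_lr})  # regression head (name assumed)
    return groups

# optimizer = torch.optim.AdamW(layerwise_lr_groups(model), weight_decay=0.01)
</code></pre>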
<p>I included one model with ITPT in the final solution, but only for diversity; it didn't improve either CV or LB for me.</p>
<p><strong>Single models for final inference</strong><br>
(score without dropout average)</p>
<ul>
<li>F2 roberta large CV 0.4839 public LB 0.470 </li>
<li>F17 roberta base CV 0.4758 public LB 0.468 </li>
<li>F23 roberta large CV 0.4792 public LB 0.465 </li>
<li>F32 deberta large CV 0.4749 public LB 0.469 </li>
<li>F41 roberta large mnli CV 0.4796 public LB 0.475 </li>
<li>F50 funnel large CV 0.480 public LB 0.465 </li>
<li>F51 electra large CV 0.479 public LB 0.469 </li>
</ul>
<p>Models with the same name have different parameters; you can check the respective notebooks for details.</p>
<p>I drop models one by one based on CV to select models; as a result, although my best single model was deberta-large with CV 0.4757, public LB 0.462, private LB 0.462, it is not included in my final inference notebook.</p>
<p><strong>Ensemble</strong><br>
I use the same CV split across all models, and ensemble performance is evaluated on the OOF predictions.</p>
<ul>
<li>Ridge regression CV 0.4553 public LB 0.451 private LB 0.449</li>
<li>Although manual features didn't help when I integrated them with the transformer predictions via a linear model, they helped when I used LightGBM. Averaging the predictions of LightGBM and ridge gives my final score: CV 0.4532, public LB 0.454, private LB 0.448. As the public score is worse, this might not be a real improvement (a simplified sketch of the blend follows after this list).</li>
</ul>
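<p>A simplified sketch of that ridge + LightGBM blend on OOF predictions (variable names and hyperparameters are illustrative, not the author's exact setup):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge
from lightgbm import LGBMRegressor

# oof_preds:    (n_samples, n_models) transformer OOF predictions
# manual_feats: (n_samples, n_feats) hand-crafted readability features
ridge = Ridge(alpha=1.0).fit(oof_preds, y)

lgbm = LGBMRegressor(n_estimators=500, learning_rate=0.02)
lgbm.fit(np.hstack([oof_preds, manual_feats]), y)

test_ridge = ridge.predict(test_preds)
test_lgbm = lgbm.predict(np.hstack([test_preds, test_manual_feats]))
final_pred = 0.5 * (test_ridge + test_lgbm)
</code></pre>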
<p><strong>Code</strong></p>
<ul>
<li>ITPT<br>
<a href="https://www.kaggle.com/w5833946/f1-itpt" target="_blank">https://www.kaggle.com/w5833946/f1-itpt</a></li>
<li>Train<br>
<a href="https://www.kaggle.com/w5833946/f2-train" target="_blank">https://www.kaggle.com/w5833946/f2-train</a><br>
<a href="https://www.kaggle.com/w5833946/f17-train/data" target="_blank">https://www.kaggle.com/w5833946/f17-train/data</a><br>
<a href="https://www.kaggle.com/w5833946/f23-train" target="_blank">https://www.kaggle.com/w5833946/f23-train</a><br>
<a href="https://www.kaggle.com/w5833946/f32-train" target="_blank">https://www.kaggle.com/w5833946/f32-train</a><br>
<a href="https://www.kaggle.com/w5833946/f41-train" target="_blank">https://www.kaggle.com/w5833946/f41-train</a><br>
<a href="https://www.kaggle.com/w5833946/f50-train" target="_blank">https://www.kaggle.com/w5833946/f50-train</a><br>
<a href="https://www.kaggle.com/w5833946/f51-train" target="_blank">https://www.kaggle.com/w5833946/f51-train</a></li>
<li>Inference on training set<br>
<a href="https://www.kaggle.com/w5833946/f-inference-1" target="_blank">https://www.kaggle.com/w5833946/f-inference-1</a><br>
<a href="https://www.kaggle.com/w5833946/f-inference-1-1/data" target="_blank">https://www.kaggle.com/w5833946/f-inference-1-1/data</a></li>
<li>Inference<br>
<a href="https://www.kaggle.com/w5833946/f-inference-3-1-v4" target="_blank">https://www.kaggle.com/w5833946/f-inference-3-1-v4</a></li>
</ul>
<p>You need to remove some private models that are loaded (but not used during inference) to re-run some of the notebooks.</p>
<p>I wasn't aware of that. I will fix it later; you can check my profile and find the code that I posted.</p>
CommonLit Readability Prize | 69th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comment!<br>
Roberta Large LSTM*2 head model is as shown below:</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoConfig, AutoModel

# ROBERTA_PATH points to the pretrained roberta-large checkpoint (defined elsewhere in the notebook)
class LitModel(nn.Module):
def __init__(self):
super().__init__()
config = AutoConfig.from_pretrained(ROBERTA_PATH)
config.update({"output_hidden_states": True,
"hidden_dropout_prob": 0.0,
"layer_norm_eps": 1e-7})
self.roberta = AutoModel.from_pretrained(ROBERTA_PATH, config=config)
self.attention = nn.Sequential(
nn.Linear(1024, 512),
nn.Tanh(),
nn.Linear(512, 1),
nn.Softmax(dim=1)
)
self.lstm1 = nn.Sequential(
nn.LSTM(1024, 768, batch_first=True, bidirectional=False)
)
self.lstm2 = nn.Sequential(
nn.LSTM(768, 512, batch_first=True, bidirectional=False)
)
self.regressor = nn.Sequential(
nn.Linear(512, 1)
)
def forward(self, input_ids, attention_mask):
roberta_output = self.roberta(input_ids=input_ids,
attention_mask=attention_mask)
last_layer_hidden_states = roberta_output.hidden_states[-1] # (8, 256, 1024)
weights = self.attention(last_layer_hidden_states) # (8, 256, 1)
context_vector = weights * last_layer_hidden_states # (8, 256, 1024)
# LSTM
lstm_output, (h, c) = self.lstm1(context_vector) # (8, 256, 768)
lstm_output, (h, c) = self.lstm2(lstm_output) # (8, 256, 512)
context_vector = torch.sum(lstm_output, dim=1) # (8, 512)
return self.regressor(context_vector) # [8, 1]
</code></pre>
<p>First of all, thanks to the Kaggle hosts and all participants.<br>
I got a silver medal for the first time!</p>
<h2>Models</h2>
<p>I used one notebook as a baseline. Thank you very much, <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a>.<br>
<a href="https://www.kaggle.com/andretugan/pre-trained-roberta-solution-in-pytorch/output" target="_blank">https://www.kaggle.com/andretugan/pre-trained-roberta-solution-in-pytorch/output</a></p>
<ul>
<li>4models ensemble<ul>
<li>Roberta base</li>
<li>Roberta Large LSTM*1 head</li>
<li>Roberta Large LSTM*2 head</li>
<li>Roberta Large MeanPooling head</li></ul></li>
</ul>
<h2>CV Strategy</h2>
<p>5 Fold Stratified CV<br>
<a href="https://www.kaggle.com/abhishek/step-1-create-folds" target="_blank">https://www.kaggle.com/abhishek/step-1-create-folds</a></p>
<h2>Submissions</h2>
<p>Public 0.458, Private 0.456</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Ratio</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Roberta base</td>
<td>0.25</td>
<td>0.477</td>
<td>0.467</td>
<td>0.470</td>
</tr>
<tr>
<td>Roberta Large LSTM*1 head</td>
<td>0.25</td>
<td>0.480</td>
<td>0.465</td>
<td>0.464</td>
</tr>
<tr>
<td>Roberta Large LSTM*2 head</td>
<td>0.25</td>
<td>0.480</td>
<td>0.464</td>
<td>0.462</td>
</tr>
<tr>
<td>Roberta Large MeanPooling head</td>
<td>0.25</td>
<td>0.481</td>
<td>0.468</td>
<td>0.465</td>
</tr>
</tbody>
</table>
<p>See you at the next competition!</p> |
CommonLit Readability Prize | 6th place solution (Gaussian process regression (GPR)) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, you are right. In total, we have 1024x5x9 embeddings on the test set, and we took the average of the 5-folds GPR prediction as one final prediction.</p>
<p>E.g. Assuming the test size is 7</p>
<pre><code>import numpy as np

embeddings1 = np.concatenate((embeddings1_roberta_large, ..., embeddings1_deberta_large), axis=1) # (7, 9216)
...
embeddings5 = np.concatenate((embeddings5_roberta_large, ..., embeddings5_deberta_large), axis=1) # (7, 9216)
embeddings = [embeddings1, ..., embeddings5] # e.g. [(7, 9216), ..., (7, 9216)]
gpr_preds_concat = [gpr_inference(x) for x in embeddings] # list of gpr predictions array each of size (7,) e.g. [(7,), ..., (7,)]
final_gpr_preds = np.mean(gpr_preds_concat, axis=0) # (7,)
</code></pre>
<p>I believe that, other than GPR, you can use the concatenated embeddings with other models as well, e.g. SVR. However, I am not sure how well that would perform, since we only tested GPR, as the training dataset is small and GPR seemed more logical for our approach.</p>
<p>Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and congratulations to you too! We learnt a lot from you as well!</p>
<p>Thank you for your comment :). Yes, we plan to share our code, currently, we are in the midst of tidying up. Will share our training/inference code soon!</p>
<p>Thanks <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> for your comment :). Yes, our transformer models use an AttentionHead with dropout and an additional linear layer, similar to the public notebook <a href="https://www.kaggle.com/maunish/clrp-pytorch-roberta-finetune" target="_blank">https://www.kaggle.com/maunish/clrp-pytorch-roberta-finetune</a>.</p>
<p>Thanks <a href="https://www.kaggle.com/xfffrank" target="_blank">@xfffrank</a> for your great contribution to the team! I've added the additional information to our post :)</p>
<h3>Acknowledgement</h3>
<p>First of all, we would like to thank Kaggle &amp; CommonLit for hosting this interesting competition; it has been an interesting journey for us! Also, congratulations to all the winning teams!</p>
<p>And special thanks to <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a>, <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a>, and <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> for their knowledgeable and resourceful notebooks. Since our team is fairly new to transformers, leveraging the resources from public notebooks definitely helps in our journey!</p>
<h3>Summary</h3>
<p>Our final submission consists of 9 transformer models, and our final solution is based on our 2nd-stage model Gaussian Process Regression (GPR) [1] which we use to correct overconfident transformer predictions due to the small training data. This can be summarized in two steps. </p>
<ol>
<li>Extract all the OOF embeddings from the last transformer layer (Attention-head) for all 9 models and concatenate all of them.</li>
<li>Train a GPR model using the concatenated embeddings. </li>
</ol>
<h3>Architecture for final solution</h3>
<p><img src="https://i.imgur.com/ANcnHNt.png"></p>
<h3>External data</h3>
<p>We scraped data from the “url_legal” information which is available in the training data and only used those with licenses of “CC BY” and “CC BY-SA”.</p>
<h4>Transformers model</h4>
<p>Like every other team, we first started out fine-tuning transformer models and focused on increasing the single-model LB score by training different transformers for diversity. Since our goal is to use GPR to improve the transformer predictions, we need robust transformer models that can generate robust embeddings.</p>
<p>Our final submission consists of the following 9 transformer models and all models are trained using 5-fold stratified CV with attention head.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>roberta-large</td>
<td>0.492</td>
<td>0.471</td>
<td>0.471</td>
<td>mlm on training set</td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>0.485</td>
<td>0.474</td>
<td>0.476</td>
<td>mlm on training set</td>
</tr>
<tr>
<td>xlnet-large-cased</td>
<td>0.494</td>
<td>0.475</td>
<td>0.476</td>
<td>-</td>
</tr>
<tr>
<td>deepset/roberta-large-squad2</td>
<td>0.488</td>
<td>0.464</td>
<td>0.467</td>
<td>-</td>
</tr>
<tr>
<td>deepset/roberta-large-squad2</td>
<td>0.484</td>
<td>0.466</td>
<td>0.464</td>
<td>mlm on train set and external data</td>
</tr>
<tr>
<td>allenai/longformer-large-4096-finetuned-triviaqa</td>
<td>0.489</td>
<td>0.467</td>
<td>0.47</td>
<td>-</td>
</tr>
<tr>
<td>valhalla/bart-large-finetuned-squadv1</td>
<td>0.471</td>
<td>0.462</td>
<td>0.466</td>
<td>mlm on train set and external data, remove dropout</td>
</tr>
<tr>
<td>microsoft/deberta-large-mnli</td>
<td>0.469</td>
<td>0.462</td>
<td>0.469</td>
<td>mlm on train set and external data, remove dropout</td>
</tr>
<tr>
<td>ahotrod/electra_large_discriminator_squad2_512</td>
<td>0.477</td>
<td>0.468</td>
<td>0.468</td>
<td>remove dropout</td>
</tr>
</tbody>
</table>
<p>The model selection is based on the individual LB score (preferably <= 0.475)</p>
<h3>Fine-tune</h3>
<p>The following are the fine-tuned strategy for the transformer models</p>
<ul>
<li>Epochs: 5</li>
<li>Batch Size: 8</li>
<li>Optimizer: AdamW with SWA (see the sketch after this list)</li>
<li>LR Scheduler: Linear learning rate scheduler with 6% warmup steps</li>
<li>Learning rate: 2e-5</li>
<li>Layer reinitialisation: last 3 transformer layers (reference: <a href="https://www.kaggle.com/rhtsingh/on-stability-of-few-sample-transformer-fine-tuning" target="_blank">https://www.kaggle.com/rhtsingh/on-stability-of-few-sample-transformer-fine-tuning</a>)</li>
<li>Attention head (reference: <a href="https://www.kaggle.com/maunish/clrp-pytorch-roberta-finetune" target="_blank">https://www.kaggle.com/maunish/clrp-pytorch-roberta-finetune</a>)</li>
<li>Evaluating for every n steps: 10</li>
</ul>
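<p>A rough sketch of the AdamW + SWA combination using torch.optim.swa_utils (the averaging start epoch and loss helper are assumptions, not the team's exact settings):</p>
<pre><code>import torch
from torch.optim.swa_utils import AveragedModel

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
swa_model = AveragedModel(model)           # keeps a running average of the weights

for epoch in range(5):
    for batch in train_loader:
        optimizer.zero_grad()
        loss = compute_loss(model, batch)  # placeholder for the usual forward pass + loss
        loss.backward()
        optimizer.step()
    if epoch >= 2:                         # start averaging after a few epochs (assumption)
        swa_model.update_parameters(model)

# evaluate / predict with swa_model, which holds the averaged weights
</code></pre>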
<h3>Gaussian Process Regression (GPR)</h3>
<p>We were aware of the limited training samples (~2.8k), hence we immediately thought of Gaussian process regression (GPR). Given the constraint of limited training samples, using Bayesian techniques such as GPR could potentially help us make better approximations/predictions, since Bayesian/probabilistic models are known to perform relatively well in small-data settings and also carry a strong notion of uncertainty. The uncertainty is important here as we know the distributions of the training and test data are different, and we do not want the transformer to make overconfident predictions; GPR alleviates overconfident predictions from transformers. <strong>A NN with one hidden layer of infinite width is basically a Gaussian process.</strong></p>
<p>The below figure shows the deberta-large model predictions for 100 randomly sampled points. The green curve is the deberta-large predictions, and the red curve is the GPR predictions. We can see that deberta-large will try to fit all the points as closely as possible (Red circle), however, GPR will avoid overconfident predictions and will regress to the mean of nearby samples when it is not confident, thus providing reliable uncertainty estimation.</p>
<p><img src="https://i.imgur.com/yMlaKpG.png"></p>
<h3>Gaussian process regression (GPR) as 2nd-stage model</h3>
<p>We use GPR to correct overconfident transformer predictions. Doing this allowed us to further improve the single-model score significantly, by training a GPR model on all the OOF embeddings extracted from the last transformer layer (attention head). </p>
<p>The following submission shows that using deberta-large trained with pseudo-labeling (PL) can achieve a single private score of 0.459, but with GPR it can achieve 0.453 alone!</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
<th>GPR Public</th>
<th>GPR private</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-large</td>
<td>0.462</td>
<td>0.465</td>
<td><b>0.459</b></td>
<td>0.457</td>
<td><b>0.453</b></td>
<td>train with pseudo labeling</td>
</tr>
</tbody>
</table>
<p>Unfortunately, we didn't include any model trained with pseudo-labeling, since many of our PL-trained models had worse single/ensemble LB scores compared to the models trained without pseudo-labeling. </p>
<p>This led us to believe the LB has a different data distribution than the training data, hence we used all the OOF embeddings for training and only validated the individual GPR model performance on the LB. </p>
<h3>Training GPR using all oof embeddings</h3>
<p>From the transformer models section above, the 9 models were selected in terms of their individual LB scores and their correlations, for diversity. We extracted the attention-head embeddings, which are of size 1024 for each model, hence 1024*9 features for 9 models. We concatenate all of them and train on all samples using a GPR model with initial RBF kernel hyperparameters of lengthscale=3, variance=1, noise=1, and run for 1000 steps of stochastic variational inference (SVI). Since we are doing inference with GPR, we are able to learn the GPR/kernel hyperparameters directly from the embeddings themselves with minimal fine-tuning.</p>
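<p>A condensed sketch of fitting such a GPR with Pyro (the library referenced in [1]); the tensor names are placeholders and the exact variational setup is an assumption rather than the team's code:</p>
<pre><code>import torch
import pyro
import pyro.contrib.gp as gp

X = torch.tensor(train_embeddings, dtype=torch.float)   # (n_samples, 1024 * 9) concatenated OOF embeddings
y = torch.tensor(train_targets, dtype=torch.float)      # (n_samples,)

kernel = gp.kernels.RBF(input_dim=X.shape[1],
                        lengthscale=torch.tensor(3.0),
                        variance=torch.tensor(1.0))
gpr = gp.models.GPRegression(X, y, kernel, noise=torch.tensor(1.0))

svi = pyro.infer.SVI(gpr.model, gpr.guide,
                     pyro.optim.Adam({"lr": 0.01}),
                     loss=pyro.infer.Trace_ELBO())
for step in range(1000):
    svi.step()

with torch.no_grad():
    mean, var = gpr(torch.tensor(test_embeddings, dtype=torch.float), full_cov=False)
</code></pre>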
<p>Our final submission with GPR model achieved public 0.448, private 0.448</p>
<h3>What worked</h3>
<p><b>Dropout removal</b><br>
The <code>remove dropout</code> note in the table above refers to removing dropout in the transformer architecture, instead of the dropout placed in the output head. We were inspired by this <a href="https://www.kaggle.com/andretugan/lightweight-roberta-solution-in-pytorch">kernel</a>. It is a very interesting idea to remove it as the default dropout of 0.1 is used in nearly every transformer model. Previously, we can see that the validation loss is very unstable, which is why a small evaluation interval is commonly used. But after removing dropout in the transformers, the val loss became much more steady, and the converged points were usually much better. The cv score improved by a large margin, but the public LB doesn't reflect the change accordingly. Later we found more and more people used this strategy, so we just adopted it in some of our models.</p>
<p><b>Pseudo labelling</b><br>
We scraped the data from the same URLs as in the training data and labelled them with an ensemble model which scored LB 0.450. We didn't pre-train on the pseudo data first and then finetune on the competition data. Instead, during each fold's training, we only added the pseudo data which have the same id as the training data in that fold, to avoid potential data leakage. With this method, the CV score improved a lot but the public LB didn't agree. Though we didn't find data leakage in the CV, we abandoned the models trained with pseudo labelling. It turned out that they work well on the private set, and we should've trusted the CV more. The public test set is only ~600 samples, so it doesn't reflect the data distribution of the private set well.</p>
<p><b>Pretrained models</b><br>
We found that some of the models further pretrained on question answering datasets are better for finetuning in this competition such as <code>squad</code> and <code>triviaqa</code>. So it's always better to try different pretrained models when using transformer models.</p>
<p><b>Gaussian process regression</b><br>
This is the most important ensembling/preprocessing/correction steps for our solution. All of our single transformer models improved significantly with gains from 0.002 - 0.01 after training a GPR model using the attention-head embedding.</p>
<h3>What didn't work</h3>
<p><b>Extreme data and post-processing</b><br>
We found that the data with the targets < -3 or > 1 are the ones that make the training unstable. There are only 131 of these samples in the training set. Their standard errors are also usually higher than the others. By removing these samples, the cv improved a lot. And we found that there are also some patterns in the targets of the extreme data that we can use for post-processing, only if we can detect these extreme samples. But we failed to build a model to achieve that.</p>
<p><b>Using different heads</b><br>
We tried training transformers using different heads, e.g. cls, mean pooling, pooler, concatenate last 4/8/12 layers, mean max pooling. But attention head works best for us, hence we stick with attention-head for all our models.</p>
<p><b>Different optimization strategies</b><br>
Initially, we also tried mixout regularization and multi-sample dropout; doing this stabilized training, but the individual LB scores were bad. In the end, the transformers without dropout worked better.</p>
<h3>Final words</h3>
<p>We should have trusted our CV for the models trained with pseudo-labeling. Our best submission has public 0.452, private 0.446 which is achieved by using only 6 models trained with pseudo-labeling. Unfortunately, we didn't select that for our final submission since the public LB is bad..</p>
<p>References:<br>
[1] <a href="https://pyro.ai/examples/gp.html" target="_blank">https://pyro.ai/examples/gp.html</a></p> |
CommonLit Readability Prize | 9th place solution -Ensemble of four models | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for your questions!</p>
<blockquote>
<p>1 During prediction, did you reverse the calculation of Sigmoid(target-Median(target)) using the target median of the training set to get the predicted target value?</p>
</blockquote>
<p>Yes, that's right. Thanks to BCEWithLogitsLoss, we don't need to add another sigmoid layer. All we need to do is add Median(target) during prediction.</p>
<blockquote>
<p>2 Why do you need -Median(target) instead of sigmoid(target)?</p>
</blockquote>
<p>Because we were concerned about the side effect of a biased sigmoid(target).<br>
Since target values are biased to the negative side by about -0.9, sigmoid(target) will also be biased below 0.5. This situation is close to class imbalance in ordinary classification problems. To avoid this side effect of the bias, we introduced -Median(target). However, we did not test whether the bias towards values below 0.5 really has any side effect.</p>
<blockquote>
<p>And why choosing median instead of mean?</p>
</blockquote>
<p>Because median is known to be more robust against outliers than mean.<br>
However, to be honest, we just wanted to get the center of the distribution and didn't have a strong reason. </p>
<p>Thanks a lot!<br>
We never would have thought of your great ideas.<br>
(<a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/257446" target="_blank">https://www.kaggle.com/c/commonlitreadabilityprize/discussion/257446</a>)</p>
<p>To begin with, we would like to thank Kaggle and CommonLit for hosting such an interesting competition! This is our first NLP competition, and many thanks to @rhtsingh, <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a> and <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> for sharing their wonderful kernels. Their notebooks were a great help in making this competition a great one for sure.</p>
<h2>Summary</h2>
<p>We chose the one with best Public LB and the one with the best CV as our final submissions. The following are our best submissions.</p>
<table>
<thead>
<tr>
<th></th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>best Public LB</td>
<td>0.450</td>
<td>0.451</td>
</tr>
<tr>
<td>best CV</td>
<td>0.451</td>
<td>0.448</td>
</tr>
</tbody>
</table>
<p>We ensembled 10 models including two 25-fold CV models.</p>
<h2>Cross validation strategy</h2>
<p>The dataset size is one of the most important factors for machine learning. In this competition, train.csv contained about 3000 samples. Generally speaking, this number is very small for training deep learning models. To exhaustively utilize this small dataset, we carefully determined the number of folds (K) in K-fold cross validation. Our hypothesis was that if we increased K, models would perform better thanks to the larger number of training samples (of course, there would be an upper limit on K because the number of validation samples becomes too small). We tried 4 candidates (K=5,10,15,25) to test this hypothesis. As we expected, we observed a small but stable improvement of the LB score with larger K. To be more specific, increasing K from 5 to 15 improved the public (and private) LB by 0.002. Increasing K from 15 to 25 did not change the displayed score but improved the score slightly (we confirmed that by sorting the submissions by their scores). We decided to employ 25 folds for some strong single models (deberta-large, roberta-large trained with cross-entropy loss), paying attention to inference time. As a side note, we used normal k-fold cross validation because stratified k-fold using a binned target did not seem to improve the CV-LB correlation much. </p>
<h2>Models</h2>
<p>Like most teams, we used pretrained models from huggingface implemented in pytorch. Through several experiments, we found that deberta-large performed significantly better. However, deberta-large required much longer training time than the other models, so we attempted to obtain the optimal parameters for deberta-large in a short time by using deberta-base. Several kernels and papers were very helpful in these experiments (e.g. <a href="https://arxiv.org/abs/2006.04884" target="_blank">https://arxiv.org/abs/2006.04884</a>). After changing many parameters, including the learning rate and activation layer, the best results were obtained by changing the loss function. Some models using cross-entropy loss have shown excellent results in previous NLP competitions (e.g. <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129978" target="_blank">https://www.kaggle.com/c/google-quest-challenge/discussion/129978</a>). So we trained with BCEWithLogitsLoss, using Sigmoid(target - Median(target)) as the prediction label; Median(target) was added back to the regressor output during prediction. This idea significantly contributed to both CV and LB scores, and also contributed significantly to the ensemble. </p>
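<p>A small sketch of that target transformation (dataframe and model names are placeholders): the label fed to BCEWithLogitsLoss is Sigmoid(target - Median(target)), so the BCE-optimal logit equals target - median, and the prediction is recovered by adding the median back to the raw output.</p>
<pre><code>import torch
import torch.nn as nn

targets = torch.tensor(train_df["target"].values, dtype=torch.float32)  # train_df is a placeholder dataframe
median = targets.median()
bce_labels = torch.sigmoid(targets - median)        # labels in (0, 1), centred at 0.5

criterion = nn.BCEWithLogitsLoss()
# training: the model emits one raw logit per excerpt
# loss = criterion(model(input_ids, attention_mask).squeeze(-1), bce_labels[batch_idx])

# inference: the BCE-optimal logit equals (target - median),
# so the readability prediction is the raw output shifted back by the median
# preds = model(test_ids, test_mask).squeeze(-1) + median
</code></pre>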
<h2>Weights</h2>
<table>
<thead>
<tr>
<th>Model</th>
<th>Public</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>RoBERTa-Large (25 fold cross entropy)</td>
<td>-</td>
<td>0.4</td>
</tr>
<tr>
<td>DeBERTa large (25fold)</td>
<td>0.458</td>
<td>0.25</td>
</tr>
<tr>
<td>DeBERTa large (15 fold cross entropy)</td>
<td>-</td>
<td>0.25</td>
</tr>
<tr>
<td>DeBERTa large 5epochs 5fold</td>
<td>0.460</td>
<td>0.1</td>
</tr>
</tbody>
</table>
<h2>WHAT did not work</h2>
<ul>
<li>large model<ul>
<li>deberta-xlarge</li></ul></li>
<li>other model<ul>
<li>ernie_large</li>
<li>xlnet-base-cased</li>
<li>albert-base-v1</li>
<li>distilbert-base-uncased</li>
<li>xlm-roberta-base</li>
<li>microsoft/deberta-base</li>
<li>nghuyong/ernie-2.0-en</li>
<li>nghuyong/ernie-2.0-large-en</li></ul></li>
<li>CV strategy<ul>
<li>stratified k-fold by target</li>
<li>stratified k-fold by std</li></ul></li>
<li>pseudo labeling</li>
<li>data augmentation</li>
<li>post processing</li>
<li>SWA</li>
</ul>
<h2>Computing environments</h2>
<ul>
<li>google colab pro *3 </li>
<li>AWS p3.8xlarge (for deberta-xlarge, not worked)</li>
<li>GCP v100*4 (for deberta-xlarge, not worked)</li>
</ul> |
CommonLit Readability Prize | Private 106th / Public 79th Solution [Top 3%] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, we would like to thank the competition organizers for hosting such a unique competition; we learned a lot from it. This was our first NLP Kaggle Competition and we enjoyed it very much. </p>
<p>Congratulations to all the winning teams!!</p>
<p>Thank you to the Kaggle community for sharing such innovative ideas through notebooks and discussions, where we were able to learn a lot!!</p>
<p>Special thanks to the following users/community for providing amazing ideas and code that we used in our solution:</p>
<ol>
<li>Thank you to <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> for providing so many resources to learn about transformers: how to pre-train them, how to fine-tune them, how to improve fine-tuning [both speed and accuracy]; theory as well as code through your amazing notebooks.</li>
<li>Thank you to <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a> [for providing insights on Attention head and providing a clean pipeline from Pre-training to fine-tuning to scoring], <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> [for providing a fine-tuning notebook that solved the problem of stability in CV while training], <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> [for providing a unique method of ensembling using forward OOF technique], <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> [for providing fine-tuning notebook using TF] <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> [for amazing tutorials on training/finetuning transformers using pytorch], <a href="https://www.kaggle.com/rajat95gupta" target="_blank">@rajat95gupta</a> and <a href="https://www.kaggle.com/jcesquiveld" target="_blank">@jcesquiveld</a> [for providing insights on how to find the optimal head for a particular transformer], the <a href="https://discuss.huggingface.co/" target="_blank">Huggingface community</a> [for solving all the queries albeit being small ones], and <strong>finally my teammates <a href="https://www.kaggle.com/muktan" target="_blank">@muktan</a> and <a href="https://www.kaggle.com/deepangraval" target="_blank">@deepangraval</a> [being part of this 3 months journey with immense enthusiasm]</strong>.</li>
</ol>
<h1>Brief Solution</h1>
<p>We used Roberta-base, Roberta-large, Electra-base, and Electra-large with different heads [Mean-Pooling, CLS Token, and Attention], with/without pre-training, with/without layer-reinitialization, Gradient Accumulation, Group Differential LR, and a unique method of using a 2D Attention Head that will be discussed by <a href="https://www.kaggle.com/muktan" target="_blank">@muktan</a> in <strong><a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/261206" target="_blank">this post</a></strong>. We used one submission based solely on our CV values and another based on the LB score.</p>
<p>1) CV Fit [Private LB: 0.458] - We used the following <a href="https://www.kaggle.com/cdeotte/forward-selection-oof-ensemble-0-942-private" target="_blank">notebook</a> to find the best ensembling score using our oof files and used 14 models which are as follows:</p>
<table>
<thead>
<tr>
<th>Model-Description</th>
<th>CV</th>
<th>Public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Roberta-base using Tensorflow <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a></td>
<td>0.509</td>
<td>0.482</td>
</tr>
<tr>
<td>Roberta-base <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a></td>
<td>0.481</td>
<td>0.477</td>
</tr>
<tr>
<td>Roberta-large-1 <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a></td>
<td>0.479</td>
<td>0.474</td>
</tr>
<tr>
<td>Roberta-large- [No-Pretraining+5 Layer Reinitialization]</td>
<td>0.481</td>
<td>0.473</td>
</tr>
<tr>
<td>Roberta-base <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a></td>
<td>0.477</td>
<td>0.467</td>
</tr>
<tr>
<td>Roberta-base- [variation of the above model]</td>
<td>0.475</td>
<td>0.469</td>
</tr>
<tr>
<td>Roberta-large- [Attention Head with Differential LR]</td>
<td>0.474</td>
<td>0.460</td>
</tr>
<tr>
<td>Roberta-large- [Above Model + 5 layer Reinitialization]</td>
<td>0.471</td>
<td>0.463</td>
</tr>
<tr>
<td>Roberta-large- [Mean Pooling Head]</td>
<td>0.479</td>
<td>0.469</td>
</tr>
<tr>
<td>Roberta-large- [2D Attention for 2 4 6 8 layers from the end- <a href="https://www.kaggle.com/muktan" target="_blank">@muktan</a>]</td>
<td>0.474</td>
<td>0.461</td>
</tr>
<tr>
<td>Roberta-large- [Variation of above technique]</td>
<td>0.475</td>
<td>0.465</td>
</tr>
<tr>
<td>Electra-large- [Variation of the best LB Roberta-large model]</td>
<td>0.479</td>
<td>0.469</td>
</tr>
<tr>
<td>Electra-large- [Variation of the above model]</td>
<td>0.487</td>
<td>0.480</td>
</tr>
<tr>
<td>Electra-large- [2D Attention- <a href="https://www.kaggle.com/muktan" target="_blank">@muktan</a>]</td>
<td>0.487</td>
<td>0.474</td>
</tr>
<tr>
<td>Overall</td>
<td>0.455</td>
<td>0.457</td>
</tr>
</tbody>
</table>
<p>2) LB Fit [Private LB: 0.458, Public LB: 0.455] - We used more-or-less the same models from the above list but we provided weights based on the LB scores and combined them in a different manner.</p>
<h1>Training Config</h1>
<ul>
<li>Optimiser: AdamW<ul>
<li>Weight decay: 0.5</li>
<li>LR: 2.5e-5 to 3.5e-5 (depending on batch size) [Group Differential LR]</li></ul></li>
<li>Epochs: 3-5 [depending on the model]</li>
<li>Batch size: 16 (or 10 for the larger models)</li>
<li>Validation & Checkpointing every 5-10 steps [depending on the model]</li>
<li>Loss: MSE/RMSE</li>
</ul>
<h1>What worked for us</h1>
<ul>
<li>Pre-training Roberta-base [for large models, we were not able to pre-train ]</li>
<li>Differential learning rate [2.5e-5 to 3.5e-5 with a factor of 2.5 depending on the architecture]</li>
<li>Last 5 layer-reinitialization [we tried different techniques but this was the best amongst them]</li>
<li>Gradient Accumulation 2 [BS-8 for training on larger batch size]</li>
<li>Attention head, Mean Pooling head - [We were struggling to find custom heads for fine-tuning and these heads worked very well for the given tasks]</li>
</ul>
<h1>What didn't work for us</h1>
<ul>
<li>External-data pre-training- We tried to pre-train the model on MLM tasks using different external datasets, but we were not achieving good results w.r.t. CV or LB with that.</li>
<li>Stacking- We used the embedding features from different models and fed them to a Ridge/BayesianRidge regression, but we were not getting good results with that. We concluded that, because of how our models were trained with Grouped Differential LR and how the features were extracted, stacking wasn't working for those models.</li>
<li>External-Features+Stacking- For some of the models, we combined the embeddings with some externals features and tried stacking using AutoML as well as Optuna but were unsuccessful as it worsened our score even more.</li>
</ul>
<h1>Tools Used</h1>
<ul>
<li>We are fresh undergraduate students with no external access to GPUs so we have been using Kaggle Kernels and Google Colab [we used 7 accounts 🤪to perform different experiments for finding hyper-parameters.]</li>
<li>We used Google-Docs from the beginning to first find all the techniques that are available for a readability task by reading different white papers [ML/Statistical/DL/RL] and then moved on to implement one-by-one as per the model-avail. and finally read about improving said models using different fine-tuning techniques.</li>
<li>We used Google-Sheets to track all the progress of the competition, and different models configurations as well.</li>
</ul>
<h2>P.S.</h2>
<p>Our main intent was to learn about how transformers has revolutionalized the NLP community. Thus, because of this competition, we were able to achieve this goal by learning various innovative strategies to fine-tune transformer models that will eventually help us in the future.</p>
<p>Thank you for reading & Happy Kaggling!!</p> |
CommonLit Readability Prize | Private 22nd Place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I didn't think my model weights would be useful for the other team :D<br>
Good job! Congratulations on your medal.</p>
<p>Hi, <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a>! (sorry for wrong mention.) I think your contribution to this competition is so great!</p>
<ul>
<li>Which strategy did you use for combining the text and tabular features?</li>
</ul>
<p>The code is below; its output is concatenated with the other features, like the LSTM head.<br>
The nn.Linear input is normalized using statistics from the train dataset.</p>
<pre><code>self.linear_feature = nn.Sequential(
nn.Linear(17, 128),
nn.GELU(),
)
</code></pre>
<ul>
<li>Can you highlight more on the Nelder Mead Ensembling technique.</li>
</ul>
<p>The code is below. </p>
<pre><code>import numpy as np
from scipy.optimize import minimize
def rmse_func(weights):
# predictions.shape = (n_data, n_models_for_ensemble)
pred = (predictions * weights).sum(axis=1)
rmse = np.sqrt(1 / len(pred) * ((target - pred)**2).sum())
return rmse
n_models = predictions.shape[1]
initial_weights = np.ones(n_models) / n_models  # start from equal weights
res = minimize(rmse_func, initial_weights, method='Nelder-Mead')
weights = res["x"]
rmse = res["fun"]
</code></pre>
<p>Thanks! I'm not doing anything special. <br>
The main hyperparameters are as follows: </p>
<table>
<thead>
<tr>
<th>nlp_model_name</th>
<th>funnel-large-base</th>
<th>funnel-large</th>
</tr>
</thead>
<tbody>
<tr>
<td>dropout</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>batch_size</td>
<td>12</td>
<td>12</td>
</tr>
<tr>
<td>lr_bert</td>
<td>2E-05</td>
<td>2E-05</td>
</tr>
<tr>
<td>lr_fc</td>
<td>5E-05</td>
<td>5E-05</td>
</tr>
<tr>
<td>warmup_ratio</td>
<td>0.05</td>
<td>0.05</td>
</tr>
<tr>
<td>epochs</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>activation</td>
<td>GELU</td>
<td>GELU</td>
</tr>
<tr>
<td>optimizer</td>
<td>AdamW</td>
<td>AdamW</td>
</tr>
<tr>
<td>weight_decay</td>
<td>0.1</td>
<td>0.1</td>
</tr>
<tr>
<td>rnn_module</td>
<td>LSTM</td>
<td>LSTM</td>
</tr>
<tr>
<td>rnn_module_num</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>rnn_hidden_indice</td>
<td>(-1, 0)</td>
<td>(-1, 0)</td>
</tr>
<tr>
<td>linear_vocab_enable</td>
<td>False</td>
<td>True</td>
</tr>
<tr>
<td>multi_dropout_ratio</td>
<td>0.3</td>
<td>0.3</td>
</tr>
<tr>
<td>multi_dropout_num</td>
<td>10</td>
<td>10</td>
</tr>
<tr>
<td>max_length</td>
<td>256</td>
<td>256</td>
</tr>
<tr>
<td>hidden_stack_enable</td>
<td>True</td>
<td>True</td>
</tr>
<tr>
<td>reinit_layers</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>gradient_clipping</td>
<td>0.2</td>
<td>0.2</td>
</tr>
<tr>
<td>feature_enable</td>
<td>False</td>
<td>True</td>
</tr>
<tr>
<td>stochastic_weight_avg</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<td>val_check_interval</td>
<td>0.05</td>
<td>0.05</td>
</tr>
</tbody>
</table>
<h1>Acknowledgement</h1>
<p>First of all, thanks to the competition organizers for hosting this competition.<br><br>
And I would like to thank <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> for sharing a lot of great notebooks and discussions. I couldn't have achieved this ranking without you, and I learned many methods for fine-tuning BERT models from you!<br></p>
<h1>summary</h1>
<p>I had three options (below), and in the end I chose the top two for the final submission.</p>
<ul>
<li>LB best (9models, CV: 0.4449, LB: 0.444, Private LB: 0.452) </li>
<li>CV best exclude base-model (9models, CV: 0.4430, LB: 0.447, Private LB: 0.452)</li>
<li>CV best (12models, CV: 0.4420, LB: 0.453, Private LB: 0.452)</li>
</ul>
<p>The details of the LB-best ensemble are in the table below:<br></p>
<h2>LB-Best</h2>
<table>
<thead>
<tr>
<th></th>
<th>CV(5fold)</th>
<th>weight</th>
<th>Public</th>
<th>Private</th>
<th>other</th>
</tr>
</thead>
<tbody>
<tr>
<td>luke-large</td>
<td>0.4795</td>
<td>0.0877</td>
<td>0.467</td>
<td>?</td>
<td></td>
</tr>
<tr>
<td>roberta-large</td>
<td>0.4693</td>
<td>0.1035</td>
<td>0.464</td>
<td>0.469</td>
<td></td>
</tr>
<tr>
<td>gpt2-medium</td>
<td>0.481</td>
<td>0.158</td>
<td>0.469</td>
<td>0.482</td>
<td></td>
</tr>
<tr>
<td>funnel-large-base</td>
<td>0.4706</td>
<td>0.1455</td>
<td>0.461</td>
<td>0.47</td>
<td></td>
</tr>
<tr>
<td>funnel-large</td>
<td>0.4705</td>
<td>0.0267</td>
<td>0.46</td>
<td>0.467</td>
<td></td>
</tr>
<tr>
<td>ernie-large</td>
<td>0.4712</td>
<td>0.093</td>
<td>0.463</td>
<td>0.477</td>
<td></td>
</tr>
<tr>
<td>deberta-large</td>
<td>0.475</td>
<td>0.1693</td>
<td>0.464</td>
<td>0.467</td>
<td></td>
</tr>
<tr>
<td>mpnet-base</td>
<td>0.4782</td>
<td>0.1415</td>
<td>0.466</td>
<td>0.474</td>
<td></td>
</tr>
<tr>
<td>gpt2-large</td>
<td>0.4744</td>
<td>0.1121</td>
<td>0.468</td>
<td>0.48</td>
<td>batch_size=4</td>
</tr>
<tr>
<td>coef</td>
<td>-</td>
<td>0.0235</td>
<td>-</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<h2>CV-Best</h2>
<table>
<thead>
<tr>
<th></th>
<th>CV(5fold)</th>
<th>weight</th>
<th>Public</th>
<th>Private</th>
<th>other</th>
</tr>
</thead>
<tbody>
<tr>
<td>electra-large</td>
<td>0.4743</td>
<td>0.0829</td>
<td>0.474</td>
<td>0.479</td>
<td></td>
</tr>
<tr>
<td>funnel-large</td>
<td>0.4752</td>
<td>0.1215</td>
<td>0.467</td>
<td>0.474</td>
<td></td>
</tr>
<tr>
<td>deberta-large(1)</td>
<td>0.4738</td>
<td>0.1516</td>
<td>0.47</td>
<td>0.463</td>
<td></td>
</tr>
<tr>
<td>gpt2-medium</td>
<td>0.4775</td>
<td>0.1219</td>
<td>0.471</td>
<td>0.483</td>
<td></td>
</tr>
<tr>
<td>luke-large</td>
<td>0.474</td>
<td>0.0937</td>
<td>0.468</td>
<td>0.465</td>
<td></td>
</tr>
<tr>
<td>deberta-large(2)</td>
<td>0.475</td>
<td>0.1179</td>
<td>0.464</td>
<td>0.467</td>
<td></td>
</tr>
<tr>
<td>mpnet-base</td>
<td>0.4782</td>
<td>0.0999</td>
<td>0.466</td>
<td>0.474</td>
<td></td>
</tr>
<tr>
<td>ernie-large</td>
<td>0.4681</td>
<td>0.0792</td>
<td>0.47</td>
<td>0.473</td>
<td></td>
</tr>
<tr>
<td>gpt2-large</td>
<td>0.4744</td>
<td>0.1648</td>
<td>?</td>
<td>?</td>
<td>batch_size=4</td>
</tr>
<tr>
<td>coef</td>
<td>-</td>
<td>0.009</td>
<td>-</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<h1>configuration for almost all models</h1>
<ul>
<li>epochs = 4</li>
<li>optimizer: AdamW</li>
<li>scheduler: linear_schedule_with_warmup(warmup: 5%)</li>
<li>lr_bert: 3e-5</li>
<li>batch_size: 12</li>
<li>gradient clipping: 0.2~0.5</li>
<li>reinitialize layers: last 2~6 layers </li>
<li>ensemble: Nelder-Mead</li>
<li>custom head(finally concat all)<ul>
<li>averaging last 4 hidden layer</li>
<li>LSTM head</li>
<li>vocabulary dense</li></ul></li>
</ul>
<pre><code># hidden_states shape: (batch_size, vocab_size, bert_hidden_size)
linear_vocab = nn.Sequential(
nn.Linear(bert_hidden_size, 128),
nn.GELU(),
nn.Linear(128, 64),
nn.GELU()
)
linear_final = nn.Linear(vocab_size * 64, 128)
out = linear_vocab(hidden_states).view(len(input_ids), -1) # shape: (batch_size, vocab_size * 64)
out = linear_final(out) # out shape: (batch_size, 128)
</code></pre>
<ul>
<li>17 hand-made features <ul>
<li>sentence count</li>
<li>average character count in documents</li></ul></li>
</ul>
<h1>worked for me</h1>
<ul>
<li>model ensemble: I thought diversity is the most important thing in this competition.<ul>
<li>At the beginning of the competition, I tested the effectiveness of the ensemble.</li>
<li>Up to the middle stage, I fixed the model to roberta-large and tried to improve the score. </li>
<li>At the end, I applied the method to other models. I found that the key parameters for this task are {learning_rate, N layers to re-initialize}, so I tuned those parameters for each model.</li></ul></li>
<li>re-initialization<ul>
<li>This paper (<a href="https://arxiv.org/pdf/2006.05987.pdf" target="_blank">https://arxiv.org/pdf/2006.05987.pdf</a>) shows that fine-tuning with re-initialization of the last N layers works well.</li>
<li>Different models have different optimal N. Most models used N=4~5; the gpt2 models used N=6.</li></ul></li>
<li>LSTM head<ul>
<li>Feeding BERT's first and last hidden layers into an LSTM layer worked well (a rough sketch follows after this list).</li>
<li>I think the first layer represents vocabulary difficulty and the last layer represents sentence difficulty. Both are important for inferring readability.</li></ul></li>
<li>Removing dropout. Improved CV by 0.01~0.02.</li>
<li>Gradient clipping (0.2 or 0.5 works well for me; improves CV by about 0.005).</li>
</ul>
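<p>A rough sketch of that first/last-layer LSTM head (whether the two layers are concatenated or stacked is our assumption; the exact wiring may differ from the author's):</p>
<pre><code>import torch
import torch.nn as nn

class FirstLastLSTMHead(nn.Module):
    """LSTM over the concatenated first and last transformer hidden states, then mean pooling."""
    def __init__(self, hidden_size=1024):
        super().__init__()
        self.lstm = nn.LSTM(hidden_size * 2, hidden_size, batch_first=True)
        self.regressor = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states):
        # hidden_states: tuple of per-layer tensors, each (batch, seq_len, hidden)
        x = torch.cat([hidden_states[0], hidden_states[-1]], dim=-1)
        out, _ = self.lstm(x)
        return self.regressor(out.mean(dim=1))
</code></pre>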
<h1>not worked for me</h1>
<ul>
<li>Input attention matrix to 2D-CNN(like ResNet18 or simple 2DCNN)<ul>
<li>I thought this could represent the complexity of sentences with relative pronouns.</li></ul></li>
<li>masked 5%~10% vocabulary.</li>
<li>Minimize KLDiv loss to fit distribution.</li>
<li>Scale target to 0~1 and minimize crossentropy loss</li>
<li>"base" models excluding mpnet. I got 0.47x CV but Public LB: 0.48x ~ 0.49x.</li>
<li>Stacking using LightGBM.</li>
<li>other models (results in the table below; single-model CV is good but they got zero weight in the ensemble)</li>
<li>T5. The notebook below achieves 0.47 LB using T5, so I tried it but failed.<br>
I got only 0.49x (fold 0 only) with learning_rate=1.5e-4.<br>
<a href="https://www.kaggle.com/gilfernandes/commonlit-pytorch-t5-large-svm/comments" target="_blank">https://www.kaggle.com/gilfernandes/commonlit-pytorch-t5-large-svm/comments</a></li>
</ul>
<table>
<thead>
<tr>
<th>model</th>
<th>CV(5fold)</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>xlnet-large-cased</td>
<td>0.471</td>
<td>0.473</td>
</tr>
<tr>
<td>bert-large-cased</td>
<td>0.4828</td>
<td>0.488</td>
</tr>
</tbody>
</table>
<h1>GPU</h1>
<ul>
<li>GeForce RTX3090 * 1.</li>
</ul>
<h1>Inference notebook</h1>
<p><a href="https://www.kaggle.com/kurupical/191-192-202-228-251-253-268-288-278-final?scriptVersionId=69642056" target="_blank">https://www.kaggle.com/kurupical/191-192-202-228-251-253-268-288-278-final?scriptVersionId=69642056</a><br>
(Sorry, almost all of the datasets are private.)</p>
<h1>repository</h1>
<p><a href="https://github.com/kurupical/commonlit" target="_blank">https://github.com/kurupical/commonlit</a></p>
<p>Thank you for pointing that out. I fixed the mention! <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a>'s work is very wonderful :D I was helped by them a lot!</p>
<p>Thank you! <br>
I add the inference notebook and github repository in this post.</p>
<p>Thanks!<br>
All models use the same folds, and I optimize the weights over all of the out-of-fold predictions!<br>
I tried optimizing each fold separately, but it didn't work for me.</p>
<p>Thanks!<br>
If I want to know whether a model improves by ~0.01, I train 1 fold; otherwise I train 5 folds.<br>
And to increase the efficiency of the experiments, I defined a minimum baseline on fold 1 for each model. If the minimum baseline was not met, the experiment was considered a failure.</p>
<p>Thanks! A plain LSTM head didn't work, but inputting the first/last hidden layers to the LSTM worked for me :D</p>
<p>Hi! Both worked for public/private.</p> |
CommonLit Readability Prize | Private 25th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, my contribution to the team was mostly in the process of stacking models. So, I’d like to discuss mainly our ensembling approach.</p>
<p><strong>Public LB Best Models</strong></p>
<table>
<thead>
<tr>
<th></th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>roberta-base</td>
<td>0.4778</td>
<td>?</td>
<td>?</td>
</tr>
<tr>
<td>roberta-large</td>
<td>0.4697</td>
<td>0.460</td>
<td>0.464</td>
</tr>
<tr>
<td>electra-large</td>
<td>0.4747</td>
<td>0.462</td>
<td>0.474</td>
</tr>
<tr>
<td>mpnet-base</td>
<td>0.4816</td>
<td>?</td>
<td>?</td>
</tr>
<tr>
<td>longformer-large</td>
<td>0.4873</td>
<td>?</td>
<td>?</td>
</tr>
</tbody>
</table>
<p>・Some of our models used re-initialization layers. In ‘large’ models, the number of re-init layers is 5 (‘base’ is 2).<br>
・This article (<a href="https://towardsdatascience.com/pitfalls-with-dropout-and-batchnorm-in-regression-problems-39e02ce08e4d" target="_blank">https://towardsdatascience.com/pitfalls-with-dropout-and-batchnorm-in-regression-problems-39e02ce08e4d</a>) suggests that dropout and batchnorm layers should be avoided in regression problems, so we didn’t use them.<br>
・Other than the models mentioned in the table, we employed roberta-base x 2, roberta-large x 12, electra-large x 2, mpnet-base x 1, and longformer-large x 1 in the private best submission, but I think there is room for improvement by narrowing down the list of models.</p>
<p><strong>Stacking</strong><br>
In stacking, we combined sentence embeddings (generated by sentence_transformers) with the predictions of all models, in order to consider not only the predicted targets but also the characteristics of the sentence itself. The CV score was best when the sentence embeddings were dimensionally reduced by KPCA (n_components=2, kernel=’poly’). For the ensemble, we used the average prediction of Ridge and ARDRegression, because LGBM had a good CV (about 0.432~) but seemed to overfit (the public LB was worse), and other models did not achieve such good CV scores.</p>
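<p>A compact sketch of that stacking step (array names are placeholders; the KPCA settings follow the values quoted above):</p>
<pre><code>import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge, ARDRegression

# oof_preds: (n_samples, n_models) OOF predictions; sent_emb: sentence-transformer embeddings
kpca = KernelPCA(n_components=2, kernel="poly")
emb_2d = kpca.fit_transform(sent_emb)
X = np.hstack([oof_preds, emb_2d])

ridge = Ridge().fit(X, y)
ard = ARDRegression().fit(X, y)

X_test = np.hstack([test_preds, kpca.transform(test_emb)])
final_pred = 0.5 * (ridge.predict(X_test) + ard.predict(X_test))
</code></pre>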
<p><strong>Things that did not work for us</strong><br>
・Pseudo labeling<br>
・Data Augmentation<br>
・Some architectures like xlnet and gpt (I think one of the reasons we missed a gold medal is that we failed to handle these architectures effectively.)</p>
<p>Finally, I’d like to thank all the people competing with us and sharing a lot of useful insights. Thank you for reading!</p> |
CommonLit Readability Prize | Public&Private 37th solution [Team CM, T0m Part] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, thanks to Kaggle and the hosts for an interesting competition, and to my teammates <a href="https://www.kaggle.com/teeyee314" target="_blank">@teeyee314</a>, <a href="https://www.kaggle.com/nazarov" target="_blank">@nazarov</a>, and <a href="https://www.kaggle.com/leecming" target="_blank">@leecming</a>. I learned a lot from them during this competition.<br>
Since this was my first NLP competition, I started by reading the BERT paper and thought a lot about how to train BERT stably and how to design the regression model.</p>
<p>So, I'd like to discuss what I've done to achieve the above.</p>
<p>The following are the main ideas of my part.</p>
<ul>
<li>Binary Pretraining using competition data.</li>
<li>High Probability Mixout (at the same time, Dropout=0 in bert layer)</li>
<li>Weighted Loss by using Standard_error</li>
<li>Custom Aggregation and Head for each hidden layer</li>
</ul>
<h1>Binary Pre-training</h1>
<p>I did an additional pre-training that was inspired by the roots of the competition target: the targets are actually Bradley-Terry values computed from pairs of excerpts.<br>
So, I first designed a binary-classification model, and then used its weights for the main regression task.</p>
<p><img src="https://i.postimg.cc/SxKdqRPL/2021-08-04-7-22-36.png"></p>
<p>Note:</p>
<ul>
<li>The BERT weights are shared.</li>
<li>In the train phase, pairs are randomly sampled.</li>
<li>In the validation phase, the pairs should be fixed.</li>
<li>There are many sampling strategies (e.g., sampling pairs whose target difference is greater than a threshold), but random was the best.</li>
</ul>
<p>The pre-training weights boosted my CV and LB by 0.002 ~ 0.01.</p>
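<p>A simplified sketch of the random pair sampling used for this pre-training (tokenizer settings and the label convention are assumptions):</p>
<pre><code>import random
import torch
from torch.utils.data import Dataset

class PairDataset(Dataset):
    """Pairs each excerpt with a randomly sampled partner for binary pre-training."""
    def __init__(self, texts, targets, tokenizer, max_len=300):
        self.texts, self.targets = texts, targets
        self.tok, self.max_len = tokenizer, max_len

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, i):
        j = random.randrange(len(self.texts))       # a new random partner every epoch
        enc_i = self.tok(self.texts[i], truncation=True, max_length=self.max_len,
                         padding="max_length", return_tensors="pt")
        enc_j = self.tok(self.texts[j], truncation=True, max_length=self.max_len,
                         padding="max_length", return_tensors="pt")
        # label 1 if the first excerpt has the higher target (label convention is an assumption)
        label = float(self.targets[i] > self.targets[j])
        return enc_i, enc_j, torch.tensor(label)
</code></pre>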
<h1>Weighted Loss</h1>
<p>To make training more stable, I used the standard_error for calculating the training loss.<br>
The details are below:<br>
1, scale standard_error to the range {0. ~ 0.5}<br>
2, calculate the element-wise MSELoss<br>
3, weight each loss using the scaled standard_error<br>
4, mean and backward</p>
<pre><code># pytorch brief example code
import numpy as np
import torch.nn as nn
# scaling
values = train["standard_error"].values
train["standard_error"] = 0.5 * (values - np.min(values)) / (np.max(values) - np.min(values))
~~~
# training
criterion = nn.MSELoss(reduction='none')
~~~
loss = (criterion(output.squeeze(), labels) * (1 - errors)).mean()
loss.backward()
</code></pre>
<h1>Custom Aggregation and Head for each hidden layer</h1>
<p>I added a regression head for each BERT hidden layer, and then a (learnable) weighted sum predicts the final target value.<br>
The aggregation and head weights are not shared across layers. </p>
<p><img src="https://i.postimg.cc/CKhNhNxQ/2021-08-04-7-46-36.png""></p>
<pre><code>import torch
import torch.nn as nn

class AggregationLayer(nn.Module):
    def __init__(self, in_size, max_length):
        super().__init__()
        # learnable weights over the sequence (time) dimension
        weights_init = torch.zeros(max_length).float()
        self.seq_weights = torch.nn.Parameter(weights_init)
        # learnable weights over the hidden (feature) dimension
        weights_init = torch.zeros(in_size).float()
        weights_init.data[1:] = -1
        self.dim_weights = torch.nn.Parameter(weights_init)

    def forward(self, x):
        # x: (batch, seq_len, hidden)
        w_dim = torch.softmax(self.dim_weights, dim=0)
        w_seq = torch.softmax(self.seq_weights, dim=0)
        x_dim = (x * w_dim).sum(1)
        x_seq = (x * w_seq.unsqueeze(-1).expand(x.shape).float()).sum(1)
        z = (x_dim + x_seq) / 2   # (batch, hidden)
        return z
</code></pre>
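<p>For clarity, a minimal sketch of how these per-layer aggregations and heads could be combined with a learnable weighted sum over layers (the wiring below is an assumption based on the description, not the exact code):</p>
<pre><code>class PerLayerHeadModel(nn.Module):
    def __init__(self, backbone, hidden_size, max_length, num_layers):
        super().__init__()
        self.backbone = backbone  # must be configured with output_hidden_states=True
        self.aggs = nn.ModuleList(
            [AggregationLayer(hidden_size, max_length) for _ in range(num_layers)]
        )
        self.heads = nn.ModuleList([nn.Linear(hidden_size, 1) for _ in range(num_layers)])
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))  # learnable layer mixture

    def forward(self, input_ids, attention_mask):
        hidden_states = self.backbone(
            input_ids=input_ids, attention_mask=attention_mask
        ).hidden_states[1:]  # skip the embedding-layer output
        preds = torch.stack(
            [head(agg(h)) for h, agg, head in zip(hidden_states, self.aggs, self.heads)],
            dim=-1,
        )  # (batch, 1, num_layers)
        w = torch.softmax(self.layer_weights, dim=0)
        return (preds * w).sum(-1)  # (batch, 1) final prediction
</code></pre>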
<h1>Other tips</h1>
<ul>
<li>Randomly adding tokens improved CV and LB (p=0.05)</li>
<li>Strong Mixout improved CV and LB (p=0.4) (this may be an effect of the dropout being zero.)</li>
<li>max_length = 300</li>
</ul>
<h1>Model</h1>
<p>With the above implementation, I achieved CV=0.4723 and LB=0.460 (roberta-large)</p>
<h1>After Team Merge</h1>
<p>I made a lot of models with pseudo labeling (external data).</p> |
CommonLit Readability Prize | Solution to our first NLP competition🥉 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>Thank you Kaggle and CommonLit for an exciting competition!</p>
<p>Our team (cpptake, nishipy) was in the silver medal range on the public LB, but on the private LB a shake-down happened and we fell into the bronze medal range.<br>
It's frustrating, but we learned a lot, so we'll leave you with our solution.</p>
<p>I would like to get a better ranking in the next competition, so if you have any ideas for improvement, please comment.</p>
<h3>Pipeline & ideas</h3>
<h5>see the following notebook:</h5>
<p><a href="https://www.kaggle.com/takeshikobayashi/bronze-medal-solution-roberta-stacking-ensemble" target="_blank">our final sumit notebook</a></p>
<h3>Reference</h3>
<ul>
<li><a href="https://www.kaggle.com/rhtsingh/commonlit-readability-prize-roberta-torch-infer-3" target="_blank">https://www.kaggle.com/rhtsingh/commonlit-readability-prize-roberta-torch-infer-3</a></li>
<li><a href="https://www.kaggle.com/rhtsingh/commonlit-readability-prize-roberta-torch-itpt" target="_blank">https://www.kaggle.com/rhtsingh/commonlit-readability-prize-roberta-torch-itpt</a></li>
<li><a href="https://www.kaggle.com/andretugan/pre-trained-roberta-solution-in-pytorch" target="_blank">https://www.kaggle.com/andretugan/pre-trained-roberta-solution-in-pytorch</a></li>
<li><a href="https://www.kaggle.com/maunish/clrp-roberta-svm" target="_blank">https://www.kaggle.com/maunish/clrp-roberta-svm</a></li>
<li><a href="https://www.kaggle.com/jcesquiveld/best-transformer-representations" target="_blank">https://www.kaggle.com/jcesquiveld/best-transformer-representations</a></li>
<li><a href="https://www.kaggle.com/gilfernandes/commonlit-pytorch-t5-large-svm" target="_blank">https://www.kaggle.com/gilfernandes/commonlit-pytorch-t5-large-svm</a><br>
Thank you <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> <a href="https://www.kaggle.com/maunish" target="_blank">@maunish</a> <a href="https://www.kaggle.com/jcesquiveld" target="_blank">@jcesquiveld</a> <a href="https://www.kaggle.com/gilfernandes" target="_blank">@gilfernandes</a> and the comunity for sharing great notebooks!</li>
</ul>
<p>Thank you for reply!<br>
We will continue trying hard to get better place in competitions and to contribute for the Kaggle community</p> |
CommonLit Readability Prize | [88th place] solution - Team DilBERT | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Acknowledgement</h1>
<p>First of all, my team-mate <a href="https://www.kaggle.com/mananjhaveri" target="_blank">@mananjhaveri</a> and I would like to thank the competition organizers for organizing this competition.</p>
<p>We would also like to thank <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> and <a href="https://www.kaggle.com/andretugan" target="_blank">@andretugan</a> for their amazing notebooks. Special thanks to <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> for his awesome book called Approaching Almost Any Machine Learning Problem. The chapter on NLP helped us a lot.</p>
<h1>The breakthrough (best single-model)</h1>
<p>Our best single model was RoBERTa-Large with an attention head. We also added layer-wise learning rate decay and layer re-initialization from <a href="https://www.kaggle.com/rhtsingh/on-stability-of-few-sample-transformer-fine-tuning" target="_blank">this amazing notebook</a>, which boosted our score.</p>
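<p>For reference, a minimal sketch of layer-wise learning rate decay parameter groups (the decay factor and the parameter-name matching below are assumptions, not the exact settings from the notebook):</p>
<pre><code># Layer-wise LR decay: lower encoder layers get smaller learning rates.
def get_llrd_param_groups(model, base_lr=3e-5, decay=0.95):
    groups = []
    num_layers = model.config.num_hidden_layers
    for i in range(num_layers):
        # top layer keeps base_lr, each layer below is multiplied by `decay` once more
        lr = base_lr * (decay ** (num_layers - 1 - i))
        params = [p for n, p in model.named_parameters() if f"encoder.layer.{i}." in n]
        groups.append({"params": params, "lr": lr})
    # embeddings get the most decayed lr; everything else keeps the base lr
    emb = [p for n, p in model.named_parameters() if "embeddings" in n]
    groups.append({"params": emb, "lr": base_lr * (decay ** num_layers)})
    rest = [p for n, p in model.named_parameters()
            if "encoder.layer." not in n and "embeddings" not in n]
    groups.append({"params": rest, "lr": base_lr})
    return groups

# optimizer = AdamW(get_llrd_param_groups(model), weight_decay=1e-2)
</code></pre>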
<p>Configuration for our best single model - </p>
<pre><code>NUM_EPOCHS = 5
NUM_FOLDS = 5
BATCH_SIZE = 12
MAX_LEN = 256
LEARNING_RATE = 3e-5
WEIGHT_DECAY = 1e-2
NUM_WARMUP_STEPS = 50
EVAL_SCHEDULE = [(0.51, 50), (0.49, 25)]
</code></pre>
<p>With this, our best single model reached a score of 0.467 in Public LB and 0.468 in Private LB. </p>
<h1>The breakthrough (ensemble)</h1>
<p>We used 3 models for this - </p>
<ul>
<li><p>Model 1 = LB 0.467 - <a href="https://www.kaggle.com/andretugan/pre-trained-roberta-solution-in-pytorch" target="_blank">https://www.kaggle.com/andretugan/pre-trained-roberta-solution-in-pytorch</a></p></li>
<li><p>Model 2 = LB 0.466 - <a href="https://www.kaggle.com/rajat95gupta/clrp-mean-pooling-inference" target="_blank">https://www.kaggle.com/rajat95gupta/clrp-mean-pooling-inference</a></p></li>
<li><p>Model 3 = Our best single model</p></li>
</ul>
<table>
<thead>
<tr>
<th>Model</th>
<th>Ratio</th>
</tr>
</thead>
<tbody>
<tr>
<td>Model 1</td>
<td>0.35</td>
</tr>
<tr>
<td>Model 2</td>
<td>0.35</td>
</tr>
<tr>
<td>Model 3</td>
<td>0.3</td>
</tr>
</tbody>
</table>
<p>This gave us a score of 0.457 on public as well as private LB. </p>
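<p>The blend itself is just a weighted average of the three prediction arrays; as a one-line sketch (array names are placeholders):</p>
<pre><code># Weighted average using the ratios from the table above
final_preds = 0.35 * preds_model1 + 0.35 * preds_model2 + 0.30 * preds_model3
</code></pre>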
<p>A big thank you to the Kaggle community for being so supportive. This was our first competition where we landed in the medal zone and it was a very exciting journey. We look forward to more competitions like this and hopefully contribute to this community.</p> |
CommonLit Readability Prize | [Private 13th Solution] Pseudo-labeling | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks the community for this competition!</p>
<h3>Overview</h3>
<ul>
<li>Model architectures: roberta-large, xlnet-large, electra-large, deberta-large.</li>
<li>Attention head: <a href="https://www.kaggle.com/maunish/clrp-roberta-svm" target="_blank">maunish's clrp-roberta-svm</a>.</li>
<li>Semi-supervised learning, Pseudo-label.</li>
<li>Hyperparameters: I didn't tune much and use mostly original setup from <a href="https://www.kaggle.com/rhtsingh/commonlit-readability-prize-roberta-torch-fit" target="_blank">torch's training script</a> since I won't have much chance competing in what I'm far behind others.</li>
</ul>
<p>Note: I'll report my Public scores in this post since Public / Private difference is only around 0.001.</p>
<h3>First half</h3>
<p>I spent the majority of the first half trying different approaches (classification, custom loss) and learning to use Hugging Face with TF and PyTorch. But I didn't know how things worked, so none of the experiments were reproducible, and turning the task into a classification problem basically throws all the ordinal information away, so it wasn't effective.</p>
<p>After getting the hang of PyTorch and Hugging Face, I trained TF models on a train set whose targets were shifted to the left, since the TF models were overpredicting on the validation set. Ensembling roberta-large with xlnet-large got me to LB <strong>0.462</strong>. Adding PyTorch models and an attention head trained with <a href="https://www.kaggle.com/rhtsingh/commonlit-readability-prize-roberta-torch-fit" target="_blank">torch's amazing script</a> boosted me to <strong>0.458</strong>.</p>
<p>At this point my single best 5 fold model was only <strong>0.468</strong>.</p>
<h3>Second half</h3>
<h4>Pseudo-labeling</h4>
<p>Inspired after reading <a href="https://openaccess.thecvf.com/content_CVPR_2020/papers/Xie_Self-Training_With_Noisy_Student_Improves_ImageNet_Classification_CVPR_2020_paper.pdf" target="_blank">Self-training with Noisy Student improves ImageNet classification
</a>.</p>
<p>I used mainly <a href="https://www.cs.cmu.edu/~dbamman/booksummaries.html" target="_blank">CMU's book summaries</a>, <a href="https://www.cs.cmu.edu/~ark/personas/" target="_blank">movie summaries</a> and a little bit of <a href="https://www.kaggle.com/markwijkhuizen/simplenormal-wikipedia-abstracts-v1" target="_blank">Mark Wijkhuizen's wikipedia abstracts</a>. Datasets were created by cleaning and mixing different ones together. I generated pseudo-labels with my best ensemble at the time. Most datasets have <strong>16k to 26k</strong> excerpts.</p>
<p>Models were trained with 80% of train data (first fold) concatenated with the new dataset. This helped me boost my literal single model (1 file) to <strong>0.465</strong> on roberta-large. </p>
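<p>A minimal sketch of this pseudo-labeling setup (function and column names are placeholders):</p>
<pre><code>import pandas as pd

def make_pseudo_dataset(external_df, ensemble_predict_fn):
    # label external excerpts with the current best ensemble
    external_df = external_df.copy()
    external_df["target"] = ensemble_predict_fn(external_df["excerpt"].tolist())
    return external_df[["excerpt", "target"]]

def build_stage_train(train_df, fold, pseudo_df):
    # 80% of the labelled train data (one fold held out) + pseudo-labelled data (~16k rows)
    base = train_df[train_df["kfold"] != fold][["excerpt", "target"]]
    return pd.concat([base, pseudo_df], ignore_index=True)
</code></pre>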
<p>After this I spent two weeks struggling to find what makes a good dataset; I ended up with 20+ different datasets and still wasn't able to figure it out due to my lack of knowledge. </p>
<p>In the last two weeks of the competition, I decided to use only the 3 datasets that performed best and simply added xlnet-large, electra-large and deberta-large, and my best score became <strong>0.455</strong> with 4 files. Adding SVR: <strong>0.454</strong> with 4 files. Splitting the new dataset into 4 folds and ensembling deberta-large and electra-large got me to <strong>0.452</strong> with 8 files.</p>
<p>Few findings:</p>
<ul>
<li>Public score got worse after the first generation. I hypothesize this is due to model's learning capacity since this method leans more towards distilling and compressing model rather than the usual setup for most semi-supervised learning methods where a single model's prediction is used to generate pseudo-labels instead of an ensemble. This finding is similar to what was found in <a href="https://openaccess.thecvf.com/content_ICCV_2019/papers/Cho_On_the_Efficacy_of_Knowledge_Distillation_ICCV_2019_paper.pdf" target="_blank">On the Efficacy of Knowledge Distillation</a> by the authors even though it's a different technique: "Bigger models are not better teachers".</li>
<li>deberta-large performed best.</li>
<li>Adding noise didn't improve CV so I didn't submit.</li>
</ul>
<p>Note: I'm not an expert in the field and there might be hindsight bias so feel free to correct me.</p>
<h4>Finishing line</h4>
<ul>
<li>After ensembling a bunch of models together, I finished with <strong>0.450 (20th place)</strong> on Public LB.</li>
<li>Due to long training time (~2-3 hours/model), I used up to 6 Colab accounts in the last week LOL. I feel bad for this but that's the best I can do with my 8GB RAM laptop and Colab Pro isn't available where I live.</li>
</ul>
<h3>Summary</h3>
<p><strong>What worked</strong></p>
<ul>
<li>Attention head: +0.004.</li>
<li>Pseudo-label: +0.015.</li>
<li>SVR: +0.001.</li>
</ul>
<p><strong>What didn't work</strong><br>
Most of my experiments revolves around new datasets so I don't have a lot here.</p>
<ul>
<li>Classification: I couldn't utilize ordinal information for classification task.</li>
<li>Iterative pseudo-labeling: Score gets worse from 2nd generation (my thoughts on this above).</li>
<li>Post-processing: Interval mean matching, basically I try to match prediction's mean in each interval ([-0.5, 0.5], …) with the train set. Worked okay on validation set but not LB.</li>
</ul>
<p><a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a> xD thanks, great to hear from you, couldn't have made it without you!</p>
<p><a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> haha, means a lot coming from you, I look up to you the whole competition!</p>
<p><a href="https://www.kaggle.com/hungkhoi" target="_blank">@hungkhoi</a> haha yes I am :D.</p>
<p><a href="https://www.kaggle.com/ktgiahieu" target="_blank">@ktgiahieu</a> oh wow I didn't know that, thanks for letting me know. I'll follow you up on that ;D.</p>
<p><a href="https://www.kaggle.com/rk3256" target="_blank">@rk3256</a> thanks for the question. It means I take the train data of first fold (80% of train set) together with the new dataset to train the model. So in total I train the model with 80% * 2800 + ~16k = ~18k.</p>
<p>haha, you guys were so high up that I didn't send a request xD, hope to see you in future competitions! (a bit of a pity though :D).</p>
CommonLit Readability Prize | [Private 28th/public 14th solution]-Tri's part-custom loss | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to say thank you very much to my wonderful teammates, <a href="https://www.kaggle.com/horsek" target="_blank">@horsek</a>, and <a href="https://www.kaggle.com/lftuwujie" target="_blank">@lftuwujie</a>, I learned a lot from them during this competition. Next, thank you to all competitors, the competition host, and Kaggle for competing and organizing this amazing competition. Besides, congratulation to all teams in the gold medal zone.</p>
<p>At the beginning of this competition, I told <a href="https://www.kaggle.com/horsek" target="_blank">@horsek</a> to treat it as our chance to learn about NLP, because I had no knowledge of it and this has been my first NLP competition.</p>
<p>In this write-up, I would like to share not only my models but also my journey in this competition, from starting points to forming ideas, from the viewpoint of a newbie like me.</p>
<h1>Starting point</h1>
<p>I started my journey with this wonderful notebook (<a href="https://www.kaggle.com/rhtsingh/commonlit-readability-prize-roberta-torch-fit)" target="_blank">https://www.kaggle.com/rhtsingh/commonlit-readability-prize-roberta-torch-fit)</a>, please give him/her an upvote.</p>
<p>However, I didn't fork it but read and re-wrote my own notebook. This helped me to debug, and forming some ideas to further customize and optimize the pipeline.</p>
<p>After this step, my best single model was RoBERTa-large, which scored 0.468 on the public LB and 0.469 on the private LB; my inference notebook is <a href="https://www.kaggle.com/shinomoriaoshi/readability-roberta-large-inference?scriptVersionId=65661288" target="_blank">here</a>.</p>
<p>The settings of this model are:</p>
<ul>
<li>Epochs: 8;</li>
<li>Batch size: 8;</li>
<li>LR: 2e-5;</li>
<li>Evaluate after every 2 steps;</li>
<li>Differentiate learning rate of hidden layers;</li>
<li>Model: Backbone --> Weighted-mean/max pooling of hidden layers --> Attention pooling over document length --> Output</li>
<li>Loss: RMSE</li>
<li>Optimizer and scheduler: AdamW, get_linear_schedule_with_warmup</li>
</ul>
<h1>Custom loss</h1>
<p>In this competition (and in all of my competitions), I studied the related literature a lot. This step helped me to understand the concept of the competition and to find new ideas. The first idea I got was the Quadratic-Weighted-Kappa loss; the second was the custom Bradley-Terry loss.</p>
<h3>Quadratic-Weighted-Kappa (QWK) Loss</h3>
<p>The idea of this loss came from this <a href="https://direct.mit.edu/coli/article/47/1/141/97334/Supervised-and-Unsupervised-Neural-Approaches-to" target="_blank">paper</a>. The main idea of this loss is binning the excerpt targets into categories and training the model to classify excerpts into these categories. In my opinion, this step is equivalent to label smoothing, because it gathers excerpts with similar targets into one bin. The QWK loss is not a must; you can implement this idea with a cross-entropy loss as well. This is the implementation of the QWK loss:</p>
<pre><code>import torch
import torch.nn as nn

class QuadraticWeightedKappaLoss(nn.Module):
    def __init__(self, num_cat = 7, device = 'cpu'):
        super(QuadraticWeightedKappaLoss, self).__init__()
        self.num_cat = num_cat
        cats = torch.arange(num_cat).to(device)
        # quadratic disagreement weights between category i and category j
        self.weights = (cats.view(-1,1) - cats.view(1,-1)).pow(2) / (num_cat - 1)**2

    def _confusion_matrix(self, pred_cat, true_cat):
        confusion_matrix = torch.zeros((self.num_cat, self.num_cat)).to(pred_cat.device)
        for t, p in zip(true_cat.view(-1), pred_cat.view(-1)):
            confusion_matrix[t.long(), p.long()] += 1
        return confusion_matrix

    def forward(self, pred_cat, true_cat):
        # Confusion matrix
        O = self._confusion_matrix(pred_cat, true_cat)
        # Count elements in each category
        true_hist = torch.bincount(true_cat, minlength = self.num_cat)
        pred_hist = torch.bincount(pred_cat, minlength = self.num_cat)
        # Expected values
        E = torch.outer(true_hist, pred_hist)
        # Normalization
        O = O / torch.sum(O)
        E = E / torch.sum(E)
        # Weighted Kappa
        numerator = torch.sum(self.weights * O)
        denominator = torch.sum(self.weights * E)
        return numerator / denominator
</code></pre>
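<p>The binning step itself is not shown above; a small sketch of how the continuous targets could be bucketed into the 7 categories (the use of <code>pd.cut</code> here is an assumption based on the description):</p>
<pre><code>import pandas as pd

# Possible binning of the continuous target into 7 ordinal categories
train["target_cat"] = pd.cut(train["target"], bins=7, labels=False)  # integer codes 0..6
# these category ids are what the QWK loss above compares
# (pred_cat = predicted category from the classification head, true_cat = target_cat)
</code></pre>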
<p>My best model after this step is again a RoBERTa-large with the following setting,</p>
<ul>
<li>Epochs: 8;</li>
<li>Batch size: 8;</li>
<li>LR: 2e-5;</li>
<li>Evaluate after every 2 steps;</li>
<li>Differentiate learning rate of hidden layers;</li>
<li>Model: Backbone --> Weighted-mean/max pooling of hidden layers --> Attention pooling over document length --> Output</li>
<li>Loss: RMSE + QWK</li>
<li>Optimizer and scheduler: AdamW, get_linear_schedule_with_warmup</li>
</ul>
<p>It scored 0.466/0.466 on the public/private LB.</p>
<h3>Bradley-Terry (BT) Loss</h3>
<p>This is so far the most effective trick and my biggest contribution to my team. The idea came from the observation about the benchmark excerpt whose target and standard error are both 0. From the information shared by the host, each excerpt in the corpus is compared to some others. Then, by using the Bradley-Terry method, their prior targets are estimated. Finally, the target of a chosen benchmark excerpt is subtracted from the prior targets of all excerpts to obtain the data we have.</p>
<p>From this observation, I designed my models accordingly. The model architecture is <a href="https://www.kaggle.com/shinomoriaoshi/readability-model-architecture" target="_blank">here</a> (sorry that I couldn't insert an image to a post, so I uploaded my model architecture in another notebook).</p>
<p>The implementation of the custom BT loss is as follow,</p>
<pre><code>import torch
import torch.nn as nn

class BradleyTerryLoss(nn.Module):
    def __init__(self):
        super(BradleyTerryLoss, self).__init__()

    def forward(self, pred_mean, true_mean):
        batch_size = len(pred_mean)
        # pairwise differences within the batch: the sign encodes which excerpt is easier
        true_comparison = true_mean.view(-1,1) - true_mean.view(1,-1)
        pred_comparison = pred_mean.view(-1,1) - pred_mean.view(1,-1)
        # logistic (Bradley-Terry style) loss over the lower-triangular pairs, averaged
        return torch.log(1 + torch.tril(torch.exp(-true_comparison * pred_comparison))).sum() / (batch_size * (batch_size - 1) / 2)
</code></pre>
<p>With this structure, the predicted target of the benchmark excerpt is always 0, which is consistent with the truth. This loss also allows models to compare the targets to each other. We applied this custom loss with many models of my teams.</p>
<p>This custom loss gave me the best single model so far with 0.461/0.462 on the public and private LB.<br>
My best model after this step is again a RoBERTa-large with the following setting,</p>
<ul>
<li>Epochs: 20, stop after 8 epochs;</li>
<li>Batch size: 8;</li>
<li>LR: 2e-5;</li>
<li>Evaluate after every 10 steps;</li>
<li>Differentiate learning rate of hidden layers;</li>
<li>Model: Backbone --> Weighted-mean/max pooling of hidden layers --> Attention pooling over document length --> Output</li>
<li>Loss: RMSE + QWK + BT</li>
<li>Optimizer and scheduler: AdamW, get_linear_schedule_with_warmup</li>
</ul>
<p>Using this setting, I trained my models with different types of backbone. The following summarizes my best models according to the public LB for each backbone (public LB/private LB):</p>
<ul>
<li>RoBERTa-large with weighted-mean/max pooling: 0.461/0.462;</li>
<li>RoBERTa-large with max-pooling only: 0.463/0.466;</li>
<li>GPT2-medium: 0.492/0.508;</li>
<li>XLNet-large-cased: 0.468/0.473;</li>
<li>ALBERT-xlarge-v2: 0.502/0.500;</li>
<li>ELECTRA-large: 0.468/0.476;</li>
<li>Funnel-transformer/large: 0.477/0.477;</li>
<li>BART-large: 0.487/0.491;</li>
<li>DeBERTa-large: 0.463/0.468;</li>
</ul>
<p>For each backbone, I sometimes changed the setting (evaluate less frequently, omit the QWK loss, training with another fold splitting, etc.) to avoid overfitting.</p>
<p>Finally, I simply took the equal average of all of my 13 models, which gave me 0.453/0.456 on the public/private LB. Ensembling my average model with my teammates' models gave us 0.448/0.452 on the public/private LB.</p>
<h1>What didn't work for me?</h1>
<ul>
<li>Data augmentation;</li>
<li>Linguistic features;</li>
<li>HAN</li>
</ul>
<h1>Final words</h1>
<p>Although the result is not as good as I expected, I still enjoyed the competition a lot. From nearly 0 in NLP, after 3 months, I learned a lot from my teammates and fellow competitors. I hope to meet you in future competitions!</p>
<h1>Notebooks</h1>
<ul>
<li>Training of my models: <a href="https://www.kaggle.com/shinomoriaoshi/readability-training-notebook" target="_blank">https://www.kaggle.com/shinomoriaoshi/readability-training-notebook</a></li>
<li>Inference of my stacking model: <a href="https://www.kaggle.com/shinomoriaoshi/readability-stacking-inference" target="_blank">https://www.kaggle.com/shinomoriaoshi/readability-stacking-inference</a></li>
<li>OOF notebook of a RoBERTa-large model: <a href="https://www.kaggle.com/shinomoriaoshi/readability-roberta-large-oof" target="_blank">https://www.kaggle.com/shinomoriaoshi/readability-roberta-large-oof</a></li>
</ul>
<p>Thank you, I am currently working in an academic environment, so I guess reading literature as my first step is just my habit, haha.</p>
<p>Thank you for your kind words, Jie. Me too, I am looking forward to working with you in the future.</p>
<p>Thank you for your kind words. Haha, so now we should be the ones who regret about that. Anyway, congratulation with your solo gold. I look forward to working with you in future competitions.</p>
<p>Thank you very much. I couldn't make HAN model break 0.50x. I will study your solution for sure.</p>
<p>Haha, after seeing the improvement, we commit to this competition more seriously without realizing it. Let's get a gold next time!</p> |
CommonLit Readability Prize | [Private 47th / Public 55th] Stable local cv & public leaderboard using stacking with linear models and forward selection | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In this competition, our team "3 of a kind" (@steubk <a href="https://www.kaggle.com/pietromarinelli" target="_blank">@pietromarinelli</a> <a href="https://www.kaggle.com/lucamassaron" target="_blank">@lucamassaron</a>) used a stacking strategy, trying to keep our solution as simple as possible (given the circumstances of the competition).</p>
<ol>
<li><p>we used a stratified 5-fold CV strategy based on the target distribution, which was divided into 25 bins using pd.cut. Our local CV always followed the LB improvements, and in the end we had no doubt about whether to choose the LB or the local CV. We chose our two final submissions based on their diversity. We always recorded the OOF predictions for every model and stored each fold's model.</p></li>
<li><p>We experimented with different models and found that Roberta-base, Roberta-large, Studio Ousia Luke-large and MS Deroberta-large worked best for the problem. Here are our best models and their OOF RMSE:</p></li>
</ol>
<table>
<thead>
<tr>
<th>model</th>
<th>oof rmse</th>
</tr>
</thead>
<tbody>
<tr>
<td>clrpdebertalargeppln4attheadwd</td>
<td>0.47696</td>
</tr>
<tr>
<td>robertalarge-read2vec</td>
<td>0.50651</td>
</tr>
<tr>
<td>litmodel-dataset</td>
<td>0.47092</td>
</tr>
<tr>
<td>clrp-roberta-large-1h-att-head-readability-pt</td>
<td>0.49076</td>
</tr>
<tr>
<td>lightweight-roberta</td>
<td>0.47214</td>
</tr>
<tr>
<td>clrp-deberta-large-4-se-wd</td>
<td>0.48815</td>
</tr>
<tr>
<td>litmodel-deroberta</td>
<td>0.47007</td>
</tr>
<tr>
<td>clrp-roberta-large-2f-se</td>
<td>0.48927</td>
</tr>
<tr>
<td>clrp-deberta-large-ppln4-atthead</td>
<td>0.48395</td>
</tr>
<tr>
<td>clrprobertalargeppln4attheadwdr3l</td>
<td>0.48865</td>
</tr>
<tr>
<td>clrp-deberta-large-4-se</td>
<td>0.49059</td>
</tr>
<tr>
<td>litmodel-luke</td>
<td>0.47827</td>
</tr>
<tr>
<td>deroberta-read2vec</td>
<td>0.50054</td>
</tr>
<tr>
<td>clrp-roberta-large-2h-atthead-se</td>
<td>0.49703</td>
</tr>
<tr>
<td>clrp-deberta-large-2-se</td>
<td>0.48962</td>
</tr>
<tr>
<td>litmodel-roberta-large</td>
<td>0.48182</td>
</tr>
<tr>
<td>roberta-large-outlier-trim</td>
<td>0.50249</td>
</tr>
<tr>
<td>litmodel-deroberta2</td>
<td>0.47150</td>
</tr>
<tr>
<td>studioousialukelarge</td>
<td>0.49649</td>
</tr>
</tbody>
</table>
<p>We tried pre-training, but the results didn't improve (probably we needed a larger and better-selected dataset), so we gave up on that.</p>
<ol>
<li><p>We used just two different methods to deal with the output of our transformer models (CLRP and Lightweight, as you can find in the public kernels), using single or multiple attention heads, but we exercised our creativity in figuring out model heads. We found that piping some readability measures into the final dense layers helped the result a bit. We came up with the features to concatenate after doing a residual analysis on our OOF predictions and finding out that they still correlated with indexes such as Kincaid, FleschReadingEase, GunningFogIndex, SMOGIndex and with simple measurements such as the number of syllables or the ratio of complex words to the total number of words. We also tried to exclude certain examples from the training in order to obtain more diverse models.</p></li>
<li><p>We experimented with different learning rates and with freezing part of the layers: we got the best results using the AdamW optimizer, with a warm-up epoch and then decreasing and re-increasing the learning rate using the cosine scheduler. After the first epoch we also started perturbing the target, drawing for a part of it another value based on the standard error provided.</p></li>
<li><p>Knowing that the best epochs were from 2 to 5, we evaluated often during this phase and retained the state of the model with the best validation score, even if the epoch wasn't completed.</p></li>
<li><p>After having a sufficient number of models we stacked them as features together with all the readability measurements provided by the readability package. We used a Ridge regression, a linear SVR and an ElasticNet, tuning both their hyperparameters and doing forward selection of the features to be included (a small sketch of such a forward-selection loop is shown after this list). The forward selection helped our final ensembles to be both diverse and well performing, getting better results than finding weights by hand. Finally we blended the three models to obtain our prediction. This move gave us the boost to climb up in the last days from position 190th to 55th.</p></li>
</ol>
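<p>For illustration, here is a minimal sketch of a greedy forward-selection loop of the kind described above (the column names, the Ridge alpha and the 5-fold scoring are placeholders, not our exact code):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def forward_select(oof_df, y, candidate_cols):
    # greedily add the OOF/readability column that lowers CV RMSE the most
    selected, best_rmse = [], np.inf
    improved = True
    while improved:
        improved = False
        for col in candidate_cols:
            if col in selected:
                continue
            cols = selected + [col]
            rmse = -cross_val_score(
                Ridge(alpha=1.0), oof_df[cols], y,
                scoring="neg_root_mean_squared_error", cv=5
            ).mean()
            if best_rmse > rmse:
                best_rmse, best_col, improved = rmse, col, True
        if improved:
            selected.append(best_col)
    return selected, best_rmse
</code></pre>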
<p>In conclusion, we indeed found this competition interesting in order to figure out what to do when you have a small textual datasets and noisy targets.</p> |
CommonLit Readability Prize | [Private 48th / Public 25th] Notebooks and Codes | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the comment! Congrats too.<br>
I'll be happy if my codes help you.</p>
<p>thanks !!<br>
I'll continue challenging !!</p>
<p>Thanks for the comment!<br>
I'll be happy if my model help you.</p>
<p>thank you!</p>
<p>Thanks for the comment!<br>
I'll be happy if my codes help you.</p>
<p>Congrats to all prizewinners and gold holders.<br>
I'm really sorry I couldn't reach a solo gold…<br>
I'm sharing my work: my single best notebook, my final notebook and my codes.</p>
<ul>
<li><p>Single best<br>
<a href="https://www.kaggle.com/kaerunantoka/commonlit-110-inf?scriptVersionId=67789829" target="_blank">https://www.kaggle.com/kaerunantoka/commonlit-110-inf?scriptVersionId=67789829</a></p></li>
<li><p>Final<br>
<a href="https://www.kaggle.com/kaerunantoka/commonlit-2nd-stacking-57?scriptVersionId=69559317" target="_blank">https://www.kaggle.com/kaerunantoka/commonlit-2nd-stacking-57?scriptVersionId=69559317</a></p></li>
<li><p>Repo<br>
<a href="https://github.com/osuossu8/CommonLitReadabilityPrize" target="_blank">https://github.com/osuossu8/CommonLitReadabilityPrize</a></p></li>
</ul> |
CommonLit Readability Prize | 🏅️ 4th Place Solution (0.447) 🏅️ | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit Readability Prize
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! I actually forgot to mention KL Loss above.</p>
<p>I tried the loss from <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/239421#1310794" target="_blank">this discussion</a> but didn't really see gains on CV. However, there were some models which were difficult to train (the loss would suddenly jump up after a couple of epochs) and this loss function appeared to help slightly. MSE was my default loss for most models though</p>
<p>I'd like to thank Kaggle and CommonLit for hosting this competition, as well has everyone here who has shared such great discussions and notebooks. I'm still in shock as I've always dreamed of a solo gold but never thought I'd actually get here!</p>
<h1>CV Strategy</h1>
<p>This was my second NLP competition after the <a href="https://www.kaggle.com/c/tweet-sentiment-extraction" target="_blank">Tweet Sentiment Extraction</a> competition last year. There was some similarity between the two, namely a) a small dataset and b) a fairly noisy target. The trick with the TSE competition was to use a 5-fold CV but to average multiple seeds (I used 5 seeds for a total of 25 runs per experiment, which I would then average). <br>
To create my 5 folds, I used <a href="https://www.kaggle.com/abhishek/step-1-create-folds" target="_blank">Abhishek's binning and stratification</a> method. On average I was seeing about a 0.01 gap between CV & public LB, but the correlation was good. I didn't bother submitting the majority of my single models as I was happy with my CV setup and I'm lazy 😂️.<br>
My best performing model was a deberta-large with a CV of 0.4738 (average of 5 seeds).</p>
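<p>A minimal sketch of this CV setup (the bin count and column names below are illustrative assumptions):</p>
<pre><code>import pandas as pd
from sklearn.model_selection import StratifiedKFold

def make_folds(df, n_splits=5, seed=0, n_bins=10):
    # stratify folds on a binned version of the continuous target
    df = df.copy()
    df["bin"] = pd.cut(df["target"], bins=n_bins, labels=False)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    df["fold"] = -1
    for fold, (_, val_idx) in enumerate(skf.split(df, df["bin"])):
        df.loc[df.index[val_idx], "fold"] = fold
    return df

# run the 5 folds for several seeds and average the resulting RMSEs to score an experiment
</code></pre>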
<h1>Pipeline</h1>
<h2>Transformers</h2>
<p>For all of my transformer models I used the <a href="https://www.kaggle.com/gogo827jz/roberta-model-parallel-fold-training-on-tpu" target="_blank"><code>AttentionBlock</code> from this notebook</a>, but applied it to the last hidden state (the full token sequence) of the transformer.</p>
<pre><code>n_hidden = self.config.hidden_size  # from HF config
self.seq_attn_head = nn.Sequential(
    nn.LayerNorm(n_hidden),
    AttentionBlock(n_hidden, n_hidden, 1),
    # the final layer also receives the extra textstat features,
    # and outputs 2 values when the KL loss is used, otherwise 1
    nn.Linear(n_hidden + n_additional_features, 2 if kl_loss else 1),
)
</code></pre>
<h2>Additional features</h2>
<p>I also added a couple of features using <code>textstat</code>, <code>flesch_reading_ease</code> and <code>smog_index</code> and concatenated them to the output of the AttentionBlock before the regression head. These brought a tiny but repeatable improvement to my CV, but I'm not sure if this had any major impact on the final performance. I did a test during the late stages of the competition where I concatenated predictions of other models, and this didn't work any better than the ensemble methods below, which makes me think these features were doing very little.</p>
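<p>A small sketch of computing these two features with <code>textstat</code> (the helper name is a placeholder; only the two feature calls are taken from the description above):</p>
<pre><code>import textstat
import torch

def readability_features(text):
    # the two extra features concatenated to the AttentionBlock output
    return torch.tensor(
        [textstat.flesch_reading_ease(text), textstat.smog_index(text)],
        dtype=torch.float32,
    )
</code></pre>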
<h2>Training config</h2>
<ul>
<li>Optimiser: AdamW<ul>
<li>Weight decay: 1.0</li>
<li>LR: 0.00005 or 0.000025 (depending on batch size)</li></ul></li>
<li>Epochs: 6 with SWA after epoch 3</li>
<li>Batch size: 16 (or 12 for the larger models)</li>
<li>Validation/Checkpointing every 5 steps</li>
<li>Loss: MSE</li>
<li>No dropout</li>
</ul>
<h2>Other tools</h2>
<ul>
<li>Logging: combination of Weights & Biases and Google Sheets (to track multiple seeds)</li>
<li>Model Store: Google Cloud Bucket and transfer to a Kaggle Notebook (to /kaggle/output/). I found this was much more reliable than regular dataset uploads, plus you won't run into storage limitations.</li>
<li>PyTorch Lightning</li>
<li>Shell script using Kaggle API to measure notebook execution times</li>
</ul>
<h1>Ensemble</h1>
<h2>Selection</h2>
<p>I started with all of my models and iteratively removed the worst models until I could get down to the GPU time limit. (I found that this worked better than greedily adding models). I think this worked well in the end since the selection process would select models that will help the overall ensemble instead of models with the best CV.<br>
I used 3 main methods for ensembling:</p>
<ul>
<li><code>RidgeCV</code> (which uses <code>LeaveOneOut</code> cross validation)</li>
<li><code>BayesianRidgeRegression</code> (with a LOO CV loop)</li>
<li>The <a href="https://kaggler.readthedocs.io/en/latest/_modules/kaggler/ensemble/linear.html#netflix" target="_blank">Netflix method from 2009 BigChaos solution</a> (<a href="https://www.netflixprize.com/assets/GrandPrize2009_BPC_BigChaos.pdf" target="_blank">paper</a>). This method uses RMSE scores from CV to allocate ensemble weights.</li>
</ul>
<h2>Final submission</h2>
<p>My final ensembles usually had about 38 models to fit into the 3 hour run time and had quite a range of model architectures and CV scores.<br>
I chose the RidgeCV & BayesianRidgeRegression methods as my final 2 submissions with the former scoring 0.447. <br>
Interestingly, the Netflix method scored 0.446, but I had more faith in the CV scores from LOO CV, as I was worried about leakage using the Netflix method.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>microsoft/deberta-large</td>
<td>0.484005</td>
</tr>
<tr>
<td>microsoft/deberta-base</td>
<td>0.486186</td>
</tr>
<tr>
<td>deepset/roberta-base-squad2</td>
<td>0.495078</td>
</tr>
<tr>
<td>deepset/roberta-base-squad2</td>
<td>0.501851</td>
</tr>
<tr>
<td>deepset/roberta-large-squad2</td>
<td>0.482696</td>
</tr>
<tr>
<td>distilroberta-base</td>
<td>0.505654</td>
</tr>
<tr>
<td>funnel-transformer/large-base</td>
<td>0.495666</td>
</tr>
<tr>
<td>bert-large-uncased</td>
<td>0.524264</td>
</tr>
<tr>
<td>bert-large-uncased</td>
<td>0.538173</td>
</tr>
<tr>
<td>facebook/bart-base</td>
<td>0.534491</td>
</tr>
<tr>
<td>facebook/bart-base</td>
<td>0.556662</td>
</tr>
<tr>
<td>facebook/bart-large</td>
<td>0.543126</td>
</tr>
<tr>
<td>facebook/bart-large</td>
<td>0.526896</td>
</tr>
<tr>
<td>roberta-base</td>
<td>0.49835</td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>0.474008</td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>0.471707</td>
</tr>
<tr>
<td>distilroberta-base</td>
<td>0.502802</td>
</tr>
<tr>
<td>albert-large-v2</td>
<td>0.545104</td>
</tr>
<tr>
<td>albert-large-v2</td>
<td>0.506411</td>
</tr>
<tr>
<td>funnel-transformer/large-base</td>
<td>0.493789</td>
</tr>
<tr>
<td>funnel-transformer/large-base</td>
<td>0.491098</td>
</tr>
<tr>
<td>microsoft/deberta-base</td>
<td>0.499093</td>
</tr>
<tr>
<td>microsoft/deberta-base</td>
<td>0.509152</td>
</tr>
<tr>
<td>deepset/roberta-base-squad2</td>
<td>0.490829</td>
</tr>
<tr>
<td>deepset/roberta-large-squad2</td>
<td>0.489583</td>
</tr>
<tr>
<td>deepset/roberta-large-squad2</td>
<td>0.494185</td>
</tr>
<tr>
<td>funnel-transformer/large-base</td>
<td>0.523333</td>
</tr>
<tr>
<td>albert-large-v2</td>
<td>0.508817</td>
</tr>
<tr>
<td>albert-large-v2</td>
<td>0.529876</td>
</tr>
<tr>
<td>sentence-transformers/LaBSE</td>
<td>0.525731</td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>0.471197</td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>0.475243</td>
</tr>
<tr>
<td>bert-large-cased-whole-word-masking</td>
<td>0.514011</td>
</tr>
<tr>
<td>bert-large-cased</td>
<td>0.505559</td>
</tr>
<tr>
<td>xlm-roberta-large</td>
<td>0.505576</td>
</tr>
<tr>
<td>facebook/bart-base</td>
<td>0.530183</td>
</tr>
<tr>
<td>google/electra-large-discriminator</td>
<td>0.514033</td>
</tr>
<tr>
<td>sentence-transformers/paraphrase-mpnet-base-v2</td>
<td>0.510096</td>
</tr>
</tbody>
</table>
<h1>Code</h1>
<ul>
<li>GitHub: <a href="https://github.com/Anjum48/commonlitreadabilityprize" target="_blank">https://github.com/Anjum48/commonlitreadabilityprize</a></li>
<li>Submission code: <a href="https://www.kaggle.com/anjum48/commonlit-submission?scriptVersionId=67928130" target="_blank">https://www.kaggle.com/anjum48/commonlit-submission?scriptVersionId=67928130</a></li>
</ul>
<p>Thanks Tim! I averaged the 5 folds prior to the ensemble and treated the 5 seeds as independent models for the purpose of the ensemble (but only selected the models that improved the ensemble). This meant there were 38 * 5 = 195 model checkpoints.</p>
<p>My final RidgeCV score was 0.44254 with an alpha of 0.01</p>
<p>I used the method described by Nischay Dhankhar in this <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/249689" target="_blank">discussion</a>.</p>
<p>Essentially anything that gets saved into the /kaggle/output/ section of a notebook doesn't count towards your 100GB (seems like a weird loophole but it works). I just connected my Google Cloud Bucket and downloaded my model weights to that folder</p>
<p>Code repository and submission notebook is now public :)</p>
<p>Thanks very much <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>! Normally when ensembling I try to keep the same folds for each model so that I can use the OOF predictions to create a blending model. In this case, since I had models trained with different seeds for CV this made things a little tricky and I wasn't sure if a normal 5 fold CV would be reliable.</p>
<p>My default method was to build a dataset using all my OOF predictions and then use <code>RidgeCV</code>. If <code>cv=None</code>, by default it uses an efficient Leave-One-Out Cross-Validation (the code is <a href="https://github.com/scikit-learn/scikit-learn/blob/2beed55847ee70d363bdbfe14ee4401438fba057/sklearn/linear_model/_ridge.py#L1076" target="_blank">here</a> and it's quite interesting to see how it works). My understanding is that it uses LOO CV to select the best alpha and then fit the weights using the <strong>full</strong> X dataset. The CV score is then the <code>best_score_</code> attribute (which is negative MSE).</p>
<p>I also tried to replicate this with other sklearn linear models, but using a manual <code>LeaveOneOut</code> CV loop, which was slower, but still gave me good CV/LB correlation. It was these CV scores I used to iteratively remove models for the final ensemble. In the final submission notebook I would just use fit & predict using the OOFs for the chosen models.</p>
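<p>As a minimal sketch of this scoring step (the alphas and names below are placeholders, not my exact settings):</p>
<pre><code>import numpy as np
from sklearn.linear_model import RidgeCV

def ensemble_cv_rmse(oof_matrix, y):
    # oof_matrix: (n_samples, n_models) out-of-fold predictions, folds averaged per model
    ridge = RidgeCV(alphas=[0.01, 0.1, 1.0], scoring="neg_mean_squared_error")
    ridge.fit(oof_matrix, y)
    # best_score_ comes from the efficient leave-one-out CV inside RidgeCV
    return np.sqrt(-ridge.best_score_), ridge

# iteratively drop the model whose removal improves this RMSE the most,
# until the ensemble fits in the GPU time budget
</code></pre>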
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>BayesianRidge</td>
<td>0.44256</td>
<td>0.452</td>
<td>0.448</td>
</tr>
<tr>
<td>RidgeCV</td>
<td>0.44254</td>
<td>0.451</td>
<td>0.447</td>
</tr>
</tbody>
</table>
<p>Perhaps a picture will help :)</p>
<p>Each "timestamp" or row is a single model run with a single seed run for 5 folds. Each "group name" is a group of 5 identical models run with different seeds.</p>
<p>I would take the mean RMSE of the group (25 model checkpoints) to determine if a change made something better or worse. This is how I calculated my CV score.</p>
<p>But I would treat each timestamp as a unique model in my ensemble (averaging the 5 folds), but only using that model if it improved my ensemble CV.</p>
<p><img src="https://i.imgur.com/sUBxnBI.png" alt=""></p>
<p>Hi <a href="https://www.kaggle.com/spy1234567" target="_blank">@spy1234567</a>, yes I used 25 model checkpoints to determine if a change to heads/features/hyperparams etc. actually made an improvement that was worth keeping.</p>
<p>But for my final prediction, I treated each of the 5 folds as a single model and averaged the predictions. So in the table above, you can see some models appear multiple times due to being from different seeds (or some other change).</p> |
Cornell Birdcall Identification | 101st Place Solution (Top 8%) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! Congrats to you too on your great placing.</p>
<p>This competition was a toughie. Special thanks to my team members for their support <a href="https://www.kaggle.com/doanquanvietnamca" target="_blank">@doanquanvietnamca</a> , <a href="https://www.kaggle.com/truonghoang" target="_blank">@truonghoang</a> throughout this competition.</p>
<p>As this was my first audio competition, in the beginning I tried effnetb3 using fastai as a baseline, with precomputed spectrograms. This was able to beat the public LB baseline, giving a score of 0.554. However, I was unable to improve this model. My team and I subsequently tried several other models in the process.</p>
<p><strong>Models tried:</strong></p>
<ol>
<li>EfficientNetB1</li>
<li>ResNest50 (best model)</li>
<li>MobileNetV2</li>
<li>SED model, with effnetb0 backbone</li>
<li>EfficientNetB3 (with pretrained noisystudent weights)</li>
</ol>
<p>ResNest50 performed well for this task, with the same classifier head provided by <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a>. </p>
<p>Augmentations also worked well for us, namely:</p>
<ol>
<li>Adding gaussian noise</li>
<li>Adding pink noise</li>
<li>Adding background noise </li>
</ol>
<p>We also tried a model with 265 classes, with nocall as a class but that didn't work better than a normal 264 classifier model.</p>
<p>We tried incorporating secondary labels to perform the multilabel classification task: it gave a decent CV (up to 0.7x validation micro score), faltered on the public LB (0.569), but was the best performing model on the private LB (0.616). </p>
<p>We were unable to bridge the gap between CV and LB; the correlation between the two was not strong, so in the end we had to trust our methodology and the public LB.</p>
<p>We settled for an ensemble of models between ResNest50 and MobileNetV2 which landed us with the bronze solution.</p>
<p><strong>Summary of things that did not work for us:</strong></p>
<ol>
<li>SED ( could not get it to work somehow, will have to learn from training solution by <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> to understand more why)</li>
<li>265 class classifier (including nocall class), decreased public and private LB</li>
<li>Denoising on test set</li>
<li>F1Loss (possibly due to batch size being too small)</li>
<li>FocalLoss</li>
</ol>
<p>Special thanks to the organizers of this competition and fellow competitors whom I learn loads from (@hidehisaarai1213 , <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>, <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a>, <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a>). Hope to join another audio competition to further my learning. Congrats to all winners in this competition.</p> |
Cornell Birdcall Identification | 10th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Acknowledgments</h2>
<p>First of all, thanks to the organizers for this challenging and fascinating competition.<br>
I am grateful for the relatively quick answers in the discussions (external data, domain knowledge);<br>
it was certainly helpful.</p>
<p>Special thanks to</p>
<ul>
<li>Hidehisa Arai for showing how to use PANNS effectively</li>
<li>Qiuqiang Kong et al for PANNS repo and pretrained models</li>
<li>Vopani for scraping and maintaining the external XenoCanto dataset</li>
<li>Jan Schlüter and Mario Lasseck for their previous BirdCLEF winning papers </li>
</ul>
<h2>The beginning</h2>
<p>Seven years ago I already participated in a bird song detection challenge on Kaggle.<br>
At that time we had only a few hundred 10-second recordings to train on and a similar multiclass multilabel problem, but with only 19 species.<br>
I was able to win that competition with computer vision template matching and Random Forests.<br>
I thought it would be a quick & easy experiment to beat that with the available 40K bird recordings and all the available pretrained ImageNet models.<br>
Well it was not.</p>
<p>Possible reasons</p>
<ul>
<li>Different sample rate (16kHz vs 32kHz)</li>
<li>Soundscape vs Xenocanto</li>
<li>The MLSP train-test split was a random split across soundscapes, so it was possible to overfit to the same recording (e.g. crickets & rain -> Hermit Warbler)</li>
<li>Different spectrogram/noise distribution</li>
</ul>
<h2>Ornithology and LB Probing</h2>
<p>In the beginning of the competition I was not able to submit meaningful results so I tried to understand the North American bird population better.<br>
Just by submitting individual birds one would expect ~0.001 LB score with equal bird distribution.<br>
From ebird.org observation data I was able to rank the 264 species based on their unique observations during the last two years.<br>
It does not necessarily reflect the distribution in the Public/Private test set, but I found it better than the number of XC recordings.</p>
<p>E.g.</p>
<ul>
<li>Red Crossbill 1223 XC recordings 84K observations 0.000 LB Score</li>
<li>White-crowned Sparrow 474 XC recordings 1.1M observations 0.1 LB Score (!)</li>
</ul>
<h2>Data Preparation</h2>
<p>I resampled everything to 32kHz, split the first 2 minutes of each recording into 10-second chunks and saved them as .npy arrays.<br>
I used the extended dataset as well.<br>
4-fold cross-validation was used, stratified on author and created_at, to try to avoid having the same birds in different folds.<br>
For early stopping I saved the best weights based on XC validation and on BirdCLEF validation as well.</p>
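<p>A minimal sketch of this preprocessing step (function and path names are placeholders):</p>
<pre><code>import numpy as np
import librosa

SR = 32000
CHUNK = 10 * SR  # 10-second chunks at 32 kHz

def chunk_recording(path, out_prefix, max_seconds=120):
    # resample to 32 kHz, keep the first 2 minutes, save each 10 s chunk as .npy
    y, _ = librosa.load(path, sr=SR, mono=True, duration=max_seconds)
    for i in range(0, len(y) - CHUNK + 1, CHUNK):
        np.save(f"{out_prefix}_{i // CHUNK}.npy", y[i:i + CHUNK].astype(np.float32))
</code></pre>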
<h2>Augmentation</h2>
<p>I only used additive noises.</p>
<ul>
<li>freefield1010</li>
<li>warblrb10k</li>
<li>BirdVox-DCASE-20k</li>
<li>Animal Sound Archive published by Museum für Naturkunde Berlin</li>
</ul>
<p>Probably should have tried synthetic noise generation too.</p>
<h2>Architecture</h2>
<p>I ended up with a slightly modified CNN14 (128 mel bins, mean/std standardised).<br>
The models were relatively quick to train on an Nvidia Tesla T4; training a single model took 3-8 hours.<br>
I also tried PANN ResNet38, Cnn14_DecisionLevelAtt and ImageNet-pretrained ResNet50, but without proper validation I got mixed results…</p>
<p>The dataloader handled the additive augmentations for the waveforms</p>
<ul>
<li>Add multiple possible 1-2-3 birds with multi-class setting</li>
<li>Add same class chunks</li>
<li>Add noise</li>
<li>Add animal sound</li>
</ul>
<p>the GPU created the spectrograms for the batches.</p>
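<p>A rough sketch of the additive mixing done in the dataloader (the gain and normalisation choices are illustrative, not the exact values used):</p>
<pre><code>import numpy as np

def mix_waveforms(chunks, label_indices, noise=None, noise_gain=0.3):
    # mix 1-3 bird chunks plus an optional noise chunk, take the union of labels
    y = np.zeros_like(chunks[0])
    target = np.zeros(264, dtype=np.float32)
    for chunk, label_idx in zip(chunks, label_indices):
        y += chunk / (np.abs(chunk).max() + 1e-8)   # rough per-chunk normalisation
        target[label_idx] = 1.0
    if noise is not None:
        y += noise_gain * noise / (np.abs(noise).max() + 1e-8)
    return y, target
</code></pre>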
<h2>Blending</h2>
<p>During the last weekend I fixed my whole training pipeline and rented a V100 to retrain a few final models and run some additional experiments.<br>
On Sunday evening I submitted my first blended model with a quite disappointing 0.570 public leaderboard result.<br>
Actually it would have been enough for my final 10th place, with a private score of 0.649.</p>
<p>In the last two days I made some desperate submissions with more models and varying thresholds<br>
(e.g. increasing the thresholds of west coast birds, reducing the thresholds for common birds).<br>
They did improve my public LB score and fortunately they neither improved nor hurt the private score.<br>
Probably with a few more submissions I would have started to overfit…</p>
<h2>What did not work but I thought it would…</h2>
<ul>
<li>Using secondary labels to improve primary labels with oof predictions</li>
<li>Using a separate nocall classifier</li>
<li>Utilizing the apriori knowledge that some birds are more frequent than others</li>
<li>Blending normalized and unnormalized models</li>
<li>Mixup</li>
<li>More than 128 Mel bins</li>
</ul>
<p>Thanks, actually your 2w marching was more impressive. I started the comp quite early :)</p>
<p>I have two possible explanations for that :) </p>
<p>This was my first competition with pytorch and it took me some time to implement/train models that worked well in the past competitions (AudioSet, BirdCLEF) </p>
<p>Well, it took more time to make them work for this challenge :)</p>
<p>I spent most of my submissions for experiments to understand the test data:</p>
<ul>
<li>LB probing for individual species</li>
<li>Nocall classifier experiments</li>
<li>Test spectrogram statistics</li>
</ul>
<p>This helped me to understand what is the main difference between test and train.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/1014715/17029/std.png" alt=""></p>
<p>Yep, I should have labeled my axis :) The dashed lines are mean of 90 percentiles the solid lines are the mean for each mel band. I think fmin=300 fmax=11000 was used. </p>
<p>I had to kill/delete all gcp instances and data so it would be difficult to recreate the chart…</p> |
Cornell Birdcall Identification | 13th place digest | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/watzisname" target="_blank">@watzisname</a> On model training with cross-validation you can keep OOF (out-of-fold) predictions for each validation fold (nothing new here). You can store OOFs based on a sliding window (e.g. width=5s, step=1s) over each validation audio file. Do it for several models (with good LB) and you have, for each time slice of each audio, the probabilities of the ground truth. Average them and keep the best slices for each audio (e.g. probs > 0.9 or 0.7). That's way you've generated a subset with cleaned labels to you can use in a next step. That is what we called hot time slices.</p>
<p>Thanks, we had our best public LB submission with 0.647 on private LB (last gold) but we've selected the one with 0.646 that had more diversity in ensemble.</p>
<p>Good labels, good models 😏. There are threads in the forum from <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> about attempts/approaches to find such good time slices.</p>
<p>For 5-fold CV you have 5 OOF sets, so in the end you've covered the full dataset. But it's true that the stage-2 dataset is smaller than stage 1. For us, it was reduced from 40k audio files to 32k because we had 8k files with bad predictions.</p>
<p>It was my first audio competition and I really enjoyed it! Thanks to the organizers and Kaggle for it.<br>
Thanks to my teammates <a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a> <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> for the great collaboration; it was a pleasure and I've learnt a lot again.</p>
<p>Here is a digest of our solution.</p>
<p><strong>Main Pipeline</strong> (see image below):</p>
<ul>
<li>MEL-Spectrogram (torchaudio) with differents settings to capture most birds frequency shapes.</li>
<li>Most processing/augmentation occurs in GPU to speed up training and inference.</li>
</ul>
<p><strong>Time Augmentations</strong>:</p>
<ul>
<li>Pink Noise/White noise</li>
<li>Time roll</li>
<li>Volume gain</li>
<li>PitchShift</li>
<li>Additional bandpass filters,lowcut (1kHz-2.5kHz), highcut (10kHz-15kHz)</li>
<li>Background/Ambient noise Mixup</li>
<li>Other birds Mixup (2 to 3 birds)</li>
</ul>
<p><strong>Spec/Image augmentations</strong>:</p>
<ul>
<li>Frequency/time masking</li>
<li>Color jitter</li>
</ul>
<p><strong>CNN backbones</strong>:</p>
<ul>
<li>EfficientNet B1 and B2</li>
<li>SEReseXt26</li>
<li>ResneSt50</li>
<li>Optional GRU layer to learn about time sequences (see <a href="https://github.com/srvk/TALNet" target="_blank">TALNet Sound Event Classifier</a>)</li>
<li>Different pooling to apply SED concept with either Attention block or max pooling<br>
We also had one model based on 1D signal only (DensetNet1D)</li>
</ul>
<p>What did not work well (it worked but was disapointing):</p>
<ul>
<li>Wavegram + MEL-Spectrogram model (was bad with soundscape)</li>
<li>Create large image with 3x2 grid spectrogram (was bad on inference)</li>
</ul>
<p><strong>Data</strong>:</p>
<ul>
<li>Train audio provided with duration outliers removed</li>
<li>Additional xeno-canto data (from Vopani dataset, thanks <a href="https://www.kaggle.com/rohanrao" target="_blank">@rohanrao</a> for your clean dump)</li>
<li>Distractors (NoBird/Nocall/Ambient) 10s slices (from freefield1010)<br>
We've built our own validation data by mixing test_audio/birds/noise/nocall to try to correlate LB. It correlated a bit but was not enough to give trust in it. Too bad as it was key for this competition.</li>
</ul>
<p><strong>Training procedure</strong>:</p>
<ul>
<li>Stage1: Train a few models with 5s slices picked randomly, then save bird probabilities on the CV OOF, and ensemble all models' OOF results to generate "<em>hot</em>" slices with high probabilities.<br>
This allowed us to reach public LB=0.582</li>
<li>Stage2: Train more models with only such 5s <em>hot slices</em>.<br>
This allowed us to reach public LB=0.596<br>
After reading others' solutions, we think we should have tried another stage with "very hot" slices to get a super clean train dataset.</li>
</ul>
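<p>A minimal sketch of the hot-slice selection described above, assuming the sliding-window OOF probabilities have already been computed (the dataframe columns, file names and the 0.7 threshold are illustrative, not the exact values used by the team):</p>
<pre><code>import pandas as pd

# One row per 5s window of a training audio file, produced by sliding-window
# inference of the CV models on their validation folds (assumed layout):
# columns = filename, start_sec, model_id, prob_primary (probability of the labelled bird)
oof = pd.read_csv("oof_window_probs.csv")

# Average the ground-truth probability over the ensembled models for each window.
window_probs = (oof.groupby(["filename", "start_sec"])["prob_primary"]
                   .mean()
                   .reset_index())

# Keep only "hot" windows where the ensemble is confident the labelled bird is audible.
HOT_THRESHOLD = 0.7  # a higher value (e.g. 0.9) gives a cleaner but smaller stage2 set
hot_slices = window_probs[window_probs["prob_primary"] >= HOT_THRESHOLD]
hot_slices.to_csv("stage2_hot_slices.csv", index=False)
</code></pre>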
<p><strong>Final ensemble</strong>:<br>
The ensemble is a combination of models (8 to 10) with a union strategy and a per-model threshold tuned on the public LB (it overfitted for sure).<br>
The idea of the union was to capture most of the TPs (better for the metric used in this competition); the drawback is that we capture FPs too.<br>
Inference was fast: we cached resampled audio in memory, and the (almost) full GPU pipeline helped. It ran in around 1h30, so we still had room for more models.</p>
<p><strong>Post-processing</strong>:<br>
We have a "nobird" majority vote to try to remove FP (force nocall) as some models have been trained with 265 classes instead of 264 (class 265 = nobird)</p>
<p>Both our final submissions reached similar score on private LB.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2F4e2db4b7460eb1a51737fbf81bd8ed50%2Fpipeline.png?generation=1600274872405000&alt=media" alt=""></p>
<p>Final words to conclude: Congratulations to top teams! and thanks to <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a>, <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for their sharings.</p> |
Cornell Birdcall Identification | 167 Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all winners! And thanks to organizers!<br>
My score is not good, but I am sharing the knowledge I gained.</p>
<h3>modeling</h3>
<p>My base model is DenseNet201 pretrained on ImageNet.<br>
Most people used ResNet, but it did not work for me.</p>
<p>I read <a href="https://arxiv.org/pdf/1908.02876.pdf" target="_blank">this paper</a> and applied it to my model.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1483555%2F4d0acfb9be5bb112c566bc8d55627aa2%2F2020-09-15%2018.45.50.png?generation=1600214606636136&alt=media" alt=""></p>
<p>code is:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import densenet161

# `config` is the external training configuration (PRETRAINED flag, N_LABEL=264, ...)

class BirdcallNet(nn.Module):
    def __init__(self):
        super().__init__()
        densenet = densenet161(pretrained=config.PRETRAINED)
        self.features = densenet.features
        self.l8_a = nn.Conv1d(2208, config.N_LABEL, 1, bias=False)
        self.l8_b = nn.Conv1d(2208, config.N_LABEL, 1, bias=False)

    def forward(self, x, perm=None, gamma=None):
        # input: (batch, channel, Hz, time)
        frames_num = x.shape[3]        # number of time frames (not used below)
        x = x.transpose(3, 2)          # (batch, channel, time, Hz)
        h = self.features(x)           # (batch, unit, time, Hz)
        h = F.relu(h, inplace=True)
        h = torch.mean(h, dim=3)       # (batch, unit, time)
        xa = self.l8_a(h)              # (batch, n_class, time)
        xb = self.l8_b(h)              # (batch, n_class, time)
        xb = torch.softmax(xb, dim=2)  # attention weights over time
        pseudo_label = (xa.sigmoid() >= 0.5).float()
        clipwise_preds = torch.sum(xa * xb, dim=2)
        attention_preds = xb
        return clipwise_preds, attention_preds, pseudo_label
</code></pre>
<h4>Parameters</h4>
<ul>
<li>BCE loss</li>
<li>5-fold CV</li>
<li>Adam<ul>
<li>learning_rate=1e-3</li>
<li>55 epoch</li>
<li>batch size=64</li></ul></li>
<li>CosineAnnealingWarmRestarts<ul>
<li>T=10</li></ul></li>
</ul>
<h3>Data Augmentation</h3>
<ul>
<li>Adjust Gamma</li>
<li>Spec Augmentation Freq</li>
<li>MixUp</li>
</ul>
<h3>Other Technique</h3>
<ul>
<li>SWA</li>
</ul>
<h3>Not Work for me</h3>
<ul>
<li>Denoise(I tried my best, but it didn't work …)</li>
<li>"nocall" prediction by PANNs trained model</li>
<li>Transformer</li>
<li>WaveNet</li>
<li>use secondary_labels</li>
<li>optimize threshold</li>
<li>Focal Loss</li>
<li>CutMix</li>
<li>Label Smoothing</li>
<li>Spec Augmentation Time</li>
<li>Different learning rates for each layer</li>
<li>Multi Sample Dropout</li>
</ul>
<h3>My Code</h3>
<p><a href="https://github.com/trtd56/Birdcall" target="_blank">https://github.com/trtd56/Birdcall</a></p>
<h3>My Blog(Japanese)</h3>
<p><a href="https://www.ai-shift.co.jp/techblog/1271" target="_blank">https://www.ai-shift.co.jp/techblog/1271</a></p>
<p>Thank you!!</p> |
Cornell Birdcall Identification | 17th Place Solution : file-level post-process | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://github.com/xins-yao/Kaggle_Birdcall_17th_solution" target="_blank">https://github.com/xins-yao/Kaggle_Birdcall_17th_solution</a></p>
<h2>Feature Engineering</h2>
<ul>
<li>Log MEL Spectrogram</li>
</ul>
<pre><code>sr = 32000
fmin = 20
fmax = sr // 2
n_channel = 128
n_fft = 2048
hop_length = 512
win_length = n_fft
</code></pre>
<h2>Augment</h2>
<ul>
<li>Random Clip<br>
randomly cut 5s clips from the audio and only keep clips with an SNR higher than 1e-3</li>
</ul>
<pre><code>import cv2
import numpy as np

def signal_noise_ratio(spec):
    # Binarize the spectrogram against row/column medians, clean it up with
    # morphological filtering, and return the fraction of "signal" pixels.
    spec = spec.copy()
    col_median = np.median(spec, axis=0, keepdims=True)
    row_median = np.median(spec, axis=1, keepdims=True)
    spec[spec < row_median * 1.25] = 0.0
    spec[spec < col_median * 1.15] = 0.0
    spec[spec > 0] = 1.0
    spec = cv2.medianBlur(spec, 3)
    spec = cv2.morphologyEx(spec, cv2.MORPH_CLOSE, np.ones((3, 3), np.float32))
    spec_sum = spec.sum()
    try:
        snr = spec_sum / (spec.shape[0] * spec.shape[1] * spec.shape[2])
    except IndexError:
        snr = spec_sum / (spec.shape[0] * spec.shape[1])
    return snr
</code></pre>
<ul>
<li>MixUp<br>
mixup over the LogMelSpec with a beta(8.0, 8.0) distribution<br>
beta(0.4, 0.4) and beta(1.0, 1.0) raise more true positives but lead to many more false positives</li>
<li>Noise<br>
add up to 4 noises, each with an independent probability and scale, to the waveform<br>
noises were extracted from the training samples</li>
</ul>
<pre><code>import numpy as np
from librosa.core import spectrum   # private librosa helper (API of the version used here)
from scipy.ndimage import binary_dilation, binary_erosion

hop_length = 512     # same values as the spectrogram settings above
win_length = 2048

def signal_noise_split(audio):
    # Build a binary time mask of frames that contain signal, then split the
    # waveform into its signal part and its noise part.
    S, _ = spectrum._spectrogram(y=audio, power=1.0, n_fft=2048, hop_length=512, win_length=2048)
    col_median = np.median(S, axis=0, keepdims=True)
    row_median = np.median(S, axis=1, keepdims=True)
    S[S < row_median * 3] = 0.0
    S[S < col_median * 3] = 0.0
    S[S > 0] = 1
    S = binary_erosion(S, structure=np.ones((4, 4)))
    S = binary_dilation(S, structure=np.ones((4, 4)))
    indicator = S.any(axis=0)
    indicator = binary_dilation(indicator, structure=np.ones(4), iterations=2)
    mask = np.repeat(indicator, hop_length)
    mask = binary_dilation(mask, structure=np.ones(win_length - hop_length), origin=-(win_length - hop_length)//2)
    mask = mask[:len(audio)]
    signal = audio[mask]
    noise = audio[~mask]
    return signal, noise
</code></pre>
<h2>Model</h2>
<ul>
<li>CNN<br>
9-layer CNN<br>
average pooling over frequency before max pooling over time within each ConvBlock2D<br>
SqueezeExcitationBlock within each ConvBlock2D<br>
pixel shuffle: (n_channel, n_freq, n_time) -> (n_channel * 2, n_freq / 2, n_time)</li>
<li>CRNN<br>
2-layer bidirectional GRU after 9-layer CNN</li>
<li>CNN + Transformer Encoder<br>
Encoder with 8-AttentionHead after 9-layer CNN</li>
</ul>
<h2>Training</h2>
<ul>
<li>Label Smooth: 0.05 alpha</li>
<li>Balance Sampler: randomly select up to 150 samples of each bird</li>
<li>Stratified 5Fold based on ebird_code</li>
<li>Loss Function: BCEWithLogitsLoss</li>
<li>Optimizer: Adam(lr=1e-3)</li>
<li>Scheduler: CosineAnnealingLR(Tmax=10)</li>
</ul>
<h2>Post-Process</h2>
<p>If the model gets a confident prediction of any bird, then lower the threshold for this bird in the same audio file (a sketch follows the list below).</p>
<ul>
<li>use thr_median as the initial threshold</li>
<li>use thr_high for confident predictions</li>
<li>if any bird has a probability higher than thr_high in any clip, lower the threshold to thr_low for this specific bird in the same audio file</li>
</ul> |
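<p>A minimal sketch of this file-level post-processing, assuming per-clip probabilities are already available in a dataframe (the column names and threshold values are illustrative):</p>
<pre><code>import pandas as pd

preds = pd.read_csv("clip_probs.csv")            # columns: audio_id, seconds, bird, prob (assumed)
THR_MEDIAN, THR_HIGH, THR_LOW = 0.5, 0.9, 0.3    # illustrative values

# Birds that are confidently detected somewhere in each audio file.
confident = (preds[preds["prob"] >= THR_HIGH]
             .groupby("audio_id")["bird"].apply(set).to_dict())

def decide(row):
    # Lower the threshold for birds already confidently found in the same file.
    thr = THR_LOW if row["bird"] in confident.get(row["audio_id"], set()) else THR_MEDIAN
    return row["prob"] >= thr

preds["selected"] = preds.apply(decide, axis=1)
</code></pre>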
Cornell Birdcall Identification | 18th place solution: efficientnet b3 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. In the writeup:</p>
<blockquote>
<p>We added noise extracted from the two test sequences made available, a bit like what Theo Viel did. But we used the meta data to extract sub sequences without bird call, then we merged these sequence with a smooth transition between them. We then added a random clip of the merges sequences to our training clips</p>
</blockquote>
<p>We multiplied the noise by a random number between 0.2 and 1.</p>
<p>Indeed, I was way more worried than you were ;)</p>
<p>Thank you.</p>
<p>Thanks. Indeed, a number of late comers did well here, some in gold unless mistaken. I didn't track precisely hence I prefer not to name people in case I forget some ;) I hope you'll share your approach as well! Congrats on the result even if you dropped a bit in private LB. I hope you're not too disappointed.</p>
<p>Thank you. yes, progress is slower if you write your own code but you'll go farther.</p>
<p>nocall worked for us maybe because of class weights. I updated the post to describe what we did.</p>
<p>First of all I want to thank the host and Kaggle for this very challenging competition, with a truly hidden test set. A bit too hidden maybe, but thanks to the community latecomers like us could get a submission template up and running in a couple of days. I also want to thank my team mate Kazuki, without whom I would probably have given up after many failed attempts to beat the public notebook baseline…</p>
<p><strong>Overview</strong></p>
<p>Our best subs are single efficientnet models trained on log mel spectrograms. For our baseline I started from scratch rather than reusing the excellent baselines that were available. The reason is that I enter Kaggle competitions to learn, and I learn more when I try from scratch than when I modify someone else's code. We then evolved that baseline as best we could in the two weeks we had before the end of the competition.</p>
<p>Given the overall approach is well known and probably used by most participants here I will only discuss the items that may be a bit different from what others did.</p>
<p><strong>Training data clips</strong></p>
<p>It was clear from the host that training on random 5 second clips had a drawback: some of the clips may not contain the target bird. We then used a simple hypothesis: clips were stripped, i.e. periods without a song at the beginning or at the end were removed for the sake of reducing storage needs. We therefore trained on the first 5 seconds or the last 5 seconds of clips, assuming these would contain the target bird. We preprocessed all data to be sampled at 32 kHz.</p>
<p><strong>Noise</strong><br>
We added noise extracted from the two test sequences made available, a bit like what Theo Viel did. But we used the metadata to extract sub-sequences without bird calls, then we merged these sequences with a smooth transition between them. We then added a random clip of the merged sequences to our training clips.</p>
<p><strong>No Call</strong><br>
We added the freefield1010 clips that were labelled as nocall to our training data. We added a 265th class to represent the nocall. As a result our model could predict both one or more birds, and a nocall. Adding this data and the nocall class was probably the most important single improvement we saw in CV and LB scores. It is what led us to pass the public notebook baseline.</p>
<p><strong>Multi Bird Clips</strong><br>
The main documented difference between train and test data is that train is multi-class data while test is multi-label data. Therefore we implemented a mixup variant where up to 3 clips could be merged. This is not really mixup, as the target for the merged clip is the maximum of the targets of each merged clip.</p>
<p><strong>Secondary labels</strong><br>
Primary labels were noisy, but secondary labels were even noisier. As a result we masked the loss for secondary labels, as we didn't want to force the model to learn a presence or an absence when we don't know. We therefore defined a secondary mask that nullifies the BCE loss for secondary labels. For instance, assuming only 3 ebird_codes b0, b1, and b2, and a clip with primary label b0 and secondary label b1, these two target values are possible:</p>
<p>[1, 0, 0]</p>
<p>[1, 1, 0]</p>
<p>The secondary mask is therefore:</p>
<p>[1, 0, 1]</p>
<p>For merged clips, a target is masked if it is not one of the primary labels and is one of the secondary labels. A sketch of this masked loss follows below.</p>
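<p>A minimal sketch of such a masked BCE loss, assuming the mask is built as in the example above (function and tensor names are illustrative, not the team's code):</p>
<pre><code>import torch
import torch.nn.functional as F

def masked_bce(logits, target, secondary_mask):
    # secondary_mask is 1 where the loss is kept and 0 for secondary labels.
    loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    loss = loss * secondary_mask
    # Normalize by the number of unmasked positions to keep the scale stable.
    return loss.sum() / secondary_mask.sum().clamp(min=1.0)

# Toy example with 3 ebird codes, primary label b0 and secondary label b1:
target = torch.tensor([[1.0, 0.0, 0.0]])
secondary_mask = torch.tensor([[1.0, 0.0, 1.0]])
logits = torch.randn(1, 3)
print(masked_bce(logits, target, secondary_mask))
</code></pre>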
<p><strong>Loss Function</strong></p>
<p>We use binary cross entropy on a one-hot encoding of the ebird codes. Using BCE rather than softmax makes sense as BCE extends to multi-label seamlessly. We tried dice loss to directly optimize the F1 score, but for some reason this led to very strong overfitting.</p>
<p><strong>Class Weights</strong></p>
<p>The number of records per species is not always 100. In order not to penalize the less frequent ones, we use class weights inversely proportional to class frequencies. For the nocall class we set the weight to 1, even though it was far more frequent than each bird class, to make sure the model learns about nocall correctly (see the sketch below).</p>
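<p>A small sketch of such class weights (the normalization by the median count and the toy numbers are assumptions for illustration):</p>
<pre><code>import pandas as pd

# Number of training clips per class, including the extra nocall class (toy counts).
counts = pd.Series({"amered": 100, "rarebird": 37, "nocall": 5000})

weights = counts.median() / counts   # inversely proportional to class frequency
weights["nocall"] = 1.0              # keep nocall at full weight despite its size
print(weights)
</code></pre>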
<p><strong>Model</strong></p>
<p>Our best model was efficientnet on log mel spectrograms. We resized images to be twice the size of effnet images: 240x480 for effnet b1, 260x520 for effnet b2, and 300x600 for effnet b3. We started from efficientnet_pytorch pretrained models. We tried the attention head from PANNs models but it led to severe overfitting. I am not sure why to be honest, maybe we did something wrong.</p>
<p><strong>Training</strong><br>
Nothing fancy: Adam optimizer and a cosine scheduler with 60 epochs. In general the last epoch was the one with the best score, and we used the last-epoch weights for scoring.</p>
<p><strong>Log Mel Spectrogram</strong></p>
<p>Nothing fancy, except that we saw a lot of power in low frequencies of the first spectrograms we created. As a result we clipped frequency to be at least 300 Hz. We also clipped them to be below 16 kHz given test data was sampled at twice that frequency. </p>
<p><strong>Augmentations</strong></p>
<p>Time and pitch variations were implemented in a very simple way: modify the length of the clipped sequence, and modify the sampling rate, without modifying the data itself (a sketch of this trick follows below). For instance, we would read 5.2 seconds of a clip instead of 5 seconds, and we could tell librosa that the sampling rate was 0.9*32 kHz. We then compute the hop so that the number of STFT frames is equal to the image width for the effnet model we are training. We also computed the number of mel bins to be equal to the height of the image. As a result we never had to resample data nor resize images, which sped up training and inference quite a bit. There was still an issue with that: it led us to use a high n_fft value of 2048, which led to poor time resolution. We ended up with an n_fft of 1048 and a resize of the images in the height dimension.</p>
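<p>A rough sketch of this read-length / declared-sampling-rate trick (the helper name, the ranges and most parameter values are illustrative assumptions, not the exact pipeline):</p>
<pre><code>import random
import librosa
import soundfile as sf

IMG_H, IMG_W = 260, 520            # target spectrogram size (effnet b2 in this write-up)
BASE_SR, BASE_DUR = 32000, 5.0     # nominal sampling rate and clip length

def augmented_melspec(path, offset_sec=0.0):
    dur = BASE_DUR * random.uniform(0.9, 1.1)       # time stretch: read a bit more/less audio
    sr = int(BASE_SR * random.uniform(0.9, 1.1))    # pitch shift: pretend a different rate
    y, _ = sf.read(path, start=int(offset_sec * BASE_SR),
                   frames=int(dur * BASE_SR), dtype="float32")
    hop = max(1, len(y) // IMG_W)                   # hop so the frame count matches the width
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=hop,
                                         n_mels=IMG_H, fmin=300,
                                         fmax=min(16000, sr // 2))
    return librosa.power_to_db(mel)[:, :IMG_W]
</code></pre>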
<p><strong>Cross Validation</strong></p>
<p>We started with cross-validating on our training data (first or last 5 seconds of clips) but it was rapidly clear that it was not representative of the LB score. And given we had very few submissions, we could not perform LB probing. We therefore spent some time during the last week creating a CV score that was correlated with the public LB. Our CV score is computed using multi-bird clips for 46% of the score, and nocall clips for 54% of the score. We added warblrb and birdvox nocall clips to the freefield1010 ones for evaluating nocall performance. We tuned the proportion of each possible number of clips in the multi-bird clips, and the amount of noise, until we could find a relatively good CV-LB relationship, see the picture below: x is CV, y is public LB.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F75976%2F39f916bec9be955cfa2a8fd5e8384b3b%2Fcv_lb.png?generation=1600261413279842&alt=media" alt=""></p>
<p>The correlation with private LB is also quite good:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F75976%2F02c89402f0dfd6e9069f36e67fc32e85%2Fcv_private.png?generation=1600261852596913&alt=media" alt=""></p>
<p>We then used it to guide our last few submissions' training and thresholding. Our CV said 0.7 was best, and late submissions proved it was right. We ended up selecting our 2 best CV submissions, which were also the 2 best public LB submissions, and also the two best private LB submissions.</p>
<p><strong>Conclusion</strong></p>
<p>These were probably the most intensive two weeks I have had on Kaggle in a long time. My first two subs 14 days ago were a failure, with an LB score of 0. Kazuki had started a bit earlier, but not much. I am very happy about where we landed, and I am not sure we would have done much better with a few more days. Maybe using effnet b4 or b5 would have moved us higher, but I am not sure. I am looking at the gold medalists' solutions to see what we missed. I'm sure I'll learn quite a bit.</p>
Cornell Birdcall Identification | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I don't have all of them as I was limited on submissions, but I have an example of 3 of the results on folds 0, 1, 2 in the attached image. The top of the image contains an ensemble of 5 of the folds with 3 votes. I didn't trust the LB completely, so I didn't want to pick individual models based on the LB score and instead based it on the ensemble score. </p>
<p>An ensemble of 9 models with 3 votes got private LB 0.676 and public LB 0.613, so just adding 4 more models with the same vote count scored a bit better on the private and public LB. An ensemble of 13 models with the same votes also improves the private LB to 0.679. </p>
<p>It is possible to get a high scoring single model though. One of my earlier models when I was hyperparameter testing got a score of 0.605 on the public LB and 0.674 on the private LB. It wasn't included in the final ensemble.</p>
<p>Thanks, <a href="https://www.kaggle.com/ramarlina" target="_blank">@ramarlina</a></p>
<p>Thanks, <a href="https://www.kaggle.com/kupchanski" target="_blank">@kupchanski</a></p>
<p>Thanks, <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>. Yeah I mainly used voting to reduce the false positives, I think it was the major factor in my solution since individually the models were in the silver region.</p>
<p>Thanks, <a href="https://www.kaggle.com/holgerklinck" target="_blank">@holgerklinck</a>. </p>
<p>Thanks <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> , I think you provided a lot of guidance and help with what you shared.</p>
<p>I was kind of afraid of adding too much augmentation as I didn't want to distort the bird calls too much. I mainly relied on the pink noise, as suggested by one of the hosts.</p>
<p>Most of my solution was based on the baseline SED model provided by <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> . Without his kernel I wouldn't have achieved the result I did. So I am really grateful to him. Thanks for sharing a lot during the competition, I learnt a lot. </p>
<h2>Data Augmentation</h2>
<p>No external data.</p>
<ul>
<li>Pink noise</li>
<li>Gaussian noise</li>
<li>Gaussian SNR</li>
<li>Gain (Volume Adjustment)</li>
</ul>
<h2>Models</h2>
<p>I noticed that the default SED model had over 80 million parameters, so I switched all my models to use a pretrained densenet121 model as the CNN feature extractor and reduced the attention block size to 1024, since a smaller model wouldn't overfit as much given we only had around 100 files for each audio class. I mainly tried densenet as previous top solutions to audio competitions used a densenet-like architecture. I also replaced the clamp on the attention with tanh, as mentioned in the <a href="https://www.kaggle.com/hidehisaarai1213/introduction-to-sound-event-detection/comments#962915" target="_blank">comments on the SED notebook</a> (see the sketch below).</p>
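<p>For reference, a simplified rendition of the SED attention pooling head with the clamp replaced by tanh. This is a sketch in the spirit of the baseline notebook, not the exact model code:</p>
<pre><code>import torch
import torch.nn as nn

class AttBlock(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.att = nn.Conv1d(in_features, out_features, kernel_size=1, bias=True)
        self.cla = nn.Conv1d(in_features, out_features, kernel_size=1, bias=True)

    def forward(self, x):                        # x: (batch, in_features, time)
        # tanh replaces the original clamp(-10, 10) before the softmax over time
        norm_att = torch.softmax(torch.tanh(self.att(x)), dim=-1)
        cla = torch.sigmoid(self.cla(x))         # framewise class probabilities
        clipwise = torch.sum(norm_att * cla, dim=-1)
        return clipwise, norm_att, cla
</code></pre>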
<ul>
<li>4 fold models without mixup</li>
<li>4 fold models with mixup</li>
<li>5 fold models without mixup</li>
</ul>
<h2>Training</h2>
<ul>
<li>Cosine Annealing Scheduler with warmup </li>
<li>batch size of 28</li>
<li>Mixup (on 4 of the final models)</li>
<li>50 epochs for non-mixup models and 100 epochs for mixup models</li>
<li>AdamW with weight_decay 0.01</li>
<li>SpecAugmentation enabled</li>
<li>30-second audio clips during training, evaluating on two 30-second clips per audio.</li>
</ul>
<h3>Loss Function</h3>
<p>My loss function looked something like the code below. I wanted to experiment with different parameters, but in the end I mainly used the default values, which amounts to plain BCELoss. I used a different loss function for 2 of the non-mixup models: it was based on randomly removing the primary label predictions from the loss function, to try to increase the secondary_label predictions, but I gave up on the approach for the rest of the models since I was running out of time and resources.</p>
<pre><code>import torch
import torch.nn as nn

EPSILON_FP16 = 1e-5  # keeps probabilities away from exactly 0/1 under mixed precision

class SedScaledPosNegFocalLoss(nn.Module):
    def __init__(self, gamma=0.0, alpha_1=1.0, alpha_0=1.0, secondary_factor=1.0):
        super().__init__()
        self.loss_fn = nn.BCELoss(reduction='none')
        self.secondary_factor = secondary_factor
        self.gamma = gamma
        self.alpha_1 = alpha_1
        self.alpha_0 = alpha_0
        self.loss_keys = ["bce_loss", "F_loss", "FScaled_loss", "F_loss_0", "F_loss_1"]

    def forward(self, y_pred, y_target):
        y_true = y_target["all_labels"]
        y_sec_true = y_target["secondary_labels"]
        bs, s, o = y_true.shape

        # Sigmoid has already been applied in the model
        y_pred = torch.clamp(y_pred, min=EPSILON_FP16, max=1.0-EPSILON_FP16)
        y_pred = y_pred.reshape(bs*s, o)
        y_true = y_true.reshape(bs*s, o)
        y_sec_true = y_sec_true.reshape(bs*s, o)

        with torch.no_grad():
            y_all_ones_mask = torch.ones_like(y_true, requires_grad=False)
            y_all_zeros_mask = torch.zeros_like(y_true, requires_grad=False)
            y_all_mask = torch.where(y_true > 0.0, y_all_ones_mask, y_all_zeros_mask)
            y_ones_mask = torch.ones_like(y_sec_true, requires_grad=False)
            y_zeros_mask = torch.ones_like(y_sec_true, requires_grad=False) * self.secondary_factor
            y_secondary_mask = torch.where(y_sec_true > 0.0, y_zeros_mask, y_ones_mask)

        bce_loss = self.loss_fn(y_pred, y_true)
        pt = torch.exp(-bce_loss)
        F_loss_0 = (self.alpha_0*(1-y_all_mask)) * (1-pt)**self.gamma * bce_loss
        F_loss_1 = (self.alpha_1*y_all_mask) * (1-pt)**self.gamma * bce_loss
        F_loss = F_loss_0 + F_loss_1
        FScaled_loss = y_secondary_mask*F_loss
        FScaled_loss = FScaled_loss.mean()

        return FScaled_loss, {"bce_loss": bce_loss.mean(), "F_loss_1": F_loss_1.mean(),
                              "F_loss_0": F_loss_0.mean(), "F_loss": F_loss.mean(),
                              "FScaled_loss": FScaled_loss}
</code></pre>
<h2>Thresholds</h2>
<p>I used a threshold of 0.3 on the <code>framewise_output</code> and 0.3 on the <code>clipwise_output</code> to reduce the impact of false positives. So if the 30-second clip contained a bird according to the clipwise prediction, and the 5-second interval based on the framewise prediction also said it had the same bird, then it would be a valid prediction (a sketch follows below). During inference I also applied 10x TTA by simply adding the same audio sample 10 times in the batch with SpecAugmentation enabled.</p>
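<p>A minimal sketch of combining the clipwise and framewise thresholds described above (the bird list and array shapes are illustrative):</p>
<pre><code>import numpy as np

BIRDS = ["amered", "canwar", "haiwoo"]        # illustrative subset of the 264 classes
CLIP_THR, FRAME_THR = 0.3, 0.3

def events_for_chunk(clipwise, framewise_chunk):
    # clipwise: (n_birds,) probabilities for the whole 30s clip
    # framewise_chunk: (n_frames_in_5s, n_birds) probabilities for one 5s interval
    clip_ok = clipwise >= CLIP_THR
    frame_ok = framewise_chunk.max(axis=0) >= FRAME_THR
    keep = clip_ok & frame_ok                 # a bird must fire at both levels
    return [b for b, k in zip(BIRDS, keep) if k] or ["nocall"]

clipwise = np.array([0.8, 0.1, 0.4])
framewise_chunk = np.random.rand(50, 3) * clipwise
print(events_for_chunk(clipwise, framewise_chunk))
</code></pre>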
<h2>CV vs LB</h2>
<p>My CV didn't match the public LB at all, so I mainly relied on the LB for feedback. During training I monitored the F1 score of the clipwise prediction, the F1 score of the framewise prediction, and the loss associated with classes existing in the audio (i.e. the value of <code>F_loss_1</code> in the above loss function). When the value of <code>F_loss_1</code> increased, it generally meant that the model would do worse on the LB, even though the F1 score was increasing too. </p>
<h2>Ensemble</h2>
<p>I used voting to ensemble the models. My vote-count selection was based on the LB score, so in total I had 13 models with 4 votes required to consider whether a bird existed or not (a sketch follows below). <br>
On the public LB, the 3-votes approach scored 0.617, which was slightly better than the 4-votes approach at 0.616, but I didn't select the 3-votes approach as I thought it was too risky, which turned out to be the correct choice as the 4-votes approach scored 0.002 better than the 3-votes one on the private LB. My second selected submission was an ensemble of the non-mixup models (9 models) with 3 votes, which scored 0.676 private, 0.613 public LB.</p>
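<p>A sketch of the voting described above (the array shapes and the per-chunk probability threshold are illustrative assumptions):</p>
<pre><code>import numpy as np

THRESHOLD, MIN_VOTES = 0.3, 4      # 4-vote union as in the selected submission

def vote_ensemble(model_probs):
    # model_probs: (n_models, n_chunks, n_birds) probabilities per 5s chunk (assumed layout)
    votes = (model_probs >= THRESHOLD).sum(axis=0)   # count models voting for each bird
    return votes >= MIN_VOTES                        # final binary predictions

probs = np.random.rand(13, 10, 264)                  # 13 models, 10 chunks, 264 birds
predictions = vote_ensemble(probs)
print(predictions.shape, predictions.sum())
</code></pre>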
<p>My individual models were pretty bad on the public LB. I didn't check some of them individually as I was running out of submissions but they generally ranged between 0.585-0.605 on the Public LB. I mainly relied on my ensemble technique to get the score boost.</p>
<p>Thanks to the hosts and Kaggle for this interesting competition. </p>
<p><strong>Inference Notebook</strong>: <a href="https://www.kaggle.com/taggatle/cornell-birdcall-identification-1st-place-solution" target="_blank">https://www.kaggle.com/taggatle/cornell-birdcall-identification-1st-place-solution</a> <br>
<strong>Training Code</strong>: <a href="https://github.com/ryanwongsa/kaggle-birdsong-recognition" target="_blank">https://github.com/ryanwongsa/kaggle-birdsong-recognition</a><br>
<strong>Example on How to train the model on Kaggle Kernels</strong>: <a href="https://www.kaggle.com/taggatle/example-training-notebook" target="_blank">https://www.kaggle.com/taggatle/example-training-notebook</a></p>
<p>Hi, if you create a file for the hyperparameters like the one in the Kaggle notebook (<a href="https://www.kaggle.com/taggatle/example-training-notebook" target="_blank">https://www.kaggle.com/taggatle/example-training-notebook</a>) and save it as "config_params/example_config.py", then you can use those configs for training with the following command:</p>
</code></pre> |
Cornell Birdcall Identification | 21st Place Write-up [Stack-up] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/truonghoang" target="_blank">@truonghoang</a> thank you 😍</p>
<p>absolutely </p>
<p><a href="https://www.kaggle.com/nyleve" target="_blank">@nyleve</a> Thank you so much 😁</p>
<p>Very simple: we trusted our CV as well as our training process. We already knew we needed to predict a mix of bird calls.<br>
16k because of disk space in Colab; we used Colab for training. </p>
<p>Don't be bothered about the 27% public data… care about the 73% private data </p>
<p>First of all, I would like to thank kaggle and the organizers for hosting such an interesting competition.</p>
<p>thanks for my wonderful team <a href="https://www.kaggle.com/seshurajup" target="_blank">@seshurajup</a> and <a href="https://www.kaggle.com/tarique7" target="_blank">@tarique7</a>. this was great learning and collaboration.</p>
<p><a href="https://www.kaggle.com/seshurajup" target="_blank">@seshurajup</a> and I worked hard on this for the last 20 days.</p>
<h4>[Big Shake-up]</h4>
<ul>
<li>LB : 0.533 ---> Private LB : 0.632</li>
<li>From 364th place ---> 21st place</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2058044%2F0725e3d4482b444098f50d6df08441ef%2FScreenshot%20from%202020-09-16%2006-20-45.png?generation=1600217490267914&alt=media" alt=""></p>
<h3>[Summary]</h3>
<ul>
<li>converted the mp3 audio files into wav with a 16k sample rate, because of Colab drive space and because birds sing up to 12kHz</li>
<li>we trained the <strong>stage1</strong> model on random 5sec clips.</li>
<li>It gave an average of 0.65+ AMP over the 5 folds</li>
<li>we created a 1sec audio clips dataset.</li>
<li>predicted the 5-fold <strong>stage1</strong> models on the 1sec dataset.</li>
<li>selected those 1sec clips with <code>df[(df.ebird_code == df.pred_code) & (df.pred_prob >= 0.5)]</code> for <strong>stage3</strong></li>
<li>we trained the <strong>stage2</strong> model on top of <strong>stage1</strong> using public data</li>
<li>again predicted the <strong>stage1</strong> and <strong>stage2</strong> models on the public 1sec clips</li>
<li>again selected those 1sec clips with <code>df2[(df2.ebird_code == df2.pred_code) & (df2.pred_prob >= 0.5)]</code> for <strong>stage3</strong></li>
<li>the <strong>stage3</strong> dataset is <code>stage3_df = df.append(df2)</code>; we ended up with 612K 1sec clips with pseudo labels</li>
<li>we created 1sec noise clips using the PANN <strong>Cnn14_16k</strong> model.</li>
<li>we ran the <strong>Cnn14_16k</strong> model on the 1sec dataset and selected some noise labels from the PANN labels</li>
<li>those labels are <code>['Silence', 'White noise', 'Vachical', 'Speech', 'Pink noise', 'Tick-tock', 'Wind noise (microphone)','Stream','Raindrop','Wind','Rain', ...... ]</code>; we selected those labels as noise labels</li>
<li>based on those noise labels and the <strong>stage1</strong> model's predicted probabilities, we selected the 1sec noise clips.</li>
<li><code>noise_df = df[(df.filename.isin(noise_labels_df.filename)) & (df.pred_prob < 0.4)]</code></li>
<li>we ended up with 103k 1sec noise clips; you can find the dataset at this <a href="https://www.kaggle.com/gopidurgaprasad/birdsong-stage1-1sec-sudo-noise" target="_blank">link</a></li>
<li>we knew that in this competition our main goal was to predict a mix of bird calls.</li>
<li>now the main part: at the end we have the <strong>stage1</strong> and <strong>stage2</strong> models, the <strong>1sec</strong> dataset with pseudo labels, and the <strong>1sec noise data</strong>. We trained the <strong>stage3</strong> model using all of those.</li>
<li>now in front of us, we needed to build a <strong>CV</strong> and train a model that is more reliable at predicting a mix of bird calls.</li>
</ul>
<h3>[CV]</h3>
<ul>
<li>we created a <strong>cv</strong> based on the <strong>1sec bird calls</strong> and the <strong>1sec noise data</strong></li>
<li>In the end, we need to predict on 5sec clips, so we take 5 random 1sec birdcall/noise clips, stack them, and assign labels based on the birdcall clips.</li>
</ul>
<pre><code>import random

import numpy as np
import soundfile as sf
from joblib import Parallel, delayed
from tqdm import tqdm

# call_df / nocall_df: dataframes of 1sec clips with pseudo labels, built in the steps above
call_paths_list = call_df[["paths", "pred_code"]].values
nocall_paths_list = nocall_df.paths.values

def create_stage3_cv(index):
    # Concatenate k noise clips and 5-k birdcall clips into one 5sec validation clip.
    k = random.choice([1, 2, 3, 4, 5])
    nocalls = random.choices(nocall_paths_list, k=k)
    calls = random.choices(call_paths_list, k=5 - k)
    audio_list = []
    code_list = []
    for f in nocalls:
        y, _ = sf.read(f)
        audio_list.append(y)
    for l in calls:
        path, code = l[0], l[1]
        y, _ = sf.read(path)
        audio_list.append(y)
        code_list.append(code)
    random.shuffle(audio_list)
    audio_cat = np.concatenate(audio_list)
    codes = "_".join(code_list)
    sf.write(f"{index}_{codes}.wav", audio_cat, samplerate=16000)

_ = Parallel(n_jobs=8, backend="multiprocessing")(
    delayed(create_stage3_cv)(i) for i in tqdm(range(160000 // 5))
)
</code></pre>
<p>Ex: <code>10000_sagthr_normoc_gryfly.wav</code>: in this file you find 3 bird calls and 2 noise clips making up one 5sec clip.<br>
You can find the cv dataset at this <a href="https://www.kaggle.com/gopidurgaprasad/birdsong-stage3-cv" target="_blank">link</a></p>
<h3>[Stage3]</h3>
<ul>
<li>on top of the <strong>stage1</strong> and <strong>stage2</strong> models we trained the <strong>stage3</strong> model using the <strong>1sec birdcalls</strong> and <strong>1sec noise</strong>.</li>
<li>the training idea is very simple, the same as the <strong>cv</strong>.</li>
<li>at dataloader time we take 20% <strong>1sec noise</strong> clips and 80% <strong>1sec birdcall</strong> clips</li>
</ul>
<pre><code># Inside the dataset's __getitem__ (fragment): `labels` is a zero vector over the classes.
if np.random.random() > 0.2:
    y, sr = sf.read(wav_path)
    labels[BIRD_CODE[ebird_code]] = 1
else:
    y, sr = sf.read(random.choice(self.noise_files))
    labels[BIRD_CODE[ebird_code]] = 0
</code></pre>
<ul>
<li>at each batch, we did something like shuffle-and-stack, inspired by CutMix and MixUp</li>
<li>In each batch, we have 20% noise and 80% birdcalls; we shuffle them and concatenate.</li>
</ul>
<pre><code>import random
import torch

def stack_up(x, y, use_cuda=True):
    # Concatenate 1 to 5 randomly permuted copies of the batch along the time axis
    # and take the union (clamped sum) of their labels.
    batch_size = x.size()[0]
    if use_cuda:  # indices are drawn on the GPU; index0 is unused below
        index0 = torch.randperm(batch_size).cuda()
        index1 = torch.randperm(batch_size).cuda()
        index2 = torch.randperm(batch_size).cuda()
        index3 = torch.randperm(batch_size).cuda()
        index4 = torch.randperm(batch_size).cuda()
    ind = random.choice([0, 1, 2, 3, 4])
    if ind == 0:
        mixed_x = x
        mixed_y = y
    elif ind == 1:
        mixed_x = torch.cat([x, x[index1, :]], dim=1)
        mixed_y = y + y[index1, :]
    elif ind == 2:
        mixed_x = torch.cat([x, x[index1, :], x[index2]], dim=1)
        mixed_y = y + y[index1, :] + y[index2, :]
    elif ind == 3:
        mixed_x = torch.cat([x, x[index1, :], x[index2], x[index3, :]], dim=1)
        mixed_y = y + y[index1, :] + y[index2, :] + y[index3, :]
    elif ind == 4:
        mixed_x = torch.cat([x, x[index1, :], x[index2], x[index3, :], x[index4, :]], dim=1)
        mixed_y = y + y[index1, :] + y[index2, :] + y[index3, :] + y[index4, :]
    mixed_y = torch.clamp(mixed_y, min=0, max=1)
    return mixed_x, mixed_y
</code></pre>
<ul>
<li>for the <strong>stage3</strong> model we monitored the row_f1 score from this <a href="https://www.kaggle.com/shonenkov/competition-metrics" target="_blank">notebook</a> </li>
<li>our <strong>cv</strong> increased every epoch, so we concluded that we could trust this <strong>cv</strong></li>
<li>in the end the best <strong>cv</strong> <strong>row_f1</strong> score was between <strong>[0.90 - 0.95]</strong></li>
<li>all of this was done in the last 3 days, so we only managed to train 5 models.</li>
<li>at the end we had neither time nor submissions left, so we did a simple average over the 5 models with a simple threshold of 0.5</li>
<li>our average <strong>row_f1</strong> score was 0.94+ across the 5 models.</li>
</ul>
<h3>[BirdSong North America Set]</h3>
<ul>
<li>for some folds in <strong>stage3</strong> we trained only on North America birds; it improved our <strong>cv</strong></li>
<li>you can find North America bird files in this <a href="https://www.kaggle.com/seshurajup/birdsong-north-america-set-stage-3" target="_blank">notebook</a> </li>
</ul>
<h3>[Stage3 Augmentations]</h3>
<pre><code>import audiomentations as A

augmenter = A.Compose([
    A.AddGaussianNoise(p=0.3),
    A.AddGaussianSNR(p=0.3),
    A.AddBackgroundNoise("stage1_1sec_sudo_noise/", p=0.5),
    A.Normalize(p=0.2),
    A.Gain(p=0.2),
])
</code></pre>
<h3>[Stage3 Ensemble]</h3>
<ul>
<li>we trained our stage3 models 2 days before the competition ended, so we only managed to train 5 different models.</li>
<li>1. <code>Cnn14_16k</code></li>
<li>2. <code>resnest50d</code></li>
<li>3. <code>efficientnet-03</code></li>
<li>4. <code>efficientnet-04</code></li>
<li>5. <code>efficientnet-05</code></li>
<li>we did a simple average of those 5 models with a simple threshold of 0.5; our cv was 0.94+ and the public LB was 0.533</li>
<li>in the end, we were satisfied, trusted our <strong>CV</strong>, and knew we needed to predict mixed bird calls.</li>
<li>so we selected it as our final model and it gives Private LB: 0.632 </li>
<li>frankly, we were not able to beat the public LB score, but we trusted our cv and training process</li>
<li>that brought us to 21st place on the private LB.</li>
</ul>
<blockquote>
<p>inference notebook : <a href="https://www.kaggle.com/gopidurgaprasad/birdcall-stage3-final" target="_blank">link</a></p>
</blockquote>
<p><a href="https://www.kaggle.com/deepchatterjeevns" target="_blank">@deepchatterjeevns</a></p>
<p>Thank you 😊</p>
<p><a href="https://www.kaggle.com/tyadav" target="_blank">@tyadav</a> thank you 😊 </p> |
Cornell Birdcall Identification | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>fp16? No, I read that on my video card it would not give a speed boost</p>
<p>Thanks! I was close to you=)<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3577701%2Fc0930dd97c6edb9f23b5074b88ab670f%2F1.png?generation=1600240188494923&alt=media" alt=""></p>
<p>Resnet50<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3577701%2Fe2275dd21df2118cc59c52a57097f5e5%2F2.png?generation=1600238988645838&alt=media" alt=""></p>
<p>While training, I built these graphs. Resnet34<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3577701%2Fc3fc0161139d1bf5b4c2012b0d12ff60%2F1.png?generation=1600238831970203&alt=media" alt=""></p>
<p>Thanks! 1 for secondary_labels didn't help me either</p>
<p>There was no goal to win. I only participated for a couple of days</p>
<p>I'm just learning, so I didn't do anything complicated. High quality validation and augmentation is my choice</p>
<p>Hi. This is my first Kaggle competition, and I really enjoyed participating.<br>
Thank you to the organizers for an interesting competition, <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> for the basic solution, and to all the teams for an interesting and intense fight.</p>
<h1>My decision</h1>
<p><strong>github</strong>: <a href="https://github.com/vlomme/Birdcall-Identification-competition" target="_blank">https://github.com/vlomme/Birdcall-Identification-competition</a><br>
<strong>datasets</strong>: <a href="https://www.kaggle.com/vlomme/my-birdcall-datasets" target="_blank">https://www.kaggle.com/vlomme/my-birdcall-datasets</a><br>
<strong>kaggle notebook</strong>: <a href="https://www.kaggle.com/vlomme/surfin-bird-2nd-place" target="_blank">https://www.kaggle.com/vlomme/surfin-bird-2nd-place</a></p>
<ul>
<li>Due to a weak PC and to speed up training, I saved the Mel spectrograms and later worked with them</li>
<li><strong>IMPORTANT</strong>! While training different architectures, I manually went through 20 thousand training files and deleted large segments without the target bird. If necessary, I can put them in a separate dataset.</li>
<li>I mixed 1 to 3 files</li>
<li><strong>IMPORTANT</strong>! For contrast, I raised the image to a power of 0.5 to 3. At 0.5, the background noise is closer to the birds, and at 3, on the contrary, the quiet sounds become even quieter.</li>
<li>Slightly accelerated / slowed down recordings</li>
<li><strong>IMPORTANT</strong>! Added different sounds without birds (rain, noise, conversations, etc.)</li>
<li>Added white, pink, and band noise. Increasing the noise level increases recall, but reduces precision.</li>
<li><strong>IMPORTANT</strong>! With a probability of 0.5 lowered the upper frequencies. In the real world, the upper frequencies fade faster with distance</li>
<li>Used BCEWithLogitsLoss. For the main birds, the label was 1. For birds in the background 0.3.</li>
<li>I didn't look at metrics on training records, but only on validation files similar to the test sample (see dataset). They worked well.</li>
<li>Added a 265th nocall class, but it didn't help much</li>
<li>The final solution consisted of an ensemble of 6 models, one of which was trained on 2.5-second recordings, and one of which was trained on only 150 classes. But this ensemble did not work much better than an ensemble of 3 models, where each model was trained on 5-second clips and all 265 classes.</li>
<li>My best solution was sent 3 weeks ago and would have given me first place =)</li>
<li>Model predictions were squared, averaged, and then the square root was taken. The score slightly increased compared to simple averaging.</li>
<li>All models gave similar quality, but the best was efficientnet-b0, resnet50, densenet121.</li>
<li>Pre-trained models work better</li>
<li>Spectrogram worked slightly worse than melspectrograms</li>
<li>Large networks worked slightly worse than small ones</li>
<li>n_fft = 892, sr = 21952, hop_length=245, n_mels = 224, len_chack 448(or 224), image_size = 224*448</li>
<li><strong>IMPORTANT</strong>! If there was a bird in the segment, I increased the probability of finding it in the entire file.</li>
<li>I tried pseudo-labels: predicting classes on the training files and training with the new labels, but the quality decreased slightly</li>
<li>A small learning rate reduced the rating</li>
</ul>
<p>I'll try writing more later here or on GitHub. I will be happy to answer your questions, because I almost didn't sleep and forgot to write a lot</p>
<p>1) 1^2 = 1, 0.5^2 = 0.25. 1^0.5 = 1, 0.25^0.5 = 0.5<br>
2) <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/ece3.3889" target="_blank">https://onlinelibrary.wiley.com/doi/full/10.1002/ece3.3889</a></p>
<p><code>images = images ** (random.random()*power + c)</code>. I do it twice, before and after librosa.power_to_db</p>
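<p>A small sketch of how such a contrast augmentation might fit around the mel pipeline (the exponent range, the min-max scaling before the second exponent, and the helper name are my assumptions, not the author's exact code):</p>
<pre><code>import random
import numpy as np
import librosa

def contrast_melspec(y, sr=21952, n_mels=224, n_fft=892, hop_length=245,
                     power_range=2.5, c=0.5):
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels,
                                         n_fft=n_fft, hop_length=hop_length)
    # First exponent on the power-mel: <1 pulls the noise up, >1 pushes quiet sounds down.
    mel = mel ** (random.random() * power_range + c)
    img = librosa.power_to_db(mel)
    img = (img - img.min()) / (img.max() - img.min() + 1e-6)   # scale to [0, 1]
    # Second exponent after power_to_db, applied to the normalized image.
    img = img ** (random.random() * power_range + c)
    return img

print(contrast_melspec(np.random.randn(21952 * 5)).shape)
</code></pre>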
Cornell Birdcall Identification | 33rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all thanks to competition organizer and Kaggle to host such interesting competition. For the first time I learned about audio features and how ML is applied on them. Quite a learning.</p>
<p>Whatever improvement I got came during the last 4 days.<br>
Prior to that I struggled to match the best public kernel.<br>
Anyway, the score is from a single-fold ResNeSt50 as shared in the public kernel.</p>
<h1>Training process: key points</h1>
<p>Extracted nocall segments (durations for which no labels are present) from example_test_audio and the BirdCLEF 2020 data and used them as noise as well as an extra nocall class.<br>
As ResNet50 expects images with 3 channels and a mel spectrogram gives a single channel, I stacked three different mel spectrograms with window sizes of 1024, 1536 and 2048 (a sketch follows below). I am not sure whether it is a good technique, as I have not evaluated the same model and training process with the same channel duplicated.</p>
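<p>A small sketch of stacking three mel spectrograms with different window sizes into a 3-channel image (parameter values other than the three window sizes are illustrative):</p>
<pre><code>import numpy as np
import librosa

def three_channel_melspec(y, sr=32000, n_mels=128, hop_length=512):
    # One channel per window size; with a fixed hop the frame counts match,
    # so the three spectrograms can be stacked like an RGB image.
    channels = []
    for n_fft in (1024, 1536, 2048):
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                             hop_length=hop_length, n_mels=n_mels)
        channels.append(librosa.power_to_db(mel))
    return np.stack(channels, axis=0)        # shape: (3, n_mels, n_frames)

y = np.random.randn(32000 * 5)               # 5 seconds of dummy audio
print(three_channel_melspec(y).shape)
</code></pre>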
<h1>Inference:</h1>
<p>Same as available in public kernel with threshold of 0.5</p>
<h1>Lots of areas for improvement</h1>
<p>One idea was to select samples from the competition training data based on the available metadata like rating, duration, and background birds.<br>
Training more models and ensembling.</p>
<p>In the end, the local CV (CV of random clips from the validation data) matched the private leaderboard.<br>
Also, I think I was a bit lucky, as I see quite a lot of private leaderboard shake-up.</p>
Cornell Birdcall Identification | 36th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> <a href="https://www.kaggle.com/vlomme" target="_blank">@vlomme</a> <br>
Thank you for comments. Let me answer as much as I can understand.</p>
<blockquote>
<p>I'm curious how much LogMel Mixup got improvement against simple MixUp?</p>
</blockquote>
<p>I experimented with this during the early stages of trial and error, so I do not know the exact degree of improvement. But I did the same thing <a href="https://www.kaggle.com/vlomme" target="_blank">@vlomme</a> did above, and I found that if one of the main signals was small, the signals became extremely small after mixup and became invisible.<br>
I suppose this is probably a matter of normalizing after mixup and of the setting of the hyperparameters used for the mixup coefficients: the parameters of the beta distribution for the mixup coefficients.</p>
<p><a href="https://www.kaggle.com/returnofsputnik" target="_blank">@returnofsputnik</a> <br>
Thank you for your warm comment :)<br>
Congratulations, too!</p>
<p>In Japan, it was very, very hot this summer. This made me decide to try this competition with a light model. But I'm also curious, same as you, how my solution could be improved with more complicated models, like ResNeXt or others. Even if my room becomes a perfect sauna :-) I will check it later with a slightly deeper model (I'm sorry, this is the best I can do in this season) which I did not use, although I had implemented it.</p>
<p>At first, congratulations to the winners and all participants who finished this competition. And also thanks to <a href="https://www.kaggle.com/shonenkov" target="_blank">@shonenkov</a> for kindly posting the kernel for submission.</p>
<p>In the early stage of this competition, I was worried about a shake in the private LB. But as I continued with this competition, I changed my mind, since it is relatively stable. I think this task was relevant to real issues and very interesting. I would like to thank <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a>, <a href="https://www.kaggle.com/tomdenton" target="_blank">@tomdenton</a>, <a href="https://www.kaggle.com/holgerklinck" target="_blank">@holgerklinck</a> and Kaggle.</p>
<p>So let me shamelessly share a simple solution.<br>
The figure below is an overview of my model pipeline. As I'm sure there are already some great solutions out there, with more to come, I'd like to give just a few points of my own.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F479538%2Fd51398618f29983e9f7c73c327f42112%2Fbirdcall_model-pipeline_01.png?generation=1600220214358611&alt=media" alt=""></p>
<p>1) Event aware extraction<br>
As the participants noticed, not all the time of an audio had bird voices. Therefore, if we randomly extracted some parts of an audio (e.g. 5 sec or so), there could be no birdsong at all, resulting in a form of mislabeling for training. To mitigate that, I used a naive algorithm to extract the signal parts of the audio, as shown below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F479538%2Ff84b1e09ab70cd92ce87b957d69b38f5%2Fbirdcall_model-pipeline_02.png?generation=1600220869680424&alt=media" alt=""></p>
<p>2) LogMel Mixup<br>
When I mixed 2 LogMels, I first converted them back into the power domain and then applied the logarithmic transformation again. This is because LogMels are logarithmic values, and a linear sum of them is not the log of the sum of the powers:<br>
a * log(X1) + b * log(X2) != log(a * X1 + b * X2)<br>
And the labels were not scaled with the mixup coefficients used for the features, but taken as the union of them (a sketch follows below).</p>
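<p>A minimal sketch of this power-domain mixup with label union (the function and argument names, and the dB conversion constants, are illustrative):</p>
<pre><code>import numpy as np

def logmel_mixup(logmel1, logmel2, y1, y2, lam):
    # logmel1/logmel2 are in dB; y1/y2 are multi-hot label vectors; lam in (0, 1).
    power1 = np.power(10.0, logmel1 / 10.0)               # dB -> power
    power2 = np.power(10.0, logmel2 / 10.0)
    mixed_power = lam * power1 + (1.0 - lam) * power2     # mix in the power domain
    mixed_logmel = 10.0 * np.log10(mixed_power + 1e-10)   # back to dB
    mixed_label = np.maximum(y1, y2)                      # union, not lam-weighted
    return mixed_logmel, mixed_label
</code></pre>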
<p>3) Binary Model<br>
I made a binary model (ResNet18) to classify call/nocall audio chunks. This slightly improved my score on the public LB, but it turned out to be a big improvement on the private LB.</p>
<p>4) Multi Label Model (Multi Task Learning)<br>
Although only the primary label was provided in the data for this competition, it was clear that there were actually other bird calls in the background that fell under the competition's prediction labels. So, in training the model, I did multi-task learning by splitting the model head into two parts, one for the primary label and the other for the background. And I trained the models with the learning strategy below.</p>
<p>Step1: primary only<br>
Step2: primary + background<br>
Step3: psuedo soft labeling using background predictions</p>
<p>The last step (step 3) improved my score by 0.004 on public and 0.006 on private LB, respectively.</p>
<p>Now, that's what I'm going to share with you. I'll see you at the next competition somewhere else. <br>
Until next time!</p> |
Cornell Birdcall Identification | 39th place solution [top1 at public] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you so much.</p>
<p>You are welcome, if only I had more time to work on it more properly, like u guys</p>
<p>I just found out during this competition that an implementation I use, a quite common one based on logits, is designed only for labels equal to 0 or 1 (not for labels produced by MixUp):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    def __init__(self, gamma=2):
        super().__init__()
        self.gamma = gamma

    def forward(self, input, target, reduction='mean'):
        input = input.view(-1).float()
        target = target.view(-1).float()
        max_val = (-input).clamp(min=0)
        loss = input - input * target + max_val + \
               ((-max_val).exp() + (-input - max_val).exp()).log()
        invprobs = F.logsigmoid(-input * (target * 2.0 - 1.0))
        loss = (invprobs * self.gamma).exp() * loss
        return loss.mean() if reduction=='mean' else loss
</code></pre>
<p>Look at invprobs and check what happens if target is not 0 or 1, compared to the <a href="https://arxiv.org/pdf/1708.02002.pdf" target="_blank">original paper</a>. <br>
So, I have written the following code:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    def __init__(self, gamma=2):
        super().__init__()
        self.gamma = gamma

    def forward(self, input, target, reduction='mean'):
        input = input.view(-1).float()
        target = target.view(-1).float()
        loss = -target*F.logsigmoid(input)*torch.exp(self.gamma*F.logsigmoid(-input)) -\
               (1.0 - target)*F.logsigmoid(-input)*torch.exp(self.gamma*F.logsigmoid(input))
        return loss.mean() if reduction=='mean' else loss
</code></pre>
<p>Focal loss was originally designed to distinguish true signal from the background in object detection, quite similar to distinguishing bird calls from the nocall background in this competition. It's quite an unbalanced problem because on average 1 true label corresponds to 264 background labels. In my initial tests focal loss worked better, but later I didn't perform a comparison on the finalized pipeline. Probably it is, indeed, giving too many FPs, which may be the reason for our drop.</p>
<p>You are very welcome.</p>
<p>You are welcome. For white noise I added a normal random variable (sequence) with zero mean and std randomly selected in a range [0,0.15] to the original wave. For test noise, I used the following: <code>n + min(1,max(0.1,rand_exp(0.25))) * w</code>, where <code>rand_exp</code> is a random variable from an exponential distribution.</p>
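<p>A small sketch of those two noise augmentations (reading "lambda 0.25" as the rate parameter of the exponential distribution, and the chunk selection, are my assumptions):</p>
<pre><code>import numpy as np

rng = np.random.default_rng()

def add_white_noise(wave):
    # Zero-mean Gaussian noise with std drawn uniformly from [0, 0.15].
    std = rng.uniform(0.0, 0.15)
    return wave + rng.normal(0.0, std, size=wave.shape)

def add_test_noise(wave, noise_bank):
    # n + min(1, max(0.1, Exp(lambda=0.25))) * w, with a random chunk of test noise n.
    scale = np.clip(rng.exponential(scale=1.0 / 0.25), 0.1, 1.0)
    start = rng.integers(0, len(noise_bank) - len(wave))
    noise = noise_bank[start:start + len(wave)]
    return noise + scale * wave
</code></pre>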
<h2>Summary</h2>
<ol>
<li>Noise is the key</li>
<li>Test might be recorded with 16 kHz sampling rate</li>
<li>Sequence-wise predictions transformed into global and 5s chunk predictions with logsumexp pooling (training on 20-40s segments, inference on full files)</li>
<li>Multi-head self-attention applied to entire sequences</li>
</ol>
<p>Congratulations to all participants and thanks to the organizers for making this competition possible. Also, I would like to express my gratitude to my teammates for working together with me on this challenge. Below I will share some ideas used in our solution. Since I have been using quite a different approach from most people in this competition, I decided to prepare a write-up regarding my part. </p>
<h2>Look into data</h2>
<p>It's probably the most important thing, especially for a competition like this.<br>
<strong>Test might be recorded with 16 kHz sampling rate</strong>. Look at the example of test data:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F542dd41d9abe4e1d884a4a2453ec0cd6%2Ftest_spectrogram.png?generation=1600220233763583&alt=media" alt=""><br>
There is a frequency gap above 8 kHz that could suggest either that the data was recorded with a 16 kHz sampling rate and then up-sampled to 32 kHz, or that some noise filter was used (which would be quite unlikely). Meanwhile, below 1 kHz the noise is too high to recognize anything. Therefore, I've chosen the <strong>1-8 kHz range</strong> for mel spectrograms using 128 mels. Based on the CV drop, frequencies above 8 kHz might be important, but they are not present in the test data, and a high CV may be misleading. Also, models may learn features not present in test if frequencies above 8 kHz are used.</p>
<p><strong>Noise is the key</strong>. The test data depicted above is quite noisy and the overall level of the signal is weaker than in train (meanwhile noise+signal is similar in train and test). So several things were used: adding white noise, and adding the test noise extracted by <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> and posted <a href="https://www.kaggle.com/theoviel/bird-backgrounds" target="_blank">here</a>. In the second case the train signal is multiplied by an exponential random variable with lambda 0.25, limited to [0.1,1], and added to a randomly selected test noise chunk from the concatenated noise array.</p>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F76f6605687ce4c594ea9542e4445c07e%2Fnoise.png?generation=1600221824043141&alt=media" alt=""><br>
The produced train example looks quite similar (lower image) to test examples. Meanwhile, the original train data (upper image) has many features that could not be recognized at a high level of noise, and facilitates creation of a model that is good at CV but bad at test. Even training for several epochs with noise substantially improves the quality of the model, so top 5 predictions on test examples start making sense:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Fde0513d3685be301d6474e6e7ca0615f%2Fnoise1.png?generation=1600225333219466&alt=media" alt=""></p>
<h2>Model</h2>
<p>I used a quite different approach from most participants, which I schematically depict below:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F38e2de03b6065918c872efe81adf4e13%2Fmodel.png?generation=1600223467234638&alt=media" alt=""><br>
Instead of working with 5s segments, I worked with sequences: 20 and 40s for training and the entire audio for inference. I collapse the frequency domain into a dim of size 1, then consider the produced tensor as a sequence and apply multi-head self-attention blocks to it, like in transformers. The produced output, with a stride of ~0.3s, is merged with logsumexp pooling to produce the prediction for the entire audio segment, or for 5s intervals when running prediction on the test (a sketch of this pooling follows below). During training the loss is computed based on global labels. I attached several examples below showing the prediction for the top5 classes over time of my best model for the first 40s of both test examples.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Ff5a60d9929f97f4d7caa2af4572ce615%2Fbest.png?generation=1600225290412477&alt=media" alt=""><br>
The spikes coincide with birdcalls; if the organizers had provided more time-resolved examples, sufficient at least to properly initialize the model, this method would perform even better.<br>
Training on 20 and 40s intervals was chosen to mitigate the possibility of having nocall in a sampled train chunk. In addition, considering the entire sequence during inference utilizes global attention, so the model is capable of incorporating knowledge about noise characteristics and different calls of the same bird when generating the predictions. Also, it naturally produces the global predictions used for site_3. <br>
The basic kernel showing training on 5s chunks and reaching 0.65 CV in 16 epochs is posted <a href="https://www.kaggle.com/iafoss/cornell-birdcall" target="_blank">here</a>. It provides the details of implementation of the above approach.<br>
The performance of the best single model is <strong>0.622/0.588</strong> private/public LB. Ensemble of my models with more traditional models trained by my teammates (prediction based on 5s chunks with a number of additional tricks) boosted our public LB to 0.628 within last several days but unfortunately only slightly improved private LB, giving 0.622.</p>
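<p>A hedged sketch of the logsumexp pooling step (my reading of the description, not the author's code): it acts as a smooth maximum over the framewise logits, so one strong frame is enough to raise the clip-level prediction while gradients still reach every frame.</p>
<pre><code>import math
import torch

def logsumexp_pool(frame_logits, dim=1):
    # frame_logits: (batch, time, classes) framewise outputs with ~0.3 s stride
    # normalized logsumexp = log of the mean of exp over time, i.e. a smooth max
    T = frame_logits.size(dim)
    return torch.logsumexp(frame_logits, dim=dim) - math.log(T)
</code></pre>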
<p><strong>Additional details:</strong><br>
Backbone: ResNeXt50<br>
Loss: Focal loss, corrected to be suitable for soft labels (a sketch follows below)<br>
Augmentation: MixUp, white and test noise, stretch, temporal dropout.<br>
External data: images beyond 100 examples<br>
Use secondary labels with 0.1 contribution</p>
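<p>One common way to write a BCE-based focal loss that also accepts soft targets is sketched below; the author's exact correction may differ, so treat this as an illustration rather than the original loss.</p>
<pre><code>import torch
import torch.nn.functional as F

def soft_focal_loss(logits, targets, gamma=2.0):
    # binary focal loss where targets may be soft values in [0, 1]
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)   # probability assigned to the (soft) target
    return ((1 - p_t) ** gamma * bce).mean()
</code></pre>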
<p><strong>Postprocessing</strong>: I used quite a complex pipeline finetuned on the test examples; in this competition I made only ~15 subs. First, I generate global predictions above the threshold, based on the logsumexp of the predicted sequence, and select the top 3 or top 4 of them if their number is large. Next, I compute predictions for 5s chunks (using the logsumexp of parts of the predicted sequence), select the ones that are above a particular threshold in comparison with their average value, and drop all predictions not listed among the global ones. Finally, as suggested by my teammate <a href="https://www.kaggle.com/kirillshipitsyn" target="_blank">@kirillshipitsyn</a>, if both neighboring chunks have the same bird predicted I added this bird as a prediction, which gave a 0.001+ boost. </p>
<p>Some words about validation. I mostly used the test examples as a way to assess how good the model is. In nearly my first attempt I got ~0.80 CV (computed for the best threshold based on a 4-fold train/val split) when training on 20s chunks and ~0.83 CV when training continued on 40s chunks. However, when I checked the performance of the model on the test examples, I realized that it predicts nearly nothing. Moreover, even the top predictions are quite different from what they should be. So I started adding tricks such as noise and the 1-8 kHz frequency range, which reduced CV but improved the model's performance on the test examples and the LB. A good way to perform CV in this competition would be to generate a validation set from the train data by adding noise and excluding frequencies beyond 8 kHz, to make sure that it is as similar as possible to the test examples. But I realized this nearly at the end of the competition. Had I joined more than 2-3 weeks before the deadline, I would probably have had more time to explore and fully develop the above ideas, and hopefully get a better score on the LB.</p>
<p>I shared it at <a href="https://www.kaggle.com/iafoss/cornell-birdcall" target="_blank">https://www.kaggle.com/iafoss/cornell-birdcall</a><br>
The thing with audio is that it should be considered as a hybrid of CNN + Transformer rather than just a Transformer. It's important to consider the local correlations in the spectrogram first, done by the CNN, and then one can look at the entire sequence, as in NLP.</p>
Cornell Birdcall Identification | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/muskanchaddha2112" target="_blank">@muskanchaddha2112</a> , I will</p>
<p>Thanks <a href="https://www.kaggle.com/youhanlee" target="_blank">@youhanlee</a> ! Hopefully you'll get that GM title soon :) </p>
<p>Thanks <a href="https://www.kaggle.com/fiyeroleung" target="_blank">@fiyeroleung</a> and congratz on the silver !</p>
<p>Double thank you <a href="https://www.kaggle.com/sheriytm" target="_blank">@sheriytm</a> !</p>
<p>No problem :)</p>
<p>Thanks, and congratz on the strong finish !</p>
<p>Thank you !</p>
<p>Thanks a lot <a href="https://www.kaggle.com/alanchn31" target="_blank">@alanchn31</a> </p>
<p>Thanks, and you're welcome !</p>
<p>Thanks <a href="https://www.kaggle.com/watzisname" target="_blank">@watzisname</a> </p>
<p>Thanks ! I need to add some stuff indeed aha</p>
<blockquote>
<p>Do you know how much you get from postprocessing (limiting to at most 3 birds)</p>
</blockquote>
<p>Score is the same, so this part of our post-processing is useless aha</p>
<p>Thanks !</p>
<p>3rd actually, there's <a href="https://www.kaggle.com/nhlx5haze" target="_blank">@nhlx5haze</a> as well :)</p>
<p>One of the few good ideas I had in this comp :) </p>
<p>Merci <a href="https://www.kaggle.com/ulrich07" target="_blank">@ulrich07</a> !</p>
<p>Thanks ! Congratz on the win !</p>
<p>Thanks again <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> <br>
We're genuinely surprised we won the fight though !</p>
<p>All the merit goes <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a> for the idea :) </p>
<p>Merci <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a> !</p>
<p>Thanks <a href="https://www.kaggle.com/mpware" target="_blank">@mpware</a> !</p>
<p>Thanks a lot, that would not have been possible without you, and the other people I teamed up with in previous comps. </p>
<p>I'd like to read about your team's solution though, that +0.03 jump on public at the end was impressive.</p>
<p>Thanks <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, glad to join you in the French GM club ! </p>
<p>I'm not sure about that, I guess I'll run a sub without it to give you the answer</p>
<p>Thanks <a href="https://www.kaggle.com/holgerklinck" target="_blank">@holgerklinck</a>, means a lot ! We hope our solution will be useful to CCB </p>
<p>Very glad to end my journey to Kaggle GM with a 3rd place, congratz to my teamates and everybody who made it to the end !</p>
<p>Our solution has three main aspects : data augmentation, modeling and post-processing</p>
<h4>Data Augmentation</h4>
<p>Data augmentation is the key to reducing the discrepancy between train and test. We start by randomly cropping 5 seconds of the audio and then add aggressive noise augmentations:</p>
<ul>
<li>Gaussian noise</li>
</ul>
<p>With a sound-to-noise ratio of up to 0.5</p>
<ul>
<li>Background noise</li>
</ul>
<p>We randomly choose 5 seconds of a sample from the background dataset available <a href="https://www.kaggle.com/theoviel/bird-backgrounds" target="_blank">here</a>. This dataset contains samples without birdcalls from the example test audios of the competition data, and some manually selected samples from the freesound bird detection challenge.</p>
<ul>
<li>Modified Mixup</li>
</ul>
<p>Mixup creates a combination of a batch <code>x1</code> and its shuffled version <code>x2</code>: <code>x = a * x1 + (1 - a) * x2</code>, where <code>a</code> is sampled from a beta distribution. <br>
Then, instead of using the classical objective for mixup, we define the target associated to <code>x</code> as the union of the original targets. <br>
This forces the model to correctly predict both labels.<br>
Mixup is applied with probability 0.5 and I used 5 as the parameter of the beta distribution, which forces <code>a</code> to be close to 0.5.</p>
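<p>A minimal sketch of this modified mixup, assuming a batch tensor <code>x</code> and multi-hot targets <code>y</code> (the function and argument names are mine, not the original code):</p>
<pre><code>import numpy as np
import torch

def mixup_union(x, y, alpha=5.0, p=0.5):
    # x: (batch, ...) waveforms or spectrograms, y: (batch, n_classes) multi-hot targets
    if np.random.rand() > p:
        return x, y
    lam = np.random.beta(alpha, alpha)      # alpha=5 keeps lam close to 0.5
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = torch.clamp(y + y[perm], 0, 1)  # union of the two label sets as the target
    return x_mix, y_mix
</code></pre>
<p>Taking the union instead of the usual soft combination matches the target formula <code>np.clip(y_1 + y_2, 0, 1)</code> mentioned in the comments further below.</p>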
<ul>
<li>Improved cropping </li>
</ul>
<p>Instead of selecting the crops purely at random, we also selected them based on out-of-fold confidence. The confidence at time <code>t</code> is the probability of the ground-truth class predicted on the 5-second crop starting from <code>t</code>.</p>
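<p>A possible sketch of such confidence-weighted crop sampling (the actual implementation lives in the dataset's <code>__getitem__</code> / <code>crop_or_pad</code> functions linked below; this version and its names are assumptions):</p>
<pre><code>import numpy as np

def sample_crop_start(confidences):
    # confidences[t]: out-of-fold probability of the ground-truth class for the
    # 5 s crop starting at second t (assumed precomputed for every valid start)
    probs = np.asarray(confidences, dtype=np.float64)
    if probs.sum() &lt;= 0:
        return np.random.randint(len(probs))       # fall back to a uniform choice
    return np.random.choice(len(probs), p=probs / probs.sum())
</code></pre>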
<h4>Modeling</h4>
<p>We used 4 models in the final blend :</p>
<ul>
<li>resnext50 [0.606 Public LB -> 0.675 Private] - trained with the additional audio recordings.</li>
<li>resnext101 [0.606 Public LB -> 0.661 Private] - trained with the additional audio recordings as well.</li>
<li>resnest50 [0.612 Public LB -> 0.641 Private] </li>
<li>resnest50 [0.617 Public LB -> 0.620 Private] - trained with improved crops </li>
</ul>
<p>It turns out that training with more data was the key, and that both of our resnests were overfitting to the public LB. Thanks to the people who shared the datasets! </p>
<p>They were trained for 40 epochs (30 if the external data is used), with a linear scheduler with 0.05 warmup proportion. Learning rate is 0.001 with a batch size of 64 for the small models, and both are divided by two for the resnext101 one, in order to fit in a single 2080Ti.</p>
<p>We had no reliable validation strategy, and used stratified 5 folds where the prediction is made on the 5 first second of the validation audios.</p>
<h4>Post-processing</h4>
<p>We used 0.5 as our threshold <code>T</code>; a sketch of the aggregation described below follows the list.</p>
<ul>
<li>First step is to zero the predictions lower than <code>T</code></li>
<li>Then, we aggregate the predictions<ul>
<li>For the sites 1 and 2, the prediction of a given window is summed with those of the two neighbouring windows. </li>
<li>For the site 3, we aggregate using the max</li></ul></li>
<li>The <code>n</code> most likely birds with probability higher than <code>T</code> are kept<ul>
<li><code>n = 3</code> for the sites 1 and 2</li>
<li><code>n</code> is chosen according to the audio length for the site 3.</li></ul></li>
</ul>
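<p>A sketch of this aggregation for sites 1 and 2; whether the threshold is re-applied before or after summing the neighbouring windows is my reading of the description, not the exact submission code.</p>
<pre><code>import numpy as np

def aggregate_site12(probs, species, T=0.5, n=3):
    # probs: (n_windows, n_classes) clip-level probabilities for one audio (sites 1 and 2)
    probs = np.where(probs >= T, probs, 0.0)          # zero predictions below the threshold
    padded = np.pad(probs, ((1, 1), (0, 0)))
    summed = padded[:-2] + padded[1:-1] + padded[2:]  # add the two neighbouring windows
    predictions = []
    for row in summed:
        top = np.argsort(row)[::-1][:n]               # keep at most n birds per window
        birds = [species[c] for c in top if row[c] >= T]
        predictions.append(birds if birds else ["nocall"])
    return predictions
</code></pre>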
<h4>Code</h4>
<p>Everything is fully available :</p>
<ul>
<li>Inference : <a href="https://www.kaggle.com/theoviel/inference-theo?scriptVersionId=42527667" target="_blank">https://www.kaggle.com/theoviel/inference-theo?scriptVersionId=42527667</a> </li>
<li>Train in Kaggle kernels : <a href="https://www.kaggle.com/theoviel/training-theo-3" target="_blank">https://www.kaggle.com/theoviel/training-theo-3</a> (code is a bit dirty but directly usable)</li>
<li>Github : <a href="https://github.com/TheoViel/kaggle_birdcall_identification" target="_blank">https://github.com/TheoViel/kaggle_birdcall_identification</a> (code is cleaned and documented)</li>
</ul>
<h4>Final words</h4>
<p>As we had no proper validation scheme, private LB was really a coinflip for us. The top 3 is a nice surprise ! Our best submission is actually the 5-fold ResNext50 alone, which was quite unpredictable.</p>
<p>Thanks for reading !</p>
<p>(topic will probably be updated)</p>
<p>You're welcome :)</p>
<p>Thanks <a href="https://www.kaggle.com/daisukelab" target="_blank">@daisukelab</a> </p>
<p>Confidences are computed in the <code>__getitem__</code> function of the dataset, and the cropping is done in <code>crop_or_pad</code>, where the start second of the crop is sampled using the <code>probs</code> distribution.</p>
<p><a href="https://github.com/TheoViel/kaggle_birdcall_identification/tree/master/src/data" target="_blank">Link</a></p>
<p>Thanks <a href="https://www.kaggle.com/wochidadonggua" target="_blank">@wochidadonggua</a> !<br>
It didn't take me too long to catch up with you :)</p>
<ul>
<li><code>y = np.clip(y_1 + y_2, 0, 1)</code></li>
<li>5 for both yes</li>
</ul>
<p>Spectrogram space :)</p> |
Cornell Birdcall Identification | 4-th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>5 seconds for all models</p>
<p>Hi ,Kagglers!</p>
<p>It was an amazing competition! And I am happy to share training tips that helped us reach 4-th LB position</p>
<h1>Data Preparation</h1>
<p>We used audio resampled to a 32 000 Hz sample rate and normalized.</p>
<p>As most competitors, we used spectral features - Logmels</p>
<p>For most of my models we have added external xeno-canto datasets:<br>
<a href="https://www.kaggle.com/rohanrao/xeno-canto-bird-recordings-extended-a-m" target="_blank">https://www.kaggle.com/rohanrao/xeno-canto-bird-recordings-extended-a-m</a> <br>
<a href="https://www.kaggle.com/rohanrao/xeno-canto-bird-recordings-extended-n-z" target="_blank">https://www.kaggle.com/rohanrao/xeno-canto-bird-recordings-extended-n-z</a><br>
Great thanks to <a href="https://www.kaggle.com/rohanrao" target="_blank">@rohanrao</a> </p>
<p>Also, we used RMS trimming in order to get clips with bird calls. You can find more details in <a href="https://www.kaggle.com/vladimirsydor/4-th-place-solution-inference-and-training-tips?scriptVersionId=42796948" target="_blank">our notebook</a></p>
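<p>A minimal sketch of RMS-based trimming (the hop length and threshold below are assumptions; see the linked notebook for the actual implementation):</p>
<pre><code>import numpy as np
import librosa

def rms_trim(y, hop_length=512, threshold=0.25):
    # drop low-energy frames so that random crops are more likely to contain bird calls
    rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
    keep = rms > threshold * rms.mean()           # frame-level keep mask
    mask = np.repeat(keep, hop_length)[:len(y)]   # expand the mask back to sample level
    return y[mask]
</code></pre>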
<h1>Feature Extraction</h1>
<p>As we were using pretrained CNNs, we had to provide them with a 3-channel input (or replace the first Conv in place).</p>
<p>Worked:</p>
<ul>
<li>Repeat of Logmel 3 times - good baseline option</li>
<li>Use <a href="https://pytorch.org/audio/functional.html#compute-deltas" target="_blank">deltas</a> - We have used concatenation of Logmel, 1-st order delta and 2-nd order delta. It worked the best</li>
<li>Adding <code>secondary_labels</code> for training</li>
</ul>
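<p>A sketch of the delta stacking with <code>torchaudio</code> (default delta window length is used here; the original code may differ):</p>
<pre><code>import torch
import torchaudio.functional as AF

def logmel_with_deltas(logmel):
    # logmel: (n_mels, time) tensor; stack the logmel with its 1st- and 2nd-order deltas
    delta1 = AF.compute_deltas(logmel)
    delta2 = AF.compute_deltas(delta1)
    return torch.stack([logmel, delta1, delta2], dim=0)   # (3, n_mels, time) input
</code></pre>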
<p>Did not work well:</p>
<ul>
<li>Time and Frequency encoding - originally used by <a href="https://www.kaggle.com/ddanevskyi" target="_blank">@ddanevskyi</a> in <a href="https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/97926" target="_blank">Freesound</a>. But it does not work well here</li>
<li>Adding some more features, like Loudness and Spectral Centroid</li>
</ul>
<h1>Validation Scheme</h1>
<p>All our models were trained in cross-validation mode. So we had one fold for validation. Also we used <code>example_audio</code> as one more validation set. All in all we tracked such metrics:</p>
<ul>
<li>loss</li>
<li>MaP score by one validation fold - <code>map_score</code></li>
<li>Original competition F1 metric with threshold 0.5 on test set - <code>f1_test_median</code></li>
<li>Original competition F1 metric with best threshold on test set</li>
<li>Original competition F1 metric with threshold 0.5 on validation fold (if we use <code>secondary_labels</code>)</li>
<li>Original competition F1 metric with the best threshold on validation fold (if we use <code>secondary_labels</code>)</li>
</ul>
<p>We did early stopping and scheduling by the MaP score, as it converged last of all the metrics.</p>
<p>Then we took the 3 best checkpoints by <code>f1_test_median</code> and averaged their weight matrices for each fold - some kind of SWA. Then we blended 5 models (5 folds) and evaluated on the test_set (example audio). This score correlated well with the LB up to the 0.607 point. After this point the test_set was nearly useless :)</p>
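<p>A sketch of this SWA-style checkpoint averaging (assuming each path stores a plain <code>state_dict</code>; integer buffers such as <code>num_batches_tracked</code> may need special handling):</p>
<pre><code>import copy
import torch

def average_checkpoints(paths):
    # average the weights of the best checkpoints of one fold
    states = [torch.load(p, map_location="cpu") for p in paths]
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = sum(s[key].float() for s in states) / len(states)
    return avg
</code></pre>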
<h1>Model</h1>
<p>We used different EfficientNets (B3, B4, B5) pretrained on <a href="https://github.com/rwightman/gen-efficientnet-pytorch" target="_blank">noisy student</a>.</p>
<p>We tried several classifier heads, but a 2-layer Dropout->Linear->ReLU head works the best. Also, <br>
<a href="https://arxiv.org/pdf/1905.09788.pdf" target="_blank">Multi-Sample Dropout</a> slightly boosts the performance and gives some more stability.</p>
<p>We tried SeResnexts but they did not work at all for us. Also we tried model proposed by <a href="https://www.kaggle.com/ddanevskyi" target="_blank">@ddanevskyi</a> <a href="https://github.com/ex4sperans/freesound-classification" target="_blank">here</a> but it worked worse.</p>
<h1>Training process</h1>
<p>We used <code>Adam</code> optimizer and <code>ReduceLROnPlateau</code> scheduler and <code>BCEwithLogits</code> loss</p>
<p>Augmentations really boosted performance (~2%). We listened to example audio and tried to choose such augmentations, that can shift our train set to example audio:</p>
<ul>
<li>Gain (to make bird call less loud)</li>
<li>Background noise - very and less loud. We have taken some background from <a href="https://www.kaggle.com/mmoreaux/environmental-sound-classification-50" target="_blank">here</a> and some 5 second clips directly from example audio. Finally we created such <a href="https://www.kaggle.com/vladimirsydor/cornelli-background-noises" target="_blank">background dataset</a></li>
<li>Low-frequency cutoff - we found that the example audio has no low-frequency content</li>
</ul>
<p>Also we used MixUp - we add audios and take max from two one-hot targets. As it was done by <a href="https://www.kaggle.com/ddanevskyi" target="_blank">@ddanevskyi</a> in <a href="https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/97926" target="_blank">Freesound</a></p>
<h1>Choose Final Blend</h1>
<p>First we simply tried to blend all our good models - 14 experiments (70 models) - which gave us a 0.623 Public score and a 0.669 Private score. But then we took the 4 best experiments with external data and the 3 best without external data, which gave us 0.624 Public and 0.67 Private scores.</p>
<p>All Training details and inference of best Blend you can find in our <a href="https://www.kaggle.com/vladimirsydor/4-th-place-solution-inference-and-training-tips?scriptVersionId=42796948" target="_blank">inference notebook</a> </p>
<h1>Framework</h1>
<p>For training and experiment monitoring was used <a href="https://pytorch.org/" target="_blank">Pytorch</a> and <a href="https://catalyst-team.github.io/catalyst/" target="_blank">Catalyst</a> frameworks. Great thanks to <a href="https://www.kaggle.com/scitator" target="_blank">@scitator</a> and catalyst team! </p>
<h1>P.S</h1>
<p>Great thanks to my teammate - <a href="https://www.kaggle.com/khapilins" target="_blank">@khapilins</a><br>
Also thanks to all DS community, especially to <a href="https://www.kaggle.com/ddanevskyi" target="_blank">@ddanevskyi</a>, <a href="https://www.kaggle.com/yaroshevskiy" target="_blank">@yaroshevskiy</a> and <a href="https://www.kaggle.com/frednavruzov" target="_blank">@frednavruzov</a>. They have taught me a lot and give inspiration to take part in Kaggle competitions.<br>
Also thanks to all Kaggle team and community.<br>
And happy Kaggling! </p> |
Cornell Birdcall Identification | 43rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to your team, you are way better than me, you have proper CV and I train blindly. Until next competition</p>
<p>Big thank you to HOSTKEY for allowing me to use a machine with 2x1080Ti as a grant. You can check them out here: <a href="https://www.hostkey.com/gpu-servers#/" target="_blank">https://www.hostkey.com/gpu-servers#/</a> HOSTKEY servers are cheaper than AWS and Google Cloud, and they are offering pre-orders for servers with RTX 3080's on them.</p>
<p>The goal of this competition is to predict the species of bird in a soundscape recording, given non-soundscape recordings. Basically, they give you a 5 second clip recorded in a forest, and you have to say what birds they are. There are 264 species in total to predict.</p>
<p>The difficulties in this competition come from: </p>
<ol>
<li>The training data is from recordings of any birdwatchers - anyone can record and upload, which means sometimes it is recorded just on a smartphone, there is speech in it, etc. However, the testing data is recorded from boxes strapped to trees, recording for 10 minutes at a time.</li>
<li>The training data is of variable length, from seconds to minutes. The data is recorded at different sample rates, at different volumes, in different locations. The test data comes from 3 unknown sites.</li>
<li>Some species of birds make different birdcalls even though they are the same species. There are regional dialects of birdcalls. And some calls can vary (alarm call, mating call, etc.)</li>
<li>The training clips can have different birds in the background, or they have long periods of no birds.</li>
<li>You can have false positives through other animals like chipmunks, cicadas, flies, cars driving nearby, etc.</li>
</ol>
<p>It is evident that the training data is extremely different from the testing data. I noticed that an improvement in my validation loss on a 20% holdout set from the training set yielded weaker leaderboard results on the hidden test set.</p>
<p>Then, I decided to train for lots more epochs, past the optimum for my validation loss. This ended up getting better on the leaderboard. So I concluded that training more epochs = better score, even if I considered it locally overfit.</p>
<p>In order to address Difficulty #1, I randomly augment my training clips. I add pink noise with varying volumes, and random soundscape recordings (up to 3 with different volumes). I also randomly apply a Butterworth filter (randomly lowpass, highpass, bandpass, or bandstop) with random cutoffs. I also use Cutout augmentation, which randomly replaces an area of the clip with noise pixels, and randomly use ColorJitter to change saturation, hue, brightness, and contrast. <strong><em>You can see I use the word "random" so much -- I really wanted to make sure my submission was very robust.</em></strong> Many of these ideas are inspired by a previous Birdclef solution: <a href="http://ceur-ws.org/Vol-2125/paper_140.pdf" target="_blank">http://ceur-ws.org/Vol-2125/paper_140.pdf</a></p>
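<p>A hedged sketch of the random Butterworth filter augmentation (the cutoff ranges and filter orders below are illustrative assumptions, not the values from the original pipeline):</p>
<pre><code>import numpy as np
from scipy.signal import butter, sosfilt

def random_butterworth(y, sr=32000):
    # apply a Butterworth filter with a random type and random cutoff(s)
    btype = np.random.choice(["lowpass", "highpass", "bandpass", "bandstop"])
    nyq = sr / 2
    if btype in ("lowpass", "highpass"):
        wn = np.random.uniform(500, 10000) / nyq
    else:
        lo = np.random.uniform(300, 4000)
        wn = [lo / nyq, np.random.uniform(lo + 500, 12000) / nyq]
    sos = butter(N=np.random.randint(1, 5), Wn=wn, btype=btype, output="sos")
    return sosfilt(sos, y).astype(y.dtype)
</code></pre>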
<p>In order to address Difficulty #2, I randomly sample 5 second clips from the training clips. If the clip is less than 5 seconds, then add 0's to the start/end of the clip randomly.</p>
<p>I hoped the model could cope with Difficulty #3 by itself.</p>
<p>In order to address Difficulty #4's long periods of no birds, I removed contiguous stretches >= 4 seconds in my training clips where the absolute signal amplitude doesn't exceed the 99.9th quantile. This effectively removed long contiguous seconds of silence, which allows my model to focus more on the birdcalls and less on the absence of birds. I did not use "secondary_labels" for different birds in a single clip because I found it empty or inconsistent in many clips.</p>
<p>In order to address Difficulty #5, I also added examples of chipmunks/insects as background noise to tell my model that it is an absence of birds.</p>
<p>All models converted the 5 second training clip into a Melspectrogram, which is a 2D "picture" that represents what the sound looks like. Some models then used Power_to_Db function to convert the power spectrogram to decibel units; this is called Log-Melspectrogram. Other models used PCEN which is a novel transformation shown to outperform Log-Melspectrograms (<a href="http://www.justinsalamon.com/uploads/4/3/9/4/4394963/lostanlen_pcen_spl2018.pdf)" target="_blank">http://www.justinsalamon.com/uploads/4/3/9/4/4394963/lostanlen_pcen_spl2018.pdf)</a>.</p>
<p>I trained one Efficientnet-B1, two Efficientnet-B2, one Efficientnet-B3 (<a href="https://arxiv.org/abs/1905.11946)" target="_blank">https://arxiv.org/abs/1905.11946)</a>, one Resnest50 (<a href="https://arxiv.org/abs/2004.08955)" target="_blank">https://arxiv.org/abs/2004.08955)</a>, two Inceptionv4 (<a href="https://arxiv.org/abs/1602.07261)" target="_blank">https://arxiv.org/abs/1602.07261)</a>, and one SE-Resnext (<a href="https://arxiv.org/abs/1709.01507)" target="_blank">https://arxiv.org/abs/1709.01507)</a>. I also trained an Efficientnet-B5 and more Resnest50/101, but it would not fit into the runtime to use these models. I also trained a 1D Convolutional Net but it wasn't strong enough. All models were initialized with pretrained imagenet weights. Some models have mixup and some have label smoothing for additional diversity.</p>
<p>In order to fit all of these models in the runtime, I converted all models into ONNX format which brought significant speedup.</p>
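<p>A minimal sketch of the PyTorch-to-ONNX conversion (the input shape and tensor names are assumptions):</p>
<pre><code>import torch

def export_onnx(model, path, n_mels=128, n_frames=313):
    # export a spectrogram classifier to ONNX for faster CPU inference
    model.eval()
    dummy = torch.randn(1, 3, n_mels, n_frames)   # (batch, channels, mels, time)
    torch.onnx.export(model, dummy, path,
                      input_names=["spectrogram"], output_names=["logits"],
                      dynamic_axes={"spectrogram": {0: "batch"}, "logits": {0: "batch"}},
                      opset_version=12)
</code></pre>
<p>At inference time the exported file can then be run with <code>onnxruntime.InferenceSession</code>.</p>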
<p>Instead of averaging the predictions, I found that squaring the predictions, taking the mean, and then taking square root was better (follows from Lasseck's findings in previous Birdclef competition). I early stopped the models through intuition and Leaderboard feedback.</p>
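<p>This "square, mean, square root" blend is just a generalized (power) mean with p=2, e.g.:</p>
<pre><code>import numpy as np

def power_mean(preds, p=2.0):
    # preds: (n_models, n_windows, n_classes); p=2 reproduces square -> mean -> sqrt
    return np.mean(np.asarray(preds) ** p, axis=0) ** (1.0 / p)
</code></pre>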
<p>I trained on full 100% training data blindly, using BinaryCrossEntropyWithLogits as the loss. All models have different Melspectrogram parameters. I found that using high <code>fmin</code> parameter was good to get rid of some noisiness (effectively simulates a "zoom" into the birdcalls). Different image sizes and <code>n_mels</code> were also used. Different model heads were used (with different dropouts and number of Linear layers)</p>
<p>Special thanks to the strong Japanese Kaggle contributors (Tawara and Hidehisa Arai) and thank you to thesoundofai.slack.com for inspiration and tips.</p>
<p>Things that worked:</p>
<ul>
<li>Cutting out silence from training data</li>
<li>Overlaying noise onto training data</li>
<li>Data augmentation methods (noise injection, Cutout, ColorJitter, etc.)</li>
<li>Squaring predictions, averaging, then Square Rooting</li>
<li>Ensemble with different parameters</li>
</ul>
<p>Things that didn't work:</p>
<ul>
<li>ArcFace Loss</li>
<li>Custom F1 Row-wise Micro Loss</li>
<li>MultiLabelSoftMarginLoss</li>
<li>Freesound2019 Winning CNN solution architecture</li>
<li>Removing the top k losses from each batch, assuming some clips are still noisy/incorrect in training data</li>
<li>Using Freesound non-bird audio external dataset</li>
<li>Using NIPS 2013 Bird identification external dataset</li>
</ul>
<p>Things I didn't try:</p>
<ul>
<li>Adding in MFCC information or other numeric features</li>
<li>Using external Xenocanto data</li>
<li>PANN/pretrained "audionet" models</li>
</ul> |
Cornell Birdcall Identification | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!<br>
I've tried mean averaging, voting and rank renormalization (with <code>scipy.interpolate.interp1d(probs, ranks, kind="linear")</code>) but didn't get any local improvement from any of them. So I just submitted the mean average and the leaderboard showed a great boost. Selecting the threshold based on test examples worked too. If I had more submissions I'd also try voting, like the top-1 performer. </p>
<p>Thank you!</p>
<p>Yeah, thanks Bali for having me during pandemic ^^</p>
<p>A great journey, thanks for organizing it! Now every time I walk around I hear much more birds :) <br>
I've started my machine learning journey with speech processing few years ago and that was a joy to play again with it. One more time to thank and mention <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> who led us through the darkness here.</p>
<p>From the very scratch I understood that the problem can be decomposed into two main problems:</p>
<ul>
<li>a domain shift (which is pretty tough us we don't have target test set)</li>
<li>a clipwise to framewise classification transit (which also introduced an additional "nocall" class with a huge influence on results, and label noise)</li>
</ul>
<p>Tackling those two required a validation/test set as close to the target distribution as possible in terms of SNR and nocall distributions. </p>
<p><strong>Validation</strong></p>
<p>Believe it or not, but with 6 records from <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/158877#911336" target="_blank">here</a> and 2 example audios I was able to get some correlation with the LB, even though it contained only 500-800 clips. More importantly, I was able to control the % of "nocall"s and pick the right threshold based on them. In the last two days I split those clips into 3 test sets with 50%, 60%, 80% "nocalls" to see possible scenarios on the private leaderboard (thanks organizers, private is almost equal to public) and to secure the score with my second submission. </p>
<p>As a target metric I've used F1 with <code>average="samples"</code>, pointed out by <a href="https://www.kaggle.com/cpmp" target="_blank">@cpmp</a>. Towards the end I also monitored the validation (default CV split) primary+secondary F1 score as another decision-making metric. Primary F1 or mAP alone was not enough.</p>
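<p>For reference, this metric is simply scikit-learn's sample-wise F1 on multilabel indicator matrices; a toy example (the extra "nocall" column is my assumption for scoring rows with no birds):</p>
<pre><code>import numpy as np
from sklearn.metrics import f1_score

# toy example: 3 clips, 4 classes (the last column acting as "nocall")
y_true = np.array([[1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 1]])
y_pred = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
score = f1_score(y_true, y_pred, average="samples")
</code></pre>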
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F805012%2Fa290c42c9808e69c800abb79f2889f22%2FScreenshot%20from%202020-09-16%2009-53-50.png?generation=1600239312223536&alt=media" alt=""></p>
<p>Adding separate nocall class didn't work for me for whatever reason.<br>
And yeah, I trusted leaderboard too.</p>
<p><strong>Domain shift</strong><br>
Assuming we don't have the transit problem, this might be solved simply with augmentations. I've spent too much time on this and right now I understand that was not productive. My final models contain different augmentations configurations: </p>
<ul>
<li>backgrounds (all from external data thread)</li>
<li>pink and brown noise</li>
<li>pitch shift</li>
<li>low pass filtering</li>
<li>spec augments (time and frequency masking)</li>
</ul>
<p>As the organizers confirmed that we could train on the two test examples, I tried to collect batch norm statistics from those as a domain adaptation technique, but it didn't work great. </p>
<p><strong>Clipwise > framewise transit p.1</strong></p>
<p>That's my favorite part!<br>
So what we have here - every time we crop 5 seconds clip we have a chance to crop a nocall clip. So labels become really noisy. Even more it's hard to crop secondary classes. Easy way to tackle this? Label smoothing 0.2. I don't remember when I became a fan of label smoothing but it works well with noisy labels on practice. But that's not serious.</p>
<p>People here tackled this task using energy-based cuts, and it really worked. So I tried both approaches: soft and hard. By soft I mean random sampling based on energy; by hard, removing everything below a normalized energy threshold.</p>
<p>But what if we can use our model to extract these labels? Having an avg/max pooling head gives a chance to get a free segmentation:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F805012%2Fb9938f7758cd413a47006ac98c5f62e0%2Fphoto5377374165436313567.jpg?generation=1600241278088906&alt=media" alt=""><br>
which you can use as soft labels or hard binarized labels. These methods showed really cool performance for primary labels. For secondary I kept label smoothing :)</p>
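<p>A small sketch of turning this free segmentation into crop-level soft or hard labels (my illustration of the idea, not the original code):</p>
<pre><code>import numpy as np

def crop_labels_from_segmentation(framewise_probs, start_frame, end_frame,
                                  hard=False, threshold=0.5):
    # framewise_probs: (n_frames, n_classes) "free segmentation" from the pooling head
    crop = framewise_probs[start_frame:end_frame]
    soft = crop.max(axis=0)                       # soft label for the sampled crop
    return (soft >= threshold).astype(np.float32) if hard else soft
</code></pre>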
<p><strong>Clipwise > framewise transit p.2 (The most important part)</strong></p>
<p>I found SED models comparably weak when trained on 5-sec clips, but as I trained them on longer clips (10-30 seconds) I noticed that, due to the reduction in label noise, they showed a much better training loss/mAP. Moreover, it was possible to run inference on 5-sec clips with nice performance. So I took a B4 EffNet, added the same attention decision-making head and … failed!</p>
<p>I spent a lot of time on this one. So why did Cnn14_DecisionLevelAtt trained on 30-sec clips work well on 5-sec clips? Playing with EffNet I found that the problem was the receptive field - it's too big (630 compared to 200 or so). With a wide receptive field the attention head was not able to build meaningful framewise feature maps. When I changed the EffNet kernels from 5x5 to 3x3 or reduced the number of blocks, it solved the problem, but the price was too big - losing the pretrained weights. </p>
<p>So what we have: </p>
<ul>
<li>longer input</li>
<li>clipping based on some weakly labeled probabilities</li>
<li>label smoothing for secondary</li>
</ul>
<p>Having these I've decided to simply run 5sec classification inference with no sophisticated postprocessing that might fail on private LB. As it always happens, I've found this too late so trained only two PANNs models (resnet38 and cnn14 mentioned above) on 12 and 15 second clips (128 mels) with mixup and didn't have time to configure them properly.</p>
<p><strong>Finally</strong></p>
<p>I optimized inference to run all my models (cnn14, resnet38 and a few effnets) in 20 mins with <code>kaiser_fast</code> resampling and around one hour with <code>kaiser_best</code>. I picked thresholds based on my test set (0.3) and some skepticism about it (0.4). I found I have 0.670+ private-score submissions from two weeks ago, which I can't explain. Probably, the Kaggle leaderboard is just another stochastic process.</p>
<p>This has been a long journey in which I not only solved many competitions and learned just hundreds of tools and tricks, but also met wonderful minds from all over the world and found job of my dreams. I wish to meet you in person at Kaggle Days once covid is over. See you!</p>
<p>Truth be told, as far as I remember, max and avg+max poolings worked better. </p>
<p>Yes, I tried to train on different input lengths - 5/10/15/30 seconds and monitored my test set. I found that PANNs CNN14 generalized well to 5 sec (!) inference but EffNet didn't. I've tried different configurations: padding types, kernel sizes, number of blocks repeats to find that the problem was in receptive field. </p> |
Cornell Birdcall Identification | 6th place solution and some thoughts | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! and congratulations <a href="https://www.kaggle.com/taggatle" target="_blank">@taggatle</a> for winning!</p>
<p>Thanks!</p>
<blockquote>
<p>Sorry but what is EMA, in the link of GWD comp also didn't talk the detail about it.</p>
</blockquote>
<p>It's Exponential Moving Average in weight space. OK, I'll add some explanation :)</p>
<p>Thanks! <br>
In fact, I saw you sharing a lot in the PANDA competition and wanted to behave like you :)</p>
<blockquote>
<p>Good to know that label self-distillation is working here.</p>
</blockquote>
<p>I now learned a new keyword: self-distillation, thanks!</p>
<p>Thanks!<br>
I also found you shared many interesting ideas, thanks for all those things 😊</p>
<p>Thanks! and congrats <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a> for winning the 3rd place!<br>
You also shared a lot, thanks for all those things :)</p>
<p>First of all, I would like to sincerely thank <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a>, <a href="https://www.kaggle.com/tomdenton" target="_blank">@tomdenton</a>, <a href="https://www.kaggle.com/holgerklinck" target="_blank">@holgerklinck</a>, and the members of kaggle team for hosting this competition. I had a lot of fun tackling on some of the challenging problems of machine learning thinking of the generative process of the data. Also many thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for actively sharing a lot of deep insights. It helped me a lot to come up with some good ideas and also made me convinced that I was in good direction.</p>
<p>Following the recent two competitions: <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment" target="_blank">PANDA</a> challenge and <a href="https://www.kaggle.com/c/global-wheat-detection" target="_blank">GWD</a> challenge, this competition was also about <strong>domain shift</strong> and <strong>noisy labels</strong>.<br>
The combination of these two challenging topics made this competition extremely difficult, and we struggled a lot to build a stable validation scheme. To be honest, contrary to <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/181499#1010570" target="_blank">expectation</a>, I couldn't find any good local validation scheme as the labels of the training dataset contain a lot of noise. <code>Trust LB</code> was also not a very good policy since the public LB was only 27% and we didn't know how the test set was divided. Instead, I took the policy of <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169230" target="_blank">PANDA's competitors</a> - <strong>Ignore CV, care about public LB, and trust methodology</strong>.</p>
<p>Although this competition was difficult as a data science competition, it was actually pretty close to a realistic setting and was full of problems that we often face in applying data science to real-world problem. Especially the combination of domain shift and noisy labels often happens (I think) when we are to use data from User Generated Contents(UGC) web service like Xeno Canto, YouTube, Twitter for training machine learning algorithms. Therefore, I think my solution is useful not only for this competition but also for those data science tasks related with UGC data, as it's basically focused on dealing with noisy labels and domain shift.</p>
<h2>Solution in three lines</h2>
<ul>
<li>3 stages of training to gradually remove noise in labels</li>
<li>SED style training and inference as I introduced <a href="https://www.kaggle.com/hidehisaarai1213/introduction-to-sound-event-detection" target="_blank">here</a></li>
<li>Ensemble of 11 EMA models trained with the whole dataset / whole <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/159970" target="_blank">extended dataset</a> to stabilize the result</li>
</ul>
<p>Code is available here: <a href="https://github.com/koukyo1994/kaggle-birdcall-6th-place" target="_blank">koukyo1994/kaggle-birdcall-6th-place</a>.<br>
Note: I've re-implemented the code from original repository since it's quite messy. However, I haven't checked whether whole pipeline works; if you find something wrong with the code, please let me know :)</p>
<h2>Motivation</h2>
<p>It's quite obvious that there is a huge gap between training dataset and test dataset: it was described by the host and can also be seen through submission. However, <em>how</em> they are different is not obvious. Therefore, the first thing I did was to understand by observation what kind of domain shifts are seen between training and test datasets.</p>
<p>Domain shift is an umbrella term and there are several problem classes of domain shift. The most famous one is <em>covariate shift</em>, where \( P_{train}(X)\neq P_{test}(X) \) and \( P_{train}(Y|X)=P_{test}(Y|X) \). This one is well-studied and several algorithms are proposed to deal with the situation, but is not the type of domain shift in this competition. Another problem class is <em>prior probability shift</em> or <em>target shift</em>, where \( P_{train}(Y)\neq P_{test}(Y) \) and \( P_{train}(X|Y)=P_{test}(X|Y) \), also not the type in this competition because \( P_{train}(X|Y) \) is not the same as \( P_{test}(X|Y) \). In fact, in this competition, multiple distribution shifts are present - shift in input space, shift in prior probability of labels, and shift in the function which connects \( X \) and \( Y \).</p>
<p>How should we tackle a problem with various distribution shifts? The answer is simple - <em>divide the difficulty</em>. As I wrote above, shifts were three folds:</p>
<p>1) shift in input space. For example, SNR difference or difference in sound collection environment (device/sampling rate/temperature/weather/…) between train and test. Occurence of non-target sound events is also a part of this shift.<br>
2) shift in prior probability of labels. Distribution difference of species or distribution difference of calltypes, or else.<br>
3) shift in the function which connects \( X \) and \( Y \). This has a very strong relation with label noise. Thinking of how the labels were created in train dataset and in test dataset, one could come up with the fact that Label Function(LF) of train dataset is completely different from that of test dataset. The former is annotations of the uploader (and can have large variation), whereas the latter is probably those of dedicated annotator(s) (and possibly have smaller variation).</p>
<p>I decided to address these one by one and applied the following techniques.</p>
<ul>
<li>For 1), providing all the possible variation for train dataset may help. This is done by data augmentation.</li>
<li>For 2), I just couldn't come up with smart ideas. I used ensemble of multiple models trained with datasets with different distributions of the labels to address this, but I think that is suboptimal.</li>
<li>For 3), correcting the labels of train dataset to make train LF closer to test LF can help.</li>
</ul>
<p>On the other hand, label noise is also a term that contains multiple problem classes. First, in this competition, labels of train dataset are provided as <em>weak labels</em>. As <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> pointed out <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/174774#972122" target="_blank">here</a>, weak label can be treated as noisy label if we change the point of view. Also there are some missing labels, which I'll explain later.</p>
<p>I also decided to address these one by one.</p>
<ul>
<li>For weak label as noisy label, I first train a model with long chunk and then use the prediction as corrected label. I at first tried to create strong labels but couldn't make the first stage model good enough, therefore I used the prediction to correct weak labels we have in train dataset.</li>
<li>For missing label, I also used the prediction of a model to find those.</li>
</ul>
<h2>First stage - build a model useful enough for addressing missing labels</h2>
<p>In this stage, I used PANNs model. I used some basic augmentations (<code>NoiseInjection</code>, <code>PitchShift</code>, <code>RandomVolume</code>) and used <code>secondary_labels</code>. The keys in this stage were two folds:</p>
<ul>
<li>train with long chunk(30s) so that it would include call events of the species in <code>primaly_label</code> and <code>secondary_labels</code></li>
<li>use attention pooling and max pooling to get weak prediction from <code>framewise_output</code></li>
</ul>
<p>Here are the reason behind.</p>
<h3>train with long chunk</h3>
<p>Assume we have <code>primary_label</code> of <code>birdA</code> and <code>secondary_labels</code> of <code>birdB</code> and <code>birdC</code>. Melspectrogram of the corresponding audio clip is something like the figure below (sorry for my poor drawing). If we use small window size, it may not include any sound events or include some sound events but not enough for the given labels. To make the model learn correctly, we need to make each label correspond to call event(s) of each species. For this reason, I used long chunk. Maybe it is better to use longer chunk like 1 minutes or more but I compromised to use 30s chunk considering the time for computation.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1489975%2Fcd841cb51b7b1fa4de6a6b2b7880e2d6%2FIMG_BBFA97950653-1.jpeg?generation=1600216062555033&alt=media" alt=""></p>
<h3>Combination of attention pooling and max pooling to get weak prediction</h3>
<p>This is something I shared <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/167611" target="_blank">here</a>.<br>
In the comments in <a href="https://www.kaggle.com/hidehisaarai1213/introduction-to-sound-event-detection" target="_blank">my SED notebook</a>, some said there were lots of false positives. This comes from the following reason.</p>
<p>The weak prediction from attention pooling is made by applying a self-attention filter on <code>framewise_outputs</code>. Since we only have weak labels, we calculate the loss between weak predictions and weak labels. The gradient is distributed to the self-attention layer and the pointwise classifier, but for the pointwise classifier, strong supervision only comes when self-attention puts high probability at that point. Therefore, the pointwise classifier is more likely to produce high probability values and to rely on the attention layer to reduce false positives. This is not good when we are interested in the output of the pointwise classifier (<code>framewise_outputs</code>).</p>
<p>On the contrary, max pooling suppresses high probability values coming out of the pointwise classifier, but it has the defect of being sensitive to impulse noise. Therefore, a combination of max pooling and attention pooling can produce a balanced prediction, and we can expect it to be a good one. For this reason, I used both - not by combining the weak predictions from each aggregation, but by taking each output, calculating a loss for each, and summing them up. Therefore, the loss function I used in this stage was like this:</p>
<pre><code>import torch.nn as nn

bce = nn.BCELoss()  # applied to sigmoid probabilities (clipwise outputs)
loss_att = bce(weak_pred_with_attention, label)   # weak prediction from attention pooling
loss_max = bce(weak_pred_with_maxpooling, label)  # weak prediction from max pooling
loss = 1.0 * loss_att + 0.5 * loss_max
</code></pre>
<h3>Summarize this stage</h3>
<ul>
<li>Single PANNs model</li>
<li>BCE on <code>clipwise_output</code> and also on maxpooled output.</li>
<li>Adam + CosineAnnealing, 55epochs training</li>
<li>train with randomly cropped 30s chunk</li>
<li>validate on randomly cropped 30s chunk</li>
<li>Augmentations on raw waveform<ul>
<li><code>NoiseInjection</code> (max noise amplitude 0.04)</li>
<li><code>PitchShift</code> (max pitch level 3)</li>
<li><code>RandomVolume</code> (max db level 4)</li></ul></li>
</ul>
<h3>Get oof prediction and use it to find missing labels</h3>
<p>With the training procedure above, the model would get around 0.575 - 0.578. In fact, this is the weight I used in the public notebook.<br>
I trained 5 folds and got oof predictions on the whole training set. Then I used these oof predictions to find missing labels.</p>
<p>Missing labels are more likely to be found in samples that do not have <code>secondary_labels</code>. It is up to the uploader to fill in <code>secondary_labels</code> or <code>background</code>, so some uploaders may not feel like filling those in. Therefore, I picked samples without <code>secondary_labels</code> and used their oof predictions to add labels whenever the probability of a species other than their <code>primary_label</code> was over 0.9.</p>
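<p>A sketch of this missing-label mining (the names are mine; the real pipeline works on the stage-1 oof predictions of samples without <code>secondary_labels</code>):</p>
<pre><code>import numpy as np

def find_missing_labels(oof_probs, primary_idx, species, threshold=0.9):
    # oof_probs: (n_classes,) out-of-fold clipwise prediction for one recording
    # add any species other than the primary one predicted with probability over 0.9
    return [species[i] for i in np.where(oof_probs > threshold)[0] if i != primary_idx]
</code></pre>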
<h2>Second stage - build a model with additional labels to get stronger labels</h2>
<p>In this stage, I used a SED model with a ResNeSt encoder. The difference between the first stage and the second stage is not that big - only the model, the presence of the found labels (the labels obtained from the oof predictions of the first stage), and the input. I started to use a 3-channel input. The first channel was a normal log-melspectrogram and the second channel was PCEN. The third channel was also a log-melspectrogram, but instead of using <code>librosa.power_to_db(melspec)</code>, I used <code>librosa.power_to_db(melspec ** 1.5)</code>. The idea of using a different input for each channel comes from <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/170959" target="_blank">this post</a>. The chunk size was also reduced to 20s because of the GPU memory limitation.</p>
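<p>A sketch of building this 3-channel input with librosa (parameters such as <code>n_mels</code> and <code>hop_length</code> are assumptions, and per-channel normalization is omitted):</p>
<pre><code>import numpy as np
import librosa

def three_channel_input(y, sr=32000, n_mels=128, hop_length=512):
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels, hop_length=hop_length)
    ch0 = librosa.power_to_db(mel)                                      # plain log-mel
    ch1 = librosa.pcen(mel * (2 ** 31), sr=sr, hop_length=hop_length)   # PCEN
    ch2 = librosa.power_to_db(mel ** 1.5)                               # log-mel of mel ** 1.5
    return np.stack([ch0, ch1, ch2]).astype(np.float32)                 # (3, n_mels, time)
</code></pre>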
<p>I also changed the attention pooling slightly given the <a href="https://www.kaggle.com/hidehisaarai1213/introduction-to-sound-event-detection/comments" target="_blank">advice of </a><a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> to use <code>torch.tanh</code> instead of <code>torch.clamp</code>.</p>
<p>The result of this stage got around 0.60. I also trained 5folds model and this time I got oof prediction of <code>framewise_outputs</code>.</p>
<p>To summarize,</p>
<ul>
<li>SED model with ResNeSt50 encoder, attention pooling head of PANNs (<code>torch.clamp</code> -> <code>torch.tanh</code>)</li>
<li>BCE on <code>clipwise_output</code> and also on maxpooled output</li>
<li>Adam + CosineAnnealing, 75epochs training</li>
<li>Add additional <code>secondary_labels</code> found in stage1</li>
<li>3channels input - [normal logmel, PCEN, <code>librosa.power_to_db(melspec ** 1.5)</code>]</li>
<li>train with randomly cropped 20s chunk</li>
<li>validate on randomly cropped 30s chunk</li>
<li>Augmentations on raw waveform<ul>
<li><code>NoiseInjection</code> (max noise amplitude 0.04)</li>
<li><code>PitchShift</code> (max pitch level 3)</li>
<li><code>RandomVolume</code> (max db level 4)</li></ul></li>
</ul>
<h2>Third stage - build a model with oof framewise_outputs</h2>
<p>Despite training with a long chunk size and adding the missing labels, the label noise problem was far from solved. The SNR level used to decide whether a call event is present differs between the training set and the test set. In fact, it also differs between samples of the training set, because each annotator (uploader) basically had their own labeling criteria. For this reason, I used the oof prediction of <code>framewise_outputs</code> from the second stage to further correct the labels of the training dataset. </p>
<p>In this stage, I also randomly cropped a 20s chunk to build a batch. At the same time I cropped the corresponding part of the predicted <code>framewise_outputs</code> and applied a threshold to them (the threshold varies from 0.3 to 0.7). After thresholding, I max-pool the thresholded prediction along the time axis to get a chunk-level prediction. Since this prediction can still be noisy and may contain false positives, I apply <code>logical_and</code> between the predicted labels and the provided labels. In this way, I got a corrected chunk-level label and used that for training.</p>
<p>Here's the pseudo-code of the process above</p>
<pre><code>import numpy as np

y_batch = y[start_index:end_index]  # randomly cropped 20 s audio chunk
# corresponding part of the stage-2 oof framewise prediction
soft_label_batch = soft_label[start_index_for_label:end_index_for_label]  # (n_frames, 264)
thresholded = (soft_label_batch >= threshold).astype(int)
weak_pred = thresholded.max(axis=0)  # (264,) chunk-level prediction
corrected_label = np.logical_and(label, weak_pred)  # (264,) keep provided labels that also fire
</code></pre>
<p>In this stage, I also tried EfficientNet-B0 encoder and FocalLoss. Combination of ResNeSt encoder and FocalLoss didn't work well, whereas EffNet-B0 and FocalLoss worked well on public LB.</p>
<p>All the other settings were the same as that of second stage. After this, I got around 0.61x score on public LB.</p>
<p>To summarize,</p>
<ul>
<li>SED model with ResNeSt50 encoder, attention pooling head of PANNs (<code>torch.clamp</code> -> <code>torch.tanh</code>) / EfficientNet-B0 encoder</li>
<li>BCE on <code>clipwise_output</code> and also on maxpooled output / FocalLoss on <code>clipwise_output</code> and also on maxpooled output for EfficientNet-B0</li>
<li>Adam + CosineAnnealing, 75epochs training</li>
<li>Add additional <code>secondary_labels</code> found in stage1</li>
<li>Correct labels using the prediction of stage2 model.</li>
<li>3channels input - [normal logmel, PCEN, <code>librosa.power_to_db(melspec ** 1.5)</code>]</li>
<li>train with randomly cropped 20s chunk</li>
<li>validate on randomly cropped 30s chunk</li>
<li>Augmentations on raw waveform<ul>
<li><code>NoiseInjection</code> (max noise amplitude 0.04)</li>
<li><code>PitchShift</code> (max pitch level 3)</li>
<li><code>RandomVolume</code> (max db level 4)</li></ul></li>
</ul>
<h2>Ensemble</h2>
<p>With the corrected chunk level label, I trained the model with the whole dataset and use EMA model (using the implementation <a href="https://pytorch.org/docs/stable/optim.html#stochastic-weight-averaging" target="_blank">here</a>) for inference. This is a technique also used in <a href="https://www.kaggle.com/c/global-wheat-detection/discussion/172458" target="_blank">GWD competition</a>. </p>
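<p>EMA here means an exponential moving average of the model weights kept alongside the trained model; a sketch with the linked PyTorch utility (the decay value is an assumption):</p>
<pre><code>import torch
from torch.optim.swa_utils import AveragedModel

def make_ema(model, decay=0.999):
    # exponential moving average of the weights, updated after each optimizer step
    def ema_avg(avg_param, param, num_averaged):
        return decay * avg_param + (1.0 - decay) * param
    return AveragedModel(model, avg_fn=ema_avg)

# usage inside the training loop (hypothetical names):
#   ema_model = make_ema(model)
#   ...
#   optimizer.step()
#   ema_model.update_parameters(model)
</code></pre>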
<p>I prepared models with different thresholds (the threshold on the <code>framewise_outputs</code> of the oof prediction) to make the ensemble robust. Also important was the use of the extended dataset. It didn't get a better result on the public LB by itself, but when I used it in the ensemble, it bumped up the score.</p>
<h2>Things that didn't work for me</h2>
<ul>
<li>Mixup</li>
<li>Calltype classification (781 class)</li>
<li>noisy student training</li>
<li>Larger models of efficientnet (b1, b2, b3…)</li>
<li>etc…</li>
</ul>
<h2>Things that worked but doesn't make sense to me</h2>
<p>When I changed the PERIOD used for inference, the result changed greatly. Basically, the larger PERIOD is, the better the result. I just couldn't figure out why.</p>
<h2>Things that I wanted to try but couldn't</h2>
<p>There are lots of things I couldn't do due to the time limitation </p>
<ul>
<li>Use a variety of models</li>
<li>Further refinement of the labels of train dataset/train extended dataset</li>
<li>Use of auxiliary predictor to predict longitude/latitude/elevation and use the prediction to correct the main classifier</li>
<li>Post-processing to refine prediction using species correlation information (I couldn't get the API key for ebird.org therefore I couldn't collect correlation information)</li>
<li>Mixing background noise</li>
<li>etc…</li>
</ul>
<h2>The thing that helped me a lot during competition</h2>
<p>I created a simple streamlit app to check the audio data. I mainly used this to check the effect of augmentations or to check the quality of SED models prediction. This helped me a lot to figure out the major problems of this competition</p>
<p><a href="https://github.com/koukyo1994/streamlit-audio" target="_blank">https://github.com/koukyo1994/streamlit-audio</a></p>
<p>Later I learned <a href="https://www.kaggle.com/fkubota" target="_blank">@fkubota</a> also made an app with similar functionality. I didn't use this but it seems it's better than mine.</p>
<p><a href="https://github.com/fkubota/spectrogram-tree" target="_blank">https://github.com/fkubota/spectrogram-tree</a></p>
<p>Congratz again <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> !</p>
<blockquote>
<p>Although I didn't believe in the SED idea (we didn't try it)</p>
</blockquote>
<p>Haha, I somehow knew this during the competition 😄</p>
<p>Thank you!</p>
<blockquote>
<p>the noisy student method</p>
</blockquote>
<p>I actually thought of using noisy student method but couldn't make it work well 😑</p>
<blockquote>
<p>I want to congratulate you for sharing so much as well.</p>
</blockquote>
<p>Thanks! This word is also for you! Thanks for actively sharing lots of things on discussion!</p>
<p>Thanks! I used my brain a lot to come up with the ideas😎</p>
<p>Thank you! I'll update it later to put some more detail and make it easier to read :)</p>
<p>Thank you for hosting such an interesting competition! I hope it helps your work :)</p>
<blockquote>
<p>Are the values in thresholded prediction some 0 or 1 (actually True or False?), how to get max-pooling on them? If I'm wrong, could you tell more about how to apply threshold and how to get chunk level prediction.</p>
</blockquote>
<p>I updated the post, check out above :)</p>
<blockquote>
<p>In Third Stage, after you correcting the chunk-level label, is the question turn to strong-label-SED? or it's still a weak-label manner ?</p>
</blockquote>
<p>Yes, I corrected the chunk-level label. I tried to use the prediction as a label for strong-label SED, but it turned out the model that produced the prediction wasn't good enough, so I quit using it in a strong-label manner and switched to using it just to correct the given weak labels.</p>
Cornell Birdcall Identification | 7th place solution (writeup with code) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>I'm curious if you tried pooling on the penultimate representations, rather than the logits - ie, pool, then apply a final dense layer.</p>
</blockquote>
<p>I think I tried that at some point and it produced worse results, but that was on a different dataset. Maybe I should try again; I'm curious now. I'll post here if I do.</p>
<blockquote>
<p>(Also, the <a href="http://www.justinsalamon.com/uploads/4/3/9/4/4394963/mcfee_autopool_taslp_2018.pdf" target="_blank">http://www.justinsalamon.com/uploads/4/3/9/4/4394963/mcfee_autopool_taslp_2018.pdf</a> is a nice point of reference for the 'inbetween' pooling, if you haven't seen it.)</p>
</blockquote>
<p>Yes, I'm aware of it, but it's a good pointer! I only learned about it after BirdCLEF 2018. Their softmax-based pooling is highly related to log-mean-exp pooling, the latter was just around longer (Equation 6 in <a href="http://openaccess.thecvf.com/content_cvpr_2015/html/Pinheiro_From_Image-Level_to_2015_CVPR_paper.html" target="_blank">Pinheiro et al., 2015</a>).</p>
<p>First of all, congratulations to all the participants, and thanks a lot to the organizers for this competition! It was great fun!</p>
<p>In short, our approach was based on <a href="https://github.com/f0k/birdclef2018" target="_blank">Jan's Sound Event Detection model from BirdCLEF 2018</a>, replacing the predictor with a pretrained CNN from <a href="https://github.com/qiuqiangkong/audioset_tagging_cnn" target="_blank">Qiuqiang Kong's PANN repository</a>. It was trained on 30-second snippets from the official training set extended with <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/159970" target="_blank">Rohan Rao's xeno-canto crawls</a>, using binary cross-entropy against all labeled foreground and background species. Data was augmented by choosing random weights for downmixing the left and right channel to mono (for stereo files), and by mixing in bird-free background noise from the <a href="https://doi.org/10.5285/be5639e9-75e9-4aa3-afdd-65ba80352591" target="_blank">Chernobyl BiVA</a> and <a href="https://zenodo.org/record/1205569" target="_blank">BirdVox-full-night</a> datasets. Inference followed a two-stage procedure that first established a set of species for the recording, using 20-second windows and a threshold of 0.5, then looked for these species in 5-second windows with a threshold of 0.3. Code is <a href="https://github.com/f0k/kagglebirds2020" target="_blank">available on github</a>.</p>
<p>The following sections will explain things in more detail, spread over additional posts due to Kaggle's post size limit. Choose "Sort by: Oldest" to see them in their original order.</p>
<p>If you prefer a slide deck over text, feel free to click through <a href="https://docs.google.com/presentation/d/10m0W13sJozYmfWPcPlaBg1j-MOW7p6w_-Tp3pG5KGE0/edit?usp=sharing" target="_blank">some slides explaining what we did</a>.</p>
<h2>Data</h2>
<p>We used both the official training data and <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/159970" target="_blank">Rohan Rao's xeno-canto crawls</a>, totalling 43,499 recordings for the 200 species. We reserved about 10% for validation (splitting such that no recordist is part of both the training and validation set). Labels contain both the annotated foreground and background species for a recording, without distinguishing them (setting lower target values or lower weights for background species did not improve results).</p>
<p>All recordings are predecoded to 22050 Hz 16-bit stereo or mono wave files. As part of the data loading pipeline, stereo files are <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/datasets/kagglebirds2020/__init__.py#L259" target="_blank">downmixed to mono</a> with a randomly uniform weight \(p\) for the left channel, and \(1-p\) for the right channel. This augmentation slightly improves results over downmixing them with fixed weights during decoding.</p>
<p>To make the models work under low signal-to-noise ratios (i.e., the conditions found in the test files), we mix them with excerpts from the <a href="https://doi.org/10.5285/be5639e9-75e9-4aa3-afdd-65ba80352591" target="_blank">Chernobyl BiVA</a> and <a href="https://zenodo.org/record/1205569" target="_blank">BirdVox-full-night</a> datasets, which Paul suggested and hunted down. These datasets are precisely annotated with bird occurrences, so we can extract all parts void of birds. Mixing is <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/datasets/kagglebirds2020/__init__.py#L195" target="_blank">done on-the-fly</a> as part of the data loader. I started out carefully, but the best setting turned out to be mixing <em>every</em> training example with background noise, drawing a value \(p \in [0,1)\) and scaling the noise with \(p\) and the bird recording excerpt with \(1-p\). For two of the models in the final ensemble, I also set \(p=1\) with 1% probability, setting the labels to all zero in this case.</p>
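<p>A condensed sketch of these two waveform-level augmentations (the actual implementations are linked above; the uniform mixing weight follows the description):</p>
<pre><code>import numpy as np

def random_downmix(stereo):
    """Downmix a (2, n_samples) stereo array to mono with a random left-channel weight p."""
    p = np.random.uniform(0.0, 1.0)
    return p * stereo[0] + (1.0 - p) * stereo[1]

def mix_background(bird, noise):
    """Mix a bird excerpt with an equally long bird-free noise excerpt,
    scaling the noise by p and the bird recording by 1 - p."""
    p = np.random.uniform(0.0, 1.0)
    return (1.0 - p) * bird + p * noise
</code></pre>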
<p>For model selection (explained further below), we also made use of the two official <code>example_test_audio</code> recordings, as well as the six 10-minute North American BirdCLEF 2020 validation set recordings <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/158877#911091" target="_blank">posted on the forum</a>. However, they only contain a fraction of the relevant species, contain species not part of the competition, and are not accurately labeled (missing several calls).</p>
<h2>Training</h2>
<p>Ideally, the model would be trained on complete recordings – this is the only way we can be sure all the targets are correct. If we pick a random excerpt, it is not guaranteed that all birds annotated to be present in the recording are also audible in the chosen excerpt. However, the longest recordings are 3 hours, which is impractical. As for BirdCLEF 2018, we train on randomly selected 30-second excerpts instead, hoping that most annotated birds will be audible at least once. Too short files are looped to make up 30 seconds (for BirdCLEF, we selected mini-batches among files of similar length to avoid unneeded looping, but PyTorch runs out of memory if the input tensors change shapes all the time). Validation uses the central 30 seconds of a recording.</p>
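<p>The random 30-second excerpt sampling, with looping of files that are too short, could look roughly like this (a simplified sketch, not the repository code):</p>
<pre><code>import numpy as np

def random_excerpt(wav, sr=22050, seconds=30):
    """Pick a random `seconds`-long excerpt; loop too-short files to fill it."""
    target = seconds * sr
    reps = int(np.ceil(target / len(wav)))
    if reps > 1:
        wav = np.tile(wav, reps)
    start = np.random.randint(0, len(wav) - target + 1)
    return wav[start:start + target]
</code></pre>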
<p>Training uses ADAM with mini-batches of 16 examples, an initial learning rate of 1e-3, and PyTorch's default settings for beta1, beta2 and epsilon. The validation loss is computed every 1000 update steps. If it does not improve over the current best value for 10 such evaluations in a row, the learning rate is reduced to a tenth, and training is continued. Training is stopped when the learning rate reaches 1e-6.</p>
<p>For the PANN, I tried reducing the learning rate for the pretrained layers to 1% or 10% compared to the novel layers or to freeze the pretrained layers for some time, but it turned out that using the full learning rate for all the layers from the start works best.</p>
<p>Training takes about 6h for a vanilla model, 7h for a residual network, and 8h for a PANN, on an RTX 2080 Ti.</p>
<h2>Inference</h2>
<p>The challenge required two types of inference:</p>
<ol>
<li>Predicting a set of species for every non-overlapping 5-second window of a 10-minute recording (for recording sites 1 and 2)</li>
<li>Predicting a set of species for a full 10-minute recording (for recording site 3)</li>
</ol>
<p>Since the network is built to produce local predictions that are then pooled over time, we can simply pass a complete recording through the network up to the pooling operation, and then either</p>
<ol>
<li>Apply pooling with non-overlapping 5-second windows</li>
<li>Apply pooling over the full recording</li>
</ol>
<p>The first case requires some trickery, since the network produces an uneven number of predictions within 5 seconds (5x70/9=38.889 for the vanilla model and ResNet, and 5x100/32=15.625 for the PANN). I solved it by having the models compute and report their receptive field (size, padding and stride) wrt. the audio samples, using this to <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/predict_kagglebirds2020.py#L252-L258" target="_blank">compute a time point for each prediction</a>, and then <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/predict_kagglebirds2020.py#L87" target="_blank">placing the pooling boundaries</a> appropriately. Note that this produces similar results as presegmenting the audio into 5-second windows and passing them through the network, but without artifacts introduced by zero-padding for the PANN, and with considerably less computation.</p>
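<p>A rough sketch of this idea: derive a time stamp for each frame-level prediction from the model's receptive field and stride, then bucket the frames into non-overlapping 5-second windows before pooling. The numbers passed in would come from the model; the helpers below are only illustrative.</p>
<pre><code>import numpy as np

def logmeanexp(x, a=1.0, axis=0):
    # naive version of the log-mean-exp pooling used in the model
    return np.log(np.mean(np.exp(a * x), axis=axis)) / a

def frame_times(n_frames, stride, receptive_field, padding, sr=22050):
    """Center time (seconds) of each frame-level prediction,
    derived from the receptive field w.r.t. the raw audio samples."""
    centers = np.arange(n_frames) * stride + receptive_field / 2.0 - padding
    return centers / sr

def pool_into_windows(logits, times, window=5.0):
    """logits: (n_frames, n_classes) -> (n_windows, n_classes), pooling all frames
    whose center falls into the same non-overlapping `window`-second bin."""
    bins = (times // window).astype(int)
    return np.stack([logmeanexp(logits[bins == b]) for b in np.unique(bins)])
</code></pre>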
<p>Unfortunately, both types of inference do not match what the model was trained for: to produce good labels for 30-second audio excerpts. Since we use global log-mean-exp pooling, the output of the pooling stage is not completely independent of the input length; it is still somewhat resembling a global mean pooling:</p>
<ol>
<li>A short false positive that would stay under the threshold in a 30-second excerpt may score above the threshold for a 5-second window.</li>
<li>If we feed 10-minute recordings, birds that are locally detected in some parts of the recording may still fall under the threshold after pooling.</li>
</ol>
<p>After 8 days of optimizing the models (4.5 days before the end of the challenge), I changed inference for the second case:</p>
<p> 2. Apply pooling with 20-second windows overlapping by 50%, then take the maximum over the pooled windows for each species.</p>
<p>Using 20 seconds instead of 30 seconds seemed to produce slightly better results on the validation data. However, this new scheme did not change my score on the public leaderboard – probably because it only affects the recordings from site 3, which may have a small influence on the total score (depending on how they are weighted). (On the private set, the score improved from 0.656 to 0.657, also unimportant.)</p>
<p>After some more experimentation with models (improving the score to 0.587 by switching to PANNs), one day and 15 minutes before the deadline, I also changed inference for the first case (recording sites 1 and 2):</p>
<ol>
<li>First predict species globally for the 10-minute recording, using the new inference scheme developed for site 3. Then predict species locally by pooling in non-overlapping 5-second windows, but <strong>using a detection threshold of 0.3 instead of 0.5</strong>, and <strong>only keeping species that were predicted globally</strong> for that recording.</li>
</ol>
<p>The idea is that we should use longer windows to reliably detect birds, but once we are sure they are present in a recording, we can increase the classifier's sensitivity to find all occurrences.<br>
This gave a real boost from 0.587 to 0.600 on the public leaderboard. I thought about spending the second submission of that day on reducing the threshold to 0.2, but tried submitting an improved ensemble instead, which then finished uploading 30 seconds too late.</p>
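<p>Schematically, the final two-stage decision rule is the following (a sketch that assumes the window-pooled probabilities have already been computed; thresholds as in the text):</p>
<pre><code>import numpy as np

def two_stage_predict(global_probs, local_probs, classes,
                      global_thr=0.5, local_thr=0.3):
    """global_probs: (n_20s_windows, n_classes) pooled over 20-second windows,
    local_probs: (n_5s_windows, n_classes) pooled over 5-second windows,
    both already passed through the sigmoid."""
    present = global_probs.max(axis=0) >= global_thr        # recording-level species set
    keep = (local_probs >= local_thr) & present[None, :]    # sensitive local pass, restricted
    return [[c for c, k in zip(classes, row) if k] for row in keep]
</code></pre>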
<p>Regarding computational efficiency, the final ensemble consisting of 3 PANNs took 4 minutes to run over the test set on Kaggle (but I do not know if and by how much the submission notebook evaluation is parallellized over subsets of the data).</p>
<h2>Model selection</h2>
<p>Most of our submissions used ensembles of 3 to 5 models selected by their F1 score on the validation set (10% of the training data from xeno-canto). The validation set may not be a good indicator for the challenge test set performance since a) it consists of focal recordings, not soundscapes, and b) it only comes with file-wise labels, not 5-second chunks or smaller. In addition, c) its distribution of species probably differs significantly from the test data.</p>
<p>In addition to the validation set, we used the 8 annotated soundscape excerpts (2 from the challenge dataset, 6 from BirdCLEF 2020) to verify aspects such as the inference strategy. The 8 soundscape excerpts are also not a good indicator for the challenge performance since a) they only cover a small subset of the species and b) they are not thoroughly annotated.</p>
<p>On the final day, I tried to improve model selection by setting up 6 different tasks:</p>
<ol>
<li>Prediction in 5-second windows on the 8 annotated soundscape excerpts</li>
<li>Predicting file-level labels for the 8 annotated soundscape excerpts</li>
<li>Predicting file-level labels on the validation set</li>
<li>As 3., but recordings augmented with background noise from Chernobyl and BirdVox</li>
<li>Predicting file-level labels only on those files from the validation set which have annotated background species (the other files may not be annotated completely)</li>
<li>As 5., but recordings augmented with background noise from Chernobyl and BirdVox</li>
</ol>
<p>I evaluated all 16 previous submissions on these 6 tasks and looked for some correlation of precision, recall or F1 score with the submissions' public leaderboard performance:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2546802%2F50bc581848f305ec9036bbca2313efed%2Fwriteup_correlation.png?generation=1608567691969180&alt=media"><br>
There was no clear indicator. With a combination of two scores that was at least almost monotonically increasing (when ordered by public leaderboard scores), I selected an ensemble that improved public leaderboard score from 0.600 to 0.601.</p>
<p>Disappointed, for the last submission I chose another ensemble based on validation set performance alone. I hesitated whether to try reducing the prediction threshold from 0.3 to 0.2, but was afraid to ruin the submission. It scored 0.596 on the public leaderboard (it would have made the 4th place on the private set).</p>
<h2>What did not work</h2>
<ul>
<li>Khaled ran several experiments using his <a href="https://arxiv.org/abs/1909.02859" target="_blank">receptive-field-regularized CNNs</a> which he successfully employed in past DCASE challenges, but unfortunately was not able to surpass the other models.</li>
<li>Lukas contributed an <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/datasets/kagglebirds2020/__init__.py#L430" target="_blank">augmentation using colored noise</a>, but it turned out not to improve results over augmenting with the Chernobyl and BirdVox recordings. In addition, he worked on pretraining a model with self-supervised learning.</li>
<li>Using shorter excerpts (and larger batches) made results worse, longer excerpts (and shorter batches) did not improve results.</li>
<li><a href="https://github.com/CPJKU/kagglebirds2020/blob/master/experiments/train_kagglebirds2020.sh#L405-L414" target="_blank">Reducing the frequency range</a> in order to have higher frequency resolution did not improve results.</li>
<li>Augmentation via a cheap pitch shift implemented by <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/audioclass/frontend.py#L297-L303" target="_blank">randomly warping the mel filterbank</a> did not improve results.</li>
<li>Training <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/experiments/train_kagglebirds2020.sh#L272-L278" target="_blank">only on high-quality recordings</a>, or <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/experiments/train_kagglebirds2020.sh#L69-L74" target="_blank">weighting the loss by the quality rating</a> of the recording made results worse.</li>
<li>Mixup made results worse (vanilla model, validation set results: 0.661 without mixup, 0.653 with alpha=0.1, 0.643 with alpha=0.3).</li>
<li><a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/metrics/__init__.py#L171-L174" target="_blank">Label smoothing</a> did not have a consistent effect (PANN, validation set results: 0.690 without, 0.682 with 1% smoothing, 0.687 with 5% smoothing).</li>
<li><a href="http://proceedings.mlr.press/v80/furlanello18a.html" target="_blank">Born-Again Neural Networks</a> helped for BirdCLEF 2018, but not here, at least not for the PANN (validation set results: 0.690 originally, 0.685 as a born-again network).</li>
</ul>
<h2>Acknowledgements</h2>
<p>Apart from the challenge hosts, we are indebted to the following people:</p>
<ul>
<li>Hidehisa Arai for his <a href="https://www.kaggle.com/hidehisaarai1213/inference-pytorch-birdcall-resnet-baseline/#Prediction-loop" target="_blank">template on computing predictions</a> that falls back to a replacement test dataset at development time, when the actual test data is not available</li>
<li>Alex Shonenkov for compiling this <a href="https://www.kaggle.com/shonenkov/birdcall-check" target="_blank">replacement test dataset</a></li>
<li>Rohan Rao for crawling xeno-canto to compile his <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/159970" target="_blank">additional training dataset</a></li>
<li>All competitors who raised the bar on the leaderboard, and everyone discussing on the forum!</li>
</ul>
<h2>Afterthoughts</h2>
<p>After the challenge finished, I was anxious to check if reducing the threshold from 0.3 to 0.2 would have made a difference. Turns out I had a knob to make money that I didn't try due to a lack of remaining submissions 😃<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2546802%2F70b5a32bbc3cd1b741c6696251c54c25%2Fmoneyknob.png?generation=1600330942624036&alt=media" alt=""><br>
A threshold of 0.2 would have ended on the third place, a threshold of 0.1 or less (even 0.02) would have made the second place.</p>
<p><em>Were my submissions close to being in the money?</em><br>
Arguably yes, the models were fine, it only took changing a single hyperparameter in the inference pipeline that I was fully aware of. But as CPMP mentioned, that's probably true of other submissions as well!</p>
<p><em>Was I close to submitting a winning solution?</em><br>
This thought has given me some unrest after the competition, but I think I wasn't.<br>
After my first try of the new inference strategy, there were only three submissions remaining. I thought about trying a reduced threshold, but reducing the threshold in the old inference strategy had yielded worse results (despite an improved F-score on the 8 annotated soundscapes available for testing), so I was pessimistic about this. If I had tried anyway, I probably wouldn't have spent the last two submissions on reducing it far enough to be in the money with that ensemble.<br>
On the last submission, minutes before the deadline, I considered submitting the last new ensemble with threshold 0.2, but was afraid to change two things at the same time and ruin it. It would have yielded 0.599 on the public set. I would have thought I ruined it and not have selected it for the final standing.<br>
If I had started the whole competition earlier, I might have fallen for optimizing too much on the public set and missing the top ten. So overall I'm glad how it played out!</p>
<h2>Team</h2>
<p>We are three PhD students (Khaled Koutini, Lukas Martak, Paul Primus) and a postdoc (Jan Schlüter) from the <a href="https://www.jku.at/en/institute-of-computational-perception/" target="_blank">Institute of Computational Perception</a> at Johannes-Kepler-University Linz, Austria. Since 20 of our 23 submissions were done by me (Jan), most of this is written from my perspective, but each of us made important contributions that I will highlight below.</p>
<h2>Network architecture</h2>
<p>The general model architecture looks as follows:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2546802%2Faf6861f1136591254f287a776e3682f1%2Fwriteup_network.png?generation=1608311897835230&alt=media" alt=""></p>
<p>From an (arbitrarily long) monophonic raw audio recording, the part denoted as "Frontend" computes a spectrogram-like representation. In the next step, a Fully-Convolutional Network (FCN) processes this representation into a time series of logits for every class. When passed through a sigmoid, these would give us local predictions at every time step. Since we do not have local labels to train these, only file-wise labels, we apply a global pooling operation (over time) to obtain a single logit per class. Passed through a sigmoid, these serve as our file-wise predictions.</p>
<p>This layout matches what we used for <a href="https://github.com/f0k/birdclef2018" target="_blank">BirdCLEF 2018</a>, except that we are using a sigmoid instead of a softmax now (in BirdCLEF, evaluation was based on the ranking of species probabilities, for which a softmax helped). It also matches <a href="https://www.kaggle.com/hidehisaarai1213/introduction-to-sound-event-detection" target="_blank">what Hidehisa Arai explained</a> early on in the competition.</p>
<p>We still have several options for the three components: The frontend, the local predictor, and the global pooling operation.</p>
<h3>Frontend</h3>
<p>PyTorch architecture listing:</p>
<pre><code>(frontend): Sequential(
(filterbank): Sequential(
(stft): STFT(winsize=1024, hopsize=315, complex=False)
(melfilter): MelFilter(num_bands=80, min_freq=27.5, max_freq=10000.0)
)
(magscale): Log1p(trainable=True)
(denoise): SubtractMedian()
(norm): TemporalBatchNorm(
(bn): BatchNorm1d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
</code></pre>
<p>The frontend takes in audio recordings at a sample rate of 22050 Hz. (While the test recordings were at 32000 Hz, not all training recordings go that high, so I resampled everything to 22050 Hz.)</p>
<p>It consists of:</p>
<ul>
<li><a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/audioclass/frontend.py#L171" target="_blank">STFT</a>:<br>
window size of 1024 samples, hop size of 315 samples (resulting in 22050/315=70 frames per second), with Hann window, keeping only the magnitudes.</li>
<li><a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/audioclass/frontend.py#L268" target="_blank">mel filterbank</a>:<br>
80 triangular filters from 27.5 Hz to 10 kHz (not going up to 11025 Hz on purpose to leave some room for pitch shifting)</li>
<li><a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/audioclass/frontend.py#L72" target="_blank">nonlinear magnitude scaling</a>:<br>
Magnitudes are compressed by passing them through \(y = \log(1 + 10^a x)\), where \(a\) is initialized to zero and learned by backpropagation.<br>
I got similar, but maybe slightly worse results with \(y = x^{\sigma(a)}\), where \(a\) is initialized to zero (resulting in \(y = \sqrt{x}\)) and learned by backpropagation.</li>
<li><a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/audioclass/frontend.py#L163" target="_blank">denoising</a>:<br>
The different recordings have very different background noise floors, both due to the different environments and due to different recording equipment. Human listeners are quite good at adapting to this within a few seconds. To make it easier for the network, I unify the recordings somewhat by subtracting the median over time from each frequency band (separately for each recording or excerpt). I also tried <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/audioclass/frontend.py#L114" target="_blank">Per-Channel Energy Normalization</a>, but subtracting the median worked better (and is faster). For PANNs this step is skipped, as it wouldn't match the input they are pretrained on.</li>
<li><a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/audioclass/frontend.py#L53" target="_blank">normalization</a>:<br>
To ensure inputs are in a reasonable range (and stay in a reasonable range when the magnitude scaling changes during training), each frequency band is normalized over time with Batch Normalization.</li>
</ul>
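<p>The learnable magnitude compression and the median-subtraction denoising from the list above can be sketched in PyTorch as follows (a simplified paraphrase of the linked code, not a verbatim copy):</p>
<pre><code>import torch
import torch.nn as nn

class Log1p(nn.Module):
    """y = log(1 + 10**a * x) with a learned scaling exponent a, initialized to zero."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return torch.log1p(10 ** self.a * x)

class SubtractMedian(nn.Module):
    """Subtract the per-recording median over time from each frequency band."""
    def forward(self, x):            # x: (batch, channels, mel_bands, time)
        return x - x.median(dim=-1, keepdim=True).values
</code></pre>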
<h3>Local predictor</h3>
<p>The purpose of the local predictor is to take the spectrogram produced by the frontend, and produce 200 time series of logits, one for each bird species.<br>
The spectrogram can be regarded as an 80 pixel high one-channel image, and the output as a 1 pixel high 200-channel image. So what we need in between is a series of convolutions and pooling operations (i.e., a fully-convolutional network) that reduces the image height from 80 to 1, and produces 200 channels.<br>
For illustrative purposes, assume we use a single convolutional layer for that: It would need 200 filters of height 80, but we are still free to choose the width (corresponding to the temporal context used for a single local prediction), and the horizontal stride (corresponding to the temporal density of predictions, i.e., we may find that we do not need predictions at the same rate as our 70 spectrogram frames per second).<br>
Of course, we went deeper than a single layer, but we can still compute the receptive field width (the temporal context used for a prediction) and the horizontal stride (the prediction rate) for a fully-convolutional network.</p>
<p>I experimented with three different architectures: A "vanilla" ConvNet, a small residual network, and a pretrained PANN.</p>
<h3>Vanilla</h3>
<p>The most simple architecture (based on "sparrow" from a <a href="http://ofai.at/~jan.schlueter/pubs/2017_eusipco.pdf" target="_blank">2017 EUSIPCO paper of ours</a>) was <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/audioclass/defaults.vars#L21" target="_blank">specified in our code</a> as follows:</p>
<pre><code>conv2d:64@3x3,bn2d,lrelu,
conv2d:64@3x3,bn2d,
pool2d:max@3x3,lrelu,
conv2d:128@3x3,bn2d,lrelu,
conv2d:128@3x3,bn2d,lrelu,
conv2d:128@17x3,bn2d,
pool2d:max@5x3,lrelu,
conv2d:1024@1x9,bn2d,lrelu,
dropout:0.5,conv2d:1024@1x1,bn2d,lrelu,
dropout:0.5,conv2d:C@1x1
</code></pre>
<p>Where</p>
<ul>
<li><code>conv2d:F@HxW</code> denotes an unpadded 2d convolution of <code>F</code> filters with height <code>H</code> (frequency bands) and width <code>W</code> (time frames),</li>
<li><code>pool2d:max@HxW</code> denotes non-overlapping unpadded 2d max pooling of height <code>H</code> and width <code>W</code>,</li>
<li><code>bn2d</code> denotes batch normalization,</li>
<li><code>lrelu</code> denotes the leaky rectifier (LReLU) of leakiness 0.01,</li>
<li>and <code>dropout:0.5</code> denotes ordinary (non-spatial) dropout of 50%.</li>
</ul>
<p>The last convolution is indicated with <code>C</code> channels, this will be replaced by 200, the number of classes. (The last two convolutions can be seen as two fully-connected layers of 1024 and 200 dimensions operating on each time step.)</p>
<p>In total, this has a receptive field of 79x103 (79 frequency bands, 103 spectrogram frames, about 1.5 seconds) and a temporal stride of 9 (~7.778 predictions per second). (Late insight: We will actually lose the highest frequency band due to the first pooling operation – we could have used a mel filterbank of 79 filters up to 9646.4 Hz instead for this model.) It clocks in at 3.6 mio parameters.</p>
<p>As our batch size was not that large (see the section on training), I later replaced batch normalization with group normalization, using 16 groups throughout the model. This improved performance a bit.</p>
<h3>ResNet</h3>
<p>As in <a href="https://github.com/f0k/birdclef2018" target="_blank">BirdCLEF 2018</a>, I extended the previous model into a simple residual network by replacing each of the first four convolutions with a residual block. Taken <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/experiments/train_kagglebirds2020.sh#L170" target="_blank">from our config file</a>:</p>
<pre><code>add[conv2d:64@3x3,bn2d,relu,
conv2d:64@3x3
|crop2d:2,conv2d:64@1x1],
add[bn2d,relu,conv2d:64@3x3,
bn2d,relu,conv2d:64@3x3
|crop2d:2],
pool2d:max@3x3,
add[bn2d,relu,conv2d:128@3x3,
bn2d,relu,conv2d:128@3x3
|crop2d:2,conv2d:128@1x1],
add[bn2d,relu,conv2d:128@3x3,
bn2d,relu,conv2d:128@3x3
|crop2d:2],
bn2d,relu,
conv2d:128@12x3,bn2d,
lrelu,pool2d:max@5x3,
conv2d:1024@1x9,bn2d,lrelu,
dropout:0.5,conv2d:1024@1x1,bn2d,lrelu,
dropout:0.5,conv2d:C@1x1
</code></pre>
<p>Where:</p>
<ul>
<li><code>add[A|B]</code> adds the output of two branches, and</li>
<li><code>crop2d:K</code> reduces the height and width by <code>K</code> pixels on each side, needed to match what is lost in the unpadded convolutions.</li>
</ul>
<p>The receptive field is at 80x119, only minimally larger, and the temporal stride stays at 9 frames. With 3.73 mio parameters it is a little bit larger than the vanilla model, it trains longer, and performs a bit better.</p>
<p>Again, it later turned out this worked better with group normalization.</p>
<h3>PANN</h3>
<p>For this challenge, it was explicitly allowed to use models pretrained on external data, as long as they were disclosed <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/158877" target="_blank">on the forum</a>. Among the models listed there is <a href="https://github.com/qiuqiangkong/audioset_tagging_cnn" target="_blank">Qiuqiang Kong's PANN repository</a> of CNNs trained on AudioSet, which is also used in <a href="https://www.kaggle.com/hidehisaarai1213/introduction-to-sound-event-detection" target="_blank">Hidehisa Arai's notebook</a>.</p>
<p>I downloaded <a href="https://zenodo.org/record/3987831" target="_blank">the weights</a> for his <a href="https://github.com/qiuqiangkong/audioset_tagging_cnn/blob/542c8c8/pytorch/models.py#L2542" target="_blank">Cnn14_16k</a> model and <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/pann/__init__.py#L58" target="_blank">reproduced its central part</a> to stack it onto my frontend.</p>
<p>This required some adaptations to the frontend to produce spectrograms compatible with what the PANN expects. I could mostly adopt <a href="https://github.com/qiuqiangkong/audioset_tagging_cnn/blob/542c8c8/pytorch/models.py#L2547-L2552" target="_blank">the settings from the PANN repository</a>, but as my inputs are sampled at 22050 Hz instead of 16000 Hz, I had to <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/pann/__init__.py#L92-L94" target="_blank">scale up the FFT window and hop size accordingly</a>. In addition, I needed to <a href="https://github.com/CPJKU/kagglebirds2020/blob/master/definitions/models/pann/__init__.py#L108" target="_blank">scale and shift</a> the result of my magnitude compression. I verified that remaining differences in the formulation of the mel filterbank did not affect predictions on an AudioSet example.</p>
<p>On top of Cnn14_16k's 6 convolutional blocks (ending in 2048 feature maps; using only the first 4 or 5 blocks resulted in worse performance), I added two new 1x1 convolutions of 1024 and 200 channels respectively, with a leaky rectifier in between and 50% ordinary dropout applied to each.</p>
<p>Overall, this model has a receptive field of 284x284 and a stride of 32x32, on spectrograms of 64 frequency bands and 100 frames per second (so it produces ~3 predictions per second, taking 2.84 seconds of context into account for a prediction, and its predictions still have 2 frequency bands that will be taken care of in the global pooling step). The receptive field of 284 frequency bands on a spectrogram of only 64 bands is achieved by excessive zero-padding (via padded convolutions).</p>
<p>This model is much larger (~78 mio parameters) and slower to train, but achieves notably better results. This is in part since it is pretrained on AudioSet (training from scratch instead of using the pretrained weights reduced F1 score on the validation set from 0.688 to 0.676 in an early experiment).</p>
<h2>Global pooling</h2>
<p>Up to here, our model produces a time series of logits for each class. The final step is to pool these logits into a single prediction per class for the full recording, such that we can compute (and minimize) the classification error wrt. the given global labels for the recording.</p>
<p>Reproducing the reasoning from my <a href="https://github.com/f0k/birdclef2018" target="_blank">BirdCLEF 2018 model</a>, there are two obvious ways and a third one in between:</p>
<ul>
<li><strong>mean pooling:</strong> This computes the average of the predictions over time, separately per class. The effect is that our prediction will be more confident for a bird that is detected many times during the recording compared to a bird that is detected once (even if very confidently).<br>
This is a bad idea: the label for a recording should reflect whether a bird is present, irrespective of how often it can be heard.<br>
Mean pooling will also distribute the gradient of the loss uniformly over all time points, training the network to predict each labeled species everywhere in the recording.</li>
<li><strong>max pooling:</strong> This computes the maximum over time per class, so a single confident local detection will result in a confident global detection, matching the meaning of the global annotations.<br>
However, max pooling will backpropagate the gradient of the loss only to the single time point where the prediction was most confident (separately for each species), pulling it up if the bird appears among the global labels, and pushing it down otherwise. This means a lot of computation for a very sparse update which ignores all the other vocalizations of the bird in the same recording.</li>
<li><strong>log-mean-exp pooling:</strong> As a compromise, \(\frac{1}{a} \log \left( \frac{1}{T} \sum_{t=1}^{T} \exp (a x_t) \right) \) allows to interpolate between taking the maximum (\(a \rightarrow \infty\)) and mean (\(a \rightarrow 0\)). With \(a=1\), the output depends on the largest couple of values, which is also where the gradient is distributed to. This is what I used for most models. I also tried learning it, and learning a separate \(a\) per species (since some species might vocalize densely, warranting a small \(a\), and others sparsely, requiring a large \(a\)), but this only improved my validation scores without helping on the challenge data.</li>
</ul>
<p>Note that pooling operates in the logit domain here, that is, before applying the sigmoid that turns predictions into probabilities. This does not make a difference in max-pooling, but otherwise I empirically found that this improves results.</p> |
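<p>For concreteness, the log-mean-exp pooling over time can be written as a small PyTorch module operating on logits (a sketch; in the actual code \(a\) can also be learned, optionally per class):</p>
<pre><code>import math
import torch
import torch.nn as nn

class LogMeanExpPool(nn.Module):
    """(1/a) * log(mean over time of exp(a * x)) on (batch, classes, time) logits;
    a -> 0 approaches mean pooling, a -> infinity approaches max pooling."""
    def __init__(self, a=1.0):
        super().__init__()
        self.a = a  # could also be an nn.Parameter, optionally one value per class

    def forward(self, x):
        return (torch.logsumexp(self.a * x, dim=-1) - math.log(x.shape[-1])) / self.a
</code></pre>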
Cornell Birdcall Identification | 8th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the competition host and kaggle teams for holding this competition and congratulations to all winners. And thanks <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> for providing resampled train data and <a href="https://www.kaggle.com/rohanrao" target="_blank">@rohanrao</a> for external data. Also thanks <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> for providing a good baseline and introduction to SED. I am excited to get my solo gold.</p>
<h4>Train</h4>
<p>The labels provided by the competition host are super noisy. There is even a 40-minute audio with a single label. I think how to use the secondary labels is vital in this competition. In my tests, using only primary labels predicts very few bird calls and produces many nocalls. </p>
<p><strong>Mix sound</strong></p>
<p>In order to generate more samples and get clips with more than one bird (with good labels), I chose to mix clips from audios with no secondary labels. I used an OR operation to generate the labels for the mixed clips and mixed the sounds like this:</p>
<pre><code>mixed_sound=sound1*uniform(0.8,1.2)+sound2*uniform(0.8,1.2)+sound3*uniform(0.8,1.2)
</code></pre>
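<p>A small sketch of the mixing together with the OR operation on the labels, assuming equally long waveforms and multi-hot label vectors (not the exact code used):</p>
<pre><code>import numpy as np

def mix_clips(clips, labels, low=0.8, high=1.2):
    """clips: list of equally long 1-D waveforms, labels: list of multi-hot vectors."""
    mixed = sum(np.random.uniform(low, high) * c for c in clips)
    label = np.clip(np.sum(labels, axis=0), 0, 1)   # logical OR on multi-hot labels
    return mixed, label
</code></pre>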
<p><strong>Generate pseudo strong labels</strong></p>
<p>I tried to generate pseudo strong labels with a SED method. I trained SED models on 10s clips and made predictions on whole audios. However, the generated pseudo labels were still noisy and unreliable; there were too many false positives. In the end, I used them to fix the labels of audios with secondary labels. </p>
<p><strong>Train Detail</strong></p>
<p>I used log-mel spectrograms as input and randomly cropped 5s clips from them. The mixed-sound augmentation was applied to audios with no secondary labels, and the labels of audios with secondary labels were adjusted according to the pseudo strong labels. </p>
<p>Building a good validation is very difficult in this competition, since there are no reliable nocall samples and a single clip can contain multiple bird calls. I chose to use a mechanism similar to the site_3 audios in the test set and calculated record-wise F1 scores. </p>
<p>Preprocessing: MelSpectrogram-> ToDB -> Normalize -> Resize</p>
<p>DataAugmentation: GaussianNoise, BackgroundNoise, Shift, Drop, Clipping,</p>
<p>Backbones: Resnest50, Regnety_040</p>
<p>Single fold Score: 0.654/0.591/CV 0.75</p>
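<p>The preprocessing chain listed above could be sketched with torchaudio roughly as follows; the FFT settings and the target size are illustrative guesses, not the settings actually used:</p>
<pre><code>import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=32000, n_fft=2048,
                                           hop_length=512, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

def preprocess(waveform, size=(256, 512)):
    x = to_db(mel(waveform))                        # (1, n_mels, time) in dB
    x = (x - x.mean()) / (x.std() + 1e-6)           # per-clip normalization
    x = torch.nn.functional.interpolate(x.unsqueeze(0), size=size,
                                        mode="bilinear", align_corners=False)
    return x.squeeze(0)                             # (1, H, W) image-like tensor
</code></pre>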
<h4>Ensemble and TTA</h4>
<p>In my best submission, the models are 5fold resnest50d(256x512) + 5fold resnest50d(320x768)+ 4fold regnety_040 (224x512).</p>
<p>I used two shifted versions of the sound clip as TTA, both with half the hop length. However, they made no difference on the LB or on my CV.</p>
<h4>What works</h4>
<p>Using secondary labels, record-wise F1 increases, but clip-wise F1 decreases. </p>
<p>Training longer makes the results stable. Training for 40 epochs can also get decent scores, but the score varies a lot. Training for 80 epochs results in a higher CV and stable scores on the LB.</p>
<h4>What doesn't work</h4>
<p>Simulating nocall sounds and training a binary classifier. There is a huge domain gap between the simulated clips and the real ones, so the classifier just finds a shortcut: its predictions were nearly all positive on the competition data. </p>
<p>It seems there is an additional mp3 compression applied to the test set. I tried to finetune on compressed audios. It improved the score somewhat on the public LB but not on the private LB.</p>
<p>Mixup made my results worse.</p>
<p>Using more mel bins improves the local CV, but hurts the LB.</p>
<p>Thank you!</p> |
Cornell Birdcall Identification | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you very much. Unfortunately, I don't know what a "model dashboard" is 😅 I am planning to share the code.</p>
<p>I would like to thank the hosts for this unique challenge. Congratulations to my teammate <a href="https://www.kaggle.com/canalici" target="_blank">@canalici</a> and to all competitors. It was indeed a very educational competition in the audio domain. Many thanks to <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> for guidance throughout the competition, to <a href="https://www.kaggle.com/doanquanvietnamca" target="_blank">@doanquanvietnamca</a> for the GRU-SED, and to Dmytro Karabash for ideas.</p>
<h2>Data Augmentation</h2>
<p>Models were trained on both the original and the extended datasets. Augmentations were applied at both the waveform and the mel-spectrogram level.</p>
<ul>
<li>Gaussian Noise</li>
<li>SpecAug</li>
</ul>
<h2>Modeling</h2>
<p>I modified the SED model (PANN's), replaced its inefficient backbone with a Noisy-Student EfficientNet, and further experimented with GRUs, LSTMs, and Transformers for temporal modeling. The final model had a CNN backbone -> a GRU layer -> an attention layer. </p>
<ul>
<li>EfficientNet-B4 (Noisy Student) </li>
<li>EfficientNet-B7 (Noisy Student)</li>
<li>EfficientNet-B7 (Noisy Student)</li>
</ul>
<p>To ensemble the different solutions, I removed the attention layer from each model, kept the pre-trained weights of the feature-extraction part, and re-trained a single attention layer on the features extracted from the 3 models listed above.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4470174%2F00b6195649463cefa402d7d0097e17d1%2FUntitled-3.png?generation=1600242638521318&alt=media" alt=""></p>
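<p>A minimal sketch of such a "CNN backbone -> GRU -> attention" SED head is shown below; it uses timm for the backbone as an assumption, and the hidden sizes are illustrative rather than the ones actually used:</p>
<pre><code>import torch
import torch.nn as nn
import timm

class SEDModel(nn.Module):
    """CNN backbone -> GRU over time -> attention pooling (rough sketch)."""
    def __init__(self, num_classes, backbone="tf_efficientnet_b4_ns", hidden=512):
        super().__init__()
        self.cnn = timm.create_model(backbone, pretrained=True, num_classes=0,
                                     global_pool="", in_chans=1)
        self.gru = nn.GRU(self.cnn.num_features, hidden,
                          batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, num_classes)   # attention weights per class
        self.cla = nn.Linear(2 * hidden, num_classes)   # frame-wise class scores

    def forward(self, spec):                  # spec: (batch, 1, mels, time)
        fmap = self.cnn(spec)                 # (batch, feat, mels', time')
        x = fmap.mean(dim=2).transpose(1, 2)  # pool frequency -> (batch, time', feat)
        x, _ = self.gru(x)
        att = torch.softmax(self.att(x), dim=1)   # weights over time
        frame = torch.sigmoid(self.cla(x))        # frame-wise probabilities
        clip = (att * frame).sum(dim=1)           # attention-pooled clip-level output
        return clip, frame
</code></pre>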
<h2>Training</h2>
<ul>
<li>Batch size of 32 for B4, and 8 for B7 models (single GPU)</li>
<li>No mixup :( </li>
<li>BCELoss </li>
<li>AdamW with Cosine Anneal</li>
<li>5 seconds of audio clips (501, 64) (scaling mel_bins into 224, 300 would really have helped, but very expensive)</li>
<li>Pre-training on primary labels, fine tuning with secondary labels</li>
</ul>
<p>For validation, we hand-labeled the no-calls in gt_birdclef2020_validation_data and excluded irrelevant species, which gave us a chance to test the algorithm in the wild. </p>
<h3>Possible further work</h3>
<p>Pre-trained models have proven to be valuable leverage in many knowledge-transfer tasks. It is tough not to over-fit the classifier, especially in this competition, where we had few audio clips with very noisy labels. Hidehisa Arai pointed out PANNs (among the largest pre-trained models in the audio domain) for this issue, but their CNN backbone is less potent than lighter alternatives. We used a strong CNN backbone pre-trained on a large corpus of images (Noisy Student EfficientNet) to overcome this. However, it is also possible to extract the mel-spectrogram encoder/decoder parts from well-known text-to-speech and speech-conversion algorithms (Tacotron, Glow TTS…) that were trained on a relatively larger corpus. It could be beneficial to adapt successfully pre-trained models from the audio domain, fine-tune them on all the bird data we have, and then apply a noisy-student training scheme. </p>
<p>The reason, I would say, is the extended pre-training corpus and smarter optimization. You can review this table for comparison. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4470174%2F58cbe63ae9d32ba8334831b8d4007969%2F11.png?generation=1600244025433433&alt=media" alt=""></p>
<p>My original plan was to stack Noisy-Student EfficientNet-L2 models, though I couldn't do it on a single GPU 😅</p>
<p>Although I had comparable scores on local validation, the GRU outperformed it on the LB. My idea was to handle longer sequences with a Linformer (self-attention with linear complexity). I think that's on me; with better initialization and better optimization, it could have outperformed the GRU or LSTM by a large margin, yet sadly I didn't have time to test all my ideas :( Thank you very much for the kind comment.</p>
Cornell Birdcall Identification | My solution for my first medal (silver) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cornell Birdcall Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and the competition host for bringing us this competition, and congrats to the winners and the other competitors. This was my first time working with audio data, and I sure learned a lot about DL due to the nature of this competition. The domain mismatch between the training and testing environments posed a good challenge that forced me to constantly question my understanding of data and DL.</p>
<p><strong>My solution:</strong><br>
This is my best solution, which I did not select for the final submission: <strong>0.626</strong><br>
It is an ensemble of the two best models from different folds with varying levels of augmentation</p>
<p>Feature extractor : <strong>Resnet18</strong><br>
Network architecture : Sound Event Detection(Not PANN)<br>
Method of training : Multi instance learning<br>
Key layer : <strong>Adaptive Auto pooling</strong>, presented <a href="https://arxiv.org/pdf/1804.10070.pdf" target="_blank">here</a><br>
Augmentation: extracted 20-second sound clips and mixed them with other randomly chosen audio samples; applied a low-pass filter (as low-frequency waves travel longer distances); added background noise extracted from the provided audio samples.<br>
Best score threshold : 0.30</p>
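<p>The adaptive auto-pooling from the linked paper boils down to a softmax-weighted average over time with a learnable per-class temperature; a minimal PyTorch sketch (not the exact layer used in this solution):</p>
<pre><code>import torch
import torch.nn as nn

class AutoPool(nn.Module):
    """Auto-pool: softmax(alpha * x)-weighted mean over time.
    alpha = 0 gives mean pooling, large alpha approaches max pooling."""
    def __init__(self, num_classes):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_classes))  # one temperature per class

    def forward(self, x):                       # x: (batch, time, classes)
        w = torch.softmax(self.alpha * x, dim=1)
        return (w * x).sum(dim=1)               # (batch, classes)
</code></pre>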
<p><strong>Things I should have focused on:</strong><br>
I completely ignored the fact that additional data was available and posted in the competition; this would have given me some more benefit. I was not aware of domain adaptation techniques and hope to learn them from others' solutions. I also never moved on from the ResNet18 base model: for some reason I kept working on the data and neglected to change the extractor when I had the time to experiment with it.</p>
<p>code: <a href="https://github.com/yuvaramsingh94/Cornell-Birdcall-Identification-kaggle-solution" target="_blank">github</a></p>
<p>The code needs some more cleaning, and I will add a detailed README. </p>
<p>Thanks to <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> and others for constantly sharing ideas and findings to help others. <br>
On to the next competitions :)</p> |
Corporación Favorita Grocery Sales Forecasting | 12th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As far as I remember, there was almost no gain from it.</p>
<p>Code with main parts of the solution is here
<a href="https://github.com/antklen/kaggle_favorita-grocery-sales">https://github.com/antklen/kaggle_favorita-grocery-sales</a>. </p>
<h2>Feature engineering</h2>
<p>I used the same features as in public scripts, namely</p>
<ul>
<li>mean sales during several last periods for item/store pairs</li>
<li>mean sales during several last periods for item/store pairs for each day of week</li>
<li>mean onpromotion during several last periods for item/store pairs</li>
</ul>
<p>But as for the time windows, I used more history - up to 1 year.</p>
<p>plus some other grouping averages</p>
<ul>
<li>mean sales during several last periods for item/store pairs for each day of month</li>
<li>mean number of zero sales days during several last periods for item/store pairs</li>
<li>mean sales during several last periods for store/item family</li>
<li>mean sales during several last periods for store/item class</li>
<li>mean sales during several last periods for item/store city</li>
<li>mean sales during several last periods for item/store stat</li>
<li>mean sales during several last periods for item/store type</li>
<li>mean sales during several last periods for item/store cluster</li>
<li>mean sales during several last periods, grouped by item only</li>
<li>mean number of zero sales days during several last periods, grouped by store only</li>
</ul>
<p>plus some differences between mean features with different time windows (for example, mean_7days - mean_28days)</p>
<p>plus linear regression on each item/store pair time series (for incorporating possible time trend)</p>
<p>plus mean promo for all 16 days (mean of promo_0, promo_1, ... in Ceshine Lee's script).</p>
<p>as for categorical features (store and item info - family, type, etc.)</p>
<ul>
<li>standard label encoding for lightgbm</li>
<li>embeddings for neural network</li>
</ul>
<p>To be honest, the base mean, mean-by-day-of-week and promo features do almost all the work; my additional features did not add much value.</p>
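<p>For reference, this kind of window-mean feature is typically built from a (store, item) x date pivot of unit sales, in the style of the public kernels; the window lengths below are illustrative:</p>
<pre><code>import pandas as pd

def get_timespan(df, t_base, minus, periods, freq="D"):
    """df: pivot with one row per (store, item) pair and one column per date."""
    return df[pd.date_range(t_base - pd.Timedelta(days=minus), periods=periods, freq=freq)]

def make_mean_features(df_sales, df_promo, t_base):
    X = pd.DataFrame(index=df_sales.index)
    for w in (7, 14, 28, 56, 112, 365):                   # look back up to 1 year
        X["mean_%d" % w] = get_timespan(df_sales, t_base, w, w).mean(axis=1)
        X["promo_mean_%d" % w] = get_timespan(df_promo, t_base, w, w).mean(axis=1)
    for dow in range(7):                                  # day-of-week means, last 20 weeks
        X["dow%d_mean_20w" % dow] = get_timespan(
            df_sales, t_base, 140 - dow, 20, freq="7D").mean(axis=1)
    X["mean_7_minus_28"] = X["mean_7"] - X["mean_28"]     # difference feature
    return X
</code></pre>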
<h2>Onpromotion issue</h2>
<p>As for the problem with missing onpromotion data, I didn't come up with any clever solution. But using as few promo-based features as possible helped on the public LB and gave a more stable relation between validation and LB (though still not a very good one). At first I made some extra promo-based features and got stuck around the public scripts' scores, but when I removed them, I was able to climb up the leaderboard.</p>
<h2>Training and validation scheme</h2>
<p>For validation, nothing fancy: I used the same scheme as in Ceshine Lee's script - 26.07-10.08 for validation, several previous 16-day periods for training. Maybe it would have been useful to have several periods for validation (Giba and CPMP reported that it helped), but we had a lot of data here and training took a lot of time, so I decided to stay with only one period.</p>
<p>What helped to improve the score was simply taking more data for training. Switching from 6 16-day periods to 20-25 periods helped. During the last several days I was running the same models with even longer training histories - 52, 75 and 104 periods (1 year, 1.5 years and 2 years of training data). Taking more history consistently improved the validation score; taking too much (104 periods) made the public score worse, but, really surprisingly, on the private LB even 52 periods was too much - 25 periods was enough, and I have the best private LB score on single models with 25 periods.</p>
<h2>Models</h2>
<p>I used LightGBM and a neural network with categorical embeddings on the same set of features.</p>
<p>Also I used two different approaches:</p>
<ol>
<li>Train one general model for all lags (very memory-consuming=)</li>
<li>Train a separate model for each lag, but in a slightly different way than in the public scripts. I didn't use the same set of features for all lags: for each lag I constructed a slightly different set of features on-the-fly (see the sketch after this list). The motivation is that it is probably not a good idea to feed the model all the day-of-week mean features when we train a model for only one day of the week. For example, if we train a model for Monday, why add the features for all the other weekdays? It probably just adds noise.</li>
</ol>
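<p>A skeletal version of the per-lag scheme (approach 2 above); <code>select_features_for_lag</code> is a hypothetical helper that would keep only the features relevant for that horizon, e.g. the matching day-of-week means:</p>
<pre><code>import lightgbm as lgb

def train_per_lag_models(X_train, y_train, X_val, y_val, weights, params, n_lags=16):
    """y_train / y_val have shape (n_samples, n_lags); one model per horizon."""
    models = []
    for lag in range(n_lags):
        cols = select_features_for_lag(X_train.columns, lag)   # hypothetical helper
        dtrain = lgb.Dataset(X_train[cols], label=y_train[:, lag], weight=weights)
        dval = lgb.Dataset(X_val[cols], label=y_val[:, lag], reference=dtrain)
        model = lgb.train(params, dtrain, num_boost_round=1000, valid_sets=[dval],
                          callbacks=[lgb.early_stopping(50)])
        models.append((cols, model))
    return models
</code></pre>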
<p>Switching to these two schemes from the scheme used in the public scripts really improved my score and led me to the top.</p>
<h2>Final models</h2>
<p>So in the end I had 4 types of models (LightGBM or neural network + general model for all lags or separate model for each lag), trained on several training sets of different lengths.
My final models were simple averages of some of these models.</p>
<p>Averaging models helped a lot here. Public and private score of my single models have very bad correlation. The best single public model (0.507) has only 0.519 on private, while best private model (0.514) had 0.510 on public. But my blending submissions are much more consistent. All my last blending submissions (about 10 submissions) are in the interval [0.506, 0.508] on public and in the interval [0.513, 0.516] on private.</p>
<p>Thanks and congrats on one more step towards Grandmaster=)</p> |
Corporación Favorita Grocery Sales Forecasting | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My intuition is that perishable goods like food might have a very different pattern than non-perishable goods. My economics training tells me that people are probably more sensitive to price (hence promotion) for perishable goods, as they observe their prices more frequently. A lot of people can probably tell how much the meat they bought last weekend cost, but wouldn't be able to remember how much the soap they bought last time was. Also, for non-perishable goods, I used a longer look-back horizon (64 days instead of 28). </p>
<p>That being said, I went back to check the private scores of those two: my single-model approach (perishable as a feature) actually performed a little better (both are 0.517) on private, but 0.002 worse on public. So I might actually have over-fitted a little. However, I still believe there might be some merit in training them separately, though validation might be tricky. </p>
<h2>Feature Engineering</h2>
<p>After reading a couple of solutions posted, I think this is my biggest trump card compared to the others. I started by thinking about my own grocery shopping habit.</p>
<ul>
<li>I tends to shop on a weekend. (timing)</li>
<li>I will stock up if things I want is on promotion. (on promotion)</li>
<li>If I already have enough of something at home, it's much less likely for me to buy more even if it is on discount. (historical sales)</li>
<li>If I have enough beef at home, it's much less likely for me to buy more any other type of meat as well (sales by class)</li>
<li>If beef is on discount, then I am less likely to buy chicken (promotion by class)</li>
</ul>
<p>The key insight is that most people maintain a certain level of stock for a particular group of goods at home, not for individual goods. I think most people missed this. In classic economic theory, there is a substitution effect among goods. I believe including these class-based features boosted my model by a large margin. I added these features at a very early stage, so I don't have a good estimate of how much impact they had. </p>
<p>My formal education is in economics which definitely helped, but at the same time, I think these features are pretty generic with limited domain expertise involved. </p>
<h2>Model</h2>
<p>I used a sliding-window approach like everyone else. </p>
<p>My base model is a simple DNN. It starts with a batch norm layer, followed by 6 dense layers with a 0.3 dropout layer between every two. Activations are 'elu', with the only exception of the last layer, which uses 'relu' to force the output to be positive.</p>
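<p>A rough reconstruction of this network in Keras; the layer widths are guesses, since the description only fixes the depth, the dropout rate and the activations:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

def build_dnn(n_features, n_outputs=16, width=512):
    model = tf.keras.Sequential()
    model.add(layers.BatchNormalization(input_shape=(n_features,)))
    for i in range(5):
        model.add(layers.Dense(width, activation="elu"))
        if i % 2 == 1:                # a 0.3 dropout layer between every two dense layers
            model.add(layers.Dropout(0.3))
    model.add(layers.Dense(n_outputs, activation="relu"))  # relu keeps predictions non-negative
    # training on log1p(sales) with an MSE loss approximates the RMSLE objective
    model.compile(optimizer="adam", loss="mse")
    return model
</code></pre>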
<p>I started by training the model on the last 2 years of data, and after a lot of tuning the best public score I managed to achieve with this was 0.510. </p>
<p>I was stuck at 0.510 for a long time. Then, because I had been training with RMSLE without weighting, and because my intuition also told me the behaviour of perishable and non-perishable goods might be different, I decided to train the perishable and non-perishable goods separately. Doing this, I managed to improve the public score by another 0.001.</p>
<p>My final model is an ensemble of the above two variants and AhmetErdem's public kernel. This improved the public score to 0.507.</p>
<h2>Validation</h2>
<p>Initially, I used the last 16 days across all series in the train dataset, and the performance was quite consistent with the public score.</p>
<p>However, after I started training the perishable and non-perishable goods separately, I noticed that further improving my local CV actually decreased my public score, so I added another 16 days to my validation set, which turned out to be quite helpful for training the perishable goods. </p>
Corporación Favorita Grocery Sales Forecasting | 18th Place Solution - Simple Improvement of Public Kernel | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My approach is largely based on this kernel. <br>
<a href="https://www.kaggle.com/vrtjso/lgbm-one-step-ahead">https://www.kaggle.com/vrtjso/lgbm-one-step-ahead</a></p>
<p>You can reach this rank just by improving the kernel.</p>
<ul>
<li>Private Score : 0.523 -> 0.516</li>
<li>Public Score : 0.515 -> 0.508</li>
</ul>
<h2>Feature Engineering</h2>
<p>In the kernel, most features were made from the time series of each combination of store and item. I added some variations of the averaging features. Furthermore, I made similar features for stores and items separately, by aggregating unit_sales and transaction data for each store and each item. The additional features improved the public LB by about 0.003.</p>
<h2>Effective Use of Data</h2>
<p>In this competition, we have to predict sales for the 16 days after the given dataset. When we predict the farthest day (16 days ahead) from time series features, we can only use data up to 16 days before the predicted date. The kernel used data up to 3 weeks before the prediction target period, considering the above constraint and day-of-week characteristics.
However, when predicting sales on closer dates, such as the next day or ten days ahead, more recent data can be used. So I used data up to 1 week before the prediction date for predicting days 1-7, and data up to 2 weeks before for predicting days 8-14. This improved the public LB score by about 0.002 (but probably did not affect the private LB as much, because the LB is split by target dates).</p>
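<p>A sketch of one literal reading of this scheme (I assume days 15-16 keep the kernel's 3-week cutoff; make_features is a hypothetical helper that builds features from data up to a cutoff date):</p>
<pre><code># Sketch of the idea above: features for nearer horizons may use fresher data.
import datetime

def feature_cutoff(pred_date):
    """Return the last date whose data may be used when predicting pred_date."""
    test_start = datetime.date(2017, 8, 16)
    day_ahead = (pred_date - test_start).days + 1                  # 1..16
    weeks_back = 1 if day_ahead <= 7 else 2 if day_ahead <= 14 else 3
    return pred_date - datetime.timedelta(weeks=weeks_back)

for d in range(16):
    pred_date = datetime.date(2017, 8, 16) + datetime.timedelta(days=d)
    cutoff = feature_cutoff(pred_date)
    # X_d = make_features(cutoff)   # build features and train/predict the model for this day
</code></pre>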
<h2>Extending Training Dataset for Predicting Test Set</h2>
<p>For validation, the kernel uses the period from 26th July to 10th August as the validation set, and the data before the validation set is used for training the models that predict the validation set and the test set.
This validation process is very important to confirm whether or not a model is overfitting. However, when predicting the test set, it is not ideal to use only the data prepared for predicting the validation set, because the training dates then end far from the prediction dates. So after the validation process I retrained the model on training data extended by 2 weeks, and predicted the test set with that model. This improved the public LB score by about 0.002.</p>
<p>Kaggle kernels always give me useful information and inspiration.
Thanks to the fantastic kernel authors and all competitors!</p>
Corporación Favorita Grocery Sales Forecasting | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In fact, we did a two-stage linear blending, and all weights were decided based on both CV and public score. <br>
submission = 0.7 * (0.6 * model_1 + 0.4 * model_2) + 0.3 * (0.6 * model_3 + 0.4 * model_4)</p>
<p>Congrats to all the winning teams and to the new grandmaster sjv.
Thanks to Kaggle for hosting and Favorita for sponsoring this great competition.
Special thanks to @sjv, @senkin13, @tunguz, @ceshine: we built our models based on your kernels.</p>
<ul>
<li><a href="https://github.com/sjvasquez/web-traffic-forecasting/blob/master/cnn.py">https://github.com/sjvasquez/web-traffic-forecasting/blob/master/cnn.py</a></li>
<li><a href="https://www.kaggle.com/senkin13/lstm-starter/code">https://www.kaggle.com/senkin13/lstm-starter/code</a></li>
<li><a href="https://www.kaggle.com/tunguz/lgbm-one-step-ahead-lb-0-513">https://www.kaggle.com/tunguz/lgbm-one-step-ahead-lb-0-513</a> </li>
<li><a href="https://www.kaggle.com/ceshine/lgbm-starter">https://www.kaggle.com/ceshine/lgbm-starter</a></li>
</ul>
<p>Like the Rossmann competition, the private leaderboard shook up again this time. I think luck was finally on our side. </p>
<hr>
<h2>Sample Selection</h2>
<p>We used only 2017 data to extract features and construct samples.</p>
<p>Train data: 20170531 - 20170719 or 20170614 - 20170719; different models were trained with different datasets.</p>
<p>Validation: 20170726 - 20170810</p>
<p>In fact, we tried to use more data but failed. The gap between the public and private leaderboards is not very stable. If we train a single model on the data for all 16 days, the gap gets smaller (0.002-0.003).</p>
<h2>Preprocessing</h2>
<p>We just filled missing or negative promotion and target values with 0.</p>
<h2>Feature Engineering</h2>
<ol>
<li>basic features
<ul><li>category features: store, item, family, class, cluster...</li>
<li>promotion</li>
<li>dayofweek(only for model 3)</li></ul></li>
<li>statistical features: we compute statistics of several targets for different keys over different time windows (see the sketch after this list)
<ul><li>time windows
<ul><li>nearest days: [1,3,5,7,14,30,60,140]</li>
<li>equal time windows: [1] * 16, [7] * 20...</li></ul></li>
<li>key:store x item, item, store x class</li>
<li>target: promotion, unit_sales, zeros</li>
<li>method
<ul><li>mean, median, max, min, std</li>
<li>days since last appearance</li>
<li>difference of mean value between adjacent time windows(only for equal time windows)</li></ul></li></ul></li>
<li>useless features
<ul><li>holidays</li>
<li>other keys such as: cluster x item, store x family...</li></ul></li>
</ol>
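<p>Not the team's code, but a minimal pandas sketch of these windowed statistics, assuming a wide sales DataFrame with one row per (store, item) and one Timestamp column per date:</p>
<pre><code># Sketch of the windowed statistics listed above; `sales` layout is an assumption.
import pandas as pd

def window_stats(sales, end_date, days, prefix):
    cols = pd.date_range(end=end_date, periods=days)   # the last `days` dates
    block = sales[cols]
    return pd.DataFrame({
        f'{prefix}_mean_{days}': block.mean(axis=1),
        f'{prefix}_median_{days}': block.median(axis=1),
        f'{prefix}_max_{days}': block.max(axis=1),
        f'{prefix}_min_{days}': block.min(axis=1),
        f'{prefix}_std_{days}': block.std(axis=1),
        f'{prefix}_zero_frac_{days}': (block == 0).mean(axis=1),
    })

# the "nearest days" windows mentioned above
features = pd.concat(
    [window_stats(sales, '2017-07-25', d, 'sales') for d in [1, 3, 5, 7, 14, 30, 60, 140]],
    axis=1,
)
</code></pre>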
<h2>Single Model</h2>
<ul>
<li>model_1 : 0.506 / 0.511 , 16 lgb models trained for each day <a href="https://www.kaggle.com/shixw125/1st-place-lgb-model-public-0-506-private-0-511">source code</a> </li>
<li>model_2 : 0.507 / 0.513 , 16 nn models trained for each day <a href="https://www.kaggle.com/shixw125/1st-place-nn-model-public-0-507-private-0-513">source code</a></li>
<li>model_3 : 0.512 / 0.515, 1 lgb model for 16 days with almost the same features as model_1</li>
<li>model_4 : 0.517 / 0.519, 1 nn model based on @sjv's code</li>
</ul>
<h2>Ensemble</h2>
<p>Stacking didn't work well this time; our best model is a linear blend of 4 single models.</p>
<p>final submission = 0.42*model_1 + 0.28 * model_2 + 0.18 * model_3 + 0.12 * model_4 </p>
<p>public = 0.504 , private = 0.509</p> |
Corporación Favorita Grocery Sales Forecasting | 2nd place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot!</p>
<p>First, we want to thank @sjv. We learned a lot from his previous sharing.</p>
<p>Second, thanks to my team members for staying with me, discussing, and working hard together until the last minute.</p>
<p>Also thanks to @CPMP; his success on WTF with a CNN encouraged me to pursue a breakthrough with a deep learning model.</p>
<p>We won't go over the basic WaveNet model architecture.</p>
<p>We will just share our understanding and our experience with this model. We think the model, and the way we train it, is naturally robust.</p>
<p>We feed the data in mini-batches of 128 randomly sampled sequences, and then randomly choose the start of the decode/target dates. So we could say the model sees somewhat different data at each training iteration, because the total dataset is around 170,000 sequences x 365 days. My feeling is that a WaveNet trained in this way handles overfitting well.</p>
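<p>A rough sketch of this sampling scheme as I read it (array and parameter names are hypothetical):</p>
<pre><code># 128 random series per batch, each with a random start for the 16-day decode window.
# `series_array` is a hypothetical (n_series, n_days) array of log1p sales.
import numpy as np

def sample_batch(series_array, encode_len=365, decode_len=16, batch_size=128, rng=np.random):
    n_series, n_days = series_array.shape
    rows = rng.randint(0, n_series, size=batch_size)
    starts = rng.randint(encode_len, n_days - decode_len + 1, size=batch_size)
    enc = np.stack([series_array[r, s - encode_len:s] for r, s in zip(rows, starts)])
    dec = np.stack([series_array[r, s:s + decode_len] for r, s in zip(rows, starts)])
    return enc, dec   # encoder input and decoder target
</code></pre>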
<p>During training, we set the learning rate to 0.0005, with learning rate decay.</p>
<p>In fact, the training loss is hard to decrease further after a certain number of iterations.</p>
<p>Our validation approach is step by step: we keep the last 16 days as validation data. </p>
<p>Triggered by the WTF top solutions, we think the shifted unit_sales and onpromotion data can be used to better capture quarterly and yearly patterns. </p>
<p>The way we feed data gives us robustness, but we think there is a small drawback: the model's predictions will slightly 'overfit' the last several hundred or thousand mini-batches. So the result fluctuates a little and is not very stable. This experience is also mentioned in the top WTF solution, and CPMP mentioned that with different CV settings he got different performance.</p>
<p>The best remedy we thought of is averaging. The top WTF solution averaged the predictions of 30 models. We trained the model and, after the first 5000 mini-batches, generated a prediction every 2000 mini-batches. </p>
<p>After a certain number of iterations, averaging 5 such checkpoint predictions gives a better result than a single model. But different choices of 5 or 10 checkpoints still show small performance differences on local CV. </p>
<p>So finally we chose EMA to average the predictions. The EMA parameter was chosen from local CV. </p>
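<p>A minimal sketch of EMA over checkpoint predictions (the decay value here is a placeholder; the team picked theirs from local CV):</p>
<pre><code># EMA blending over a list of checkpoint predictions, oldest first.
import numpy as np

def ema_predictions(checkpoint_preds, decay=0.7):
    ema = np.asarray(checkpoint_preds[0], dtype=float)
    for pred in checkpoint_preds[1:]:
        ema = decay * ema + (1.0 - decay) * np.asarray(pred, dtype=float)
    return ema

# checkpoint_preds = [preds_after_5000_steps, preds_after_7000_steps, ...]
</code></pre>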
<p>New item processing: it might only have a minor impact, but because we were hanging on the edge of the gold zone, I finally suggested doing some work on new items, and all team members agreed to bet on it for the gold medal.</p>
<p>After analysing the data, we found that on the dates 2015-07-01, 2015-09-01, 2015-10-01 and 2016-11-27, lots of items start to have unit_sales records. But the hard problem is that we do not know, in the test period, on which day most of the new items start to have unit_sales, and in which stores. Checking the onpromotion information, all promotions involving new items fall only on 08-30 and 08-31. Reviewing the data on 2015-10-01 or 2016-11-27, those 'old new items' were shown on promotion on that day, not before it. So we bet that most new items would start to have unit_sales on 08-30 and 08-31, and that before then only a few new items would be sold. We did not use a model to predict them, to prevent overfitting; we only used some fixed values validated by loss calculation on those 'old new items'.</p>
<p>We also used a cap at the maximum observed value, among other things. </p>
<p>Finally, thanks to our team member work together to get this very good result. </p>
<p>Update: the new item prediction turned out not to be good. Checked by late submission, the score dropped slightly (less than 0.001) when I added the new item prediction.</p>
<p>Thanks. My teammates are awesome, but I still think it is a pity that we missed the chance to cooperate. I have noticed you since the summer too. You always perform well!</p>
<p>Thanks a lot!</p> |
Corporación Favorita Grocery Sales Forecasting | 34th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://github.com/btrotta/kaggle-favorita">https://github.com/btrotta/kaggle-favorita</a></p> |
Corporación Favorita Grocery Sales Forecasting | 3rd place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the members of the #1 and #2 teams, you did a great job here. And my deepest condolences to the teams that suffered in this shake-up: I feel for you, guys. </p>
<p>My teammate, <a href="https://www.kaggle.com/divrikwicky">Ahmet Erdem</a>, will hopefully describe his approach in more detail later, when time permits, but as a short summary: he used gradient boosting (LGBM) with an original set of features, and tried to capture the monthly dynamics by using sales on the same days of different months as the train targets.</p>
<p>My part of the final solution consists mostly of neural networks, although I also used a LGBM model with a low weight in the final ensemble.</p>
<p>When it comes to time series, I strongly belive that the two most important things are validation and bagging. By correct validation I mean simulating train/test (train/unknown future) split to avoid any kind of future information leaks. Thus, the validation set should include a time period that is not present in train in any way, even with different timeseries, because these time series could correlate with each other. </p>
<p>So I used the following train/validation split:
for the training set, I took a history of 80 consecutive sales days for each (item, store) pair, and used the next 16 sales days as the train-set target. The split between training history and training target was always from Tuesday to Wednesday, as in the original train/test split, in order to capture the weekly dynamics.
After training my models on this set and estimating an optimal number of epochs to run, I retrained them on the latest data available (i.e. the final days of the original training set) in order to use the most up-to-date information.</p>
<p>About bagging : our final result includes models, trained on 10 runs each. The weights initializations are different each time, so results are different as well, and averaging them helps to improve the solution, particularly when dealing with the uncertain future.</p>
<p>Another part of my bagging was a bit unusual. After each training epoch (including the very first) I predicted the targets, and then averaged these predictions. I've read about averaging the last few epochs, but never heard about averaging everything, even the very first, underfitted epoch. Yet my validation consistently showed that it does improve the results.</p>
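<p>A minimal sketch of this 'average every epoch' idea, assuming a Keras-style model (the epoch count is a placeholder):</p>
<pre><code># Average the test predictions made after every training epoch, including the first.
import numpy as np

def fit_with_prediction_averaging(model, X_train, y_train, X_test, n_epochs=20):
    running_sum = None
    for epoch in range(n_epochs):
        model.fit(X_train, y_train, epochs=1, verbose=0)   # one epoch at a time
        pred = model.predict(X_test, verbose=0)
        running_sum = pred if running_sum is None else running_sum + pred
    return running_sum / n_epochs                          # average over all epochs
</code></pre>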
<p>As for the model architectures, I believe they are relatively less important here. I had an ensemble of 3 NNs: CNN, LSTM and GRU, with almost equal weights. As a single model, the GRU shows slightly better results than the rest, but the ensemble is even better, as usual.</p>
<p>All of them have 3 top (convolutional or recurrent) layers, 2 or 3 bottom fully-connected layers and embedding layers for categorical features. I tried different tweaks like skip-connections, concatenating CNN layers with different convolution kernel sizes and so on, but the gains were negligible. The only architectural tweak that worked better was using dilated convolutions: my CNN uses convolutions with a dilation of 2 on the first layer, 8 on the second and 32 on the third (and no pooling). But an ordinary plain CNN without dilated convolutions worked just a bit worse.</p>
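<p>Not the author's code, but a minimal Keras sketch of a 3-layer dilated CNN with those dilation rates (filter counts, kernel size and the causal padding are my own assumptions):</p>
<pre><code># 3 dilated Conv1D layers (dilation 2, 8, 32), no pooling, dense layers on top.
from tensorflow.keras import layers, models

def build_dilated_cnn(seq_len, n_channels, horizon=16):
    inp = layers.Input(shape=(seq_len, n_channels))
    x = inp
    for rate in (2, 8, 32):                          # dilation 2, then 8, then 32
        x = layers.Conv1D(32, kernel_size=3, dilation_rate=rate,
                          padding='causal', activation='relu')(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation='relu')(x)       # bottom fully-connected layers
    out = layers.Dense(horizon)(x)                   # 16 future days
    return models.Model(inp, out)
</code></pre>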
<p>What did not work:
The main disappointment was the poor result of a version with updated promotion information for the train set. I used a relatively sophisticated method to fill the onpromotion status for the dates of the training period without sales, and it showed very promising results on the public leaderboard on the last day of this competition, boosting our score from .501 to .499. And it failed miserably on the private, so we remained in the prize area only because our other submission was more conservative and used a lower weight for these adjusted promotions. So next time I'll try to be more careful with last-day "improvements".</p>
<p>Thank you for your interest. Unfortunately, at this point I don't know if we have the legal right to share the code. I'll consult the organizers about this. If we do, I will share it and post a link here.</p>
Corporación Favorita Grocery Sales Forecasting | 443 steps jump in one night! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the winners of this competition, and thanks Kaggle and Favorita for hosting it. My special thanks to @Ceshine and @Lingzhi for sharing their codes.</p>
<p>My code was based on Ceshine's code; I added more features and played with it a lot trying to get a better public LB. I had been stuck at 0.513 on the public LB after several tries, and did not have enough time to try other solutions. Fortunately, I jumped 443 places on the private LB, which made me smile today! </p>
<p>Here I share some points that I believe to be the reasons for my improved performance on the test set:
1- My model is an LGBM model with added features. The latest feature count was near 100, focusing mostly on daily, weekly and monthly features, plus categorical features focusing on items and stores.</p>
<p>2- Large train size: I increased the train data set to 19 weeks starting from 3/15/2017, and the validation set was the last possible week, starting 7/26/2017. This gave a train set almost 3 times larger than the public kernels, and the increased number of data points most likely helped me escape the 'curse of dimensionality'. I think there was still room to increase the train data size; in particular, the available data between 7/26 and 8/15 was not used properly. This could be handled better.</p>
<p>3- Forgetting factor (decay): I gave the early weeks of train data a lower weight compared to the later weeks. This means the models are trained to depend more on the latest customer behavior rather than on old patterns.
I didn't manage to spend time on fine-tuning the LGBM model; I wish I had!</p>
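<p>A minimal sketch of such a forgetting factor with LightGBM sample weights (the decay rate and column names are placeholders):</p>
<pre><code># Down-weight older weeks via sample weights; `train_df`, `X_train`, `y_train`
# and the 0.95 decay rate are placeholders, and 'date' is assumed to be a datetime column.
import lightgbm as lgb

weeks_before_end = (train_df['date'].max() - train_df['date']).dt.days // 7
sample_weight = 0.95 ** weeks_before_end              # older weeks get smaller weights

dtrain = lgb.Dataset(X_train, label=y_train, weight=sample_weight)
model = lgb.train({'objective': 'regression_l2'}, dtrain, num_boost_round=500)
</code></pre>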
<p>I don't have a separate submission with decay as the only difference, so unfortunately I cannot give a quantitative answer to your question. However, locally I observed an improvement on the validation set (~0.005) when I applied it. I had a gut feeling that it was useful.</p>
Corporación Favorita Grocery Sales Forecasting | 4th-Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Shakeups are expected in certain competitions, and it's generally possible to identify these before entering the competition. If the data was prepared responsibly and the train/test mismatch was handled more carefully, this competition would have had a much more reasonable shakeup. You should not dismiss the shakeup in this competition as an intrinsic part of machine learning competitions -- it was the result of negligence on Kaggle's behalf (though this is not always the case). I don't like to be this critical, but I haven't seen any evidence that Kaggle takes this issue seriously. Frankly, respected academics and AI/ML researchers don't hold my Kaggle performances in high regard at all, particularly because Kaggle results are extremely noisy signals of actual ML competence. Competitions like this one only further this poor reputation.</p>
<p>Glad to hear my code was useful. I used a bidirectional LSTM on the 16 future onpromotion values and fed the output states to the decoder. I think this has the same effect as what you are describing. Unfortunately, I think it makes the model more sensitive to onpromotion values, making the effect of the train bias more noticeable.</p>
<p>First off, congrats to the winners and thanks to all those who contributed to the forum discussions. I'm definitely disappointed with the result, but I might as well share my solution anyways. I'll only discuss my contribution to my team's solution, which is a single model which scores .499 on the public leaderboard. </p>
<h3>Model Architecture</h3>
<p>We framed forecasting as a sequence to sequence modeling problem, where the encoder reads in a portion of a time series and the decoder sequentially emits estimates of the subsequent 16 values. The encoder and decoder do not share parameters, but they are both similarly parameterized as a stack of ~30 dilated, causal convolutions. Dilated convolutions allowed the receptive field to span the entirety of the 4+ year train period, while still maintaining a small number of parameters (~1 million). To allow the decoder to use future onpromotion values, it was also augmented with a bidirectional LSTM which encoded information from prior and future onpromotion data. The model struggled to accurately predict zero, so we modified the architecture to additionally emit a bernoulli parameter at each timestep (corresponding to the probability that the output was zero) and the final predictions were adjusted accordingly. Weighted RMSE of the log-transformed data was directly minimized.</p>
<h3>Input Representation</h3>
<p>The input to the network consisted of the raw time series values, embeddings of the categorical variables, and some manually engineered features. The manual features included lags, diffs, rolling statistics, date features, and conditioning time series (i.e. average sales for a given product/store/etc.). Promotion data required some special handling, which I'll describe later.</p>
<h3>Validation</h3>
<p>5% of the time series were held out, and the model was evaluated on random periods from the last 365 training days of these held out series. The purpose of this was to prevent the model from overfitting a validation set which was biased toward any sort of weekly or monthly trends.</p>
<h3>Promotion Data</h3>
<p>As discussed in the forums, if you naively impute missing onpromotion values with 0, the sales distribution conditioned on onpromotion values will differ in the train and test sets. I didn't find a way to remedy this, but thankfully my teammates were able to come up with a clever solution which improved my score by about .004. The missing values were imputed randomly: 1 with probability p and 0 otherwise. Determining p was a rather tedious task, since we had no data to model it and we had to resort to trial and error leaderboard submissions. We ended up setting p separately for each day, and it was computed as the mean onpromotion rate for each day, scaled by a learnable factor estimated by stochastic leaderboard descent (leaderboard probing...). All credit goes to my teammates for this, and they can probably explain it better than I can.</p>
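<p>Not the authors' code, but a minimal pandas sketch of this random imputation scheme (the scaling factor stands for the leaderboard-tuned part):</p>
<pre><code># Missing onpromotion values are set to 1 with probability p and 0 otherwise,
# where p is a per-day rate scaled by a tuned factor. `df` is assumed to have
# 'date' and 'onpromotion' (0/1 with NaN for missing) columns.
import numpy as np

def impute_onpromotion(df, scale=1.0, rng=np.random):
    out = df.copy()
    daily_rate = out.groupby('date')['onpromotion'].transform('mean')  # mean of observed values
    p = (daily_rate * scale).clip(0, 1)
    missing = out['onpromotion'].isna()
    out.loc[missing, 'onpromotion'] = (rng.rand(missing.sum()) < p[missing]).astype(int)
    return out
</code></pre>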
<h3>Source Code</h3>
<p>I normally share my source code, but since the core model is nearly identical to one I've already shared, I'll just refer you to that instead: <a href="https://github.com/sjvasquez/web-traffic-forecasting">https://github.com/sjvasquez/web-traffic-forecasting</a></p>
<p>Finally, I'll admit I'm mostly disappointed with Kaggle. I raised concerns about the unseen items in the test set and also about the onpromotion issue earlier in the competition. Ultimately, the competition was a lottery. For the sake of others, I hope Kaggle will begin to demonstrate a moderate level of care in data preparation and show that they understand that whenever they deviate from the assumption that train set is an unbiased sampling of the test distribution, they induce randomness which turns competitions into lotteries. This will be my last Kaggle competition, as my time will be better spent focusing on research from now on. Best of luck to everyone.</p>
<p>For some models it helped and for some it did not. Our private leaderboard scores varied drastically between runs, even when the public leaderboard score stayed the same. While in the end it did not work out very well, I'm confident that the approach of using the leaderboard as validation was the optimal strategy from a statistical viewpoint. We have no choice but to fit our models to the public leaderboard when the organizers don't provide any training data that resembles the test set...</p> |
Corporación Favorita Grocery Sales Forecasting | 5th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to the winners. Thanks to @Ceshine Lee, @sjv and others whose code, kernels and discussions gave me a lot of inspiration. This was my first time using deep learning, and I really learned a lot from the kernels and code previously shared by other Kagglers.</p>
<p><strong>Model Overview</strong></p>
<p>I built 3 models: a LightGBM, a CNN+DNN and a seq2seq RNN model. The final model was a weighted average of these models (where each model is stabilized by training multiple times with different random seeds and then averaging), with special handling of promotions, which I will discuss later. Each model on its own can stay in the top 1% of the private leaderboard.</p>
<p><strong>LGBM:</strong> It is an upgraded version of the public kernels. More features, data and periods were fed to the model.</p>
<p><strong>CNN+DNN:</strong> This is a traditional NN model, where the CNN part is a dilated causal convolution inspired by WaveNet, and the DNN part is 2 FC layers connected to the raw sales sequences. These are then concatenated together with categorical embeddings and future promotions, and the model directly outputs predictions for the 16 future days.</p>
<p><strong>RNN:</strong> This is a seq2seq model with a similar architecture of @Arthur Suilin's solution for the web traffic prediction. Encoder and decoder are both GRUs. The hidden states of the encoder are passed to the decoder through an FC layer connector. This is useful to improve the accuracy significantly.</p>
<p><strong>Feature Engineering</strong></p>
<p>For LGB, for each time period the mean sales, count of promotions and count of zeros are included. These features are calculated over different splits of the time periods, e.g. with/without promotion, each weekday, item/store group statistics, etc. Categorical features are included with label encoding.</p>
<p>For the NNs, item means and year-ago/quarter-ago sales are fed as sequence inputs. Categorical features and time features (weekday, day of month) are fed as embeddings.</p>
<p><strong>Training and Validation</strong></p>
<p>For training and validation, only store-item combinations that appear in the test data and have at least one record during 2017 in the training data are included. The validation period is 2017.7.26~2017.8.10 throughout the competition, and training periods are collected randomly during 2017.1.1~2017.7.5.</p>
<p>For LGB the number of training ranges is 18, and the total # of samples is ~3M. For the NNs the data is randomly generated batch by batch; batch size is 1500 and steps per epoch is 1500, so the total # of samples trained per epoch is ~2M.</p>
<p><strong>onPromotion</strong></p>
<p>As you guys all noticed, there are some problems with the promotion feature here. Besides the missing promotion information for zero sales in the training data, I found that the promotion info on 8/16 is quite abnormal and unreliable (e.g. there are ~6000 unseen store-item combos on promotion on 8/16). So I don't think it is reliable to use the PB to infer the distribution of promotion in the test set.</p>
<p>The sales ratio of promo/non-promo items in the training set is ~2:1 if I fill missing values with 0. I tried to infer this ratio in the test data. We know that the proportion of 1s in the training promo data is underestimated because I fill all missing values with 0. The proportion of 1s in the training set is ~6.3%, while in the test set it is ~6.9% without 8/16. The proportion of missing values is ~40% in the training data. If I assume the true distribution is consistent, the proportion of items that are missing in the training data but actually have promotion is (0.069-0.063)/0.4 = ~1.5%. And as the missing values are all 0s, the true ratio would be (0.063*2 + (0.069-0.063)*0)/0.069 = ~1.83. So I guess our training model is over-estimating the items on promotion by around 10%.</p>
<p>Then I simply re-trained all the models, but without the promotion information. These predictions will of course have a lower sales ratio between promo/non-promo items. Then I averaged the no-promo predictions with the original predictions (with promotion info), with weights chosen so that the sales ratio of the final predictions approaches the 1.83:1 I inferred.</p>
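<p>A quick reproduction of the arithmetic above in plain Python:</p>
<pre><code># Back-of-the-envelope check of the stated numbers.
p_train = 0.063        # share of promo=1 rows in train (missing values filled with 0)
p_test = 0.069         # share of promo=1 rows in test (excluding 8/16)
missing_frac = 0.40    # share of missing onpromotion values in train
ratio_filled = 2.0     # observed promo/non-promo sales ratio with the 0-filled data

hidden_promo = (p_test - p_train) / missing_frac                         # ~0.015
true_ratio = (p_train * ratio_filled + (p_test - p_train) * 0) / p_test  # ~1.83
print(hidden_promo, true_ratio)
</code></pre>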
<p>This approach is kind of tricky and based on assumptions I cannot validate, but that's the only way I figured out to deal with the promotion bias. However, it seems it was not useful on the PB. Ironically, the model without any special handling of the promotion bias gives me .510 on the PB. I still have no idea why.</p>
<p>I have shared my codes on github <a href="https://github.com/LenzDu/Kaggle-Competition-Favorita">https://github.com/LenzDu/Kaggle-Competition-Favorita</a>, but it is still quite messy there. I will organize it when I have time.</p> |
Corporación Favorita Grocery Sales Forecasting | 6th Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The 6th place solution was an ensemble of two solutions that score .517 privately (top 40). Here is my top 40 solution: </p>
<h2>Basic Outline</h2>
<ul>
<li>LGBM Framework which used only 2017 data.</li>
<li>Over 500 features.</li>
<li>Fairly reliable cross-validation on 07-26 --> 08/10</li>
</ul>
<h2>Features</h2>
<p>These are the most important features. The following order reflects importance; however, by nature of having 16 models, there is no exact order. Unless otherwise noted, this was all done at the item-store level. </p>
<ul>
<li>ma_median * isd_avg / isd_week_avg - popularized early on in the competition. </li>
<li>ma_median - Coupled with binary for whether or not it is equal to 0. </li>
<li>Day-of-week averages - Different time periods including 7, 14, 28, 56, 112. </li>
<li>Days since appeared - Difference between the 'start date' of the training cycle and the first date the item showed up in the original train file.</li>
<li>Quantiles for several different time-spans. </li>
<li>Whether the item will be onpromotion. </li>
<li>Simple averages for different time-spans. </li>
<li>Item-cluster means - The past 5-day mean was a good predictor. </li>
<li>Future promotional 'sums' - E.g: sum of onpromotion 8/16 through 8/18. </li>
<li>mean_no_zero_sales - Mean of instances where unit_sales > 0. </li>
<li>Frequency encoding - calculate the frequency of appearance for items, stores, families and classes (four columns that each sum to 1). </li>
</ul>
<p>The above predictors consistently showed up as important predictors, or made immediate score advancements. The following predictors definitely helped as a whole: </p>
<ul>
<li>One-hot-encoding clusters, states, types, cities and families. </li>
<li>Weekend and weekday means. </li>
<li>Staggered mean data (e.g. 8/07 --> 8/14 for the test-set). </li>
<li>Staggered quantile data. </li>
<li>Past number of zero sales for different time-spans. </li>
<li>Past number of promotional days. </li>
<li>Past promotional sales averages. </li>
<li>Past day-of-week quantile data. </li>
</ul>
<h2>Final Submission</h2>
<p>As mentioned earlier, this is a top-40 solution on its own. My teammate, <a href="https://www.kaggle.com/eranah">Eran</a>, was scoring similarly. He also used LGBM, but we had quite different feature engineering. Specifically, we had different top predictors and we generally used different time-spans. We each ran our models about 5 times and created a final ensemble - the 6th place solution at .514 (.507 public LB). </p>
<h2>Some Thoughts</h2>
<ul>
<li>This was a supply-demand problem. In a more realistic setting, knowing which days stores get product for restocking, and potentially Retail Management Software data (i.e. data on when products are being pulled from the backroom) would make prediction a lot easier. </li>
<li>I did not worry about the onpromotion bias. The data-set is very complex, and 'filling in the blanks' seemed unrealistic. If an item was on promotion but had 0 unit_sales, then it is either very low demand or out of stock. </li>
<li>The only certain way to know if a new item-store combination is 'real' is if it was on promotion at some point during the test-set days. </li>
<li>Having your model data start from 8/9 and predicting 8-16 --> 8-20 can be useful to see how it will do on the private leaderboard. </li>
<li>Examining which items were getting predicted poorly in the validation set was very useful for coming up with feature ideas. </li>
</ul>
<p>Two-step process: <br>
- Get all item/store combinations between 2017/01/01 and the start of the train cycle. <br>
- For each item, divide the number of appearances by the total number of item/store combinations.</p>
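<p>A minimal pandas sketch of this frequency encoding (column names are assumptions):</p>
<pre><code># `df` is assumed to hold one row per observed (item, store) combination
# between 2017/01/01 and the start of the training cycle.
import pandas as pd

def frequency_encode(df, col):
    freq = df[col].value_counts(normalize=True)   # frequencies for one column sum to 1
    return df[col].map(freq).rename(f'{col}_freq')

freq_features = pd.concat(
    [frequency_encode(df, c) for c in ['item_nbr', 'store_nbr', 'family', 'class']],
    axis=1,
)
</code></pre>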
Corporación Favorita Grocery Sales Forecasting | 8th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>you can see sjv‘s post . he used leadboard to validate too.</p>
</blockquote>
<p>And he drops in performance too.</p>
<p>Back to this:</p>
<blockquote>
<p>If you fillna(0), that caused bias both train and valiation data, the items'sales on promotion>1 so you will get a good cv.</p>
</blockquote>
<p>Now that you insist, I remember I ran lgb at one point with the original data as evaluation, i.e. without any additional 0 sales rows. I wanted to make sure that adding these rows was fine. That's how I validated that filling missing onpromotion with 0 was the best I could get. And our limited LB probing later confirmed that it was a reasonable bet.</p>
<p>We are circling: "How did you validate your way of filling NA? "</p>
<p>re test data: </p>
<blockquote>
<p>We also probed LB a bit, and found that setting sales > 0 for items that have at least on promotion day was improving LB (both public, and private actually). </p>
</blockquote>
<p>As I said, when adding zero sales rows we put onpromotion to False. We tried other ways, and they led to worse CV values. How did you validate your way of filling NA? You say medals are not due to good use of CV, but I'm afraid this precisely shows the opposite: we used CV to find out how to best deal with that issue, and it worked reasonably well.</p>
<p>We also probed the LB a bit, and found that setting sales > 0 for items that have at least one promotion day was improving the LB (both public, and private actually). </p>
<p>> it obviously the item on promotion from 2017.8.1~2017.8.15</p>
<p>> that is not true in test data.</p>
<p>How do you know both? </p>
<p>Anyway, we did not use that assumption. The way we use onpromotion (in the NN) is to modify on average the sales level. I guess that way was good enough to yield reasonable predictions.</p>
<p>I did play with how to fillna for onpromotion when adding 0 sales rows. Best CV result was obtained with fillna(0). Sure, it is a bias, but it is the best bias I found. </p>
<p>How did you validate your way of filling missing onpromotion values?</p>
<p>Not sure why you say that. Running this code:</p>
<pre><code>import pandas as pd
train = pd.read_csv('../input/train.csv', usecols=['date', 'onpromotion'])
train.onpromotion.fillna(-1, inplace=True)
train.groupby('onpromotion').date.max()
</code></pre>
<p>yields</p>
<pre><code>onpromotion
-1 2014-03-31
False 2017-08-15
True 2017-08-15
Name: date, dtype: object
</code></pre>
<p>What am I missing?</p>
<p>First of all, I'd like to thank Giba, my team mate, without whom I would not have entered this competition. I also want to thank Kaggle and the host for providing this challenging dataset. Congrats to all gold winning teams and individuals, you did an awesome job here. And there is lots of solution sharing already, which is great. Last but not least, final standing is a great, positive, surprise to us, esp considering we both had the flu.</p>
<p>Our solution is mostly a single NN model that scores 0.515 private and 0.508 public, with a secondary lgb model. Believe it or not, we missed the deadline by a few seconds and could not select our second sub ourselves. It scored 0.514 on the private LB; maybe we would have advanced a couple of places, but we are very happy with the result anyway. Details below.</p>
<p><strong>Missing values</strong></p>
<p>onpromotion is fully known after 2014-03-31. We ignored the missing values issue by only using data after that date. We actually did not use data prior to December 1st 2015. We also assumed that any item/store pair that appears somewhere in the train or test data has 0 sales on the dates where it does not appear. This seems to be what the data description says. It means that store/item pairs that do not appear in train have zero sales during the train period.</p>
<p><strong>Seasonality</strong></p>
<p>In time series problems there are two key ingredients: seasonality, and cv setting. Here weekly seasonality is extremely strong, yearly seasonality is weak, and there is a monthly seasonality due to pay day. We dealt with weekly seasonality by using time periods that align on weekdays for everything. We did not find a good way to use the monthly seasonality. And we captured the yearly seasonality by using sales figures from 364 days ago. We used 364 and not 365 to align with weekdays.</p>
<p><strong>CV setting</strong></p>
<p>We selected 2 validation periods plus the test period: (2017-07-16, 2017-07-31), (2017-08-01, 2017-08-15), and (2017-08-16, 2017-08-31). For each period we construct a series of training datasets aligned on weekday, for instance for the first validation period:</p>
<pre><code>dataset up to 2017-03-11 predict period of length 16 starting on 2017-03-12
dataset up to 2017-03-18 predict period of length 16 starting on 2017-03-19
dataset up to 2017-03-25 predict period of length 16 starting on 2017-03-26
dataset up to 2017-04-01 predict period of length 16 starting on 2017-04-02
dataset up to 2017-04-08 predict period of length 16 starting on 2017-04-09
dataset up to 2017-04-15 predict period of length 16 starting on 2017-04-16
dataset up to 2017-04-22 predict period of length 16 starting on 2017-04-23
dataset up to 2017-04-29 predict period of length 16 starting on 2017-04-30
dataset up to 2017-05-06 predict period of length 16 starting on 2017-05-07
dataset up to 2017-05-13 predict period of length 16 starting on 2017-05-14
dataset up to 2017-05-20 predict period of length 16 starting on 2017-05-21
dataset up to 2017-05-27 predict period of length 16 starting on 2017-05-28
dataset up to 2017-06-03 predict period of length 16 starting on 2017-06-04
dataset up to 2017-06-10 predict period of length 16 starting on 2017-06-11
dataset up to 2017-06-17 predict period of length 16 starting on 2017-06-18
dataset up to 2017-06-24 predict period of length 16 starting on 2017-06-25
dataset up to 2017-07-15 predict period of length 16 starting on 2017-07-16
</code></pre>
<p>We use all datasets but the last to train, and we use the last one for early stopping. We use 10 fold CV on the training periods. The only caveat is to make sure all store/item pairs for a given time series are in the same fold. I used a very similar CV setting in WTF competition.</p>
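<p>Not the exact code, but a minimal sketch of that fold caveat using scikit-learn's GroupKFold (dataframe and column names are placeholders):</p>
<pre><code># All rows of a given (store, item) pair must fall in the same fold.
# `train_df`, `X`, `y` stand for the concatenated training datasets and their
# feature matrix / target.
from sklearn.model_selection import GroupKFold

groups = train_df['store_nbr'].astype(str) + '_' + train_df['item_nbr'].astype(str)
for fold, (tr_idx, va_idx) in enumerate(GroupKFold(n_splits=10).split(X, y, groups=groups)):
    # train one model per fold; keep the epoch with the best average CV score
    pass
</code></pre>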
<p><strong>Feature engineering</strong></p>
<p>All unit sales are clipped to be non negative, then log1p transformed. For each of the above datasets, we use:</p>
<ul>
<li><p>lags over periods similar to @Ceshine Lee starter LGB kernel. We use mean, max, and proportion of zero entries, as well as proportion of promotion days.</p></li>
<li><p>Sales for each of the last 7 days</p></li>
<li><p>Average sales per weekday over the last 8 weeks</p></li>
<li><p>For each of the test/validation days: sales from 364 days earlier, and promotion status.</p></li>
<li><p>Class of the item</p></li>
<li><p>Store</p></li>
</ul>
<p>We did not use oil price, item number, and other info in our primary model.</p>
<p><strong>NN Model</strong></p>
<p>A feedforward model with 3 dense layers and relu activations. Class and store are embedded in 4-dimensional vectors and appended at the second level. Let x be the output of the dense layers; its length is equal to the length of the test or validation period. The last level is a 1D convolution over x, the onpromotion input vector, and the product of the onpromotion vector with x:</p>
<pre><code># Keras layers used below; x and promo_test_input are defined earlier in the model
from keras.layers import Multiply, Reshape, Concatenate, Conv1D

y = Multiply()([x, promo_test_input])    # elementwise product of dense output and promotions
y = Reshape((-1,1))(y)
x = Reshape((-1,1))(x)
z = Reshape((-1,1))(promo_test_input)
x = Concatenate(axis=-1)([x, y, z])      # 3 channels per day: dense output, product, promotion
x = Conv1D(1,1, activation='linear')(x)  # 1x1 convolution mixes the 3 channels day by day
</code></pre>
<p>This captures the influence of onpromotion day per day quite well.
The target for the model is the sequence of train/validation sales, i.e. 15 or 16 days depending on the period.</p>
<p><strong>Other models</strong></p>
<p>We trained a lgb model using the same features and cv setting, the only difference being the target: we train one model per day, like in @Ceshine Lee's starter LGB kernel. The best lgb scores 0.510 on public LB and 0.517 on private LB. I don't know how others got better results with LGB than with NN.</p>
<p>Giba created a classifier model to predict when sales are zero or not. Alone this model is useless but it adds when ensembling. I'll let him describe it if need be.</p>
<p><strong>Ensembling</strong></p>
<p>Turns out that what would have been our best sub is an unweighted average of 3 different runs of the NN model using different sets of training periods, and one lgb model. We tried some limited stacking with a NN with one hidden layer. Giba also looked into how to post-process submissions. By default we set to 0 all store/item pairs that do not appear in train, but he found that we should not do it if that pair has a promotion in test. He got a sub that scores 0.514 private that way, but, due to the few-second glitch, we did not select it... And we did not have time to combine his postprocessing with our best blend either.</p>
<p><strong>Conclusion</strong></p>
<p>Main takeaway from me is once again to ignore public LB feedback. Indeed, that feedback was on the first 5 days of the test period, and it gives zero useful information on what happens for days further in the future. Having some reasonable CV setting was key to estimate how well a model predicts further in the future. I could not say I was expecting to get a result as good as what we got in the end, but I knew we were not overfitting to the first few days of the test period ;) </p>
<p>Let me end with a special notice to Ceshine Lee without whom many would not have fared as well as they did. He shared a great kernel, and he shared very useful tips in the early days of the competition. There should be a sharing gold medal in Kaggle competitions, and he would have won that one here!</p>
<p><strong>Edit:</strong>
The discussion below makes me add this. We added 0-sales rows with onpromotion=False for all store/item pairs that appear in our training datasets. We did not add all the store/item pairs that appear in the test set. We did compare with and without these: the results were similar, but training times were higher with the test items. We validated adding rows with onpromotion=False by using the original data in the validation periods. </p>
<p>Thanks! I did try to reuse my WTF model but made a mistake in it, and lost a week trying to get it running correctly. Then we restarted from scratch, but ended up with a model similar to that of WTF, except for the last convolution layer. </p>
<p>Don't mention Caesars to me ;) I see people complaining about data preparation here, but there it was an order of magnitude worse, at least... Not to mention the reset when I was fighting for top spot... Very bad memories to me. Almost made me quit Kaggle. </p>
<p>I feel better now of course with a gold medal :)</p>
<p>@sjv, I have watched that video. Is that the best you can do when confronted with reality? Not only am I ignorant, but I got 2 gold medals by sheer luck using a flawed CV setting? And worse, in WTF, my flawed CV setting resulted in a better submission than yours? BTW, you did not tell us what your model performance was in private LB here. Don't tell me it isn't better than mine. That would hurt.</p>
<p>More seriously, let me forget about this conversation, and let me remember instead your great instacart and your great wtf code. As I wrote elsewhere, i hope you'll cool down and come compete with us again.</p>
<p>@sjv, Sure, you can dismiss what we did as ignorance, hope it makes you feel better. Giba and I did discuss what to do when adding 0 sales rows, and we quickly settled on no promotion as being simple, reliable, and not dangerous. Seems we were right. Sorry if you find this insulting.</p>
<p>I still value what you did, and I'm one that pointed many participants to your WTF code.</p>
<p>Edit. My CV setting is the same as in WTF. The fact that it worked twice reasonably well may be a statistical fallacy, but I doubt it. I don't like videos, do you have the same in writing? </p>
<p>Thanks. Let me try to explain again, the write-up may be confusing. For one validation period, say the period of length 16 starting on 2017-07-16, we create training datasets as described above. We concatenate these into one dataset, and use kfold on it. We then train one model per fold, as usual in kfold CV, and keep the epoch with the best average CV value on the folds to compute the prediction on the validation data using the average of the fold models' predictions. We then use that prediction score as our measure of the trained model. When we are done with model tuning, we use the exact same kfold on the training dataset for the test period, and use the best epoch on the folds to compute the final test prediction. With our NN model, this yields a 0.003 improvement over Ceshine Lee's approach. I reported this on the forum, moving from 0.51 to 0.508 on public LB. I'm not sure why, but probably because we average the predictions of several models.</p>
<p>The model must have learned that for this store (store_nbr is one of our features) the early data isn't useful. The good question would be: what special treatment should we have used? Maybe dropping older data from that store?</p>
<p>Yes, same labels with different features for overlapping days. And yes, for one dataset all features are the same for the 16 days.</p> |
Corporación Favorita Grocery Sales Forecasting | A silver solution(33rd) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'm a newbie in Kaggle and it's actually a shock for me to reach top 2%...</p>
<p>My solution is based on Lingzhi's LGBM One Step Ahead with some modifications:</p>
<ul>
<li><p>using 8 weeks of training data starting from May 31st, 2017</p></li>
<li><p>extract min, max, var, skew and kurt</p></li>
<li><p>calculate first order difference for the mean values</p></li>
<li><p>Categorical features (store, item): one-hot encoding, count (I learned this trick from <a href="https://www.kaggle.com/xiaozhouwang/2nd-place-lightgbm-solution">https://www.kaggle.com/xiaozhouwang/2nd-place-lightgbm-solution</a>)</p></li>
<li><p>averaged across different random seeds to stabilize model performance (also learned from Little Boat's solution); see the sketch after this list</p></li>
</ul>
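<p>A minimal sketch of the seed averaging mentioned above (parameters and data objects are placeholders):</p>
<pre><code># Train LightGBM with several random seeds and average the predictions.
import numpy as np
import lightgbm as lgb

preds = []
for seed in [0, 1, 2, 3, 4]:
    params = {'objective': 'regression_l2', 'seed': seed,
              'bagging_seed': seed, 'feature_fraction_seed': seed}
    booster = lgb.train(params, lgb.Dataset(X_train, label=y_train), num_boost_round=500)
    preds.append(booster.predict(X_test))
final_pred = np.mean(preds, axis=0)
</code></pre>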
<p>I have posted my LGBM model on <a href="https://www.kaggle.com/deepdreamer/a-silver-solution-33rd">https://www.kaggle.com/deepdreamer/a-silver-solution-33rd</a>. It scores 0.517 in private board.</p>
<p>Thanks Little Boat for sharing his solutions in Porto Seguro and Ceshine lee for sharing the framework. I really learn a lot from them.</p> |
Corporación Favorita Grocery Sales Forecasting | A single Lightgbm model with 0.515 private score | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations on becoming a GM.</p>
<p>I ran it locally and submitted the result, and got a 0.515 private score. But I did not choose that result as my final submission because my local CV scores didn't move in sync with the LB at the end of the competition. What a pity.</p>
<p>Unfortunately I did not select this result. Please see this kernel.
<a href="https://www.kaggle.com/viczyf/lgbm-model-private-socre-0-515">https://www.kaggle.com/viczyf/lgbm-model-private-socre-0-515</a></p> |
Corporación Favorita Grocery Sales Forecasting | My approach - public 0.507, private 0.517 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Corporación Favorita Grocery Sales Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My door is always open! Thanks for your invitation, sir🙈!</p>
<p>I lost one day, lol~ I was trying to make my feedforward NN perform as well as yours, but failed; only 0.521 in private. I should do better next time :) </p>
<p>Thanks @CPMP, I was trying to use one year of data before, the same as your script in WTF, but got a bad result - -. Still, your script gave me some ideas! Thanks for sharing!!</p>
<p>🙂</p>
<p><img src="http://imglf5.nosdn.127.net/img/OEUzbU5jenB4YlRsQWNNcnk3Um5DVnFkc0Y0eEQyMFQzRXRlRXRBWW5NakMyU28yQnc4MS9nPT0.jpeg?imageView&thumbnail=2086y226&type=jpg&quality=96&stripmeta=0&type=jpg" alt="dd"></p>
<h1>Models</h1>
<p>I use two models to solve this problem</p>
<ol>
<li><p>LightGBM: it's based on Ceshine Lee's script, but I added some other new features.
a) zero crossing rate (<a href="https://en.wikipedia.org/wiki/Zero-crossing_rate">https://en.wikipedia.org/wiki/Zero-crossing_rate</a>). I created this feature for different windows (1, 3, 7, 14, 30, 60) and levels (item level, store*family level).
Below is my implementation:</p>
<pre><code>def cross_zero(row):
    return (((row[:-1] * row[1:]) == 0) * ~(row[:-1] == row[1:])).sum()
</code></pre></li>
</ol>
<p>b) zero-sales counts in different windows </p>
<pre><code>"zeros_28_2017": (get_timespan(df_2017, t2017, 28, 28) == 0).sum(axis=1).values
"zeros_56_2017": (get_timespan(df_2017, t2017, 56, 56) == 0).sum(axis=1).values
"zeros_84_2017": (get_timespan(df_2017, t2017, 84, 84) == 0).sum(axis=1).values
</code></pre>
<p>c) latest promo day in the last 30 days</p>
<pre><code>def fo_in_row(row):
j = 0
for i in row:
j += 1
if i == 1:
return j
return -1
</code></pre>
<pre><code>"lastest_promo_30": get_timespan(promo_2017, t2017, 30, 30).apply(lambda row: fo_in_row(row[::-1]), axis=1).values.ravel()
</code></pre>
<p>2. NN<br>
I used the simplest feed-forward neural network, with the same features as LightGBM. For pre-processing I used "RankGauss", recommended by @Michael Jahrer (<a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629</a>); for training I used BatchNorm, big dropout and nothing else.</p>
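<p>Not the author's code, but a common implementation sketch of RankGauss (rank each column, rescale the ranks to (-1, 1), then apply the inverse error function):</p>
<pre><code># `X_raw` is assumed to be a 2-D numpy array of raw features.
import numpy as np
from scipy.special import erfinv

def rank_gauss(x):
    r = np.argsort(np.argsort(x))          # ranks 0..n-1
    r = r / (len(r) - 1) * 2 - 1           # rescale to [-1, 1]
    r = np.clip(r, -1 + 1e-6, 1 - 1e-6)    # keep away from +/-1 to avoid infinities
    return erfinv(r)

X_nn = np.apply_along_axis(rank_gauss, 0, X_raw)   # column-wise transform
</code></pre>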
<p>Small trick:
I separated perishable and non-perishable items to train different models, and this gave me a little improvement.</p>
<h1>Validation Framework</h1>
<p>I used the same framework recommended by @Ceshine Lee, but I paid careful attention to the first 5 days and the last 11 days separately: I only added variables that improved the first 5 days, the last 11 days and the public board. This gave me a rather steady improvement on both public and private. </p>
<p>Improvement over the full 16 days is not the same as improvement on both the first 5 and the last 11 days, because the leaderboard is split by time. Sometimes a feature improves the first 5 days but hurts the last 11. The private board covers the last 11 days, so we should choose features that help both the first 5 and the last 11.</p>
<p>I didn't optimize it - -!</p>
<p>Sure, but the function that I wrote is not optimized. ɿ😏ɾ</p>